OT114: Penelopen Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. Two new ads recently. One is for psychiatrist Laura Baur. She’s based in NYC but also does some teletherapy. I know her and can vouch for her (she’s affiliated with the NYC rationalist social scene, so if you are involved in that it counts as a conflict of interest and you shouldn’t try to get treatment with her). And the other is for Portfolio Armor, a company that helps you calculate how to organize your portfolio around a given level of risk.

2. Comment of the week is from the subreddit: Most Of What You Read On The Internet Is Written By Insane People.

This entry was posted in Uncategorized. Bookmark the permalink.

531 Responses to OT114: Penelopen Thread

  1. onyomi says:

    Recently I was applying for life insurance to fund cryopreservation and got to thinking, based on the questions, that maybe life insurance companies would have better, more detailed information on healthy lifestyles than the popular books on diet, exercise, etc. that most people consult on such issues (probably not a high bar to clear: maybe even better than most doctors, or than can be revealed by examination of isolated studies?). That is, life insurers, seemingly more than almost any other group, have an incentive to make it their business to know what keeps people alive and what doesn’t. (I guess health insurance companies would know about what keeps you from visiting the doctor, but that seems to have a lot more confounds, like heavy regulation and patient discretion about how much medical care to undergo; alive/not alive is a much clearer line to focus on than healthy/not healthy.)

    Of course, some of the factors relevant to life expectancy, most prominently age, are nothing one can do anything about, but it sure seems like they also take into account many factors one does have at least some control over, like smoking, driving record, and weight, in addition to some blood numbers. Some conclusions one might draw from such information, if it’s publicly available, would surely be obvious: don’t smoke, don’t drive recklessly. But other conclusions might be surprising: maybe it turns out that, if you care about not dying, driving at all isn’t a good trade-off for most people, especially if e.g. walking plus subway is available (and walking reduces all-cause mortality, I believe; of course, some people would still conclude driving is worth it to them, but maybe the risk they’re taking is bigger than they think–probably is). Or maybe it turns out that certain things we imagine are harmful don’t actually hurt you: maybe life insurance companies don’t care what you eat so long as you aren’t overweight, and therefore, if you can maintain an ideal weight on meat and candy, there’s no reason to eat less of those things than you’d prefer in favor of e.g. more fruits and vegetables.

    The extent to which this info could resolve smaller issues, like whether a high-carb, low-protein diet is superior to a low-carb, high-protein diet controlling for weight, would, I guess, depend on the extent to which they collect such information. But presumably, if something is likely to have a big financial impact, they would be incentivized to check for it (albeit also weighing the trouble of collecting accurate answers), which could in itself be informative (maybe some things we expect to matter actually don’t, as revealed by the fact that no life insurance company takes them into consideration).

    • Aapje says:

      I think that you are way too optimistic. These are extremely difficult questions for even scientists to answer, because there are so many confounders and reverse correlations.

      For starters, humans don’t neatly randomize their behaviors in ways that would let us isolate the variable we are interested in. Instead, you have mostly poorly educated people who more often have dangerous and disabling jobs, who more often smoke, who have poorer or no healthcare insurance, who more often eat junk food, who exercise less often, etc. Then you also have well-educated people who mostly behave in the opposite way. So if you want to determine the effect of just one variable, you have to use complex statistical tricks to try to neutralize the effects of the other behaviors/differences. However, these tricks very easily go wrong.

      An example of a likely confounder is when you note that: “walking reduces all-cause mortality.” Whenever some variable correlates with very many things, there is a very high chance that confounders are at play.

      A second issue is that people often change their behaviors to supposedly healthier ones in response to health issues. As a result, teetotalers look like very unhealthy people in the data. Not because drinking helps your health, but because unhealthy people get told by their doctors not to drink.

      A third issue is that much data is unreliable and may be biased. The person with a worse driving record is not necessarily the worse driver; she may:
      – have a commute where the police are very active, while a much worse driver has a clean record due to only driving in places where the police don’t care
      – not use technology to temporarily moderate her speed at speed traps, while a much worse driver only rarely gets ticketed because he outwits the police
      – have characteristics that make her indifferent to tickets, like being rich or having immunity (diplomatic immunity, being police herself, or …)

      So then you conclude that the worst drivers die very often, when what you actually observed is that people who live in places with heavy policing (perhaps because their population is poor and often out of work, creating budget pressure to ticket) die more often than people in places with mostly well-off, well-educated, employed residents.

      Some data can be missing entirely. The insurance companies will not know whether people have a high-carb, low-protein diet or a low-carb, high-protein diet. Even if they do surveys, it’s notoriously difficult to figure out people’s diets, because comparing survey results to the results of actually monitoring people has shown that people are generally very poor at reporting what they actually eat.

      Another issue is that the noise is so large that only fairly large effects are visible over it at all; many real effects are probably too small to detect.

      Ultimately, I think there is a large risk that insurance companies will start demanding lifestyle changes that they claim will have a major impact on health, when the main reason the lifestyle is ‘healthy’ is that the well-educated and wealthy happen to have that lifestyle. That intervention would not work, but demanding that their insured become well-educated and wealthy would. 🙂
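Aapje’s confounding point can be made concrete with a toy simulation (all numbers invented for illustration; in this model junk food has zero causal effect on death, yet it still shows up as deadly in the raw comparison):

```python
import random

random.seed(0)

# Toy model: education drives both junk-food consumption and mortality,
# but junk food itself has ZERO causal effect on death here.
def simulate(n=100_000):
    deaths_junk, n_junk = 0, 0
    deaths_clean, n_clean = 0, 0
    for _ in range(n):
        educated = random.random() < 0.5
        # Less-educated people eat junk food more often (the confounding path).
        junk = random.random() < (0.2 if educated else 0.7)
        # Mortality depends only on education, never on diet.
        dead = random.random() < (0.05 if educated else 0.15)
        if junk:
            n_junk += 1
            deaths_junk += dead
        else:
            n_clean += 1
            deaths_clean += dead
    return deaths_junk / n_junk, deaths_clean / n_clean

junk_rate, clean_rate = simulate()
# Junk-food eaters die noticeably more often despite no causal link.
print(junk_rate, clean_rate)
```

A naive analyst looking at these rates would conclude junk food kills; only controlling for education (or randomizing) reveals the truth, which is the fragile "statistical trick" step.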

      • onyomi says:

        I feel like the challenges you raise for a life insurance company are pretty much equally challenges for anyone trying to figure out how to decrease his chances of death. The difference I’m pointing out is that life insurance companies are seemingly the organizations with the most financial incentive to get it right, or as close to right as possible given all the known confounds.

        • albatross11 says:

          They have the incentives to get things right, but only to the extent this helps them set rates properly and works in legal and business terms. It’s not so clear how much they could benefit from knowing whether, say, low-carb vs low-fat diets are better, or whether intermittent fasting or a paleo diet or a mediterranean diet are more likely to be good for you. (And as Aapje pointed out, it’s not so easy to work that out even if you have observational data about diet vs lifespan.)

          • onyomi says:

            only to the extent this helps them set rates properly and works in legal and business terms

            In what ways would this extent not be coterminous with simply accurately predicting risk of death, given that’s what they’re insuring against (apparently you can even get a policy that pays out in case of suicide, which is relevant for people who may wish to cryopreserve themselves before being legally declared dead)?

            To elaborate a little more, I feel like one sees this discussion pretty often:

            a: there’s no proof that being moderately overweight has a negative health impact.
            b: what about this study showing decreased life expectancy for people with moderately high BMIs?

            But I’ve never seen this discussion:

            a: there’s no proof that being moderately overweight has a negative health impact.
            b: then why does this life insurance company offer a 2% discount for every 1 point reduction in BMI from 24 to 18 and a 4% reduction for every point above 24 lost?

            Personally, I’d find the second much more convincing (above arguments purely hypothetical wrt object-level).

          • albatross11 says:

            onyomi:

            That’s a good point–the insurance company may be wrong (following conventional wisdom off a cliff), but they definitely have an incentive to be right–much more so than nutrition experts who get paid the same whether their advice helps their patients or not.

          • The insurance company wants to predict mortality, not to control it. So it doesn’t matter to them whether the reason that being teetotal correlates with higher mortality after controlling for everything else they have data on is that moderate drinking is good for you or that being teetotal is a proxy for health problems they can’t observe directly.

            So they don’t have an incentive to generate the information you want.

          • albatross11 says:

            I’d say they have the incentive to generate information that is closely related to what we care about, but not exactly the information we care about.

        • Aapje says:

          @onyomi

          Researchers already appear to have strong incentives to figure out how to make people live longer. I’m not convinced that a lot of stones are left unturned right now. Given the many scientific claims about lifestyle differences that supposedly influence health, but that don’t hold up under further scrutiny, it seems to me that we simply lack the tools to do more than find the largest effects, which are mostly obvious anyway. Effects that are non-obvious but large seem to be rare.

          Life insurance itself generates very little information about people’s lifestyles, aside from when they die, what data they volunteer once (and may lie about), and what you get from outside sources. Those outside sources are usually available to researchers as well (in fact, I would expect researchers to have access to more data). If the data comes from other insurance products, then I wonder whether correlating those is actually legal.

          There is a major ‘unfair competition’ issue if the insurer who sells both life insurance and health insurance can use the information from the latter to improve rates of the former. The insurer who only sells life insurance then cannot compete.

      • arlie says:

        They are already demanding lifestyle changes, at least for employer-provided health insurance in the US. My employer charges extra for health insurance if any of the insured smoke, except in one jurisdiction where the law forbids that. Several acquaintances have been “forced” to wear fitness trackers. With a previous insurer/employer combo, I was charged $300 per year in extra premiums for refusing to answer a nosy health questionnaire, submit to some unneeded and inconvenient tests, and receive unwanted and outdated counselling about the results of those tests.

        • Randy M says:

          I admire your commitment to your principles. I take the money.

          • arlie says:

            I’m not sure how much was principles, and how much was cost-benefit analysis – the same kind I use when I decide to pay someone else to do any task I could do myself, but either dislike intensely or don’t feel I have time for.

            I could afford to pay $300 (in pre-tax $) to get out of an ungodly hassle that predictably made me furious and would have consumed 1-2 hours of my time. So I did ;-(

            I can also afford to pay someone else to clean my house, and while that costs me less per hour of my time saved, it’s not hugely less. I make that trade too.

          • Garrett says:

            The trick is to get the $300, but by using your employer’s time. “Sorry – I’ll be late on that project because I need to fill out $paperwork in order to take advantage of our health insurance.”

            It’s not a benefit if you can’t use it.

    • helloo says:

      Recall that insurance companies have a far greater interest in GROUPS rather than individuals.
      That is, they might prefer say… swimming over running because of the selection effect of swimmers rather than the results of the sport itself.

      Additionally, they mainly use this to offer better prices than their competitors – in terms of profit, they might prefer smokers if they pay 10x the premiums but only cost 5x as much.

      Also, insurance companies have tons of restrictions on what can be used to price discriminate.
      The types and amount depends a lot on the type of insurance.
      Life insurance, I believe, is much less restricted than health insurance, and auto tends to be the least restricted (thus things like credit scores and even good grades can earn discounts there).

      • Nornagest says:

        Recall that insurance companies have a far greater interest in GROUPS rather than individuals.
        That is, they might prefer say… swimming over running because of the selection effect of swimmers rather than the results of the sport itself.

        I’m not sure that makes sense with an insurance company’s business model. It’s going to pick up some people that want individual coverage, but I expect it to spend most of its time trying to get picked for corporate coverage. And there would be no selection effects for swimmers at all there, or only very weak ones (you might get a few people who were undecided between e.g. an HMO plan and a PPO plan, and were enticed into the latter by incentives matching their hobby — but only a few).

        • helloo says:

          As long as there’s a private market, it will make sense for the insurance company’s private insurance share.

          It might be much smaller than the group/corporate coverage (~49% vs 7% for health: https://www.kff.org/other/state-indicator/total-population/), but that’s still a decent market.

          Plus, some insurance like auto generally isn’t bought in a group. Life is iffy as most corporate insurance policies are only held when employed and that’s generally not a good idea for life insurance.

          • Nornagest says:

            I’m talking about health insurance only; for auto or life policies which are typically bought on an individual basis, it does make sense to chase better demographics as a cost-minimization strategy (and we do sometimes see that in practice, though other models exist). Don’t think I buy your logic for health, though. If the group market is roughly 7x larger than the individual market, then any policy which seeks to incentivize healthier people signing up on the individual market needs to be getting an effect there 7x larger than any perverse incentives it introduces in the group market (whose composition can’t be substantially changed, but which is still run by HR departments trying to get a good deal for themselves and their employees).

            I’m not an insurance adjustor, but I don’t think I’d often expect to see effects that outsized. Smoking might be an exception, but that’s about the single worst easily identifiable habit, and smokers already expect to face medical discrimination. Runners don’t.

  2. KieferO says:

    For this and the past 2 years, I will/have read devotions at one of my Church council meetings. Each year, I end up reading a section of a sci-fi story that 1) has deep Judeo-Christian themes, and 2) requires me to (poorly attempt to) put on an Australian accent. One year I read Cantors and Singers from Unsong (http://unsongbook.com/interlude-%D7%92-cantors-and-singers/). Another year, I read a section of a story set in a universe where young earth creationism and microevolution are simultaneously true. Does anyone have a recommendation for another story that I could read?

  3. BBA says:

    A non-political election note: The ballots at today’s NYC elections were twice the usual size, but the ballot scanners weren’t modified to accept larger sheets of paper. Voters need to tear along the perforation before scanning.

    That’s a lovely message to send. Rip your ballot in half or it won’t count!

    • honoredb says:

      The whole time I was voting, I was stressing out over the tearing step, since I have a very low success rate at tearing neatly. And then the ballot (probably having had a long day by that point) just fell apart naturally and neatly while I was walking it over to the scanner.

      I thought it was funny that they gave me a Privacy Folder to hide my ballot while I walked the 30 feet from the voting booth to the scanner, then had a polling worker take the folder away for reuse when I was still 5 feet away. No provable invariants here.

    • albatross11 says:

      When I worked on voting security, one thing I recall was that just about every seasoned election official I talked to had at least one horror story about the horrible things that happen with paper ballots. (The one like this I remember hearing about was a central-count optical-scan paper ballot election where some genius ordered ballot boxes that were too narrow for the ballots to fit in. The voters all just did the obvious thing and folded the ballots lengthwise to get them to fit. The optical scanners then happily choked on the ballots, which all seemed to have a spurious line drawn down the middle.)

  4. johan_larson says:

    Has Iceland done something unusual recently?

    Paul Graham, the Silicon Valley entrepreneur and VC, tweeted this yesterday:

    It’s hard to be a world leader economically without a large population, but Iceland shows a tiny country can be a world leader morally.

    Any guesses as to what he’s referring to?

    • The Nybbler says:

      Outlawing circumcision, maybe?

      In general, it’s hard to understand how a country founded by the Vikings who were so quarrelsome that the other Vikings exiled them has become so Social Democratic.

      • They weren’t so quarrelsome that the other Vikings exiled them. They, at least by their account, left Norway for Iceland in response to the unification of Norway by Harald Haarfagr and the establishment of a considerably more powerful royal authority than had existed earlier.

        It’s true that Kveldulf’s family conducted a multi-generation feud with Harald and his descendants, but Harald started it.

        • The Nybbler says:

          The story “came to Iceland due to being on the wrong side of a blood feud back in Norway” seems rather common among the early Icelanders, including Ingólfr Arnarson. I guess that might not count as exile, exactly, but clearly they felt remaining in Norway was extremely non-viable.

    • rlms says:

      They successfully prosecuted bankers after the financial crisis.

      • albatross11 says:

        That seems like it might be a practically good idea (to create the right incentives for the future), but not like it would be such a huge moral victory. I mean, yeah, the law should apply to rich and powerful people, too, but the main reason to want to put people in jail for financial fraud is to get less financial fraud. Maybe it’s just my Christian moral system coming through, but it sure seems like being a world moral leader would come from, say, taking in a lot of refugees and integrating them into your society successfully, or making sure there aren’t any people sleeping in cardboard boxes, or (depending on your moral premises) abolishing abortion and capital punishment. Putting some unsympathetic bankers in jail for financial fraud doesn’t seem like enough, somehow.

        • Brad says:

          In terms of equality before the law the entire industrialized world is in a position akin to that of the American South during Jim Crow vis-a-vis voting–de jure it exists, de facto it doesn’t. Any country that makes any progress whatsoever in this area deserves effusive praise.

    • Nicholas Weininger says:

      I’d guess some combination of their adoption of non-fossil-fuel energy and their high degree of gender equality both cultural and legally-mandated. Trying to find non-CW-stoking neutral descriptors for those, apologies if I’ve fallen short, but those really are my best guesses for the things Graham might be referring to.

      • Lambert says:

        ‘Adoption of non-fossil-fuel energy’ is a bit of a strong way of putting it.
        More like ‘loads of non-fossil-fuel energy lying around and no fossil fuels for hundreds of miles’.

  5. baconbits9 says:

    My wife’s water just broke; what is the policy on liveblogging in the open thread?

  6. proyas says:

    What would a space ship built for interplanetary combat look like?

    It must be able to traverse our Solar System in a matter of weeks without refueling during a cruise (similar speed and endurance demands to a modern aircraft carrier or submarine that can go from friendly port to any point in the ocean), must have the ability to destroy other space warships and satellites, and must have features that enable it to survive some level of combat damage.

    Edit: The ship must be able to cover 3 billion miles in six weeks (I chose that because Neptune is at most 2.7 billion miles away), with enough fuel to return to Earth. The return trip can be slower.
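For scale, the stated requirement can be checked with back-of-envelope kinematics. This assumes a constant-acceleration burn-flip-burn (brachistochrone) profile; the numbers below are just that arithmetic, not anything from the thread:

```python
# Back-of-envelope for the stated requirement: 3 billion miles in six weeks.
# Assumed profile: accelerate for the first half of the trip, flip, and
# decelerate for the second half.
MILE_M = 1609.344          # metres per statute mile
C = 299_792_458.0          # speed of light, m/s

distance = 3e9 * MILE_M    # ~4.83e12 m
t = 42 * 24 * 3600         # six weeks, in seconds

v_avg = distance / t       # average speed over the trip
a = 4 * distance / t**2    # required constant acceleration
v_peak = a * t / 2         # speed at the midpoint flip
delta_v = a * t            # total delta-v budget (one way)

print(f"average speed : {v_avg:.3g} m/s ({v_avg / C:.2%} of c)")
print(f"acceleration  : {a:.3g} m/s^2 ({a / 9.81:.2f} g)")
print(f"peak speed    : {v_peak:.3g} m/s ({v_peak / C:.2%} of c)")
print(f"delta-v       : {delta_v:.3g} m/s")
```

The acceleration comes out modest (well under a g), but the one-way delta-v is in the thousands of km/s, which is far beyond chemical rockets and sets the tone for the propulsion discussion below.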

    • cassander says:

      I don’t think you could ever really have such a thing, at least not without implausibly exotic technologies. A ship with a lot of people in it is going to be big, expensive, and very visible. It can be defended against very cheaply by platforms that are much cheaper and stealthier: rail guns or missiles placed in orbit, painted black, and turned off most of the time so they don’t radiate heat. And the better your space propulsion system is, the easier it gets to accelerate a tiny chunk of matter to some fraction of c and point it at any enemy ship.

    • bean says:

      What sort of tech are you giving me? Can I miniaturize my propulsion system into a missile? How good are my lasers and optics? Any other fun tech you expect me to use?

      To a first approximation, any spacecraft is going to be mostly fuel. There are some fairly significant economies of scale in lasers, so expect one or a few big lasers instead of a bunch of small ones. And things travelling at the kind of speeds you mention hit really, really hard.

      Stealth in space basically doesn’t work, despite what cassander says. I doubt you’ll see heavy manning. Space is a fairly benign environment, and supporting people there is extremely expensive. So expect it to be the bridge crew, their support people, and maybe a guy to fix the life support.
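bean’s “mostly fuel” point follows from the Tsiolkovsky rocket equation. A small sketch, with assumed (invented) exhaust velocities and an invented 100 km/s mission delta-v:

```python
import math

# Tsiolkovsky rocket equation: mass_ratio = exp(delta_v / v_exhaust).
# The propellant fraction of initial mass follows directly from it.
def propellant_fraction(delta_v, v_exhaust):
    mass_ratio = math.exp(delta_v / v_exhaust)
    return 1 - 1 / mass_ratio

# For an invented 100 km/s mission delta-v:
chem = propellant_fraction(1e5, 4_500)        # chemical rocket, ~4.5 km/s exhaust
ntr = propellant_fraction(1e5, 9_000)         # solid-core nuclear thermal
fusion = propellant_fraction(1e5, 1_000_000)  # hypothetical fusion drive, 1000 km/s

print(f"chemical       : {chem:.8f} of the ship is fuel")
print(f"nuclear thermal: {ntr:.5f} of the ship is fuel")
print(f"fusion         : {fusion:.3f} of the ship is fuel")
```

With realistic engines the ship is essentially all propellant; only very high exhaust velocities make the payload fraction non-trivial, which is why the answer depends so heavily on what drive tech is allowed.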

      • proyas says:

        What sort of tech are you giving me? Can I miniaturize my propulsion system into a missile? How good are my lasers and optics? Any other fun tech you expect me to use?
        The only constraint is that you can’t violate the known laws of physics.

        • bean says:

          That’s leaving the field really wide open, to the point where I can’t give you an answer. Am I allowed to use antimatter? How many assumptions can I make about lasers and efficiency?
          Atomic Rockets (linked below already, but good enough to duplicate) is the place to go for this kind of stuff. If you can constrain the solution space slightly more, then we can talk.

      • cassander says:

        My whole argument was that stealth in space doesn’t work. Your ship will be huge and visible no matter what you do, much more visible than the things trying to shoot it down, many of which will also have the benefit of having a planetary object between them and the thing they’re trying to shoot half the time.

        • bean says:

          I’m pointing out that the small things will also be plenty visible. They’re in space, a fairly empty environment, and in fairly predictable orbits. A decent radar system with a few years of tech progress will be able to see anything of military significance. You could disguise them, but that’s a rather different matter.

          • cassander says:

            “Hard to spot” would have been a better phrase than “stealth”, sure. The point was that you can put ship-killing weapons in orbit that are a lot cheaper than ships and relatively hard to first-strike.

          • Protagoras says:

            Suppose there’s good laser technology, but no solution to the problem of laser beams spreading, so they are of relatively limited effective range. In that case lasers could still be effective point defense against the cheap ship-killers cassander suggests; weapons that are genuinely stealthy, or protected against lasers somehow, would be needed to attack a ship from long range. And if no such are available, maybe the only option remaining would be to get into close range and attack with lasers, something you’d need another ship with enough power to move and to power its own lasers in order to do.

          • cassander says:

            @Protagoras

            You’d have to be able to detect the projectiles flying at you and then vaporize them very quickly. I don’t know much about lasers, but assuming that part is feasible, the detection part still strikes me as problematic. The projectiles won’t radiate any EM, can be shaped to be radar deflecting, and won’t be much warmer than the space around them.

          • Protagoras says:

            I think you’re much too quick to assume stealth will work well. We’re talking significantly far in the future here; this likely means many frequencies of sensors (which surely could include visible light, for example; computers can make some sense of camera images now). Different frequencies of EM will of course be deflected (or absorbed, when that tactic is employed) differently by the same shapes/materials. How will your projectile manage to be stealthy to all sensor types simultaneously? Obviously current stealth technology doesn’t make planes invisible to vision. It doesn’t even achieve undetectability by radar, and reduced detectability may not be enough when there are no birds to hope to be confused with and when the signal’s being analyzed by a much more sophisticated computer than anything we’ve got now.

          • cassander says:

            @Protagoras says:

            As I see it, there are 4 primary ways of detecting things in space. EM emissions, radar, heat, and visible light. Accelerated projectiles won’t be emitting any EM signature and won’t be producing any new heat, so those two are out. As for radar, with aircraft stealth is limited by the need of the plane to be an effective aircraft. That limit doesn’t apply to projectiles, they won’t need fuel ports, flaps, wings, or lift, or anything else that detracts from the ideal stealthy shape.

            On top of that, they’ll be moving incredibly quickly, and you still have the square law to deal with. If they’re moving an order of magnitude or two faster than existing weapons, you need to detect them an order of magnitude farther away. Doing that with something only as stealthy as modern fighter jets would require stupendously powerful radars.

            With visible light, we’re talking about very small objects moving very quickly against a star field, so I think that it is a difficult prospect at best, but frankly it’s not a subject I know much about.
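The range-versus-warning-time trade cassander describes can be put in rough numbers (all values invented; note that for an active radar the return signal falls off as 1/R^4, which is even harsher than a pure square law):

```python
# Warning time and radar power scaling for fast projectiles
# (illustrative numbers only).
def warning_time(detection_range_m, closing_speed_m_s):
    """Seconds between first detection and impact."""
    return detection_range_m / closing_speed_m_s

# A projectile at 10 km/s vs one at 100 km/s, both detected at 10,000 km:
slow = warning_time(1e7, 1e4)   # 1000 s of warning
fast = warning_time(1e7, 1e5)   # only 100 s of warning

# To restore the same warning time, the fast projectile must be detected
# 10x farther out. The radar equation says received power scales as 1/R^4,
# so 10x the detection range needs 10^4 = 10,000x the transmitted power,
# all else (antenna, cross-section) being equal.
power_factor = 10 ** 4
print(slow, fast, power_factor)
```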

          • albatross11 says:

            If I shoot off a slug from a linear induction gun or railgun or something, I assume I’m going to emit some kind of EM radiation that tells you my slug is coming. (Which probably means you start thrusting in some random direction perpendicular to the line between you and me.) Is that right, or is there some way I might mask that signal?

          • As I see it, there are 4 primary ways of detecting things in space. EM emissions, radar, heat, and visible light

            That’s three ways.

          • cassander says:

            @albatross11

            If I shoot off a slug from a linear induction gun or railgun or something, I assume I’m going to emit some kind of EM radiation that tells you my slug is coming. (Which probably means you start thrusting in some random direction perpendicular to the line between you and me.) Is that right, or is there some way I might mask that signal?

            Assuming you can’t use some sort of Faraday cage, you’ll often be able to use a weapon that has some large mass between it and the target.

          • ryan8518 says:

            @DavidFriedman I would argue that it’s really just 2: heat and EM emissions are functionally the same thing on different wavelengths, and a space-based sensor suite probably shouldn’t draw such hard lines between them, since there are no pesky atmospheric absorption bands to worry about. Radar and visible light (I’m assuming this means reflected light, not that somebody let the EE on the project fulfill his natural instinct to put a status LED on the warhead) are again different spectra, just generated from a source other than the thing you’re looking for. Both branches fall apart as useful detectors against something moving an appreciable fraction of c, but for slower stuff it’s nigh impossible to create a true black body that won’t show up on some spectrum (and there are probably other ways to pick up a weaponized black hole).

            As for the railgun emissions, ehh… over the ranges you’d want to be fighting at in space (e.g. practical targeting range for mobile targets defined by the reaction mass of your guidance system, practical time to impact for fixed targets defined by how long your political masters are willing to wait for a first strike), the inverse-square law will spread out the signal of your launch (most of which should be emitted at right angles to the projectile, if I remember my EE at all) until it has degraded too far to be much of a useful signal.

          • Radar and visible light (I’m assuming this means reflected light, not that somebody let the EE on the project fulfill his natural instinct to put a status LED on the warhead) are again different spectrums

            I considered claiming two. But radar is electromagnetic radiation generated by you and bounced off your target. Visible light, in this context, is generated by an external source, probably the sun, and bounced off something. Those seemed to me to be essentially different.

            If all you care about is whether you are detecting via some form of electro-magnetic radiation in some way, then it’s down to one.

      • Paul Brinkley says:

        Stealth in space basically doesn’t work

        Nonsense. Just paint “the the” in large letters on the side of the ship. Presto! No one will notice it.

    • proyas says:

      I think the ship would need to carry nuclear weapons to destroy enemy space ships, rebellious city-colonies on other planets, and to knock threatening asteroids out of their orbits. I’m going to use an Ohio-class sub’s nuclear weapon armament as a guidepost:

      Armament: 24 × Trident II D5 SLBM with up to 12 MIRVed W76 or W88 (300–475 kt TNT) nuclear warheads each, range 6,100 nmi (11,300 km; 7,000 mi)

      • dick says:

        It seems like nukes would either be not used at all, or saved only for fairly specific scenarios like very close combat. Obviously a lot depends on particulars (like how far is “far apart” and how fast can the missiles accelerate), but I’m imagining ships about as far apart as Earth is from Mars, firing missiles at each other that are basically just a gas tank and an engine. If such a missile can accelerate fast enough to arrive in days rather than weeks, it’ll be going so fast on arrival that it won’t matter much whether the payload is a nuclear weapon or a rock painted to look like one.

      • bean says:

        First, range as a concept in space warfare doesn’t make a lot of sense for physical weapons. Second, the performance of even a missile like the Trident II in a space battle at the level you describe is rather like a bow and arrow in modern air combat. Third, why bother with nukes? For space-to-space, use lasers or kinetics, as appropriate. For space-to-surface, high-velocity missiles are probably going to be more cost-effective.

        • albatross11 says:

          I’m sure you’ve thought about this more than I have, but it seems like range is pretty relevant to your ability to hit me in space, as long as I’ve got some reaction mass to keep randomly jinking around. Suppose you’re shooting at me with a projectile. You have to wait for your radar pulse to return from me to determine where I am, then shoot your projectile. If the radar pulse takes t1 seconds to come back, and the projectile takes t2 seconds to arrive, you need to guess where my ship will be in t1+t2 seconds. (For a laser, this is 2*t1 seconds, but I think then you have to worry about how much the laser spreads out.)

          I think the relevant question is how much I can change my velocity from when your radar pulse arrives at your ship until your projectile arrives at mine, and how that relates to my ship’s cross section to you. If we’re far enough apart, you end up with a very low probability of hitting my ship, even if I’m not dodging around very hard. If we’re close enough, you can just shoot at my apparent center of mass (expected based on my trajectory when your radar pulse got back to you) and put a hole in my ship somewhere.
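          Plugging rough numbers into that geometry is straightforward; here is a minimal sketch (the range, projectile speed, jink acceleration, and hull size are made-up illustrative values, not real spacecraft figures):

```python
# Rough sketch of the "stale information" geometry described above.
# All numbers are illustrative assumptions, not real spacecraft figures.
C = 299_792_458.0  # speed of light, m/s

def dodge_margin(range_m, projectile_speed, evader_accel, ship_radius):
    """How far the target can displace itself, relative to its own size.

    t1 is the radar round trip, t2 the projectile flight time; the shooter
    is effectively aiming at a position (t1 + t2) seconds out of date.
    """
    t1 = 2 * range_m / C
    t2 = range_m / projectile_speed
    stale = t1 + t2
    displacement = 0.5 * evader_accel * stale**2  # constant-accel jink
    return displacement / ship_radius  # >1: the dodge can exceed the hull

# 1 light-second range, 15 km/s slug, 1 m/s^2 of jinking, 50 m hull radius
print(dodge_margin(3e8, 15_000, 1.0, 50.0))   # enormous: unguided fire is hopeless
print(dodge_margin(1e5, 15_000, 1.0, 50.0))   # <1: shoot at the center of mass
```

          At a light-second of range the projectile flight time dominates and the dodge margin is enormous; shrink the range and the margin collapses, which is exactly the crossover described above.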

          • dick says:

            You can also use a projectile that explodes halfway, in order to spread out and turn into shrapnel. This would probably be effective, since it’s plausible to get projectiles up to velocities where even a gram or two can fuck up a spaceship. But at long ranges, this is less like a torpedo than a moving minefield: “Gunnery officer, see this region of space 200 million kilometers from us? I want you to kill anything that’s in it 2 days from now.”

          • bean says:

            All true (and I’ve done a lot of analysis on this kind of stuff in the past), and I was writing somewhat faster than was good for me. You’re correct about unguided weapons, which is why I don’t expect them to be common. Guidance systems are relatively cheap even today, and the enormous mass leverage you get out of them is enough to make shooting ballistic weapons a bad idea. (Dealing with defenses is a somewhat different issue, but the answer there is probably a bus that carries in a bunch of smaller weapons. I think the math I did on this is on Atomic Rockets somewhere.)
            My point was more that saying “what about the Trident II” as a performance benchmark makes no sense.

          • John Schilling says:

            If the enemy is shooting at you with guided projectiles, then what matters is whether the delta-V you can use to evade is greater than the delta-V the projectiles can use to counter your evasion(*). This is probably false, because the projectile has no mission other than to hit you and you presumably are burdened by some mission other than evading projectiles. If it is true that your delta-V exceeds that of the projectile, it is probably true regardless of range. There is only a small range of plausible assumptions where range matters.

            If the enemy is firing at you with unguided projectiles, then whether he is using slugs or birdshot, you win if you brought the smart stuff. Be smart, and either bring smart munitions or submit to the enemy’s authority until you can score enough weaponized smartness to matter.

            If the enemy is firing at you with beam weapons, then range will probably matter a great deal.

            * Well, that and things like whether you have lasers or defensive projectiles of your own.

      • John Schilling says:

        It’s not clear that nuclear weapons will have a significant role in combat between space platforms, though they may have a role against e.g. planetary targets.

        The effective radius of even megaton-yield thermonuclear warheads against moderately hardened spacecraft, will be maybe ten to twenty kilometers. That’s negligible by space-combat standards. If you can get a munition within twenty kilometers of the enemy, you can score a direct hit. If the enemy’s defences can destroy your munition before it hits, they can destroy it at least twenty kilometers out. And there’s probably no good reason for either of you to position ships close enough that more than one will be destroyed by the same nuke.

        Nor do you need their raw destructive power. Take the ~140 kg mass of your Trident missile’s smaller W-76 warheads and replace it with a simple iron slug (or, better, 140 kg of sensors and smarts and divert propulsion and ablative armor). Hit the enemy at 15 kilometers per second, which would be the approximate closing velocity of Earthican and Martian battle fleets sortieing to engage each other with current propulsion technology. That inert projectile will deliver the approximate destructive force of a Grand Slam bomb, and it will, I think, be a very long time before anyone builds a ship (space or otherwise) that can’t be mission-killed by a Grand Slam.

        Use a Trident missile’s boosters to add an extra 7 km/s of speed to the warhead(s) before impact, and each one is now roughly a MOAB.

        And the original poster wanted space cruisers that could reach Neptune in six weeks. At that speed, 140kg of cast iron becomes a Fat Man bomb, a 20-gauge shotgun slug is a Grand Slam and a pellet of buckshot is a JDAM. I think we’re quite a ways away from those speeds, but when we reach that level, nuclear explosives will rank with stone knives and bearskins.
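        The energy arithmetic behind those comparisons is easy to check; a quick sketch using the usual rough equivalence of ~4.184 GJ per ton of TNT (the bomb-class labels in the comments are ballpark matches, not exact yields):

```python
# Kinetic energy of a projectile, expressed in tons of TNT (4.184e9 J/ton).
TNT_J_PER_TON = 4.184e9

def ke_tons_tnt(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s**2 / TNT_J_PER_TON

print(ke_tons_tnt(140, 15_000))   # ~3.8 t TNT: Grand Slam territory
print(ke_tons_tnt(140, 22_000))   # ~8.1 t TNT: roughly MOAB class
print(ke_tons_tnt(140, 1.29e6))   # ~28,000 t TNT: Fat Man class and then some
```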

    • liate says:

      The game Children of a Dead Earth started as an attempt to look at this premise; it only uses tech that is known to work now, just scaled up based on the relevant physics and actually in space. Most of its ships are very slightly conical (for reduced surface area and railgun slug defense when facing towards enemies) with a lot of radiators on the back and gun placements all over; iirc some people have had success with a much more small-drone-heavy setup, though. Pretty sure that its ships are not that fast, though; that’s really fast.

      Otherwise, Atomic Rockets is generally a good source for accurate scifi stuff.

    • Le Maistre Chat says:

      I’m going to technically ignore your “3 billion miles in 6 weeks” rule, because I did the math as Earth-to-Neptune when he’s at aphelion on the other side of Sol instead (31.4 AU).
      Your ship needs to accelerate to an average of 1,294,500 m/s… if you use an Orion drive, that’s 129,450 bombs that accelerate it by 10 m/s for each half of the outbound journey. Exploding 1 bomb/second until you reach the required cruising speed would take the first and last 36 hours of the voyage.
      This may not be doable.
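      For anyone who wants to check the arithmetic, here is the same calculation in a few lines (all inputs are the figures from the comment above):

```python
# Reproducing the Orion-drive arithmetic from the comment above.
AU = 1.496e11              # metres per astronomical unit
trip_m = 31.4 * AU         # Earth to Neptune at aphelion, per the comment
trip_s = 6 * 7 * 86_400    # six weeks, in seconds

v_avg = trip_m / trip_s            # required average speed, m/s
bombs_per_leg = v_avg / 10         # each bomb adds 10 m/s
burn_hours = bombs_per_leg / 3600  # at one bomb per second

print(round(v_avg))          # ~1,294,500 m/s
print(round(bombs_per_leg))  # ~129,450 bombs per acceleration leg
print(round(burn_hours))     # ~36 hours of continuous detonations
```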

      • Le Maistre Chat says:

        And even if the engineering is possible, what the heck political event would make an Earth government decide it needs to project force to Neptune in only 6 weeks? I don’t think there’s anything you can get around Neptune that you can’t get from Uranus instead, and as far as I can think Uranus would only be preferable to the closer gas giants if hydrogen-scooping for Helium-3 is more economical in a smaller gravity well than scooping it in Jovian orbit (which is the way Mobile Suit Gundam did it. Laugh off any “hard SF” setting that involves colonizing the Moon to sift Helium-3 atoms out of the regolith.)

    • John Schilling says:

      Insufficient data by far, and we aren’t going to get that data any time soon. For example, the parameter I have defined as BK – the mass of optimized kinetic munitions (i.e. missiles, including deployment hardware) that can be defeated by an optimized beam weapon of the same mass (including necessary support hardware) in the time it takes for an optimized kinetic munition to cross the effective range of an optimized beam weapon. Space combat looks a lot different depending on whether BK is 0.1, 1, or 10, and we just don’t know. For that matter, BK is probably some variable function of mumble something.

      But, I noticed the word “cruise” sneak into your discussion, and that is one aspect of space travel that is very much unlike oceanic travel and will drive very different behavior in wartime. There will be no cruising in the traditional, naval sense. Platforms (best not to think of them as ships) will be boosted (probably by things akin to tugs) onto trajectories suitable for a specific mission. They will almost certainly know the location and trajectory of all relevant enemy forces when they depart, and they will have enough propellant to carry out their one specific mission and return. If the captain, or the admiral back home, changes their mind, tough. There may be an abort option, but there won’t be enough propellant for “…forget Neptune, we’ve just received intelligence that the decisive battle will be at Uranus! Change course for a rendezvous off Titania in four weeks”. Reserving enough propellant for that will mean taking twice as long to carry out the Neptune mission in the first place. If you really want the flexibility to respond to the Uranus threat fast, you keep half your platforms in reserve, and the ones you send to Neptune you send on optimized trajectories with optimized propellant loads so you get them back home (and refuelled, and ready) as fast as possible.

      This is true for approximately any technology that obeys Newton; the only difference with e.g. fusion drives is that the same things happen faster.

      • albatross11 says:

        It seems like space battles would really suck for human crew. Spend six months traveling to the location of the battle, then the actual active part of the battle lasts eight seconds and runs too fast for the humans to do anything but bounce around in their seats trying not to puke as the automated systems do random burns to avoid getting hit while launching their missiles/shooting their railguns/firing their lasers at the other guys.

        I assume the actual battles will be robots killing robots (way less payload needed, less stringent restrictions on acceleration and radiation and temperature, etc). So you wait around until the robots are done killing each other, and then find out whether they’re going to come blow a bunch of holes in the space station where you live.

        • Le Maistre Chat says:

          It would be more boring than being on a cargo ship in combat. A combat spacecraft, if such things are even built, would be like a bulk freighter carrying mostly propellant, that can also weaponize extra propellant and/or carry death robots in the hold.

        • John Schilling says:

          If BK is small, the actual exploding-spaceships part of the battle is over in eight seconds, but it is preceded by a long period of careful planning and deploying the optimal pattern of kinetic munitions and angsting over whether you got it right. If BK is large, the battle proper is a long period of trying to carve up the enemy’s ships with lasers or whatever at extreme range.

          In either case, the battles of Space War One will probably be won by someone who ensured that they had humans (or human-equivalent AI) within a light-second or so to exercise tactical control, because making decisions on the basis of inadequate information while being confounded by an adversary who cheats is one of the worst sorts of problems to leave to a computer. Space War Two, may or may not benefit from the experimental reduction of the effective decision space to something computationally tractable.

      • gbdub says:

        In your BK parameter, did you mean to say BK is the number of kinetic munitions that can be defeated by a beam weapon of the same mass?

        That seems like an important factor, to be sure, though I’m not sure it’s the only or most important one. I suspect your upper limit on defensibility is going to be driven by how many targets a sensor system can reasonably track and target in the span from “outer effective range of defense laser” to “too close to dodge the shrapnel from the dead missiles”. Which is another problem – if the missiles are kinetic, it will be relatively easy to kill their sensors/guidance but a laser isn’t going to vaporize that hunk of metal or knock it off course much. It’s still dangerous and heading right for you. Maybe you can hit the fuel tank or warhead (if it has one) and blow it up, but now you’ve created a bunch of shrapnel still mostly headed at you in a cloud, with the added problem that all those bits are clogging up your radar and tracking system.

        The other problem with lasers as defensive weapons for missile swarms is offloading all the excess heat, which will require big radiators. Mass might not get you, but radiative surface area will.

        You might be better off relying primarily on smaller defensive missiles and some sort of point defense slug gun.

        What about space mine warfare? The thing about space is everything is zipping about with a lot of relative kinetic energy. So you could potentially kill an incoming attacker just by seeding their inbound trajectory with a cloud of small inert masses. These would be hard to detect without active scanning. They’d be relatively easy to dodge – assuming you could track the “minelayer” you’d merely have to make a small course correction to avoid the mines (although the minelayer could make this harder by popping out the masses with random velocity, making the potential size of the “minefield” very large). But every time you dodge you’re wasting fuel and putting yourself on a less efficient / longer trajectory, buying the defenders time. The need to maneuver and use active scanners would also make the attacker much harder to conceal.

        • bean says:

          The standard response to this is twofold. First, if you blind something, you can now maneuver to avoid it. Second, you can basically use the laser to ablate the incoming projectile, which pushes it onto a different trajectory.
          In terms of targeting time, phased array lasers can basically do that instantly.

        • John Schilling says:

          BK is mass vs. mass. If a 1000-kg laser can defeat only one optimal kinetic munition, but that optimal kinetic munition is a ten-tonne missile, then the winning strategy is to buy ten lasers and no missiles, and use 10-N lasers to burn the enemy’s hull (N is the number of missiles he was fool enough to buy). If the 1000-kg laser can defeat a hundred optimal kinetic munitions, each of which is a 1-kg cubesat from Hell, then the optimal strategy is to eschew laser, buy as many killer cubesats as you can deploy, and decide how many of those are needed to saturate the enemy’s defenses and how many you need to hold back to block his attack.
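          A toy version of that procurement tradeoff, just to make the two regimes concrete (the masses and BK values are the illustrative figures from the comment, nothing more):

```python
# Toy mass-exchange model for the two BK regimes sketched above.
# BK = mass of optimal kinetic munitions defeated per unit mass of laser.

def leakage(attack_mass_kg, laser_mass_kg, bk):
    """Munition mass (kg) surviving the defensive lasers; 0 means stopped."""
    return max(0.0, attack_mass_kg - laser_mass_kg * bk)

# BK = 10: a 1000 kg laser defeats one ten-tonne missile, so a 1000 kg
# laser battery stops a 10,000 kg salvo cold -- buy lasers, not missiles.
print(leakage(10_000, 1_000, 10))    # 0.0
# BK = 0.1: the same laser only swats 100 kg worth of 1 kg killer cubesats,
# so nearly the whole swarm gets through -- buy cubesats instead.
print(leakage(10_000, 1_000, 0.1))   # ~9900 kg
```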

          Radiators count against the necessary support mass for a laser; the aforementioned “1000-kg laser” includes its own radiator, or a heat sink sufficient for the expected duration of combat. And power supply, and targeting system capable of effectively guiding its fire against as many targets as it can engage. Radiators and heat sinks are a big unknown; there’s lots of concepts spanning several orders of magnitude in effectiveness, and we don’t know which ones will work.

          Also, if you’re facing incoming kinetic munitions, you absolutely want to accelerate unpredictably during the terminal engagement phase. Simplistically speaking, anything that is a threat to you must reveal itself as such by its own divert thruster plumes, and any threat that you neutralize is properly classified as such when it stops matching your evasion (there are complications, but they probably don’t change much).

          Space is almost certainly too big for “minefields” to be tactically useful unless the mines are guided kinetic munitions waiting for the opportune moment, reducing the problem to one previously solved.

    • Civilis says:

      I think a lot of how combat will work at one generation of technology will depend on how the previous generation of technology worked, because the optimal combat build is heavily dependent on countering the tactics the other side uses, and that’s going to be a matter of predicting what the next round of war will look like based on the last round. If the last space war was dominated by large, heavily damaging weapons with slow fire rate and tracking, the development after the war will be to produce smaller, more maneuverable, expendable drones, and someone will try to jump the gun on that and come up with a contingency to use against those, and so on. If you’re writing a space battle story, come up with the older standard technology and tactics and the slightly newer technology and tactics intended to defeat it. If you’re writing a space war story, you have the pre-war status quo, the new technology and tactics that give the aggressor enough advantage to have them start the war, and the technology and tactics invented by the defender during the war to turn the tide.

      The other place to start is the objectives of both sides in the war. Destroying the other side’s ships is not normally the ultimate objective; usually you want to destroy the other side’s ships so you can accomplish something else, such as land troops, blockade, or destroy a major target (with sci-fi, up to and including the whole planet). Alternatively, you want to eliminate their ability to do something to you (such as land troops, blockade, or destroy a major target). Whoever has the objective is going to need ships built to that objective. If my goal is to blow something up, I might not need to engage their ships at all; it might be enough to bypass them long enough to get my attack in. If my goal is to invade the planet, those might not even be warships but transports. Then the other side designs the ships to stop that from happening. If I’m defending against a planet-buster, I need ships designed to go after ships that want to run from battle. If I’m defending against a blockade, ability to run is always a consideration for my ships. Then the first side designs escorts to stop the second side’s defenses, the second side comes up with a design to take care of the escorts, until it reaches a stable ship vs ship level where the designs don’t change very much from iteration to iteration.

      A major secondary factor in how space combat looks is going to depend on the cost differential between manned ships, unmanned expendable combat packages, and decoys. Decoys are natural to consider when stealth is not an option. If I can’t prevent you from seeing me coming, I can still accomplish my objective by hiding amidst a mass of potential targets. The cost differential between an ICBM with a bunch of decoy warheads intended to overwhelm ballistic missile defenses and an ICBM with real warheads isn’t that great, because the missile and associated launch systems themselves are still a substantial part of the cost. On the other hand, if the cost of a combat mission spacecraft is primarily in things like life support, propulsion, and weapons, it may very well be easily affordable to slap a heat source on an asteroid (or dozens of asteroids) and throw them at the target on a vector matching your ship. Throw in some ECM to mess up their targeting, perhaps mounted on an expendable drone to avoid giving your position away, and you’ve got an effective combat multiplier tactic.

  7. Randy M says:

    Science fiction question. Assume a significant but not unlimited decrease in cost of launching shuttles. Is there any plausible justification for manufacturing on an orbital space station?
    Currently it looks like the cost to transport to/from the ISS is about $10,000 per pound. With a tenfold increase in efficiency, that’s $1,000 per pound each way, or roughly $2,000 per pound once you account for hauling the raw materials up and the product back down. How much of a stretch is some kind of zero-gravity or vacuum-dependent manufacturing process that manages to make those costs look trivial in comparison to the utility of the products?

    • Mr. Doolittle says:

      One really important benefit to doing some work in space is the limitations on the size of rockets. Really large payloads/construction projects cannot be launched, so they would have to be built in space out of various components.

      There could also be products that are too unstable to send as finished products. I’m thinking of chemical compounds that wouldn’t like launch conditions, but could be sent up as inert components. That would also apply to physically weak items, like glass.

      • bean says:

        That’s why we have modules. We’ve gotten quite good at snapping together stuff in orbit, so there really isn’t much call for setting up industry to build big things.

        I think Randy is more interested in microgravity crystal growth or whatever for use on Earth, not space industry for use in space. But I don’t know of anything that fits the profile.

        • Randy M says:

          You’re right, though I don’t want to be too leading. My thought is that a company builds a space station in order to utilize some innovative manufacturing technique (say, some new chip-fab process that jump-starts a stalled Moore’s law) and uses that as a stepping stone to begin an interstellar colonization process.

          Then I slapped my forehead at the realization that the economics of that early situation are going to be massively against them compared with manufacturing anything on Earth. Obviously cell phones that are slightly better or something is not going to be revolutionary enough to justify that kind of logistical expense.

          I’m going for a near future vibe and want to carefully allocate my black box tech magic points.

          • Mr. Doolittle says:

            I thought for a while about something we could manufacture in space for use on earth, and I couldn’t come up with something until/unless we got to $0/pound or something else that’s similarly unreasonable.

            All manufacturing processes result in some waste, so you’re paying more for the raw materials to be launched than for the finished product, even if you needed it to remain in space. If it needs to come back down, you’re paying both directions.

            Maybe you’ll get a response from someone working in a science field who can identify an item that needs zero gravity or something.

          • Skivverus says:

            @Mr. Doolittle, Bear in mind there are companies in existence today whose stated end goal is to get those raw materials from asteroids, rather than from Earth. It’s probably not too much weight on suspension of disbelief to say that one of them succeeds.

    • helloo says:

      Have you checked the wiki and its various links? – https://en.wikipedia.org/wiki/Space_manufacturing

      Some major points that were mentioned-
      Microgravity or no gravity allows different crystal growth and other weird physical properties
      Easy access to vacuum and general purity concerns
      Easier manipulation of very heavy objects

      Another is testing/starting point for eventual far-space expeditions where they will be unable to access terrestrial support.

      Depending on how “far future” you want it to be placed, there can be some analogy between this and the increased globalization caused by cheap shipping. That is, if you think of the products that are assembled and shipped to multiple countries before completion, then replace “advanced country with high tech labor/factories” with orbit and “country with cheap labor” with Earth.

      • Randy M says:

        Have you checked the wiki and its various links?

        I didn’t know that was a thing that would be there! Good thing I’ve already come to terms with not being terribly original.

        A great point there is that it may be cheaper to develop extraterrestrial mining than to bring raw materials up. Now I wonder if it would be possible to manufacture small rockets there that would then be sent back to Earth without requiring a trip up. Seems reasonable if you’re already mining. Wonder how the fuel/propellant would work out?

        That is if you think of the products that are assembled and shipped to multiple countries before completion, then replace “advanced country with high tech labor/factories” with orbit and “country with cheap labor” with Earth.

        For sure. In any event, only the absolute necessary processes would be done in space, with all final assembly and optional components like cases or wiring being finished earthside.

        • ryan8518 says:

          For rocket fuel, see ISRU (In-Situ Resource Utilization); the second use case on the wiki is rocket fuel. This is a fair bit of the specific push behind the SpaceX attempts at developing a methane engine (for the Mars equivalent), since the benefits for an Earth-based system are fairly negligible compared to using our old favorite RP-1. The simple approach while still in the Earth-Luna system is to just convert water->hydrogen peroxide, which lets you build a mono-propellant rocket using 1950’s tech.

  8. f2012203 says:

    Hi, I wasn’t sure if this was the right place to post this until I saw another comment about depression.

    Has anyone else here had immense success with caffeine and L-theanine in terms of boosting mood? Did anything else help?

    I have been going through a bad time for the last few months, and the only thing that helps me is a pill I take every morning (100 mg caffeine + 200 mg L-theanine). For the next 5-6 hours, everything seems fine again. Once the effects wear off, I’m back to the depressed state. I have periods of violent sobbing, with no trigger at all. I’m going to see a therapist soon about this.

    Any help would be really appreciated. This is my first ever comment here, so I apologize in advance if the formatting isn’t how it should be.

    • Nancy Lebovitz says:

      This is a right place to post your question and your formatting is fine.

      I have no idea what else might help. Have you tried anything else?

    • Skivverus says:

      This post may be of help.

      The short version is “there’s a long list of things to try which each help some people but not others; keep going down the list until you find stuff that works for you.”

  9. johan_larson says:

    So, Reason is stepping up and telling folks that it’s OK not to vote, because the chance your vote will make a difference in the outcome is so small.

    https://reason.com/blog/2018/10/31/its-ok-not-to-vote

    But in making decisions like this, you have to consider not only the probability of making a difference, but the magnitude of the potential change. And for some offices, the magnitude can be very large. Imagine someone other than George W. Bush had been in office in 2003. That president may well not have gone to war in Iraq, saving the nation a disaster with a bill currently estimated at $2.4 trillion. That’s quite a difference.

    Anyway, have political scientists done some sort of studies to figure out how narrow political races need to get before it makes sense even for skinflint rationalists to wander over to the ballot box?

    • IrishDude says:

      Askew to your point, but: Any given person has an infinitesimal chance of their vote making a difference in an election, so it’s rational to not vote if the only thing that matters to you is who gets elected. However, if some individuals can influence who large numbers of people vote for, it’s rational for them to spend time influencing election outcomes. Is Reason one of those influential entities? If so, they might want to be more careful in dispensing voting advice (though they claim within their article that their advice is unlikely to be convincing).

    • Randy M says:

      But you have to factor in the chance of incorrectly predicting the outcome. For example, imagine someone who voted for Bush because he was ostensibly against nation building.
      So: a small probability of effecting a large positive change, a small(er?) probability of effecting a large negative change, various small probabilities of effecting no change or a minor change, and a large probability of not affecting the outcome at all, whatever the magnitude happens to be.

    • Deiseach says:

      So, Reason is stepping up and telling folks that it’s OK not to vote, because the chance your vote will make a difference in the outcome is so small

      I wonder if this effect has anything to do with perceived voter suppression? Party A looks at the tallies and goes “But there should have been another X thousand votes cast, plainly this is because they would have been cast for us so Party B did something nefarious to discourage our voters!”

      Whereas, for example, in an article about Alexandria Ocasio-Cortez, it mentioned that not only were voters delighted to see a candidate actually turning up to stump on their doorstep, but a lot of them were not even registered (and Ms Ocasio-Cortez helped them get registered). So maybe it’s not “Party B’s nefarious efforts”, it’s a combination of “eh, it’s too much hassle to make time to go vote, I’m not even registered, besides the other lot always get in anyway and my vote isn’t going to make any difference”?

      The point there is a lot of those voters would have been the poor/minorities that Party A would claim Party B to be suppressing, but reading between the lines in the article it was more that everyone expected things to go on as they had been, the incumbent would continue to incumb, that’s why he hadn’t bothered canvassing, so their votes would make no difference and it wasn’t worth their while to vote or even to register. If any suppression was going on, they were suppressing themselves.

    • nimim.k.m. says:

      This “problem of voting” is one of my favorite puzzles, so let me work out some notions. 🙂 I suspect nothing below is originally attributable to me, because it is simple enough, but I like reiterating these points and maybe someone has not yet heard them.

      The chances of a single, mindlessly cast vote influencing the election outcome are correctly thought of as negligible. If you toss a coin in a ballot booth and cast your vote according to it, the chances of it affecting anything are quite small.

      However, people usually don’t make their voting decisions randomly. We make our decisions in the presence of all the other voters in the country, and thus “is it worthwhile to go vote” becomes a quite fascinating problem of decision-making under uncertainty.

      If you had perfect foreknowledge of how everyone else would vote, your vote would be meaningless except in the rare case that the election would be exactly tied without it; then, with your perfect information, you would know that you could cast the decisive vote. However, you don’t have such information.

      If you had no knowledge of the behavior of the other voters at all, your prior over the outcome is the uninformative one, that is, a uniform distribution over all possible election outcomes. I assume everyone correctly determines that this sort of prior is bollocks, as you know more about the likely election outcome than that.

      But what about the middle ground where you have some information but not perfect information?

      Enough people acting in a coordinated effort to win an election can win an election. If everyone else were voting randomly, “enough people” could be quite a small number. While fighting against the noise of randomly voting people would be infuriating, nevertheless even a small effort would start tipping the expected value of the distribution in their direction, and after several elections they would have been winning more often than not.

      Of course, in any realistic election you and your fellow compatriots in the coordinated effort to win the election by the cunning stratagem of “agreeing to vote together for the same candidate” are not fighting against the menace of i.i.d. coin tosses, but most likely against another coordinated effort to elect an opposing candidate. In some other countries one would have several such efforts, which leads to the equally fascinating question of election systems, but let’s assume, for the sake of argument, the first-past-the-post system and the environment induced by it. The observed behavior in the FPTP electoral system is convergence to two opposing parties.

      There are many fascinating uncertainties involved: what is the base rate of support for your favorite candidate, i.e., the outcome if everyone voted? This can be estimated by Gallup-style polling. But not everyone votes: if the base rate is 50%-50% and you manage to activate your base to go to the polls better than your opponent does, you win. Indeed, turnouts so abysmally low that a minority could win an election against a demoralized majority are common enough. Thus, you have to take into account your uncertainty about the overall enthusiasm among the supporters of your preferred candidate and the supporters of your not-preferred candidate. (Pollsters try to measure this, too.)

      Anyway, I believe “winning elections is about successful management of a coordinated effort” is also the reason why political parties, election rallies and other such public displays exist: you want to convince everybody in your political movement that all of you together are a coordinated effort of people committed to keeping the whole attempt afloat. So you want to show each other how committed you are and stir up more enthusiasm. One wishes to sell the story that you are coordinated enough, more so than your opponents, so you have quite a large chance to win, but it is not a foregone conclusion, so every vote counts!

      However, there are some effects that are as good as random. Weather has some kind of effect: if it rains in one county and not another, some people will stay at home and some others will not. And maybe there are some people who vote according to a coin toss.

      The reason for the overly long exposition above (in addition, as I said, to the fact that I love talking about election problems) is that I think it does not make sense to look only at the historic cases where the result has been almost even and go “wee, looks like that’s pretty rare”. The thing you want to measure is the apparent strength of the competing coalitions on election day, which can be summarized by the margin between the votes cast for the two candidates, and another thing that should also count in your calculations is the uncertainty of your estimate of that particular outcome.
      If it looks like one side has mustered together a much larger and more stable-looking alliance than the other and is going to win by a very large margin, then it makes sense not to use very much of your time voting (like the author suggests), unless you find your opportunity cost also negligible, or maybe you have fun while voting. However, in addition to the size of the winning margin (of whichever candidate you think is going to win), you should also assess how much you believe your estimate can fluctuate.

      [Sorry, this is going to be longer than expected. 1/4]

      • nimim.k.m. says:

        Suppose you assume candidate A will win anyway and thus your vote will be meaningless. By how many votes do you assume they will win? How surprised would you be if they won by a margin of 10 000? 1000? 100? -100, that is, you are mistaken and they lose by 100 votes? Try to define a distribution that summarizes your best current knowledge of the situation. I myself would draw a beautiful, relatively fat Gaussian.

        Then ideally you repeat this for several elections, and look at both your predictions and the true outcomes. Because election results have lots of noise (beware the coin-tossers that lurk beyond the threshold of the voting booth), you ideally want to calibrate your estimates to be wide enough that the true realized margin is usually within the fat part of the probability distribution you drew. Consider three elections in a row, where you always assign probability one to “candidate A wins by 50 000 +/- 100 votes” and zero to other outcomes. And further suppose that in reality candidate A wins all three of those elections, but by 80 671, 57 428 and 389 votes, respectively; while you got the name of the winner right, your prediction is deemed useless for this kind of analysis. Likewise, consistently placing a uniform distribution over all possible margins [- number of eligible voters, + number of eligible voters] is also quite useless.

        Alternatively, tune by looking at the historical record, or go full Nate Silver and aggregate data from several pollsters and experts.

        Now, anyway, you can at least calculate what I’d call a meaningful estimate of the probability of candidate A’s win margin falling in the range [-1,1] (meaningful in the sense that the estimate reflects your personal belief about the situation, and you are going to do any personal cost-benefit analysis based on your personal beliefs). While it will still be a small number, it won’t be as astronomically low as the historical record of close ties in elections would suggest. In many cases, esp. in swing states, it will even be quite near the mode of your distribution.

        (Of course there is also an opportunity cost to this kind of long-winded process. And there’s an argument that if you are conducting this kind of explicitly quantitative System 2 operation and do it wrong, you might end up further off than if you had relied on your knee-jerk intuition, but let’s not go there now.)

        For example, considering the recent US House election, we have Nate Silver’s beautiful probabilistic forecasts. I picked an arbitrary swingy-looking House district from the 538 forecast: Michigan 11th, where the forecast gave candidates H. Stevens (D) and L. Epstein (R) an 80% and 20% chance of winning, respectively, and it appears that Stevens did win the race, by approx. 6000 ballots. On the other hand, while the predicted favorite winning an election is not surprising, one-in-five things happen all the time. For me, a 20% chance of rain is almost at the threshold where I take an umbrella with me, so if I were in Michigan and invested in the election outcome, I would have considered this a close one. If you define a Gaussian distribution that has 1/5 of its mass below zero and 4/5 above zero, the mean is within 1 sd of 0, and thus the probability of the band near 0 should not be abysmally small. (Of course it becomes smaller the larger the district gets, because the width of the slice shrinks relative to the distribution. Assuming margin ~ N(6000, 6000), we have P(margin in [-1, 1] ballots) ≈ 0.00008.)
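        As a sanity check on that last number, here is a minimal sketch; the margin ~ N(6000, 6000) distribution is purely the assumption from the paragraph above, not anything 538 publishes:

        ```python
        from math import erf, sqrt

        def normal_cdf(x, mu, sigma):
            """CDF of a normal distribution, via the error function."""
            return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

        # Assumed margin distribution: mean 6000 ballots, sd 6000 ballots.
        mu, sigma = 6000.0, 6000.0

        # Probability the realized margin lands in the decisive band [-1, 1].
        p_tie = normal_cdf(1.0, mu, sigma) - normal_cdf(-1.0, mu, sigma)
        print(f"P(margin in [-1, 1]) = {p_tie:.5f}")  # prints 0.00008
        ```

        The band is so narrow relative to sigma that this is essentially 2 × pdf(0), i.e. about 8e-5.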

        So I have been rambling about sources of uncertainty in elections. What about the “is it an effective use of your time” angle? Now that we have our estimate of the toss-up probability, one could try to fathom how large an amount of money the House Representative from Michigan 11th is going to be responsible for, and then compare it to how large a cost the act of voting is to you, as Johan suggested.

        Assuming the opportunity cost for you is 100 USD, and you vote in Michigan 11th, the act of voting was sensible if the representative is going to make decisions worth 1 250 000 USD or more. Millions of USD is peanuts for a government budget even in my small European country, but I do not know how much one single US House representative can affect the finances of their country. There are 435 representatives, and the US federal government budget is several trillions. Maybe someone more knowledgeable about US politics can answer.
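        The break-even figure above is just cost divided by toss-up probability; a tiny sketch using the comment’s own hypothetical numbers (the 100 USD and 0.00008 are assumptions, not measured quantities):

        ```python
        cost = 100.0          # USD, your assumed opportunity cost of voting
        p_decisive = 0.00008  # assumed P(your vote flips the outcome)

        # Voting breaks even if the value you place on flipping the outcome
        # is at least cost / p_decisive.
        breakeven = cost / p_decisive
        print(f"{breakeven:,.0f} USD")  # prints 1,250,000 USD
        ```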

        What if you nevertheless come to the conclusion that voting is a waste of your time? Maybe you live in a state which is impossibly solid blue or red, or maybe you just can’t leave your job for a full day to stand in a queue [see below], and, like the author suggests, you ponder whether you should spend your time more optimally?

        [2/4]

        • nimim.k.m. says:

          If you value your preferred policies being enacted very much and are very good at organizing and politicking, the most optimal use of your time may be to become involved in politics as a politician or low-level grunt. This usually involves trying to get people to vote your way. And even if you determine that you are abysmally bad at the politics business and it is not a viable use of your talents, then an EA-spirited piece of advice would be to earn enough money that you can donate to organizations that optimally campaign for your causes. (Or maybe become a data-analytics wizard who can spam 1 USD donation ads and pleas for optimal profit.) This, again, would involve attempts to convince the populace to vote one way and not the other, if not directly then by proxy. Even if your chosen organizations determine that it is more effective to just communicate with whomever gets elected (i.e. start lobbying), and only when needed (i.e. lobby only when it is truly deemed cost-effective), the politician in question still cares whether they are elected or not. And even if the said organization decides not to lobby at all, what it can do is restricted by the local laws and political atmosphere.

          Enterprise-minded people sometimes argue that what you really should do is avoid politics altogether and found a start-up with a product that is enticing enough to enable running a profitable business and that transforms society a bit towards your liking in addition to making you a profit. (Like Zuckerberg claims he did with FB.) Which sounds nice, but one very interesting thing to notice is that if you look at the ex-start-ups that have managed to become so successful that they are today large corporations, their founders in particular, if not their acting executives, spend considerable amounts of money lobbying and donating to various causes.

          One could argue that even ads for the most non-political products imaginable on the surface (say, an advert for some consumer product, like soda pop, which is supposed to lead to increased sales of your sweet sugary soda pop, which you believe will make the world a happier place) influence the airspace of the public sphere a little, and could plausibly have some minor effect that leads to increased support for the political candidate who is most convincing at promising people that their political platform will ultimately enable them to buy more tasty soda pop. But this is quite far-fetched. Probably the mere existence of your tasty soda pop product is going to make people want it, and thus want policies that enable them to get it, a bit more on the margin. If you add enough corn syrup to make it tasty enough.

          In countries where election outcomes count, everything ultimately comes down to getting the electorate to vote the right way. Or at least, that is involved in the process. It feels kind of like cheating if one does not use the opportunity to participate in the actual process oneself, if the cost of participation is reasonable. And if you are planning to vote in the elections, you probably at least try to follow politics a little, maybe talk about political issues with your friends and with strangers on internet blogs, and thus participate in the grand public discussion. (Hopefully minding all kinds of civic community virtues while doing so.) Maybe it could be framed as a question of the difference between acting like a direct personal utility-maximizing agent and being a person who genuinely believes that societies are best organized as civic democracies, and thus acts like it? Who knows, maybe they converge to similar-looking behavior, because computing all the direct and indirect, time-delayed effects of your acts on your experienced utility is very difficult to do correctly.

          Some people could argue that they do not, in fact, believe in democracy. The author of that piece seems to be of the opinion that uninformed people voting is counterproductive. However, firstly, if you are reading this sort of thing, you are most likely relatively well informed, so the advice would not apply to you, as your participation would increase the average informedness of the electoral outcome. And of course, secondly, we live in a society which is composed of the totality of all the other people anyway. Their political awareness, or full lack of it, is equally present in all aspects of our lives, and one of the points of having a democracy is that the acts of politicians are constrained by the electorate’s will. Voter turnout is a proxy for the electorate’s aggregated belief that their interests will be represented by their representatives if they vote, so a large turnout is a good sign of the health of the system.

          Also, an afterthought: if your opponent wins by a wide margin, as opposed to a very close-cut election result, that is bound to affect how every political agent involved conducts themselves in preparation for the next election, and how far off the result is from the election forecasts is going to affect how the pollsters adjust their future forecasts.

          [3/4]

          • nimim.k.m. says:

            Also, finally:

            “The idea of leaving work, forwarding all of my calls to my phone, to go stand in line for four hours, to probably get called back to work before I even get halfway through the line, sounds terrible,” says Maria, 26.

            This, by the way, is a sign of a very successful coordinated attempt to win an election that, in addition to being successful, is also very much against the spirit of the game of “democracy”. As I said, the idea is to poll the opinion of the citizens. What is described in the quoted paragraph is a bit like a milder version of the “in elections the one thing that counts is the people who count the votes” adage, sometimes attributed to Stalin. If you can declare the result to be whatever you want, the cost-benefit calculation from the voter’s perspective breaks down; except in Stalin’s time the cost of not voting was still high, because he wanted the elections to look like elections. However, introducing arbitrary poll-taxes-by-proxy twists the cost/benefit ratio in exactly the same direction as reducing the influence of the vote.

            The principled course of action according to the ideals of democracy would be a coordinated campaign to decrease the cost of voting. (Whether that fight should be fought by voting in elections … again, it depends on a great many variables.) But if the opportunity cost of voting is that large, the true scandal is not that participating in elections is worthless in general, but that you are not living in a fully functioning democracy. My opportunity cost to cast a vote is smaller than what it took to write this comment, because I’ve spent 140+ minutes thinking and typing this, which is in about the same ballpark as the time cost of all my acts of casting a ballot during my lifetime put together, including the travel time to the polling station and back. (By revealed preference, it appears that I personally value (and thus consider equally influential?) writing 2.9k-word stream-of-thought-y rambling essays about voting in internet blog comment sections about the same as actually voting in elections.)

            [4/4]

  10. CatCube says:

    Structural Engineering Post Series

    I apologize for how long this has taken since the last one, but work’s been kicking my ass. I had some drawings to finish this past weekend for a review that started today. I’m also having difficulty getting motivated to do, well, anything outside of work, and it’s been getting worse over the past few months. This technically won’t even be the full post I intended.

    I’ve been working on the posts for design of specific materials, with steel being the first one I’d touch on. It’s not complete, but as I was putting it together I realized that there was an unstated question: why do materials have separate codes? The first part of my discussion of the steel code was to answer this, but I figure I can at least post it separately.

    I discuss some subtleties of the steel code, found here, in discussing what goes into considering code requirements.

    Material Specific Codes
    I’ve discussed this before in previous posts, but I want to talk in a little more detail about what material specific building codes are and why we use them. Different materials have different properties, of course, but we’re seeking three goals that somewhat conflict: to ensure specific behaviors in the completed structure, to avoid overcomplicating the analysis, and to provide the most economic structure possible within those limits.

    AISC 360 specifically names grades of steel that presumptively will work. If you go to the first chapter, it has a section listing a bunch of ASTM standards for various steels, under a statement “…the following ASTM specifications [are] approved for use under this Specification:” Note that this list of materials does not limit the engineer from using others, but if he does it’s now on the engineer to ensure that it will behave similarly enough to use the same design methods. Most of the steels listed are mild steels with well defined yield points and long plastic regions (NB: This is for the steel forming structural members; bolts are different).

    For those not familiar with what I mean by this, suppose you were to take a sample of one of these steels and put it in a test frame with the top fixed and the bottom on a screw, with a gage fixed to measure the force acting on the sample. As you turn the screw at a slow, constant rate, the measured force will increase, until it reaches a particular point (the yield point). As you continue to turn the screw, the force will stay at that yield point for a while, before it finally starts to increase to a maximum (the tensile strength), after which it will drop off quickly and the sample will break. That is, the steel has a linear elastic region where the force increases in proportion to the imposed deformation, followed by a long plastic region. The standard for each kind of steel will define many required properties, including what the minimum yield and tensile strengths must be, the alloying elements in the steel and their particular minimum and maximum ratios, and how much the sample must elongate before rupture. I mention that last one because the fact that the steel will elongate significantly before breaking is an assumption underlying some of the design criteria. For example, a grade of steel that used to be very common but is on its way out, ASTM A36, has a requirement that an 8″ length must elongate no less than 18% before breaking; that is, you must scribe lines 8″ apart on your test sample prior to pulling on it, and they must be at least 9.44″ apart before the steel ruptures. To use more precise, technical language, the steels used for structural steel frames are ductile. The tradeoff is that these materials are not very hard; tool steels are much stronger and harder, but generally will not deform nearly as much before rupture.
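    The 8″ → 9.44″ figure is just the elongation requirement applied to the gauge length; a trivial sketch (the function name is my own, and the 18% value is the A36 requirement quoted above):

    ```python
    def min_final_gauge_length(gauge_length, min_elongation):
        """Minimum distance between the scribed lines at rupture,
        given a minimum-elongation requirement from the material spec."""
        return gauge_length * (1.0 + min_elongation)

    # ASTM A36: an 8-inch gauge length must elongate at least 18% before rupture.
    print(min_final_gauge_length(8.0, 0.18))  # prints 9.44
    ```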

    This assumption is made because of the second requirement I listed: avoiding overcomplicating the analysis. To explain this, let me contrast with a design code for steel of a different type: cold-formed steel. The most common example of this would be the metal studs you may see currently being used in place of wood studs in many places. The cold-forming process changes some of the steel properties (including some reduction in ductility), but the very thin elements forming these channels are very prone to local buckling. If you want to see an example of local buckling, take a piece of paper, fold it into a square tube and bend the tube; the crinkling you see as you start to bend it is local buckling. While local buckling is important to check in structural steel design, most hot-rolled shapes are not nearly as prone to it. The structural steel code therefore starts with relatively simple equations for framing that will not suffer local buckling, adding epicycles to account for it when it occurs. The code that governs cold-formed steel (American Iron and Steel Institute [AISI] S100) more or less assumes that local buckling will occur, and accounts for it by giving methods to calculate an equivalent reduced section; these are somewhat more tractable, but probably overly conservative for larger hot-rolled sections.

    The tradeoff is that simplifying assumptions are made about material properties; for structural steel you don’t need to calculate the strength of every single element to the gnat’s ass, because if one is a little higher and one is a little lower, the steel elements will yield until they share the load. For example, when you have a pattern of bolts in a connection, if you don’t have all of them exactly aligned (and they won’t be), one bolt will see the force first. The other bolts will initially see no or low load until that first bolt hole yields and brings the others into bearing. The assumption made is that in a standard connection transferring shear, all the bolts will share the load equally. So long as the material can deform enough to bring all of the bolts into bearing, this is reasonable.

    A standard hole in a structural steel connection is 1/16″ (1.6 mm) bigger than the bolt for bolts less than 1″ (25 mm) in diameter, and 1/8″ oversize for 1″ and larger, to allow for mislocation, hole size variation, and fit-up tolerances in the field; if one set of holes between plies lines up perfectly, while the next one over has the holes juuuust line up enough to squeeze the bolt through, that tight bolt could have to deform one of the holes 1/16″ before the next bolt shares the load! All of the standard structural steels can accommodate these tolerances. An engineer is permitted to use other materials, but has to be cognizant of whether or not they meet the assumptions made in the code.

    These examples are minor details, but they’re the most tractable that I could think of to explain the kinds of things that go into developing the design equations used in a material code. Now consider that concrete will behave well outside the assumptions discussed above, because once cast with reinforcement it will have different properties in different directions! Leaving aside that nobody has yet come up with a good Grand Unified Theory of material behavior, even if somebody were to do so, it would require extremely detailed calculation compared to the simplifying assumptions made for each material. Hence, one set of methodology (code) for hot-rolled structural steel, another for cold-formed steel, yet another for concrete, and wood has yet another code containing simplifying assumptions for that material.

    Seriously, next time I’ll get into the meat of steel design requirements.

    • bean says:

      That was interesting. Mostly stuff I already knew, but in a more coherent form. I appreciate your efforts despite being busy at work.

      • albatross11 says:

        One thing that’s interesting to me is the way the building codes seem to be used, not just for enforcing rules, but for saving time in calculations: presumably a structural engineer *could* recompute this stuff himself, but it seems pretty clear from CatCube’s description that a lot of the time, he’s going to just follow code and figure that this means his building will stand.

        • CatCube says:

          To a certain extent that’s exactly true. I was going to talk about this briefly at the start of the steel design part (and, well, still will talk about it). If you’ll permit me some hyperbole, codes to a certain extent function as the Great Wall of Prophecy from Futurama:

          “Great Wall of Prophecy, tell us God’s will that we may blindly obey.”
          “Free us from thought and responsibility.”
          “We shall read things off you!”
          “Then do them.”
          “Your words guide us.”
          “We’re dumb.”

          I’m obviously not saying “We’re dumb,” in general, but all of us are going to be dumb in areas where we don’t have deep expertise, so we’re all dumb in particular. For example, the more I read about seismic design, the more I come to think that there are about 14 people on Earth who truly understand how to design a building to successfully resist earthquakes. However, 14 people aren’t enough to even design all of the buildings in the Bay Area, so they write sections in the building code and tell the rest of us, “Follow these equations under these limits and everything will be fine.” The rest of us are monkeys just substituting in for the terms on these equations.

          However, if you’re working on a massive project like a skyscraper or a large bridge, you’ll hire one of the organ grinders instead of one of the monkeys.

          And, of course, the seismic experts will be in the same boat with, say, reinforcement detailing compared to experts in that area. Everybody in the field has to learn enough to know the principles under which the equations were built to know when you’re starting to push design equations outside the assumptions used to develop them, but it’s not possible for everybody to deeply understand every single one.

          • bean says:

            I think this is common in all fields of engineering. When I used to watch fatigue analysis being done (by people with decades of experience in the field) it was still basically a cauldron stirred with a slide rule. Being a good junior engineer is half knowing when to go to the SMEs, and half figuring out how to bother them as little as reasonably possible.

    • ryan8518 says:

      Ductility is also great because it’s what allows a strength engineer to sleep at night when thinking about cracks and other defects in a structure. The same logic that allows a ductile material to redistribute load around a bolt pattern allows it to carry load around a defect, even though the stress immediately around the crack can often exceed the ultimate stress of the material (in a scenario where the crack is growing anyway). This leads to a constraint in normal engineering that any tensile load-bearing member should be made from a ductile material (crack stresses are really only relevant in the tensile direction, since compressing the gap acts as if the gap were non-existent, though complicated load paths can confuse this). This is one of the purposes of rebar in concrete, for example, since raw concrete is rather not ductile. This fact, and the relatively recent introduction of ductile materials as common building materials, go a long way toward explaining why classical architecture is so focused on keeping things loaded in compression, and the resulting heavy construction methods (think stone arches, flying buttresses, vaulted ceilings).

      Ductility is also fun, because I’ve seen it defined as a material having a minimum elongation (% stretch to failure) of between 2.5% and 10% (the lower-number definitions are usually associated with colder operating environments). A point to note is that while almost all materials change in ductility with temperature (generally hotter = higher ductility for engineering materials), some (classically, low carbon steel somewhere around -50 °F) exhibit sharp transitions, at which point all of your ductile-material assumptions go out the window and your ship sinks from a sudden onset of brittle failures in parts that normally would have yielded to share the load. This isn’t so much true of, say, stainless steels or aluminums, so cold-weather/cryogenic equipment tends to be made from these, even though their room temperature ductility (usually in the 4%-10% range) is much lower than for low carbon steel, and it maintains that relationship down to the brittle transition point.

    • ryan8518 says:

      You mention that all of the fasteners in a pattern are generally treated as equally loaded. In my world (aerospace), we would tend to agree about that for axial and shear loads, but we have a lot of joints where bending and torsion loads matter and will overload bolts on the outside of the pattern. Do you typically have to account for that, or is that something that is generally handled by designing your system such that you don’t carry significant bending/torsion loads in your joints?
      Another thought: is there a general rule in your world for what is too far out of pattern for a fastener? We would rarely allow anything out of tolerance by more than .060″ (and that’s pretty generous), but a building isn’t exactly a precision machining job. Something like “within 10% of the bolt pattern major diameter” would strike me as a rule of thumb, but I’ve seen some wonky fastener connections on buildings before (and I would assume you have different standards for pre-cut material and field-fitted stuff)

      • CatCube says:

        I’m going to answer both of your comments under this one.

        Ductility is also fun, because I’ve seen it defined as a material having a minimum elongation (% stretch to failure) of between 2.5% and 10% (the lower-number definitions are usually associated with colder operating environments).

        These numbers are at (IIRC) 68°F, but the most common steel used for W-shapes (ASTM A992) has 21% in a 2″ and 18% in an 8″ length as absolute minimums.

        The problem of fracture in steels at low temperature is also a problem for bridges; low temperatures and high stress cycles combine with relatively light, nonredundant structures to become something of a fracture nightmare. Steels with a minimum Charpy V-notch impact energy, as well as special detailing and fabrication requirements, were developed to minimize this problem. One requirement is the identification of fracture-critical members, where a fracture in the member would result in failure of the structure. They’re not disallowed, per se, but they’re very strongly disfavored. This is why you see very few new truss bridges; they’re inherently fracture-critical, since the nature of a truss means that you have tension members that you can’t afford to lose.

        You mention that all of the fasteners in a pattern are generally treated as equally loaded. In my world (aerospace), we would tend to agree about that for axial and shear loads, but we have a lot of joints where bending and torsion loads matter and will overload bolts on the outside of the pattern. Do you typically have to account for that, or is that something that is generally handled by designing your system such that you don’t carry significant bending/torsion loads in your joints

        I was attempting to be careful by stating “standard connection transferring shear,” that is, the connection doesn’t carry a moment, and is modeled as a pin for analysis purposes. But as you said, my statement of equal sharing only applies to connections carrying pure axial or shear loads (which are very common in structural steel). If there’s an eccentric load through the connection, the assumption isn’t valid as stated, and you have to distribute the load to bolts. You’re permitted to do a so-called elastic analysis where you model the eccentric load as a combination of pure shear acting through the centroid of the bolt group (shared equally among them) and a moment (where the shear from the moment is distributed to the bolts according to their distance from the centroid). The load on each bolt is then the vector sum of those components.
        This method is inaccurate on the conservative side, but it’s easy. A more accurate method (the instantaneous center of rotation method) is preferred, but it requires either iteration or the use of design tables.
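For the curious, the elastic method described above can be sketched in a few lines of Python. This is a toy illustration, not anything from a design code; the bolt coordinates, load, and eccentricity below are made-up values.

```python
import math

def elastic_bolt_group(bolts, P, e):
    """Elastic (vector) method for an eccentric shear on a bolt group.

    bolts: list of (x, y) bolt coordinates
    P:     applied shear, acting in the -y direction for simplicity
    e:     x-offset of the load from the bolt-group centroid

    Returns the resultant shear force on each bolt.
    """
    n = len(bolts)
    # Centroid of the bolt group
    cx = sum(x for x, _ in bolts) / n
    cy = sum(y for _, y in bolts) / n
    # Polar "moment" of the pattern: sum of squared distances from the centroid
    J = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in bolts)
    M = P * e  # moment produced by the eccentric load
    forces = []
    for x, y in bolts:
        dx, dy = x - cx, y - cy
        # Direct shear, shared equally among the bolts
        fx_direct, fy_direct = 0.0, -P / n
        # Moment-induced shear, perpendicular to the radius,
        # proportional to the distance from the centroid
        fx_m = -M * dy / J
        fy_m = M * dx / J
        forces.append(math.hypot(fx_direct + fx_m, fy_direct + fy_m))
    return forces
```

Running this for a square four-bolt pattern shows the conservatism: with zero eccentricity every bolt sees P/4, while even a modest eccentricity pushes the worst bolt well above that.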

        Another thought: is there a general rule in your world for what is too far out of pattern for a fastener? We would rarely allow anything out of tolerance by more than .060″ (and that’s pretty generous), but a building isn’t exactly a precision machining job.

        There’s not a standard for mislocation of individual holes in a pattern that I’ve ever been able to find. There are standard tolerances for member dimensions, flatness, straightness, etc., and erection tolerances governing things such as plumbness. These will place some limits on bolt hole locations, because if you have to keep a member within certain dimensions, and bolt holes diameters, distances between holes, and distances from holes to edges within certain dimensions, that’s going to put an upper bound on how far out a hole can get. But so long as the fab shop doesn’t bust those dimensions, the biggest limit on hole locations is whether or not the ironworkers can beat the connections together.

        • J says:

          Isn’t a shear load where we’d have trouble? Like, let’s use a 4 bolt pattern to sandwich two rectangular pieces of plate on their faces, then try to slide (shear) the plates past each other. One of our bolt holes is slightly shifted, so assuming no friction between the plates, doesn’t that bolt hit the edge of its hole and start taking shear before the others?

          • CatCube says:

            I guess I wasn’t clear–a shear load is exactly what I was talking about in my original comment, pointing out that structural steels can deform enough in the bolt holes to make equal load sharing a reasonable assumption. The intent of this was to try to illustrate the importance of ductility in the steel, pointing out that it was implicitly assumed in design–or, more properly, if you grab any old steel, the regular assumptions used in design may not hold, even though you are building out of “steel.”

            @ryan8518 pointed out that the load on the bolts will be far from equal when you have a load that’s not pure shear, and I expanded on my original comment, explaining that he was correct.

            As an aside, sometimes you can assume friction between the plates, if you’ve designed for it. This is called a slip-critical connection, and requires the use of high-strength bolts to provide enough clamping force.

          • ryan8518 says:

            @CatCube How important are pre-loaded joints generally in your world? I get the strong impression that even requiring snug + ¼ turn as a pre-load requirement on bolts is pretty rare outside of aerospace/automotive, but I can’t say I really know.

            It’s not quite the same, but as discussed in one of the other branches, the concept of deliberately introducing one direction of loading is pretty significant in pre-tensioned concrete design, and there are parallels to that in the aerospace world with COPV (composite overwrap pressure veesels) and some more exotic composite materials but I’m not sure how prevalent the use of such structural tricks are in the wider world.

            Ultimately, most of our uses of the pre-loading concept are either to hold pressure seals or to cut down on fatigue loading. (By a piece of black magic, even when the fatigue-inducing load remains the same size, if you can pre-load the member in significant tension/compression beforehand, the member will have a much greater fatigue life, at least up until the point where you overload the joint statically.)

          • CatCube says:

            @ryan8518 (I realize that some of this will be known to you, but I figure I’ll discuss a wider variety since others might be interested.)

            It’s required for many applications, since that’s one of the primary ways to make sure that nuts don’t back off–methods such as tack welding are disfavored, and in the opinion of some of the SMEs in my office should be outright prohibited, since it affects the heat-treatement of the nuts and bolts, with fracture implications. Pretensioning also reduces fatigue on the bolts, which is another reason for requiring it.

            There are two kinds of high-strength bolts defined in ASTM F3125, one with a minimum tensile strength of 120,000 psi (Grade A325) and one with 150,000 psi (Grade A490). As you might imagine, this implies the existence of non-high-strength bolts (ASTM A307), which have a tensile strength of 60,000 psi, but pretensioning them is not permitted.

            There are three types of bolted connections, according to the specification of the Research Council on Structural Connections:

            1) Snug-tight, where all of the plies in the joint have been pulled into firm contact by the bolts, and all bolts have been sufficiently tightened to prevent removal without a wrench. This is the default, done when there’s not a reason to use the other two. You can also use the cheaper non-high-strength bolts.

            2) Pretensioned, where a specified amount of tension is induced into the bolt. The standard has a table specifying the minimum tension for a bolt by grade and diameter. For example, a ¾” A490 bolt must be tensioned to a force of 37,000 lbs. This creates a clamping force between the plies, and with this clamping force any tension loads on the connection will not increase the bolt tension until the clamping force has been overcome*. This makes it useful for resisting fatigue, and this is one of the cases where a pretensioned connection is required–if a connection is subject to fatigue without load reversal. Other situations are if the bolts are subject to tensile fatigue, if the connection is subject to load reversals, or column bracing in tall buildings, etc.

            3) Slip-critical, where the pretension is required (the same pretension as in the pretensioned connection), but the faying surfaces in the connection are prepared to provide a specific coefficient of friction. This allows the connection to carry load using friction between the plies, without inducing shear loads in the bolt. That will increase the fatigue resistance of the connection, and is required where fatigue with load reversals is possible. It will also prevent slipping of the connection–in a previous comment I discussed that sharing the load among all the bolts might require deforming the holes to bring all the bolts into bearing. This probably won’t matter for the strength of the connection, but if you’ve got something in the building right next to it that is sensitive to that movement, you’ll want to avoid the slipping. It’s also required if the connection uses oversized holes or slotted holes with the long axis parallel to the applied load; these are used to provide adjustment for fit-up, typically.

            If pretension is required, it can be done with the turn-of-the-nut method, as you discuss. However, the minimum turn when using this method in structural practice is ⅓ (there’s a table that specifies nut rotation based on bolt length and connection geometry, plus each lot of bolts is required to be tested for the proper rotation at the site of installation). For the most extreme case, a full turn is required. So ¼ turn seems small to me!

            Other methods of pretensioning include the calibrated wrench method, which uses a torque wrench that is tested (daily!) to provide the pretension. In the same category are twist-off-type tension-control bolts, which also apply a specific torque. These torque-based methods have the downside of a lot of scatter in the pretension compared to the turn-of-the-nut method, as well as (for the calibrated wrench method) the daily calibration.

            Finally, there are special washers (direct-tension indicators) that have small arches that will compress a specified amount under the required pretension. These will be checked with a feeler gage, or there are products that have a little bit of silicone in each arch that will squirt out when the compression has been reached. I will note, however, that in our office we’ve had issues with DTIs, and are somewhat reluctant to approve** them nowadays. The installation process is somewhat fiddly (it’s a lot more than “Put them under the head of the bolt and crank on it ’til the bumps are squished.”) and we’ve had poor luck in getting contractors to install them properly.

            * You did talk about this a bit in your comment. I do want to note that the stress increase in the pretensioned bolt isn’t zero, but for practical connection geometry it’s going to be very small compared to the loss of preload on the faying surfaces. My steel design text actually has a section that goes through the algebra proving this, since it’s so counterintuitive.

            ** I originally said “specify,” which isn’t quite the right word, since it’s better practice to give the contractor options. However, we do get submittals that allow us to approve the proposed method, and we use language in the contract that allows us to shoot some of these down.
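The footnote about a pretensioned bolt seeing only a small tension increase can be illustrated with a toy spring model: while the plies stay clamped, the bolt and the members act as springs in parallel, so the bolt takes only its stiffness share of an external tension. The numbers below are illustrative assumptions, not values from any standard.

```python
def bolt_tension_under_load(preload, P_ext, k_bolt, k_members):
    """Tension in a pretensioned bolt as external tension P_ext is applied.

    Until the clamped plies separate, the bolt only sees the fraction
    k_bolt / (k_bolt + k_members) of P_ext; the members (usually much
    stiffer than the bolt) absorb the rest as lost clamping force.
    """
    phi = k_bolt / (k_bolt + k_members)   # bolt's share of the external load
    P_sep = preload / (1.0 - phi)         # load at which the plies separate
    if P_ext < P_sep:
        return preload + phi * P_ext      # small increase while still clamped
    return P_ext                          # after separation, bolt takes it all
```

With a bolt a quarter as stiff as the clamped plies, a 10,000 lb external load raises a 28,000 lb preload by only 2,000 lb, which is the counterintuitive result the steel texts work through algebraically.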

          • John Schilling says:

            methods such as tack welding are disfavored, and in the opinion of some of the SMEs in my office should be outright prohibited, since it affects the heat-treatement of the nuts and bolts, with fracture implications.

            Is there a problem with lock wire or cotter pins, other than that nobody has (AFAIK) figured out how to apply either in three seconds with an electro-pneumatic thingamawhatsit? Which, to be fair, may be a big problem in some contexts.

          • CatCube says:

            @John Schilling

            I don’t know the ins and outs of lockwiring or cotter pins because, well, they’re not used in structural engineering. I think it’s a combination of two things: 1) the installation labor, as you note, and 2) the fact that just cranking the bolts really tight works well enough for structural applications, since we generally have lower frequencies and fewer cycles, and we have other reasons to want pretensioning anyway. We’ve been using pretensioning for almost seventy years at this point, and the nuts coming off hasn’t been a huge concern.

            The context of the guy who apparently wanted to tack weld the nuts was, IIRC, holding nuts onto anchor bolts in concrete. He was worried about them backing off–I don’t know the details, but that’s usually crazy for several reasons beyond the “damaging the fasteners” one, since many anchor bolts are technically there only for erection purposes–and our SME (who works nationwide) had to talk him off the ledge. “Just pretension them. I promise you it’ll be fine.” There’s just enough question about situations where it might be necessary that it’s not outright prohibited.

            I think he ended up having to convince them that since ASTM A563 (the specification for the nuts) is not listed in the welding code as a prequalified material, they’d be required to qualify the welding procedure before permitting it. That is, under American Welding Society D1.1, there are combinations of material, welding process, and joint geometry that can be used without testing (AFAIK, ASME codes don’t have prequalification, and require testing for all welding procedures). Anything not in those lists requires you to weld test samples and destructively test them, which can be an expensive proposition.

          • The Nybbler says:

            These torque-based methods have the downside of having a lot of scatter in the pretension compared to the turn-of-the-nut method

            Snugging a nut and then turning it a specified amount is actually more reliable than torquing it a consistent amount? Given that “snug” at least appears to be a somewhat vague concept, that’s kinda surprising. Why is this?

          • CatCube says:

            @The Nybbler

            Snugging a nut and then turning it a specified amount is actually more reliable than torquing it a consistent amount? Given that “snug” at least appears to be a somewhat vague concept, that’s kinda surprising. Why is this?

            I actually mentioned this in passing in my comment because I found it surprising and counterintuitive when I first learned it in college. I happened to borrow a guide on structural bolting today from a co-worker that talks about this, so I’ll just quote that here since I don’t think I can improve on the answer:

            The variables in the torque-tension relationship include lubrication, thread fit, the use of a washer and the tension in the bolt. The torque used to tighten a bolt is consumed by overcoming the friction between nut and washer or steel (about 60%), overcoming the friction between bolt threads and nut threads (about 30%), and providing energy to elongate the bolt and provide tension (about 10%). Tests have indicated that torque-tension relationships for structural bolts easily vary by as much as 40%.

            Basically, most of the torque (90%) goes to overcoming friction in the connection, and that friction can be highly variable–consider that humidity and temperature changes may affect it.

            On the other hand, turning the nut a specific amount (and disregarding the torque required) will elongate the bolt a specific amount. The relationship between bolt elongation and tension stress is much more reliable–scales make use of a consistent force-deformation relationship in metals, after all.

            It’s not perfect, given that there is some permanent deformation in the fastener. Because of this, reuse of black A325 bolts is only allowed if approved by the structural engineer, and reuse of A490 and galvanized bolts of either type is outright forbidden. A490 bolts are more likely to deform and testing shows unreliable results when reused. Reusing A325 bolts requires running a nut on to check for deformation in the bolt, and the clearances required for galvanization mean that this test isn’t reliable for galvanized fasteners–the nut may run on even if the bolt has been permanently deformed.

            You’re correct about “snug” being an ill-defined concept. Plenty of arguments occur over whether a particular bolt has been properly snugged. However, the scatter from this seems to be less important than that from variable friction.

          • ryan8518 says:

            edit: ninja’d by CatCube, I had this open way to long making edits
            @ Nybbler

            I can take a crack at that by expounding on what CatCube was talking about above, though there are quite a few things going on that I may be forgetting about. One key thing to keep in mind is that while you will install X preload in a joint, generally speaking joints tend to relax over time and lose between 5% and 10% of their initial preload (and under some circumstances, like vibration, may come completely loose). All of these methods break down a bit in practice because the total stiffness of the joint depends on the bolt, the joint construction, the nut, and to some lesser degree the amount of tension already developed in the bolt. For simplicity, the bolt can be treated on its own, but corrections for the other properties exist, and are a source of scatter.

            Ultimately, preload is developed by stretching a bolt by a given amount (the bolt can be treated as a spring, k = cross-sectional area* of the bolt × Young’s modulus of the bolt material / length of the bolt). The problem lies in measuring this stretch, of which there are three main methods.

            Method I: measure the change in length of the bolt. This is basically what is done by the snug + X turns method, which relies on knowing that standard threads have Y pitch, so a turn stretches the bolt by approximately Z, where Z deviates from the nominal amount (typically undershoots) based on the shared stiffness of the bolt and the joint. For typical materials, this can be estimated by test (by actually putting a strain gauge on the bolt, or using a custom bolt simulator that includes a force gauge). This method is vulnerable to some scatter, largely from manufacturing tolerances/defects in the threads, but the scatter is generally acceptable and it is the simplest/cheapest approach. However, it’s nigh impossible to confirm after the fact that the bolts were installed correctly (and so it’s very much frowned upon in aerospace).

            Method II: use a calibrated torque wrench to estimate the amount of preload in the bolt. This method is widely used in the aerospace/automotive industries because, while it brings a lot of pain and suffering to its practitioners, it has the key virtue of being something you can check throughout the life of the part, and it can be corrected without taking things apart. That said, the downsides are numerous, especially for anyone trying to understand how the system works from first principles. Broadly speaking, the threads of the nut act as an inclined screw, which converts some percentage of the applied torque to bolt stretch (the rest going toward friction between the bolt and nut threads, and between the bolt/nut and the joint). However, this is confounded by everything from tolerances of the threads, cleanliness of the joint, and the temperature of any of the components, to whether the torque wrench is moving or static (simplified as static vs. moving friction, a nightmare I can’t explain beyond what I learned in 8th-grade science), and, as near as I can tell, the phase of the Moon. I’ve seen some rather complicated models that try to reliably spit out an accurate torque/preload relationship, but I’ve never seen less than a 10% DUF (degree of uncertainty factor, an engineer’s get-out-of-jail card for “I can’t explain why reality works”) applied to the final model, and even that is pretty sporty when we go and do tests.

            Method III: include some witness step in the bolt stack that lets you know when you’ve developed enough load in your joint. This takes a lot of forms, several of which were mentioned by CatCube above, and it generally provides the most reliable way to initially set preload. The big problem is that it tends to make the joint difficult/expensive to take apart, since you have to replace the witness device, and the installation time/cost of the witness device is necessarily higher than just going with Method I. This method suffers from most of the issues of Method I, but it does allow you to confirm that the correct preload was in the joint at the time of installation, making inspection relatively easy.

            *Cross-sectional area of the bolt can be a little complicated to explain. For a fully threaded bolt, it’s the cross-sectional area at the minor diameter of the threads (i.e., the minimum cross section); for a partially threaded bolt, it is reasonably approximated by the cross-sectional area of the unthreaded portion. It gets complicated if, say, only half of the bolt between the head and the nut is threaded (best practice for a partially threaded bolt is to have as few threads as possible between the nut and the bolt head).
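The spring model in Method I can be sketched numerically: the bolt and the clamped plies act as springs in series, so the nut advance (turns × pitch) is shared between bolt stretch and joint compression. All dimensions below are made-up illustrative values, and this is a purely elastic estimate (real turn-of-the-nut installations can take bolts into the plastic range, which is part of why the tables are empirical).

```python
def turn_of_nut_preload(turns, pitch, area, E, grip, k_members):
    """Elastic estimate of preload from the snug + X turns method.

    The nut advance (turns * pitch) is split between stretching the bolt
    and compressing the clamped plies: two springs in series carry the
    same force, so F = delta / (1/k_bolt + 1/k_members).
    """
    k_bolt = area * E / grip          # bolt as a spring, k = A*E/L
    delta = turns * pitch             # nut advance along the threads
    return delta / (1.0 / k_bolt + 1.0 / k_members)
```

Because the joint compresses too, the preload always comes out below the naive delta × k_bolt you’d get from stretching the bolt alone, which is the “typically undershoots” effect mentioned above.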

        • ryan8518 says:

          To clarify, I wasn’t intending to suggest that steels aren’t ductile, just that I’ve seen the rules for what is called “ductile” stretched, largely so that aluminum welds can still be considered ductile even at low temperature when their ductility drops below 4%, though that also gets into the nightmare of defining what ductility even means for a weld*.

          Apologies, I completely missed the significance of the way you phrased “shear-carrying joint,” largely because of some vague memory from a college structures course that mentioned many building connections being treated as effectively pure shear/tension; I couldn’t remember whether that was a general design principle or something that just worked out that way for the joint design we were discussing. What you’re describing makes complete sense. We’ll typically just use an elastic analysis for a quick hand calculation, and if that doesn’t work, give up and build a joint FEM with contact between the connected surfaces (mostly when we’re really trying to optimize the joint around its strength properties, which usually only applies to primary-structure joints; for as weight-obsessed as aerospace normally is, it’s still not worth over-optimizing every secondary joint and making it a pain to install).

          *For the general public: ductility is frequently tested to laboratory standards using a clip-on extensometer, which attaches itself to two points on the material a fixed distance apart and measures the change in distance between them. Dividing that change by the gauge length between the two attachment points gives you the % strain of the material, and its value right before failure is the % elongation to failure used in this ductility discussion. However, this relies heavily on your gauge length spanning an isotropic material, which for a weld is hard to come by, since the heat imposed by the welding process creates a gradient region of strength/ductility between the weld bead and the parent material, and it can be difficult to define exactly where even the weld bead itself starts/stops (since most of the weld is subsurface and decidedly not a nice smooth interface). Generally speaking, you care about the behavior over the weld–heat-affected zone plus bead–but every few years we have an argument with our materials testing group about what that means.

    • J says:

      I’ve researched this a bit and never found a satisfactory answer: what’s the deal with rebar? Like, I know generally what it’s for, but I can’t come up with a model in my head that lets me intuitively model it. I think that’s because I can’t imagine concrete stretching. The best I can imagine is that in tension the concrete just ends up as a bunch of chunks with microfractures held together by the steel or something like that.

      • CatCube says:

        Good news: giving non-structural engineers a general overview of how concrete behaves is one of the objectives of this post series!

        Bad news: I’m not getting there for a little while yet!

        I’m going to cover steel first, because there are some objectives in designing buildings (ductility) that I think will be much easier to explain there, since steel is isotropic and explaining how member capacities work towards those objectives will be much more tractable using the cross-section properties everybody learned in mechanics of materials.

        Short answer, though, you’re not far off. The concrete between the tension reinforcement and the neutral axis is assumed to be ineffective during bending. (In carrying the bending moment–it’s still there for shear. Maybe) It’s for exactly the reason you suppose, that it cracks.

        • J says:

          Yeah, isotropic materials definitely seem like a good starting place. Incidentally, a lot of my intuition for this stuff comes from “Structures, or why things don’t fall down”, a really excellent book I saw on some list recommended by Elon Musk.

      • idontknow131647093 says:

        To simplify what CatCube said (and he will say it better, because my engineering is focused on biological things): rebar and concrete are complementary materials. Smashing concrete is really hard, so concrete doesn’t need to worry much about compression, but stretching concrete (although hard to picture, à la stretching a block of sidewalk) is relatively easy. Rebar (steel) is strong in tension, so if you align rebar in a grid pattern (or others; there are many alignments for many applications), it will make the whole stronger against such forces.

        • J says:

          Thanks for the explanation. I get it at that level, in that I know about concrete being strong in compression and very weak in tension, and I don’t otherwise have trouble imagining stretching a sidewalk block.

          And actually if the rebar is post-tensioned, I find that much easier to contemplate, because now you’re keeping the concrete in compression and the steel is already taking a tensile load, so there’s no destructive strain (stretching) of the concrete until you get past the preload. It’s the passive rebar where you’re putting chicken wire or whatever in your sidewalk that I have trouble with.

          And maybe my intuition wasn’t so far off all along. This video claims that passive rebar doesn’t actually do much until a crack already forms:
          https://www.youtube.com/watch?v=cZINeaDjisY

          So maybe the chicken wire really is just kind of hanging out doing nothing much until a tensile load comes along and cracks the concrete, at which point its job is primarily just to hold the pieces together.

          I guess the other difficult component for me is that it’s hard to think of concrete having a modulus of elasticity at all, much less one that’s… compatible, I guess?… with steel. But hey, looks like it does, and it’s 14–41 GPa vs. 200 for steel. So for 50/50 rebar/concrete, I guess the concrete would act almost like rubber next to the steel, and thus we’d expect the steel to take the load. Although you never do 50/50, and even the ~10:1 ratio in moduli seems pretty unrealistic in terms of relative cross sections. But maybe that’s because chicken wire isn’t serious rebar.

          So that’s an interesting question, I guess: do you choose your rebar/concrete ratio with Young’s modulus in mind?

          • idontknow131647093 says:

            Your question is beyond my expertise.

          • CatCube says:

            That Practical Engineering video is correct. Rebar doesn’t do anything until a crack forms, at the so-called cracking moment. This is calculated by taking the second moment of area and using the good ol’ My/I = σ equation, and substituting in the tension strength for σ.

            As far as a Young’s modulus, of course it has one. I mean, concrete deflects under load just like any other material–rock has one as well. The Young’s modulus of normalweight concrete in psi is E = 57000*(f’c^0.5), where f’c is the strength of the concrete in compression, in psi. (The c would be a subscript). A more expanded version would be: E = 33*(w^1.5)*(f’c^0.5), where w is the unit weight of the concrete is lb/ft³. (Normalweight unreinforced concrete is 144 lb/ft³) This is a straight line between zero and 0.45f’c, (a secant modulus) since there’s no well-defined yield point for concrete. A ratio between the modulus of steel and modulus of concrete is usually called the modular ratio “n” in design equations, and 7 or 8 is typical.

            To my knowledge there’s no theoretical justification for that equation. They just crushed a bunch of cylinders back in the ’60s, noticed that a function using a square root seemed to fit the data, and ran with it. That’s why it’s not dimensionally homogeneous–there’s a separate equation for metric units. Of note, it varies as much as ±20% around that theoretical value, depending on the properties of the aggregate. The concrete code does give a reference for the research, and one of these days I should dig it up and read it.
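The cracking-moment and modulus formulas above make a tidy back-of-the-envelope calculation. In the Python sketch below, the section size is a made-up example, and fr = 7.5·√f'c is assumed as the usual ACI-style modulus-of-rupture estimate for the tensile strength:

```python
import math

def concrete_modulus(fc_psi):
    """Empirical modulus for normalweight concrete: E = 57000*sqrt(f'c), in psi."""
    return 57000.0 * math.sqrt(fc_psi)

def cracking_moment(fr_psi, b, h):
    """Cracking moment of a plain rectangular section, from sigma = M*y/I.

    With y = h/2 and I = b*h^3/12, solving for M gives M = fr * b * h^2 / 6.
    b and h in inches; returns in-lb.
    """
    I = b * h ** 3 / 12.0   # second moment of area
    y = h / 2.0             # distance to the extreme tension fiber
    return fr_psi * I / y

# For 4000 psi concrete and a 12" x 12" section:
E_c = concrete_modulus(4000.0)                        # ~3.6e6 psi
n = 29e6 / E_c                                        # modular ratio, ~8
M_cr = cracking_moment(7.5 * math.sqrt(4000.0), 12.0, 12.0)
```

The modular ratio of about 8 that falls out of this is exactly the “n” quoted above, and below the cracking moment the rebar really is just along for the ride.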

  11. helloo says:

    There’s theories that the increase in allergies and other auto-immune disease are due to the increasingly clean and sterile environments humans live in.

    Assuming this theory is correct-
    I doubt that the trend to cleaner living places will reverse even if it becomes clear that there are health risks with it.
    However, perhaps there will be treatments that intentionally force upon people the hazards of dirty living (without the risks).
    Sort of like vaccines but for things like colds, pollen, and even parasites.
    Additionally, the medical community would be able to specifically design the illness to have particular symptoms, and it could be better monitored. (OK, you seem like a healthy youth; go ahead and take this basic virus load and let us know when you start peeing green. When that stops, we can move you to level-2-strength colds.)

    There’s already the cold turkey version where people intentionally get parasites as a therapy- https://en.wikipedia.org/wiki/Helminthic_therapy

    What other ways might humans intentionally seek “hostile things” to safely train in an increasingly less-hostile world?

    As this is a CW-free thread, please keep it to physical examples. Feel free to post it again on a CW thread for that side of the discussion.

    • Randy M says:

      There’s already the cold turkey version where people intentionally get parasites as a therapy-

      At that point, do we have to reclassify it as a symbiote?

    • Cheese says:

      The infamous ‘chicken-pox’ parties that one occasionally hears about parents doing is probably one example.

      Child care is a kind of secondary exposure thing as well. I imagine if evidence really starts to build that the hygiene hypothesis is the main driver of a lot of auto-immune diseases, then we might start to see trends towards ‘dirty’ early childhood experiences. Day care but they take your kids out to the woods and let them all run around and eat dirt and drink stream water and the like.

      You might also see the supplement industry get in on the act and start offering boutique dirt for mixing in baby food or similar.

  12. Skivverus says:

    Musing on probability estimates and the weighting thereof. Others have probably thought of this stuff before.

    Or, headline version: numeric odds are better than percentages.

    How the process for odds would work: start with 1:1, with each “1” representing all the unknown unknowns that might be in favor of or opposed to the thing (call it “A”) under consideration.
    If you have 2 arguments for A and 1 against A, your odds become 3:2. Scale argument weights in proportion to their strength.

    Actual arguments in favor: it’s more transparent to observers approximately how deep your analysis is (if you’re reporting honestly), and it approximately matches intuitions about what “I have no idea one way or the other” looks like and what “I’m absolutely sure of this barring Descartes-level confusion” looks like.
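A minimal sketch of this bookkeeping (the unit-weight-per-argument convention is an assumption for illustration; weights could be scaled by strength as described):

```python
from fractions import Fraction

def update_odds(weights_for, weights_against):
    """Odds bookkeeping: start at 1:1, with each 1 standing in for the
    unknown unknowns on its side, then add each argument's weight to
    the appropriate side. Returns the odds as a (for, against) pair.
    """
    num = Fraction(1) + sum(Fraction(w) for w in weights_for)
    den = Fraction(1) + sum(Fraction(w) for w in weights_against)
    return num, den
```

With two unit-weight arguments for and one against, this gives 3:2, and because the starting 1s never go away, the odds can never collapse to X:0 or 0:X.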

    • Randy M says:

      Don’t you mean “known unknowns”? I’m not sure you can account specifically for the unknown unknowns, other than to include sufficient margin of error.

      • Skivverus says:

        Good point, but no, I meant unknown unknowns.
        Known unknowns I think would be “add to both sides” if they can indeed swing the outcome either way: the more of them there are, the more uncertainty your odds should end up with (i.e.: closer to 1:1).
        In a sense, unknown unknowns are there to keep the ratio from ever being X:0 or 0:X.

  13. proyas says:

    Idea for successfully thawing out cryonically frozen humans in the distant future: Use their DNA to build computer simulations of their brains (genetics partly determine brain structure), and then fix their mushed brain cells, one at a time, using their digital brain as a guide.

    Is there any merit to this strategy?

    Note: Yes, I fully understand that brain structure is more importantly determined by life experiences and that the most important structural features are at the cellular level and aren’t captured by genetics. However, I wonder if having a fully developed computer simulation of a frozen person’s brain could serve as something like an illustrative “upper limit” to how their damaged brain should be reconstructed to resemble.

    • Randy M says:

      It sounds like cloning on hard mode, not a method for actually reviving the mind.

      The only way it approximates the aging cure cryonics wants to be is if you can hold a simulation of all relevant neural connections. Synapses can strengthen or weaken, making one neuron more or less likely to fire based on particular inputs than previously. This is an element in the formation of memories. But storing the data for this seems like an impossible task, given the number of connections, even apart from the difficulty in detecting and reproducing these neural changes. It may be on the scale of plausible if we limit the areas of interest to those specifically dealing with memory.
      This was the basis of a sci-fi story I wrote ages ago, though I didn’t really know how to do anything interesting with the concept (which is quite the indictment, actually :/ )

      Given the effects of genetics on personality, however, cloning is going to look pretty close to resurrection to an outside agent. But the people who want to undergo the process are generally most concerned with the continuity of their mind, of which their memories and any effects their experiences have had on their character are principal components.

      • Michael Handy says:

        I think biochemical data at the cellular level (assuming fairly fine-grained vitrification/chemical fixing) could be used to assemble an approximate image of synapse strength. In any case, long-term memories seem to be able to survive fairly large-scale changes in synapse activity; otherwise sleep would be a real issue for us.

        The real issue is why rebuild a ruined brain when you can clearly make a shiny new one much more easily.

    • Nabil ad Dajjal says:

      There is a pretty common error in reasoning I see when it comes to rationalists and cryonics. You could call it “cryonics of the gaps,” analogously to so-called God of the gaps arguments for theism.

      We just don’t know enough about neurology to say whether or not this kind of approach would be feasible. That uncertainty means that we can’t rule out the possibility that this kind of hypothetical technology could materialize in the future. But it also means that we should absolutely not count on it, or anything like it, materializing.

      Neuroscientists are always trying to develop better methods for preserving ex vivo brain tissue. But the state of the art today is to use thin sections of tissue, because preserving a whole human brain is totally impractical. From what I found in ex vivo MRI papers, it looks like formalin fixation alone takes at least a full week at room temperature for a whole human brain. In that time you’re losing a lot of RNA and short-lived proteins, which will have degraded long before the fixative reaches them. And that’s before you even get into the problems with freezing something that huge without destroying the cells.

      There is no particular reason to believe that our descendants are going to be able to take a cryogenically frozen brain and recover a person’s mind from it, any more than our ancestors had reason to believe that we could take the ashes of a book and recover the writing on it.

  14. Nancy Lebovitz says:

    Does anyone remember what happened to the troop of peaceful baboons that Sapolsky talked about?

    The story starts with baboons near a garbage dump– the food was a trove for them.

    Then the food got infected, and all the baboons eating it died of tuberculosis.

    I forget some details, but there was one troop where only the more aggressive males could get at the dump. So they died.

    The remaining baboons (the females, the less aggressive males, and the children) immediately and contrary to all human theories about baboons developed a peaceful culture. You see, young male baboons leave their home troop and join another troop. Typically, they’re treated very badly. As Sapolsky put it (from memory) “Would somebody please just groom this guy?”

    With the most aggressive males gone, new males started out being well treated, and it was stable for a couple or three generations. Then something happened to the troop– not something as simple as being taken over by a more aggressive troop, as I recall– and the new culture was gone.

    Does anyone remember the details?

    • Nabil ad Dajjal says:

      Didn’t we learn our lesson about credulously trusting primatologists talking about pacifistic matriarchal apes? I would have hoped that after chimpanzees and now bonobos we were done with this.

      My prediction is 90% that this never happened, 99% that it’s not replicable. No money, because I don’t gamble as a rule.

      • dick says:

        A popular write-up (from 2004) of the thing Nancy is describing is here but I don’t see anything more recent.

        Would an attempt to replicate this, by finding a different baboon troop and killing the alpha-est members, be ethical? My feeling is that an outsider could make an argument that it is, but actual primatologists would disagree.

        • Nabil ad Dajjal says:

          Baboons aren’t listed as endangered, with most species at the minimum level of “Least Concern,” so killing a few males for research purposes doesn’t sound like it would have any meaningful ecological impact.

          Baboon hunting is apparently legal in South Africa, possibly other African countries as well. So any layman with $100, a rifle, and some time to kill could go out and replicate this. Ironically the difficulty of finding a cooperative grant agency, an IRB and a journal means that citizen scientists are in a much better position to carry out a study here than institutional researchers.

        • cassander says:

          You could presumably re-locate the alphas a couple hundred miles away instead of killing them.

      • albatross11 says:

        Was it primatologists who were big on making stuff up? I thought that was more anthropologists (who also might just have been trolled by their local sources).

      • Enkidum says:

        Your first prediction is wrong. Your second, dunno. Sapolsky is a legit scientist.

        Also bonobos are less violent than other chimps.

        Hopefully this helps update your priors.

        • Aapje says:

          Also bonobos are less violent than other chimps.

          Hasn’t it been established that bonobos show very unnatural behavior in captivity and that their supposed propensity to solve conflicts with ‘love, not war’ has been gravely exaggerated?

    • helloo says:

      Radiolab had a podcast that detailed that story and did a followup with the researcher following them:
      https://www.wnycstudios.org/story/update-new-normal

      Also, the new site for their archives sucks. A lot.
      And they got rid of the transcripts :/

      Edit: GAAH, the default player doesn’t even let you skip around. It starts about the 7:20 mark. The followup is around the 20:00 mark.

      • Nancy Lebovitz says:

        Thank you very much. You actually can skip around in the podcast– there’s a progress bar(?) at the bottom of the screen.

  15. HeelBearCub says:

    Anyone who knows interstellar objects (@John Schilling).

    Is this really actually a legit thing?

    ‘Oumuamua displayed a non-Keplerian path, consistent with a possible light sail!?

    • Skivverus says:

      Wouldn’t say I know interstellar objects, but I do remember Oumuamua (or ‘Oumuamua if the apostrophe is important) being mentioned earlier on here, and there’s no question that it exists and looked very weird on people’s telescopes (which was why sending expeditionary rockets to it was discussed).

      It’s certainly tantalizing to believe it’s artificial, but I don’t know enough about the relevant science either to come up with a falsifiable test here.

    • John Schilling says:

      ‘Oumuamua, aka 1I/2017 U1, is definitely a real interstellar object. It is big (hundreds of meters), cold, and very quiet, with a reflection spectrum similar to common forms of meteoric rock. Its velocity is consistent with a rock ejected from some other solar system by an accident of orbital mechanics, or with a derelict spacecraft launched using approximately present technology, but not with significantly more advanced technology and any time value of effort beyond “meh, so it takes a million years, what’s your hurry?”.

      The observed trajectory anomaly is also unambiguously real, and a 1/R^2 radial acceleration is about the best fit anyone has been able to find. But the data is still somewhat noisy, and per the original Nature paper I think an impulsive delta-V event (i.e. it hit something on 5 November 2017) is still within the error bars.

      The most likely cause for a 1/R^2 radial acceleration of the required magnitude would be radiatively-driven outgassing, as is commonly seen with cometary nuclei, and the magnitude would be about right for that. But there hasn’t been any sign of dust, and there hasn’t been any change to the object’s tumble, and I haven’t studied the issue closely enough to know how serious those objections really are.

      Bialy and Loeb propose that if ‘Oumuamua is a very thin (~0.5 mm) sheet of material, solar radiation pressure alone would suffice to produce an appropriate level of acceleration. A very low-density material like an aerogel might behave in a similar manner. This has led to speculation that ‘Oumuamua may be a light-sail spacecraft, because reasons. The thin-film, solar-radiation model has not been around long enough for anyone to have systematically studied possible objections.

      One that comes promptly to mind is that 0.5mm is an unbelievably thick and clumsy solar sail; ours are about fifty times thinner than that. And if it’s a solar sail coupled to a dense payload, to match that average density, I’d expect that the rotational dynamics would not be as consistent as they have been following the object’s perihelion passage. But I expect people will be looking into that, and other issues, in coming weeks.
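For scale, the thin-sheet number can be sanity-checked in a few lines. This is a back-of-the-envelope sketch only; the density and reflectivity values below are my assumptions, not fitted values from the Bialy and Loeb paper.

```python
# Radiation-pressure acceleration of a thin flat sheet at 1 AU.
# rho and reflectivity are illustrative assumptions, not fitted values.
S = 1361.0          # solar constant at 1 AU, W/m^2
c = 2.998e8         # speed of light, m/s
rho = 1000.0        # assumed sheet density, kg/m^3 (about water)
thickness = 0.5e-3  # the ~0.5 mm figure discussed above, in m
reflectivity = 0.0  # 0 = perfectly absorbing, 1 = perfect mirror

mass_per_area = rho * thickness                         # kg/m^2
accel = (1.0 + reflectivity) * S / (c * mass_per_area)  # m/s^2
print(f"{accel:.1e} m/s^2")
```

This comes out around 9×10^-6 m/s^2, the right order of magnitude for the reported anomaly, and since the solar constant falls off as 1/R^2, so does this acceleration. A perfect mirror doubles it; higher density or thickness lowers it proportionally, which is roughly why the plausible thickness lands near a millimeter.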

      And it’s heading out of the solar system too fast for us to reach with any probe we could realistically build using present technology. But we’ll keep track of its course, in case someone comes up with the Epstein drive any time soon.

      • dick says:

        Thanks for writing this up, I hadn’t heard of this. Am I correct in understanding that the anomalous acceleration was brief and intense? Do we know how brief?

        • John Schilling says:

          The anomalous acceleration vaguely appears to be continuous, but strongest when the object was closest to the sun. “Brief but intense” is an alternate theory that doesn’t match the data as well, but as I said it may still be within the error bars.

          And “brief but intense” means it accelerated by about 5 m/s (10 mph, if you prefer) in not more than a few days. But for something as big as this appears to be, that’s still pretty intense.

      • HeelBearCub says:

        Thanks. Interesting stuff.

      • Nicholas Weininger says:

        It occurs to me that perhaps a more advanced spacecraft might use an 0.5mm thick sheet of some sail-like material as some sort of shielding or cladding, and if it fell off during acceleration at the right velocity and angle… is there some reason that’s particularly unlikely to have produced the observed object?

    • fion says:

      I don’t know anything about this, but as a physicist, seeing “30 sigma significance” (see the abstract of the nature paper by Micheli et al) makes me very skeptical. If somebody claims they’ve found something to 30 sigma that normally means they’ve done their statistics wrong, or ignored systematics or something. We’re talking p<10^-200 here.
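To see the scale of the number fion is pointing at, the two-sided Gaussian p-value for a z-score can be computed with just the standard library (a quick sketch; 30 sigma lands around 10^-197):

```python
import math

def p_two_sided(z: float) -> float:
    """Two-sided Gaussian p-value via the complementary error function."""
    return math.erfc(z / math.sqrt(2.0))

print(p_two_sided(5.0))   # ~5.7e-7, the particle-physics discovery threshold
print(p_two_sided(30.0))  # ~1e-197
```

For comparison, particle physics treats 5 sigma as the discovery bar; a claimed 30 sigma is so far beyond that it usually means the error model, not the effect, is what's being measured.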

  16. spentgladiator says:

    There’s something I can’t stop thinking about: I have this very vivid memory of the rationality community about a year ago being really, really into some kind of online story where some three entities with funny heads observe Earth being an enormous soccer field? All of it? The title was a series of numbers. The longer I think about it, the more it just sounds like a fever dream, but I could swear it happened. Can anyone confirm?

    • smocc says:

      It’s this: https://www.sbnation.com/a/17776-football

      The observers are satellites that have become sentient, and the game is a mutation of American football involving tornadoes maybe? I never read very far.

      • Scumbarge says:

        hi, permalurker popping in. thank you for answering this, and OP for asking–this is one of my favorite pieces of online fiction since that zero HP lovecraft infohazard piece a few months back.

      • Kestrellius says:

        …huh. Is anybody else getting weirdly strong Homestuck vibes from this thing? It’s…kind of surreal how much I’m reminded of it.

        • Nornagest says:

          Chat format with flavor illustrations, vaguely post-apocalyptic, typing quirks as characterization tools, ironic detachment everywhere, silliness taken deathly seriously? You’re not the first person to think that.

  17. DragonMilk says:

    Does anyone have any browser-based multiplayer games to recommend?

    I got a Chromebook for less than $200, which does its job quite well, but as an avid Steam user I’m gimped on the gaming front.

    I generally like strategic games rather than action-based ones where you control one dude, but I suppose I can try that too. Graphics aren’t too important to me, and I don’t see how they could be, given it would be browser-based.

    Thanks!

    • DeWitt says:

      Do as I did about a decade ago, and get yourself immersed in MUDs. Most of them can be played through your browser just fine, and there’s a good few reasons why I prefer them over other multiplayer games despite not being bound to a notebook myself.

      • nameless1 says:

        Yeah that was a decade ago. Now there are only a few MUDs with a large user base left. Most are fantasy.

        If you dislike fantasy, there is Sindome, a cyberpunk MUD, but IMHO it’s crazy annoying. The policy is largely that players should make their own story, so no NPCs hand out quests or anything. If you were smart enough to invest in an employable skill like driving, maybe you can ask the mods to puppet an NPC and give you a taxi-driver job. From that point you’re supposed to get together with other players and do stuff like rob a bank, but typically the older players are just engaging in roleplaying conversation; if you just go there and say, hey, let’s do some stuff together, they won’t. Bloody annoying and boring.

        • DeWitt says:

          Who cares about large user bases? OP asked for multiplayer, not the first M in MMO. Forgotten Kingdoms and Armageddon have perfectly healthy playerbases, and even freaking IRE went completely free to play with one MUD of theirs, Imperian.

          Not sure why the hate for fantasy, either. It may not be your cup of tea, but DragonMilk mentions no particular dislike of them.

          Agreed, at least, that Sindome is uniquely terrible and is best avoided.

    • rlms says:

      generals.io is a very simple but quite addictive RTS; you’d probably want a mouse rather than a trackpad, though.

    • AG says:

      I had a few uproarious nights back in college of drunken online pictionary, with isketch.
      Agar.io and Slither.io

    • James C says:

      Dungeon Crawl Stone Soup is one of the best active roguelikes out there, and I believe it’ll run in a browser. Even if it doesn’t, I imagine you could get it running on a toaster; it’s not exactly high-res.

    • arlie says:

      I enjoyed Forge of Empires for a while. Unfortunately, they’ve got scale-up problems – with an advanced position, the most effective strategies require too much clicking – you can’t use any of the shortcut “collect all my resources” buttons, and daily harvesting turns into a painful slog once you get to a level where you’re ready to participate usefully in inter-player contests. I left after a new, “improved” interface made my daily harvesting take even longer. (Of course what I was supposed to do was spend RL $$ for advantages, so as to pursue a less intensive strategy. *sigh*) I was also annoyed by obvious market-segmenting pricing for in-game advantage.

      Some time before that, I loved Runescape, until the point when it decided to “modernize” itself into a poor imitation of World of Warcraft. But I don’t think it fits your strategy-game preference. And for all I know its reinvention of itself as a cheap knockoff has meanwhile killed it.

  18. DragonMilk says:

    Ground Meat II: Sauce

    I discovered the hard way yesterday that ground meat (veal at least) tastes quite awful on its own.
    I was making another lasagna, and realized that I ran out of tomato sauce, so I shrugged and baked it without it.
    I now know to have tomato sauce with ground meat. But this got me thinking:

    1. What in tomato sauce makes the ground meat taste better? Is adding it afterward OK, or does the presence of sauce alter the cooking process itself?
    2. What other sauces are available?
    3. How do burgers work and would they taste just as bad without sauces like ketchup and mustard?

    • Plumber says:

      Cooking onions and some garlic makes meat better.

      Curry powder and sauce is usually good.

      Ground beef by itself is better tasting than most other kinds of meat, so yes you may have a hamburger “plain” that’s okay, but onions and mustard make it better.

    • Steve? says:

      Did you salt and pepper the meat before/while cooking? YMMV, but for me meat + salt + pepper (or really just, meat + salt) tastes good for any meat.

      Generally, sauces work by adding some complexity to the flavor of the food. In the case of tomato sauce you get acid from the tomatoes, salt that gets added, and probably a bit of sugar.

      • A Definite Beta Guy says:

        Tomatoes in particular have a lot of umami. Umami bombs are always good.

        • Nornagest says:

          Throwing in a couple tablespoons of soy sauce improves a surprising number of dishes, even ones that’ve traditionally never seen a soybean.

          • DragonMilk says:

            As a Chinese person, I can say it was very tempting.

            But on the other hand, even I couldn’t imagine it in a lasagna with mozzarella or ricotta cheese. Maybe something in me just doesn’t want to mix soy sauce and cheese, but perhaps I shall try it one day when I’m more adventurous!

            I last stopped cooking about 7 years ago, when I had a misadventure with “banana chicken.”

          • A Definite Beta Guy says:

            I throw in soy sauce in a BUNCH of stuff.
            But lasagna, nah. There are a lot of cheeses that do the same thing, like my favorite, parmigiano-reggiano.

            Banana chicken sounds…uhhh….well, chicken goes well with ALMOST everything.

          • Re Banana chicken.

            Banana judhaba has bananas and chicken and is yummy.

            You layer thin flatbreads with sliced banana and sugar, put some rose water on top, roast a chicken over it so the drippings go in.

    • Randy M says:

      I like plain ground beef just fine, but it is improved by adding various spices before cooking (and I think most hamburger recipes will call for such). Some things I use to season ground beef: Salt, pepper, garlic, onions, oregano, cumin, cilantro, bell pepper, egg, chili powder.
      Mixing with tomato or marinara sauce is also good, but too much gives me indigestion, I think.

      • DragonMilk says:

        Yeah, so for ground veal I just put a bit of olive oil on a skillet on medium and just crumbled it in, stirring til brown. Then put it in a Pyrex dish.

        I put garlic powder on top of the beef before adding the other layers, but until I figure out the dynamics, I will be using tomato sauce until I discover the secret of sauce.

        • Randy M says:

          Then again, for lasagna my favorite addition to ground beef is Italian sausage. Which, of course, in addition to ground pork has all kinds of spices, of those not mentioned above I’d point out fennel as one I’m fond of.

        • Steve? says:

          Perhaps I’m missing some context about why you’re approaching cooking the way you are. It seems like you’re trying to understand cooking from first principles on up with a bit of experimentation. That’s throwing away a lot of accumulated knowledge in the form of recipes and cookbooks.

          To me, the quickest way to cook decent tasting food is to cook from good recipes (e.g. Barefoot Contessa, Smitten Kitchen, Pioneer Woman, NYT food section). From these you can learn some rules of thumb (e.g. the recipes always suggest that I brown red meat at the beginning of the cooking process). Often, you’ll find a direct bit of advice that you can use in other contexts (e.g. wait until removing your food from heat before adding sesame oil otherwise it’ll lose its flavor).

          If instead your goal is to understand the theory of what you’re doing, something like Mark Bittman’s How to Cook Everything might be a good place to start. Going more advanced from there, I’ve heard positive things about J. Kenji Lopez-Alt’s The Food Lab.

          • DragonMilk says:

            I’m going to make mistakes while learning to cook. This weekend, I thought I had tomato sauce and didn’t, shrugged, and tasted the consequences.

            So I’m trying to gather enough info to adapt – are there substitutes for tomato sauce? Should I just have gone 20 minutes to go to the store just for a $2 can of tomato sauce? How does this apply to ground meat generally?

            So more of an adaptability thing. I’m cooking partly out of interest/practice rather than necessity, so I want to understand what’s going on rather than just following recipes blindly. Ultimate goal is that in the future, I can open the fridge and come up with something to make using available ingredients, or at least finish off the aging ones.

          • Randy M says:

            I wonder how much of the problem was that baking the already cooked meat without sauce dried it out and made it unpalatable?

          • AG says:

            So long as you cover the lasagna in the oven, it should retain the original meat juices and drying out shouldn’t be the issue.

          • Jake says:

            @DragonMilk

            As previous posters have said, the tomato sauce in that recipe really gives you two things, acid/flavor which makes the meat have a better taste, and moisture, which keeps the meat from drying out when you bake it. I think you are on the right track for trying to figure out what kind of categories things need, so you can substitute based on what you have.

            If you are looking for a quick sauce for anything and you don’t have much in the house, you can always make a simple white sauce. I just do a stick of butter, a spoon of flour, and a cup of milk (you can play with the proportions, but that ratio tends to work for me) Melt the butter, stir in the flour, then pour in the milk and put it in the microwave until it just about bubbles over (6 min or so, stirring every minute or two). You can use that as a base for alfredo (add parmesan and garlic), cheese sauce (add a handful of cheese…don’t microwave after or it splits), or whatever else you feel like. (true chefs, don’t crucify me for this one….it’s delicious and simple)

          • A Definite Beta Guy says:

            Melt the butter, stir in the flour, then pour in the milk and put it in the microwave until it just about bubbles over

            (true chefs, don’t crucify me for this one….it’s delicious and simple)

            Don’t worry, if I told my chef friend you microwaved a bechamel, after adding in all the milk at once, he’d die of a heart attack loooonngggggg before he had the chance to crucify you!

          • Jake says:

            @ADBG Yeah, when I do it correctly it turns out better, but takes 20 minutes of babysitting a saucepan. The microwave gets you to a 75% solution with a tenth of the effort. When you’ve got 4 hangry kids asking for dinner, it’s definitely the way to go.

            To make him roll over in his grave after the heart attack, sometimes I even use cans of evaporated milk, to make it thicken up even more quickly.

          • Paul Brinkley says:

            As previous posters have said, the tomato sauce in that recipe really gives you two things, acid/flavor which makes the meat have a better taste, and moisture, which keeps the meat from drying out when you bake it.

            See, this is one of the explanations that always tended to irk me. I’m not holding it against you, since you have no way of knowing what irks me (let alone the inclination to avoid it), but:

            This explanation is informative, but not completely so. To wit: what is also acidic, flavorful, and moist? Orange juice. How do you think orange juice in ground veal is likely to turn out? Horribly. (You’re invited to ask how I might know that for sure; my answer is that I’ve tried orange juice as a substitute for certain other things. In small batches, in a fit of high adventurism. There’s a chance I’m wrong, but I doubt it.)

            Yet again, I’m compelled to wish there were more cooking shows where they show you what can go wrong, and why. Why should you soak french fries in cold water before frying them? Why is pepperoni baked, and never seared? Why is orange never a substitute for tomato?

            And just how feasible is it to predict that a new recipe will taste good? (New recipes hit the world every minute. Are chefs just really good at working out the theory, or are they just resorting to trial and error after a point?)

          • AG says:

            And just how feasible is it to predict that a new recipe will taste good?

            Apparently IBM’s Watson is pretty good at it.

            As for tomato vs. orange juice, I expect it’s about the concentration, right? But also, flavor. I doubt they have the same pH, but even if they did, tomato will have a whole lotta other chemicals than orange, to produce a particular flavor profile.
            Hell, cooks get tetchy about the flavor profiles of particular tomatoes. The common all-red ones are basically cardboard flavor, as breeding them to be all-red bred out the flavorful genes. So really, any “flavor” from regular tomato paste is more about the remaining texture and additives. Using heirlooms will get another different result.

          • Jake says:

            @Paul Brinkley

            Good points on the orange juice/tomato sauce comparison. I have used orange juice with meat before, but never with veal lasagna…that just sounds wrong. I think for the criteria I listed earlier, it may be a case of necessary, but not sufficient. You need moisture and acid, but just having moisture and acid isn’t enough, it still needs to taste good.

            As for whether things taste good together or not, that can definitely be tricky to predict, though thinking of other places similar things go together can help. No clue how to help out there other than just saying to try lots of different dishes.

            I like the idea for a TV show about why cooking works, though my dream show has always been to give top-tier chefs access to an average Joe’s kitchen and see what they are able to come up with when they only have access to some random canned vegetables and a leftover roast from 3 days ago.

          • Paul Brinkley says:

            my dream show has always been to give top-tier chefs access to an average Joe’s kitchen and see what they are able to come up with when they only have access to some random canned vegetables and a leftover roast from 3 days ago

            This actually happened in the late 1990s, and was called Door Knock Dinners.

            I only know about this because of a special episode of Iron Chef in which the Chefs traveled to New York City (“Iron Chef: New York Battle”). Just before Morimoto battles Bobby Flay, they film an episode of DKD.

          • Nancy Lebovitz says:

            AG, I’ve had tasty red tomatoes with ordinary shapes. They may have been superior varieties, but whether they’re in season is crucial.

            There’s a green/brown tomato called Kumato which I’ve generally been happy with, but I got some in the middle of winter and there was no flavor.

            It’s possible to have the ability to imagine flavor combinations, and I’m at least fair at it– I guessed that fresh dill would work in lamb stew, and I was right. On the other hand, cauliflower and asparagus soup turned out to be bitter– a friend warned me, and he was right.

            When the idea of substituting orange juice for tomato sauce was mentioned, my reaction was close to neutral– as though it was something that wouldn’t necessarily be good, but wouldn’t be awful.

            I’ll note that there are orange sauces in Chinese (or possibly Chinese-American) meat dishes. Orange juice is a lot less concentrated than tomato sauce, though I don’t think that’s the only issue.

            Also, prosciutto wrapped around melon slices is delightful, and I wouldn’t be surprised if it would work with orange segments.

            And there’s Stilton cheese with orange in it, and that works.

            I still can’t figure out how you’d make an orange-flavored lasagna.

          • AG says:

            I still can’t figure out how you’d make an orange-flavored lasagna.

            It would have to be a fairly extensive retooling of the dish, in the way the “mexican lasagna” is.

            I think you’d want to go towards a kind of mole variant for the sauce-meat aspect. Although, speaking of mexican lasagna, chicken mole enchiladas already exist…

            That, or as you pointed out, an orange-chicken style, but lowering the sweetness for more savory. There are google results for “asian lasagna”…

          • Nancy Lebovitz says:

            I’m guessing that an orange-tomato sauce would be excellent if you could figure out how to amp up the orange enough.

          • Nornagest says:

            Orange zest? Dried orange peel?

    • Lambert says:

      Good cuts of meat are more worth leaving in one piece than bad ones.
      So ground meat is disproportionately low quality stuff.

    • AG says:

      The difference between tomato paste and tomato sauce is in the seasonings. But if you look at any “Grandma’s secret pasta sauce” recipes, the secret ingredient is usually sugar.

      As for making ground meat taste good, a modicum of salt is the bare minimum.
      However, I have heard that grass-fed meat is much more likely to taste good with little to no seasoning.

      As for other sauces, any other sauce could work, for ground meat by itself. Ketchup, BBQ, vinegar variants like steak sauce or worcestershire or just plain vinegar (with salt), a little mustard, soy sauce, oyster sauce, choice of salad dressing, etc.
      For lasagna specifically, there are tomato-less variations, using cheese sauces.

      You can add any kind of seasoning into the burger meat to not need sauce. Salt+pepper+garlic as a minimum, but onion, steak spices, curry, taco seasoning, dried ranch powder, and more all work. Has anyone tried using the instant ramen flavor packets?

      • HeelBearCub says:

        But if you look at any “Grandma’s secret pasta sauce” recipes, the secret ingredient is usually sugar.

        As a (1/4) Sicilian with a family recipe, I am indignant.

        The proper way to get the effect is to start with sautéed onions and cook for 12 hours.

        • AG says:

          Man, I’ve attempted “slow cooker caramelized onions” a few times now and I don’t think I’ve succeeded yet.

          • toastengineer says:

            The key is to add sugar. 😛

          • Nancy Lebovitz says:

            I don’t know whether you’ve tried this already, but you can get caramelized onions with a frying pan, very low heat, and some patience.

          • A Definite Beta Guy says:

            I don’t know whether you’ve tried this already, but you can get caramelized onions with a frying pan, very low heat, and some patience.

            Gotta say, crockpot has gotta be the dominant use here. I make a big batch every few months, freeze, and defrost as needed. How else do you have French onion soup ready for when Mom comes over with 15 minute notice?!

          • AG says:

            @Nancy Lebovitz:
            The frying pan method is how it’s supposed to be done, but it’s also high maintenance, continuous stirring. The point of the slow cooker method is that you supposedly can just throw it in the crock pot overnight.

            @toastengineer:
            Yeah, that’s actually what’s happened when I’ve tried. “Season to taste” 😛
            And that’s even with using sweet/Maui onions! Still, it doesn’t taste like the stuff I’ve had from stores, so I’m doing something wrong.

          • HeelBearCub says:

            slow cooker

            I am now indignant-er.

            The sauce must be in a large pot the size of a small washtub, on the stove, open top, stirred every 10 to 15 minutes while scraping the bottom. Both the higher heat at the bottom and the open top are critical to creating the character of the sauce.

            You start by sautéing the onions in the pot over higher heat (they don’t have to go all the way to caramelized). We add some type of bone (I like pork neck bones), the marrow of which will be drawn into the sauce, and braise slightly. Crushed tomatoes and a little tomato paste. Basil, oregano, salt, pepper. Italian sausage and homemade meatballs added after the sauce has been at temp for a long time.

          • Randy M says:

            stirred every 10 to 15 minutes scraping the bottom.

            for 12 hours? I hope Grandma had a lot of little helpers.

          • HeelBearCub says:

            It’s not something you make every week, and as long as you are going to be in the house anyway, it’s really not that big a deal. Read a book, watch TV, comment all day on SSC, … whatever.

        • baconbits9 says:

          You can get a similar (but lesser) effect with sauteed carrots.

          • A Definite Beta Guy says:

            Onion, carrot…add in some celery and I’d say SSC has successfully reinvented the wheel. 🙂

          • HeelBearCub says:

            The holy trinity, some might say.

          • littskad says:

            The holy trinity, some might say.

            Only if they want to be wrong. The holy trinity is onion, green pepper, and celery. Onion, carrot, and celery make up a standard mirepoix.

    • nameless1 says:

      Ground meat tastes pretty much like the meat it was made from. If it is awful, that is because it is store policy to grind the bad-quality meat. When people want really good burgers, they get good meat and grind it themselves. Over here typical ground meat is 50% beef (likely bad) and 50% pork. Maybe try this. At least it is fattier, hence juicier.

  19. Humbert McHumbert says:

    I’d like to know more about the known risks and harms of plastic pollution in oceans, which is becoming a big issue. Can anyone fill me in?

  20. Aging Loser says:

    Do Normal People while conversing with other people very frequently imagine the situation from the other person’s point of view in order to arrive at a somewhat reliable conclusion as to how a possible remark would make the other person feel?

    This is what is ordinarily called “empathy”, right?

    I’ve realized that there are only two people in the world, both of them close family-members, for/with(?) whom I occasionally do this.

    I’m not manipulative or cruel, and I generally prefer that the people around me be cheerful rather than sad. I’m good at identifying people’s emotional states, and often try to help them to become more cheerful. But I suspect that what I take to be an extremely minimal level of “empathy” might be the reason for my isolated manner of existence. If Normal People normally “empathize” — if they empathize, say, three times per conversation and fifty times a day — then I can see that they would find conversational interactions with me disturbing. They would see from the subtle patterns of muscular contraction around my eyes and mouth that I’m not relating to them in the Empathetic way that they take for granted.

    I suspect that I’m in some sense capable of Empathizing during conversational exchanges, but I’m not inclined to do so and in fact the thought of doing so disturbs me in a nausea-like way. It seems to me that I occasionally Empathize with the two close family-members I’ve mentioned only because they’re so much like me (or seem to me to be) that it’s as though their identities overlap with my own and so I’m not really departing from my own head when I take their points of view.

    • sentientbeings says:

      Not too long ago I listened to an interview with psychologist Paul Bloom of Yale University in which he talks about his book Against Empathy. Based on what I remember, I don’t think that’s how psychologists define empathy, but I’m not a psychologist. I think you are just trying to simulate others’ thought processes.

      • rubberduck says:

        I read that book! IIRC he drew a distinction between two types of empathy, forgot what terms he used but it was something like “logical empathy” (being aware of others’ emotions and concerned for them, ex: you see a child drowning and understand that it is terrified, if you’re compassionate you feel a drive to help it) vs. “emotional empathy” (actually feeling what others are feeling, ex: you see a drowning child and you feel terrified yourself). Lest the title mislead someone, he was specifically against emotional empathy and not logical empathy, making a case for compassion in the latter half.

        I don’t think what OP is doing counts as “empathizing” by either definition since it is not about emotions that the other person is experiencing at the moment but rather hypothetical ones. I don’t know how “normal” it is and how often other people actually empathize (by Bloom’s definition) but personally I’m much the same, it’s rare for me to consciously consider a specific person’s reaction. I prefer to go by what few social interaction heuristics I’ve picked up. Consciously thinking about others’ emotions all the time sounds very draining.

        • 10240 says:

          Isn’t compassion motivated by a form of emotional empathy: if you know that someone feels bad/good, it makes you feel bad/good if you’re compassionate?

    • I don’t think most people actively imagine what other people are thinking in the way you describe. It’s more subtle than that.

      • Aging Loser says:

        sentientbeings and Wrong Species — I simulate/imagine how these two close family-members would FEEL (defensive? resentful? abandoned? appreciated? attended-to?) if I said X or Y, not so much their thought-processes. So I was wondering whether Normal People do this almost constantly when conversing with almost anyone.

    • Plumber says:

      @Aging Loser

      “Do Normal People while conversing with other people very frequently imagine the situation from the other person’s point of view in order to arrive a somewhat reliable conclusion as to how a possible remark would make the other person feel?…”

      I don’t know about other people but I usually try to consider how others will react to my comments, but I’m less likely to do so when I’m emotionally agitated or very tired.

      I’m probably the most jerk-like (least empathetic) when I’ve consumed a lot of coffee or tea in an effort to stay alert despite being exhausted, or when I have done an unpleasant task (clearing the autopsy room drains, pulling a towel out of a toilet in the jail).

      • Aging Loser says:

        Plumber, do you imagine their feeling-states from the inside, though? As though you’re looking out through their eyes at the You (now alienated from yourself, because you’re in their head-space) that’s just said X or Y, and having an emotional reaction to what this You just said?

        I hear you on how being stressed out or tired makes one care less about how other people feel. Caring about how one’s remarks make other people feel seems to me to be distinct from the act of imagining oneself into their head-space, looking out through their eyes — an act that might be motivated by this care but need not be. (One can care about how other people feel without imagining oneself into their head-space to make sure they won’t feel bad if one says X or Y.) I suppose that the more one cares, the likelier one is to perform this imaginative act. And since it’s an act that requires expenditure of imaginative energy and also a kind of self-overcoming, fatigue would result in loss of inclination to perform this act before it results in loss of care altogether.

        (There’s an interesting blogger, about 15 years younger than you, who became a plumber recently. He wrote an engaging book about what it’s like to be the sort of person he is. I guess it wouldn’t be right for me to identify his website.)

    • AG says:

      Uncharitable, but my suspicion is that most people think they’re empathizing when they’re really just typical-minding (or rather, a variation on golden-ruling: “if I were in their shoes I’d think this”). But since they think they’re empathizing, they’re highly disturbed by people who don’t play along and at least pretend to be “empathizing,” when it might be the case that said people simply recognize that their thought processes just tend toward typical-minding.
      It’s like how most people are instinctively performative. Performativity isn’t mutually exclusive with sincerity, but people have a knee-jerk reaction against those who are too overtly performative, while also penalizing those who aren’t performative enough to observe social mores.

      tl;dr, Normal People very frequently imagine the situation from the other person’s point of view during conversation, but their imagined other person’s POV is also often not very close to the truth of it.

      • arlie says:

        I don’t know whether or not they are in any sense _imagining_ the situation from the point of view of someone just like them, but it’s extremely common for people to act as if others were just like themselves, sometimes ignoring substantial evidence to the contrary.

    • thevoiceofthevoid says:

      Personally, I don’t often experience empathy as literally imagining things from someone else’s point of view. It’s more, if I had to describe it, like running a quick simulation of them in my head; e.g. “If I complain about my grade on a test to this person, they’ll probably be upset since they generally do worse than I do on tests.” or “If I tell my parents, they’ll be somewhat upset, then they’ll try to reassure me and try to suggest ways to focus more on studying.” I think about it consciously like that mainly when I’m thinking about some significant conversation in advance; in the moment, it tends to be more subconscious. Definitely not something I could discretely count–more of a general awareness of how they seem to be feeling and thinking, and remembering how I’ve felt in similar situations. If I’m understanding you correctly, not too far removed from how you describe interacting with most people. I think you might be empathizing more than you think you are without even realizing it: “They would see … that I’m not relating to them in the Empathetic way” — you’re already thinking about how you make other people feel, though I suspect your model’s a bit pessimistic. If you care about and pay attention to what they’re saying, that’ll show, and you don’t need an out-of-body experience to connect with someone.

    • nameless1 says:

      This isn’t empathy as far as I can tell. Empathy is how *I* would feel in that situation. How *I* would feel if I got that remark. The origin of this is predicting behavior: we cannot simulate other brains, not enough capacity, but we can ask our own brain that given these inputs, what would be the likely output? And assume the other person behaves the same way. However as a side-effect it led to sympathetic pain, and hence compassion. And it is not three times a conversation, it is something constantly running. Every time before making a remark my brain is running a test that if someone would tell me this remark would I get angry? This is empathy.

      This is why it fails spectacularly sometimes, or at least people claim it does. Suppose you make a remark about someone’s weight or not being in shape, and you are surprised they feel hurt by it. Why, I would not! Then they explain they were bullied about it all through school and besides have depression, so it is really hard to deal with. I think such claims – that it is a failure of empathy – are false; I cannot be expected to simulate a person’s whole life and mental conditions… empathy is walking in someone’s current shoes, not their whole life history. In this sense it is literally true that privilege blinds. Too bad it was turned into a political club to whack people around. In the literal sense, it is perfectly true that if some part of my armor is thick where theirs has a vulnerability, then my simulation will tell me people are not vulnerable at that particular spot, which can be entirely wrong.

      • Bamboozle says:

        Adding a +1 to say this is what OP is looking for. I agree that empathy is when you base your response to someone on a quick simulation of how you would feel in their circumstances (updated with relevant bits of information you know about their history, how their mind works from your history with them) and then formulate your response from there. Not 3 times a conversation but before you say anything.

        This is probably where a breakdown happens with some people giving logical solutions to other’s problems when a simulation would tell you that this person just wants to be heard and needs to vent.

        As a personal aside, I used to be a very numbers-based, socially awkward person (studied physics at university) and put in a lot of personal effort to work on social skills. I’m now a relationship manager for a big firm and don’t use numbers in any way day-to-day, and have found that my interest/aptitude in such logic-based things has dramatically declined as a result. Interested to hear if others have had similar experiences.

        • arlie says:

          Sadly, yes. Or more correctly, various aspects of technical, “hard” skills are down for me, even though I still work as a software engineer – I spend too much time communicating, negotiating, and dealing with silly human quirks for that decrease to make me less marketable. But I don’t like it, and I don’t know whether the problem is aging, inattention, or even some incompatibility of “soft” and “hard” skills in the same person.

          • Bamboozle says:

            I’ve been thinking about this a lot and think that it must be some incompatibility of “soft” and “hard” skills.

            I struggle to think of any famous figures who are truly very good at both. It seems that ultimately one takes a back seat to the other, given the examples we have of successful people who take either to the extremes.

          • Nornagest says:

            I think this is probably more a regression-to-the-mean thing. It’s easy to spot people who’re above average in both hard and soft skills, even if our culture kind of pushes against that from both sides (nerd culture valorizes social problems, mainstream culture discourages nerdiness). It’s a lot harder to find people who’re three sigmas out in both, but that’s exactly what we’d expect of three-sigma skills that’re even somewhat independent of each other.
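The regression-to-the-mean point above can be made quantitative with a quick back-of-the-envelope sketch. Assuming standard normal distributions and full independence between the two skills (both simplifying assumptions, purely for illustration):

```python
import math

def tail_prob(sigma):
    """P(Z > sigma) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p_one = tail_prob(3)    # one trait three sigmas above the mean
p_both = p_one ** 2     # both traits, assuming full independence

print(f"one trait three sigmas out:   about 1 in {1 / p_one:,.0f}")
print(f"both traits three sigmas out: about 1 in {1 / p_both:,.0f}")
```

One trait three sigmas out is roughly a 1-in-740 event, while both at once (under independence) is roughly 1 in 550,000 — which is why people exceptional on both axes are so hard to name even if the traits don't trade off against each other at all.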

  21. bean says:

    Since Naval Gazing hit the 1-year mark, I’ve decided to pull back my link postings here to the whole-numbered threads only. This seems like a good compromise between me wanting to cut back the effort involved and the people who still find the links useful.

    Russian Battleships, Part 4: A look at the dreadnoughts which followed the Gangut-class, a few of which were completed before the Russian Revolution upended the naval program.

    The Museum Ships of Europe: A reasonably comprehensive list (excluding replicas). Go see if there’s one near you. (Presuming you’re in Europe, that is.)

    Lastly, a look at Operations Research in the Battle of the Atlantic. OR is basically the practice of bringing scientific methods to bear on decision-making and operational problems. One of the most important foundations is the work done by scientists fighting the U-boats, including problems like depth-charge settings, convoy size, and whether merchant ships should have guns.

    Also, I’ve started what I refer to as overhauls on old posts. This mostly involves adding links to new posts and cleaning up any style errors that bug me on re-read. List can be found in my Open Thread.

    Finally, update your bookmarks. The CAPTCHAs no longer load on navalgazing.obormot.net. Go to navalgazing.net instead.

    • Incurian says:

      OR is so fascinating!

      • bean says:

        I know! It’s really, really cool stuff, particularly the bits where you get huge improvements from minor tweaks.

        • What my father was doing during the war.

          The one big failure he mentioned was the attempt to use multiple regression models to design better alloys.

          • bean says:

            That’s not what I expected the SRG to be doing. But I don’t know a lot about them.

          • Better alloys would obviously be useful for military purposes. But apparently the relevant relationships are very far from linear.

          • I remember two other things he mentioned. One, which led to important developments in statistics, was figuring out the best way of testing a batch of bullets to make sure the quality was adequate. The old approach was “fire N from a batch of M. If fewer than X misfire, accept the batch.” Statisticians had figured out the optimal value for the variables.

            But that’s the wrong answer, because if the first twenty rounds out of your planned hundred are duds you should reject without bothering to fire the next eighty. So a formula with a stopping rule in it is more efficient. I don’t remember the label for the branch of statistics that came out of that.

            The other one, which I may have mentioned before, involved torpedo directors, analog computers used to aim torpedoes. Designing one requires information on the performance characteristics of the target. We didn’t have that information for Japanese ships, so used the characteristics of our ships, on the theory that it was probably similar.

            At some point we captured a Japanese torpedo director, presumably attached to a Japanese destroyer. Someone got the bright idea that they had probably done the same thing, so tried to reverse engineer the director to deduce the characteristics. They were never able to get a plausible result.

            My conjecture is that the torpedo director was for the Type 93 torpedo, which had a much longer range than anyone else’s torpedoes, and the people doing the reverse engineering didn’t realize that.

            I may have told this story here before–if so apologies.
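The stopping-rule idea in the bullet-testing anecdote above is the seed of what became sequential analysis (Wald's wartime work at the Statistical Research Group). A minimal sketch of the "curtailed" version of the test, with hypothetical parameters (fire up to 100 rounds, reject the batch on more than 5 misfires):

```python
import random

def curtailed_test(batch_fail_prob, n_max=100, max_misfires=5, seed=0):
    """Fire up to n_max rounds from a batch, but stop as soon as the verdict
    is decided: reject once misfires exceed max_misfires, and accept once even
    all remaining rounds misfiring couldn't push the count over the limit."""
    rng = random.Random(seed)
    misfires = 0
    for fired in range(1, n_max + 1):
        if rng.random() < batch_fail_prob:
            misfires += 1
        if misfires > max_misfires:
            return ("reject", fired)                  # batch already failed
        if misfires + (n_max - fired) <= max_misfires:
            return ("accept", fired)                  # batch can no longer fail
    return ("accept", n_max)

verdict, rounds = curtailed_test(batch_fail_prob=0.5)  # a very bad batch
print(verdict, "after", rounds, "rounds instead of 100")
```

The early-accept branch can only trigger near the end of the run, so the big savings come from early rejection of bad batches — exactly the "first twenty are duds" case in the comment. Wald's full sequential probability ratio test goes further and can accept early as well.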

          • bean says:

            I’m not disagreeing that better alloys would be helpful. It’s just not something that falls within the traditional purview of OR, and I expect the metallurgists would have looked into that already. Then again, I find stuff where analysis that is blatantly obvious to us today was missed for decades all the time in various books.

            You have mentioned the torpedo director before. I did some looking, and I suspect that the torpedo director in question was an aerial unit. It was more likely to contain the sort of simplifying assumptions that could be reverse-engineered out of it (the speed of the Type 93 would be shown pretty explicitly, as would other variables, while an aerial torpedo sight has to work quickly) and the US was significantly more likely to get their hands on one from a downed torpedo bomber.

  22. 4thwaywastrel says:

    I’m in the opening stages of a project aimed at increasing the effectiveness of my mental loop that does more or less the following:

    1. Dream of a more optimal world
    2. Plan a way to close the delta between the current world as I understand it and the Dream
    3. Act to close that delta
    4. Constantly evaluate so as to update on shifting Dreams and Plans and actually consistently Act.

    Other than writing about it and forming a model the other idea I’ve had is to form a group of people which can be used as a tool by each member to augment this process. So I got some people together and we’re going away to talk about it this weekend. I could see this taking the form of:

    * Hypothetical question asking and rubber ducking to narrow in on what exactly one’s Dreams might be
    * Using the experience/brainstorming power of the group to construct a more effective plan
    * Have the group schedule check-ins to run this loop in big ways and in small ways (weekend retreats to go deep, and daily check-ins to refresh)

    My question is, has anyone tried this or anything like it? And if so, what worked, what didn’t, and what model did you use? Also, if anyone could come up with a term that encompasses that loop start to finish I would be eternally grateful. So far the closest I’ve found is “actualize” but that doesn’t seem to convey the Dream portion as much. “The executive process” kinda also fits but is a bit long for my liking.

    • AG says:

      There’s work performance evaluations. Setting goals, regular meetings to evaluate goal status, and then revising goals each year to move them towards promotion, while also evaluating the standards and requirements of each promotion level.

      There’s performance goals and development goals, each given a priority weighting to connote how much time someone will spend on that goal during the year.

      “optimal world” with a narrow scope, in terms of the workplace as the world.

      Along the way, running Root Cause Analysis on major deviation events.

      5S as a structure for evaluating an environment, and Six Sigma projects for changing it.

  23. Jo says:

    Hi, is anyone here from Hamburg?

  24. johan_larson says:

    You have been retained by the Time Patrol as a cultural consultant. They have asked you to prepare a list of works that capture the spirit of the 201X first-world anglosphere, for use by agents preparing for missions in this time and place. These works could be almost anything; movies, books, comics and video games are all fine. Obviously, these works of fiction won’t be the only training agents will receive, but they’ll help to characterize the time and place for people who may be from centuries in the future (or past, sometimes.)

    What do you recommend?

    • johan_larson says:

      My first cut at this is to look for a work characterizing each level of society, roughly speaking. I recall Scott referencing a couple of different articles with four-level models, along the lines of the underclass/working class/gentry/wealthy.

      My first cut is:
      The Underclass: The Wire, season 4
      The Working Class: Roseanne, early seasons (before she wins that lottery)
      The Gentry: Mad About You
      The Wealthy: Billions, season 1

      • Nornagest says:

        Any piece of media about the lives of the truly wealthy is going to be wildly inaccurate, partly for culture-war reasons but mostly because the truly wealthy have better things to do with their time than consult for media production. But by the same token, the Time Cops probably won’t be hanging out with billionaires either, so it doesn’t matter much.

    • proyas says:

      A piece of fake dog shit.

    • Paul Brinkley says:

      The boring meta-approach is to look at what got the most attention in 2010-2019, particularly favorable attention. So, start with the Billboard top 100 for all ten years, and the top in ticket sales, top in television viewership, top books on the NYT bestseller list, and top websites on Alexa.

      The question to me is whether video games are enough of a cultural zeitgeist to be worth looking at. How do they compare with movies, TV, books, and websites? Speaking as someone who plays them a lot, they seem like they’re still something of a niche pursuit. World of Warcraft is an obvious counterexample, but it’s mostly a 2000s phenomenon, not 2010s, and it peaked at only around 11 million, and there haven’t been any other obvious uber-MMOs out there. Certain casual games might make the grade, but I don’t know of any that got that much consumer attention.

      • HeelBearCub says:

        PUBG has sold over 50 million copies with 400 million players worldwide.

        That’s not niche.

        • Paul Brinkley says:

          I keep seeing Steam notifications that one of my friends is playing this, and just assuming it’s my bubble. I honestly had no idea PUBG had done this well. Agreed; this isn’t niche.

          Not sure how this translates into “spirit of 201x culture”, but at least I can say the numbers support it playing a role.

          • HeelBearCub says:

            To put this another way, more generalizable to other well known media creations, consider the following:

            ESPN pulls in about 1.5 million viewers per day. Clearly ESPN is a very well-known cultural element. You may not watch ESPN, but you know of ESPN. Fox News is in the same range.

            Twitch, which is targeted primarily at allowing people playing video games to stream video of their play to people who wish to watch it, was pulling in about 1 million viewers per day as of February. This isn’t directly comparable, as Twitch is arguably a worldwide community, but it is also only 5 years old.

          • bean says:

            I think the big difference is in terms of commonality of viewership. ESPN has 1.5 million viewers, but there’s only one ESPN. Likewise Fox. Twitch is a platform for lots of different content. Those 1 million people are watching hundreds of different things.

            As for the size of PUBG, I think you’re ignoring the worldwide aspect there. How many of those 50 million sales are in Japan, Korea, and China?

          • HeelBearCub says:

            @bean:
            They are all watching video games, though. Just as some ESPN viewers don’t care about any particular show or sport it may cover, but do care about sports in general. The ESPN market is slightly more coherent, but I think we can easily infer that the market of people interested in video games is quite large, indeed.

            400 million unique users is huge even from a worldwide perspective. 50 million copies of a single game is still impressive globally. That’s on par with the gross revenue that Avengers: Infinity War made.

      • johan_larson says:

        Video games have higher revenues than movies at this point. They’re emphatically a big business. And they seem to have a bit of bleed-over; I don’t play, but I know who Nathan Drake, Master Chief, Commander Shepard, and Andrew Ryan are.

        • AG says:

          But can you extrapolate what agents preparing for missions should take away from knowing these top games?

          Speak too specifically about Mass Effect and you still get side-eyed as an info-dumper, no matter how mainstream it is.

          Popularity doesn’t translate to probability of social interaction in this age of atomized fandom. The most popular web content has fans in the millions, but is basically invisible to the popular niche next door. Tyler Oakley fans may not care about the McElroys, who may not know who Rooster Teeth even is. Similarly, being a fan of big AAA first-person shooter #3 doesn’t get you much small talk with a fan of big AAA first-person shooter #2, unless the mission of these agents is to create fanwank.

          Honestly, I think hypothetical agents would gain more from watching videogames secondhand, looking at a selection of the most popular Let’s Plays or perusing the memes associated with any particular game, than from looking at the games themselves.

          • woah77 says:

            I think Gamer lingo would be the most useful thing. Being able to use it would identify you as culturally adept and “young” which would likely be useful to any agents coming back to this time.

          • Michael Handy says:

            @woah77 is right. Knowledge of the main memes (Arrow to the Knee, I’m Commander Shepard, and this is my favourite post on SSC, etc) is probably enough

    • AG says:

      The ironic thing is that media operates off of nostalgia cycles, as shown by all them reboots. So 201X media has a whole lotta love for the 80s. This won’t help agents, either, because it’s not even an accurate portrayal of the 80s, just as in the 2040s-60s, their portrayal of the 2010s won’t be accurate, either.

      Their best bet is to use actual documentaries as a springboard…and Vine compilations.

      • Michael Handy says:

        Yeah, I’m not sure how to cope with the periodic Stuart Revivals during the 18th-19th century, after the fact and with heaps of evidence. The 20th Century is probably harder than that.

    • knockknock says:

      Why dig for zeitgeisty “works” — Why not just look at Facebook, Instagram etc?

      • Plumber says:

        @knockknock

        “…Why dig for zeitgeisty “works” — Why not just look at Facebook, Instagram etc?…”

        Well I don’t do Facebook, or Instagram (or Twitter, or probably “etc”), nor have I played videogames after the 1980’s, so by that logic I can’t “pass for this time” either.
        If the time travelers are adults they have to know things that someone their apparent age would know, so if they’re my age they’d need to know things from the ’70’s and ’80’s (and what were the “re-runs” from before)

        Let’s start with movies:

        The Maltese Falcon, 

        Casablanca, 

        The Third Man,

        On the Waterfront,

        Doctor Zhivago,

        Logan’s Run, 

        Planet of the Apes,

        Star Wars,

        The Warriors, 

        Death Race 2000,

        Dragon Slayer,

        Rock n’ Roll High School, 

        Escape from New York,

        Blade Runner,

        The Road Warrior,

        Conan the Destroyer, 

        Eating Raoul,

        Young Sherlock Holmes,

        They Live


        et cetera.

        Music:

        Chuck Berry, 

        Bo Diddley,

        The Beatles,

        The Yardbirds

        The MC5,

        The Velvet Underground, 

        The Stooges,

        AC/DC,

        The Ramones, 

        Prince,

        The Clash, 

        Motörhead,

        Stiff Little Fingers,

        The Undertones,

        Madness,

        The Specials,

        Venom,

        Run-D.M.C.,

        Hüsker Dü,

        et cetera. 

        I know these far better than “current” stuff, and I’m not alone, I bet far more are familiar with “Leave it to Beaver”, “Hogan’s Heroes”, “Lost in Space”, “Star Trek”, “M.A.S.H”, and “Good Times” than whatever’s on T.V. now.

        The suggestions I’m seeing are a better fit for talking to my 13 year old son than to me.

        Here’s an idea: Have the time travelers study up on 1950’s culture and send someone old looking, if they slip up youngsters won’t likely notice.

  25. HaraldN says:

    There’s been quite a lot of talk lately of global warming, and there’s a question that has been bugging me for a while in terms of blame/responsibility allocation (objectively pointless, but realistically essential in political reality). Here’s my chain of reasoning so far:
    First try: by country. Here we find the US and China as big culprits, and my own country as largely blameless. But countries are kind of arbitrary, so
    Second try: per capita. Here we find that the US is really bad, but China starts looking pretty good, and my own country starts looking kinda bad. But the purpose of humanity is not just to live, so consider
    Thirdish try: by GDP. Here the US suddenly looks really good. Obviously GDP is a very blunt instrument, since stuff like babysitting and widget production both add GDP but are very different.

    And the whole thing just gets more complicated when I consider that I’m not sure the studies have accounted for externalized emissions (if a european buys a phone made in china, the emissions show up in china but the problem is european consumption). I also feel there should be a game theoretical angle here, in that if we go strictly by capita and limit the allowed emissions strictly to x/person, then the right game theoretical play is for countries to get as many people as possible (to get as much total x as possible), which will just make the actual problem worse.

    Feel free to add any angles I might have missed, or just write down the correct solution if you have it 😉

    • At a slight tangent, one of the odd things about the whole AGW controversy is the weight both sides give to the A. It seems to be assumed that if warming is due to human action then we should do something about it, but not if it isn’t.

      Obviously the cause of warming is relevant to what one can do about it. But if warming really threatens horrible consequences, one would want to try to do something about it even if it wasn’t due to human action.

      There seems to be an underlying moral judgement with regard to the human race–if we broke it, we should fix it, if someone else broke it, not our problem.

      • 10240 says:

        I don’t think it’s unreasonable. We know (I think) that natural events that would cause humanity serious trouble are rare enough that, if a phenomenon is not caused by human activity, then our priors should be high that it won’t cause us serious trouble.

        E.g. if we observe that Earth is warming pretty fast, we may not know if we should expect it to continue or taper off. But if (say) we know that at no time during the last few tens of thousands of years did Earth warm by several degrees over a few centuries, and we knew that we are not causing the current warming, then it would be very likely that the warming will stop or slow down soon (whatever is causing it). But we can’t conclude the same if we are causing it.

        Also, I think the primary argument about global warming is theoretical predictions about human CO2 emissions, not actual observations. Observations can be used to confirm (or refute or correct) the theory. If we observe actual warming, that offers some evidence for the theory if we know that warming this fast is unlikely to be caused by natural phenomena, while otherwise it’s somewhat weaker evidence. If the theory is correct, that, in turn, has the implication that warming is likely to continue as long as we emit CO2.

        • Salem says:

          I don’t think we know that at all. We’ve had climate in relatively recent history (in the geological sense) that would be disastrous for our civilisation, probably far worse than the worst-case IPCC estimates on warming.

          • 10240 says:

            Changes of several °C did happen, but over thousands of years, not 100–200 years (I think). If such a climate change happens over thousands of years, it allows species, ecosystems and humans to adapt to it more easily.

          • Changes of several °C did happen, but over thousands of years, not 100–200 years (I think).

            We don’t have global data good enough to know that it hasn’t happened this fast before. The one source of high-resolution data going way back that we do have, ice cores from Antarctica and Greenland, shows episodes of temperature change faster than what happened in the past century.

            I believe those correlate between the two sources, so it may well have been global, but we don’t know.

        • Tenacious D says:

          Not disagreeing with what you said, I just want to discuss the tapering off point:
          Shouldn’t the warming taper off according to the theory? Or is my understanding off?
          Equilibrium Climate Sensitivity (ECS) is the temperature increase expected from a doubling of atmospheric CO2, so warming is linear in doublings (i.e., logarithmic in concentration). So based on a pre-industrial concentration of 280 ppm and 410 ppm at present, 2/3rds of the warming that could be expected at 500 ppm is already locked in. The curve levels off as concentrations get higher. If ECS = 3, then we need to get to 560 ppm to be +3 from pre-industrial, to 820 ppm to get to +3 from current [steady-state] temperatures, and to 1120 ppm to get to +6 from pre-industrial. Unless new positive feedback mechanisms kick in, of course.
          In contrast to the ECS curve, the install base of green technology is increasing more than linearly. So I’m hopeful that human ingenuity will win the race.
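          The figures in that comment follow from the standard logarithmic form of the ECS relation, warming = ECS × log2(C/C0). A quick sketch checking the arithmetic (assuming ECS = 3 and the 280 ppm pre-industrial baseline used above):

```python
import math

def warming(ppm, ecs=3.0, pre_industrial=280.0):
    """Equilibrium warming (deg C) above pre-industrial for a given
    CO2 concentration, assuming ecs degrees of warming per doubling."""
    return ecs * math.log2(ppm / pre_industrial)

print(round(warming(410) / warming(500), 2))   # 0.66 -- roughly 2/3 locked in
print(round(warming(560), 2))                  # 3.0  -- +3 from pre-industrial
print(round(warming(820) - warming(410), 2))   # 3.0  -- +3 from current 410 ppm
print(round(warming(1120), 2))                 # 6.0  -- +6 from pre-industrial
```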

      • BlindKungFuMaster says:

        The ‘A’ points to a cause, namely carbon dioxide. This implies solutions which one of the sides doesn’t like. So the ‘A’ is relevant to how to fight climate change, which is really the core of the disagreement.

      • Rick Hull says:

        The focus on anthro, IMHO, is that if we can focus on the cause, it gives us pathways for the solution. For example, reverting to 1M hunter-gatherers globally is an option for AGW. Not necessarily a good one, but options never hurt.

        For non-anthro GW, we lose the options regarding “rewinding human progress / growth”.

        • As I said:

          Obviously the cause of warming is relevant to what one can do about it.

          But the arguments I see are not “if warming might not be anthropogenic we should investigate geoengineering solutions instead of putting all our chips on CO2 reduction.” It’s more “you denialists don’t want to do anything about warming because you deny that humans are responsible.”

      • Dan L says:

        Agreed that it’s a decidedly odd set of reactions. Among those who believe that there would be dramatic consequences from environmental shift, I don’t see much of the response I would expect from those who believe anthropogenic effects are particularly strong (“If we’re in the driver’s seat, is there somewhere better to go?”) or particularly weak (“The driver’s seat is totally empty. Frick.”).

      • nameless1 says:

        If it is caused by emissions, we can reduce emissions and that will do something. If it is caused by sunspots, reducing emissions would do nothing; it would be a statistical blip. This isn’t moralistic. If we did not break it, it is likely that no amount of carbon-footprint cutting would fix it, which means we should focus on dealing with the consequences.

        • Or on other ways of preventing it, such as geoengineering.

          I agree that whether humans cause it is relevant to how it could be prevented, but it isn’t relevant to whether it should be prevented, save indirectly through possibly affecting the cost of preventing it.

          • RobJ says:

            You mentioned this yourself, but I think it’s very clear that many consider it a moral issue. I think this particularly goes for non-human effects. If the effects on humans are bad enough, we will obviously try to prevent it whether we caused it or not. But if we are talking about things like species extinction, it could be considered sad if it’s a natural process that is not our fault, but a moral wrong to let it happen if it is our fault.

    • The Nybbler says:

      If global warming due to CO2 emissions is the catastrophic threat some claim, it doesn’t matter how you account for it; it has to be stopped regardless of how many people or how much economic activity that CO2 production supports.

      If it’s not, but you still think it’s damaging, you have to figure out the consequences of both the CO2 emissions and of cutting them by whatever amount proposed. No one ever does this by any accounting measure more sophisticated than the handwave. To be fair this is largely because of the large unknowns on both sides.

      • HaraldN says:

        Yes, but that’s the sort of idealism that won’t actually get the problem solved. Everyone wants to do as little as possible, and will do so (dooming us all in a classic prisoner’s dilemma) if given the chance. So how can we figure out how to distribute responsibility in a way that corresponds to the worst emitter having to carry most of the cost? This might be a better problem statement than the one I made in the OP.

        Your post also simplifies the matter quite a bit. If country A can keep the global supply chain running at half the CO2 emissions of country B (a vast oversimplification, but I think the point is clear), then it would be foolish to curtail both equally.

    • Thomas Jørgensen says:

      Responsibility does not matter; we should stop burning coal immediately, because its externalities are so high that it is in fact the single most expensive power source in existence.

      http://www.coaltrainfacts.org/docs/epstein_full-cost-of-coal.pdf

      The external costs of US coal – without counting climate change at all – are here estimated to be between 8 and 16 cents per kWh. This is far, far costlier than nuclear power, and more expensive than renewables + storage.

      It would still be more expensive than nuclear if the coal utility gave you the electricity for free, it does that much damage.

      The US just mostly pays this bill in average citizens dying from cancer, and miners dying more directly on the job.
      Counting estimates of climate-change costs increases that range to 9 to 27 (!) cents per kWh.

      Nuclear Power. Because lung cancer is expensive.

      Before someone brings it up – no, the externalities of natural gas are not this extreme, but if you replaced US coal with natural gas, the direct costs would go through the roof. The current low cost of natural gas is an artefact of overinvestment in fracking, not a sustainable long-term level.

      • The US just mostly pays this bill in average citizens dying from cancer, and miners dying more directly on the job.

        Miners dying is mostly a cost to them, converted into a cost to the producer and so part of the price of coal by the need to pay people more the more dangerous their job is. As the article you link to appears to recognize.

        • Le Maistre Chat says:

          Well sure, now that we’ve abolished slavery.
          (When role playing in the Bronze Age Near East/Greece, I very occasionally have to resist the urge to have a West Semitic smith named Adam argue that instead of mining with slaves, you should pay people enough to risk their lives and accept the value of metals skyrocketing. 😛 )

          • baconbits9 says:

            It’s my understanding that the price per pound of cotton on the world market dropped after the end of the Civil War in the US, and dropped below the pre-war prices fairly shortly into Reconstruction. Are there other examples where the end of slavery resulted in higher prices?

          • Le Maistre Chat says:

            @baconbits9: by the time of the ACW, or even 1833 when slavery was abolished in the British Empire, owners of mines and cropland would have had much more ability to substitute capital for labor as the cost of the latter increased.
            What I’d find most interesting to know would be the economics of mining without slavery in the Middle Ages. Come back, David Friedman…

          • Protagoras says:

            David Hume first argued that low-paid labor was more economically efficient than slavery (Smith had a much more comprehensive economic theory than Hume, of course, but wherever Hume had already discussed something, Smith tended to follow him).

          • Le Maistre Chat says:

            @Protagoras: thank you for reminding me.
            Annotated editions of Edmund Burke’s Reflections on the Revolution in France typically note that he obliquely referred to Hume and Smith in positive contrast to French philosophes. This is interesting because he hung out with Samuel Johnson and Boswell (who dutifully documented his puns), who considered Hume dangerous.
            An even more amusing anecdote about the major thinkers of the time knowing each other is how Hume tried to make friends with JJ Rousseau, was warned not to by Diderot, and they ended up feuding like homosexual lovers.

          • John Schilling says:

            It’s my understanding that the price per pound of cotton on the world market dropped after the end of the Civil War in the US and dropped below the pre war prices fairly shortly into reconstruction.

            But keep in mind that’s about the time that Egyptian and Indian cotton started arriving on the world market in vast quantities. Indeed, a big part of the Confederacy’s downfall was the miscalculation that of course the Great Powers of Europe would support their cause, because their textile industries and thus the whole of their economy would fall into chaos if they did not.

            Oops. The new supplies easily expanded to replace blockaded Southern cotton, England and France were free to choose which North American power they felt like supporting, and once the blockade was lifted the new supplies didn’t go away, so yeah, do the math. What happens if you replace slavery with free labor and don’t also get another huge supply source opening up is a trickier question.

          • baconbits9 says:

            @ John Schilling

            IIRC the South was producing more cotton at the post-war prices in 1870 than it did in 1860. Barring something like substantial subsidies that I am not aware of (which totally could have existed; I’m just not aware), the implication is that the South was able to produce as much or more cotton at those lower prices, which makes it seem unlikely that emancipation in the US would have led to a sustained increase in prices without those markets.

            @ Le Maistre Chat

            I am open to the argument that the US Civil War is not representative, and the question I asked at the end of the previous post is an earnest one born out of ignorance, not question-begging. To my knowledge there have been emancipations of numerous groups at different points in history (with differing levels of servitude); it seems like there ought to be an example or two if that is the expected outcome of emancipation.

          • he hung out with Samuel Johnson and Boswell (who dutifully documented his puns), who considered Hume dangerous.

            Boswell considered Hume dangerous? That doesn’t fit my rather vague memories of what Boswell says of Hume in his memoirs (or whatever the books are called).

          • Le Maistre Chat says:

            @DavidFriedman: grammatical slip-up on my part. Boswell was friends with Hume, whom Johnson refused to socialize with (Boswell basically wanted to be friends with everyone famous). Boswell recorded that when Hume was dying, Johnson encouraged B. to go see his infidel friend to test the hypothesis “There are no atheists in foxholes.” Hume was famously serene in the face of transport to the Undiscovered Country, which IIRC shook Boswell.
            Johnson saw philosophy as high-stakes business for society. He believed it was a mistake for Burke to serve in the House of Commons when he had the intellect to be a Christian philosopher, one who Johnson thought could check Hume and supersede Bishop Berkeley (whose idealist empiricism upset Johnson enough that he smashed his foot against a big rock).

    • Fossegrimen says:

      If you just want more confusion you can add sequestering: Sweden has ludicrous amounts of well-managed forests and can currently be considered a net carbon sink. Russia has even ludicrouser amounts of totally unmanaged forests that don’t make much of a difference.

      • HaraldN says:

        An interesting addition, but I am not sure it holds. The trees are already there, and I am not aware that either Sweden or Russia is doing much to increase its sequestering (say by growing forests and then cutting down the trees and burying them far underground).

        • Fossegrimen says:

          There is such an effort, though it comes not just from burying trees but from a wider perspective on forest management. Guessing that you can read Scandinavian languages from your moniker, you can find the relevant research at NMBU.no
          And yes, Sweden, Norway, UK and New Zealand are putting rather large efforts into it.

  26. CheshireCat says:

    I’ve posted a lot in open threads about my journey to try and fix my emotionlessness, depression and anhedonia (most recent comment here). I’ve tried 4 different antidepressants, a few supplements, therapy, and even psilocybin mushrooms, all with varying degrees of ‘absolutely no effect’.

    My most recent experiment has been self-administered ketamine. And not only does it work, it works very well. My anhedonia is greatly improved — I can have fun doing plenty of things I didn’t even enjoy before. I’m more emotional and more expressive. That dull latent pain that’s been omnipresent for years is either gone or only barely there. I’m less anxious and less bothered by negative events. My cognition is far healthier and less self-critical. I even have much more motivation. It doesn’t give me all my emotions back, and my depression comes back in force after about a week or so, but it’s by far the best treatment I’ve tried yet. In fact it works so well, treating such a stubborn and longstanding depression, that I have to fight the urge to recommend it to every friend suffering with depression. I’m so glad I found it and it’s improved my life in almost every way.

    I’m gonna try adding Wellbutrin next, to see if it further improves my emotionality. I’m currently also taking Remeron, though small doses and mainly for sleep issues.

    And for anyone who wants to try this on their own, my “recipe” is 1mg/kg of body weight in powdered ketamine, dissolved in water and taken sublingually in 1/3rd doses over the course of an hour. I have to buy it off the darknet though, so unless you’re willing to learn how to do that you’re out of luck.

    • Garrett says:

      Can you confirm your 1mg/kg dose? Looking at the use of ketamine for pain management (with results comparable to morphine), the dose is 0.1–0.6 mg/kg, depending upon the source. Your one saving grace here is that oral bioavailability is only about 20%.
      But dosing questions are why there are doctors and pharmacists.

      • CheshireCat says:

        Not entirely sure what you mean by confirm, but yeah 1 mg/kg is the dose I take, though I should clarify that this is a per week dosage, not daily. I arrived at it based on trial and error, and after looking into what dose ranges have been tried before.

        I determined my acceptable therapeutic range (.25-1.25 mg/kg) from looking at the small handful of oral ketamine studies available. [1] [2] [3] [4] [5]

        The most common dosage among oral studies is .5 mg/kg, but there are some that have tried more or less. The latter two in particular experimented with higher doses: [4] used 1.25 mg/kg, and [5] used a pretty insane flat 150 mg daily oral dose for 6 weeks. For comparison, as a 150lb man I take ~60mg once a week. Obviously there isn’t much to be gained from looking at a tiny number of low-powered studies, but ketamine is already known to be a very safe drug so I’m not too worried about safety.

        If we’re being irresponsible and directly comparing the bioavailability of sublingual vs IV, the doses of ketamine I take are roughly equivalent to 2/3rds the strength of an ordinary IV dose. I don’t know if that’s the optimal dose but it works for me.

        My biggest safety concern is possible bladder damage, given the higher net amount of ketamine passing through my body than with IV administration. But bladder damage seems to start at like 2 mg/kg per day? I don’t take anywhere near that much and will stop immediately if I notice any signs of damage.

        Thus far I have noticed no adverse effects of any kind, except maybe that it sometimes feels like it works *too* well and insulates me from bad feelings that I probably should be feeling a bit more. But that’s a small price to pay and it’s very temporary.

        I wish I could talk to my doctors about this stuff, but none of them are willing to, so I’m stuck doing the dirty work myself. I don’t know much about pharmacology but I think I’ve done enough homework that I’ve arrived at something sustainable. Anyone with more knowledge, please feel free to call me out if I’m doing something really stupid.

    • Rick Hull says:

      What is your subjective sensation regarding any psychedelic (Fear and Loathing style) effects? Is this a microdose?

      • CheshireCat says:

        I’m not sure I would call it a microdose, as it’s definitely psychoactive, but I don’t think I would call it a “trip” either. (I’ll call it that anyway for the sake of convenience.)

        Basically, I get really relaxed and unfocused. I lie down and my internal dialogue gets quiet and peaceful. My fears and anxieties tend to melt away during the trip, and for a few hours after, before coming back. I’d say it takes about 2 days after the dose for the therapeutic effects to fully kick in.

        I can still function decently during the trip, ie I’ve gotten text messages and returned them without much issue, but I’m not sure I could hold a conversation or convincingly pretend to be sober in the midst of things. My coordination is impaired and thought process is somewhat screwed with. I definitely would never drive.

        It’s really nice and therapeutic to lie down and fully relax for 1.5 hours, but it’s not an inherently pleasurable experience. I don’t feel any urge to take more to “get more out of the trip”, if that makes sense. Ketamine does have addiction potential, but I don’t crave it for the trip or anything. In fact I often procrastinate taking my dose because I don’t want to bother sitting around for an hour and a half doing ‘nothing’.

        Even after the trip feels over, I still feel a little loopy and disorganized for a while after. I wouldn’t drive for maybe 2 hours after I took the last part of the dose, as that’s about as long as it takes for the secondary effects to wear off.

        Other than that, nothing really. No ego dissolution, altered perceptions/minor hallucinations or anything of that sort. Just some cognitive/motor deficiencies similar to being tipsy drunk, and a relaxed, painless state of mind.

  27. bean says:

    Re the comment on insane people generating most internet content, this one rings true to me. Blogging isn’t an actual addiction for me, but it’s not too far away, either. My life has been getting busier lately (in a good way), and I’m trying to cut back, but I’m having to fight to keep from running my buffer up more than it already is. Based on some stuff Scott has said, I think he’s the same way. And some people are going to take it to even more absurd extremes, like the Wikipedia guy.

    • pontifex says:

      But Scott posts all the time, and he’s not insane… right? right??

      • bean says:

        I think the author of the post wasn’t using language that well. His point is that the vast majority of internet content is generated by people who are very much not normal. Insane isn’t a good word to use for that, but the deeper point is correct as far as I can tell.

        • pontifex says:

          Yes, I read through the linked essay and “insane” doesn’t seem like quite the right word. “Eccentric,” perhaps.

          If I were Nassim Taleb, I would make up a catchphrase like “20% of the people drink 80% of the beer” and write a few books about it 🙂

          Ironically, the people who write about prolific writers are likely to be prolific themselves, so a sympathetic treatment is likely.

          • bean says:

            Pretty much. Looking at this from (sort of) the inside, I can’t really imagine not blogging any more. Earlier attempts at blogging and the like failed because I couldn’t keep myself doing it. Even my big space warfare paper was often a chore. And I won’t say that every blog post has been totally delightful to write, but I’m not joking when I say it’s the next thing to an addiction. If I was somehow banned forever from writing about naval things, I give really good odds I’d be talking about spaceflight or something like that within a few weeks.

            I think Scott is saying something very similar here.

  28. Brett says:

    2. Impressive stuff, especially the Wikipedia editor. Although I could see how he would reach 385 edits a day if he does multiple edits per minute (maybe he’s really fast at typing). If he averaged two edits per minute, that’s about 3.2 hours of editing per day. A lot more than the typical person, but not that much if he’s just doing it in his spare time after work in lieu of other activities.
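    The edit-rate arithmetic works out as stated; a trivial check (the two-edits-per-minute rate is the comment’s assumption, not a measured figure):

```python
edits_per_day = 385
edits_per_minute = 2  # assumed plausible for small, tool-assisted fixes

hours_per_day = edits_per_day / edits_per_minute / 60
print(round(hours_per_day, 1))  # 3.2
```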

    I’ve been watching a lot of Isaac Arthur SFIA videos lately. What’s the general commentariat view on them, among fellow watchers? A lot of it, of course, is speculative by nature.

    • mindspillage says:

      FWIW, multiple edits per minute is totally reasonable for Wikipedia editors who have extremely high edit rates, as it usually doesn’t involve doing much writing–they’re mostly making small fixes, often with the aid of bots. (Things that identify common spelling errors, for example, or that remove a mischievously inserted picture mere seconds after it shows up.) Not that it’s not impressive, but some of the more prolific writers don’t show up as highly in the statistics because they do their editing in large chunks and leave only a handful of log entries to show for it.

  29. BBA says:

    The discussion of one-hit wonders on a previous thread reminded me of an odd anomaly in the Billboard charts that went unexplained for over 30 years.

    Joel Whitburn is a music historian and statistician, and his company Record Research has put out comprehensive guides to every song and record ever listed on the Billboard charts. In the process, he amassed a collection including every record that ever charted on the Hot 100. The logical next step was the “Bubbling Under” chart that included songs that would be #101 and lower on the Hot 100. There were some amazingly obscure records that Whitburn was able to track down, but one that he couldn’t find any information on: a song called “Ready ‘n’ Steady” by a band called D.A., released by Rascal Records, which appeared on the Bubbling Under chart for three weeks in 1979. It peaked at #102 and then vanished; neither D.A. nor Rascal was ever heard from again. Whitburn tried in vain to track down any information about the band or the label, finding a few with similar names, but none that were active in 1979 or released anything called “Ready ‘n’ Steady.” Eventually he declared it had to be a hoax, perhaps deliberately planted by Billboard to ward off competitors from copying the charts, and deleted it from his statistics. The truth was a bit more interesting.

    In 2016 somebody checked the Library of Congress’s online copyright registry and found an entry for a song called “Ready & Steady” credited to D.A. Lucchesi and Jim Franks. Lucchesi passed away in 2005, but Franks was still alive, and one of Whitburn’s researchers got in touch with him. His story was that their band, actually called D.A. and the Dukes, made some kind of deal with an industry insider who claimed he could guarantee their song would chart. They recorded the song, but it was never released as a record. Instead, the promoter apparently bribed some record stores to list it on their weekly sales reports to Billboard. The song charted without ever selling a single copy or getting played on the radio. Obviously, D.A. and the Dukes weren’t able to turn this ruse into a deal with a real label, and vanished into obscurity. As for Rascal, that was the name of somebody’s dog.

    All this was first explained by the researcher on a local radio show in Minnesota in 2016, when the #102 “hit” from 1979 was played on the radio for the first time. Thanks to the magic of the internet, anyone can listen to what was once the ultimate rarity, and it sounds, well, like I’d expect a song that reached #102 in 1979 to sound.

  30. idontknow131647093 says:

    I recently took a Wonderlic test prior to a job interview. The test is obviously one designed so that the difficulty comes mostly from the time constraints (50 questions in 12 minutes), but it did have a very fun type of question that I think should be a genre we work with more: cube folding (not just cubes but other shapes as well).

    For reference they were similar to these cube folding questions

    Anyone else have experience with these? I think 3d visualization is a very interesting thing to test and teach.

    • Plumber says:

      @idontknow131647093

      “…Anyone else have experience with these?…”

      Yeah, I’ve taken quite a few tests like that for various jobs, probably the most significant one was in 1998 to be an apprentice plumber with UA Local 393, there was a test for arithmetic, some simple physics (“Will this get colder, hotter, or no change?”), some “mechanical aptitude” (“Turning gear A clockwise will result in gear F turning what direction?”), and a test for spatial relations that looked a lot like the one in your link.

      There were over 300 of us in the auditorium (I did a rough count while looking around at how many of us there were) and there were 65 spots available.

      When I got to the mentally-visualizing-unfolded-boxes test, the first problem I solved was how many seconds I had for each question. I quickly determined that I only had time to rule out some of the answers, and since a question left blank is always incorrect, I mostly just randomly picked among the two or three remaining of the five possible answers and filled out the bubbles as fast as I could. I think that near the very end of the test I slowed down enough to fully ponder a few of the questions, but mostly it was a matter of speed in filling out bubbles. Other than practicing that exact test, I really can’t imagine a way to teach how to do better at the visualization part, as I really can’t imagine any “lessons” beyond doing the test itself.

      I did much the same for the arithmetic portion, I’d just calculate enough to see how many decimal places the answer had, or whether it was odd or even, anything to make short cuts.

      My life would probably have been very different if I had been ranked #66 instead of #63.

      Thirteen years later I took the City and County of San Francisco test to be one of their plumbers, which was based more on knowledge than “aptitude”, and I was tied for rank #6 (out of about a hundred who qualified to take the test). But I’ve worked with some plumbers who I thought were much better at hands-on work than I am even though they did much less well on the test, so I don’t think the tests are that predictive of actual on-the-job performance; I just don’t think anyone wants to admit that and hold a lottery instead.

      • AG says:

        An audition-style process must be much more resource-intensive than a test, and evidently companies don’t find the improvement of results from an audition style to outweigh the costs.

        (Can you imagine the hilarious reverse, though, if jobs that currently use auditions used an aptitude test instead?)

      • idontknow131647093 says:

        Of course always answering is the best strategy on these tests. It’s always good to hear from people who have taken similar tests and done well.

        Personally I don’t think I guessed on the folding questions, they are kinda in my wheelhouse which is why I decided to post about them. It was intriguing and seemed like a skill that more people could use.

  31. ksdale says:

    Is the Slippery Slope Fallacy useful?

    Whenever I hear people bring it up, they always say something like “that’s just a slippery slope argument” as though that ends the discussion.

    But the Slippery Slope Fallacy doesn’t say that every extrapolation is false, just that ridiculous ones aren’t necessarily true by virtue of their being a possible consequence.

    But then we’re just back to arguing about which predictions are plausible, which we were doing in the first place. The phrase Slippery Slope Fallacy seems to me to be something that people trot out to avoid having to explain why a prediction is ridiculous, but that’s very often what the whole discussion is about.

    If a discussion is about all of the possible consequences of a certain action, then saying something falls victim to the slippery slope fallacy is just saying that you think that a consequence is too ridiculous to contemplate, without saying why, which doesn’t add much to a discussion.

    Besides that, people often use it to describe predictions that seem very much NOT ridiculous, simply because the predictions extrapolate from current trends… as if the predictions are necessarily false because they suggest a slope. Is that the Slippery Slope Fallacy Fallacy?

    • cassander says:

      I think it’s fair to argue that slippery slope arguments are fallacious in the sense of formal logic, but also important in the real world, which doesn’t respect the rules of formal logic.

      • albatross11 says:

        I think (I could be missing something) the fallacy version is basically arguing against the truth of claim X by claiming that its truth would have consequences Y. There’s a sort of fundamental error there, hopping between a claim about whether some statement is true and a claim about whether it would be good for it to be true. Example: We can’t be descended from apes, because that would mean it was acceptable for us to act like animals.

        There’s also a kind of argument where you could claim that if X were true, then that would imply Y, and Y is obviously wrong. But that can be a valid argument, and in fact, it’s basically how you do proof by contradiction. Example: There can’t be a last prime number, because if there were, you could multiply all the primes together and get a big number N, and then N+1 wouldn’t be divisible by any of the primes, only by itself and 1, which means it would be a prime bigger than the “last prime.”

        Finally, there’s a kind of argument where you say “If we do X, we will end up doing Y.” That may or may not be true, but you need to make the case for it. Example: If we legalize gay marriage, we’ll end up letting a man marry his lawnmower. (Maybe that’s true, and if it is, maybe we shouldn’t legalize gay marriage. But you need to explain how gay marriage leads to lawnmower marriage….)

    • The Nybbler says:

      Eugene Volokh has written a few articles defending the slippery slope argument

      http://www2.law.ucla.edu/volokh/slippery.pdf

      http://www2.law.ucla.edu/volokh/slipperymag.pdf

      I’ve found that often enough, a valid reductio is labeled a “slippery slope”.

    • Samo says:

      If people try to discredit your argument with an accusation of the Slippery Slope fallacy, it’s important to remind them that slopes are in fact slippery, or at the very least easy to stumble down.

    • John Schilling says:

      Slippery slopes demonstrably exist in the real world, are disturbingly common, and anyone suggesting the concept is fallacious needs to either volunteer to pay my entire income tax bill or be laughed at without mercy.

      Against a specific invocation of a slippery slope, it is often sufficient to point to the fence that guy Chesterton put up as a safeguard against going over the edge. And he built an awful lot of fences, if you look for them.

      • dick says:

        I don’t disagree in theory, but are you suggesting that income taxes are an example? It would seem like each new tax reduces the justification for future taxes. (And similarly, that each tax cut reduces the justification for future tax cuts)

        If not, if the reference to your income tax was just a colorful comment, what would be an example of an obviously-valid slope you’d think other people would concede to be definitely slippery?

        • John Schilling says:

          When the United States introduced the income tax in 1913, it applied only to the richest one percent or so of the population, at a rate of 1%, and going up to a whopping 6% for the 0.01%-ers. Strictly a way for making sure the rich paid their fair share. The idea that ordinary middle-class Americans would have to pay an income tax was absurd, as was the idea that even the rich would ever be asked to pay more than say a 10% tithe. Suggesting that once established, the tendency would be for the income tax to grow without limit in scope and degree, was widely dismissed as a slippery-slope fallacy.

          Within five years, the top income tax rate was 77%, and even the working class was being hit up for the 6% that was previously deemed appropriate for the richest of the rich.

          Your intuition is at odds with reality. Income taxes have been from the start a classic example of the slippery slope in action. And the exceptional cases of clawing our way any significant distance back up that slope, have generally required either the end of a World War or a strong leader with an ideological commitment to small government. So, yeah, if you want to argue that the slippery slope is a fallacy, I’ll send you my 1040 and you’ll send me a check.

          I might spot you that first 1%.

          • dick says:

            So, yeah, if you want to argue that the slippery slope is a fallacy, I’ll send you my 1040 and you’ll send me a check.

            I’m not arguing that slippery slopes are a fallacy. (Recall the part where you said “Slippery slopes demonstrably exist” and I replied “I don’t disagree”.) However, the slipperiness of income taxes in 1912 is not super relevant to me, as I only ever get asked to consider the propriety of changes to taxes in the present day.

            So, I’m happy to agree that, in 1912 America or some other place where income taxes don’t exist, and a very low income tax is being proposed, the argument, “Careful, that small income tax might turn in to a large one, it’s a slippery slope!” would be worth listening to. However, in a place like the modern-day US where income taxes exist and are well-established at some level, if there was a proposal to increase that tax by 1%, I think someone saying, “Careful, that small increase might turn in to a big increase, look at what happened in 1913!” would not be, for the reason already specified: the higher a tax is, the harder it is to make a persuasive and correct argument that it should be higher.

    • idontknow131647093 says:

      I agree with John above me quite a bit. I find the idea of a slippery slope fallacy applied very often outside of formal logic/philosophy, and it is invoked as a powerful rhetorical riposte by people defending a position that has no logical line of demarcation from an absurd position (which they may or may not actually desire).

      For instance, let’s say I am a farmer and my neighbor keeps driving his ATV across my field as a shortcut. I tell him to stop and maybe even get courts/law enforcement involved. And his defense is, “I barely damaged anything, you are just being a jerk.” Then I say, “no, you did cause a small amount of damage, and by your logic you could drive a truck across my field and there would be nothing wrong, or even a dump truck.”

      He can claim this is a slippery slope fallacy, but my counterpoint would be there is no logical difference, just one of magnitude. The fact is, his invocation of the slippery slope fallacy is actually an affront to logic in this situation because he has not articulated a logical stopping point where I would be able to take redress against his trespasses.

    • Aapje says:

      @ksdale

      It is useful in the sense that it can be unfair when a person argues for X, but the rebuttal is about Y, which is more extreme than X. It is not useful in the sense that slippery slopes do exist, so a person arguing for X actually has to be willing to argue whether his X may turn into Y.

      A good rebuttal to ‘Slippery Slope Fallacy’ is ‘Schelling Fence.’

    • Salem says:

      Our host has written quite memorably on this question.

    • Brad says:

      Leaving aside the formal logic question, simply as part of good discussion etiquette I think they should be avoided. Mostly because they are rarely someone’s true objection. If I propose policy X and you don’t like policy X tell me why you don’t like policy X, don’t spin some story about how putting in place policy X is going to lead to policy XXX which of course you and everyone else wouldn’t like. Unless this is one of those rare cases where that is your real and primary objection to X, in which case carry on I guess.

      • Rick Hull says:

        > If I propose policy X and you don’t like policy X tell me why you don’t like policy X, don’t spin some story about how putting in place policy X is going to lead to policy XXX which of course you and everyone else wouldn’t like.

        Eh, the argument is generally along the lines of: “Well, according to that principle, you would allow XXX as well. Assuming you wouldn’t allow XXX, what distinguishes X from XXX?” or: “Well, allowing X sets a precedent, and wouldn’t that precedent be used to defend XXX? If you agree XXX is bad, how can we stop X from leading to XXX?”

        i.e. It’s useful to explore the space between X and XXX and ideally arrive at a major distinction. The slippery slope argument is that there doesn’t seem to be a major distinction.

        • idontknow131647093 says:

          This is where separating logic arguments from daily life is important. In real life line drawing is typically the most important thing we do in negotiating relationships. Thus pointing out someone else’s refusal to draw a line should be a powerful objection to their actions.

        • Brad says:

          If we aren’t going to do formal logic (where slippery slope is a fallacy) then we probably shouldn’t do “according to that principle” either. This:

          “Well, according to that principle, you would allow XXX as well. Assuming you wouldn’t allow XXX, what distinguishes X from XXX?” or: “Well, allowing X sets a precedent, and wouldn’t that precedent be used to defend XXX? If you agree XXX is bad, how can we stop X from leading to XXX?”

          seems hard to distinguish from going on about ad hominem and appeal to authority.

          It’s quite common to support e.g. pot legalization and not heroin legalization and it is dishonest to pretend that the one is going to automatically lead to the other. Instead you should present your real objection to pot legalization, whatever that might be.

          • Aapje says:

            How is questioning the limits of adopting a new policy the same as an ad hominem and/or appeal to authority?? I fail to follow how you arrive at that conclusion.

            Let’s get specific with an example:
            P: Let’s allow euthanasia for people who suffer.
            C: What about people who suffer temporarily, like a fixable depression?

            Then from my perspective, these are (a non-exhaustive list of) valid answers:
            – We’ll exclude these cases by having rule X.
            – I also want those people to be able to get euthanasia.
            – I prefer to not have these people get euthanasia, but any rules against it would harm more people than it helps, so I accept these undesirable consequences of my policy.

            What is not a valid answer in my mind:
            – That will never happen!!11

            In fact, I see that latter answer as an appeal to authority, as the person implicitly makes a claim to know the future, without justifying their claim by explaining how they came to their conclusion.

    • Faza (TCM) says:

      Besides that, people often use it to describe predictions that seem very much NOT ridiculous, simply because the predictions extrapolate from current trends… as if the predictions are necessarily false because they suggest a slope. Is that the Slippery Slope Fallacy Fallacy?

      In common “internet usage” (and informal offline use as well, though I see a lot more of appeal to fallacy online than off), one man’s Slippery Slope is often another man’s Modus Ponens (mercifully, the relevant Wiki article actually does make a note of this).

      It is perfectly possible to chain any number of valid MP arguments into a single argument that looks for all the world like a Slippery Slope – and is formally valid, as long as all of the chained arguments are valid. To assert Slippery Slope Fallacy with regards to such an argument is itself fallacious.
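      Such a chain can be made fully explicit; as a minimal sketch (with hypothetical propositions P, Q, R, S), e.g. in Lean:

      ```lean
      -- A three-step "slope": each implication is a valid modus ponens step,
      -- so the composed end-to-end argument from P to S is valid as well.
      example (P Q R S : Prop)
          (h1 : P → Q) (h2 : Q → R) (h3 : R → S) (hp : P) : S :=
        h3 (h2 (h1 hp))
      ```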

      The grey area appears where the argument chain is plausible on its face, but contains suppressed premises. That is: if the slope points to one possible future, but not necessarily the only possible future. Such a chained argument won’t be valid, but pointing this out won’t be particularly helpful. We would do better to instead focus on where the suppressed premises lie and to introduce them into the argument explicitly. That way we’re arguing about the merits of the argument, rather than trying to score cheap points.

      • thevoiceofthevoid says:

        Everything worth talking about contains suppressed premises. The real world is complicated enough that if you’re not talking about pure mathematics (…scratch that, just remembered Eliezer’s sequence on numbers), you’re making assumptions to begin with. Nothing interesting you say about contemporary politics (or nearly anything else) is going to be formally valid.
        Formal logical reasoning can give us hints towards what might be a convincing argument and what might be a shady one, but it won’t prove anything outright unless it’s so obvious that you don’t need to think about formal logic in the first place.

    • S_J says:

      Any individual usage of the Slippery Slope Fallacy in an argument may be valid.

      But every successive use increases the odds that the next person to use that form of argument will use it to produce an invalid argument.

      Thus, the Slippery Slope Fallacy recursively generates scenarios in which it becomes less useful as a form of argument.

      Almost as if there is a Slippery Slope involved in the use of the Slippery Slope argument.

      • John Schilling says:

        Yes, but every successive rejection of the slippery slope “fallacy” increases the odds that the next person will improperly reject a valid slippery slope.

        So does this slippery surface slope both directions at once?

    • Paul Brinkley says:

      My usual approach to possibly slippery slope arguments is to ask what the limiting principles are.

      Many slippery slopes have such limits. For example, there’s a naive argument in economics that, since the incentive of a company is to maximize profits, then the obvious strategy is to raise its prices higher and higher. The limiting principle to this, of course, is that customers only have so much they can offer in exchange; also, the higher your price, the higher the chance that a competitor will offer a lower one and steal your trade.

      A slippery slope is truly slippery if there are no such principles. It may also be sufficiently slippery if said limits exist so far out that harm may result in the mean time.

      Take John Schilling’s allusion to income tax as a less naive example. The state has an incentive to take as much of my income as possible. One limiting principle is that if it’s too high, I’ll move somewhere where the income tax is lower. But that’s very hard to do, and may require me to live in a place that is undesirable in enough other ways that I end up not moving. One of the biggest reasons for this might be that all the people I like live close to each other. If we’re all interested in moving, we might all end up in different places, either because of slight differences in location preference, or worse, lack of any one place with enough room to accommodate all of us, so we never move, which permits the state to exact a tax so high that the only way to break it is to be separated from our friends.

      So, an inquiry into limiting principles strikes me as a useful objective way to judge whether a slope is indeed slippery, and even how slippery it really is.

    • Randy M says:

      Slippery slope is not a fallacy, but it is not always applicable.
      A change could be a step down a slippery slope in terms of principles or in terms of preference, or neither or both.
      For example, someone who argued for gay marriage because they were gay probably has no similar incentive to argue for incest, and will probably not push for it. But if they are trying to persuade people that objections to homosexual marriage are based on an out-of-date taboo that interferes with love and free association, the argument for the former does lend some support to the latter (although of course there are other arguments against incest, at least heterosexual incest), so I think it’d be fair to call it a slippery slope.

      On the other hand, someone arguing against sales tax because it harms small business and unfairly advantages out of state internet retailers may well be an anti-tax activist who will go on to oppose all other taxes, perhaps even gaining future successes due to this one, but the argument employed doesn’t logically lead to the conclusion that all tax is bad.

    • Drew says:

      “Slippery Slope” seems to cover a bunch of unrelated scenarios. One argument looks like:

      Claim: “We should adopt [reasonable bill] because of [badly-phrased principle]”

      Response: “But [badly-phrased principle] would lead to [extreme outcomes].”

      Rebuttal: “No one is advocating for [extreme outcomes].”

      A decade ago, this might play out in the object level as:

      “We should [legalize gay relationships] because [consenting adults should be allowed to do anything they want in their own homes].”

      “But saying [consenting adults should be allowed to do anything they want in their own homes] would require us to [legalize cannibalism]”

      “No one is advocating for [legalizing cannibalism]”

      In this case, the original claim doesn’t have a valid argument; they don’t actually believe their [badly-phrased principle]. So, the rebuttal works on a technical level.

      At the same time, the rebuttal is unsatisfying, since the original speaker probably believes a variant of [badly-phrased principle] with some more qualifications and nuance that they considered too obvious to state.

      The original claimant probably feels like they’re getting filibustered (ugh, “OK, so there are some cases, like mental illness, where we don’t care about consent, but what does that have to do with this?”) and the person replying feels like they’re being asked to accept an obviously-wrong argument for a dubious position.

      • Aapje says:

        Your example is one that is invalid, but what about:

        “We should [legalize gay relationships] because [consenting adults should be allowed to do anything they want in their own homes].”

        “But saying [consenting adults should be allowed to do anything they want in their own homes] would [force companies to bake cakes celebrating gay marriage]”

        “No one is advocating for [forcing companies to bake cakes celebrating gay marriage]”

        • arlie says:

          *sigh* This smells totally wrong.

          Culturally, we’ve followed a trajectory like: “gay is terrible” to “gay is illness” to “gay is substandard but don’t abuse them” to somewhere between “gay, straight, what’s the difference” and “gay is an oppressed minority, deserving more protection than straight”.

          The “must bake cake” case can be opposed for a variety of reasons, many of them consistent with some of the reasons for decriminalizing homosexual behaviour, and then eventually allowing same sex weddings.

          But it basically comes out of the overall cultural change, not the freedom and self-determination argument. And it requires an additional cultural change that has very little to do with any argument based on freedom.

          • Aapje says:

            But it basically comes out of the overall cultural change

            Sure, but people only became interested in fighting to force people to bake cakes for gay marriage once gay marriage was legal.

            So then the slippery slope argument is valid, because if people had been able to delay or prevent gay marriage, forcing people to bake cakes would have been delayed or prevented.

            The only requirement for the slippery slope to be real is that two changes are not independent, but that preventing or delaying one, prevents the other.

            You seem to invoke a sort of inevitability argument, that treats these changes as something destined and unpreventable. I think that you are then not arguing against the slippery slope existing, but instead arguing that the slope is already so steep that resistance is no longer possible.

            If anything, that is a defense of slippery slopes, because the slippery slope argument is that once you allow certain changes to happen, further changes in the same direction become unstoppable.

            That this comes from cultural change, rather than law, is not a proper rebuttal IMO, because:
            – law impacts culture
            – the slippery slope argument isn’t limited to law, but is often applied to cultural changes

          • arlie says:

            @Aapje

            Not inevitability. The whole idea of enforcing ‘fairness’ of any kind, in the way that applies to the cake case, was pretty much inconceivable, at the same point in time as being gay was treated as sick/criminal/bad.

            Even if homosexuality per se were still being treated that way, there’d be lawsuits about e.g. a barber whose religion doesn’t allow him to touch members of the opposite sex politely requesting that a female would-be customer find another person to cut her hair, and recommending several colleagues.

            That’s the point I’m trying to make.

          • Aapje says:

            The whole idea of enforcing ‘fairness’ of any kind, in the way that applies to the cake case, was pretty much inconceivable, at the same point in time as being gay was treated as sick/criminal/bad.

            I don’t like your framing (which you yourself seem to consider questionable, given your application of quotes), because it unreasonably vilifies people of the past (and deifies people of today). Back then, people generally did believe that they were being as fair as they could be, but based their opinion on incorrect information, had to deal with constraints that no longer exist, etc. Today, people also tend to believe that they are being as fair as they can be, but often do base their opinion on incorrect information, make choices with definite downsides to certain groups because of certain constraints that may disappear tomorrow, etc.

            Anyway, if I understand you correctly, you argue that the adoption of a certain belief, for example that gay relationships don’t damage society, inevitably leads to people not punishing openly gay people, then to gay marriage and then perhaps to forcing bakers to sell gay marriage cakes.

            If that is your argument, then I agree that some changes have become unstoppable, because they almost inevitably result from changes that have already happened. However, I repeat my claim that you are then arguing for the existence of slippery slopes. You are merely arguing that for this specific example, gay marriage is on the very steep section of the slope, where resistance is ultimately futile.

          • DavidFriedman says:

            I don’t like your framing (which you yourself seem to consider questionable, given your application of quotes), because it unreasonably vilifies people of the past (and deifies people of today).

            I would say precisely the opposite, minus the “unreasonably.”

            The issue isn’t a change in what people think is fair, it’s whether treating people fairly ought to be enforced. The alternative to enforcing fairness is a regime of individual rights and free association, in which my choosing not to hire you, work for you, sell to you, befriend you, marry you, may be judged as unfair and a reason for others to think less of me, but is entirely up to me to decide.

            In my view, the abandonment of that in favor of the increasingly common presumption that if I don’t have an approved reason for my choice I ought not to be permitted to make it is a very large step backwards.

          • RobJ says:

            In my view, the abandonment of that in favor of the increasingly common presumption that if I don’t have an approved reason for my choice I ought not to be permitted to make it is a very large step backwards.

            I think it’s more, “here are a few disapproved reasons” rather than, “must have an approved reason”, but I suppose one could view that as a slippery slope if they were so inclined.

          • albatross11 says:

            I’m coming more and more to share David’s view here, because of two issues: conscience and good decision-making.

            In order to enforce fairness across protected groups, we end up having the state demand that people do things that violate their most deeply held beliefs. If you think gay marriage (or interracial marriage, or polyamorous marriage) is morally wrong, you’re still required to support that wedding–bake the cake, photograph the wedding, cater the reception, etc. Having the state coerce people to violate their conscience seems like a really bad thing to me–enough so that it should only be done in the most critical or fundamental cases. It’s a better world when people who think abortion is infanticide don’t have to perform abortions, and people who think capital punishment is murder don’t have to sell the lethal drugs to the state to carry out the executions, and people who think gambling is wrong don’t have to work in casinos, and so on.

            The other side is that the person making the decision in a given situation (whom to hire, whom to rent to, etc.) often has a strong incentive to make good decisions. It’s very easy for anti-discrimination law to override their good decisions with ones that don’t make sense, but sound good or win votes. And you are almost certainly making better decisions when you just have to make the decision than when you have to justify those decisions before some kind of potential lawsuit or enforcement action.

            [ETA]
            A major problem with the current approach is that it eliminates the “no skin off my nose” approach to justifying freedom for people you don’t like or don’t agree with. If you want to legalize gambling, and I think gambling is immoral and a very bad idea, you may be able to get me to accept it by arguing that it’s no skin off my nose–someone else will go gamble their paycheck away, but I don’t have to be involved, so why should I fight against it. But once it’s clear that I will be forced to be involved–maybe it will be illegal for me to refuse to provide services to casinos even if I think they’re pure evil–then that argument goes away. If I don’t get to have my own private morality that’s different from the public enforced morality, then I’m going to vote to keep the public enforced morality as close as possible to my own private morality, in self-defense.

          • The Nybbler says:

            I think it’s more, “here are a few disapproved reasons” rather than, “must have an approved reason”, but I suppose one could view that as a slippery slope if they were so inclined.

            Especially since we’ve already slid down it. Nowadays to prove you didn’t do something for a disfavored reason if it plausibly could have been, you need to positively demonstrate there was an approved reason.

          • Aapje says:

            @DavidFriedman

            I think that it is universal that people see ‘fairness’ as a combination of obligations and entitlements.

            For example, I strongly suspect that if you see a child getting beaten (harshly) by a parent, you will believe that someone (yourself, another bystander or the police) should intervene. So there is an entitlement by the weak party and an obligation by a stronger party.

            Libertarians/classic liberals tend to believe that these obligations should not or less often be enforced by law, but either should be voluntarily adopted or should be enforced through non-legal means (like cultural norms, boycotts, etc). However, the (lack of) enforcement mechanism is orthogonal to the point I was making.

            My point was that the exact obligations and entitlements favored by people at a certain point in time & space are greatly dependent on circumstance (although norms tend to adapt to changed circumstance with a delay). For example, in places and times without pensions and social security, there tends to be an obligation on children to take care of their elderly parents and an obligation on parents to have children. Then there is usually also an obligation for society/family to care in some way for those who cannot find a partner and/or have children. The combination of these norms results in an informal social security system for the elderly, but it is heavily dependent on most people having (enough) children.

            I suspect that disapproval of gay people is at least in part because their natural infertility disrupts this system.

            Nowadays, we do have pensions and social security. Most likely, this is not so much because we became smarter or more social, but because we got richer, so became able to afford these programs (while the previous informal social security system was less based on wealth transfers, but more on children letting their parents live in the same house and such, which works better for fairly poor societies than direct wealth transfers). Similarly, I suspect that at least part of the disapproval of gay people disappeared because circumstances changed and the childless are now far less of a drag on society.

            So, then the changed attitude towards gays is not so much a case of enlightenment, but because circumstance has given us the luxury of changing our views. So as modern people, we should then not be smug about how we adopted ‘fairness,’ while the previous generations favored ‘unfairness.’ Instead, the reality is much more that the most fair choice now (let gay people partner up) was much less fair in the past (because then it meant that society had to make a lot of extra sacrifice to deal with the consequences, whereas now gay people tend to fairly comfortably be productive enough during their working life that they pay more than enough into the system to pay for them to be cared for when they stop working).

            Understanding the above has repercussions for all kinds of idealists. Conservatives should recognize that changed circumstance means that old solutions may no longer be optimal. Progressives should realize that their ideals are bound by (material) circumstance and that utopia thus cannot be forced without material advancement. It’s not just a matter of enlightenment. Libertarians should recognize that the ability for people to do highly individualized decision making well is heavily dependent on people being wise and altruistic in their individual choices, which in turn is heavily dependent on various circumstance, where a lack of wisdom and altruism in individualized decision making can change these circumstances to reduce the wisdom and altruism of a subset of society. Etc.

    • nameless1 says:

      Slippery slopes are fallacious only if no mechanism of slippage is shown. Take the common argument that changing a law to allow X, formerly banned, will lead to allowing Y: a fairly obvious mechanism can be shown, namely that allowing X will be used as a precedent in arguing for allowing Y, with Y argued to be analogous to X or a special case of X. People against Y will thus have a harder time. This mechanism is not only obvious, it is in fact empirically used all the time. A decade ago I saw a cartoon supporting same-sex marriage, with a person saying “people are saying our marriage would be unnatural” and the parents, who turn out to be of different races, replying “they used to say the same thing about ours.” I.e., the case for same-sex marriage is made stronger by the earlier allowing of interracial marriage, by arguing that the cases are similar. So the slippery slope between them not only has a mechanism, the mechanism is used all the time.

      This can be called “foot in the door,” “pushing gradual reforms,” or “slow frog boiling,” and perhaps my example was too political and too liberal; in fact the right uses it too, and there are many non-political cases of it as well. It is all about wearing down resistance gradually. This slippery slope mechanism is not only possible but in fact in use. When I was a child my mother used to say “I give you my little finger and then you demand my whole arm.” I used this tactic as a child largely instinctively. If she said no more chocolate for me today and I demanded a big piece, I couldn’t win it. But I could ask for a tiny piece, and when I got it, another tiny piece…

  32. Sandeep says:

    I realize I probably am asking something that is so well-known to be taken for granted in this community, but if some kind soul can help out that would be great.

    Basically, I see some very smart people without any background in biology or medicine having picked up practical and often technical knowledge that actually helps them when they or their loved ones fall sick. This includes not just specific knowledge but also some intuition about how things work, that they can in some situations use to whittle down the range of possibilities in a given situation.

    Is there a systematic way to self-cultivate on this for someone with zero background in medicine? Say, in the form of a university-type course or a reading list? Thanks.

    • Armand says:

      Not sure how useful this answer is, but I’d say read lots of the primary literature (sci-hub is your friend here!). Generally the introductions will have a lot of good background material that you can follow up on. It will be very slow going at first, since there will be a lot of academic jargon you have to look up.

      Also, less formal sources such as blogs or lectures are often very useful for setting up an intellectual framework of understanding for the disease area, which you can then fill in more thoroughly with the literature.

      • Sandeep says:

        Thanks! There’s too much information out there, though. A slow reader especially (one who takes forever to read just one of Scott’s posts) may have difficulty figuring out a moderately efficient path without more concrete/narrow lists.

    • albatross11 says:

      Maybe read through the home edition of the Merck Manual? It’s pretty readable and gives a basic background on what’s going on with major organ systems and what various kinds of diseases look like. But I’m not any kind of medical person–I wonder what the medically trained people here think of this.

      • Sandeep says:

        Awesome! Thanks very much. Will wait to see if someone else comments and accordingly take a look/get a copy.

    • Garrett says:

      Thoughts from my own background: I started taking an EMT class in my late 20s having never taken a biology course before. EMT textbooks are accessible at the ~8th grade reading level and include very basic information on anatomy & physiology, pathology of disease, etc. I don’t remember how much pharmacology that textbook goes into.

      If you take an EMT course at a community college, you’ll get both didactic experience as well as training on implementing basic emergency care. And then the part which I found the most valuable, the clinical component. Getting to see a number of conditions up close really helps put into perspective injuries that a lot of people encounter.

      Additionally, I found taking an intro to bio course and Anatomy & Physiology 1 & 2 very valuable in terms of really getting the fundamentals down.

      Medicine as a subject (to whatever depth) roughly involves:
      * Knowing what your body has (anatomy) and does (physiology) at all different scales.
      * Knowing how things can go wrong (pathophysiology) and the results (disease).
      * Knowing how treatments can work to address stuff not working (pharmacology, surgery) plus adjacent care (nutrition, physiotherapy, etc.).

      Increasing your knowledge of any/all of these will improve your ability to understand medical conditions should they arise.

    • Cheese says:

      A systematic way? Probably not, really, unless you want to go nuts and do a year-long (minimum) kind of course. It depends what your objectives are. Is it to gain a general understanding of physiology or pathology? Or is it to know where and how to acquire specific knowledge should you require it in the future?

      If your objective is to get a general overview of how physiology or pathology works, then there are a ton of free or easily obtainable resources for that. I’m going to disagree with others and say definitely don’t read the primary literature if you’re after a general idea of things. If you have a very specific or rare problem, then yes, it can be helpful. Why do I say that? Because the primary literature is really bad at giving you simple overviews and often leads you to misleading or outright incorrect conclusions if you don’t have a reasonable background in an area. You see this every now and then: a lot of smart people get obsessed with a particular study or set of studies and end up being wrong. Doctors included. I would recommend starting with the resources typically used by first-year medical students, as you should be able to understand them well enough and many are laid out in an excellent fashion. A first-year-targeted textbook like First Aid for the USMLE Step 1 will basically sort you out without going into too much detail.

      If your objective is only to acquire specific knowledge when you need it, then I’d probably say don’t bother with that so much. If you encounter a specific issue, a good start would be to look at free medical-student-targeted resources in a focused manner, to make a bit of sense of the simplified version of the physiology or pathology you’re interested in. Curated resources targeted at medical students/professionals like UpToDate etc. can be useful, and some have patient information sections, but most of it is paywalled. From there you will be better equipped to dive into the primary literature and reviews relevant to the specific problem.

      For the layperson, if you’re interested in something specific, the resources that tons of 1st/2nd year medical students use can be quite helpful. Khan Academy (simplified, good for basic physiology), OnlineMedEd (more clinical student level targeted), Pathoma (not free, easily obtainable), DrNajeeb lectures (super cheap), Osmosis (not free, good all-rounder). Youtube channels like Armando Hasudungan are good starters too.

      I’m not sure what you’re really asking tbh. I hope that helps in a general sense.

      In a more specific sense, if you have a problem that you’d like to find out more about, there is an association for basically every disease now. A lot of the bigger ones publish an absolute ton of patient education material and even treatment guidelines. Half the time those end up being the broad guidelines under which the medical field operates professionally.

  33. FMD4CP says:

    Hello, I’ve had a big improvement in my cerebral palsy after doing a fasting-mimicking diet (FMD). This Reddit post has an overview from a year ago. I’ve just put two videos on Twitter. They’re short: one is two minutes on balance and the other is forty seconds on my hand.

    I think the stem cells produced during the FMD might have something to do with it. There’s research ongoing into stem cell treatments for cerebral palsy, and into using the FMD for neurodegenerative conditions like Parkinson’s and Alzheimer’s, but I’ve found almost nothing for fasting and acquired brain injuries like cerebral palsy or stroke. This study, “Prolonged Fasting Improves Endothelial Progenitor Cell-Mediated Ischemic Angiogenesis in Mice,” might have some bearing on it, but I don’t know.

    Whatever the cause, I was diagnosed with cerebral palsy and my condition has markedly improved. The FMD is worth looking at as something that could potentially help other people with a cerebral palsy diagnosis.

    • nameless1 says:

      When I try to google FMD, all I get is basically “buy and eat these bags of food” (maybe Prolon), so basically a product, not a set of ideas for designing yourself an FMD. I have no such conditions, so I would mostly use the FMD to lose weight and get healthier, but I absolutely don’t like the idea of eating out of bags that arrived through the mail. It is too dependent. I want a diet I can base on my local grocery store or farmers market.

      • sunnydestroy says:

        I’m replying to you very late so you probably won’t see this, but I may as well do this for the benefit of the future viewers.

        The Prolon FMD product is the most evidence-based, clinically trialed version of the diet. It’s made so you can’t mess it up safety- and efficacy-wise, and is supposed to contain certain ingredients to enhance the fasting effects.

        In his book The Longevity Diet, Valter Longo (the inventor of the FMD) does give a general DIY recipe if you want to do an FMD, though he cautions that you should show it to a dietitian and have them create a plan for you.

        If you google around, some people have uploaded scans of the relevant pages.

  34. Freddie deBoer says:

    Unpopular opinion: Occam’s actual Razor says “do not multiply variables unnecessarily” and the rampant popularizations and generalizations of the idea have so distorted it that it’s no longer a useful heuristic.

    • Lillian says:

      The fact that other people are using a heuristic incorrectly does not mean the heuristic is not useful. At worst it might mean that using the heuristic to convince other people isn’t useful, but you can neatly sidestep that by simply not invoking its name. You can still say, “Both our explanations cover all the facts, but yours is unnecessarily convoluted with lots of extraneous terms that provide no increased predictive value.”

      That said, Occam’s Razor can sometimes lead you a bit astray. Copernicus’s model of the solar system is actually a classic case of multiplying terms unnecessarily. It has epicycles like the ancient Ptolemaic system, and in fact twice as many, even epicycles on top of epicycles; but hey, it dumped the equants, which gave it one less extraneous term. Unfortunately, aside from all the extra epicycles, it introduced a whole slew of new difficulties by positing a moving Earth, which was a bit of a problem in a time when neither stellar parallax nor the Coriolis effect had been observed.

      It did, however, have greater predictive value, since the observed phases of Venus would have been impossible under the Ptolemaic system but were possible under the Copernican. However, the competing Tychonic model was exactly as predictive, but lacked any of these extraneous terms: no epicycles, no random orbital centres, and no need to explain the absence of observed parallax or Coriolis effect. Applying Occam’s Razor in the 17th century would thus have led a rational person to prefer the Tychonic model over the Copernican.

      Then along came the Keplerian model, which provided the most predictive value through the innovation of mating heliocentrism with elliptical orbits. It still had extraneous terms, since nobody could provide any evidence of the Earth’s motion until 1728, nor had anyone directly observed it until 1806, but it was nonetheless the model that best matched the observed movements of the planets. It is at this point that Occam’s Razor would lead us back onto the right path.

    • albatross11 says:

      Occam’s razor needs a notion of which explanation is simpler, but often what seems simpler is a matter of which other ideas/concepts/models of the world we have agreed on. Is a parabolic trajectory for a cannonball a simple model? It depends on whether we already share a bunch of geometry and algebra.

  35. sentientbeings says:

    In Search of a Word

    I’m searching for a word that describes a certain idea. I don’t think it exists in English, but I have had enough luck finding wonderful words in other languages* that I have some hope there is one out there. If not, I’m open to invented suggestions.

    The idea is as follows: consider having a hero whom one holds in high regard precisely for a point on which one disagrees with them (it doesn’t have to be a hero specifically, but there should be a significant amount of respect). I think there should be a word for it.

    One example for me is Robert LeFevre. LeFevre was a pacifist and I don’t agree with his pacifism, but I do greatly respect him and his position. Beyond that, I desire a state of affairs in which more people like him exist, even if they displace people that agree more closely with me. I have a few reasons for that, although I’ll skip the details.

    Is there such a word?

    * My favorite word in another language is the Portuguese word “saudade.”

    • Conrad Honcho says:

      Principled?

    • As you may know, LeFevre was part of the model for Prof in The Moon is a Harsh Mistress. He was charismatic but not intellectually all that able. I remember a talk where he had Bonny Prince Charlie off by two generations. And another where he was trying to explain how one could enforce rights without the use of force, not very successfully, until Dirk Pearson took the podium away from him and gave a much better defense of his position.

      • sentientbeings says:

        LeFevre was part of the model for Prof in The Moon is a Harsh Mistress

        I did not know that, and it makes me like LeFevre even more.

    • Hoopyfreud says:

      Paragon?

    • AG says:

      Kantian respect?

    • Telminha says:

      I hope you find your word. I can’t think of anything that would be appropriate or clever; but I wanted to say that “saudade” is also one of my favorite words, although it can be a painful one sometimes.

      I would post a link to this, but it wouldn’t work without a Tumblr account:

      “Saudade,” as Defined by our Brazilian Community — Duolingo

      We asked our native Portuguese speakers to define “Saudade,” a word that has no direct translation in English. From those responses, we crafted a crowdsourced definition for all our non-Portuguese speakers (and here it is below)!

      To our surprise, our community responded not only with informative translations but beautiful allusions and declarations, generating interesting discussions about the true meaning of this very Brazilian concept.

      Here are some of our favorite responses! 

      “Saudade, it is all that is left of you. Memories of how happy we have been, memories of how angry you left me sometimes. The things that we have built and destroyed together. Oh, and how many times we had been destroyed and hurt and I pulled you up like all the times you pulled me up. Everything that we have shared, all the saddest times, all the happiest times, all the weirdest times, all the funniest times. Now you are gone, and I don’t have a word to express my feelings; it is not sadness because you gave me happiness, it is not happiness because you are not with me now. I have no word to express myself to you, unless if you know what saudade means.” – Aaron Lionel

      “Saudade means Gisele.” – Max Pereira on Facebook, prompting an interaction with the lady he felt saudades for.

      “Based on what the Germans say (“heimweh”), you not only miss something/someone, but it hurts. Or “sehnsucht”, when you not only miss, but you kind of keep looking for them.” – Juliana Borges

      “There are words describing a similar feeling in many other languages: one example is “banzo”, that was the word used by the African slaves to express the feeling of missing their families/homes/country.” – Marcus Stolarski

      Thank you to all 180 people who helped us!”

      You may find this interesting. My favorite.

      • sentientbeings says:

        I enjoyed your response. Thanks for linking to the Dictionary of Obscure Sorrows. I hadn’t heard of it before.

  36. Anon. says:

    My favorite thing about the Odyssey: Ancient Greek doggie bags.

    10.215:

    in the way
    that dogs go fawning about their master, when he comes home
    from dining out, for he always brings back something to please them;

  37. Le Maistre Chat says:

    What a great ocean view Odysseus and Penelope’s house on Ithaca had. But why does she only have a loose brick for a footstool?

  38. caffeinezombie says:

    New Jersey voter here. Generally left-leaning, but I’m considering voting for Bob Hugin, the Republican candidate, as his opponent is well known to be corrupt. I don’t know anything about Hugin except that he was the CEO of Celgene and that he has flip-flopped on Trump: he donated to his campaign but is now distancing himself from him (both of which sound like weak signals and are probably posturing).

    First, can anyone who knows the pharmaceutical business comment on Celgene? I read somewhere that Celgene has done sketchy things (it has apparently sharply raised the prices of cancer drugs, and settled both an antitrust and a fraud lawsuit). Without going into the ethics of pharmaceutical companies in general, is this behavior normal in the business (or was it at the time)?

    Second, how much can Hugin’s claims of bipartisanship be trusted? Posturing aside, how likely is he to vote against mainstream Republicans?

    Hope this is ok to post and doesn’t turn into a Trump/tribal warfare thread.

    • Conrad Honcho says:

      This is supposed to be the thread free of “hot button political issues.” “Who do I vote for” is a pretty political issue.

    • michelemottini says:

      I knew a Celgene executive who once bragged to me about how they set up a subsidiary in Switzerland running their production plant there, with a deal not to pay any Swiss tax, and then ‘bought’ the manufactured drugs from that subsidiary at inflated prices, so that all the profits ended up there and they paid little or no tax in the US. That would be accounting/tax fraud, but I bet they have excellent tax lawyers, accountants, and lobbyists, so they get away with it. So Celgene = pretty bad. I don’t know anything specific about Hugin himself, but as CEO he is responsible for those shenanigans.

      • Mark V Anderson says:

        That’s not fraud at all. Everyone does their best to minimize their taxes. What the company is doing is trying to minimize its tax by booking most of the profit from an intercompany sale in the jurisdiction with the lowest tax rate. Sounds like sound tax planning to me. Well, it might be fraud if they are hiding facts from the IRS or lying to them, but that information is unknown here. And the only thing that makes it sound like fraud is the phrase “inflated prices,” which is a very subjective judgment. It is kind of interesting that an exec would say that, but I’ve seen plenty of execs who didn’t understand tax rules and might misunderstand what is going on. So basically this is a pretty slender thread to hang a Senate candidate on.
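
The mechanics of booking profit in the low-tax jurisdiction are easy to see with toy numbers. Everything below is hypothetical for illustration, not Celgene's actual figures or tax rates:

```python
# Toy arithmetic for the profit-shifting scheme described above (all numbers
# invented): a low-tax subsidiary "sells" the drug to the US parent at an
# internal transfer price, and whoever books the profit pays the tax on it.

def total_tax(transfer_price, production_cost=10.0, retail_price=100.0,
              subsidiary_rate=0.0, us_rate=0.35):
    """Combined tax on one unit, as a function of the internal transfer price."""
    subsidiary_profit = transfer_price - production_cost  # booked abroad
    us_profit = retail_price - transfer_price             # booked in the US
    return round(subsidiary_profit * subsidiary_rate + us_profit * us_rate, 2)

# Arm's-length price: most of the $90 margin is taxed in the US.
print(total_tax(transfer_price=20.0))  # 28.0
# Inflated price: almost the whole margin escapes US tax.
print(total_tax(transfer_price=95.0))  # 1.75
```

The whole dispute is then over whether the chosen transfer price is defensible, which is exactly why the phrase “inflated prices” is doing all the work in the comment above.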

        • cassander says:

          As I recall, for many years Microsoft filed its taxes as if it were a company whose value was almost entirely created by a factory that made plastic disks in Puerto Rico.

        • dick says:

          Everyone does their best to minimize their taxes.

          This is certainly common but it’s not ubiquitous.

    • broblawsky says:

      Regarding Celgene: under Hugin, they were responsible for hundreds of millions of dollars of Medicare and Medicaid fraud. They did so by flooding the country with hundreds of sales reps who promoted their thalidomide analogue as a cure for various cancers for which it had no proven therapeutic value. Considering how dangerous thalidomide is, not just for pregnant people, but for anyone with any kind of cardiovascular condition, I’d be willing to bet that Celgene killed at least a couple of cancer patients. They were only forced to clean up their mess after a whistleblower showed how bad the problem had gotten.

  39. Le Maistre Chat says:

    Dungeons & Dragons thread, subtype Monstrous Manual.

    The basilisk is a monster whose earliest surviving source is Pliny’s Natural History. It immediately follows the description of the catoblepas, where he says “all who behold its eyes, fall dead upon the spot” and follows up with:
    “There is the same power also in the serpent called the basilisk. It is produced in the province of Cyrene [coastal eastern Libya], being not more than twelve fingers in length. It has a white spot on the head, strongly resembling a sort of a diadem. When it hisses, all the other serpents fly from it: and it does not advance its body, like the others, by a succession of folds, but moves along upright and erect upon the middle. It destroys all shrubs, not only by its contact, but those even that it has breathed upon; it burns up all the grass, too, and breaks the stones, so tremendous is its noxious influence. It was formerly a general belief that if a man on horseback killed one of these animals with a spear, the poison would run up the weapon and kill, not only the rider, but the horse, as well.”

    Somehow D&D decided that a death gaze and extremely toxic breath/blood should be changed to “a gaze that enables them to turn any fleshy creature to stone; their gaze extends into the Astral and Ethereal planes.”
    (I would have liked to be in the campaign where they codified this. “Haha, the basilisk’s gaze doesn’t work because I’m astrally projecting!” “It gazes into the astral plane.” “Haha, now it has no effect because I’m on the Ethereal plane!” “Save against petrification.” “… dammit, Gary!”)

    Also, “12 fingers long” has changed into “7 feet.” There’s also an even bigger Greater Basilisk, a “gotcha” monster where “Even if a polished reflector is used under good lighting conditions, the chance for a greater basilisk to see its own gaze and become petrified is only 10%, unless the reflector is within 10 feet of the creature” and a Dracolisk, the offspring of the “gotcha” version and a Black Dragon.
    Why do only black dragons have sex with them? I don’t knooow.

    • ing2 says:

      The basilisk is one of several monsters which I don’t like to use because their effect is so binary. Either:
      * Surprise, you’re turned to stone! Nobody knows the spell to fix it. I guess you’d better roll up a new character!
      * Surprise, you’re turned to stone! But somebody does have the spell to fix it, so it doesn’t matter.
      * The basilisk glares at you but you make your save! Then you kill it easily, because the stone gaze was the only really dangerous thing about it.

      Recently, when I ran a module that called for a basilisk, I houseruled it from “save or turn to stone” to “take dexterity damage, and if your dexterity drops to zero, turn to stone”. I thought that was a nice compromise between having something happen and not having someone lose their character.
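
For what it's worth, the difference between the two rules is easy to simulate. Here's a minimal sketch with made-up numbers (a DC 13 save with a +2 bonus, 1d4 Dex drain against Dex 14), not any official stat block:

```python
import random

random.seed(1)

def save_or_die(rounds, dc=13, save_bonus=2):
    """Vanilla basilisk: one failed d20 save and you're a statue."""
    return any(random.randint(1, 20) + save_bonus < dc for _ in range(rounds))

def dex_drain(rounds, dex=14, dc=13, save_bonus=2):
    """Houserule: a failed save costs 1d4 Dex; stone only at Dex 0."""
    for _ in range(rounds):
        if random.randint(1, 20) + save_bonus < dc:
            dex -= random.randint(1, 4)
            if dex <= 0:
                return True
    return False

trials, rounds = 10_000, 4  # four rounds of gaze attacks per fight
print(sum(save_or_die(rounds) for _ in range(trials)) / trials)  # roughly 0.94
print(sum(dex_drain(rounds) for _ in range(trials)) / trials)    # well under 0.02
```

Under these toy numbers the vanilla rule petrifies someone in most fights, while the drain rule almost never finishes anyone in a single encounter, which matches the "something happens, but nobody loses a character" goal.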

      • Nornagest says:

        Save-or-die monsters are really useful for one thing: constraining the solution space for players. If you ever find yourself rolling a save against one, you’ve already screwed up, and any player worth their ten-foot pole should know that, so as soon as you introduce one the encounter becomes about coming up with a solution that doesn’t give the basilisk the opportunity to gaze.

        For this to work, though, you need to have a sort of social contract built up that says you won’t leave a basilisk in a broom closet as a gotcha.

        • Le Maistre Chat says:

          Every evil mage worth their salt would leave a Greater Basilisk in the broom closet as a gotcha if they exist.
          The trick is building up a social contract where the players know that the mage’s home having a blind cleaning woman telegraphs “play smart or roll a death save.” 😛

          • Nornagest says:

            That’s… on the far side of reasonable by my lights, but it probably works for some groups (especially in the late Seventies, or for tournament play). The main point I’m trying to make, though, is that it needs some kind of foreshadowing. The appropriate level of foreshadowing varies with your players’ level of paranoia and the type of campaign you’re trying to run, but a death save makes a monster a setpiece encounter all by itself, and setpieces should never just be thrown into melee with the players.

          • Le Maistre Chat says:

            @Nornagest: right. A legendary death gaze monster deserves a setpiece and shouldn’t be a random encounter… despite being on the 1E DMG random dungeon tables.
            I think the fundamental issue is that it’s something with a good mythological pedigree that’s pretty unfun. So give these monsters special treatment and don’t invent new ones (I make an exception to this principle for Beholders.)

          • Hoopyfreud says:

            @Le Maistre Chat

            I think that *being* on a random encounter table doesn’t imply it *should* be on a random encounter table.

            Following that line of logic, it seems to me that there’s no good reason to *avoid* making gimmick monsters as long as you’re careful about where you put them. Just because it doesn’t belong on a random encounter table doesn’t mean it’s bad. For example, the mutated infectious immortal blood-oozes in my CoC game – they’re fun to avoid but much, much less fun to fight.

          • Bugmaster says:

            I don’t know… one of my characters is a lvl 11 Wizard. He’s LN, not Evil, but he’s got a lot of summoned monsters and necromantic constructs and such running around. At no point would he even consider putting a Basilisk in his own closet. Who wants to live and work on top of a loaded nuke? What are the benefits here? If an enemy fireteam has breached his inner sanctum, then obviously everything has gone horribly wrong already, and the smartest action to take is to teleport out. Sure, self-destructing the site afterwards is prudent, but there are way more reliable ways of doing that than closet basilisks.

      • johan_larson says:

        Save-or-die in general is one of the pitfalls of D&D. You spend a year building up a character, and suddenly they’re just gone. Very frustrating.

        Of course, at higher levels death can be undone.

        • Le Maistre Chat says:

          Sure beats trying to combine extremely complex chargen with “realistic” firearms, though.

          • James C says:

            I have a soft spot for classic Traveller, which allowed for death during character gen. There’s nothing quite like adding insult to injury during a re-roll 🙂

          • Lillian says:

            To this day i have yet to encounter a system that does realistic firearms in the sense that most bullets don’t actually hit anyone, but still have tangible effects on the people being shot at. Ironically the system best suited for representing this is probably the one for Exalted 3rd Edition, which is actually meant to represent a combination of the feats of epic heroes and high flying wuxia action.

            The combat system is built around the idea of managing momentum between combatants. When you watch a sword fight, you can usually tell who has the upper hand even if neither one has drawn any blood. So at the start of a battle both sides roll for initiative, and they act in the order of that initiative. Attacks target not health bars but the opponent’s current initiative, such that a successful attack subtracts a number of initiative points from your opponent and gives them to you. At any time you can spend all of your initiative to make an attack targeting your opponent’s health bar. If it succeeds, they are injured and you have a decisive advantage; if it fails, you now have no initiative, which puts you at a disadvantage. Due to the risk, it’s best to wait until you have high initiative before going for the killing blow, but if you’re losing anyway a reckless decisive attack might turn things around if you get lucky. This is what sword fights look like: the two combatants trade blows until one sees or forces an opening and goes for the kill.

            If you applied something similar to gunfights, you could actually properly model small unit tactics. Most of the shooting would be about gaining initiative and denying it to your opponent, which is to say suppressing and pinning the enemy while trying to manoeuvre to a superior position. You would need mechanics for using cover and manoeuvring under fire, but the point is that by stealing initiative you have a mechanical representation for fire that is effective despite causing no injuries. Only once you have obtained superior positioning can you actually gun down the enemy.

            (Incidentally, i am forever annoyed that the core of the Exalted 3e system is absolutely brilliant, and yet the game is basically unplayable because of the bloated mess they built on top of that core.)
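
The momentum loop described above fits in a few lines of pseudogame. This is a toy reduction, not the actual Exalted 3e rules: the dice pools, the reset value, and the "go decisive at 10+" threshold are all invented for illustration:

```python
import random

random.seed(7)

def successes(pool):
    """Simplified Exalted-style roll: count 7+ results on a pool of d10s."""
    return sum(1 for _ in range(pool) if random.randint(1, 10) >= 7)

class Fighter:
    def __init__(self, name):
        self.name = name
        self.initiative = successes(6) + 3  # join-battle roll
        self.health = 7                     # health track

def withering(attacker, defender):
    """Jockey for position: move initiative around, never touch health."""
    net = successes(8) - successes(4)  # attack pool vs. defense pool
    if net > 0:
        defender.initiative -= net
        attacker.initiative += net

def decisive(attacker, defender):
    """Cash in all momentum for a real blow against the health track."""
    if successes(8) > 0:
        defender.health -= attacker.initiative  # damage scales with momentum
    attacker.initiative = 3  # hit or miss, the momentum is spent

a, b = Fighter("Ayla"), Fighter("Brand")
while a.health > 0 and b.health > 0:
    for actor, target in ((a, b), (b, a)):
        if actor.health <= 0 or target.health <= 0:
            break
        if actor.initiative >= 10:
            decisive(actor, target)
        else:
            withering(actor, target)
# The loop ends when one fighter's health track reaches zero.
```

Even this stripped-down version shows the shape of the system: most turns shift momentum without drawing blood, and the fight ends when someone converts a big momentum lead into a single decisive blow.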

          • Rick Hull says:

            @James C

            You might be interested in https://github.com/rickhull/traveller_rpg

          • Nornagest says:

            @Lillian — One of my back-burner RPG projects attempts to model fire and maneuver tactics (nominally for a setting that’s vaguely Old West in the same way that standard D&D is vaguely medieval, but you could use it for plenty of other settings too). The biggest sticking points so far have been that it’s inherently fiddly, and it really demands miniatures and a battle map; I dislike battle maps, but it’s very hard to do theater-of-the-mind stuff when flanking angles matter.

          • Le Maistre Chat says:

            @James C: Was character gen fast? If so, I don’t mind a chance of death during it. It’s a good way of calibrating players’ expectations. What I think is objectively bad is when it takes a week or more to make a character, like studying for a university exam (in a subject you have no prior knowledge of!), and you can’t trust the GM to make combat safe by, if necessary, making it impossible for NPC guns to hit you, as in Star Wars.

          • Le Maistre Chat says:

            @Lillian:

            To this day i have yet to encounter a system that does realistic firearms in the sense that most bullets don’t actually hit anyone, but still have tangible effects on the people being shot at. Ironically the system best suited for representing this is probably the one for Exalted 3rd Edition, which is actually meant to represent a combination of the feats of epic heroes and high flying wuxia action.

            Hmm, interesting. Is 3rd also the best Edition of Exalted for representing epic feats or flying wuxia action? If so, I should look into swiping it for a better game.

            (Incidentally, i am forever annoyed about how the core of the Exalted 3e system is absolutely brilliant, and yet the game is basically unplayable because of the bloated mess they built on top that core.)

            Yeah, I saw a bunch of red flags when I checked that rule book out. I really, really don’t like cruft. I have a pet project of a tabletop RPG that tries to make combat and exploration more fun than D&D (at its best; no sense going after a weakman) using the front side of an 8.5″x11″ character sheet for walking through character gen options inc. equipment and the back side for movement rules and combat/other task resolution.

          • Protagoras says:

            Original Traveller chargen was fairly quick, as I recall (I played it a tiny bit when I was 12 or 13, in the early 80s). You rolled for stats, and then rolled on maybe half a dozen to a dozen tables (and not huge tables; usually you were rolling a d6). I didn’t like the massive randomness of it myself (sometimes you got to choose which table to roll on, but otherwise it was mostly up to the dice). The risk of dying was pretty small; people just always bring it up because it’s such a unique feature of the system.

          • Lillian says:

            @Nornagest: This is why i like the idea of using a momentum system. It lets you abstract away many of the more difficult and fiddly aspects of fire and manoeuvre while still preserving the general feel of it.

            @Le Maistre Chat: Honestly i don’t think i can, in good conscience, recommend you any edition of Exalted. While i love the setting and have played it extensively, the systems are kind of a mess. 1e is very clunky with too many dice rolls per action, 2e is still clunky and also fundamentally broken, 2.5 works decently but requires cross-referencing pages and pages of errata, and 3e is an extremely good core system crippled by 700+ magical powers half of which are stupid dice tricks like “reroll all 1s”. They’re all crunchy by design though, it’s what the fanbase likes; it’s the execution of that crunch that tends to leave something to be desired.

        • ing says:

          > Of course, at higher levels death can be undone.

          …For at high levels even Death may die.

    • Nornagest says:

      Why do only black dragons have sex with them? I don’t knooow.

      As all veteran D&D readers know, dragons will have sex with anything. (Especially in 3.x+, with the half-dragon template to play with, but there’s more than enough part-dragon critters in AD&D for it to be true there too.) Presumably there’s some kind of magical or genetic quirk that makes offspring from other dragon colors nonviable.

      Green would make more sense, though; they’ve got an immunity to poison gases, which the greater basilisk breathes.

    • Faza (TCM) says:

      My take on the basilisk is that the D&D version is based more on much later material than Pliny – mediaeval European legends and such. The beastie bears more than a passing resemblance to the Basilisk of Warsaw’s Old Town (though the latter also had a hint of cockatrice in it) – right down to use of a mirror as a way to defeat it.

    • Ventrue Capital says:

      Why do only black dragons have sex with them? I don’t knooow.

      I have a relatively friendly, and quite talkative, black dragon in the West Marshes in my campaign. (He is already acting as a patron to one of the PCs.) Why don’t you ask him? 😀

      Another of my periodic plugs for my campaign, since I’m trying to recruit as many players as possible from SSC.

      It emphasizes roleplaying, exploration, and mysteries over combat. Also there is a *lot* of philosophical discussion over things like property rights to magical energy, the rights of undead and of nonhuman intelligent species, the morality of necromancy, and the nature of the gods.

      • Le Maistre Chat says:

        It emphasizes roleplaying, exploration, and mysteries over combat. Also there is a *lot* of philosophical discussion over things like property rights to magical energy, the rights of undead and of nonhuman intelligent species, the morality of necromancy, and the nature of the gods.

        Well I would hope so! If I ever consented to make a GURPS character, I would just play Castles & Conversations and need to have a high level of trust in the GM not to spring GURPS combat, pit traps, or anything else that would kill my character on us. 😛

        • Protagoras says:

          Making a character optimized for social interaction in GURPS is a lot easier than making a character optimized for combat. If you follow that strategy, you may well find it easy to get other people to do the combat and falling into pit traps bits of the adventure.

        • Ventrue Capital says:

          I *like* that GURPS combat is more realistic than D&D’s, at least in the sense that “realistic” means “tactically transparent.” (Warning: NSFW link to wonderful article about what tactical transparency is and why it’s good.)

          I would say, however, that GURPS characters have a *lower* chance of dying than low-level D&D characters, and a higher chance of dying than high-level D&D characters. Which is also more realistic.

          And I also believe that GURPS characters are not as fragile as most people think they are. (For example, in GURPS a normal human will not even have a chance of going unconscious until they’ve taken 10 hit points (dropping them to zero), nor a chance of dying until they’ve taken *another* 10 hit points (dropping them to -10). And in either situation they get to make a Health saving throw.)
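
          Those thresholds can be restated as a toy function (a sketch of the numbers in this comment only – the real GURPS rules have more moving parts, and the HT rolls themselves aren’t modeled):

```python
def gurps_check(hp_remaining, base_hp=10):
    """Which survival check, if any, a normal 10-HP human faces
    at a given remaining HP total. A toy restatement of the
    thresholds described above, not the full GURPS rules."""
    if hp_remaining <= -base_hp:      # taken 20+ damage: at -10 HP or below
        return "HT roll or die"
    if hp_remaining <= 0:             # taken 10+ damage: at 0 HP or below
        return "HT roll or fall unconscious"
    return "still standing"

print(gurps_check(3))     # still standing
print(gurps_check(-2))    # HT roll or fall unconscious
print(gurps_check(-10))   # HT roll or die
```

So a normal human has a full 20 points of buffer between “fine” and “possibly dead,” with a saving throw at each stage.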

          • dndnrsn says:

            Zak says tactical transparency overlaps with, but isn’t equivalent to, realism. Comparing the two, some editions of D&D compare well, some don’t. Up to 2nd, maybe, D&D is about as tactically transparent as GURPS (with magic and such included, obviously): use melee against ranged enemies, use ranged against melee enemies, use cover, etc. The tactics make sense. 3rd gets worse over time; 4th is the least tactically transparent game I’ve played – it’s mostly about stringing powers together. Tactical transparency is about whether someone who doesn’t know the rules can play well by doing what makes sense.

    • Le Maistre Chat says:

      Beetle, Giant

      “Because of the thorough grinding of the mandibles, nothing eaten by giant beetles can be revived by anything short of a wish.”

      “Except as noted below, giant beetles are not really social animals; those that are found near each other are competitors for the same biological niche, not part of any family unit.”
      If you look at the # Appearing field, all but two kinds of giant beetle always appear in groups, and those that can appear alone do so <17% of the time. So pretty much every time adventurers encounter giant beetles, they've bumped into a bunch of individuals competing "for the same biological niche."

      If you kill giant bombardier beetles fast enough, you can cut them up for those cool chemicals, which can then be used to make a bomb or handgonne.
      Giant boring beetles are individually "not much more intelligent than other giant beetles, but it is rumored that nests of them can develop a communal intelligence with a level of consciousness and reasoning that approximates the human brain." Uh, that doesn't sound boring at all.
      "In tunnel complexes, boring beetles grow molds, slimes, and fungi for food… One frequent fungi grown is the shrieker, which serves a dual role. Not only is the shrieker a tasty treat for the boring beetle, but it also functions as an alarm when visitors have entered the fungi farm. Boring beetles are quick to react to these alarms, dispatching the invaders, sometimes eating them, but in any case gaining fresh organic matter on which to raise shrieker and other saprophytic plants."

      "Despite its name, the fire beetle has no fire attacks, relying instead on its huge mandibles to inflict up to three times the damage of a dagger in a single attack."
      "Fire beetles have two special glands above their eyes and one near the back of their abdomens. These glands produce a luminous red glow, and for this reason they are highly prized by miners and adventurers. This luminosity persists for 1d6 days after the glands are removed from the beetle, and the light shed will illuminate a radius of 10 feet. The light from these glands is cold – it produces no heat. Many mages and alchemists are eager to discover the secret of this cold light, which could be not only safe, but economical, with no parts to heat up and burn out. In theory, they say, such a light source could last forever."
      … are they teasing a giant beetle-based Industrial Revolution on this page?

      The shell of giant rhinoceros beetle "is often brightly colored or iridescent. If retrieved in one piece, these shells are valuable to clerics of the Egyptian pantheon, who use them as giant scarabs to decorate temples and other areas of worship."
      Since we've previously talked about harvesting monster parts in Baldur's Gate, now I'm imagining an enterprising adventurer killing giant rhinoceros beetles in the Forgotten Realms to sell to priests of the Egyptian gods. Whether this requires traveling to Egypt on Earth just raises further questions.

      "Water beetles sometimes inhabit navigable rivers and lakes, in which case they can cause considerable damage to shipping, often attacking and sinking craft to get at the tasty morsels inside."
      What a dangerous world D&D humans have to make a living in.

    • nameless1 says:

      Why are people still playing D&D? I was playing it… ugh… over 20, even 25 years ago, but already by then (2nd Edition) it was considered an outdated RPG with an unrealistic and, I would say, undramatic rule system. For example, two 10th-level fighters fighting with swords will give each other many small cuts, because of how they have large HP pools and d8 damage. This is not what I want. Not only is it unrealistic, it’s also undramatic; it would not look good in a movie. I want them to fence, fence, fence, then one mistake, blam, winner.

      Back then I considered the Shadowrun rule system the most realistic, even if the world was a bit silly. Later D&D editions improved the rule system, but on the other hand I think they screwed up the feeling by having very childish artwork in the books. Even back then, GURPS, RuneQuest, Earthdawn and Vampire provided alternatives.

      How come people today aren’t playing better RPGs than D&D?
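
      The whittling-down complaint above is easy to check with a quick Monte Carlo sketch (all the numbers – 55 HP, a 55% hit chance, d8 damage – are rough assumptions for illustration, not any edition’s exact stats):

```python
import random

def duel(hp=55, hit_chance=0.55):
    """Two identical high-level fighters trade 1d8 sword blows until one
    drops. Returns (rounds_fought, total_hits_landed). All stats assumed."""
    a, b = hp, hp
    rounds = hits = 0
    while a > 0 and b > 0:
        rounds += 1
        if random.random() < hit_chance:   # fighter A swings
            b -= random.randint(1, 8)
            hits += 1
        if b > 0 and random.random() < hit_chance:  # fighter B swings back
            a -= random.randint(1, 8)
            hits += 1
    return rounds, hits

random.seed(0)
trials = [duel() for _ in range(2000)]
print("avg rounds:", sum(r for r, _ in trials) / len(trials))
print("avg cuts landed:", sum(h for _, h in trials) / len(trials))
```

      On these assumptions the fight runs twenty-odd rounds and the combatants soak a couple dozen separate d8 cuts between them – the “many small cuts” dynamic, rather than fence-fence-blam.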

      • johan_larson says:

        How come people today aren’t playing better RPGs than D&D?

        One part nostalgia; it’s often the first RPG people learned and they have many fond memories of playing it with their friends. One part lowest common denominator; if you get a group of players together, it’s typically the one game all of them already know.

        • woah77 says:

          LCD is probably the most common reason. It’s got the largest market penetration of any RPG I’m aware of. The niche gamers like myself are generally isolated from the other players of those games.

      • dndnrsn says:

        I don’t know what game does “dungeon adventurers, ideally in a sandbox world” better than some editions of D&D. In the best editions, play is quick, character creation is quick, etc. I used to think D&D sucked, then I tried playing a retroclone in the sandbox style. It’s great.

      • arlie says:

        *sigh* My life is full of things that are “considered an outdated …”. In many cases, the people making this pronouncement simply favour the latest cool new thing as of a certain time in their life. (E.g. the first one *they* encountered.) Or whatever-it-is gets major changes every couple of years, losing some features while gaining others deemed more modern; every decade or so someone rediscovers one of the old features, which is generally hailed as a great innovation under a new name.

        To your specific points though – I don’t want a realistic role playing game, particularly with regard to its combat system – I want a heroic one. If there’s any kind of advancement system, I want it to move at a game-appropriate pace, not in lock step with real life. “Realistic” is not a universal win. And as for “outdated” – see above. Early versions of D&D have things the new games presumably lack – and doubtless vice versa. And besides all this, I’m sick and tired of learning new ways to do the same old thing.

        That said, I’m no longer significantly involved with table top role playing. When I do play, the game generally involves old friends and family – people often near my own age, with shared history. We’re very likely to simply play what we all know. If people whose tastes I trust want to try something new, I’ll go along with them, and maybe adopt it as a new option. But I’m not playing enough to be bored with what I already have, and I expect in general to have to try 10 things to get one I actually like.

        FWIW, back when I was experimenting, I tried both GURPS and Runequest. GURPS failed the character-creation-time vs. character-life-expectancy test – it was way too lethal (i.e. the kind of adventures I wanted were too lethal to be fun). Runequest was fine, and I played it for a couple of years, since that’s what one of my GMs was into. Bushido made GURPS look safe – i.e. from my POV it sucked. And D&D’s various revisions have been mostly drop-some-stuff, get-new-stuff – with some of the dropped stuff seeming like a significant loss, and other parts total yawners. My favourite system remains an AD&D 1.5 amalgam played by my circle of friends in Ottawa, around 1991. (We also experimented with the then-current AD&D revision, and rated it merely “OK”.)

      • Le Maistre Chat says:

        The market favors rules-heavy games, because you can’t sell a rules-light game in book stores except as the rules booklet in a board game.
        Most people understandably have a preference for highly-survivable characters if they’re going to play a rules-heavy RPG.
        Inertia, or Lowest Common Denominator, is another factor. I think this particularly applies to the huge success of 3rd Edition/Pathfinder, which stripped B/X-AD&D characters of the survivability that inflated hit points used to provide, by increasing damage exponentially (and it was generally accepted that damage-maximizing “builds” were the weakest). Playing D&D as it had heretofore existed and been popularized through CRPGs, novels, etc. required a gentlemen’s agreement of many clauses.

        • dndnrsn says:

          I think you’re right about the “gentleman’s agreement,” but I’d add: it’s an unspoken, assumed gentleman’s agreement that has grown since at least some point in the mid-80s. Increasing survivability, at least up until 4th edition, was entirely “cultural”: various assumptions about how the game was played developed (especially the understanding, often found in GM advice, that a “bad story” won’t happen by a PC dying at a non-dramatically-appropriate moment, and that fudging should be done to ensure this), and this led to the game playing very differently than it would if you ran it entirely as written with zero fudging.

          One rules-related thing was adding challenge ratings and having various bits of math to determine what a proper encounter looked like. This wasn’t a huge factor in the cultural shift, except that it created the expectation that an encounter would at worst be “challenging” or whatever, and so there wouldn’t be anything in an adventure that wasn’t winnable.

          Adventure design also changed: for example, the earlier you go, the more traps you find; traps tend to be quite lethal (especially to players who aren’t accustomed to going through the whole rigmarole of pouring chalk dust all over everything and whatnot).

          I’ve found that running semi-old school (no fudging, no “rigging” encounters by having enemies suddenly grab the idiot ball if the PCs are losing, minimal scaling; on the other hand, I put in minimal traps because I don’t really like traps – they tend to be either boring or ridiculous) leads to, following some PC deaths and close calls, the players become a lot more risk-averse in various ways, and spend a lot more time planning.

          Survivability in the mechanics seems to go up a lot with 4th and 5th: suddenly more hit points are slapped on everything; I don’t know how 4th handled PCs going to 0 HP but 5th’s rules on dying seem designed to ensure that things have to go really wrong before a PC dies.

          Some games add an “extra lives” mechanic – spend a Plot Point and your character doesn’t die!

          (I would also argue that these changes have been seen in most games: for example, a lot of Call of Cthulhu campaigns include zero advice as to how to introduce new PCs, even when the nature of the campaign would make it difficult and thus advice would be useful; I interpret this as indicating that there’s an unspoken understanding that the campaign is written with the understanding that all sorts of fudging will happen to make it less lethal than it would be if ran by someone without that understanding; CoC has a reputation as a “killer game” but if you run it by the book, most published campaigns fall apart)

          • Le Maistre Chat says:

            Good point about assumed fudging. I really don’t know how to deal with this; I try not to do it, but keep finding that subtleties of how you run initiative can make the difference between a room full of mooks murdering every PC they can reach and the Mage hitting her I Win button (Sleep) in time. Does forgetting to use less PC-favoring rules for initiative count as fudging? What about “enemies always lose ties”?

          • dndnrsn says:

            @Le Maistre Chat

            Most classically, fudging is changing die rolls: if the monster rolls a 20 and you know it would kill a PC and you mentally decide it’s just a normal hit, that’s fudging. I’d also consider changing stuff like monster stats after it’s become relevant fudging, or something similar: if the encounter is going too hard for the PCs and you decide that all the enemies have fewer hit points, or if the party’s wizard busts out a spell on the boss and you don’t want the boss to get turned into a squirrel so you give them immunity to magic which they didn’t have, etc.

            Screwing up and making an honest mistake isn’t fudging, it’s screwing up. Sometimes you should roll with it (eg, let’s say you forgot that all the enemies take half damage from swords, then remember halfway through the combat: having the game world suddenly change in a way that will disorient the players is far worse than just crossing it out on the sheet and adjusting the XP the players get for them) and sometimes (mainly if the consequences were serious for the players) you have to say “ok I screwed up, you shouldn’t have died there; I am giving everyone a mulligan.”

            Rules that benefit PCs like “monsters always lose ties” aren’t fudging; they’re part of the game system. If you “forget” rules like that when you want to make things harder for the PCs, that’s fudging, or something close to it, or if you “forget” rules that would hurt the PCs when you want to make things easier. Stuff like enemy tactics changing on the fly based on how hard the GM wants to make it are a grey area.

            I’m a really big fan of the Alexandrian’s essay on railroading and I’d expand it further: it’s not just bad when the GM negates player choice, but when the GM “changes reality” in-game to ensure something happens or doesn’t happen. I think elsewhere he notes that he differentiates mechanical “cheating” like changing a die roll from ignoring “content generator” stuff like rerolling on a random encounter table; this is kind of a grey area. But, in general, stuff where what happens is what the GM wants to happen, is bad. Player choice and the element of chance are two of the things that make RPGs a cool medium.

          • Randy M says:

            Morale rulings (apart from morale rules, which I don’t know if they are common anymore) are a good way to give plausible deniability to fudging, because it’s easy enough to believe in idiosyncratic personalities.
            “Sure this time [when you all happened to be close to death] all the goblins surrender when their leader takes a wound, because this is a particularly cowardly tribe; next time they might be braver.”

          • Nornagest says:

            …on the other hand, I put in minimal traps because I don’t really like traps – they tend to be either boring or ridiculous…

            Traps are kind of an unfortunate part of the D&D formula, I think. They leave you with a nasty Morton’s-fork situation: on the one hand, the party thief’s going to get sulky if half their class features go unused, while on the other, it’s not very fun for everyone else to stand back and watch the party thief do their thing, especially if it takes longer than a roll or two.

            And they don’t make a lot of sense with most adventure concepts. You can fit them into tombs and the like, sure, and sometimes you can make a “temple of trials” work. Traps have seen plenty of real-life use in warzones and to catch prey, too, and you can work with that concept. But on the other hand, most “dungeons” are first and foremost camps, inhabited by people or monsters, and it makes no sense for them to be heavily trapped. Grug the goblin isn’t that bright; eventually he’s going to forget not to step on the red tiles and fall into the scorpion pit. Aghrazgal the bandit king is going to return drunk after a successful raid, his hands a little too shaky to get the safety catch thrown on his treasure chest; he’ll get stuck by the poison needle, miss his save, and die.

          • Le Maistre Chat says:

            @Randy M: morale rules were gone as of 3E. Hostile monster encounters are always resolved by killing every last one, fleeing, or the broken Diplomacy skill if it’s specifically 3.X.

            @Nornagest: Any trap should be resolved within two nail-biting rolls if they’re there for the Thief player’s sake. I don’t think that’s time-consuming enough to make a bad Morton’s Fork prong.
            Your point about being illogical is well-taken, though.

          • dndnrsn says:

            @Randy M

            I’m pretty sure morale rules dropped out of the actual rules of D&D at some point; at a minimum, people seem to have mostly stopped using them. Most other systems don’t have them, even when it would make sense to. It’s probably because they make it harder to ensure outcomes.

            EDIT: I believe LMC that it was 3rd ed that dropped them for good. I played a bunch of 2nd and I don’t remember a morale roll ever getting rolled, though. Nor do I remember random encounters.

            @Nornagest

            Real-life-style traps would have verisimilitude, but would get boring: a pretty thin range of punji pits, grenade tripwires, trapped items, land mines with anti-tamper devices, etc. (or fantasy equivalents, or whatever). D&D-style traps in the old-school mode are often unbelievable Rube Goldberg devices built around it being a game.

            I’d like to use more traps, but it’s so hard to inject them in a way that’s neither boring nor verisimilitude-killing.

          • Skivverus says:

            Perhaps more alarm-style traps?
            “Thief disarms doorbell, lest someone answer the door before the party is ready.”

          • Le Maistre Chat says:

            Yes, plenty of traps that wouldn’t hurt the homeowner are a great idea.

          • Nornagest says:

            Yeah, I like alarm-style traps but didn’t mention them for some reason. They’ve shown up in RPG modules before — I remember one Pathfinder author liked to throw in falling thunderstone traps as alarms — but the territory’s probably been better covered by computer games: anyone who’s played Skyrim has dealt with the bone chimes that it puts everywhere, for example. That’s a good example of a trap that’s a real threat but isn’t verisimilitude-killing. Does mean a little more work for the GM, though.

            I played a bunch of 2nd and I don’t remember a morale roll ever getting rolled, though. Nor do I remember random encounters.

            The 2E group I played with in middle and high school used random encounters, though more sparingly than the 1E material I’ve read. I don’t remember it ever using morale rolls, though.

          • Randy M says:

            Maybe I should have used a different word, then, since I wasn’t talking about using the morale rules; obviously if you are following the rules, then you aren’t fudging anything.
            But at the same time, if zero HP is presented as dead (for enemy combatants) or even just helpless, it stands to reason that the fight might go out of some enemies before the hit points do, and as a DM you can use a wide range of justifications to explain why the side that still has more hit points is throwing down their arms and begging for their lives or running away.

          • Le Maistre Chat says:

            Also, Egyptian tombs make some of the best trapped dungeons on Earth (where I set my games), because you can use traps that only hurt the living, you can ignore the logistics of food and water, and we know damn well it was realistic for obscure adventurers to rob them.

          • Nornagest says:

            I’m imagining a guardian mummy whose upper torso looks like a pincushion from all the times it’s triggered the arrow traps in its tomb, but who doesn’t give a shit because it’s undead, feels no pain, doesn’t bleed out, and game-wise is immune to normal weapons.

          • Le Maistre Chat says:

            @Nornagest: Yes, and if you’re a shabti, the arrow traps will just bounce off rather than pincushioning you. Tomb raiders could find poison arrows just lying on the floor and some easily-spotted traps empty. Others are still loaded, because sometimes a guardian mummy steps on one, lets out an angry noise because now it’s hard to walk, and returns it to the empty trap.

          • dick says:

            re: Traps – I hardly ever use traditional traps of the “kills you if you didn’t search for it” variety. Partially because they’re often unrealistic, and also because they slow down gameplay so egregiously once the party gets scared and starts searching every inch of every room. However, I do use the sort of traps where the trap is obvious and the party just has to figure out how to get past it.

            For example, a few sessions ago, my party found the old “hallway that smushes anyone who tries to walk through it”, which is pretty cliche. But, I felt good about it because a) it was realistic (the hallway led to the back-most room of the temple, where the previous inhabitants stored their valuables and performed their initiation ceremonies; it wasn’t somewhere they would’ve walked through often), and b) it was fairly obvious there was a trap there, because when the party arrived, a band of goblins were trying to construct a wooden brace strong enough to keep the ceiling up, and the trapped hallway’s floor and ceiling were covered with the results of the goblins’ previous efforts, a lot of splintered debris and a thin and amazingly odious paste of recently-smushed-goblin. So, the party got a neat trap to figure out (there was a mechanism elsewhere in the temple that disabled it, in a way that made sense thematically and had appropriate clues) and I don’t have to deal with “Wait, we better check for traps before we open the door of this tavern” for the next month.

          • John Schilling says:

            Hostile monster encounters are always resolved by killing every last one, fleeing, or the broken Diplomacy skill if it’s specifically 3.X

            Well, yes, but “fleeing” means either morale rules or morale rulings, and the latter have been explicitly part of the DM’s job in every incarnation of D&D. NPCs are supposed to be played by the DM as intelligent beings with agency and sensible behavior, except in the rare case of INT-0 monsters like slime molds or skeletons.

            The absence of explicit, quantified, morale rules in 3e/3.5e/Pathfinder is a moderate curiosity and oversight given their design philosophy of a rule for everything under the sun. I say a “moderate” oversight only because pretty much every other RPG in the known universe also lacks quantified morale rules, in spite of almost all of them prioritizing simulated tactical combat with NPCs above any other role that the players might want to game out. That’s one of the few areas where old-school D&D got it sort of right and everybody else lost their way.

            But, if you recognize the problem, either you wing it or you house-rule it.

          • Le Maistre Chat says:

            @John: There’s a stupid oversight in 3.X though, where the rules say players get double XP for overcoming monsters non-violently, so I’ve had the lovely experience as DM of a player rules-lawyering me that when monsters flee like sapient beings, I owe the party double XP for the survivors. Given how fast 3.X PCs level up from murder, they’d only need to bully 7 “CR equal” groups into cooperating without violence for each level-up.
            Now that I’m running old school D&D, I hand out standard XP for enemies who surrender or flee, and it feels good, because A) it’s a “rulings, not rules lawyers” game and B) each PC would have to overcome 83-166 normal men, orcs or equivalent for each of your first two level-ups anyway! (Hence GP=XP as a pacing mechanism.)
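
            That “7 groups” figure falls out of the commonly cited 3e design baseline of about 13⅓ equal-CR encounters per level-up (a back-of-envelope sketch; the actual DMG tables vary with party size and level):

```python
# Commonly cited 3e design baseline: ~13 1/3 equal-CR encounters per level.
BASELINE_ENCOUNTERS = 40 / 3

def encounters_per_level(xp_multiplier=1.0):
    """How many equal-CR groups a party must overcome per level-up,
    given an XP multiplier (2.0 for the double-XP non-violent rule).
    Rough illustration only; the real tables aren't flat."""
    return BASELINE_ENCOUNTERS / xp_multiplier

print(round(encounters_per_level(1.0)))  # about 13 groups normally
print(round(encounters_per_level(2.0)))  # about 7 groups at double XP
```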

          • John Schilling says:

            Your players have a very strange definition of “non-violent”, and I’m not going to fault 3.5e for that one. Though I do agree that constant XP regardless of method is preferred, and I’d also prefer a less generous rate for pure victory-over-monsters with the balance made up by awards for achieving campaign objectives and solving non-combat problems.

    • beleester says:

      Even better, the toxic breath does show up in the Gorgon, which is a monster in D&D which is not the Gorgon from Greek myth (D&D calls that a Medusa), but instead is sort of a giant cow monster that’s a lot like the Catoblepas.

  40. Vermillion says:

    Attention all of you with a heroic mien and whatnot, I am hosting a super-powered superhero Rumble! Contestants and other interlopers have spent the past week creating novel powers which you can view here. Now the next phase is that everyone who’s signed up can go to the page marked Master List (comment here) and pick TWO powers from the list that they’d like to have available to bid on. Until nominations are closed and bidding starts anyone who wants to can still sign up.

    Full rules of the rumble can be read here or just browse the previous three open threads where I talked about this.

    • Jake says:

      Just to clarify on this, are we supposed to make a comment on the master list for the powers we want to nominate?

    • Vermillion says:

      Nominations are in for all the current players but I’ll hold off on starting the bidding until 9PM EST today (11/7/2018) on the off chance that anyone else would still like to sign up.

      Looking at the powers that are in the bid pool so far I get the feeling it’s gonna be weird. 😀

  41. RavenclawPrefect says:

    I’ve been a math nerd since age 2 or so, and one of the things that I love about the subject is the prevalence of interesting questions that have wonderful ratios of solving time to stating time: I can read one or two sentences, store the question in my head, and go for a walk for hours just thinking about it. And there’s an actual right answer to the question – I can try out different approaches and know when I’ve gotten things right. None of these properties are math-specific, though; it just seems like mathematics is really good at producing questions that meet these criteria.

    So what are some good non-math puzzles? I’m specifically looking for elegant statements here; crosswords take as much space to specify as they do to solve, so they don’t quite count. And no dumb tricks – the nice bit about a math problem is that you’re not hunting in vain for the ambiguous word or the stupid misinterpretation that lets you solve the riddle, you know exactly what you have to do and only your own competence stands in your way. Communicating badly and then acting smug when you’re misunderstood is not cleverness.

    I know of exactly two such puzzles, posted below; I’m sure there are more, but years of hanging out in nerdy spaces online haven’t seemed to turn up any.

    • RavenclawPrefect says:

      • It’s easy to hop in a time machine and prove your authenticity as a person from the future beyond a shadow of a doubt: show off your fancy tech and knowledge and stock market predictions. But how would you prove that you are a time traveler from the past? For specificity, suppose you’re a reasonably well-educated Victorian aristocrat who has found themselves in some English-speaking part of the world in 2018, carrying nothing but yourself and the items on your person. You should aim to overcome the suspicion of someone whose prior assigns more probability to the hypothesis that an eccentric millionaire has staged a particularly elaborate hoax. (Easy mode: give yourself a week to prepare before the jump.)

      • Say we’d like to prepare for the contingency that humanity dies out in a non-biosphere-ruining manner, and leave a message for a potential intelligent species in Earth’s evolutionary future. So we’re looking for a way of preserving a fair bit of readable (and detectable) information for a few hundred million years or so.
      (A) What sort of message, if anything, would we send? Under what conditions do we want it read?
      (B) How do we actually get said message to last?

      Both of these were encountered in the process of reading someone else’s solution to them; I saw the first from Gwern, and the second from Paul Christiano. You can find either of their writeups with a little Googling, but I’m not linking to them here because I think they’re a lot more fun to ponder yourself.

      • Max Chaplin says:

        To the first one: an aristocrat disappearing would probably make the news back in the day, and he is quite likely to have photos and/or portraits of himself archived somewhere. These, along with his lack of documentation, the awful fillings in his teeth (if he has any) and his extensive knowledge of his family and estate, are enough to convince at least some people of the possibility that he is a time traveler. These believers will hopefully help convince the authorities to dig up the remains of family members and perform DNA testing on them.

        • RavenclawPrefect says:

          Locating graves of family members is a good one, if those can be reliably dated; basically rules out everything except cloning technology far ahead of what we believe to have existed at the time of their expected birth. Knowledge of one’s surroundings seems harder; almost anything verifiable is in principle discoverable by someone studying for the hoax.

        • bullseye says:

          DNA testing is a better answer than anything I came up with; but how would a Victorian know such a thing is possible? If he’s had smallpox I think there are ways of finding that out, but I don’t think he’d think of that either. Dentistry might be the best the Victorian could actually come up with, if he’s clever enough to realize his teeth will probably be bad in a different way than a modern poor person’s.

          Being in the news and having pictures wouldn’t do much to convince me; people mysteriously disappear sometimes and I don’t think it’s usually time travel, and sometimes two people look the same.

          • albatross11 says:

            Smallpox scars would be pretty damned convincing. I wonder if you could do some kind of antibody test that would distinguish between antibodies to smallpox and antibodies to the cowpox vaccine strain.

          • albatross11 says:

            Another possibility–check him for antibodies that would neutralize any of the last hundred years’ circulating flu strains. He won’t have them if he just came from the mid-to-late 1800s.

          • 10240 says:

            Do you have antibodies against flu viruses if you’ve never had the flu? I know very little about how immunity works.

          • bullseye says:

            I feel like a modern doctor would mistake smallpox scars for something else. Because obviously he’s not a time traveler so it can’t be smallpox.

          • Lambert says:

            Probably ought to vaccinate him too, before he gets them.

        • JulieK says:

          Before hopping into your time machine, bury a time capsule with your photograph, handwriting sample, a newspaper, etc. Tell the people in the future where to dig for it. (My husband’s suggestion.)

        • johan_larson says:

          If you have access to a lab, it is possible to test whether a person grew up before or after atmospheric nuclear testing. Those tests dispersed enough radionuclides that detectable levels are present in our teeth. If this fellow really grew up in the Victorian era, his teeth won’t have that.

          Of course, that sort of test is probably not the place to start. I need something easier to start with. I think I would start with checking whether this Victorian aristocrat is literate in Latin and Greek. If he isn’t, he’s probably a faker.

      • Bugmaster says:

        (B) How do we actually get said message to last?

        I don’t know, is carving it into the surface of the Moon still an option?

        In general, I feel ambivalent about open-ended questions like these. They remind me too much of philosophy, where you can endlessly argue about any number of points, and never come to any conclusion.

      • John Schilling says:

        Say we’d like to prepare for the contingency that humanity dies out in a non-biosphere-ruining manner, and leave a message for a potential intelligent species in Earth’s evolutionary future.

        Do we need the message to be readable before they develop a reasonable level of technology?

        If not, a 30-meter gold-plated disk in geosynchronous orbit, with a long boom for gravity-gradient stabilization, would be naked-eye visible from the Earth’s surface and I think the orbit would endure over geologic time. That’s something we can do for a few hundred megabucks today, and it would be a conspicuous indicator that one should come up and look for the attached message cache. Which will be in an environment that is also conducive to long-term preservation.
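
        The naked-eye-visibility claim can be roughly sanity-checked. This is a crude sketch of my own, not part of any design: it assumes a face-on Lambertian (diffusely reflecting) disk, a gold albedo of about 0.7, and the observer directly beneath it at geosynchronous altitude; real visibility depends heavily on phase angle and geometry.

```python
import math

# Rough apparent magnitude of a sunlit Lambertian disk seen face-on
# (best case), compared against the Sun's apparent magnitude of -26.74.
# Diameter, altitude, and albedo are assumed illustrative values.
def apparent_magnitude(diameter_m, distance_m, albedo):
    area = math.pi * (diameter_m / 2) ** 2
    # Fraction of the solar flux at the observer, for a Lambertian disk
    # illuminated and viewed face-on at distance distance_m.
    flux_ratio = albedo * area / (math.pi * distance_m ** 2)
    return -26.74 - 2.5 * math.log10(flux_ratio)

# 30 m gold-plated disk at geosynchronous altitude (~35,786 km)
m = apparent_magnitude(30, 35_786_000, 0.7)  # roughly magnitude 5.5
```

        Under these assumptions the disk comes out around magnitude 5.5, right at the conventional naked-eye limit of ~6 — consistent with the claim, though only barely, which is why the specular-reflection variant below is attractive.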

        The orbit probably won’t be geosynchronous after millions of years, and we may need to move it a bit farther out. The L4 or L5 point would almost certainly endure, but that’s going to require a ~300 meter disk, so we’d rather find a stable-ish resonance somewhere closer.

        Hmm, I may want to do some math on micrometeoroid impact rates. If we can arrange a long tubular sun-and-meteor shade that keeps most of the surface shiny, the specular reflection means we can get by with a smaller disk at the “cost” of making it only visible at near midnight. Which will make it more conspicuously artificial, or at least unique and interesting.

        If it’s specular and in a non-circular orbit, the gravity-gradient boom may have a bit of persistent wobble that makes the reflection blink at a fixed rate, also conspicuous.

        And if the optical surface is mostly undisturbed, a layer of glass with carefully-chosen additives (or just layers of the right thickness) can impose artificial absorption lines on the solar spectrum that may encode a few bits’ worth of “yes this is artificial”.

        Then they have to invent rockets to go look.

        If that’s not acceptable but we are willing to insist on at least an industrial level of technology, then deep burial should preserve our cache and all we need is a surface-level marker.

        If we want to send a message to a preindustrial civilization, then we need it in a form that will endure on the Earth’s surface for a good fraction of an eon, and I’m going to have to ponder that some more.

        • Andrew Hunter says:

        I don’t think, given what you’ve said about ubiquitous telescope coverage, you need a satellite visible to the human eye, no? A conspicuously technological but small object at L4 would get the same attention by the time they had rocketry sufficient for retrieval, if I’ve understood you correctly.

          • John Schilling says:

            On a purely technological basis, yes. But making it naked-eye visible isn’t that much harder, and it may be a useful nudge to a civilization that somehow decided space travel and astronomy – or industrial-grade science and technology generally – weren’t worth the bother.

            Though to be really safe, we should look at what constitutes “naked-eye visible” for, e.g., the 5th percentile vertebrate, rather than assume the next guys will have human-level eyesight.

      • Before looking at other people’s answers …

        1: You identify yourself, tell your interrogator where and when you vanished, and invite him to research what happened to you–a vanishing gentleman should show up somewhere in the surviving records. It’s barely possible that your fingerprints are on record if you are very late Victorian, but not likely. Also possible that there is a surviving painting or photograph of you. You probably don’t own a passport and are very unlikely to have one with you. If your interrogator can find a document you signed you can demonstrate the same signature.

        There may be some way of proving that you have not been immunized against a variety of things that modern people almost always are immunized against, but that could just mean that you are from some relatively backward part of the modern world.

        I don’t see any reliable solution–only ones that might work if you were lucky.

        2. To be of any real use I think we have to transmit a lot of information. With no language in common that has to start with images. You need some equivalent of a video recorder with a very long shelf life and a lot of recorded information, including what is needed to teach a language. Making records that last that long isn’t undoable–fossils manage it, after all. Probably a solar powered player of very durable material. Carved images to get your visitors started.

        Not a very satisfactory answer, I’m afraid.

        • On doing it wrong …

          A long time ago I wrote, but did not publish, a story in which aliens had tried to do this for us.

          One of the aliens, now returned, is explaining their mistake to one of the last surviving humans:

          And he pointed, with one enormous flipper, to the thousand mile arrow of the Mariana trench, pointing to the storehouse, the giant storehouse, six miles deep.

      • Squirrel of Doom says:

        1. As I understand it, all living things today carry certain radioactive isotopes as a result of nuclear tests in the 50s-70s. Someone truly from the 1800s would have a different composition, which I think could be detected by examining a tooth, for example.

        2. A safe space that only a quite advanced civilization could read would be in orbit. Can we make orbits that are stable for hundreds of millions of years? I don’t know. Would an orbit around the moon be better? The sun? Moonless Venus?

        I have nothing I’d want to tell that future civilization, aside from a complete copy of Wikipedia.

        • albatross11 says:

          #1: Brilliant! I wonder if he’d also lack some chemicals in his body that are in everyone now.

          #2: Could we make a whole lot of very durable things with useful information on them, that could be combined to get directions to the cache of The Complete Idiot’s Guide to Jumpstarting Industrial Civilization etched in gold plates in some underground bunker somewhere? Imagine something so common that they were routinely used as tools, money, jewelry, etc., but also durable enough to survive centuries of such uses.

          Also, could we help future civilizations out by making fake mines or caches of hard-to-make/hard-to-find materials somewhere? Here’s the best obsidian around, and hey, when you chip through all that obsidian for your tools, there’s a bunch of copper and tin lying around waiting for you to find….

      • Andrew Hunter says:

        If you were a time-travelling battleship, it’d be easy to demonstrate, as you’re made entirely of low-background steel. I don’t think there’s a good modern equivalent, but maybe? Dissolved levels of various modern chemicals/plastics in blood?

        • Eric Rall says:

          There are pollutants that accumulate in fat cells, most notably methylmercury compounds and organic chlorine compounds (dioxin, etc). I don’t know if these are pervasive enough that anyone who isn’t a time traveler would have unambiguously detectable background levels, though.

        • bean says:

          A time-travelling battleship is easy to figure out in general. Just take me there. And then fire a broadside. No, that part isn’t strictly necessary, but it would be cool.

          The hard part with inanimate objects is proving that it took a shortcut, and didn’t just go the long way in a very good state of preservation.

          Actually, I think it might be possible to look at the isotope composition of various cells in a person, too.

      • Nornagest says:

        Say we’d like to prepare for the contingency that humanity dies out in a non-biosphere-ruining manner, and leave a message for a potential intelligent species in Earth’s evolutionary future. So we’re looking for a way of preserving a fair bit of readable (and detectable) information for a few hundred million years or so.

        Stamp your message into gold foil, with some Pioneer plaque-type headers to make it obvious that it’s artificial and give the beetle people a head start on translation, and deposit it somewhere that you think’s going to make a nice fossil bed in a million years or so. Do this a few thousand times, to improve your chances of detection.

        • RavenclawPrefect says:

          How easy is it to find a sheet of gold foil several meters under rock, if you’re a civilization far back enough technologically that our current info would be of much use? I think for this to be effective you’d need a good way of saying “Big conspicuous artificial something happened here, best investigate it.”

          • Nornagest says:

            It’s not, but fossil beds are the sort of thing that’s going to be investigated thoroughly if the beetle people are as curious about their world as we are. They’re not going to find all of them, but that’s why you do it many times.

            The basic problem is that it’s very hard to make a big conspicuous artificial anything that’s stable for hundreds of millions of years. The guys at the Long Now Foundation are trying to make a clock that’ll last for ten thousand years, and that’s a major engineering project. We need ten thousand times that, at minimum.

      • Eric Rall says:

        1. Handwriting is fairly unique under close examination, given a good enough sample set of documents of known provenance. And upper-class Victorians tended to write a lot of documents (letters, personal records, legal documents, etc) in their own hands, and these documents seem decently likely to have survived. If you can locate and get access to documents you wrote before your time-jump, you can write fresh documents on demand in the same handwriting for a forensic document examiner to verify.

        2a. Part of the problem is getting the message understood. Despite diligent effort, we’ve failed to decipher a number of bodies of writings, written by humans a mere few thousand years ago. What we can read, we can usually read because 1) the language is still around in a recognizable form, or 2) there’s parallel texts (same content in the dead language and one we know how to read) that we can use for reference. It’s a pretty safe bet that anything we try to say in prose isn’t going to get through. There’s a better chance with information that can be conveyed through diagrams (geometric theorems, maps, blueprints, etc).

      • The Arcadian says:

        There’s one crucial difference between your version and Gwern’s – in the original, it’s the modern scientist who is tasked with finding a proof, not the Victorian aristocrat. The time traveler has an enormously more difficult task, because they have to come up with a proof within the bounds of 19th-century science, where the proof cannot be faked with 21st-century technology.

        • RavenclawPrefect says:

          I intended for the task of proving it to include such things as “go ask people on SSC for ideas of what to do”, or “go to the nearest library and read some fortuitously chosen science books”. No reason one can’t just walk into a lab with something weird like smallpox scars and get them intrigued enough to run more expensive and harder-to-fake tests.

      • LesHapablap says:

        With a week’s notice there are lots of ways to get into the news, put fingerprints in a time capsule, that sort of thing.

        Without a week’s notice, the traveler will be limited by their lack of knowledge of the present. Teeth can be reliably dated to the modern era due to low levels of radioactivity, different levels of lead in the environment, etc., but the traveler would not know this. The key, then, would be to seek out someone from a profession that will have that knowledge or be curious enough to get it.

        The traveler should head to the nearest university, claim to be from the past, and offer a grand prize to whomever can disprove it.

        • RavenclawPrefect says:

          This inspires another question: if you could grab as much stuff as would fit on your person, how would you maximize the amount of prize money you could raise for this competition? (We’ll grant some precognition or exceptional guesswork to make the right choices.)

          The first things that come to mind are large amounts of small antiques, like rare coins in mint condition, and perhaps any valuable metals that have increased in value substantially over the centuries (though taking inflation into account, I don’t know of any such). You might try nicking some historically interesting pieces of paper, but those would probably look fake when as well-preserved as yours would be. I imagine stocks are unlikely to have been maintained well enough from the 19th century to 2018 for any brilliant business investments to pay off, if you can even find anything still around in the present day.

          A weirder but potentially very lucrative scheme (and one that gets you some temporal proof as well): stuff a breeding pair of passenger pigeons in your pocket, go to some well-funded biology department posing as a pioneering geneticist.

          • James C says:

            A weirder but potentially very lucrative scheme (and one that gets you some temporal proof as well): stuff a breeding pair of passenger pigeons in your pocket, go to some well-funded biology department posing as a pioneering geneticist.

            Though that’s one that does require future knowledge, as no one at the time really thought the passenger pigeon could ever go extinct. Although, if you could go back and forth on request, it’d be something cheep and easy to acquire as proof.

          • Jiro says:

            I know you mentioned pieces of paper, but postage stamps may be the way to go, if you didn’t come from before they existed. If you’re from a certain narrow time range you could bring comic books.

      • AlphaGamma says:

        Something that a Victorian would know about, but would work *better* with modern technology: cut down a tree, and bring a disc or wedge of the trunk from core to bark with you. Dendrochronology, which was known to the Victorians, will give the date that the tree was felled. This, plus the fact that the wood is from a just-felled tree, should prove time travel. With modern tech, radiocarbon dating (and lack of a bomb pulse) could help corroborate it.

      • proyas says:

        It’s easy to hop in a time machine and prove your authenticity as a person from the future beyond a shadow of a doubt: show off your fancy tech and knowledge and stock market predictions. But how would you prove that you are a time traveler from the past? For specificity, suppose you’re a reasonably well-educated Victorian aristocrat who has found themselves in some English-speaking part of the world in 2018, carrying nothing but yourself and the items on your person. You should aim to overcome the suspicion of someone whose prior assigns more probability to the hypothesis that an eccentric millionaire has staged a particularly elaborate hoax. (Easy mode: give yourself a week to prepare before the jump.)
        Before you travel to the future, go to some place that is unlikely to be demolished or dug up between now and 2018, and hide some kind of artifacts there. In 2018, you could then lead the authorities to the artifacts, and your foreknowledge of their location and contents would serve as strong proof of your authenticity.

        A quick and easy way to do this might be to buy a newspaper in the past, tear it in half, and leave one half hidden at the secret location and put the other in your pocket. In 2018, the text, ink, paper, and tear patterns, and all sorts of other characteristics would perfectly match the hidden newspaper fragments with the ones you had, but carbon-14 dating would show an age difference consistent with your time travel claim.
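
        The size of the carbon-14 effect is easy to estimate. A sketch of my own, assuming a roughly 140-year gap between a mid-Victorian departure and 2018:

```python
# Fraction of carbon-14 remaining after a given number of years,
# using the standard 5,730-year half-life.
def c14_fraction_remaining(years, half_life=5730.0):
    return 0.5 ** (years / half_life)

# The hidden half ages ~140 years; the pocketed half skips them entirely.
gap = 1 - c14_fraction_remaining(140)  # about a 1.7% difference in C-14
```

        A ~1.7% deficit in the buried half is small, but plausibly within reach of modern accelerator mass spectrometry, and the matching tear patterns rule out the halves simply being two different sheets.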

      • AG says:

        Nice try, Keanu Reeves.

      • eclairsandsins says:

        If I were going to the future from now, I think I could prove that I’m from the past by storing a time-stamped and digitally signed prophecy of my arrival in some blockchain, then, when I arrive, prove I am the writer of the message by signing something with my private key. The only problem is, how do you prove that you didn’t just give the private key to your descendants? Also, this relies on the assumption that the encryption scheme you are using won’t be broken by the time you arrive.

        Of course, a Victorian aristocrat would not know about public-key cryptography.
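
        For what it’s worth, the same idea can be sketched with a hash-based commit-reveal instead of a full signature scheme — my substitution, not what the comment describes, and it shares the same weakness that the secret opening could simply be handed to a descendant:

```python
import hashlib
import secrets

# Commit-reveal stand-in for the signature scheme described above:
# publish the commitment (e.g. timestamped in a blockchain) before the
# jump, keep the opening secret, and reveal it on arrival.
def commit(message):
    nonce = secrets.token_hex(16)          # random blinding value
    opening = nonce + "|" + message
    return hashlib.sha256(opening.encode()).hexdigest(), opening

def verify(commitment, opening):
    return hashlib.sha256(opening.encode()).hexdigest() == commitment
```

        Anyone can later check that the revealed opening matches the long-published commitment, which proves the message existed before the publication date — though not who wrote it.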

    • tophattingson says:

      If you are ok with puzzle games, you might be interested in some of the games by Zachtronics.

    • Soy Lecithin says:

      What’s the largest number of consecutive “had”s you can have in a grammatically correct sentence?

      “I had eggs for breakfast,” is 1. “She had had enough,” is 2. Can we keep going?

      The “had”s should be used meaningfully in the sentence. For example, the sentence “Mary had ‘had’ written on her paper,” would count as 1, not 2.

      This might not fit all your criteria as it’s not necessarily easy to know that you’ve got the right answer.

      • RavenclawPrefect says:

        Thanks, that makes three I know of now! [Spoilers below, for those who might want to think about this some more.]

        You can have arbitrarily many. Proof by induction:

        The disease which the patient had was contagious.

        The disease which (the patient the doctor had) had was contagious.

        The disease which the patient the doctor had had was contagious.

        The disease which the patient (the doctor my friend had) had had was contagious.

        The disease which the patient the doctor my friend had had had was contagious.

        The disease which the patient the doctor (the friend I had) had had had was contagious.

        The disease which the patient the doctor the friend I had had had had was contagious.

        Etc.

        Turns out you only need one form of the word to make this work – when you add in all the parentheses, no “had had”s are really going on here. You can generalize this approach to any word where (prelude [semantic type] word) or (word [semantic type] postlude) itself constitutes an instance of [semantic type] – in this case a noun phrase. So e.g. “the guy the teacher the cat she likes likes likes is a jerk”, this xkcd.
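
        The induction is mechanical enough to automate. A small sketch of my own; the particular noun phrases are placeholders:

```python
# Build a grammatical sentence containing n consecutive "had"s by
# center-embedding n relative clauses, as in the induction above:
# "the patient the doctor had" is itself a noun phrase, so each new
# owner adds one noun phrase up front and one "had" at the end.
def n_hads(n):
    owners = ["the patient", "the doctor", "the friend", "the nurse"]
    chain = " ".join(owners[i % len(owners)] for i in range(n))
    hads = " ".join(["had"] * n)
    return f"The disease which {chain} {hads} was contagious."
```

        For example, `n_hads(1)` gives the base case above, and each increment of `n` adds one more owner and one more consecutive “had”.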

      • Fitzroy says:

        “The “had”s should be used meaningfully in the sentence. For example, the sentence “Mary had ‘had’ written on her paper,” would count as 1, not 2.”

        Not allowing for mentions as well as uses takes most of the fun and point out of these kinds of word games. With use and mention you can get to eleven consecutive ‘hads’. With pure use I can’t think of a way you can go past two grammatically.

    • helloo says:

      A lot of math puzzles can be shortened to “dumb tricks” though.

      How do you add a list of increasing numbers? Pair them up from the ends so it becomes a list of half the length with equal sums.

      Everyone pairs off and competes, then the winners do the same until there’s one winner. If there’s an odd number of people, the odd one out gets a free pass to the next round. How many matches are played? One less than the number of players (as each match reduces the number of competitors by 1).
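
      Both tricks are easy to sanity-check in a few lines (my sketch; the end-pairing version assumes the numbers form an arithmetic progression, where opposite ends always sum to the same total):

```python
# Gauss's end-pairing trick: each of the count/2 pairs sums to
# first + last, so the whole sum is (first + last) * count / 2.
def gauss_sum(first, last, count):
    return (first + last) * count // 2

# Single-elimination bracket with byes: every match eliminates exactly
# one player, so the total is always players - 1.
def matches_played(players):
    matches = 0
    while players > 1:
        matches += players // 2               # pair off this round
        players = players // 2 + players % 2  # winners plus any bye
    return matches
```

      The simulation agrees with the counting argument for any bracket size, byes included.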

      As for non-math – there are some words that aren’t easily translatable (either to or from).
      The subreddit r/DoesNotTranslate collects examples. Generally they can still be defined, but often through a phrase or metaphor rather than directly.
      Is it possible to have an idea that is literally untranslatable to a certain language? To all languages? For any language, does there exist such a word/idea?

      • johan_larson says:

        There are definitely words that are difficult to translate. “Talko” from Finnish might be translated as “community work day”, but the translation sounds like a euphemism for forced labor and the original is a positive neighborly chance to do something useful that’s at least not the same old grind, sort of like “barn raising” back in grandpa’s day. The Russian “niekulturni” means an uncultured person, but the implication is more like “white trash” without the racial specificity.

      • thevoiceofthevoid says:

        I’d call those “clever tricks”, not “dumb tricks.” A clever trick helps you find the right answer to a difficult (-seeming) but well-defined question in an easy way. A dumb trick as I think Ravenclaw means it is something where the intended answer to a riddle depends on the question being misinterpreted, or on some hidden assumption that would be impossible to guess unless you’ve heard the riddle already.
        For example: [content warning: suicide. These bad riddles are often really dark, for who knows what reason.] A blind man, one of the few survivors of a shipwreck, was rescued after being stranded on a desert island for a month. He goes to a restaurant and orders albatross. As soon as he tastes it, he leaves and commits suicide. Why?
        Answer, rot-13’d: Ba gur vfynaq, gur zna’f sevraq qvrq. Fvapr gurl unq abguvat ryfr gb rng, nabgure fheivibe srq uvf pbecfr gb gur oyvaq zna naq gbyq uvz vg jnf “nyongebff.” Jura ur ngr erny nyongebff naq vg gnfgrq pbzcyrgryl qvssrerag, ur ernyvmrq jung unq unccrarq.

        As for the untranslatable ideas…My first thought is something technical like “Fourier transform”, to a language used in a pre-industrial society. Ideally one of the ones (if the articles I vaguely remember reading at some point were actually correct and not feeling-culturally-superior-mongering) that doesn’t have distinct words for numbers greater than two.

        • On the subject of riddles, here is one intended for someone familiar with Norse literature:

          I was a hostage for one
          Who being brave broke faith.
          Now I and my twin brother are parted forever.

          Who am I?

          • Nornagest says:

            Gle’f evtug unaq.

          • dick says:

            V gubhtug bs Bqva’f rlr.

          • Nornagest got it. Dick didn’t.

            So here is another:

            Because I was overlooked,
            One who could
            Gave me to one who could not
            To use as he did not intend.

          • Nornagest says:

            Zvfgyrgbr.

          • Again correct. Here’s an easy one:

            Could not bear up the foe of a mouse
            As a bright-eyed bride brought down the house

            I can offer some more, but they are based on the sagas not the Eddic material.

          • Nornagest says:

            That one actually took me a bit longer, but it’s cebonoyl Gube. Gur “sbr bs n zbhfr” vf sebz gur fgbel bs Hgtneq-Ybxv, naq gur “oevtug-rlrq oevqr” vf sebz gur fgbel bs Guelze naq gur Oevfvatnzra, jurer ur qvfthvfrf uvzfrys nf Serln naq unf uvynevbhf fvgpbz nqiragherf.

          • Yes.

            Part of me bought all of me from a bloody weapon’s hold.
            Whose head am I?

          • Nornagest says:

            Ybxv’f. Ur raqrq hc bjvat uvf urnq gb fbzr vengr qjneirf, ohg cbvagrq bhg gung ur qvqa’g bjr nal cneg bs uvf arpx, naq tbg uvf yvcf frja fuhg sbe orvat n fznegnff.

          • No. Where is the “bloody weapon’s hold”?

            I warned you that I was switching from the Eddic material to the sagas.

          • Nornagest says:

            Had a feeling that wasn’t quite right. I don’t know the sagas as well as I do the Eddas, I’m afraid (there’s a lot more of them, for one thing).

            I’m guessing it’s Rtvy. Gurer’f na rcvfbqr jurer ur fnirf uvf urnq sebz Revx Oybbqnkr (gur “oybbql jrncba”?) ol pbzcbfvat n cbrz sbe uvf enafbz.

          • That time you got it.

            Egil in York. His tongue saves his head. The bloody weapon is the clue.

            Here is another from the Eddic stuff:

            I met my mate,
            Which lost a bet,
            But swiftest treasure
            Did I get.

          • aho bata says:

            Ybxv? Ol gnxvat gur funcr bs n zner naq qvfgenpgvat gur qvivar jbexubefr Finguvysnev, ur ceriragrq Finguvysnev’f znfgre sebz svavfuvat gur fgebatubyq ur unq cebzvfrq ur jbhyq ohvyq gur Nrfve. Va gur cebprff (f)ur (or)tbg Fyrvcave, gur snfgrfg ubefr.

            Here’s another saga one, with a skaldic flair:

            The guest, to God and to Odin,
            To grave and to truth, no ruth
            Nor mercy he shows, nor shies
            From shades to their ends a-sending.
            Conquering for Christ, making fast
            The true creed, the fire on reed
            And hallower of fire, lovely Fjǫrgyn’s
            Fruit to ash his lord dashes.

        • Nornagest says:

          A blind man, one of the few survivors of a shipwreck…

          For what it’s worth, I got that one within seconds, and I don’t think I’d heard it before. Maybe I’m just morbid.

          • So did I, assuming that the answer was:

            Ur unq orra srq gur syrfu bs bar bs gur bguref, gbyq vg jnf nyongebff.

          • thevoiceofthevoid says:

            …maybe I’m just bad at riddles, I guess.
            There’s a worse one in which a blind man commits suicide on a train after a surgery to restore his vision. [Yeah I really don’t know why all of these are about blind men commiting suicide.] Why? Orpnhfr gur genva jnf va n ghaary fb ur gubhtug gur fhetrel unq snvyrq.
            I think it really depends on how much of the relevant information the question decides to give you. I think the phrasing of the albatross one I’d heard originally might have been “a man goes into a restaurant, orders albatross, tastes it, and commits suicide. Why?” Though it might have also been one of the ones where you’re specifically supposed to ask clarifying questions, which would again redeem it.

          • Gobbobobble says:

            There’s a worse one in which a blind man commits suicide on a train after a surgery to restore his vision.

            That’s not worse than pnaavonyvfz. When you said worse my mind went to gur Qnir Punccryyr oynpx juvgr fhcerznpvfg fxvg

        • fion says:

          Jeez, it’s hard enough explaining “Fourier Transform” to people in our society!

          For that matter, it’s hard enough explaining it to physics undergraduates…

      • nameless1 says:

        I think truly good writing is always hellishly hard to translate. Words have one or more denotations and a number of connotations; some are strongly implied, some weakly. A good writer will choose a word that implies the connotations he means strongly strongly, and the connotations he means weakly weakly. It is pretty likely no word in the other language will have the exact same range of strong and weak connotations.

    • dick says:

      So what are some good non-math puzzles?

      I empathize, I also like having a go-to thing to ponder while cycling, waiting in line, etc. Over the years I have sometimes spent time on oddball hypotheticals like the time machine one (others include: “Could you avert 9/11 if you knew about it a month before-hand?” and “if you were time-travelled back to some point in the past with your current knowledge, how long could you fool the locals in to thinking you knew how to make a perpetual motion machine?”).

      But the ones I always come back to are software projects: decentralized social media, non-dumb uses for ETH, video games that would be fun but within reach for a single dev, etc.

      • AG says:

        video games that would be fun but within reach for a single dev

        Not all men can create Cave Story.

        Or Universal Paperclips. (Though, technically, he did have help from at least two others, for the combat system and the Threnody song.)

    • watsonbladd says:

      Bring a book. Paper ages, and no book from that time survives in a great state of preservation. At the same time, a printing can be authenticated by examining the details of the presswork.