OT102: Difference Of Openion

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. The Future of Humanity Institute is starting a “Research Scholars Program”, offering salaried positions plus training and mentoring to early-career researchers interested in the same big-picture topics FHI is – AI, existential risk, far-future technologies, utilitarianism, and the like. Would probably involve moving to Oxford. See more information here – they seem to want “expressions of interest” by May 25.

2. Comments of the week: a German economist explains ordoliberalism, a lawyer makes a surprising case for why one might not want to ban a revolving door from regulatory agencies to industry, Nabil al Dajjal tries to summarize the latest Hotel Concierge (if only there were something in between Nabil’s length and Concierge’s), and a bunch of people have a very long debate about why the FAA does what it does. If you guys had just written this up as an adversarial collaboration, you could have been well on your way to winning $2000 by now.

3. Congratulations to the subreddit on reaching 10,000 subscribers.


846 Responses to OT102: Difference Of Openion

  1. trivialize says:

    I’m commenting on the “case for revolving doors” comment.

    First, I have roughly the same background as theory, the previous commenter. I’m also part of the big corporate law firm world, and his description of the dynamics is 100% true in my experience and widely understood there as well. In the legal field the revolving door attracts highly talented, ambitious, and aggressive lawyers into the government because they know there is a payoff. The truth is, many if not most of these people are not doing it for the revolving door – they are genuinely interested in public service and tend to go back into the private sector because they start having kids and the salary gap just gets bigger every year you stay in government. “No revolving door” for these people is basically asking them to agree to what will, by their mid-30s, be a 90% pay cut for the rest of their career. Theory is also correct that the incentives for going back to the private sector are to make a name for yourself by winning big cases. For an example of this: the number one revolving door legal job is the US Attorney’s Office for the Southern District of New York – their current most high-profile case is the one against Michael Cohen. I don’t want to politicize this, but if you follow what they are doing it is apparent to me as a lawyer, and I think it also would be to a non-lawyer, that the people on that case are very smart, very tough, and very aggressive.

    One caveat to this is that the strongest case for this benefit of the revolving door is with respect to lawyers working enforcement-type jobs – that is, the people whose job is to identify people breaking the law and, depending on the role, bring criminal or civil charges. The incentives are less strong where regulators have different jobs (like writing the regulations or deciding whether or not to approve this or that project). The legal profession also has been dealing with this and similar issues for quite a long time, and is pretty good at inculcating a sense that your loyalty is to your present client, not whoever you hope to work for. Whereas someone with, for example, a science background doesn’t get conflict-of-interest issues beaten into their head throughout their schooling, or have some professional body that can ruin them if they screw around with this stuff.

    The other caveat I have is that I think the biggest worry about the revolving door is not outright corruption – I go easy on you b/c I want a job from you – but two related phenomena. 1) The revolving door means that the people advocating on behalf of the regulated have professional relationships or even personal friendships with the regulators and 2) the revolving door causes regulators to see the regulated as their peers, which naturally leads to a more sympathetic stance towards the regulated. Again, the vast majority of this is not corrupt. It’s simply a matter of, if you are representing industry, would you rather be trying to persuade a total stranger or that guy from work who you always chatted with in the elevator and got a beer with once in a while, and whose general views you are acquainted with? The second issue is seeing the people you regulate as your peers. This actually is more due to a less talked about part of the revolving door – people coming from the private sector into government. To use an example I know well – law firms. The big corporate firms who supply the young lawyers who enter the US Attorneys office do a lot to make you feel like part of the Wall Street club. They bring you to country clubs, they take you out to fancy dinners, they bring you to socialize with clients. Note – this is not for the nefarious purpose of trying to compromise future prosecutors, it’s because they want to groom their young attorneys to be able to be future partners who can go golfing with a client. But it does have the effect of causing the young lawyers who do become prosecutors to identify with the sorts of people they are considering prosecuting. It does subconsciously have an impact on people’s outlooks.

    On the other hand I think there are some other big reasons why the revolving door is actually a necessity given other features of the regulatory environment.

    First, it is quite hard for the government to keep up with rapidly developing or changing industries. Imagine yourself at the SEC and CDOs have just been invented, or you are at the EPA and fracking has suddenly become a thing. How do you go about getting up to speed? You could ask the industry to explain it to you, but you know they will tell you that this new thing is great, and it is of no risk to anyone, and it creates a million jobs per month. You could hope to get a whistleblower – but that’s unpredictable, and if you don’t have a basic understanding of the industry, how do you know whether you have a whistleblower identifying serious problems or some disgruntled former employee with an ax to grind saying whatever he has to in order to settle some personal gripe? You could hope to catch someone in the industry doing something so illegal that you don’t actually need to understand the industry to squeeze a guilty plea out of them – but are you gonna trust that guy? You could hire consultants, but that’s even worse than the revolving door. Or you could hire someone from the industry, but you can’t do that if you ban them from going back to industry. Note – the information gap is a BIG problem even with these revolving doors in the government. I’ve been in situations where the government is trying to investigate a very complex field and was relying 100% on general interest press articles.

    The other advantage of a revolving door is that employees involved will be more conscious of the potential for capture, and less likely to get totally rolled by industry. As I said before the revolving door makes it easier for people who join the private sector to gain influence at the margins through personal relationships. But because the government workers are aware of this, many if not most will make some effort to correct for it, as well as use any knowledge about their old coworkers to their advantage. On the other hand, without a revolving door the biggest risk is that the regulators get totally rolled b/c they are not savvy enough about their counterparts in the private sector.

    Given the above, here are two reforms that would make more of a difference than a revolving door ban. 1) Massively increase the number of government employees who are simply tasked with doing original research to understand the industry their agency regulates. That will help to close the knowledge gap. Some of that gap is probably inevitable, but thanks to the sequester it’s getting much worse, with agencies not being able to hire enough. 2) Shrink the salary gap by paying employees more in fields where private sector work is lucrative. Would there be some griping if government lawyers started making high-six-figure or even seven-figure salaries? Yeah, but at some point we have to accept this as necessary if we want top people to join government and stay. Government pays market price for everything else, why not employees? You really wouldn’t even have to match – you just can’t have such a huge gap where people’s lifestyles are drastically downgraded by staying in government work. Also, if I’m right that the revolving door issue is less a corruption thing than an affinity thing, and if I’m right that the revolving door has some benefits as well, then the optimum revolving door level is somewhere above zero, but probably below where it is now, so closing the salary gap can help adjust this on an incremental level.

    One other thing – a ban on lobbying specifically by former government employees seems to have much more upside and less downside than a general ban on returning to the industry you regulated. That’s because 1) people who have a high degree of technical skill in a field are probably not joining government because they want to become lobbyists later, and would happily join government as long as they knew some non-lobbying job was waiting for them afterward, and 2) you are narrowly targeting the activity where personal relationships are most easily exploited. You could even make the rule so narrow as to only forbid lobbying contact, so that a former government employee can plot out the whole lobbying strategy, and even write out scripts for others to deliver, and you’d still probably reduce the potential for regulatory capture.

  2. Thegnskald says:

    So, something I have noticed becoming more common lately in the contemporary media I consume:

    Representation among protagonists is roughly evenly distributed. Representation among antagonists is not.

    So, about half the “hero” team will be women, for example.

    Among the villains, the vast majority will be male, and white where relevant. If a minority is represented among the antagonists, they will almost always have a change of heart and become a protagonist later.

    (This became painfully apparent in a particular book series I was reading. The first book, written much earlier than the later books, had an unbalanced cast, so excuses were found to systematically kill off many of the male protagonist characters to make room for new female protagonists; villains established as, for example, gay switched sides, while all the straight white male villains died for their sins.)

    There is also the ever-present Dances with Wolves problem – previously, white men always ended up better at whatever the natives did than the natives were, whereas the modern version is that the women who take up male-coded-in-universe tasks end up better than the men.

    It is very lazy writing, and if someone wants to know what people complain about with SJW influences on writing, this is a broad part of it. None of this is new, mind, but it does seem to be getting more blatant lately.

    • Nornagest says:

      I think I’d have a better idea of where and how this happens if you gave a few examples.

      • Thegnskald says:

        The Demon Cycle was the latest example; I am picking up on it elsewhere, but that was the most recent series I bothered to finish. It doesn’t tend to correlate well with great writing, and I usually abandon books that exhibit this behavior before I see if they do anything more interesting. (Honestly, I don’t know why I finished that series; the prose was pretty awful, and half the plot felt like the author was rolling dice to determine what would happen.)

        Half the male cast was killed off in the last three books and replaced with female equivalents. The book’s Islamic Misogynistic Jihad Society (that was mildly cringy), in which all fit men are trained as warriors, got its comeuppance when women started to be allowed to be warriors (and all the women did much better than the men, even without training). All the villains were men.

        Mind, the first book was pretty awful for the reverse reason, in particular falling prey to the “Women only get character development when they get raped” trope, whose name I am unaware of. But reversed stupidity isn’t intelligence, and all that.

        • Nornagest says:

          I’m tempted to say “stop reading crappy books”. But truthfully I don’t know how common these tropes are; I make a habit of not consuming media if the first thing I hear about it is how woke it is, which does a pretty good job of preserving my blood pressure but which rules out an alarming amount of stuff produced in the last five years. And I’m pretty burned out on doorstopper fantasy.

          (Oh, and I think that falls under “rape as drama”.)

          • Le Maistre Chat says:

            I’m tempted to say “stop reading crappy books”. But truthfully I don’t know how common these tropes are; I make a habit of not consuming media if the first thing I hear about it is how woke it is, which does a pretty good job of preserving my blood pressure but which rules out an alarming amount of stuff produced in the last five years.

            With all the classics, SF more than five years old but recent enough to not contradict current science, etc. to read, I feel like this isn’t a problem. We just need to make our friends aware of how contemporary books are bad for the mind. 🙂

        • Chevalier Mal Fet says:

          I’m not sure I agree that all the villains were men – doesn’t Inevra (I’m not going to look up how to spell her name) start off pretty villainous? And all the human villains that don’t get killed off tend to join Team Good Guy by the end anyway, and of course the Mother of Demons is, well, female.

          Plus I’m not sure you can say that all the women warriors were better than all the male warriors. You only saw the best of the best of the women, plus three female protagonists who had Secret Mystic Training From Hell with their (male) master. The two male protagonists are at least as proficient at fighting.

          Plus the Western European society is clearly more sympathetically portrayed than the Islamic Misogynistic Jihad Society (although to be fair a lot of the sympathy comes from its far better treatment of non-noble women). I dunno, I think you’re being unfair to the Demon Cycle here, which I thought was perfectly good escapist power-fantasy fiction.

      • Jiro says:

        Star Wars: The Last Jedi.

    • K.M. says:

      One of the reasons I love Gillian Flynn’s stuff is her take on female characters, especially villains (I’m afraid I couldn’t find her full essay anymore, but this should give you the gist).

      “To me, that puts a very, very small window on what feminism is,” she responds. “Is it really only girl power, and you-go-girl, and empower yourself, and be the best you can be? For me, it’s also the ability to have women who are bad characters … the one thing that really frustrates me is this idea that women are innately good, innately nurturing. In literature, they can be dismissably bad – trampy, vampy, bitchy types – but there’s still a big pushback against the idea that women can be just pragmatically evil, bad and selfish…. Not a particularly flattering portrait of women, [but that’s] fine by me. Isn’t it time to acknowledge the ugly side? I’ve grown quite weary of the spunky heroines, brave rape victims, soul-searching fashionistas that stock so many books. I particularly mourn the lack of female villains.”

      I’m afraid I don’t read enough modern stuff to confidently say if what you describe is increasing or not. I can easily think of examples, but I don’t want to draw conclusions just from that. I do get pretty concerned seeing how some examples (such as Star Wars) are really high profile, meaning someone was selected from a huge pool of applicants and paid a lot of money to write something of incredibly low quality.

  3. johan_larson says:

    You wake up one morning and after considerable confusion determine it is the year 2503. What happened?

    • Thegnskald says:

      I died in my sleep and the shortest probability path for the pattern of my consciousness to continue was, for some reason, someone a few hundred years in the future. It honestly wouldn’t confuse me that much – I sort of expect this sort of thing, since it doesn’t require me to die, that just makes the probability path shorter – but it would surprise me, since it is so unlikely.

    • dodrian says:

      Some five-ish years in the near future, after a continually escalating culture war, it is no longer politically correct even to say “BCE” and “CE”, as, let’s face it, just changing the names didn’t actually do anything to protect us from the microaggressions caused by being subject to the patriarchal western cathlo-imperialist calendar.

      Recently merged tech-titan-conglomerate Applebet decided to take matters into their own hands, doing us all a favor, really, and pushed out overnight OS updates, largely inspired by the recent phenomenal pop-cultural success of the kabuki-calypso-nangma-zulu musical Eu-R-D – a show with a first-nations cast and based loosely around events in the life of a famous ancient Greek playwright. Thus began the BE/EU calendar.

  4. johan_larson says:

    David Wong has an explanation of why the current media environment foments radicalization:

    What we do have is an explosion in the number of, let’s say, tight-knit groups that each push a specific worldview, and which recruit by playing on people’s fears and insecurities. They always promise to fill a specific hole that is very common in the lives of 21st-century young people by offering them:

    A) An explanation for how the world works, and by extension, why you should bother waking up in the morning — it’s usually framed as some sort of battle you must join;
    B) Instructions for how to live your day-to-day life;
    C) A social group you can hang out with and be proud of.

    These aren’t just fandoms or shared interests. They may start that way, but I’m specifically talking about the ones that turn their cause into an identity and lifestyle, to the point where eventually the tribe exists primarily to protect itself against the normies who would destroy it.

    • Nornagest says:

      tl;dr: the plot of Fight Club came true, but it’s much less metal than Chuck Palahniuk thought it would be.

  5. Bluesilverwave says:

    To briefly touch on the whole FAA discussion – I read a good bit of it and I think that a lot of people have a different mindset than the one the aviation industry brings to risk. I worked in aviation product development (albeit briefly – automotive pays a heck of a lot better), and got to do some FAA compliance work in the process, so here are a few thoughts:

    From reading the discussion, I think most folks are calibrated more or less to how NHTSA tries to do safety. For a bit of context: NHTSA has around 600 employees, and the FAA has around 45,000 (of which I think around 1/3 are air traffic control, and another few thousand are NextGen). That type of resource difference isn’t “a different league”; the FAA is in a whole different sport in terms of what they’re trying to do, and people flying random drones (or engaging in ad-hoc passenger service) stand to substantially muddy that water. This is a regulatory apparatus designed to “go-for-zero,” and it has basically achieved that in the US.

    You can really see this in the way that the FAA classifies hazards. Here’s an interesting AC for hazard classification levels when dealing with Part 23 aircraft (little planes) equipped with e.g. autopilots:
    https://www.faa.gov/documentLibrary/media/Advisory_Circular/AC%2023.1309-1E.pdf
    Refer to the table on Page 23. The way to interpret this is that e.g. a Type IV Part 23 aircraft (basically a CJ3 sized bizjet) needs to be robust enough that there’s less than a 10^-5 chance per flight hour of a “Major” event of any kind. While they define “Major” elsewhere, consider that a single screen in the cockpit going out briefly with no other effects at all on the aircraft is probably somewhere between “Major” and “Hazardous.”

    This verbiage probably looks familiar to anyone working in autonomy – ISO 26262 and IEC 61508 are both related to the FAA’s functional safety work (they’re all basically based on RTCA DO-178B I believe). The kicker is the FAA requires you to positively prove this performance too (no simulation-based shortcuts), generally through a probabilistic fault tree analysis the very long and tedious way.
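
    To make “the very long and tedious way” slightly more concrete, here’s a minimal sketch of the sort of arithmetic a fault tree rolls up; every rate and the gate structure below are invented for illustration, not taken from any real analysis:

    ```python
    # Minimal fault-tree arithmetic sketch. All rates and the gate structure
    # are invented for illustration; they are not from any real analysis.

    # Basic-event failure probabilities per flight hour (hypothetical values)
    p_display_unit = 1e-6   # one cockpit display fails
    p_power_bus    = 2e-7   # the bus feeding both displays fails
    p_sensor_feed  = 5e-7   # air-data feed to the displays fails

    # AND gate: total loss of display needs both (assumed independent) displays to fail
    p_both_displays = p_display_unit * p_display_unit

    # OR gate, rare-event approximation: any one of these causes the top event
    p_top = p_both_displays + p_power_bus + p_sensor_feed

    print(f"Top-event probability per flight hour: {p_top:.1e}")
    # ~7.0e-07, which would clear a 1e-5 "Major" budget with margin, but only
    # if every input number and independence claim can be defended.
    ```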

    Letting private pilots without commercial-grade equipment or training run ad-hoc “charter” flights outside of Part 135 regs, or letting random drone pilots with big, highly capable drones operate anywhere near an airport, is antithetical to this way of thinking. How do I prove to 10^-X that a drone won’t malfunction in a way that it flies into controlled airspace? Or that the drone pilot will stay in radio contact with the tower and not do it by accident? Does a Flyber pilot know how to safely bring a plane with innocent victims in it – people who probably imagine this is as safe as Delta – to the ground? Will they even have the right equipment to do so?

    It’s a minor miracle they decided to play as nice with drones as they have.

  6. Well... says:

    Cheapskate homeowner question from a guy who would rather work on a puzzle than mow his lawn:

    Assuming a flat, rectangular lawn of about 2400 ft^2 (40 x 60) without obstacles in it, what mowing pattern is the fastest and most efficient (in terms of gasoline and wear & tear on the mower), given that you are going to mow it yourself?

    Constraints:

    1. You are using a conventional gas-powered walk-behind mower that ejects cut grass to the right. You will not be weed-whacking, so to cut as close as possible to the borders of the lawn you must keep the borders on your left as you mow along them.

    2. Mowing over ejected grass for more than a few yards will cause you to have to stop and manually unclog the mower every now and then, and potentially make your engine seize up if you really let it get out of hand, so you must avoid mowing over ejected grass whenever possible.

    My answer is to start in the middle of the lawn and spiral out clock-wise (thus keeping ejecta inside the ever-growing mown circle and, after a couple turns, eliminating further need to turn in place more than about 60˚). When I have created two U-shaped areas of uncut grass separated by a circle of cut grass, I do one clockwise lap around the perimeter of the whole lawn, then cut each U-shaped section by spiraling in counter-clockwise along their edges. When the pointy bits on the ends of each U-shaped area get too small to turn around on without it being annoying I just stand in one place and move my mower back and forth to quickly eliminate them so that each area is shaped like a rectangle, then I continue spiraling in on it until it’s gone.

    Is there a faster and more efficient pattern? This seems faster and more efficient than the conventional perimeter-lap-then-back-and-forth-in-straight-lines pattern but I’m open to being proven wrong.
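
    For a rough sense of scale, here’s a quick sketch; the deck width and overlap are assumed just for illustration. The total cutting distance is fixed by the area, so the patterns mostly differ in how many turns you make and how sharp they are:

    ```python
    # Rough comparison of mowing patterns for a 40 ft x 60 ft lawn.
    # Deck width and overlap are assumptions for illustration only.

    lawn_w, lawn_l = 40.0, 60.0          # feet
    deck = 21.0 / 12.0                   # 21-inch deck in feet (assumed)
    overlap = 2.0 / 12.0                 # 2-inch overlap per pass (assumed)
    swath = deck - overlap               # effective cut width per pass

    area = lawn_w * lawn_l
    cut_distance = area / swath          # linear feet of cutting, same for any pattern
    print(f"Cutting distance: {cut_distance:.0f} ft")

    # What differs is turning. Straight lanes across the 60 ft dimension:
    lanes = lawn_w / swath
    print(f"Lanes: {lanes:.0f}, 180-degree turns: {lanes - 1:.0f}")

    # A spiral replaces most 180-degree turns with 90-degree corners:
    # roughly 4 corners per loop, and about (shorter side / 2) / swath loops.
    loops = (lawn_w / 2) / swath
    print(f"Spiral loops: {loops:.0f}, 90-degree corners: {4 * loops:.0f}")
    # So the spiral trades about 24 hard reversals for about 50 gentler corners;
    # which is faster depends on how much a 180 costs you versus a 90.
    ```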

    Bonus question:

    Your mower has an auto-drive lever that greatly reduces the amount of effort, but does it cost you in gas? I say it does, and I don’t use it.

    • powerfuller says:

      Wouldn’t starting on the outside and spiraling inward counterclockwise also keep the ejected grass on the outside? That seems easier than starting in the middle.

      • Well... says:

        The problem there is you can’t get close enough to the border, because the grass ejection chute (or whatever it’s called) gets in the way.

    • tayfie says:

      In the terms you specified (gasoline and mower wear), the most efficient thing is to not use the mower and cut the grass manually with scissors.

      Alternatively, the time saved with a marginally more efficient mowing pattern is likely insignificant compared to the time spent thinking about it. Just get out there.

      Alternatively, cut the grass more often so the clippings are not heavy enough to clog the mower and you can use any pattern you want.

      Alternatively, buy a bag to catch the clippings.

      Alternatively, buy a more powerful mower that doesn’t have that problem.

      Alternatively, buy a robot mower.

      Alternatively, make your wife or kids do it.

      Alternatively, pay a neighborhood kid to do it.

      Alternatively, pay a professional service to do it.

      For any solution that costs money, you can use time saved to make back more than you spent. Tutoring pays well, is flexible, and is probably suited to you, given that you post here.

      • Lambert says:

        And this is why engineering is as much about a good specification as a good design.

        Put a wide stake in the middle of the lawn and tie one end of a rope to it. Tie the other end to the side of the mower.
        It will mow in a spiral.

        Fuel costs will be negligible. Obligatory.

      • nameless1 says:

        >For any solution that costs money, you can use time saved to make back more than you spent.

        What? Why? Most people are not consultants or something. They have 9-to-5 jobs with a fixed salary. I thought about starting to consult but I am not at all sure I would find enough sales to work more than those 40 hours a week if I wanted to. The appeal of consulting would be to work actually less – if consulting firms charge €1000 a day and I undercut them as a freelancer charging €500 then I need only 10 working days a month to make a decent living. Why don’t I do this? Because it is nowhere guaranteed I could find even one client, no matter my skills – it is a function of demand.

        In short I think it is one of the spherical cows of the economists.

        Add the following. Whether work has utility or disutility follows a marginal scale. That is, people who work in a restaurant kitchen all day do not cook at home and they find watering their plants is a relaxing pastime. People who work watering plants all day find cooking at home a relaxing pastime. You just get bored of doing the same thing all the time but if you vary your tasks it is better, hence adding another 1 hour to your 8 hour day watering plants is worse than cooking at home for 1 hour, even if you earn enough for that 1 hour overtime to buy 1.5 dinners.

      • Nancy Lebovitz says:

        “In the terms you specified (gasoline and mower wear), the most efficient thing is to not use the mower and cut the grass manually with scissors.”

        No, you could use an old-fashioned muscle-powered lawn mower.

        • Well... says:

          I used to have one at my previous house. A top of the line Fiskars one actually. It was good when it worked and in general I liked it a lot, but even that one could not cut certain strains of grass, the alignment of the blades had to be adjusted frequently, and my current house has way too much lawn to make that a worthwhile method.

          Since I got my mower free and it doesn’t use up nearly as much gas as I thought it would (about 1 gallon per year), I’m happy with the deal.

      • Well... says:

        the most efficient thing is to not use the mower and cut the grass manually with scissors.

        I specified “fastest” also.

        Alternatively, the time saved with a marginally more efficient mowing pattern is likely insignificant compared to the time spent thinking about it. Just get out there.

        I’ve never sat inside thinking about it when I could have been out mowing my lawn. The thinking about it happens involuntarily, while I’m walking from my desk at work to the printer, or while I’m drying off after a shower, or… […] and then I try out stuff while I’m actually mowing. I also think about it then.

        Alternatively, cut the grass more often so the clippings are not heavy enough to clog the mower and you can use any pattern you want.

        That seems less efficient. Besides, I don’t want to cut my grass more often!

        Alternatively, buy a bag to catch the clippings.

        I have one but don’t use it. It would fill up quickly and I’d have to stop and empty it all the time (and they only pick up lawn refuse once every two weeks where I live and I haven’t built my compost bin yet), plus putting the cut grass back into the lawn is good for it.

        Alternatively, buy a more powerful mower that doesn’t have that problem.

        Alternatively, buy a robot mower.

        You somehow gathered from my post that I love spending money on my lawn…

        BTW, do the robot mowers have that same problem as the Roombas where they make maps of your property and then sell the data?

        Alternatively, make your wife or kids do it.

        My wife has a wrist injury and can’t even unlock the back door, let alone push a mower. My kids are both basically toddlers, but have no doubt: as soon as they’re old enough, they’re mowing the lawn.

        Alternatively, pay a neighborhood kid to do it.

        Alternatively, pay a professional service to do it.

        You got me! I have money stockpiled up, set aside just for the grass that surrounds my house.

        For any solution that costs money, you can use time saved to make back more than you spent. Tutoring pays well, is flexible, and probably suited given you post here.

        I got my lawnmower for free and it’s stored in a shed 50 feet from my back door. I’ve been using it for a year and am still working through the 2-gallon gas can that also came free with the house. Immediately finding and securing an odd job that occupies me only during the times when I would be mowing my lawn, that pays enough to cover the costs of having the lawn maintained by someone else, even the neighbor’s kids, is unlikely.

        Besides, as much as I hate mowing my lawn (or even having a lawn), there is a certain masculine pride and independence I feel at the end of the day from maintaining it myself.

    • beleester says:

      You will not be weed-whacking, so to cut as close as possible to the borders of the lawn you must keep the borders on your left as you mow along them.

      This is only true if your lawn is surrounded by a fence. If it’s possible to run your mower outside the border of the lawn (say, where the lawn touches the street or driveway) then you should do so, with the mower straddling the border and the ejector pointed outwards.

      If the lawn has no raised borders at all (like my parents’ house), then you should start at the outside, ejector pointed outwards, and spiral in to the center. That way you never mow over clippings and you could theoretically mow the entire lawn in one pass.

    • bean says:

      Your mower has an auto-drive lever that greatly reduces the amount of effort, but does it cost you in gas? I say it does, and I don’t use it.

      I’m sure it costs gas. But does it cost enough gas to make it not worth using? While I’m with you on paying someone else to do it, you say it’s taken you a year to burn through two gallons of gas. Even if it doubles your fuel consumption, you’re looking at spending maybe $5/year to make mowing the lawn a lot nicer. When my parents got the self-propelled mower, mowing the yard went from actively hellish to merely seriously unpleasant (it was a big yard on a steep hill, and outside in St. Louis in the summer is not a place I remember fondly even when I wasn’t pushing a mower.)

      I’ve also found that a lot of clogs can be cleared by just lifting the mower on the back wheels and letting the front fall back down. Do that every few yards when you have to cross the clippings.

    • johan_larson says:

      I would first cut the outside edge, going clockwise, so I can get as close to the edge as possible, without the ejection port getting in the way. I would then cut the rest of the lawn in a spiral, going counterclockwise, so the ejection port points outward, letting the ejected clippings get spread mostly evenly over the rest of the lawn, rather than getting concentrated in the middle.

      This pattern requires you to mow often enough that you can leave the clippings on the lawn, rather than needing to collect them.

    • rahien.din says:

      I worked with a lawn 60′ wide by 40′ tall. Assuming your mower path is 24″ wide, this divides the whole lawn into 24″×24″ units. I also assumed that the non-negligible clippings travel about a mower’s width (this is plausible from my own mower).

      Start at the point (11 units, 10 units) and mow left-to-right for 11 units. Mow down for one unit, then right-to-left for 11 units. 10 of these units will contain grass clippings, but that’s all the clippings you re-mow. Mow up 2 units, then across 12 units. Continue to spiral in this fashion until you end at the top-left unit.

      Clean border, minimized travel time, minimized re-mowed clippings.

      Your mower has an auto-drive lever that greatly reduces the amount of effort, but does it cost you in gas? I say it does, and I don’t use it.

      It depends.

      Gas use is a function of the driveshaft’s rotational speed. The upper bound on that speed is set by the engine’s design parameters, and we can plausibly conclude that this is at an equilibrium point for mower efficiency (a tension between gas usage and wear-and-tear, and cutting ability). Probably any deviation from that is a decrease in efficiency, which means, more gas used per unit of grass.

      For the mower in question, there is no way to change engine speed and no transmission. The only influence on engine speed is back-torque from the blade when it is given too much grass to cut. Excess grass means a slower blade speed means the engine slows down means the engine is less efficient and uses more gas per unit of grass. For most operating environments, the grass load does not exceed this limit, and the amount of back-torque is negligible to the mower’s operation.

      Engaging the auto-drive diverts some driveshaft torque from the blade to the wheels. This has basically the same arithmetic effect as adding grass to the blade’s load. Essentially, this shifts the entire curve in the wrong direction.

      If the grass load is low to begin with (say, you cut frequently, or your grass is dry and delicate to begin with) then engaging the auto-drive will decrease the amount of time you spend without decreasing cutting efficiency or significantly altering engine speed. In this case, it pays to auto-drive.

      If the grass load is high to begin with (as seems to be the case you give us here) then engaging the auto-drive will significantly increase the risk of non-negligible back-torque, ultimately costing you more gas per unit of grass.
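
      A toy way to put numbers on the above; every figure is invented just for illustration:

      ```python
      # Toy power-budget model of the self-propel tradeoff. All numbers invented.

      ENGINE_POWER = 3.5   # kW the small engine can sustain (assumed)
      DRIVE_POWER  = 0.4   # kW diverted to the wheels by the auto-drive (assumed)

      def blade_keeps_up(grass_load_kw, self_propelled):
          """True if the blade still gets enough power to cut without bogging down."""
          available = ENGINE_POWER - (DRIVE_POWER if self_propelled else 0.0)
          return grass_load_kw <= available

      for load in (2.0, 3.2, 3.4):
          print(load, blade_keeps_up(load, False), blade_keeps_up(load, True))
      # Light load: fine either way. Near the limit: only pushing keeps the blade
      # above the load, which matches the "it depends on grass load" conclusion.
      ```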

  7. a reader says:

    Quite late to the party, but I’ve been thinking about the differences of opinion between the Blue Tribe and the Red Tribe (remember Scott’s article “I Can Tolerate Anything Except The Outgroup”). Maybe the hostility between the two tribes comes from each tribe trying to impose its ways on the other, considering its own ways the moral and just ones. According to Jonathan Haidt, the Blue Tribe (the WEIRDs, as he calls them) is sensitive to only 3 moral foundations (Care/Harm, Fairness and Liberty), but the Red Tribe seems sensitive to 3 more (+Loyalty, Authority, Sanctity). And the two tribes are quite separate geographically, the Blue Tribe being mostly urban/suburban and the Red Tribe mostly rural. So I had a weird idea: what if the two tribes would separate, as states in the US are separated presently, to have each their own laws regarding culture war aspects? The cities would be sanctuaries for immigrants and the countryside would have no immigrants; in the cities it would be forbidden for bakeries to refuse to bake for a gay marriage, in the countryside gay marriages would be forbidden; in the cities all places would have to let transgender people use the bathroom of their chosen gender, in the countryside only that of their birth sex; in the cities the schools would teach evolution and sexual education, in the countryside they would teach religion and “the controversy”; in the cities guns would be banned and marijuana allowed, in the countryside guns allowed and marijuana banned; etc. etc. etc. Those culture war laws would be enforced not by punishments like prison or fines, only by sending the persons that don’t conform to the other side – but probably most people would “segregate” willingly. What do you think, would something like this work? If not, why not? Could people agree to disagree on such topics? Or are they too fundamental for both parties to admit disagreement?

    • albatross11 says:

      It seems to me that a lot of Blue Tribe reaction to racism/sexism/homophobia/etc. looks exactly like a sanctity/cleanliness reaction.

      • dndnrsn says:

        This is… uncharitable. It’s easily explained as care/harm. Stuff that looks like “purity” or whatever is often more… I don’t know what you’d call it, exactly. Projection? I’m trying to gesture at stuff like well-off white people who denounce racism fervently but select where they live based on “good schools” – but is the denunciation a purity reaction, or just hypocrisy?

        I will say this: there are some things that make more sense when they’re framed as non-care/harm issues, but people who hold care/harm-heavy views interpret them as care/harm. I think this is what is going on with a lot of cultural appropriation complaints: stuff that conservatives (or even trads) would agree with if phrased as loyalty/betrayal or respect/subversion complaints gets filtered through the care/harm lens.

        • Douglas Knight says:

          Psychology is inherently uncharitable.
          The whole point of psychology is to invalidate people’s claims about their internal experience.

          I don’t see any relevance to this whole comment chain about the psychology of moral disagreement. The original comment should have taken it as a brute fact. Invoking Haidt’s psychology to trace the details was irrelevant and probably intended as an aside.

        • dndnrsn says:

          Those rationalizations do happen. And I’m willing to buy that the experience I had – of standing around with the smokers outside a 95% white drinking party, made up of people from a university that’s nowhere close to that white, shittalking those damn racists who vote for Trump or whatever – starts to look a lot like a purity response. I see a lot of people express anti-racist sentiment that if you tilt your head slightly looks a lot more like classism, a lot more like “ew those gross dumb people who don’t have university degrees.”

          But that’s just a chunk of people on the left, isn’t it?

      • S_J says:

        Is the Blue Tribe attitude towards building oil pipelines through North Dakota (or Alaska) based on a sanctity/cleanliness reaction?

        By my read, such pipelines are safer (on a care/harm spectrum) than large convoys of tanker trucks, or large trains pulling many tanker-cars.

        More generally, how much of environmental politics (or the politics of GMO food labeling, or the politics of vaccine conspiracy) is based on care/harm, and how much is based on sanctity/cleanliness?

        • Nornagest says:

          There probably is some sanctity/cleanliness going on, but — if you’re sufficiently alarmed by climate change — there’s also a purely practical reason to oppose pipelines. Pipelines make it cheaper to move oil from place to place. Make stuff cheaper and you incentivize using more of it. Use more oil and you produce more CO2. Of course, trains and tanker trucks also burn fuel, and probably a lot more of it than pumping stations for a pipeline. It’s possible that the math would work out such that you’d lose more in transport than you’d lose by making gas cheaper at the pump, but that’s not obvious to me, and anyway that’s consequentialist thinking and almost everyone isn’t.

          GMO politics strikes me as almost pure sanctity/cleanliness, though. Nuclear power politics likewise.

          • Edward Scizorhands says:

            Closing off the fastest and cleanest route for a dirty energy source doesn’t make it stop being used. You might make it used marginally less, but the margins that really make the difference are global energy prices, which make much bigger moves over reasons way out of your control.

            You can buy up the dirty energy source for yourself and keep it in the ground.

            Liberals are not alone in this. Tell someone “we cannot stop the bad thing but we can make the bad thing as un-bad as possible with your help” and lots of people will scrunch their hands into fists and shove them into their ears and insist you are a liar. They want to be able to say their own hands are clean. It’s more of Copenhagen ethics than anything else. People who are completely divorced from the problem are ethically cleaner than those involved to make it less bad.

            (On re-reading the thread, I realized I was just repeating things.)

          • Nornagest says:

            I know you struck this paragraph, but I’m gonna respond to it anyway.

            You might make it used marginally less, but the margins that really make the difference are global energy prices, which make much bigger moves over reasons way out of your control.

            Margins matter. If oil costs sixty bucks a barrel plus or minus twenty depending on what OPEC feels like, and you add a tax for a buck a barrel, then you should still expect people to be using less of it on average than in the counterfactual. Even if that’ll only be apparent in the empirical figures once you can integrate them over enough time that OPEC’s mood swings average out.
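
            As a toy illustration (the elasticity below is an assumed value, not a measurement):

            ```python
            # Toy arithmetic for "margins matter". The elasticity is an assumed
            # illustrative value, not a measurement.

            base_price = 60.0    # $/barrel
            added_cost = 1.0     # $/barrel tax or transport-cost increase
            elasticity = -0.3    # assumed long-run price elasticity of oil demand

            pct_price_change = added_cost / base_price            # ~1.7%
            pct_demand_change = elasticity * pct_price_change     # ~ -0.5%

            print(f"Expected long-run demand shift: {pct_demand_change:.2%}")
            # About -0.5%: invisible against +/- $20 swings in any given month, but a
            # real shift in the average once the swings integrate out over time.
            ```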

          • Edward Scizorhands says:

            I wouldn’t mind a tax. I would vote for a tax. At least with a tax we have a clue what we are doing.

            If this was a tax, it was a tax on one very specific carbon source, while all the others, some much dirtier, continue to exist. This is as useful as boycotting one chain of gas stations.

    • dndnrsn says:

      People cannot frictionlessly move to wherever they would prefer to be.

      Out in the countryside, for example, what happens to gay or trans teenagers? If some state or county or whatever decides to go all Jim Crow, are all the black people there either supposed to move somewhere else (uprooting themselves, finding new jobs, selling their real estate, splitting up extended families) or suck it up? How do you “[send] the persons that don’t conform to the other side”?

      In the cities, what happens to socially conservative immigrants? In Canada, there’s the issue of the Ontario sex ed curriculum, which became a major wedge issue: contrary to the belief of some definitely-Blue-Tribe people (I don’t like the Blue/Red terminology, because it was meant as shorthand for “latte sippers vs tobacco chewers” or whatever, but these people are definitely latte sippers) there are homophobic and anti-trans bigots who aren’t white sheriffs from the 1950s, and a lot of people from immigrant communities got upset about the idea of their kids being taught it was OK to be gay, or about gender identity, or whatever. Here the Conservatives are actually pretty good at picking up visible-minority and immigrant-community voters; contrary to those people’s belief, the Conservatives aren’t reliant on white voters like the Republicans are. Where is a conservative Muslim supposed to go?

      Beyond the issue of whether people are OK with their neighbour sinning/being problematic, you’ve already got major practical issues.

    • arlie says:

      The obvious problem is people who don’t fully match either tribal definition. Hiding under that is the problem of which tribal definition to use.

      It may be that as US tribalism progresses, most people will adopt whichever complete package makes their own life easier – socially, psychologically, or in terms of material advantage, and attempt to extirpate (if possible) or conceal their own contrary tendencies. But that’s not really what we have now.

    • tayfie says:

      Leaving aside the problems other commenters have brought up, how do you split the country without Civil War 2.0? Groups of states have *tried* seceding to do their own thing outside of federal law, which is what such a proposal would eventually require (if we aren’t stripping federal law to the bone). The federal government will not allow it because it breaks up the Union, causing a huge loss of power on all fronts, power that the federal government (ostensibly) needs to protect the citizenry, and the citizens will agree. This is the major problem, even if the majority agreed that splitting would make everyone happier in principle. The loss of the strength of unity is too much to bear.

      Then there is the ideological component. Splitting up would feel to many like declaring the American Experiment failed. Unionists would fight in a civil war for the sole reason that we are the *United* States. Leaving the Union would be painted as an insult of the highest order, a direct attack on the values of all Americans. The United States has a mythos so strong it would create a reality distortion field around an issue like this.

    • syrrim says:

      The basic problem is that both groups believe that their morals should be applied to everybody, and can’t stand there being a group that defects. Consider the abortion issue. One side holds that it is equivalent to murder, and so believes the other group should be tried as murderers. The other side holds it is a matter of bodily sanctity, and so believes the first group should be tried as human traffickers or something. Because it’s a moral issue, neither side is satisfied to have it enforced their way just for themselves, or even just in their territory – they both believe it should be applied their way everywhere.

      If you’re a fence sitter on these issues then this may seem silly, but imagine this was applied to a moral issue that you find very important. Imagine that one side decided that eating babies was perfectly permissible. Would you sit idly and say “Well, as long as they don’t come around and eat any of our babies…”? Surely not – you would want this thing stopped, and you would be willing to transgress any custom of national autonomy to achieve this.

      • arlie says:

        There are aspects of the tribal split that may not quite rise to this level. E.g. the belief that white folks should have priority over other people. I *think* that most of the red tribe would be happy if black folks just went away somewhere else.

        [Edit – well, they’d be happy until/unless they found out that wherever the black folks went had some resource they wanted. Then they’d feel perfectly entitled to come take it from them. But at least it wouldn’t be a *moral* issue.]

        • S_J says:

          I *think* that most of the red tribe would be happy if black folks just went away somewhere else.

          I’ll disagree with you. (I think I qualify as Red-Tribe…) I’d prefer to judge people on the content of their character, not the color of their skin.

          Maybe I’m an outlier.

          My position is that I prefer well-behaved neighbors of any skin color to poorly-behaved neighbors of any skin color. Somehow, my preference for well-behaved neighbors appears to be coded as racism to the typical Blue Tribe observer…who fails to see the non-racial-minority people I’m trying to avoid as neighbors, and only sees the racial-minority people I’m trying to avoid as neighbors.

  8. Edward Scizorhands says:

    https://www.nytimes.com/interactive/2018/05/03/magazine/money-issue-iowa-lottery-fraud-mystery.html

    An insider hacks the RNG of a lottery machine in dozens of states. He ends up caught because he doesn’t have any reliable way of anonymously collecting his winnings.

    • Chevalier Mal Fet says:

      I don’t have much commentary on this, but since no one’s replied I just wanted you to know that I thought this was a fascinating read, thanks for sharing.

    • S_J says:

      I have a feeling I’ve seen that one before… Possibly at Bruce Schneier’s blog. It’s still an interesting story.

      The concept is simple. And he probably would have gotten away with it if he had not purchased a winning multi-million-dollar ticket himself.

      If any of his compatriots had won that prize (with the possible exception of his brother), the whole thing might have escaped official scrutiny.

      Amusingly, the story reminds me of the hack perpetrated in the film Office Space. A coder figures out how to siphon money from the system, but gets much more money than expected… and has no idea of how to handle that money without attracting attention.

  9. Edward Scizorhands says:

    Someone recommended the “HTTPS Everywhere” Chrome extension in an open thread, so that I’d stop coming to the unencrypted version of SSC and messing up my cookies.

    However, sometimes I end up at a site that just doesn’t speak HTTPS. How do I enable a single bypass for the extension?

    • dark orchid says:

      Doesn’t HTTPS everywhere just let you through anyway in that case, but changes the colour of its icon to warn you? As I understand the extension, it’ll enforce HTTPS on many sites where it knows how to do that, but will leave sites alone that it doesn’t have in its rulesets.

    • Lambert says:

      That’s not a problem I’ve ever had. It doesn’t try to use HTTPS on sites that don’t support it.
      Perhaps if the webserver is misconfigured, so it looks like it handles it, but doesn’t really.

      Anyway, go to ‘add a new rule for this site’ ‘advanced’ and change the https to http in the ‘redirect to’ field.

      • Edward Scizorhands says:

        I think I have rules set up, but when I check them by clicking on “view all rules”, I get taken to the EFF’s website. I can’t see my own rules.

        I’m testing with http://econlog.econlib.org. If I say “block all unencrypted requests” I can’t get there. If I don’t have that clicked, and I come to http://slatestarcodex.com, I don’t get switched to the https version of this.

        • Douglas Knight says:

          That sounds wrong. The extension does 2 things. One is that it has a bunch of rules that convert http to https. The other, only for the very paranoid, who are willing to put up with a lot of breakage, is the option to block all unencrypted connections. If you don’t have a rule for SSC, and you try to go to the unencrypted version of this site, that option should just block you, not switch to https.

          When I visit a page, to the right of the URL I have an S icon for the plugin. Clicking on it has two checkboxes: (1) Enable; (2) block all http connections. Below that is text in underlined blue like a link saying “add rule for this page.” That’s what you want to do. Below that is a list of rules that got invoked on this page. Finally “View All Rules” or something. You sound like you have a similar control panel, but missing the middle. You’re probably configuring in preferences, so you’re not on a specific page and you don’t see what rules got applied nor have the ability to add a rule.

          • Edward Scizorhands says:

            Thanks, I was using the tool wrong.

            We aren’t at the point where I can use HTTPS everywhere and enable exceptions. But we are at the point where I can individually pick things to force to HTTPS.

  10. johan_larson says:

    A guy tried gigging for a scooter-rental company, finding and charging electric scooters that can be picked up and left anywhere. At $5 a pop, he made a bit of money but found the experience frustrating because of misbehavior by users and other chargers.

    https://slate.com/technology/2018/05/charging-bird-scooters-overnight-is-like-a-much-less-fun-version-of-pokemon-go.html

    • Edward Scizorhands says:

      This was a fascinating read and there’s probably a few economics papers to be written about how horribly they’ve set up incentives. People get paid to charge and they purposefully make the system worse for other chargers and for paying customers.

      • johan_larson says:

        I understand the bit about hoarding. Someone uses the scooters to get around, and rather than leaving the scooter after getting home, they keep it, so they can use it again tomorrow. Makes sense. They just need to activate the scooter by app when they need the wheels to turn. Fixing that one would be hard; you’d need to identify users with that usage pattern, and apply penalties of some sort.

        But why all the “decoys”? Why is the author getting reports from many scooters that need a charge, and just aren’t where the app says? Could it be the scooters themselves don’t have GPS, but are just reporting the location where their last user clocked out, getting the location from the user’s phone’s GPS? And then presumably someone came along and moved the scooter. It seems like a weird design decision. And that one has a technical solution.

        • zoozoc says:

          It could be that they turn off the GPS during the night to save on power and, like you said, the location in the app is simply the last known location of the scooters.

        • Douglas Knight says:

          I thought that the scooters not being where they were supposed to be was that they had been brought inside or otherwise on to private property, that it was just another aspect of hoarding. But GPS could be precise enough to make that clear. Are they not running the radio long enough to pinpoint? It should be obvious if the scooter is inside a fence. Maybe if it is indoors, that messes up reception and prevents pinpoint?
          It’s not that they turned off GPS and were moved far away, because they’re still in bluetooth range.

          Also, I thought that the hoarding was mainly chargers, not ordinary users. They pick up a scooter, hide it for a couple days until it registers as “lost” and then get paid 4x for charging it. (I’m mainly basing this on the guy in the pickup the second night. Since he’s out at pickup time, he’s probably a charger. Also, why would a user need a whole truck full of them? The first night there was also a hoarder with a whole truck, though less clearly a charger.)

  11. Tinman says:

    Recently I have coined a term for a very specific type of intellectual “inadequacy”, or cognitive limit.

    Imparscience – literally “inadequate knowledge”.
    I would describe it as a physically factored inability to adequately perceive and/or process all necessary information available to arrive at a conclusion.

    The example I’ve put forth to explain it is thus:
    Assume a system of X users.
    The system demands that its users know all the demands (Y) (short for personal traits and information about a user) and relationships (Z) of all the other users.
    As the number X grows, the number of demands grows proportionally and the number of relationships grows quadratically, as every member will have a relationship with every other member. It compounds: X < Y (assuming more than one demand per user) and, for large enough X, Y < Z, where Z = C(X, 2) = X(X-1)/2.
    As the number of users X increases, there will come a point where some users will become unable, through one means or another, to perceive all relationships Z, as they will have become too plentiful, and their internal model of the whole system becomes incidentally incomplete. They will still, however, be able to perceive both X and Y. This is one imparscience.
    There will come another point where even the best users will be unable to perceive all relationships Z as the number grows. This is another imparscience. After this point, the relational map will be incomplete.
    Then comes the number of demands, where some and then all users will be unable to keep track of all their colleagues’ wants. More imparscience. Further relationships will be incomplete, but can still be created and held.
    Afterwards comes the terminal point where the users will be unable to consistently hold more relationships with other users after a certain X.

    These imparsciences vary from person to person, of course – for example, a clinical idiot will be able to hold very few stable relationships while a genius might be able to hold considerably more.

    I can name at least one number for an imparscient limit, and that is an X of roughly 150 (estimates run as high as about 250). It’s called Dunbar’s number, and it is the approximate maximum size of a group in which an individual can maintain active relationships while still knowing the relationships among all the other members. In other words, all of X, Y and Z can be perceived by an individual.
    It is prudent to assume that there are other imparscient numbers higher than that.
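
    For concreteness, here is a quick back-of-envelope sketch of how fast Z outruns X and Y, assuming Z is just the number of unordered pairs (this is only an illustration of the scaling, not a model of real social networks):

    # Demands Y scale roughly linearly with users X; pairwise relationships
    # Z = C(X, 2) = X*(X-1)/2 scale quadratically.
    from math import comb

    for x in (10, 50, 150, 250, 1000):
        y = x           # assume one bundle of demands per user
        z = comb(x, 2)  # one relationship per unordered pair of users
        print(f"X={x:5d}  Y={y:5d}  Z={z:7d}")

    # X = 150 already means 11,175 relationships to track; X = 250 means 31,125;
    # X = 1000 means 499,500.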

    Are there any similar terms? Am I reinventing the wheel here?

  12. johan_larson says:

    Charles Stross, the Scottish SF author, has some thoughts on how the rest of the century will turn out:

    Here’s the shape of a 21st century I don’t want to see. Unfortunately it looks like it’s the one we’re going to get, unless we’re very lucky.

    Shorter version is: there will be much dying: even more so than during the worst conflicts of the 20th century. But rather than conventional wars (“nation vs nation”) it’ll be “us vs them”, where “us” and “them” will be defined by whichever dehumanized enemy your network filter bubble points you at—Orwell was ahead of the game with the Two Minute Hate, something with which all of us who use social media are now uncomfortably, intimately, familiar.

    • Edward Scizorhands says:

      Did you link to that because of how nuts he is?

      He talks all about filter bubbles but he thinks the only filter bubbles that matter are the ones of the alt-right, who are going to start killing people in the name of fighting global warming.

    • Le Maistre Chat says:

      I “love” how Stross starts off with “Violence will be ‘us vs them’, where ‘us’ and ‘them’ will be defined by whichever dehumanized enemy your network filter bubble points you at” and then unpacks that entirely by demonizing the people his filter bubble points him at.
      White leftists and “refugees” will be powerless saints while all dehumanization and violence will be conducted by Stross’ outgroup. Because only they have filter bubbles, I guess?

      • Randy M says:

        We don’t need to tolerate the intolerant… er, those who don’t practice radical inclusivity.

      • Nornagest says:

        He’s running a universalist ideology and he imagines that he thereby escapes his filter bubble. His isn’t the only universalist ideology on the planet, of course, but somehow that never comes up.

        This sort of thing is so frustrating. It comes so close to what’s actually going on and then it swerves off at the last moment and ends up at “let’s blame it all on those fuckers“. And if you point that out, it just gets you labeled as one of Them.

        I don’t know how you talk someone out of this. Actually going out and meeting the dehumanized enemy isn’t enough, if we can go by the million and one soul-searching but ultimately inconsequential variations on Rednecks in the Mist that came out after Trump’s election.

        • Nornagest says:

          Okay, now that I’ve taken a few deep breaths, it’s not a totally incoherent vision if you accept a few simplifying assumptions:

          – There’s a single Global Ruling Class (“wealthy white people” in Stross’s version, but I’m sure we could all name some alternatives) with a single filter bubble.

          – The climate threat is catastrophic if not apocalyptic on a timescale of decades.

          – Local authorities do not have the ability to mitigate or adapt to it without the help of the GRC.

          I’m pretty sure all three of these are incomplete to false, but they’re close enough to true — especially if your priors lean that way already — to make for good rhetoric. He does do a lot of demonizing, and that’s annoying, but you don’t actually have to jump into wild-eyed denunciations to get the argument to work: given these assumptions, anyone in the GRC’s blind spots is screwed no matter how benevolent they are, and everyone’s got blind spots. And you don’t have to imagine yourself immune to bubble filtering, you just need to believe yourself to be outside the GRC’s bubble and therefore able to see into those blind spots.

          That suggests some possible avenues of attack. I’ll have to think more on this, but one of the more striking things is how much White Savior thinking it’s got in it: it’s practically colonial.

          • Le Maistre Chat says:

            @Nornagest: One of the infuriating things is that simplifying assumption #1 is trivially false unless he can somehow define wealthy white people to include wealthy Japanese and the Chinese ruling class (it would be helpful here to know how closely China resembles a “democratic” capitalist state like nearby Japan).
            #2 is enough like Club of Rome “Oh noes overpopulation will lead to 50% of global population dying when exceeding my weirdly low estimate of carrying capacity causes an eco-disaster” to also look flatly false rather than incomplete. History shows it is the sort of thing the global elite will believe, ironically for Stross.

            I could imagine, for SF purposes, a scenario where global warming with a sea level rise that’s high but not high enough to make the IPCC say “dude, you’re scaremongering” is bad enough to be catastrophic for underdeveloped countries south of 30 N latitude, and the local authorities are incompetent to change food crops or build dikes like the Dutch without the assistance of the global ruling class, so they tell people to go all Camp of the Saints instead. At that point, though, it’s hard to figure out why the European and American ruling classes would support border security rather than letting them win.

          • Nornagest says:

            Japan, South Korea, and the other rich Asian states have small enough economies compared to the US and Europe that they conceivably couldn’t do much in a scenario like this, and they also tend to be relatively isolationist by comparison. China is a bigger sticking point.

            But yeah, it’s got problems.

          • Le Maistre Chat says:

            Japan, South Korea, and the other rich Asian states have small enough economies compared to the US and Europe that they conceivably couldn’t do much in a scenario like this, and they also tend to be relatively isolationist by comparison.

            Sure. This scenario basically requires China, the EU and Anglosphere to all act as isolationist as we know Japan would.
            For the latter two cases, this almost requires “wealthy white people” to not be Blue tribe. To get around that, you basically have to anticipate a flip that makes them unsympathetic to Muslims and immigrants, the way they were unsympathetic to Muslims but not immigrants before Foucault replaced Marx in their worldview.

          • Nornagest says:

            Yeah. It all comes down to disenfranchisement; for this thinking to work, Charles Stross and all his friends need to be helpless observers riding a culture that’s totally locked up by a class of callous super-rich manipulators.

            There’s a certain symmetry here with the alt-right’s own fears re: Cthulhu’s sinister motion, etc. But like I’ve said here before, everyone feels disenfranchised these days.

            I’m starting to wonder if feelings of disenfranchisement might be a serious threat in themselves.

          • albatross11 says:

            I’m beginning to think feelings of disenfranchisement are somehow profitable to create and amplify.

          • For the latter two cases, this almost requires “wealthy white people” to not be Blue tribe

            The idea that wealthy people are what locally counts as left is USian. In most of the world, the wealthy are coded conservative.

          • Le Maistre Chat says:

            The idea that wealthy people are what locally counts as left is USian. In most of the world, the wealthy are coded conservative.

            I don’t know how true that is, but we’re only talking about the US/Anglosphere, EU and China, the developed countries that would have to take 5 billion immigrants in Stross’ imaginary future.
            I don’t see how you can argue that the EU’s ruling class is either on the conservative side of the Culture War or not wealthy. In the few EU countries where culturally conservative Parties are in power, namely Hungary, Czechia and Poland, the real ruling class is trying to strip them of all legislative and executive power. Are these supposed to be wealthy rebels fighting a non-wealthy ruling class?

          • Nornagest says:

            I was going to say that China would never take a significant number of immigrants because it was already too dense, but when I actually did the math I found that it had a lower population density than the UK or Italy. A lot of western China is basically uninhabitable and it might be skewing those numbers, but maybe I was wrong and it does have some carrying capacity left. Still, I think the interventionist case for it is more likely to look like China running mitigation projects in nearby countries in exchange for more geopolitical influence than like China absorbing a lot of immigrants from Bangladesh or Indonesia or the Philippines.

          • Le Maistre Chat says:

            @Nornagest:

            Still, I think the interventionist case for it is more likely to look like China running mitigation projects in nearby countries in exchange for more geopolitical influence than like China absorbing a lot of immigrants from Bangladesh or Indonesia or the Philippines.

            Well, yeah. Stross seems to completely ignore the possibility of the global elite helping global warming victims where they are; it’s either open borders or >50% of the humans then alive die.
            In a future where India can’t be a developed country, the Chinese gov’t would love to engage in intervention that saves the West Bengal/Bangladesh breadbasket from going underwater. By building Dutch-style water control infrastructure while also being able to dam the Indian rivers that have their source on the Tibetan plateau, they’d make themselves a hydraulic empire.

    • proyas says:

      His vision sounds just a tad paranoid.

      I bet he’s a lot of fun at parties.

    • rlms says:

      It’s interesting that he references “muh physical cash”; I thought that was an exclusively right-wing thing.

    • Nancy Lebovitz says:

      I think he’s got some plausible right-wing disasters there, but left-wing disasters and generic authoritarian disasters are also possible.

  13. Eponymous says:

    Sorry for the long comment, but this seemed the appropriate place to put it.

    I know that some people who comment here are concerned about the problem of friendly AI. I recently thought of an approach which, on reflection, seems to work. So I thought I would share it here, so that you can (with high probability) explain why it would actually destroy the world, or (with low probability) use it to save the world.

    The idea is that you tell the AI: “Behave as an intelligent and moral human being would want an AI to behave after appropriate reflection.”

    Specifying this would require that you give the AI some examples of human beings who are generally regarded as being moral, and that it understand the concepts of intelligence, reflection, and want (as applied to humans).

    The likely result is that the AI would simulate moral humans, augment their intelligence, and consult them as to what it should do.

    One concern is that these simulated humans would be conscious and have moral status, so that simulating them would be immoral. But then, that is a mistake the AI would only make once, because if this were so, the simulations would tell it to make future simulations unconscious (barring the unlikely case that consciousness and moral status are necessary to engage in human moral reasoning!); and since the initial simulations would probably be fairly low quality, the AI would hopefully figure out that this was wrong (if indeed it is!) before it created any human beings with moral status.

    Of course, it’s far from clear to me that it would be morally wrong to simulate superintelligent and moral humans and ask them to think about what is moral to do. That seems like a life worth living to me! And if the end result of this is turning the universe into computronium to simulate highly intelligent, moral humans discussing the nature of morality, that doesn’t sound so bad as dystopian AI scenarios go, especially since this would only occur if said intelligent and moral humans concluded it was moral to keep doing this. In which case they’re probably right.

    Another concern is that the AI might reflect too long, and thus not take needful action; or else not reflect long enough, and thus take wrong action. But of course thinking is itself an action that is governed by the same guidance system. If the AI’s current simulated moral cabinet think the AI should reflect further, add to their number, or augment their intelligence, they would tell it to do so; and if they thought there were sufficiently compelling reasons to take some action without further reflection, they could just tell it to do that instead.

    Of course, notions like “moral humans” would need to be grounded in (say) actually existing 21st century humans to prevent value drift.

    So why wouldn’t this work?

    (Perhaps the answer is, “That’s what we’re trying to do; but actually writing that sentence in machine-understandable code is a difficult technical problem.”)

    ———————

    P.s. After writing the above, I decided to check in Bostrom’s Superintelligence to see if he talks about this. I bought the book a few years back and made it to chapter 10 before I stopped actively reading. I didn’t remember reading about this idea, even though I had read the chapter on control methods, but I thought I would check anyway. So I turned to chapter 9, and sure enough there’s a tiny two-paragraph description on page 173 of “Indirect Normativity” which corresponds to this idea, but which then says that a full exposition will have to wait until chapter 13.

    Turning to chapter 13 I find that my proposal seems pretty close to EY’s Coherent Extrapolated Volition. I guess that’s pretty good. Admittedly the ideas are not exactly the same, since my version seems to emphasize a cabinet of simulated human advisers as the likely result, which isn’t present in Bostrom’s description (this is on a skim of chapter 13). Perhaps a reasonable name for my proposal would be the Moral Philosophy Club or Advisory Board or something like that.

    Even after reading Bostrom’s chapter 13 I think that my comment is a useful contribution to the discussion, and it seems fairly implementable, so I’m going to post this anyway. Criticism welcome, or pointers to prior work. I don’t think about this stuff usually, so I’m sure others have discussed similar ideas.

    • carvenvisage says:

      Two issues I see:

      1. In “moral human” the word moral is doing a lot of the work. In logic and human discussion you can stipulate meanings for the sake of argument, but in the case of an AI you need to fully define “moral”, which is kind of the main difficulty, and if you can do it the human part probably becomes superfluous.

      2. “human” might impose a certain level of risk/a lower bound on the AI’s rationality. A senile grandmother running a soup kitchen is a moral human, but “moral” doesn’t mean qualified or suited to run the whole world.

      But if you could stipulate the definitions it might work pretty well. The basic problem is that you can’t stipulate the definitions; you have to map them out in mathematical detail.

      Still, in the absence of an alternative (say, if you needed the AI to avert a famine or win a war with aliens), I could imagine such a basis (presumably with additional safeguards) being the best of a set of bad options. (Maybe someone can correct me on that.)

      • …in the case of an AI you need to fully define “moral”, which is kind of the main difficulty, and if you can do it the human part probably becomes superfluous

        In eponymous’s scenario, the AI only needs to be pointed at examples.

        …a senile grandmother running a soup kitchen is a moral human, but “moral” doesn’t mean qualified or suited to run the whole world.

        Making a safe AI is a different problem to making a world-improving AI.

        • carvenvisage says:

          In eponymous’s scenario, the AI only needs to be pointed at examples.

          That appears to be just waving the problem away. I’m not sure how easy it is to copy a (moral) human’s brain structure in general, or particularly in a way that scales to the speeds and capacity of superintelligence; but more fundamentally, once you have a definition of moral that can reliably (and provably) distinguish superintelligence-safe moral humans from others, you have a template to build that superintelligence directly from the ground up, around that definition.

          Making a safe AI is a different problem to making a world-improving AI.

          “Safe for X” is a moving target as X changes. And generally these discussions concern AI that is at least semi-autonomous, with some power, not an “oracle AI”, which is a whole separate solution.

          And of course it’s very common for humans to be morally safe/qualified for some things but not for others, most iconically in ingroup vs outgroup treatment but potentially in a million different foibles that one can avoid e.g. by staying within their area of strength.

          If we can say that someone is a moral human but their mind is not suited to certain endeavours, presumably someone can be a moral human but not a suitable template for a superintelligence. Or to put it more briefly: someone can be a moral human while having structural blind spots.

    • Eponymous says:

      Any other takers?

      Personally I think that the biggest obstacle would be specifying what it means for a human to “want” something.

      The obvious failure mode is equating “wanting” with various external cues of want, e.g. saying “yes” or “I want that”, nodding, smiling, etc. Then the AI tiles the galaxy with meat puppets (or worse, actual conscious humans) eternally giving the signs of approval.

      So we need the AI to learn that humans have underlying wants and desires that give rise to these external behaviors, and that these must be accurately elicited. One approach would be to simply inform humans of the circumstances, give them access to simulations of likely outcomes of various actions and augmented reasoning faculties, and then let them discuss what is moral to do, and accept their reported conclusions as a true expression of their wants. Perhaps the AI could play referee to the discussion, pointing out logical errors and providing necessary data throughout.

      Of course, any direct intervention in the process is dangerous, since the AI might try to manipulate the discussion so that its advisers report to want things that are easier for it to do. So we need it to learn that a true expression of “want” requires an absence of external manipulation. But then it might not inform the advisers of relevant facts since this counts as external intervention. So the problem is not trivial.

      A second possible problem is that augmenting the intelligence of the simulated humans might alter their moral values in some way. I think this is a lesser concern, because presumably the process of intelligence augmentation would be directed by earlier generations of advisers who would be attuned to the dangers of value drift.

  14. Nancy Lebovitz says:

    The Creature from the Cleveland Depths by Fritz Leiber, published 1962.

    It predicts a moderate amount about electronic schedulers before any of the tech existed, and includes the question of AI purpose.

  15. Joseph Greenwood says:

    How long do Scott’s comment threads stay “live,” in the sense that multiple people are actively commenting on them?

    • johan_larson says:

      I haven’t actually measured, but my impression from occasionally dropping in on older OT threads is that virtually all of them are dead by the one-week mark.

      • fion says:

        That’s my impression. Was it like that even before we had the twice-weekly threads? It seems natural that most people will move onto the new thread when it becomes available…

      • liskantope says:

        Most of them are pretty much dead by the one-week mark, but if I’m not mistaken, the commenting capability is shut down after a certain length of time (2 weeks?) anyway.

        • bean says:

          It’s very slightly over 4 weeks. I used to have to rebuild the Naval Gazing index every other full-number OT, and I could just squeeze in and post a link to the next one before it closed.

  16. Perico says:

    New paper on the EM-drive concludes that it doesn’t work, and it’s all due to experimental errors and the Earth’s magnetic field. There’s still some unexplained extra thrust, though, which the authors suggest is due to “some hidden bias in the experiment”.

    This thing may end up being a worthless piece of junk, but it’s quite an interesting one.

    • Tinman says:

      It’s been a very obvious glorified flashlight from the very start. Even in its original readings, thrust on the order of several nanonewtons is miserable performance for the necessary ~72 kV and up. With that much (direct) voltage, they have basically made a very, very bad magnetic tether, which is already an established propulsion system. I would suspect that the remaining extra thrust is just asymmetric thermal emission generating thrust through radiation pressure.
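
      To put rough numbers on the “glorified flashlight” point (my own back-of-envelope, not from the paper): even a perfectly collimated beam only produces thrust F = P/c, so any purely electromagnetic or thermal mechanism is capped at a few micronewtons per kilowatt, and asymmetric thermal emission would be some fraction of that.

      # Photon-rocket bound: radiating power P in one direction gives F = P / c.
      C = 299_792_458.0  # speed of light, m/s

      def photon_thrust_newtons(power_watts: float) -> float:
          return power_watts / C

      for p in (1.0, 100.0, 1000.0):  # watts
          print(f"{p:7.1f} W -> {photon_thrust_newtons(p) * 1e9:9.2f} nN")
      # ~3.3 nN per watt, i.e. ~3.3 µN per kW even in the ideal case.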

    • mustacheion says:

      Glad somebody mentioned this, I think the story of the EM drive is a good example of science working correctly. It started off as an idea conceived by some crazy quack – when I first heard about it my opinion was that no respectable scientist should bother with such a dumb idea, and absolutely no funding agency should bother looking into it. But some respectable scientists started working on it as a hobby anyway, using equipment at public labs that wasn’t otherwise in high demand, and got some interesting nonzero results. At that point I updated my beliefs in the direction of still being extremely skeptical about the EM drive, but now believing that some funding agency somewhere should allocate a small amount of funding to study the device. And that is what happened. From this new paper, we learned that the apparent performance of the engine is most likely caused by magnetic effects from the cables routing power into the device. I have updated my belief back down to “nobody should bother with this dumb idea any more.”

      We gave the crazy revolutionary idea a fair chance, and it didn’t impress, so now we can move on, basic laws of physics intact.

      For anybody unaware, the reason the EM drive was so interesting is because it seemed to violate conservation of momentum, which is one of the most basic and sacred principles in physics. If it worked, it could have conceivably been used to make a perpetual motion / free energy machine. Which is precisely why it doesn’t work.

      If anybody is interested, I find Scott Manley’s videos on the EM drive to be pretty illustrative. The old one, from when the first positive results were published, is here, and the new one about the disproof is here.

    • John Schilling says:

      This is about what I expected, including the bit where Martin Tajmar’s team is the one to nail home the lid of the coffin. But, unless I’ve missed something, Ars Technica conspicuously failed to include a link to the actual paper, and that’s always annoying in cases like this. So, here.

      Also, +1 to mustacheion. Science works.

  17. proyas says:

    Is there any reason to think ethernet cables or some other type of cord might “come back” at any point in the future for residential internet (maybe thanks to the demands of virtual reality?), or will WiFi and broadband keep getting better and more dominant?

    I’m old enough to remember the days when we had to plug cords into the backs of our computers to get internet access.

    Part of the reason I’m asking is that my house still has coaxial cables coming out of the walls and floors in each room, and I’m thinking of removing them since they look ugly and we’ve exclusively used WiFi for years.

    • Bugmaster says:

      I don’t know about coax cables, but Ethernet cables are pretty much required for gaming; the latency introduced by WiFi will kill you.

    • cryptoshill says:

      Most dedicated gamers/stream watchers/programmers still use hardlines for their main PCs (which they still have). Wired connections never went away; they just retreated to niches that are more latency-critical. The speeds you get on Wi-Fi are more than acceptable for almost any use, with the exception of latency-dependent applications.

    • Acedia says:

      When I lived in an apartment building I had to plug my laptop in with an ethernet cable in the evenings because of wireless performance dropping when all my neighbours were home and using their own networks.

    • skef says:

      Are we talking 10base2 or some custom thing?

      Coax is now most common in residential settings as the medium to connect to a cable modem, after which some kind of twisted pair cable is used. So unless 1) the coax is “tv-cable” coax and 2) you don’t have any need to connect computers within your house faster than they connect outside, it probably won’t come in handy for networking purposes.

      Anyway, I would be shocked if 10base2 made a comeback.

    • J Mann says:

      I put ethernet ports at various points in my house, and find that when I have them, I often have a use for them. On the other hand, with the new mesh wireless systems (and powerline data transmission), you can probably live without it.

    • tayfie says:

      I predict wireless technology becomes even more dominant consumer-side, but will never replace wired where performance is more valuable than convenience. The mobility afforded by wireless is very convenient for consumers since many people now interact with smartphones as their main computing device.

      VR, like 3D movies, will remain a niche entertainment market for the foreseeable future. No matter how good the visuals, VR won’t be greatly immersive until you can generate a realistic sense of motion.

  18. rm0 says:

    I came up with an idea for a fun game. The first person posts some string of characters that would be quickly recognizable to a member of some ingroup, but difficult for others to discern the meaning of (including via Google).

    Other people try to guess what the group is. They have the full internet at their disposal. (But knowing without it earns you extra points)

    Here are a few (not very good) examples I came up with:
    OF,WAIH,HBTN
    23, 5, HE; AHD )|(
    FNTH ITTD 366\651

    • fion says:

      I don’t know any of yours. :/

      Haha, I was about to suggest TINACBNIEAC as a joke but it turns out it’s trivially googled.

      Darn, these are really hard to come up with. All my ideas turn out to be trivially googled. I’ll have a think and see if I can come up with anything…

      EDIT: Got one!

      441 531 423 45123

      • BeefSnakStikR says:

        I haven’t read U____g yet so I didn’t know it, but I guessed TINA stood for “This is not a…”. It’s a pretty common acronym part. No idea about the other.

      • rm0 says:

        I had that on my list but I thought it might be too easy 🙂

      • rm0 says:

        No idea for your second one, but
        gur svefg 6 qvtvgf, jura chg gbtrgure, ner rvtugl bar gb gur guerr, naq jura lbh gnxr gur jubyr ahzore, vg vf qvivfvoyr ol guerr, frira, naq bar bgure ynetr ahzore.
        Nyfb, 45123 vf na nern pbqr va buvb, naq V svaq vg fhfcvpvbhf gung vg’f /nyzbfg/ bar gjb guerr sbhe svir.

        • fion says:

          Aha, some good observations!

          Some comments on your observations:
          Gur snpg gung gur jubyr ahzore qvivqrf ol 3 vf xvaq bs eryrinag ohg va n irel bofpher jnl. Lbh fubhyq cebonoyl nyfb svaq gung rnpu vaqvivqhny ahzore qvivqrf ol guerr.

          Another observation that is more relevant to working out the answer:
          Gur nirentr bs rnpu frg bs ahzoref vf 3 (naq fb boivbhfyl gur nirentr bs nyy gur ahzoref vf 3).

          An obscure hint:
          Lbh pna, va snpg, plpyvpyl crezhgr gur ahzoref va n frg, fb lbh pbhyq jevgr 45123 nf 12345, ohg vg jbhyqa’g gura or noyr gb sbyybj ba qverpgyl sebz 423.

          And the actual answer:
          https://bit.ly/2IF1HSV

    • cryptoshill says:

      HE-1945
      HE-1946
      OE-315
      SEIE
      VH-8
      I am pretty sure there’s at least one or two of you out there on this blog.

      • rm0 says:

        fhoznevar ratvarref?
        V gevrq frnepuvat rnpu bs gur fgevatf va dhbgrf, naq br guerr bar svir jnf qvfgvapgvir rabhtu gb tvir n erfhyg nobhg fhoznevarf (uvtu fcrrq ohblnag pnoyr nagraan), juvpu V pbasvezrq jura frvr (Fhoznevar Rfpncr naq Vzzrefvba Rdhvczrag) naq iu rvtug (Iragvyngvba bhgyrg naq vfbyngvba inyir sbe ratvar ebbz) obgu ghearq hc erfhygf yvaxrq gb fhoznevarf. V pna’g svaq nalguvat sbe gur svefg gjb gubhtu, fb znlor V’z jnl bss onfr.

        • cryptoshill says:

          Anil Fhoznevar Enqvbzna. V cebonoyl fubhyq’ir nibvqrq gur ersrerapr gb fhoznevar-fcrpvsvp pbzzf rdhvcrzrag. Jr ynory bhe inyirf naq trne va n irel fcrpvsvp jnl. V zvtug unir znqr guvf gbb rnfl. Lbhe tbbtyr erfhyg jnf vapbeerpg nf sne nf IU-8, ng yrnfg ba bhe obng gung jnf gur fabexry urnq inyir.

    • aphyer says:

      Possibly too easy (especially with this audience), but let’s try:

      1aup 10agp 100cup

    • rlms says:

      I think I fnord know your second one from the numbers (I’m not part of the ingroup).

      My attempt:
      “Put your SV work in my pidge after formal”*

      *(Spoiler warning for anyone trying to guess it) If you’re a member of this group, we’ll hopefully be having a meetup at some point in June. If you aren’t on the mailing list from going to previous meetups, comment here/email me at the address listed here and I’ll add you.

    • Shion Arita says:

      Probably too small a group, but

      K&Y:HT VOE E:AGIG GS HGJ N BTM:D CG:ATE

    • Chlopodo says:

      RFCSFKRFKRFDBRDBRS

      • rm0 says:

        jbeyq bs jnepensg?
        V abgvprq gung gurer jrer guerr “ES”f va gur fgevat, naq fcyvg ba gurz, naq gura abgvprq V pbhyq fcyvg gur jubyr guvat vagb guerr yrggre frtzragf, naq gura tbbtyrq gung

      • Said Achmiz says:

        From memory:

        Entrsver Punfz
        Funqbjsnat Xrrc
        Enmbesra Xenhy
        Enmbesra Qbjaf
        Oynpxebpx Qrcguf
        Oynpxebpx Fcver

    • BeefSnakStikR says:

      This is probably hard:

      SWIXENLUD

      Easier:

      NEWSUDIXL

      • fion says:

        Um. The second one.

        abegu, rnfg, jrfg, fbhgu, hc, qbja… hu… sbegl-bar!

        Anywhere close? 😛

        • BeefSnakStikR says:

          NEWSUD is spot on.

          It’s just a coincidence that IXL is a roman numeral. (Hint: they represent actions.)

          If you figure out which actions those are, you’ll know exactly what I’m referring to. It’s less a group of people and more a genre of work.

    • rm0 says:

      Spoiler Answers:

      puevfgvna — bhe sngure, jub neg va urnira, unyybjrq or gul anzr
      qvfpbeqvna — 23 5 (23 pbafcvenpl, ynj bs svirf) unvy revf, nyy unvy qvfpbeqvn, unaq bs revf tylcu
      qxzh — V erirefrq gur fgevat, lbh pna tbbtyr QGGV/UGAS naq gur ahzoref sbe zber vasbezngvba

    • Shion Arita says:

      Another one, probably a lot easier than my first:

      WD&DU
      I&W
      A
      FII
      MP2SFAM
      SDOIT
      TOT
      O
      SC
      BC&SL
      ADTOE
      DT
      TA

    • Chalid says:

      M on M, R the R, U L, said the C P

      • BeefSnakStikR says:

        Does “R the R” stand for envfr gur ebbs, and are they song lyrics or a chant of some sort?

      • Chalid says:

        Ok, it is harder to make this recognizable than I thought. Everything is obvious, once you know the answer.

        Let’s reorganize; all these are separate mostly unrelated lines, relating to the same idea:

        M on M
        R the R
        U L, said the C P

        A large hint: vs lbh’er ernqvat guvf, lbh ner gur vatebhc.

        • Iain says:

          Zrqvgngvbaf ba Zbybpu
          Enqvpnyvmvat gur Ebznapryrff
          Havirefny Ybir, fnvq gur Pnpghf Crefba

          • Chalid says:

            Did you need the hint?

          • Iain says:

            Seeing it with the line breaks convinced me to stop and think about the niggling feeling I’d been having about “said the C P”, after which the rest was easy. I didn’t use the large hint.

    • iy1}MuIYUe2@\87o@E933\VO{6a&AQ

      • rm0 says:

        crey hfref?

        • Nope, nothing like that. It is a pretty niche thing, but if you are familiar with the thing you will probably get it quite easily. And I can imagine somebody unfamiliar with the thing getting fairly close to the answer.

          • BeefSnakStikR says:

            Are the two curvy brackets demarcating something, or is that just a coincidence?

          • They’re not demarcating anything.

          • BeefSnakStikR says:

            I’m flailing, but does it have anything to do with roguelikes?

          • BeefSnakStikR says:

            Wait, does it have anything to do with the fact that they’re mostly on the top row and number row of the keyboard?

          • It’s nothing to do with roguelikes.

            The fact that they’re mostly on the top or number row of the keyboard is not a coincidence. There’s something a lot of the characters in the string have in common, which happens to be correlated with being on the top or number row of the keyboard (although not in an essential way—like, I didn’t notice the correlation was there until you pointed it out just now).

            The thing those characters have in common will not give you the full answer to “what is the meaning of the string”, but if you can work it out, that’s probably as far as you can get if you’re not familiar with the thing this comes from. (If you were familiar with it you’d probably have realized what the full meaning was by now.)

          • BeefSnakStikR says:

            Goddammit, you’ve got me hooked… now I really want to figure this out. Before I go any further, will scouring your blog help?

          • Scouring my blog would help, since it is related to one of my interests, and I talk about my interests on my blog.

            I will post the solution later today, if nobody gets it before then 🙂

      • James says:

        my first thought was ‘vim macro’, or possibly ‘accidentally in insert mode without realising it’.

        I don’t think it is a vim macro, but now it occurs to me that a sequence of vim commands would have been a really good entry for this challenge.

        • It’s not a vim macro. It’s not anything computer-related, except in a very loose sense (it would certainly be misleading if I told you it was computer-related).

      • fion says:

        I read your discussion with BeefSnakStikR for clues. I’m thinking that the thing they have in common might be gung gurl ner eneryl-hfrq punenpgref? That would explain why it’s not a coincidence that they’re on the top rows of the keyboard but also why it’s not related to that.

        I had a look at your blog to see what your interests are. I’m familiar with mathematics and music, so unless it’s a particularly niche sub-group of one of those I’m going to guess it’s something to do with yvathvfgvpf. Am I barking up a tree in the right forest?

        • It is to do with yvathvfgvpf. But the thing they have in common is not gung gurl ner eneryl-hfrq punenpgref. Note that not all of the characters have this thing in common, just a lot of them. (The thing which *all* of them have in common is the more obscure thing where if you were familiar with it, you’d probably see it quickly.)

          • quaelegit says:

            >yvathvfgvpf

            NFPVV fhofgvghgvbaf sbe Ragreangvbany Cubargvp Nycunorg flzobyf?

            (Nygubhtu V’z whfg thrffvat sebz gur fhowrpg znggre abj, V qba’g erpbtavmr gur flzobyf. Jryy, V’z abg n yvathvfg, whfg ernq yvathnoybtf ba gur vagrearg! Naq nyy gur barf V ernq ner cerggl tbbq nobhg eraqrevat VCN naq inevbhf punenpgref…)

            Gur bgure guvatf guvf erzvaqf zr bs vf ubj V’ir frra Nenovp eraqrerq va NFPVV, r.t. va snprobbx pungf. Gurl hfr ahzrenyf sbe fbzr yrggref, ohg bayl n unaqshy.

          • You got it! Gur flfgrz vf K-FNZCN; vg’f gur zbfg jryy-xabja NFPVV rapbqvat sbe VCN. V xabj nobhg vg qhr gb pbaynatvat sbehzf.

            Also the characters are specifically the ibjryf, neenatrq va yrsg-gb-evtug, gbc-gb-obggbz beqre nf va gur hfhny cerfragngvba bs gur ibjry qvntenz.

          • A1987dM says:

            I am familiar with [that thing] but I still didn’t recognize it until I read quaelegit’s comment.

          • fion says:

            Haha, ok. I’d never have got that. Guess that’s me learned something new today!

          • BeefSnakStikR says:

            My next guess was going to be ctrl+shift keyboard mappings for various accented letters and international symbols. So close, yet so far! Your system is super obscure.

    • beleester says:

      Probably easy for this crowd:
      SWLHN, FAFL, UCS, TED

    • 20 80 110 90 90 5 2 5 90 110 8 50 6

      • This one may have been too obscure. Here are some hints, and the answer:

        Gur fhz bs gur ahzoref vf fvk uhaqerq naq fvkgl fvk.

        Vs lbh unir urneq bs trzngevn, znlor gur nobir yrq lbh gb gur vqrn bs frrvat jung yrggref gur ahzoref pna or vagrecergrq nf, hfvat fbzr Yngva trzngevn flfgrz. Ubjrire, V unir gb pbasrff gung ng gur gvzr gung V jebgr guvf, V qvqa’g xabj gurer jrer bgure Yngva-nycunorg trzngevn flfgrzf bgure guna gur bar V hfrq urer, naq gur bar V hfrq gheaf bhg gb or irel bofpher, juvpu znxrf guvf fvtavsvpnagyl uneqre gb onpx-qrevir.

        Vs ol fbzr punapr lbh znantrq gb fghzoyr hcba gur evtug trzngevn flfgrz, juvpu vf nccneragyl xabja nf gur “1683 nycunorg”, gura lbh jbhyq ernq bss “yehffrorfhubs”, juvpu, vs lbh unir ernq Jne naq Crnpr, lbh’yy erpbtavmr nf gur cuenfr jubfr trzngevn ernqvat Cvreer Ormhxubi hfrf gb pbapyhqr gung uvf qrfgval vf gb xvyy gur Nagvpuevfg, Ancbyrba Obancnegr.

    • Here’s a relatively easy one (compared to my previous two):

      ЄOSDCPTJKTQ

      • The answer is trbybtvpny crevbqf va gur Cunarebmbvp Ren.

        • BeefSnakStikR says:

          This is a good one because as an outsider I immediately thought the first symbol stood for euros, but knew that was too easy, and Googling Є didn’t turn up much apart from a brief reference in a Wikipedia article.

          How exactly did Є come to stand for Cambrian? I mean, I get that it’s a C with a line through it, but isn’t it standard practice in pretty much everything to use the first unique character? In this case, caMbrian and/or caRboniferous?

    • James C says:

      Hmm, how about:

      TINO/PanPan CYOA

      • BeefSnakStikR says:

        Does TINO stand for Guvf vf abg bxnl?

        I’m guessing that CYOA doesn’t stand for Choose Your Own Adventure? That’s a pretty standard abbreviation and a bit too commonly used if that’s what it is. Although I’m still having trouble figuring out exactly what it all refers to.

        • James C says:

          Fbeel ab wbl ba GVAB, ol PLBN vf evtug. V svtherq gur svefg ovg jbhyq or gbb bofpher ba vgf bja.

      • beleester says:

        Jbez snaqbz – Gnlybe Va Anzr Bayl/Cnanprn, naq gur Jbez PLBN gung ynhapurq n ahzore bs snasvpf

        • James C says:

          Bingo. Vg’f nyfb n snasvp fhournqvat gung jbhyq trg lbh n uhaqerq qvfyvxrf jvguva na ubhe 🙂

    • AlphaGamma says:

      1x 2x 2- 4x 4- 8+

      HAM VET BOT RAI RIC VER ALO HUL MAG SAI PER GAS LEC VAN STR ERI OCO HRT GRO SIR

      TVMDCAW

      TEotW TGH TGR TSR TFoH LoC ACoS TPoD WH CoT KoD TGS ToM AMoL

      • rahien.din says:

        Gur Jurry bs Gvzr gheaf, naq Ntrf pbzr naq cnff, yrnivat zrzbevrf gung orpbzr yrtraq. Yrtraq snqrf gb npebalz, naq rira npebalz vf ybat sbetbggra jura gur Ntr gung tnir vg ovegu pbzrf ntnva.

    • IANANIAAFM!

      Easier:
      IaNaNIaaFM!

      Easier still:
      IaNa#IaaFM!

    • rahien.din says:

      VPA LEV PHB LTG LCS OXC CBZ TOP ZON ESX VGB FBM CLB RUF PHT

    • ManyCookies says:

      …. W
      . U – G
      .. B R

    • Nornagest says:

      MIIDBzCCAe+gAwIBAgIJAOjd/eDUcQy9MA0GCSqGSIb3DQEBBQUAMBoxGDAWBgNV
      BAMMD3d3dy5leGFtcGxlLmNvbTAeFw0xODA1MjIyMTQ5NDhaFw0yODA1MTkyMTQ5
      NDhaMBoxGDAWBgNVBAMMD3d3dy5leGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEB
      BQADggEPADCCAQoCggEBAOb+tJDmk91oOYR9A2qMPBOr+PDiqYOLC8V69/rWcsj7
      kdsmsDLqUfE1FLhEs/zkt7wdpit9PYHE4SJaqw1heXgEioJ85A0asRQFiHSuGM6D
      iDsBRJndiFjyOgesUjyKFlBT++CubQgyBIoqpERv3vA6snHta3fSiOLI9Hc11Wrt
      OxVx1e8L9M1VqYxgCsp6JANGO0rM5fZv17q0uFVUP9srauSOvG+UAlENfHJc4q5s
      zqKnSSw1mTGnVELi/w//CbL4OMiu8PYU8ifhQD27Pd0lygEKQ4oZpbfTL4VfFiey
      zMeqVZ4jjv6EBIu8CfVdP3M3eSm/wgaTBMaAOdGpSeUCAwEAAaNQME4wHQYDVR0O
      BBYEFJbcug2OZQWvqvwW6+s3XaJ+8H+7MB8GA1UdIwQYMBaAFJbcug2OZQWvqvwW
      6+s3XaJ+8H+7MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEBAKSZzUEs
      7Yo4edqmot6I7YOzB0jNbHzma6KHPCPtZFKxRpPNhjjObCfVUb0mIiE73TzteFOZ
      y01z6O1f45Uxphdem/YvY1b4AISxaPtWGOMIUhDESQuXEAUkxqlPLR8nj+DCbUob
      gPTzB8v49FIZEogL2TWv9d/n9d4/mLap/SaDPwf8Mh8tcBKCuJkeDi1SzGhm6QxT
      887KqXu/E5Li0KzH9pooe/xscWWAizHSn6cwOjSBMb5Fr2i7J32ry/h1mnMMIku3
      8LU3ojfEF+njRLFgf+kGnBcoEUOBNJWx5Tyw8bk2K63D6cXCrlaKQdx3hc1SX3dP
      SLi3dKWIv4c5HXE=

      • Randy M says:

        Further confirmation that all these “rot13” and “cypher games” are just an excuse for the SSC AIs to drop their act and speak in their native tongue for a change.

      • BBA says:

        I see what you did there.

      • beleester says:

        Well, I’m confused. This is pretty obviously base64 encoding (uses only letters, numbers, +, /, and = for padding at the end), but the decoding is just as mysterious as the encoding. The only recognizable strings inside it were “example.com.”
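
        For anyone else who wants to poke at it without being spoiled, a minimal sketch of the inspection step (it stops short of identifying the format; “mystery.txt” is just wherever you paste the block):

        # Strip whitespace, base64-decode, and peek at the raw bytes.
        import base64
        import binascii

        def inspect_b64(blob: str) -> None:
            raw = base64.b64decode("".join(blob.split()))
            print(len(raw), "bytes")
            print("leading bytes:", binascii.hexlify(raw[:8]).decode())
            printable = bytes(b for b in raw if 32 <= b < 127)
            print("printable fragments:", printable[:80])

        # e.g. inspect_b64(open("mystery.txt").read())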

        • Nornagest says:

          Hint: vg’f n onfr64 rapbqvat bs n pbzcyrk fgehpgher.

          • beleester says:

            Vg’f na FFY pregvsvpngr sbe rknzcyr.pbz, juvpu V svtherq bhg ol pbairegvat gb urk naq ybbxvat ng n yvfg bs svyr fvtangherf. V fhccbfr va gurbel lbh pbhyq erpbtavmr vg ba fvtug vs lbh’ir frra rabhtu bs gurz gb erpbtavmr gur zntvp ahzoref gung fgneg gur svyr, ohg gung’f cerggl uneqpber.

          • Nornagest says:

            Lrc, V jnf guvaxvat bs gur zntvp ahzoref. V’ir frra rabhtu bs gurfr gung nal onfr64 rapbqrq oybpx bs nobhg gur evtug yratgu fgnegvat jvgu “ZVV” vzzrqvngryl ertvfgref nf na FFY preg.

    • tayfie says:

      “including via google” makes it tough.

      To be quickly recognizable to a decent-sized ingroup, a string of characters has to be written down somewhere else. And if it is written somewhere else, it is trivial to search.

      One way to go about it is to use an ingroup where large events are private and personal interaction is still a dominant factor. This suggests a company or university, and I see at least one other person has used that.

      You could also use something that does not have an easily searched encoding. Keyboard shortcuts for a computer program are one possibility. Unicode symbols, emojis, and other pictures could easily tell a story only recognized by those who know it.

      A third route pursued by some seems to be acronyms of a phrase not normally subject to acronym.

      None of those options really feels in the spirit of the game, and after thinking for about 20 minutes, I can’t come up with anything good.

      • beleester says:

        (Spoilers for my riddle, obviously).

        I picked acronyms which were shared with other, more common names. For instance, searching UCS will get you Union of Concerned Scientists rather than Unconquered Sun.

        Also, I slightly modified one acronym to make it ungoogleable. SWLHN is more typically abbreviated SWLIHN, but that would be trivial to Google, so I dropped the “In” from the acronym. It’s still recognizable.

      • KG says:

        In my case the text has apparently just not been typed out on the internet, at least as far as Google knows, which means that the ingroup that knows my answer must be very small indeed. I know it’s otherwise on the internet, but it’s visually represented in an image rather than copy/paste-able text, which I assumed was the best way to get something not Google-able but still directly reproduced.

      • BeefSnakStikR says:

        You could also use something that does not have an easily searched encoding. Keyboard shortcuts for a computer program is one possibility. Unicode symbols, emojis, and other pictures could easily tell a story only recognized by those who know it.

        I think this in combination with 20-question style “yes/no” answers is the way to go.

    • KG says:

      I was just about to offer “teen_ hmif _noe a_f gi l” but turns out it’s easier to find than I thought.

      Here’s one though:
      you’re ARMS fell oFF!

    • fion says:

      45612378

      Doesn’t quite fit the description because people in the in-group won’t get it immediately. Also the in-group is very large and includes most of you.

      • A1987dM says:

        people in the in-group won’t get it immediately

        Seriously? I’m not even in the in-group and I still got it straight away.

        • fion says:

          V zrna… gur va-tebhc vf “rirelobql jub xabjf jung beqre gur fgne jnef svyzf pnzr bhg va”.

    • baconbits9 says:

      Short reply to wage subsidies

      Super easy to abuse. I watch my kids and my neighbor watches hers; with a significant wage subsidy, for the price of some paperwork we can each make $5-10,000 a year by claiming to watch each other’s kids. Compliance becomes a bitch: either billions get wasted on people running scams, or to get the wage subsidy you have to jump through a hundred hoops. Probably the latter, which means big business has a huge edge over small business.

      Secondly, since this is a wage subsidy, it should heavily favor labor-intensive organizations over capital-intensive ones; basically it will end up as a tax-and-transfer scheme from one type of business to another. The type gaining ground would be low-margin, high-volume, labor-intensive industries, so basically you get the Wal-Mart economy, with little room for small businesses, little room for promotion, and permanent low-wage pay.

      Of the options discussed, a wage subsidy of this size might be the worst from a free-market perspective.

      • Edward Scizorhands says:

        If you get a wage subsidy of $2/hour for hiring someone at $8/hour, yeah, some people might lie in order to swipe $4,000 a year. But they have to claim it as their full-time job. That’s all they are getting.

  19. Randy M says:

    Lots of sci-fi talk in this thread, so keeping with that theme:
    What’s the best resource for writing plausibly about colonizing a hypothetical alien world? A very big open-ended question, I realize. The context here is that I’m interested in trying a writing project where that setting seems to make the most sense, but despite a youth spent in the sci-fi racks, I don’t feel nearly capable of doing it justice. Handwavium covers a lot, but I want to avoid overlooking the obvious.
    Obviously related: what’s your best guess for the maximum speed of an interstellar ship built in about 100 years, barring a singularity or a revolution in our understanding of physics, expressed as a fraction of c?
    Feel free to use the question as a jumping off point for whatever tangents you want, of course.

    • johan_larson says:

      The fastest we’ve ever made a spacecraft go is 0.0002 c. If we wave grandpa’s magic wand (the one that says “fusion” on it), perhaps we could boost that by a factor of 50 to 0.01 c. The grandkids’ wands will say “antimatter” on them, but they’re not available yet, so it’s hard to say.
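
      For scale, a quick coasting-time calculation under those assumptions (4.37 light-years is roughly the distance to Alpha Centauri; acceleration phases and relativity are negligible at these speeds):

      # Travel time to the nearest star system at a constant fraction of c.
      DISTANCE_LY = 4.37  # ~ Alpha Centauri

      for frac_c in (0.0002, 0.01):  # today's record vs. hand-waved fusion
          print(f"{frac_c} c -> {DISTANCE_LY / frac_c:8.0f} years")
      # 0.0002 c -> ~21,850 years; 0.01 c -> ~437 years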

      • Bugmaster says:

        Realistically speaking, you’re going to have to handwave a lot of stuff in order to get even interplanetary (as opposed to interstellar) colonization; fusion seems like a nice solid option for that. Given our present level of technology, interplanetary colonization is impossible, of course.

        • albatross11 says:

            Nitpick: interstellar colonization is impossible with current technology. Colonizing Mars (in the sense of long-term viable colonies that survive with some trade but not massive constant flows of wealth from Earth) is doable with current technology. It’s not clear anyone will ever do it, since it would cost a gazillion dollars and there are easier ways to get away from it all, but as far as I know, there’s nothing that makes this impossible.

          • Bugmaster says:

            No, I meant what I said. I don’t think we can set up a self-sustaining colony on Mars, given the tech we’ve got today. You say that no one would do it because “it would cost a gazillion dollars”, but cost is also a function of available technology. On top of that, while a temporary Mars base may just barely be possible given the leading edge of our tech (although I’m not 100% convinced even that much is true), a self-sustaining colony is a much tougher challenge.

          • John Schilling says:

            No, I meant what I said. I don’t think we can set up a self-sustaining colony on Mars, given the tech we’ve got today.

            If by “tech we’ve got today” you mean specific manufactured devices, sure. Nobody’s built an ISRU fuel processing plant yet.

            But that’s not terribly interesting. Nobody will manufacture the specific bits of technology until there’s a specific market, and Musk et al haven’t reached the part where they are buying Mars colony kit. An oversight, BTW; they should at least be commissioning prototypes of some of the long-pole and/or early-need items.

            As far as I know, there’s nothing that would be needed for Mars colonization where a plausible technological approach has not been identified, nothing beyond current manufacturing capability, nothing that specialists in the relevant field have called out and said “we don’t know how to build that no matter how much money you throw at us”. When e.g. mining experts are consulted on lunar or asteroid mining plans, they have a great big laugh at how easy the naive space cadets think it’s all going to be, but then they say “this is how hard it’s going to be and this is what you’re going to have to do and this is what it’s going to cost”.

            What, specifically, do you think is actually missing outright?

        • Thomas Jørgensen says:

          You don’t need magic fusion drives to get… well, at least a probe, to the stars.
          Plutonium will do fine.

          https://en.wikipedia.org/wiki/Fission-fragment_rocket

          • Bugmaster says:

            I agree that probes are way easier than colonization ships, but I’m compelled to point out that fission drives are also purely theoretical at this point. Less so than fusion drives, admittedly.

      • Randy M says:

        Thanks! That makes for a 430-year travel time, or about 18 generations. Not bad for evolving an Eloi caste among the crew.
        I think the hard part is keeping something running for that long. Positing a weak AI and fusion power, is it possible without magic nanotech manufacturing? It should be possible to have a completely sterile/clean environment for the electronics, with some vacuum-sealed back-ups. I think wear and tear on the crew’s life support systems would be a real challenge. (Still, people have been ignoring that kind of thing in sci-fi.)

        Also, space is pretty empty, but you might get some collisions in that time. Maybe we can say waste heat is channeled into a sort of thermal shielding. Seems like that would cause a deceleration though. Chalk that up to inefficiency and tack on a generation or two.

        • Ninety-Three says:

          The naive maintenance solution is to carry a lot of replacement parts. Since this is a spaceship and mass is precious, you could do one better by designing all your critical systems around the constraint that they be 3D-printable, and bringing along whatever the 2100s’ version of a 3D printer is. If even that would require too much mass in the form of printer stock, you could also design your ship so that all parts likely to wear out can be melted down and reprinted into new parts.

    • Nornagest says:

      Atomic Rockets is the best resource I know about for the ship question, and it might have something for the colonization question too.

      (I’m pretty sure some AR contributors post here, too.)

      • bean says:

        At least two, although we’re both more on the near-future side than the sort of stuff Randy is asking about. But yes, that’s the place to go.

        • Randy M says:

          If a century is implausible, it could be further out. But with more time elapsed I have to think harder about what the sociology & geopolitical system would be on earth at the time of launch.

          I don’t think 100 years out will be “eh, pretty much like now” in terms of politics and culture, but it’s not crazy talk the way 300 or 500 years would be.

    • Bugmaster says:

      Colonization ships would probably accelerate at about 1g for half of their trip (and decelerate at 1g for the other half), assuming sufficient energy and reaction mass. As others have said, you can provide these via handwavy fusion, and some combination of massive amounts of stored water (or ice) and ramscoops. Anyway, in this case the maximum speed of the ship just depends on the length of the trip, and, of course, relativity.
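
      For anyone curious what that flight profile implies kinematically, the standard constant-proper-acceleration (“flip at the midpoint”) formulas give the numbers below; the distances are just examples, and none of this addresses where the energy and reaction mass come from, which is the actual sticking point discussed in the replies:

      # Relativistic 1 g accelerate-then-decelerate ("torchship") profile.
      from math import sqrt, acosh, tanh

      C = 299_792_458.0           # m/s
      G = 9.80665                 # m/s^2
      LY = 9.4607e15              # metres per light-year
      YEAR = 365.25 * 24 * 3600   # seconds

      def one_g_trip(distance_ly: float, a: float = G):
          d_half = distance_ly * LY / 2
          t_half = (C / a) * sqrt((1 + a * d_half / C**2) ** 2 - 1)  # Earth-frame time
          tau_half = (C / a) * acosh(1 + a * d_half / C**2)          # ship (proper) time
          v_peak = C * tanh(a * tau_half / C)                        # speed at the flip
          return 2 * t_half / YEAR, 2 * tau_half / YEAR, v_peak / C

      for ly in (4.37, 20, 100):
          t, tau, beta = one_g_trip(ly)
          print(f"{ly:6.2f} ly: {t:6.1f} yr Earth, {tau:5.1f} yr ship, peak {beta:.3f} c")
      # 4.37 ly comes out to roughly 6 years Earth time, 3.6 years ship time, peak ~0.95 c.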

      • bean says:

        You have to have really, really spectacular fusion to make that possible. The last time I tried (and it’s been a few years) I had to stretch several parameters in a possibly unphysical manner to keep the reactor from being something that output considerably more X-ray power than it produced in fusion power. (Of course, the first time I ran into this problem, I was doing the math by hand.)

        • Bugmaster says:

          I haven’t done the math, but yes, I agree — hence the handwaving. Interstellar travel is really kind of impossible (at least, for live biological humans), so you need to cheat if you want to have it in your story…

        • John Schilling says:

          Yes, the one-G torchship really doesn’t work unless interstellar ramjets turn out to work much better than anyone presently expects is possible.

            For realistic self-contained starships with e.g. really good fusion engines, ~0.25C looks like a practical speed limit, and that makes accelerations more than ~0.1G a pointless expense. Fortunately, vaguely-habitable worlds (i.e. those with liquid water on the surface) look like they will wind up spaced ~10 LY apart, so even if humans don’t get indefinite life extension, interstellar travel should be possible for someone who wants to make e.g. founding an interstellar colony their life’s work. Or conquering someone else’s interstellar colony, emigrating from some hopeless political situation in your home system, etc. You’ll arrive in time to see the early results and retire to play with the grandchildren who grow up (hopefully) enjoying the benefits of the great work you devoted your life to.

            Travel at speeds of up to 0.9C, maybe more, will likely be possible eventually, but only with massive external support. A launch laser, ginormous antimatter factory, or whatnot. This doesn’t work for some applications (e.g. refugees) and even where it does work it will be much more expensive than the 0.25C version, so it may not be the preferred solution for e.g. colonization. Wait for the local economy to support a launch laser, and everywhere within reach will have been colonized by people who launched fusion-drive colony ships fifty years earlier. But they will have niches, like small fast science probes and extreme long-range expeditions.

          There will probably also be a period where the technology barely allows interstellar travel at 0.01-0.10C, so only sleeper ships or multi-generation arks allowed. But at that level you will risk being overtaken by people who took the time to develop better technology, unless you’re dealing with a refugees-fleeing-a-doomed-world scenario or the like.

          • Randy M says:

            .25c, that’s a good bit faster than was speculated upthread. Since I’m just looking at plausibility, having a wide reasonable range is good.

    • johan_larson says:

      I see our resident rocket scientist hasn’t chimed in on this issue yet. I imagine he is fortifying himself with big gulps of the second-best whiskey before wading in to offer gentle correction to the enthusiastic amateurs who have already replied. 🙂

      • John Schilling says:

        Atomic Rockets has already been cited, and yes, it’s the #1 place to start looking. You’ll find some of my old stuff there, and a few others you might or might not recognize (not going to out anyone whose pseudonyms may have changed). It’s very good at providing both intro-level and deep technical detail for people who want to get their SF rockets and rayguns (etc) right. And it calls out SF that gets the technology particularly right, if you’re looking for recommendations.

        Now, to look at and maybe fix some of the details right here…

    • helloo says:

      One thing to consider is the scope of the ship itself. AFAIK the majority of scifi takes on this have nearly everyone in stasis of some sort while a skeleton crew/AI handles the hundreds of years of passage. The other approach tends to be a world ship that could operate as a semi-permanent environment of its own (and then you face the WALL-E question of whether they even want to land). There are a few other types that don’t even bother with live/stasis humans and simply try to “seed life”, artificially create humans after landing, or send frozen embryos + fake wombs. In those cases you don’t need that big of a vessel, or even “true spaceflight” (just launch them from mass drivers or out of spaceships and hope). Though it might be hard to call whatever can emerge and rebuild civilization from that “human”.

      Side note: There’s a simple but neat little Android app/game called Seedship that doesn’t even bother with the propulsion/tech but looks a bit more into some other parts of an interstellar colonization ship.

      • Randy M says:

        Yeah, what I’m thinking now is that most of the colonists will be frozen embryos, on the premise that cryogenics at the time is unworkable for preserving adult brains. A very small crew is sent to live and reproduce normally, so there are caretakers to raise the colonists once the colony is established, on the further premise that humans raised without human touch would be psychologically unhealthy to an extent that would threaten the colony.
        The meat of the story would be about the developing colony and not the creation of it, but these kinds of details should have ramifications on the setting.

        • Evan Þ says:

          Arthur C. Clarke’s Songs of Distant Earth mentioned as part of its backstory that humans raised without human caretakers would be psychologically unhealthy – but that’d work itself out over generations to the point that you get a good colony, with people who just don’t like talking about their great-grandparents.

          I’d really like to read a story following your idea, too.

  20. johan_larson says:

    I’ve been reading an old article in Commentary about the society the Vietnamese communists built after their victory. It makes much of the existence of special stores for privileged members of society.

    In Vietnam today, all goods and services are allocated according to one’s political status and one’s economic class. Party members have first priority, government officials and those designated “working class” second priority, and everyone else comes last. Special, restricted-access stores (a feature of all Communist countries) are a fact of life in Vietnam. The leading party members (Politburo and Central Committee) have the best local and imported food available to them. High-ranking cadres have their own special stores, where medium-quality produce is available to them. Low-ranking cadres and ordinary citizens are excluded from these stores.

    Just how big a violation of orthodox Marxism or communism is this practice? Is communism supposed to produce absolute equality of goods, or are some economic differences acceptable, to incentivise doing the right thing?

    • maintain says:

      Why don’t they sell all the items in the same store, and just make some items cost a lot more?

      Now, I know you can see my question and be like “cause they’re communists!”, but I’m genuinely curious. Let’s think in terms of Chesterton’s Fence. Obviously the way they are doing this is meant to solve some problem. What is that problem?

      • Nornagest says:

        Shortages.

      • johan_larson says:

        I would guess that a hefty portion of the people who end up running communist institutions are from wealthy or comfortable families in poor countries, and therefore have certain expectations of quality of life. These countries are way too poor to sustain that quality of life for everyone, so some sort of excuse must be found to give it to some but not all. Good communist theory already allocates a special role for the vanguard party, so letting these special party people enjoy a few extra privileges is a workable excuse. They work so terribly hard for all of us, after all.

      • DocKaon says:

        They didn’t have the means of coming up with prices that would balance supply and demand. Prices were basically arbitrary, sitting on top of a quantity-based balancing of inputs and outputs. Due to limitations of computing power the plan was only updated annually, so there was no way to adjust prices and production fast enough to ensure the goods ended up on the shelf. If you want to make sure your elites have access to luxury goods in order to properly incentivize them, you have to make sure those luxury goods aren’t bought up by non-elites because you’ve priced them too cheaply.

        • maintain says:

          But what’s to stop there from being a shortage because all the elites bought up the luxury goods?

    • Nornagest says:

      Not a commie, but I’ve read Marx.

      Per orthodox Marxism, none of the societies that exist now or have ever existed, including the Marxist ones, were running on a communist mode of organization — communism in that sense is a stateless egalitarian society at the end of history, which big-C Communism, the ideology, aspires to, but which you’re not supposed to be able to get to directly from a capitalist society. (This is what obnoxious edgy teens on Tumblr are talking about when they say “full communism”.) Instead, the Soviet Union et al. were supposed to be socialist societies under the “dictatorship of the proletariat”, a sort of transitional period.

      Marx was vague about a lot of the implementation details, and most of his concrete proposals make more sense for the mid-1800s context in which he was writing than for the mid-1900s context when Communism was most popular. And while Leninism and Maoism are clearer on implementation, they diverge sharply from each other and from Marx’s original idea in places. All three, though, are more concerned with capital than with private goods: big cars or flashy clothes might be seen as signs of bourgeois decadence, but the theory doesn’t disallow owning stuff like that; it only requires that the “means of production” be held in common. Factories, mines, etc.

    • Brett says:

      I think it’s a massive violation of the ethics of Marxism and theoretical communism, but an unsurprising result of an authoritarian socialist state where goods and services are allocated by political power.

  21. Nancy Lebovitz says:

    https://twitter.com/TopherTBrennan/status/984863624157052928

    Argues that Pinker got his optimistic take on the decline of war because he only looked at Europe. The trend isn’t as strong if you look at the rest of the world.

  22. James says:

    On the subject of incels’ gripes about increasing female hypergamy and top-tier men increasingly monopolizing available women, I just read that in Inca society:

    – the emperor had some 1,500 concubines, kept in houses throughout the realm, selected for their beauty, usually before age eight to ensure virginity
    – great lords had harems of seven hundred women
    – ‘principal persons’ fifty women
    – leaders of vassal nations thirty
    – heads of provinces of 100,000 people, twenty
    – leaders of 1,000 people, fifteen
    – administrators of 500 people, twelve
    – governors of 100 people, eight
    – petty chiefs of fifty men, seven
    – chiefs of ten men, five
    – chiefs of five men, three

    (Working out how many were left to the average inca pleb is left as an exercise for the reader.)

    The punishment for sleeping with a woman belonging to your superior: you are killed, your wife is killed, your children are killed, your relatives are killed, your servants are killed, your fellow villagers are killed, all your llamas are killed, your village is destroyed, the site of your village is strewn with stones.

    So it could be worse.

    • rlms says:

      Sounds untrue.

    • sfoil says:

      Starting at “heads of provinces of 100,000” and working down, that works out to 135,920 women in order to fill the org chart. (Those are allocated to 33,301 men.) Assuming that “population” only counts adult male citizens makes it more feasible, but it still smells fishy.

      Edit: if you halve the numbers below the “governor of 100” level, since those ranks specify “men”, you only need 73,920 women for 17,301 men. If only adult males count for population, it’s definitely doable. Even if only adults count, it might still be workable with a combination of juvenile/child concubines and a high birthrate.
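
      A short sketch of the same tally for a province of 100,000, reading each rank’s “N people”/“N men” as a head-count of subjects and assuming every theoretical slot is filled (both assumptions, not anything stated in the source):

      ```python
      # (subjects per office-holder, concubines allowed each), from James's list
      tiers = [
          (100_000, 20),  # head of a province of 100,000
          (1_000, 15),    # leader of 1,000
          (500, 12),      # administrator of 500
          (100, 8),       # governor of 100
          (50, 7),        # petty chief of fifty men
          (10, 5),        # chief of ten men
          (5, 3),         # chief of five men
      ]

      POP = 100_000
      men = sum(POP // size for size, _ in tiers)
      women = sum((POP // size) * quota for size, quota in tiers)
      print(men, women)  # 33,301 office-holders claiming 135,920 women

      # Halving the three lowest tiers (since "fifty men" etc. presumably
      # means ~100 people) gives 17,301 office-holders and 73,920 women.
      ```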

      • James says:

        Well done for doing the maths which I couldn’t be bothered to.

        It does seem an outrageously prodigious system. A lot of wastage at the top end.

      • johan_larson says:

        Perhaps not all theoretical positions in the org chart are filled. In particular, do all the chiefs of 5 need to be supervised by chiefs of 10? Probably not. Just have one or the other, depending on the size of each lowest-level group.

      • nameless1 says:

        In such a system there is no reproductive incentive to keep unpromising sons (who would likely not achieve even the rank of chief of five) alive, since you get no grandchildren from them, so I would expect more women than men, i.e. more male infanticide. So if 130K girls and 130K boys are born, and 30K boys are killed, does this work out?

        • Nancy Lebovitz says:

          One of the ways fringe, hyperpolygamous Mormon sects handled the excess-sons problem was by forcing them out of the group.

          So far as I know (possibly not very far) it wasn’t so much a matter of the sons being unpromising as individuals; it was that they were younger and/or sons of lower-status men.

          Infanticide isn’t the best strategy unless you’re short on food: a child/teenager can be economically productive.

    • Nabil ad Dajjal says:

      How do you select for feminine beauty before puberty? The only thing I can think of that isn’t going to change with the onset of puberty would be facial symmetry. Maybe they selected based on their older sisters or something?

      Then again, with >1,500 concubines you can afford to have some give in the system.

      Anyway, this seems unlikely to be driven by women’s mate choices so much as the aforementioned mass executions for disobedience. Khans and Sapa Incas didn’t end up with so many wives on the basis of their own attractiveness. It’s an apples-and-severed-heads comparison.

      • James says:

        Anyway, this seems unlikely to be driven by women’s mate choices so much as the aforementioned mass executions for disobedience. Khans and Sapa Incas didn’t end up with so many wives on the basis of their own attractiveness. It’s an apples-and-severed-heads comparison.

        Yeah, the reason why is different, but the result (paucity of partners) is the same, and the cause (low rank in the porking pecking order) is similar or the same.

        • Nabil ad Dajjal says:

          The point I was making is that the cause is emphatically not similar.

          The “pecking order” of Inca noble titles is a hierarchy of the sort that groups of men tend to make for ourselves. There’s a reason that you described it as an “org chart;” if you denominated compensation in dollars rather than concubines, it would look like any corporation in the country today.

          If what we have today is a sexual marketplace, this is a sexual command economy. They’re very different systems with different results.

      • Well... says:

        How do you select for feminine beauty before puberty?

        Look at the parents?
        Rule out those with obvious, permanent deformities and blemishes?

        Also I suspect there were probably some standards related to skin color/eye color/hair texture/etc.

        BTW I’ve heard that pro sports talent scouts can tell which boys as young as about 8 or 9 will definitely not be able to be pro athletes. If that’s true, maybe there could be such a thing as a beauty scout for young girls.

        • baconbits9 says:

          BTW I’ve heard that pro sports talent scouts can tell which boys as young as about 8 or 9 will definitely not be able to be pro athletes.

          There are some sports (ie basketball) where this is definitely not true.

    • Nick says:

      Source?

    • dndnrsn says:

      Are these numbers based on writings such as law codes, or do we have harder evidence? Law codes often establish the way things were supposed to be, or the way the authors wanted them to be, rather than how they were.

    • Bugmaster says:

      I have a feeling these numbers represent the legally allowed upper limit, rather than actual headcounts.

    • Brett says:

      That’s different from what the Wikipedia article on Inca Society says about marriage:

      Men of lower rank could only have one wife; people of ranks higher than the kuraka were allowed more.[4]

      A Kuraka was basically a provincial governor. I doubt there were a lot of them compared to the size of the population ruled.

    • nameless1 says:

      And they are not around anymore, while more monogamous cultures survived. Maybe there is something about men not being motivated to work or fight if they have no chance at a woman.

      There is something else. How does the whole genetic math work out? In a strictly monogamous culture you generally expect that your sons will have wives and will reproduce, and their sons too. Unless they are so hideous that women would rather choose to become old spinsters than marry them (because in strict monogamy, if all the more desirable men are taken, those are the two basic choices).

      But if you are an Inca chief of five men and have three wives, you cannot really expect all your sons to be chiefs of five men. There are just not enough men around. Maybe one, who will carry on your genes, and of course your daughters will also carry on your genes. The other two sons are kind of pointless from your genetic math. They’re gonna be incels.

      So why not kill all but the most promising sons?

  23. RC-cola-and-a-moon-pie says:

    There was a conversation on a recent thread about deference to experts in areas in which you are not yourself expert. Most, I think, seemed to agree that, on the one hand, there are times where one could be entitled to reject an expert community on grounds of serious problems (I don’t know — the Soviet Lysenkoist community based on the political imperatives operating on it during its time of dominance), and also, on the other hand, that there are many times when you ought to defer without blinking an eye (some wholly apolitical, non-“tribal” technical issue in some scientific field you know nothing about). The question, I think, comes in the wide intermediate range along the spectrum between these two poles.

    I’ve been continuing to mull this over, and it occurs to me that I’m 99% sure Scott Alexander wrote something highly relevant to this question, but I can’t easily get my hands on it by searching. So going by memory I’m certainly going to butcher this, but take it as what little I retained from it rather than what he actually said.

    The gist, I think, was a thought experiment where you imagine a world where, for whatever historical, path-dependent reasons you want to make up, there emerged an overwhelming consensus among the academic community, the journalistic community, and the wider intelligentsia in favor of an ideology that we would all view as terribly wrong and even immoral. Imagine that OUR beliefs in the actual world were so highly stigmatized by this hypothetical community that there really wasn’t an openness to consider them.

    The question is, taking into account all the things like (a) the emerging replicability and methodological problems that can unconsciously skew research programs that are so ably discussed on this blog all the time, (b) the ability of both self-selection of research topics and the power of funding mechanisms supporting the dominant worldview to channel research in favored directions, (c) the strong cultural incentive not to lend aid and comfort to views that are seen as supporting an enemy worldview, (d) the power of media and the intelligentsia to filter and characterize (and often mischaracterize) research results, and (e) etc. — I’m sure I’m not pulling in everything relevant here — in light of all that, how confident are we that that culture couldn’t put together an epistemically grounded worldview just as seemingly strong as ours seems to us? Particularly given that nobody is expert in all the subjects pulling together the big picture — at best, someone will be an expert in one or maybe two areas, relying on the broader zeitgeist to fill in the larger worldview around the few pieces of the puzzle in which that person is expert.

    I just recently noticed that the blog has a subtitle: “The Joyful Reduction of Uncertainty.” I don’t know if this is new or if I just never noticed it before. But it seems strange to me. I come away from my experience with this blog with substantially GREATER uncertainty than I started with. Many posts show that entire accepted paradigms can be highly uncertain, and that individual research studies seem next to useless in many situations. I just wonder how all this impacts the case for deference to experts, and I tend to worry that it pushes the case against deference further away along the spectrum of possible cases than one might otherwise assume. Indeed, in my more skeptical moments I worry that there is no stopping point at all on the slope toward scientific agnosticism in light of all these problems.

    Well, that’s it. I guess the only concrete question coming out of all this is whether my characterization of Scott’s essay rings any bells. (I’m pretty sure but not positive it was him.) Obviously, any substantive reactions or reassurances would be welcome as well.

    • Ninety-Three says:

      I believe you’re looking for this article about a hypothetical society which insists that lightning comes after thunder.

    • kokotajlod@gmail.com says:

      I believe you are thinking of this: https://slatestarcodex.com/2014/06/03/asches-to-asches/

    • mdet says:

      “Joyful Reduction of Uncertainty” comes from the Friston on Free Energy post. TLDR, Scott interpreted Friston as saying something like “The single, fundamental goal of all consciousness is to produce joy by reducing uncertainty”. In other words, “We’re all really curious and just wanna understand what’s going on.” (which sounds appealing for Friston and Scott and you and me, but might not be generalizable to *all of consciousness*). Disclaimer: This is my paraphrase from memory of Scott’s interpretation of a guy who is notoriously barely comprehensible, even by his colleagues.

      The blog subtitle changes every few months it seems.

      • ec429 says:

        To expand on this, in re

        I come away from my experience with this blog with substantially GREATER uncertainty than I started with.

        one could argue that you’re not more uncertain, just more aware of existing uncertainty. Possibly-bogus analogy: even if the things you learn push some of your probabilities nearer ½, I’d say a Beta(100,100) posterior is less uncertain than a Beta(4,1) prior (it has a lower differential entropy. At least I hope so, I didn’t calculate it), even though the result of a single (Bernoulli) trial is more uncertain (p=½ has a higher entropy than p=⅘).
        All this ties in to the “why doesn’t the Fristonian mind lock itself in a dark room?” argument. Somehow.
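
        The entropy comparison is easy to check numerically; a minimal sketch with scipy (values in nats, lower means more concentrated):

        ```python
        from scipy.stats import beta, bernoulli

        # Differential entropy of the Beta distributions vs. the Shannon entropy
        # of a single Bernoulli trial at their respective means.
        print(beta(100, 100).entropy())   # ~ -1.92: narrow posterior around 1/2
        print(beta(4, 1).entropy())       # ~ -0.64: broader prior with mean 4/5
        print(bernoulli(0.5).entropy())   # ~ 0.69: a single trial at p = 1/2 ...
        print(bernoulli(0.8).entropy())   # ~ 0.50: ... is more uncertain than at p = 4/5
        ```

        So the parenthetical hope holds up: the Beta(100,100) posterior does have the lower differential entropy, even though the single-trial entropy at p=½ is higher.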

  24. sfoil says:

    Why are street drugs, especially cocaine, so often adulterated with other drugs?

    I’m not talking about cutting valuable product with inert (or at least low-cost) fillers, impurities resulting from bad production processes, or users/suppliers knowingly mixing drugs to achieve certain effects e.g. speedballs.

    Back when I argued about politics on the Internet, it was a common libertarian/pro-legalization talking point that street drugs being intentionally “laced” with other drugs (e.g. marijuana + pcp) was basically a myth, largely on the grounds that dealers have no economic incentive. The upshot was that a lot of Just Say No horror stories were made up. There was some concession that a clueless user might in some circumstances order the “special” without knowing what was in it. The argument was reasonably convincing. Why would suppliers spend more money to produce a less safe/pleasant product? (Again, a separate argument from doing this to save money.)

    However, over the past several years I’ve had to supervise a lot of drug tests. Nearly everyone who tests positive for cocaine also tests positive for “something else”, usually some sort of opioid, and while they generally readily admit to using cocaine once caught they’re always surprised at the something else. I bring this up because recently someone I know died using cocaine that turned out also to contain fentanyl. Obviously he can’t be asked directly but there’s a great deal of evidence he had no idea. There’s no way anyone who knows what fentanyl is would use it to “stretch” a batch of cocaine. What gives?

    • Urstoff says:

      Poor shipping and handling procedures at the drug warehouse.

    • baconbits9 says:

      (mostly a guess)

      Supply lines are very fragile in the drug world compared to the legal world. One arrest, broken hydroponics system, or guy who gets too high to make a delivery for a few days can dry up a local area, but shortages are when the largest margins are made. If you sell your cocaine unadulterated then when your supply dries up you are first out of the game; if you are willing to cut it up with something else you are the one with a “supply” for the longest and make the most money. If you cut it up with inert materials (corn starch) then your clients just get a shitty high and will be happy to turn somewhere else when supply goes back to normal, but if you find things to cut it with that get them high (enough) you can skirt this issue.

      Even mixtures of two drugs can work this way, if you sell speed balls then you can adjust for the harder to find ingredient, going from 50/50 to 60/40 one way or the other (note I have no idea the actual ratios involved in speed balls) depending on what is available.

      Finally there is also probably a whole lot of cross contamination going on at the various levels of drug distribution. The guy cutting your cocaine probably doesn’t have a high level of conscientiousness.

    • gwern says:

      Are they adulterated that often? I’ve read a few papers from Energy Control and others, and the upshot seems to be that street drugs are typically fairly pure, with the exception of cocaine (apparently always cut with levamisole or something else, suggesting it happens upstream at the cartel origin) and recently, opiates (primarily heroin) with synthetic opiates (fentanyl, carfentanil) to cover up for cutting. But all the other categories like MDMA, marijuana, psychedelics, apparently are pure and mostly have dosing problems.

      • sfoil says:

        It’s really just cocaine and I bet that’s the answer. Cut with levamisole (which isn’t tested for, and which I hadn’t heard of before you brought it up), with the opiate added so the mix maintains the expected potency. E.g., 50% cocaine, 49% levamisole, 1% fentanyl or whatever. That makes way, way more sense than cutting cocaine with fentanyl.

    • Garrett says:

      Slightly on-topic. I volunteer in EMS. A class I recently took, taught by a toxicologist, mentioned that as a part of the regulatory cat-and-mouse game, a large amount of “designer drugs” are coming into the country. These are usually referred to as things like “synthetic marijuana” or similar. In these cases, chemists abroad make changes to existing drugs in ways that make them technically legal and also so that they don’t trigger existing drug tests.

      Unfortunately, this means that people are now taking drugs which have an un-studied pharmacological effect. Also, the users don’t actually know what they are “really” getting. And there aren’t any standard tests for it; these apparently take months to develop. So you get patients in the ER who are high on “mystery white powder” that you can determine isn’t cocaine and isn’t heroin. Great … that narrows it down.

      • sfoil says:

        Where do you live? In the US, “synthetic marijuana”/”spice” was something of a fad about five years ago but seems to have gotten less popular (although I also lived in a different part of the country then). Anecdotally I knew a lot of people that tried the stuff, had various bad experiences, and went back to good old cannabis. That was also about the time that THC vapes became more common, too, as another possible substitute.

    • BeefSnakStikR says:

      Why are street drugs, especially cocaine, so often adulterated with other drugs?

      Increasingly liberal attitudes to extramarital relations between drugs. Divorce between drugs is also peaking.

    • SaiNushi says:

      After looking up fentanyl, I see that it’s used to mimic heroin, which means it would give a different high than pure cocaine. Possibly the dealer doesn’t tell customers, in order to give their supply an edge, making it seem better than the competition’s?

      Or, it’s about cornering the market. Druggie gets the coke+fentanyl mix, gets addicted to both coke and fentanyl. Druggie tries another drug dealer, but since the unknown fentanyl addiction isn’t satisfied, must go back to the drug dealer that included the fentanyl.

      See: McDonald’s and Pizza Hut using sugar to make people prefer their food over other fast food and pizza places.

  25. themindgoo says:

    I have 2 possible ideas for research:
    1) Could LSD or other hallucinogens cause a foot fetish? (My hypothesis: foot fetishes are linked to cross-talk between the brain regions responsible for the feet and the genitalia, and LSD increases connectivity between brain regions, so it could cause that.)
    2) Do straight males with less empathy choose porn without male actors? (My hypothesis: the males in porn are there to relate to, but if that ability is lacking, the viewer would see them as useless.)
    I’m not a psychologist, and not even in the social sciences, so I don’t know if there would be any point in testing these or whether a survey would be the best way to approach it. Does someone know something about this, or could you give me some advice?

    • Nornagest says:

      Could LSD or other hallucinogens cause a foot fetish? (My hypothesis: foot fetishes are linked to cross-talk between the brain regions responsible for the feet and the genitalia, and LSD increases connectivity between brain regions, so it could cause that.)

      Probably not unless you were taking LSD when you were five years old. Most of the stuff I’d read on the subject indicates that fetishes are set quite early.

      • Le Maistre Chat says:

        Fetishes getting set before puberty is pretty weird if you think about it.

      • powerfuller says:

        fetishes are set quite early

        That’s interesting; I’m pretty sure my own fetishes were set in early adulthood. I guess that makes me a weirdo among the weirdos. I have a friend who claims to be able to give himself fetishes (and abandon them later) by volition. I’m skeptical, to say the least.

    • powerfuller says:

      I remember reading somewhere that foot fetishes were more common in times/places with STD epidemics.

    • BeefSnakStikR says:

      Do straight males with less empathy choose porn without male actors? (My hypothesis: the males in porn are there to relate to, but if that ability is lacking, the viewer would see them as useless.)

      How would you distinguish between “relating to the sex act being performed on the man” (empathy with the man’s sensations) and “relating to the acts the men are performing on the women” (empathy with the man’s desires/will)? No matter what data you get, you can’t prove it’s one or the other.

      I’d study producers of pornography: first, discard producers who hire male actors for pornography. Then study those who (1) themselves participate in pornography or (2) do not use males at all in their pornography. (1) gets rid of the possibility of relating to the man’s sensations, since sex acts are either directly performed on the man or in the case of (2) not performed at all. Then you’re studying the empathy for male desire/will.

      How exactly you’d sample this, I don’t know.

  26. John Schilling says:

    Scott’s discussion of why we really shouldn’t want to have a Universal Jobs program included a number of elements that had to do with poor implementation, e.g. that the jobs will be made to suck more than they need to and that it will be made bureaucratically difficult to switch jobs. And these are things that would almost certainly be part of any real Universal Jobs program, and need to be considered. I would have preferred that discussion not have been derailed two steps into “UBI sucks because most people are shiftless layabouts if given the opportunity”. Suffice it to say, I agree that Universal Jobs sucks in part because it will probably be implemented poorly, but so long as there is any chance it will be implemented at all, maybe we should talk about how to do it better.

    Now I want to talk about how the UBI will also be implemented poorly, if it is ever implemented at all. Not about how it turns everyone into shiftless layabouts, that’s basically just ideology, but how it will predictably be implemented in a way that screws over even diligent people who want to make the best of it. And maybe talk about how to do it right if we get the chance. But here’s what I am pretty sure will be done wrong:

    1. “Universal” will be watered down to exclude both rich people who obviously don’t need it, and shiftless layabouts who obviously don’t deserve it. The first part at least can mostly be dealt with by an income cap assessed via our current income tax machinery, and if limited to the truly rich wouldn’t be too disruptive. The second part requires an elaborate and powerful bureaucracy specifically tasked with making sure the wrong sort of people stay truly poor.

    2. Unless we are confident we have kept all the shiftless layabouts off the UBI, we are going to punish and stigmatize people who collect the UBI in order to satisfy some people’s sense of justice and also to discourage use of this expensive service. And if the punishment takes the form of making people stand in line to turn in paperwork or make themselves available for public-service work, that goes against the economic freedom to combine UBI with low-wage labor. Likewise if the stigma amounts to a label of “shiftless layabout – do not hire”.

    3. Major bureaucracies which should be made redundant and shuttered by the UBI, will instead carve out and protect bits of turf to keep themselves gainfully employed and empowered. The UBI won’t be sufficient for a severely disabled person who needs round-the-clock assistance, and while one obvious solution is for everyone to get long-term disability insurance in the market, another is to keep SSDI – and the entire Social Security Administration to run it. There are ways to arrange for UBI to cover children’s living expenses, but there’s also just limiting UBI to adults and keeping AFDC/TANF/Food Stamps. Bureaucrats are really, really good at making sure their own jobs are not red-lined out of the budget.

    4. The UBI will be set at an unreasonably, unaffordably high level by pointing to sob stories involving photogenic single mothers with four children in the bay area, referencing an arbitrary definition of “poverty” that will necessarily set itself somewhere above plain UBI, and neglecting the bit where UBI recipients are supposed to be supplementing their income with part-time or gig work. More money for poor people isn’t a bad thing in its own right, but it is likely to either crash the economy or incentivize failures #1 and #2 above.

    5. Others that I am probably missing but expect the commentariat here will be able to fill in.

    A UBI is the least-bad solution I have yet seen for dealing with mass technological unemployment. I would favor implementing a UBI if MTU were imminent, or if some transient political circumstance made it likely a UBI could be implemented with relatively few of the failures noted above. But unless circumstances were particularly favorable, I would expect the results to be anything but a utopia.

    In particular, I would expect a poor or even mediocre implementation of UBI to enforce a strong class system. On the top, people with upper-middle-class jobs, management and STEM and the like, six figure salaries, paying taxes to support the UBI, never claiming it for themselves even if they could because the additional 10% or so in income does not compensate for the stigma. On the bottom, people condemned to UBI Hell with maybe a little bit of off-books gig work. And in the “middle”, but a lot closer to the bottom, people with crap jobs from which they can be fired on a one-way trip to UBI Hell if they get too uppity. Or, slightly better, one can claim UBI while working one of the lower-tier crap jobs and so we get mobility at that level, but then the actual wages of the crap jobs are adjusted downwards to compensate – and there’s no chance of being hired for the good jobs, which are only open to people who don’t have the stink of UBI on them.

    And now I’m tempted to turn this into an adversarial collaboration, but it would probably be cheating to ask for Scott as my partner. Also, there’s going to be a distinct shortage of hard data to work with (cue David Friedman telling me about four archaic societies I’ve never heard of that kept good records of their UBI implementations).

    • helloo says:

      To expand on 3, I think the biggest issue is that UBI does not eliminate (at least for a couple of years, even in good-case scenarios) the underlying reasons and arguments that brought forth the other welfare programs in the first place.

      Even if the UBI proponents say that it’s going to replace them, the various interest groups will bring them back, stating how UBI isn’t sufficient, and “just give it a few years to fix everything” isn’t going to be convincing enough to stop them.

      An interesting parallel might be the flat/simple tax that some US conservatives proposed in the 2016 election. There’s a reason why ~50% of all congressional proposals are related to changes to the tax code.

    • Aapje says:

      @John Schilling

      Your objections pretty much all boil down to people not accepting an actual UBI, implementing a more welfare-like system instead, and then calling that a UBI.

      • Conrad Honcho says:

        See also the implementation of “healthcare for everybody” as Obamacare.

    • J Mann says:

      While we’re at it: I don’t know enough to have much confidence in these, but here are my concerns with a UBI.

      1) Children: Above a certain point (“X”), a UBI will incent people who are willing to raise their children in poverty to have more children in order to increase the parents’ lifestyle. Below a certain point (“Y”), a UBI will not allow children to live in what society considers acceptable economic circumstances. Worse, I am concerned that Y > X, possibly by a lot. This is going to result in enough child suffering to create a tension on the program, even if it reduces net child suffering overall.

      2) Lobbying incentives: I worry that it will create an incentive to spend more time lobbying for increases in UBI and less time doing activities that other people value. I also don’t see any way that it doesn’t end up crufted up like the tax code, with all kinds of nooks and crannies designed to incent or reward various behavior.

      • Jake says:

        To respond to point 2, I like the idea of a UBI that is explicitly income redistribution, where the amount each person gets is determined by the budget surplus for that year. If you want to add new programs/start new wars/etc., you are going to do it by taking money from everyone on UBI or raising taxes. Conversely, if you want to raise UBI, you either need to cut programs/wars/etc., or raise taxes. I think that should settle into a semi-stable state where people actually care whether the budget is balanced at all, though it does get rid of some macroeconomic tools.

        • J Mann says:

          I like that a lot. I take it we’d either hold the debt stable or pay it down by some fixed amount relative to GDP? You definitely couldn’t enact a “let’s pay off the entire debt first, then get a UBI.”

          I guess you would still get a constant threat of politicians running on “let’s tax the rich a little more and stick it in the UBI” – the salami would stop getting sliced when (if ever) the majority of voters were convinced they were doing actual damage to economic growth.

    • helloo says:

      There’s also always the issue of fraud.

      Like the case where several million babies disappeared when the IRS started requiring Social Security numbers for dependents to be listed.

      Though I suspect that’s going to be more of an issue in less-developed countries, similar to voter fraud.

      For better or worse, UBI will probably prompt the creation of a national ID in any country that implements it, if they didn’t have one before (namely the US).

      • Edward Scizorhands says:

        Fraud will exist, but I don’t think it will be a large issue. It will surely be tied to SSN. Some people hide grandma’s death to keep claiming her Social Security, but it’s not that common.

        Of course, that will raise the fight of whether immigrants (illegal or otherwise) get it.

        A UBI claimable only by citizens will create new waves of illegal immigration as employers give up trying to pay $15 / hour to hire Americans to do work. This will be enforced as well as it is now, which is barely.

    • Randy M says:

      A UBI is the least-bad solution I have yet seen for dealing with mass technological unemployment. I would favor implementing a UBI if MTU were imminent, or if some transient political circumstance made it likely a UBI could be implemented with relatively few of the failures noted above. But unless circumstances were particularly favorable, I would expect the results to be anything but a utopia.

      I don’t have further objections to fill in yet, but I want to chime in and say this encapsulates my view perfectly as well.

    • I think there is a worse possible outcome than the ones you mention.

      You end up with a society where a large part of the population is doing nothing of value to other people, is consuming resources produced by other people, and is low on skills, ambition, energy. At some point ideologies change and it becomes acceptable for the part of the population that is productive and competent to decide they would be better off without the deadweight. If they are in a generous mood they put birth control in the water supply and drink bottled water themselves. If they are less generous … .

      • Edward Scizorhands says:

        Just moving off to Galt’s Gulch is easier at that point.

        I normally think people threatening GG are full of it, but it’s less drastic than sterilization. If some region ends up with a concentration of the actual workers, things can snowball from there. The workless, being workless, will have a hard time organizing an army to go after the workers to enslave them.

        • toastengineer says:

          Trouble is, where, physically, are you going to put it? Unless everyone chips in to buy a cruise ship and convert the swimming pool to an aeroponics bay or something, you’re gonna have to take the land from somebody.

          Rereading your comment you mention it would start at a natural concentration; fine, but you’re still displacing people.

          • John Schilling says:

            A while back I pondered what Atlas Shrugged would look like if it were rewritten to fit the 21st century, e.g. for a not-stupid movie adaptation. In addition to Dagny Taggart running a backbone telecommunications firm and Hank Rearden running chip foundries rather than the steel kind, Galt’s Gulch was decentralized. From the outside, it looked like a bunch of people had dropped out to run hobby farms in the country or play video games / make avant-garde art in cheap urban apartments or lofts, scattered across the country and with minimal economic interaction. Internally, they were running a prosperous trade economy based on e-commerce, 3-D printing, and the like.

            I don’t think it would be possible to actually maintain the masquerade if you looked at the details too closely, but I think it’s closer to plausible than anything that involves atmospheric refraction rays or whatever.

    • Garrett says:

      cue David Friedman telling me about four archaic societies I’ve never heard of that kept good records of their UBI implementations

      This brought delight to my day on many different levels. Thanks. 🙂

      • rlms says:

        Spoiler: they are all 12th century Iceland.

        • Aapje says:

          Unfortunately, the UBI failed because they were constantly yelling “GRAAAAGH” while assaulting others.

          • Chevalier Mal Fet says:

            We’re going to need to start creating a compendium of SSC open thread in-jokes to help orient newcomers.

            it will be written in rot13, of course.

          • dndnrsn says:

            But who can be trusted to correctly interpret the wise words and traditions of the True Caliph?

          • Paul Zrimsek says:

            You mean “the the True Caliph”.

    • cryptoshill says:

      What of the idea that we move from Mass Technological Unemployment to Mass Technological Employment? I will point out that the modern technological societies we live in have done a REMARKABLE job of allowing people to monetize and market previously difficult-to-monetize and difficult-to-market skills. Or, to make myself more clear at the risk of strawmanning my own opinion: who actually thought 30 years ago that “professional video game player” was going to be a real profession?

      I predict that whether we like it or not, we will still have many, many ways to be employed in services that are not “burger flipper” and pay much better. However I suspect that the nature of that employment will change models drastically. I predict a sort of “Hustle economy” where what is selected for is no longer cognitive ability or conscientiousness, but a general factor of “hustle” (which is probably closely linked to conscientiousness, but I am attempting to distinguish a person’s ability to establish a “personal brand” and market one’s Universal Human Talents to others from “ability to follow orders of a superior” and “ability to tolerate boring drudgery”). Of course, the high-cognition high-conscientiousness people will still wind up with robot empires, but it may not mean the End of Income for the Underprivileged.

      Anecdotally – being a Creative used to be a laughed-at idea for anyone even thinking of making a living. Now there exist millions and millions of ways to monetize your art, some of which are entirely self-driven.

      • proyas says:

        But even today, it’s still very difficult for artists and people in niche “leisure jobs” like pro video gamers to make more than poverty wages.

        StarCraft 2 pro gamers are some of the most famous in the world, but ignoring the very best guys who are household names in South Korea, it’s quickly apparent that it’s not a substitute for a real job.

        Right now, the 100th best StarCraft 2 player is ‘Dmytro “DIMAGA” Filipchuk’.

        https://www.esportsearnings.com/players/1058-dimaga-dmytro-filipchuk

        Over his ten-year career, he’s only made $66,313.11, which averages out to less than working part-time at McDonald’s. It gets worse when you realize that the game has probably consumed most of his time during the last decade, muscling out time that could have been spent working other jobs or getting a degree. A pro StarCraft 2 player’s career typically goes downhill fast in their late 20s as their reflexes slow, so even the best guys can only count on a few years of high income.

        So, in all of StarCraft 2 fandom, only the top 70 players might be winning more money (averaged over at least a five-year career span) than they’d make at a minimum wage job.

        Yes, I know StarCraft 2 isn’t the only video game where you can make money from tournaments and broadcasting your play on YouTube, but even if we assume the “job market” is 1,000 times bigger than what exists for StarCraft 2, it’s clear how far short it falls of satisfying the demand for jobs.
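
        The McDonald’s comparison holds up on a back-of-envelope basis; the wage and hours below are assumptions (US federal minimum wage, 20 hours a week), not figures from the esports site:

        ```python
        # Career tournament winnings vs. a part-time minimum-wage job.
        career_winnings = 66_313.11   # from the esportsearnings page linked above
        career_years = 10

        winnings_per_year = career_winnings / career_years   # ~ $6,631
        part_time_min_wage = 7.25 * 20 * 52                  # ~ $7,540 per year

        print(winnings_per_year, part_time_min_wage)
        ```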

        • baconbits9 says:

          Following your link almost all of Dimaga’s earnings come from a 3 year stretch where he made $27,000, $16,000 and $15,000. If you go by earnings per year the 500th highest earning player on your list made $37,000 in 2017, well above poverty wages.

          • proyas says:

            Huh? The 500th highest-earning StarCraft 2 player has earned $2,485.78 over his two-year career.

            https://www.esportsearnings.com/players/37286-meomaika-tran-hong-phuc

            You must be looking at the list of player earnings for all tracked video games. The site tracks 100 different games.

            I stand by what I said. No matter how you slice and dice the figures, and even if something like the Ready Player One “Oasis” comes along, pro video gaming will never provide >minimum wage income for even 0.1% of the working-age population.

          • baconbits9 says:

            Yes, I was looking at the all-esports pages. The fact that hundreds to thousands of people (once you count YouTube, casters, etc.) make solid incomes, and that people who play poker and fantasy sports, play instruments, and coach and give lessons in things they love earn well above poverty wages, is a strong argument against what you said.

            Poverty-level wages in the US are about $5 an hour full-time for a single individual. What people really mean when they talk about how hard it is to be an artist is that they can’t pursue their absolute passion while also having the family they want while also being financially comfortable. They don’t phrase it like that because it would become obvious that they are asking for a whole crap load more than the average person gets.

        • cryptoshill says:

          There are at least 10-30 people who make US Middle-class incomes playing Super Smash Brothers Melee professionally.

          One of the things I object to here is that this reflects the current number of people spending their time watching pro gamers or musicians on youtube or playing independently-developed videogames. I have no reason to doubt that as technology starts to remove more and more of the drudgery from our lives, people will become bored and thus spend more of their time/money on entertainment than *ever* before. I guess the uncharitable thought is that this also leads to a drug-abuse epidemic. (Which is probably quite right as well).

          • proyas says:

            I agree with you that video gaming will become more popular as the unemployment rate climbs, but I don’t see any reason to think that anything but a token fraction of those people will be able to monetize their personal video gaming to the point that they’re making at least minimum wage. Do you know of some trend that suggests otherwise? Where is your source?

          • cryptoshill says:

            I am going to avoid being uncharitable here – but supporters of the Mass Technological Unemployment hypothesis are guessing at the Singularity. When historians and economists point out – “but this has always happened and we’ve found other things to do”, and the objection is “BUT THERE’S NOTHING ELSE FOR HUMANS TO DO, EVER”.
            I don’t think I need to “source” my objection of “there’s at least 10 million ways to make money on the internet that don’t involve something a robot should be doing instead”.
            This is the trend, and this isn’t even counting all of YouTube, independent musicians, people that make memey t shirts to sell on facebook, etc:
            https://techcrunch.com/2017/01/24/streamer-numbers-and-incomes-are-rising-healthily-according-to-data-from-popular-tool/
            An 84% increase in revenue from tips (not subscriptions). It also seems to be pretty evenly distributed.

          • John Schilling says:

            “but this has always happened and we’ve found other things to do”, and the objection is “BUT THERE’S NOTHING ELSE FOR HUMANS TO DO, EVER”.

            You are indeed avoiding charitability. The objection is: the other things we have always found to do in the past, were things that a person of average intelligence could do but which average people weren’t allowed/adequately paid to do because civilization really needed people to do an awful lot of things that were just slightly beyond reach of the average horse. There is good reason to expect a qualitative change if the threshold of “machines can do that cheaper” broadly crosses over average human intelligence, for which shifting of that threshold within the supra-horse but sub-human intelligence range is not a good precedent.

            But there is the precedent of mass technological unemployment for horses, in which glue factories played a disturbingly prominent role.

          • cryptoshill says:

            I apologize, I was trying to avoid exactly that tone. I think I need to do a better job of separating the actual singularity (in which we are going to have a lot of problems with cheap, hyperintelligent AGI and general purpose robotics) from the pre-singularity. I will reach here and suggest that after cheap and plentiful hyperintelligences exist, we have either Solved the Problems of The Human Condition, or are all dead. In the meantime then, let me clarify my priors:

            Trade jobs are not going away, ever.
            The reason I think this is the case is that they seem to combine “basic human talents” (going up stairs, crawling into tight spaces, creative problem solving) with manufacturing processes and some amount of skilled training. (If we reach a point where we are 3D printing most of our houses this might change – but based on modern building codes this is unlikely.)
            The demand for leisure will increase substantially, even among the chronically unemployed. This seems to just be a natural feature of all technological evolutions that reduce employment more generally.
            The cost of manufactured goods will drop precipitously. If we have Mass Automation, I have a really hard time believing anything else.
            Coordinators will all be out of a job. I am presently a coordinator in my job role, and without the security requirements involved I could easily program away every single function my job requires, even though my programming background is sparse at best.

            From these I derive:

            The cost of sustaining a human life will be substantially lower than it is today, adjusting for inflation. When manufactured goods drop to “essentially free” in price, including foodstuffs, it will be substantially easier to sustain oneself on an income that would previously have been considered “poverty wages”. A quick estimate here is that the resources I consume on a monthly basis only cost me approximately $1000, the large majority of that share being food. If we abolish zoning codes and put most plumbers and electricians out of work (or at least make their work much less expensive) by building extremely inexpensive 3D-printed housing, an annual income of $14,400 will be plausible for relatively high-grade subsistence. (I’m not totally sure what power and water prices will do in the future, but I am pretty sure “power down” and “water up” are good priors, so I am zeroing them out.)

            “Bespoke Employment” will explode
            The number of people who are in search of something unique and obviously made by a real human is growing.
            https://www.forbes.com/sites/paularmstrongtech/2016/12/15/what-you-dont-know-about-etsy-and-its-2017-strategy/#5b65c84c64b0

            Semi-professional entertainers will find it much easier to make money. There is increased demand for leisure activities and general antipathy towards mass-market media (theater sales figures are abysmal for anything that isn’t the latest blockbuster series). People are cutting traditional cable networks and watching YouTube, Twitch, Netflix and Amazon video instead. Other than Netflix, these are all marketplaces with relatively free market access.

            I have no doubt that automation will reduce employment substantially – but I do not think that it will be particularly difficult for independent entrepreneurs to live a high-quality subsistence existence. Especially if cost of living is substantially impacted by the same processes that bring about this unemployment.

      • Iain says:

        The fundamental problem with the attention economy is that it is winner-takes-all. Success is exponential. Take a look at the most played games on Twitch: Fortnite (#1) is twice as popular as League of Legends (#2) which is in turn twice as popular as PUBG (#3). Looking at the numbers for Fortnite in particular, the top Fortnite streamer has more than double the number of average viewers as the second most popular. Most of the rewards for any one game go to the top handful of people; most of the rewards for games as a whole go to the top handful of games.

        You can’t base an economy on hustle. What does a boring 9-to-5 job look like in the hustle economy? We can’t all be the one guy who hits the jackpot. The math doesn’t work out. If the number of followers needed to sustain yourself is higher than the number of people that you can meaningfully follow yourself, then somebody’s got to end up holding the bag.

        If you’re going to employ a significant fraction of the population, you need a model that rewards people who aren’t in the top epsilon% of the talent distribution.
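
      The “holding the bag” point is just conservation of follows; a toy sketch with made-up numbers:

      ```python
      # If each creator needs F paying followers to get by, but each person only
      # meaningfully follows f creators, then total follow "slots" = N * f and
      # each sustained creator uses F of them -- so at most f / F of the
      # population can live this way, however the follows are distributed.
      def max_sustainable_share(followers_needed, follows_per_person):
          return follows_per_person / followers_needed

      # Hypothetical numbers: 1,000 paying followers to get by, 20 follows each.
      print(max_sustainable_share(1000, 20))  # 0.02 -> at most 2% of people
      ```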

    • Conrad Honcho says:

      In addition to firing people who get uppity, banishing them to UBI hell, what about the people on UBI with the Wrong Opinions who spend their limitless free time plastering their Wrong Opinions all over the internet?

    • Andrew Hunter says:

      5. Some small but non-trivial group of UBI recipients will gamble/drink/badly invest all their money. Their children will starve in the streets unless we have equivalents of TANF, Section 8, Medicare, and all other major social programs. The population will not stand for this. We then have to pay for a UBI _and_ all our current social programs (with the same bureaucrats determining eligibility, taking their percentage off of the top…)

      (or, as David says, possibly worse: the population *will* stand for this and we watch our underclass actively die.)

      • Edward Scizorhands says:

        There is some moral hazard that would be undone if the US were to stick to its guns and say “no, you had your chance, now die on the streets.” I don’t know where it is, but some fraction (between 1% and 99%) would say “oh, they mean it,” and shape up.

        I think this is moot because it will never happen.

      • SaiNushi says:

        What if, instead of keeping EBT (if the problem is “just” making sure that the kids get fed), we expanded the school lunch program to include three meals a day and implemented year-round schools, with a way for kids to get their meals at the school building if they spend the break there (doing break programs or something)?

  27. Leah Velleman says:

    I’d like some help understanding The Last Psychiatrist/Hotel Concierge. A couple writers I respect seem to admire him, and I can’t figure out why, and that makes me suspect I’m missing out on something interesting.

    There are two difficulties I have with him, one about style, and one about substance. The stylistic problem is that he writes so very coyly — as if he’s solved a puzzle, and doesn’t want to share the solution outright, but instead wants to flatter the other people who’ve guessed it and annoy the ones who haven’t. This makes it hard to tell what he’s on about. But often, as far as I can tell, what he’s on about is that the world is full of shallow, vapid, cartoonishly stereotypical assholes who are miserable in cartoonishly stereotypical ways, and who will never stop being miserable because they’re narcissists who deserve it. And — this is the problem with substance — even if that were true, it doesn’t seem interesting enough to be the basis for a long-running well-respected blog.

    So… what am I missing? Are there places where he drops the coy tone and says directly what he thinks? In particular:

    – Does he ever define “narcissism” in a brief and straightforward way? He seems to use the word differently from everyone else I know, and to have a lot of scorn for people who don’t grok the difference — but does he at some point explain what the difference is?

    – Does he ever make concrete proposals for how he’d like some part of the world to be different — either politics, or pop culture, or the field of psychology, or anything else?

    – Does he ever point approvingly to other, clearer writers and say “This person gets it”? Or has some other, clearer writer given a summary of his ideas?

    • dndnrsn says:

      Why do you assume that they’re the same person? Early HC reads like someone writing TLP fanfic. At a minimum, if it’s the same guy, he got high on his own supply and got a lot less snappy. There’s also a difference between different HC posts – there’s some where one point is fairly clearly made; the most recent one feels more like three points made in five posts of writing.

      What I got from TLP is that a major source of bad things is people thinking of themselves as the protagonist of a story that is being told. That might be just what I’m taking from it, though. What you get out isn’t necessarily what a writer puts in. One could look at the latest HC post, take the idea that what he’s saying is Goodhart’s Law applied to relationships, take from that the idea that the problem is not taking the time to get to know people, trying to look for a shortcut to figuring out whether a person is OK or not before you sleep with them or whatever. Is that the point?

      I like TLP more, but I binge-read TLP at a point where I was in a very receptive state of mind, and I remember the good posts all sort of blended together. There’s a lot of forgettable or dumb stuff. I didn’t exactly take notes.

      • Leah Velleman says:

        Er, sorry — I only assume they’re the same guy because I’ve seen others making that assumption in a way that made it sound like an open secret. I definitely don’t have any first-hand evidence one way or the other.

    • Urstoff says:

      I think writing in that style gives a lot of readers the illusion that they are close to understanding the BIG SECRET about the world, and that illusion is very emotionally attractive. If the author was straightforward, the readers would not find it so alluring, and it would turn out that either: the thesis is fairly banal, the thesis is unsupported by the evidence, or there is no thesis.

      • Leah Velleman says:

        Frankly, this is what I’d assume was going on too if it wasn’t for people like Scott whose thinking I respect claiming there was more substance to it.

      • dndnrsn says:

        I’d put it somewhere in between. TLP and HC are both authors where you can get something from it, but it’s vague enough that it buzzes around in your head and links together with other stuff, then you project it back. “Teach a man to fish” sort of situation. The positive reading of this is that giving your readers the ability to figure things out themselves is good; the negative that it’s in the same domain as the Forer effect or the Carnegie trick of getting people to talk about themselves so they think you’re interesting.

        • Urstoff says:

          If that’s the output of that type of thinking, I’m not sure I see the value in it.

    • Nabil ad Dajjal says:

      Does he ever define “narcissism” in a brief and straightforward way? He seems to use the word differently from everyone else I know, and to have a lot of scorn for people who don’t grok the difference — but does he at some point explain what the difference is?

      One important piece of context is that he either practices or at least greatly admires psychoanalysis. A lot of his posts are ~50% bitching that all he does is write prescriptions instead of psychoanalyzing patients. You get the sense that he would kill to practice in 19th century Vienna.

      I don’t understand psychoanalytic theory and I don’t particularly want to for the same reason that I don’t want to spend time understanding æther theory or any other discredited scientific model. But the gist of it that I do understand is that narcissism is a whole big thing, which includes the colloquial definition of narcissism but also a lot of behaviors and ways of thinking that we wouldn’t normally think of as narcissistic.

      Does he ever make concrete proposals for how he’d like some part of the world to be different — either politics, or pop culture, or the field of psychology, or anything else?

      In one of his posts, I don’t remember the name, he praises the wife of a man undergoing a mid-life crisis for pointedly ignoring / bluntly dismissing his attempts to redefine himself until he snapped out of it. The man presumably wasn’t a bad husband or a bad father or even a bad employee but he didn’t want to be “just” a good husband, father and professional. Except, well, that’s what he actually was and his flailing for a different cooler identity was pathetic and ridiculous.

      It sounds a lot like what people tell me* Jordan Peterson says but with fewer dragons and lobsters. You have a role in the world already and that’s a more “authentic” identity than any ideology or corporate brand you could identify with instead. Let yourself be defined by the things you do and your relationships with other people and stop worrying about it.

      *I don’t watch YouTube videos of people talking unless they’re also playing video games skillfully and/or humorously.

      • skef says:

        One important piece of context is that he either practices or at least greatly admires psychoanalysis.

        “For the record”, TLP said a few times that most of his work was forensic. E.g. evaluating patients for competency to stand trial, be released from involuntary care, etc.

        • Nabil ad Dajjal says:

          Thanks for the correction. I had read the article where he said that but apparently didn’t absorb it.

          • skef says:

            Given “or at least greatly admires”, it doesn’t even count as a correction …

            I think I’ve read just about all of TLP and I’m not sure I have much grasp on his views of psychoanalysis. It’s unlikely he’s done much past his schooling given his age and degree — psychiatrists mostly hand out drugs on as fast a schedule as possible these days. He definitely does say the odd good thing about it, though. (Second to last paragraph. *sigh* I generally look for some HTML ID to link to more directly but older stuff often lacks them.)

            Edit: Whoops — the total of section IV is more interesting, pertinent, and equivocal than just the mention of the word in that paragraph. Teach me to Ctrl-F.

      • HeirOfDivineThings says:

        Plot twist:

        TLP/HC is Jordan Peterson

    • WashedOut says:

      I get the impression your frustration is due to a list of specific things you are seeking out in TLP/HC writing that you are not finding: concrete proposals, definitions of key terms, references to authorities, etc etc. In this case you are barking up the wrong tree.

      In my opinion the value in reading TLP/HC is like the value in appreciating a good film. It’s a perspective on things, rather than a strict taxonomy or survey. In the case of TLP it’s the perspective of a cynic who provides cautionary tales about the consumption of media, and who looks at behavioural psych through the acting out of archetypes.

      What Urstoff says elsewhere in these replies regarding TLP’s tone being used to create an air of secrecy is a fair description, but I disagree that it leaves the reader empty-handed. It’s just a narrative device used to pull you through the mire of his own bitter, caustic thought process and to me it’s clear he does this in a self-aware way and with good humour.

    • ohwhatisthis? says:

      Yes. Ignore absolutely every essay on narcissism he has ever written. Focus more on his articles about psychiatry. People ignore those, even though *that’s the main point of the blog*. Or was. He has some decent social commentary here and there. But his best work, oddly enough, is not about narcissism, which I really just gloss over.

      • WashedOut says:

        Ignore absolutely every essay on narcissism he has ever written.

        Care to substantiate? Want to put forward any alternative views of your own that you think are more insightful?

        • ohwhatisthis? says:

          They just don’t matter in comparison to his other essays and articles. Here are some heavy hitters.

          Know all those ads on the TV marketing some pretty scary stuff like Seroquel to ordinary depressed people? What if the medical evidence for its helping depression amounted to a glitch in the way people use words, without any actual net benefit in real life? The implications for the next breakthrough depression treatment are of course, huge.

          The heaviest hitters of TLP are on her/his own field of research. Everyone has a social commentary piece that another person disagrees with and that makes some good, insightful points here and there.

  28. theodidactus says:

    Hi guys,
    I’m gonna play a fun game. I won’t have time to look at the responses to this comment for another 72 hours. When I get back, I’m gonna be really stressed out, so I want a fun discussion below to read once this big project I am working on has passed.

    So here’s your topic: What is your solution to the Fermi Paradox?
    https://en.wikipedia.org/wiki/Fermi_paradox

    To my knowledge, I have never seen this discussed on Slate Star Codex before, and I would think the posters here would have a lot of good things to say about it. Assume I’m a guy that’s heard most of the basic answers and even a few weird ones (i.e. there were aliens but they all died, hid themselves to avoid roving murderous nanobot swarms, or now spend all their time playing video games).

    I’ll start with my basic take:
    If there was a one-in-[any reasonable number] chance of a civilization anything like humanity appearing anywhere in the galaxy, we really should have seen something by now. A few civilizations that “got started” even relatively recently, i.e. 200,000 years ago, should be blindingly visible by this point. I don’t buy most of the “they invariably die before they reach exodus” arguments, as they rely on probabilities that I simply don’t buy. (EVERY one, REALLY?) I don’t buy most arguments that rely on a singularity/uploading either (a civilization that did that would be able to trivially announce itself to the universe in a pretty dramatic fashion. They should be reshaping the entire galaxy by now).

    But it seems really weird to rely on the anthropic principle, or to assume that life is a once-in-a-galaxy fluke. Maybe it’s a personal prejudice, but I’ve never seen anything terribly shocking about biogenesis. It just seems like something that would be expected to happen if you shake up basic chemistry long enough.

    I am forced to conclude that alien life exists, but it is STAGGERINGLY different from us. Drives we consider “universal”, like discovery, exploration, communication, and multiplication, are ultra-rare on a galactic scale.* I realize this relies on some weird probabilistic reasoning as well (EVERY exoplanetary species except us? REALLY), but I can’t find a solution that makes more sense.

    I would be especially interested in hearing from people who could refute the assumption that these guys should be easy to see, given a few hundred thousand years of time to kill and a desire to reach out and make their presence known to other beings.

    * Or, more disturbingly: these drives are objectively a waste of time compared to whatever-the-hell a species gets up to post-singularity. If so, fuck that, I don’t want one.

    • baconbits9 says:

      My best guess is that one of the underlying assumptions of the paradox is wrong, probably the assumption that interstellar travel is possible, desirable or achievable for most civilizations. Further I think that the assumption that accomplishing low level interstellar travel (colonizing nearby places) will lead to high level travel (spreading out and colonizing every place) is poor.

      • helloo says:

        That’s generally considered to be the Great Filter hypothesis – the idea that one of the steps is so much harder to achieve that an intergalactic civilization isn’t expected to have formed or to exist.

        My assumption is somewhat different – I think that most or all intelligent lifeforms either A) naturally regulate themselves rather than just keep expanding, or B) don’t necessarily keep “progressing” in power and technology, and become “stable” at some point before Dyson spheres and galactic empires.

    • Urstoff says:

      It seems to me like we don’t remotely have enough evidence about the probability of the development of life (and then intelligence) to make the Fermi Paradox / Drake Equation anything more than a parlor game.

      • kokotajlod@gmail.com says:

        Can you unpack that? What does lack of evidence have to do with anything? The equation still stands, even if we have very little idea what the values of the variables are.

    • John Schilling says:

      Someone has to be first; evidence suggests it is us. Technologically advanced civilizations(*) have the means to make themselves known over cosmic distances by means including but not limited to colonizing the known universe and which will not be missed by anyone who has made even our level of cursory search. Technologically advanced civilizations also have the means to make it impossible to destroy or coerce them without waging war on a cosmically visible scale. And “solutions” which require every single civilization in a lively galaxy to agree with the proponent’s reasoning as to why they should not do any of these highly visible things, are not plausible once the number of civilizations exceeds half a dozen or so.

      More generally, we have no way of estimating several key parameters in the Drake Equation even at order-of-magnitude levels. We’ve got one data point on abiogenesis, and you want to put a p-value on it? The expected number of independently evolved technological civilizations per universe is most likely either >>1 or <<1, and >>1 is ruled out by the above. Therefore observed universes mostly have one independently evolved technological civilization, plus whatever they chose to uplift. The number of unobserved dead universes is of no interest.

      * By which I mean, more advanced than us but in foreseeable ways that require an infinitesimal fraction of a mere galactic lifetime to achieve.

      • Urstoff says:

        Is there much sci-fi where humans are the elder race rather than the plucky upstarts?

        • John Schilling says:

          Sadly, no. Someone needs to change this.

        • Nick says:

          Does Ursula K Le Guin’s Hainish cycle count? We don’t come from Earth, but the ‘aliens’ who populated Earth and other planets are very much like us.

        • arlie says:

          There’s plenty of sci fi with an essentially human universe, and far fewer aliens. But I can’t think of any that has this setup, and then focusses on conflict between humans and aliens, rather than conflict between humans.

          Hmm … one half remembered novel had a planet with both humans and non-humans, getting caught up unwillingly in a larger war that eventually had them invaded – but I can’t remember which species ultimately invaded, and was defeated by using the capabilities of the other species. (In particular, there was a local plant that was dangerously poisonous to only one of the two species, giving off something that was absorbed from the air, and the invaders were tricked into basing themselves in an area with lots of this plant…)

          I *think* the plant was harmful to humans, and while humans came up with the strategy, most of the troops/militia facing the invaders were non-human as a result, but I’m not sure.

        • J Mann says:

          Dinosaurs, by Walter Jon Williams, is a fun example.

        • Evan Þ says:

          The famous Mote in God’s Eye.

        • helloo says:

          There’s a few where the “precursor race” was in fact tomatoes! er humans all along.

          r/hfy is a subreddit that is sort of dedicated to this line of thought (humans not being the underdog).

        • Randy M says:

          In Speaker for the Dead, the humans are the elder race compared to the Piggies, but not the buggers. Given that in that book there’s only one bugger left, I’d say it counts.

        • Bugmaster says:

          The Empire of Man series arguably counts. There are other interstellar civilizations, but they are about equiv-tech with humans; meanwhile, there also exist pre-industrial civilizations, whom humans occasionally uplift.

        • Bugmaster says:

          In the Battletech universe (at least, the Inner Sphere part), humans are an elder race with respect to themselves. Ancient humans possessed technology and knowledge far beyond the modern ones; but, after a major interstellar war, most of that knowledge was lost — but secret caches of technology still exist…

        • Lillian says:

          There’s a bunch of sci-fi in which humans are an elder race and the plucky upstarts. In Stargate the ludicrously advanced Ancients who built the titular stargate network are the original humans who evolved in another galaxy, and modern humans are their descendants. In Halo it turns out that human civilization is actually as old as that of the Forerunners, but they kicked our shit in and quarantined us on Earth because they were jealous that the Precursors – the elder elder race – liked us better than them. In Weber’s Empire from the Ashes series humanity is one of two OG elder races, but we keep getting nearly wiped out by the other OG elder race and having to rebuild from the ruins.

      • helloo says:

        The whole Fermi paradox rests on the fact that it’s been quite a few billion years since life became possible – and that, by his estimations (which he is well-known for), there should have been plenty of previous “attempts”.

        So sure, one can just say that Fermi’s estimate isn’t as good as his reputation suggests, but a lot of the alternatives hold for the case where his estimate is correct and we’ve simply been unable to see signs of such civilizations.

        • John Schilling says:

          We’ve been unable to see signs of “that” civilization even though we know how to make ourselves visible to civilizations like us, are not terribly far from the point where we’d have to actively hide ourselves from civilizations like us, and even though Fermi estimates that “that” is actually an ensemble of many civilizations.

          So there’s an unstated postulate that 100% of all possible civilizations decide to conceal themselves. Or, given that the Fermi paradox / Drake equation is so often used as a justification for SETI funding, to conceal themselves just barely enough that they can be detected by a civilization like us, but only if we look a little bit harder than we have been.

          I’m going to go with “they don’t exist” being a vastly more parsimonious explanation.

          • baconbits9 says:

            We’ve been unable to see signs of “that” civilization even though we know how to make ourselves visible to civilizations like us

            We do?

          • Thomas Jørgensen says:

            My personal theory is simply that technological civilization cannot fit in interstellar luggage.
            The more advanced we get, the larger and more interconnected our economy gets. If that is a hard “law”, then anyone capable of building a starship would have an economy consisting of many billions of people, and the ship would be the product of that vast network… which won’t fit aboard. So you can’t colonize. Surviving on an alien world or in deep space would require a breadth of tools and expertise which simply cannot be launched across the chasms between the stars. Probes? You bet. Research teams? If they have ice in their veins and a pragmatic attitude towards being restored from backup if the ship breaks, sure.

            Colonies? Suicide.

          • Brett says:

            “They don’t exist” is definitely an answer, but if there just aren’t that many of them then it’s not implausible that we ended up in a situation where they all decided against detectable* interstellar expansion for their own various cultural reasons.

            * “Detectable” being the key there. We may not be noticing them because they don’t settle every system they pass through, or they’re tiny (i.e. their interstellar spacecraft are very small to minimize mass and energy requirements).

          • johan_larson says:

            My personal theory is simply that technological civilization cannot fit in interstellar luggage.
            The more advanced we get, the larger and more interconnected our economy gets.

            I’m not quite as pessimistic as you are, though I agree there are formidable challenges to settling a new planet, even if you can get there in the first place. Advanced technology generally requires a large economic base to be viable. But the initial group of settlers can reproduce. And the goal they need to reach isn’t the tech base of the planet that launched them, but a tech base sufficient to sustain life on their new planet.

            What I see is a race against the clock as all the shiny high-tech stuff the colonists brought with them slowly wears out, and the colonists try to grow like mad to create all the hands and minds that can learn to do all the work that needs doing to build a modern economy. They’ll have lots of information available to get there, and nifty gear to help the process along, but they have a long way to go.

            How far they need to get depends on how hostile the planet is. If the planet is like earth, the colonists pretty much can’t fail. Hunter-gatherer tech lets you live in most parts of planet earth. If it’s like the moon, they need to be able to create air-tight habitats, manufactured from scratch, make breathable air, and grow food, all on a world that has none of these things. That would be a much harder challenge.

          • Thomas Jørgensen says:

            No, if it is earth-like, it fails faster – because the biochemistry will not be compatible, and the activity level will be harsh on machinery.

            Note that this is not really pessimism. It is one of the kinder answers to fermi, because nothing stops you from turning your home system into a star-mining/dyson swarm setup and settling in for the long haul, or from communicating with others doing the same thing.

            … and if this is what an apex civilization looks like, there is a simple reason we are not hearing their signals, too – at that level of tech, the obvious way to talk across long distances is a 10 kW radio using the sun’s gravity lens to do beamed broadcasts and reception… which is far too faint a signal for us to pick up. (Nobody would bother with high-gain transmissions, because, well, if someone has radio at all, then waiting 1-300 years means they have gravity lens receivers… and the round-trip of the message is measured in timescales far longer than that.)

          • Nancy Lebovitz says:

            Out there in hypothetical land…..

            It seems reasonable that a whole planetary civilization can’t clone itself into something much smaller like a space colony. However, it seems reasonable that a planetary civilization could be so capable, knowledgeable, and wealthy that it could identify what was needed for a self-sustaining fraction, or at least one with a high probability of being self-sustaining.

          • Thomas Jørgensen says:

            Possibly, but even if theoretically doable, you would agree that any such faction would, by necessity, be much, much poorer than the home system?

            Because it is a smaller economic web, and has less physical infrastructure.

            And growing any colony to the point where it has a remotely comparable level of wealth per capita would take centuries or millennia. That is one doozy of a disincentive to being a colonist, even if you are long-lived enough that you might see that day.

            Not to mention the rather distinct possibility that any new citizens you raise in your off-shot colony will take a good look around at their prospects, take a look at the cultural exports from home, and express their teenage rebellion by immigrating back to the home system via radio.

            So even if you technically could- it would be insane to do so. And colonies founded by lunatics may not have the best success rate.

          • MB says:

            Consider where people were two centuries ago and where they are now. It’s possible that we know nothing yet (including the true answer to Fermi’s paradox).

        • Conrad Honcho says:

          There are some theories about galaxy development which, if true, would mean it has not been quite a few billion years since life was possible. Young, hot, massive stars burn fast and go supernova frequently, sterilizing swathes of the galaxy at regular intervals. These stars may have only just about died off, meaning it’s only in the past few billion years that life could evolve. And it would be a little more likely in our lower-density edge of the galaxy.

      • MrApophenia says:

        What is the basis for the idea that life is extremely easily detectable? I know I’ve read several articles arguing that all the worry about us beaming radio signals into space is probably over nothing, and that we can flip the logic there for SETI – that if there was another Earth in Alpha Centauri giving off the same signals we were, our radio telescopes still wouldn’t detect them due to the fact that the signals just aren’t strong enough.

        • Nancy Lebovitz says:

          There’s the hypothesis that a planet with significant amounts of life will have a chemically unstable atmosphere.

          https://en.wikipedia.org/wiki/Gaia_hypothesis

        • Conrad Honcho says:

          It’s not that other civilizations are easily detectable, but that any civilization that’s had a few million years to spread out would be unavoidably detectable. And yet earth was not colonized by replicator robots (unless that’s what life itself is) and we don’t see Dyson spheres all over the sky.

        • John Schilling says:

          if there was another Earth in Alpha Centauri giving off the same signals we were, our radio telescopes still wouldn’t detect them due to the fact that the signals just aren’t strong enough.

          We’ve been through this here recently, and no. If there were an exact duplicate of Earth anywhere within ~40 light-years, each would know of the other’s existence due to the intense microwave beacons that are ballistic missile early warning radars. And that’s just an unintended side effect coupled to a very weakly funded search-only SETI program. Any civilization that actively wants to communicate will, with approximately our own technology and wealth, be able to make itself known across hundreds of light-years. Toss in another few centuries of technological and economic growth, and only the civilizations which are actively trying to hide will remain hidden.

          • Nancy Lebovitz says:

            Hundreds of light years is pretty close compared to the size of the galaxy.

            A fast google turns up 512 G stars within 100 light years. So maybe 5000 within 300 light years?

          • John Schilling says:

            But hundreds of light-years is the figure for a deliberately-communicative early 21st century Earth. The great fallacy of SETI is that the universe will be populated by civilizations exactly as advanced (*) and prosperous as us. The point of the Fermi paradox is that, if you allow for some of those civilizations to have an infinitesimal technological or economic lead, in cosmic terms, the whole thing blows up. A K-I civilization can make blinking colored dots appear naked-eye visible in our night sky, or our daylight sky if they really want, to trade mathematical proofs with Pythagoras from halfway across the galaxy. At even a tiny fraction of K-I and a desire to talk, Marconi et al would have been picking up their signals.

            * Technologically speaking; there’s often an assumption that ETI will have great wisdom and scientific knowledge, but will still be limited to Arecibo-ish radios for communication.

      • alef says:

        Either first, or among the first few dozen or so (the others having destroyed themselves or stagnated somehow). But what is the chance that you or I – i.e. some intelligent entity posing this question – belong to the very first race?

        Presumably it’s low – unless the first is also the last. That would be consistent with the concern that perhaps it’s staggeringly easy to destroy/sterilize/lobotomize the reachable universe (if so, and if we develop such technology – paperclip AI, for instance – it will ‘soon’ be deployable by any single individual, and we need to stop anyone from going rogue or making a mistake _for ever_).

    • Oleg S. says:

      I don’t think planetary-based life-forms are the most prevalent in the Universe. Landing on a planet is a one-way ticket: once down the gravity well and under the atmosphere, it takes a huge amount of energy, resilience and power output to get back up. Since almost every resource a colony needs is readily available from the asteroid belt, I don’t think most life forms would bother landing and establishing a long-lasting connection with a planet before sailing off to other worlds.

      In that sense the answer to Fermi Paradox may lie in Solar System’s asteroid belt – we didn’t find anyone because we didn’t look there yet.

      • John Schilling says:

        Refresh my memory, how many spacecraft have we sent to the Solar System’s asteroid belt to look for interesting stuff? Not to mention all the telescopes.

        • Oleg S. says:

          Four, if I remember correctly: Rosetta, Dawn, NEAR Shoemaker and Hayabusa. I don’t think we can confidently rule out a (derelict) spaceship factory in our backyard just yet.

          • John Schilling says:

            A derelict spaceship factory implies a factory’s worth of derelict spaceships lying about. You need a fairly contrived scenario for people to cross interstellar distances, set up house in our asteroid belt, and build only a single compact industrial facility for their trouble.

          • Oleg S. says:

            @John Schilling
            I agree with you that we won’t find our asteroid belt teeming with robotic drones busily building interstellar spaceships. But I still think that the chances of finding any remnants of an extrasolar presence are much better in the asteroid belt than here on Earth.

          • Nornagest says:

            The Kuiper belt or the Oort cloud might be a good place to look, depending. There’s more free energy in the asteroids, but volatiles get easier to find outside a gravity well the further out you go. If you need something like helium-3 to power your stuff, that could be a decisive advantage.

          • helloo says:

            There’s currently a serious hunt for “Planet Nine”, a planet that’s 10x the size of Earth, in the Oort Cloud (https://arxiv.org/abs/1601.05438). I think a factory’s worth of spaceships can stay hidden.

      • Conrad Honcho says:

        Since almost every resource a colony needs is readily available from the asteroid belt

        Eh, not exactly. On Earth, minerals are concentrated via geological, hydrological, and biological processes that have played out for billions of years. But asteroids are dead, unchanging worlds. So yes, there’s absolutely trillions of tons of platinum in them thar’ asteroids. There’s an atom of it over there, an atom of it over there, an atom of it over there…

        Asteroid mining is not as simple as “land on asteroid; wail away with pickaxe.” It might well be easier to extract resources from a living world where billions of years of activity have conveniently concentrated them for you.

        • Nornagest says:

          A lot of asteroids — or at least meteorites — seem to be remnants of partially differentiated objects. Iron meteorites (which we think are similar to M-type asteroids, a common class) are almost pure nickel-iron, probably come from planetesimal cores, and are better iron ore than anything we’ve got on Earth. The earliest iron tools were made of those meteorites, cold-forged into shape without refinement.

          • Conrad Honcho says:

            Do you have a source for that? I participated in a Q&A once with a guy who was developing commercial asteroid mining and he agreed with my statement.

          • Nornagest says:

            Everything I said is on Wikipedia. And I’ve seen some of those meteorites, and one meteoric-iron knife, personally. Widmanstätten patterns are really pretty.

            Now, it’s likely that minerals in asteroids aren’t concentrated in the same way as they are on Earth, where a lot of stuff gets built up by hydrological processes. And some asteroid classes might be completely undifferentiated. But I’m fairly confident in what I’ve said about M-type asteroids as far as it goes.

        • bean says:

          Not all asteroids are created equal. Some asteroids (Class M) are made out of what is essentially low-grade steel. Others are clumps of rock, a few of which might have useful amounts of water. So yes, there is concentration, and useful levels of it, too. Plus, the gravity well is really, really deep.

          • baconbits9 says:

            Random question:

            How plausible would it be to just attach boosters of some kind to a useful asteroid and use it as the base for your ship and raw materials for the interstellar travel? How much energy would you need to push a small one out of orbit from the sun and in a direction you want?

          • baconbits9 says:

            Or, for a martian colony what about aiming a valuable asteroid towards it and slamming it into the surface a few years ahead of your colonists?

          • Nornagest says:

            How plausible would it be to just attach boosters of some kind to a useful asteroid and use it as the base for your ship and raw materials for the interstellar travel? How much energy would you need to push a small one out of orbit from the sun and in a direction you want?

            The rocket equation’s going to give you trouble there. Asteroids are heavy and in most cases you won’t be able to use much of one as reaction mass. You’d have to find one that’s mostly volatiles, which are rare in the inner Solar System, and run it through a propulsion system that’s fairly agnostic to the type of reaction mass it’s using. Even then top speed will be limited because most of the stuff you’re using as reaction mass will have high molecular weights.
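
            For a rough sense of scale, here is a minimal Tsiolkovsky sketch in Python; the exhaust velocity and usable mass fraction are assumptions for illustration only:

            import math

            exhaust_velocity = 50_000.0    # m/s, an optimistic electric-propulsion figure
            usable_mass_fraction = 0.10    # fraction of the asteroid thrown out the back as reaction mass

            mass_ratio = 1.0 / (1.0 - usable_mass_fraction)
            delta_v = exhaust_velocity * math.log(mass_ratio)
            print(f"{delta_v / 1000:.1f} km/s")   # ~5.3 km/s; even 1% of c is ~3,000 km/s

            So under those assumptions you get a few km/s, which is fine for moving around the Solar System and hopeless for interstellar trips.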

            Or, for a martian colony what about aiming a valuable asteroid towards it and slamming it into the surface a few years ahead of your colonists?

            Large asteroids don’t survive impact in one piece, or even in conveniently sized chunks — if there’s no atmosphere or it’s too big for the atmosphere to brake it, it impacts with an energy several times higher than you’d get with an equal weight of TNT. You’d end up splattering vaporized asteroid and regolith over a wide area of the Martian surface, mostly as fine dust. This might be what you want if your asteroid (or more likely comet) is made of water, but probably not if your asteroid is made of, say, iron or olivine.

          • bean says:

            How plausible would it be to just attach boosters of some kind to a useful asteroid and use it as the base for your ship and raw materials for the interstellar travel? How much energy would you need to push a small one out of orbit from the sun and in a direction you want?

            Interstellar travel? Unlikely. There’s the old O’Neill scheme for using mass drivers to push asteroids around, but that’s not fast enough for interstellar. For that, you’re going to want to make everything as light as possible, and that means not hauling around a bunch of extra rock.

            Or, for a martian colony what about aiming a valuable asteroid towards it and slamming it into the surface a few years ahead of your colonists?

            Much more plausible, although asteroid redirection makes people nervous, because it’s way too easy to turn into a bombardment weapon. And I’m not sure why you’d do it. Maybe a Class M, but I think Mars is probably past the threshold where you’re going to get something useful out of a reasonable-sized asteroid, instead of sprinkling a fine dust over a large area. (I’ve read about simulations looking at Class M impacts on the Moon, and a lot of it could survive unmelted there. But probably not on Mars.)

          • James C says:

            How plausible would it be to just attach boosters of some kind to a useful asteroid and use it as the base for your ship and raw materials for the interstellar travel? How much energy would you need to push a small one out of orbit from the sun and in a direction you want?

            There’s a wonderful idea for a ship called a beehive colony. This flips the idea around and lands a ship with everything you need on a comet. The comet is then hollowed out by the colonists and the waste ice/rock used as propellant to get the colony up to interstellar speeds. It’s a neat idea, as it sidesteps the rocket equation by building your ship out of fuel, and you’ve got more radiation shielding than you’ll ever need. The thrust-to-weight ratio though… well, it might take you several decades to leave your solar system at all.

        • John Schilling says:

          So yes, there’s absolutely trillions of tons of platinum in them thar’ asteroids. There’s an atom of it over there, an atom of it over there, an atom of it over there…

          Based on meteorite and spectroscopic data, there is a fairly large class of asteroids whose surface consists of a neatly powdered regolith with a magnetically-separable platinum ore at 10-100 ppm. That’s almost an order of magnitude richer than the average platinum ore on Earth, so platinum-group elements are definitely on the list of potential targets. In this case, it’s because of the wrong sort of concentration: Earth’s platinum is almost all concentrated inaccessibly in Earth’s core, so an asteroid that never formed a core and has “merely” the average cosmic abundance for metals, is a rich ore.

          Aside from platinum-group elements, there are asteroids that are rich in a pretty good nickel-steel alloy, asteroids from which we can reasonably extract magnesium, asteroids made of what is essentially lignite coal (different origin, of course), and asteroids from which we can make glass. That’s a useful basket of materials, but it’s not the complete recipe for industrial civilization.

          We haven’t finished exploring the asteroids yet, and there are signs of something interesting going on with Vesta and Ceres. Meanwhile, Mars has had the full range of interesting geological processes in its past and it hasn’t been infested with clever primates looking for placer deposits of shiny or colorful. So there are probably gold-rush level deposits of pretty much all the elements (including gold) waiting to be found.

          This suggests trade, for anyone wanting to build a civilization beyond Earth.

    • The Element of Surprise says:

      My model of a spacefaring civilization is that it would eventually end up sending probes at speeds approaching c to all stars and otherwise useful sources of negentropy to harvest them, destroying all significantly less intelligent life in its wake. The ultimate cause for this is that aggressive expansion is advantageous to factions within a civilization, who would thus outcompete less aggressive factions (see Scott’s Moloch). Proximate reasons could be a (friendly, to its creators) AI trying to maximize its expected lifetime and probability of survival in intergalactic war (e.g. as in Friendship is Optimal, which is a much better read than its title would suggest imho), an unintentional modification / bug (“mutation”) in a Von-Neumann-Probe (“space cancer”?), or good old paperclips. I think this is very likely to happen within centuries in a civilization at our current technological level.

      (First contact with such a civilization would probably not be a spaceship with ambassadors, but galaxies apparently darkening / disappearing near the edge of the observable universe. The volume of darkened space would appear to us to be a prolate spheroid with its major axis pointing our way, and with apparent growth rate exceeding c. We would possibly stop existing not much later, since a faction of the civilization that checks for extraterrestrial life before harvesting would have a competitive disadvantage, but it is also possible that competition within the civilization has not reached this point when they arrive *fingers crossed*.)

      I’d say we haven’t made contact with extraterrestrial intelligence because the timespan between a civilization being visible to us or having the means to communicate, and a civilization aggressively colonizing all of space, is very short compared to evolutionary timescales. We haven’t met any non-primate intelligent civilization on earth: if one had emerged before homo, it would probably have been around for millions of years already, and it would have so thoroughly reshaped earth that we would never have naturally evolved (and if humans had been bred artificially by members of this civilization, we would never have needed to “search” for them). The same is true for space: If intelligent life existed in our past light-cone, they would probably have trampled our ancestors when they were single-celled, so we are probably the first.

      • Conrad Honcho says:

        We haven’t met any non-primate intelligent civilization on earth: if one had emerged before homo, it would probably have been around for millions of years already, and it would have so thoroughly reshaped earth that we would never have naturally evolved (and if humans had been bred artificially by members of this civilization, we would never have needed to “search” for them).

        Maybe, maybe not.

        When it comes to direct evidence of an industrial civilization — things like cities, factories, and roads — the geologic record doesn’t go back past what’s called the Quaternary period 2.6 million years ago. For example, the oldest large-scale stretch of ancient surface lies in the Negev Desert. It’s “just” 1.8 million years old — older surfaces are mostly visible in cross section via something like a cliff face or rock cuts. Go back much farther than the Quaternary and everything has been turned over and crushed to dust.

        And, if we’re going back this far, we’re not talking about human civilizations anymore. Homo sapiens didn’t make their appearance on the planet until just 300,000 years or so ago. […] Given that all direct evidence would be long gone after many millions of years, what kinds of evidence might then still exist? The best way to answer this question is to figure out what evidence we’d leave behind if human civilization collapsed at its current stage of development.

      • The Element of Surprise says:

        To do some calculation (each figure within a few orders of magnitude; thinking out loud here):
        * Stars per galaxy: 1e11
        * Proportion of stars with planets that support life: 1e-1 (I assume that dust around a star will collapse to create planets eventually. I would also assume that life is possible in conditions that appear uninhabitable to us.)
        * Proportion of life supporting planets which spawn life: 1e-1 (Our earth spent 1/4 of its existence in a habitable zone without actually having life, so it seems that, once life is possible, it appears with high probability)
        * Proportion of life containing planets that contains intelligent life: 1e-4 (hundreds of thousands of years out of life on earth’s billion year history)
        * Proportion of time that an intelligent lifeform spends in a state of being detectable with considerable means without being aggressive (rapid change in atmosphere chemistry (large impact on ecosystem), space probes, EM communication): 1e-3 (hundreds of years)
        * Proportion of time that a civilization in that stage spends in a state of being easily detectable (with our current technology, e.g. from darkening their sun with Dyson swarms) before rapid and aggressive space colonization: 1e-1 (tens of years?)

        So from this naive calculation I would expect there to be tens of civilizations per galaxy on our past light cone that we would be able to see, but that haven’t destroyed the universe yet. The problem is obviously that someone being able to make this observation must live in atypical conditions, as the universe is still suspiciously devoid of paperclips (or computers running simulations of alien orgies).
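
        Multiplying those factors out explicitly (every input is one of the order-of-magnitude guesses above, not a measurement):

        stars_per_galaxy   = 1e11
        f_habitable        = 1e-1
        f_life             = 1e-1
        f_intelligent      = 1e-4
        f_detectable_quiet = 1e-3   # detectable with considerable means, not yet aggressive
        f_easy             = 1e-1   # of that, easily detectable with current technology

        detectable = stars_per_galaxy * f_habitable * f_life * f_intelligent * f_detectable_quiet
        print(detectable)            # ~1e2: hundreds per galaxy, detectable with considerable means
        print(detectable * f_easy)   # ~1e1: tens per galaxy, easily detectable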

        • The Element of Surprise says:

          Looking at it from the other direction: We exist, so the universe has not been colonized, so I am pretty sure there is not much intelligent life in the interior of our past light cone minus, like, a billion years or so. There are about 1e10 galaxies with 1e11 stars each existing for 1e9 years on average (volume of a 4D hypercone is 1/20 of a hypercylinder? Expansion of our universe makes this a bit complicated) in that spacetime volume, so the rate of emergence of intelligent life seems to be 1e-30 per star per year empirically. (Might be 1e-28 per habitable planet per year, if my earlier assumptions are true). (I don’t think the fact that sometimes stars burn up and swallow a civilization in the making makes a big difference).

          The rate-limiting step seems to be either the appearance of life on a habitable planet (took us billions of years), or the emergence of multicellular life from that (billions more years), but since this argument relies on the assumption that at least one of these steps was unusually quick for us, one of them probably has a typical rate orders of magnitude lower than the other, while the other step(s) all took a typical timespan. E.g. if, from a large group of people, everyone plays the lottery until he wins, and then throws a die until he gets a 1 (each of these actions taking one unit of time), you would expect the first guy to finish to take unusually few tries for the lottery, but have average luck on the die. If the “multicellular” step usually takes much longer, we expect the universe to be full of “bacteria” (or other kinds of lifeforms that miss something crucial for intelligence?); if the “emergence of life” step usually takes longer, we expect the universe to be mostly sterile.
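
          The lottery-and-die point is easy to check with a toy simulation; the parameters below are assumptions chosen only so that the “slow” step dwarfs the “fast” one, as in the analogy:

          import numpy as np

          rng = np.random.default_rng(0)
          n_trials, n_players = 2000, 1000
          p_slow, p_fast = 1e-6, 1 / 6                  # lottery win vs. rolling a 1

          winner_slow, winner_fast = [], []
          for _ in range(n_trials):
              slow = rng.geometric(p_slow, n_players)   # tries to win the lottery
              fast = rng.geometric(p_fast, n_players)   # tries to roll a 1
              w = np.argmin(slow + fast)                # first player to finish both steps
              winner_slow.append(slow[w])
              winner_fast.append(fast[w])

          print(np.mean(winner_slow))   # ~1,000 tries vs. a typical 1,000,000: wildly lucky
          print(np.mean(winner_fast))   # ~6 tries, same as anyone else: ordinary luck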

          If only a fraction of civilizations end up colonizing space, this would change these estimates by a factor accordingly. Interestingly, the number of observable civilizations we expect to find would depend on this fraction: E.g. if 99 out of 100 civilizations make Dyson spheres but then fall into stasis and do not colonize space, we would expect to be able to observe hundreds (but not thousands) of civilizations in all of our past lightcone: One per 1e8 galaxies. We might get technology capable of observing this in our lifetime.

          Once we colonize space (i.e. fill the interior of our future lightcone), we would expect to meet tens to hundreds of other colonizing civilizations, since the universe we can observe right now is younger than the universe that we will be able to physically reach by about that factor. Who “wins” these clashes could depend on the resources each colonizing force has dedicated to researching space warfare (which probably will use up the energy of many galaxies, and probably will look like nothing we could imagine) and how old each civilization is by that point, unless physics, or the multi-agent game-theoretical setup at that point, somehow permits a stable stalemate of unequal forces.

          The probability of a civilization being “colonized” (wiped out) in a given timespan before it itself starts colonizing is that timespan, divided by the proportion of civilizations that become colonizers, divided by the age of the universe. So we will probably not see this happen during a human lifetime 🙂

    • MrApophenia says:

      My personal pet theory is that by the time your technology advances enough to travel between stars, you no longer need to – that is, your home star produces so much more energy than you need that you can support an almost arbitrarily large civilization without needing to go to another one, at least for a few billion years.

      • Jake says:

        Also, once you have the technology that allows interstellar travel (assuming no FTL), you can pretty much live indefinitely in space, so why would anyone travel light years away to a different star, when you’ve got near infinite space close to your own star.

        My favorite ridiculous theory is that dark matter is just alien spaceships accelerating as close as they can to the speed of light in order to maximize their time dilation relative to each other, to get better returns on their investment portfolios.

        • albatross11 says:

          Jake:

          You still need materials and energy. I think those are concentrated in solar systems, albeit not necessarily in Earth-like planets.

      • Wrong Species says:

        But then you would have to account for population increases. As time passes, more people will use the Sun’s energy and (more speculatively) they will use the energy available to them more intensely. In the span of billions of years, they will either need to spread out or enforce population restrictions.

        • MrApophenia says:

          Would they, though? Stars produce a *really massive* amount of energy. To take the Earth as an example, the figure you hear quoted a lot is that more solar energy hits the Earth every hour than is used by the entirety of human civilization in a year.

          And that’s just the extraordinarily tiny percentage of the sun’s energy that actually reaches the Earth – most is just dispersing into space. If we had the ability to capture any kind of significant fraction of that energy and use it, you could support a pretty ridiculous population with basically limitless energy for each of them.

          • Wrong Species says:

            Think about the time scales we are talking about here and how exponential growth works. You have to deal with Moloch eventually.

          • MrApophenia says:

            Population growth isn’t exponential, though. It looked like it for a while, but current estimates are that due to declining growth rates, it will take centuries before the next doubling of the population.

            But let’s pretend that isn’t true, and that given access to seemingly limitless energy, we start doubling the population every generation. Let us further assume that we start with everyone on Earth consuming as much energy per capita as an American does now, and then double that every generation too. This seems wildly overestimating it, but sure.

            After 200 years of both of those things doubling every 20 years, with 71.6 billion people each using 41 gigawatts of power, they will have fully exhausted the amount of solar energy that hits the Earth. We wouldn’t be using the full energy of the sun until another 180 years or so after that, with 9.2 trillion people, each individually using 10.5 terawatts of power. Actually, even that would be fine; it would be the doubling after that which would use more energy than the sun is putting out. (By comparison, all of human civilization currently uses about 18 terawatts.)

            So yes, if both population and energy use doubled every 20 years, we would quickly exhaust the energy the sun provides – but when you actually work out what that kind of doubling would mean, it seems really unlikely to actually happen. Ie, trillions of people seems plausible at some distant point for a very advanced society, but each of them individually using the full energy of the modern Earth seems questionable, unless they are using technologies so far beyond our own that worrying about energy production is probably also an irrelevant concern.
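
            A slightly different cut at the same arithmetic in Python, starting from the ~18 terawatt figure above and doubling only total consumption every 20 years (so this is not the same scenario as the per-capita doubling, and the solar numbers are round approximations):

            import math

            current_use = 18e12          # W, rough current world consumption
            sunlight_on_earth = 1.7e17   # W, total solar power intercepted by Earth
            solar_output = 3.8e26        # W, total luminosity of the Sun

            def years_to_reach(target, start=current_use, doubling_years=20):
                return doubling_years * math.log2(target / start)

            print(round(years_to_reach(sunlight_on_earth)))   # ~264 years to all sunlight hitting Earth
            print(round(years_to_reach(solar_output)))        # ~885 years to the Sun's full output

            Either way, the limits are centuries out rather than decades, and they only bind if the doubling actually continues that long.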

          • Wrong Species says:

            I meant exponential growth in regards to energy consumption, not population growth. I think the former is more plausible than the latter. As long as either population or energy use per person is increasing, on a long enough timescale it necessarily follows that solar energy will eventually not be enough to support everyone. If you want to avoid that, then both energy use per person and population will have to stop increasing, and I don’t see that happening absent some kind of catastrophe that causes humanity to be permanently taken down a notch. I’m not even sure that exponential growth is necessary, as a billion years is an incredibly long time.

          • Nancy Lebovitz says:

            At some point, I assume energy dissipation will be a big problem.

      • Kestrellius says:

        The issue with that is this: There is a limited amount of matter in the universe. Once it’s gone, there’s no way to get it back — not as far as we know, anyway. And every second that passes, every star in the universe is irretrievably wasting it.

        Expansion to, and Dysoning of, every single star you can possibly reach, as soon as you possibly can, is an absolute necessity in order to stockpile as much negentropy as possible. Every moment a star goes unharvested permanently destroys irreplaceable potential for life.

    • sty_silver says:

      I just wrote out my theory about this in a LessWrong post a while ago. It’s here. The crux is that because of Many Worlds, the survivorship bias applies.

    • MrApophenia says:

      Oh, and alternate answer to the Fermi Paradox:

      The aliens actually have been detected, the government literally just released videos of UFOs that they actually think are probably aliens, it was on the front page of the NY Times website and nobody really cared.

    • beleester says:

      The Drake Equation’s biggest weakness is that we have no information on two key terms – how often does life evolve on a habitable planet, and how often does life evolve intelligence? Currently, we’ve only seen one example of a life-bearing planet, which isn’t much to go on.

      We’re finally getting solid data on how common exoplanets are, but AFAIK the best we can do for finding habitable planets is “planets that are the right distance to not instantly cook or freeze anything that lives on it,” which is… not really that helpful for finding aliens. So “Maybe intelligent life is just really rare” is still a plausible answer to the Drake Equation (for now).

      • Oleg S. says:

        From the dataset of 1, we can estimate that on average it took 200 million years for life to evolve, some 1.8 billion years for life to become detectable from other stars (by producing oxygen in sufficient amounts), and another 2.45 billion years for intelligence to develop.

        So, from the dataset of 1, and taking a very conservative 100 years as the average lifetime of a technological civilization, I would assume that there is life on almost every habitable planet, that biosignatures should be detectable on 50% of them, and that the probability of finding a technological civilization on a habitable planet is about 0.000002%.
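
        (Spelling out that last number, on my reading that the 100-year lifetime is divided by a roughly 4.5-billion-year window – the exact denominator is my assumption, not something stated above:)

          lifetime_years = 100    # assumed lifetime of a technological civilization
          window_years = 4.5e9    # rough age of the Earth / habitability window
          print(lifetime_years / window_years * 100)   # ~2e-06, i.e. about 0.000002%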

      • Douglas Knight says:

        Habitable bodies are common. We don’t need fancy 21C telescopes to find them. There are 10 habitable bodies in the solar system. That is, places that could support methane-metabolizing “extremophiles.” It is we who are the extremophiles, not them. They may well outweigh us even on Earth. (But that doesn’t address whether life exists on those 10 bodies.)

        • Oleg S. says:

          I count 3 definitely habitable worlds: Earth, Enceladus, Europa. I’d be very surprised if we don’t find life on all of them. Of the other worlds, Titan is too cold: just by the Arrhenius law (the reaction rate roughly doubles for every 10-degree-Celsius increase in temperature) I would expect all processes there to be 200,000 times slower, and life not to emerge until the Sun becomes a red giant. Mars is too cold and too dry now; I would also not call it 100% habitable.
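
          (For anyone who wants to check that rule of thumb: a quick sketch with reference temperatures I am assuming myself, roughly 15 °C for Earth and -179 °C for Titan. The factor comes out in the hundreds of thousands; the exact value depends on which temperatures you pick.)

            t_earth_c = 15.0       # assumed typical Earth surface temperature
            t_titan_c = -179.0     # assumed typical Titan surface temperature
            slowdown = 2 ** ((t_earth_c - t_titan_c) / 10)   # rate halves per 10 C drop
            print(round(slowdown))  # ~700,000x slower with these temperatures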

          What other 5 worlds did you have in mind?

          • Bugmaster says:

            I would call Europa and especially Enceladus “arguably habitable” at best, and I would be very surprised if we found life of any kind on either of them. While these worlds do have water ice on the surface, and probably even liquid water underneath, on average they are far too cold and too dark for life to be a near certainty, as you seem to be implying.

          • Oleg S. says:

            @Bugmaster
            The hope for Enceladus and Europa is that tidal friction produces a lot of energy, which is liberated in hydrothermal vents and may be harvested by native life forms. I guess that the oceans of Enceladus and Europa may be a paradise for life forms from Earth’s deeps (after a little genetic tinkering to adapt to the different chemistry and pressure).

            The origin of life, however, presents a problem if UV is critical for it. But 1) I don’t think there is a consensus about where life first originated on Earth either – maybe UV was not that necessary. And 2) probably there was some early bombardment of Europa and Enceladus too, so they got their share of radiation.

          • Bugmaster says:

            @Oleg. S:

            I guess that the oceans of Enceladus and Europa may be a paradise for life forms from Earth’s deeps…

            What are they going to eat?

          • Douglas Knight says:

            I’m not talking about surface life. Extremophiles on Earth don’t live on the surface, so the surface temperature and surface water don’t matter.
            The number 10 comes from Thomas Gold (1992), but he doesn’t spell out a list – presumably the biggest rocky bodies, going down to at least Earth’s Moon.

          • Oleg S. says:

            @Bugmaster

            What are they going to eat?

            Mainly hydrogen sulfide or methane released by the vents, I guess, but there may be other options as well.

          • Bugmaster says:

            @Oleg S.:
            Ok, I will grant you that extremophiles may survive on Europa/Enceladus, but I still think their presence there is unlikely. In addition, I would still call these worlds “arguably habitable”, at best. We humans aren’t extremophiles, and neither are any species in our food chain, AFAIK.

    • Bobobob says:

      One solution to the Fermi Paradox, which I don’t often see referenced, is that life is vanishingly rare on the galactic scale but common in the universe. That is, there could be only two or three intelligent civilizations in any given galaxy at any given time, but since there are upwards of a hundred billion galaxies…

      Not even the most advanced civilization would venture beyond its own galaxy, and it would be virtually impossible to detect radio (or other) signals from so far away. So we’re not alone, technically, but for all practical purposes we’re never going to make contact with another civilization.

    • veeloxtrox says:

      I am going to throw out a different solution that I haven’t seen discussed that addresses your question:

      What is your solution to the Fermi Paradox?

      God made Earth with life on it and so the base assumption in the Fermi Paradox is wrong. The probability of life happening randomly is 0 and the reason we don’t find any other life is that it doesn’t exist.

      I am curious if anyone here has thought about it as a possibility. I know that I would have to seriously reconsider whether God exists if we found aliens.

      • Nick says:

        Why wouldn’t God make aliens?

        • Anonymous says:

          Indeed. He made the Amalekites as well! And you don’t get much lower than the Amalekites.

      • Tim van Beek says:

        I know that I would have to seriously reconsider whether God exists if we found aliens.

        Why? That is, according to which denomination? It would not be a problem for e.g. the Catholic Church. I think you will easily find interviews where both Pope Francis himself and various Vatican astronomers say that the Church is basically agnostic (snicker) regarding this question.

        I am curious if anyone here has thought about it as a possibility?

        Sure. The Fermi paradox, however, is posed and supposed to be answered within the context and epistemological framework of the natural sciences. Switching that means dodging the question or, in case you are a writer, changing the genre.

        • sty_silver says:

          I don’t agree with that. An answer to the Fermi Paradox merely needs to explain why we see no aliens. “God only created humans” is a valid explanation. There is no additional clause that forbids this.

          • Tim van Beek says:

            Well, it is a recontextualisation of the original Fermi paradox. Contexts are implicit, so, yes, there is no explicit clause that “forbids” it. Actually, nobody forbids anybody from discussing this answer; my intention was simply to point out why people who are interested in the Fermi paradox usually don’t accept this kind of answer, or even find it remotely worthy of discussion – including, paradoxically, the pope.

        • Aron Wall says:

          The Fermi paradox is presumably “posed and supposed to be answered within the context and epistemological framework of” the actual universe. For those who are religious, their beliefs about God cannot be reasonably compartmentalized from the discussion.

          In any case, it seems unsporting to exclude religious answers from a question that seems specifically designed to solicit extreme speculation.

          • albatross11 says:

            Right. But if the answer to the Fermi Paradox is “Because God only made one species–us,” then the obvious next question is “Okay, but why didn’t He make more species?”

          • Aron Wall says:

            I personally don’t see any particularly strong reason why God wouldn’t make more alien species (even if abiogenesis requires a miracle). I’m merely defending people’s right to give whatever answer(s) to the question they actually believe:

            You are a philosopher, Thrasymachus, I replied, and well know that if you ask a person what numbers make up twelve, taking care to prohibit him whom you ask from answering twice six, or three times four, or six times two, or four times three, ‘for this sort of nonsense will not do for me,’ –then obviously, that is your way of putting the question, no one can answer you. But suppose that he were to retort, ‘Thrasymachus, what do you mean? If one of these numbers which you interdict be the true answer to the question, am I falsely to say some other number which is not the right one? –is that your meaning?’

            Plato’s Republic, Book I

      • zoozoc says:

        I think it is better to phrase the solution to the Fermi paradox this way: it is impossible for life to appear from non-life. The physical laws of our universe do not allow for it. But God created life in such a way that it can continue once it has been initially created.

        So unless God specifically created life elsewhere, there will never be life found anywhere but here.

    • Brett says:

      There are all the obvious filters (my money is on it being extremely rare for intelligent life to make the jump to tool-using, recursive tool-creating, large-scale civilizations), but I also think we just haven’t been looking long enough. Alien civilizations may not broadcast their presence to others, and given how valuable energy and raw materials are, they may literally be tiny compared to us (i.e. their spaceships are tiny to save energy and mass on travel, they dwell in compact spaces as programs or reduced-sized versions of themselves, etc.). They may not settle every star system they pass through, or even most of them.

      Just think about it. If a small alien research craft massing two metric tons had flown around our solar system taking pictures and sending information back to where it came from as recently as 500 years ago, we’d be none the wiser as to their existence.

    • Kestrellius says:

      I want to outline some of the concepts involved with the Paradox before getting into my own ideas, in case not everybody here is familiar.

      Disclaimer (and recommendation): most of my information on this topic comes from the inimitable Isaac Arthur, who has made numerous videos on the topic. I cannot recommend his channel highly enough. However, you may wish to start with one of his more recent videos rather than the one I linked, as Isaac has a speech impediment, and has gotten better at speaking clearly over time.

      So. I used to pretty much disregard the Fermi Paradox on the grounds that we probably wouldn’t know what to look for, and there’s no reason why aliens would contact us. I changed my mind upon learning of the Dyson Dilemma, which follows from the idea that a spacefaring civilization could build swarms of satellites around stars in order to collect all the energy they produce.

      It is extremely likely that nearly any civilization with the capacity to do so would immediately construct as many Dyson Swarms as possible, in order to harvest the energy of as many stars as possible. It’s difficult to imagine that evolution would produce a spacefaring society that did not have the desire to reproduce, and this desire would tend to result in a desire to expand their civilization.

      There are various reasons why a given civilization might not do this, but in order to solve the Fermi Paradox, every civilization — at least every civilization within a few billion light-years — must have decided to stay at home. Therefore, certain possibilities like self-extinction by nuclear war or rogue AI are unlikely to be adequate explanations, as we would expect to see at least occasional exceptions — assuming that the universe frequently produces technological civilizations, that is.

      Also, bear in mind that expansion into space and construction of Dyson swarms is not only a matter of increasing the number of organisms (or amount of consciousness) that exists at the current time. It is a matter of increasing the amount of consciousness that will ever exist. Most of the matter in the universe (aside from dark matter) resides in stars, which, as long as they are left alone, are constantly converting it into heat and light — which are then dumped into empty space and lost forever. If a species wants to exist for as long as possible — which it should, given that valuing self-preservation is conducive to evolutionary success — then it must halt as much of this waste as it can, as quickly as it can. The construction of Dyson swarms around every accessible star is necessary in order to stave off entropy for as long as possible.

      Of course, the rate of expansion (and likewise the ability to observe the colonization from Earth) is limited by lightspeed, and various other practical concerns — but it would still only take something on the order of millions of years for a civilization, having mastered space travel, to colonize and Dyson-swarm multiple galaxies. So far as we know, our universe has existed in a life-bearing state for several billion.

      Therefore: the question is not why aliens have not contacted us. There are all sorts of possible answers to that question. The question is why there are still stars in the sky. And, for that matter, why our own star has not been shrouded by satellites, and our planet converted into them.

      ——————————————

      So. With that out of the way, we come to the matter of solutions. The simplest and most plausible one, and the one that Arthur espouses, is that life, especially technologically advanced life, just isn’t very common — that there is no Great Filter per se, but rather a combination of numerous obstacles that prevent the generation of large numbers of spacefaring species. For example, according to this solution, we are most likely the most advanced species in our galaxy.

      There’s another solution that I’ve come up with, though it seems optimistic to the point of absurdity. If there’s some kind of simple technology which allows for the creation of large amounts of energy in violation of thermodynamics, that would eliminate the need for a desperate scramble to colonize space — the eventual death of the universe is no longer a concern. If the device is simple, and follows directly from various other technologies and methods that tend to be required for space colonization, then perhaps nearly all advanced civilizations discover it, and most of those that don’t discover it come into contact with those that have discovered it before having built very many Dyson swarms.

      So…one reading of the data would indicate that, far from being about to destroy ourselves, we’re potentially on the cusp of making consciousness immortal, and permanently defeating entropy (which is the ultimate cause of all evil in the universe).

      We can only hope.

      • Alphonse says:

        I would be very happy if the “solution” to the Fermi Paradox is that we will discover how to defeat the Second Law of Thermodynamics (whether by reversing entropy or creating new, useful energy ex nihilo). That seems like a very happy world.

        But I can’t see how it works, even taken on its own terms.

        Limitless energy is helpful not only for running unending simulations of heaven for every member of your species, but also for things like waging war. It only takes one species (or faction thereof) who don’t want to slip away into a blissful, eternal simulation to wreck this setup. Once someone attacks you, you either die or fight back.

        That wouldn’t require the war-like civilization to Dyson sphere all the planets (they already have infinite useful energy), so maybe we wouldn’t detect them (although it seems unlikely we wouldn’t see the wars). But it’s still a bit odd that they aren’t already here.

        Even if we assume that everyone is peaceful, exponential growth is a big deal. A quick Google search suggests an upper bound of 10^82 atoms in the universe. It takes 270 iterations of doubling to go from 1 to 10^82. If an endless energy machine produces enough energy to enable you to construct another fully functional such machine while maintaining the current machine even only once every million years, you could go from one such machine to as many such machines as there are atoms in the universe in less than a third of a billion years — certainly not a fast pace, but comfortably within the history of complex life on Earth.
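
        (A quick check of that arithmetic, under the same assumptions as above – 10^82 atoms and one self-copy per machine per million years:)

          import math
          atoms = 1e82
          doublings = math.ceil(math.log2(atoms))   # ~273 doublings from 1 to 10^82
          years = doublings * 1_000_000             # one doubling every million years
          print(doublings, years / 1e9)             # ~273 doublings, ~0.27 billion years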

        So you still have to postulate that every species that discovers the infinite energy machine elects to stop growing at some point, because otherwise they could eventually tile the universe with such machines (or at least as many of them as can fit without collapsing everything into one gigantic black hole), which also provides an obvious motivation for such civilizations to start expending some of that limitless energy on fighting each other for space (in a way we could presumably detect).

        If the machine works by reversing entropy, rather than acquiring energy, then it seems like species would still want to gather as much energy as possible, since they could use that energy literally forever. The motive there is the same as for the standard Dyson builders, but on steroids.

        • Jaskologist says:

          If you can pull energy from nothing, doesn’t that also mean you can create mass/atoms from nothing?

          • Alphonse says:

            Yes, which is why any machine which defeats the Second Law of Thermodynamics by creating energy in excess of that needed to operate and maintain it also necessarily provides a path toward self-replicating those machines. Hence why any such device could self-replicate until the universe was tiled in them (which we would presumably notice).

            (I take it that’s your point here, but your comment reads like it is intended as a correction — although I may be misreading your tone — and I think my comment assumed the point you’re making.)

      • Nancy Lebovitz says:

        Any thoughts about specific conditions which make technological takeoff possible?

        Fossil fuels seem pretty likely on a planet which has had life for a long time. Fossil fuels which are accessible with pre-fossil fuel tech seem less obvious.

        Perhaps cheap fibrous plants for making paper are needed, and they don’t seem inevitable to me.

        On the other hand, maybe there are preconditions which make technological takeoff much easier, and we haven’t even imagined them.

    • Thegnskald says:

      Suppose time doesn’t “flow”, but rather that life is a physical pattern that transforms with respect to time. Suppose furthermore that there is more than one timelike dimension – even two would be sufficient, giving rise to an infinity of potential angles.

      With these assumptions, the universe could be full of life, which all exist along timelike lines that exist at odd angles to our own. The intersections would certainly be interesting.

    • Jaskologist says:

      What if we tackled a simpler toy problem instead: How likely is it that we’d be the first intelligent species on the planet? Life has been here for a really long time. It sure seems like something else should have evolved intelligence before now, and yet we are the first.

      • dndnrsn says:

        Blatant Elder Thing erasure.

      • Oleg S. says:

        How likely is it that we’d be the first intelligent species on the planet?

        Maybe we are not the first – it’s difficult to know for sure. See this article and its summary.

        There is this enigmatic Paleocene–Eocene Thermal Maximum 55.5 million years ago: a massive carbon injection into the atmosphere and an associated temperature rise of 5-8 degrees Celsius. It happened very fast (just a couple thousand years) and looks just like our own civilization would look in 55 million years if we continued industrial production on its present scale.

        • Alphonse says:

          For clarity, the second article you linked to says that “PETM’s isotope spikes rise and fall over a few hundred thousand years,” which is obviously quite a bit slower than the rate of change currently occurring (although perhaps not by that much if that covers multiple instances of upward and downward spikes in that several hundred thousand year period?).

          To my understanding at least, it seems unlikely that a prior civilization reached a similar level of industrialization as ours since the coal and oil deposits we rely on are extremely old…although maybe that’s not such a great rationale, since I’m seeing sources indicating that major oil deposits may be less than 100 million years old. That doesn’t fit well with a human-equivalent civilization 55 million years ago, but it certainly wouldn’t be incompatible with one a few hundred million years ago (I’m sure a serious evaluation would be much more complex than the five minutes I spent on Google, but the age of our hydrocarbon deposits was less of an issue than I remembered).

          It’s a fascinating issue to consider.

    • Alphonse says:

      One possibility that I’m curious to hear others’ thoughts on: perhaps exceedingly few species are able to take advantage of intelligence to become industrial/technological civilizations?

      As an illustration, it seems pretty likely that even if the average dolphin had an IQ of 120, they would face insurmountable odds in creating a civilization in the way humans do. Perhaps it’s my lack of imagination, but I have difficulty picturing how dolphins set up the equivalent of a human farm, much less the equivalent of a human factory. Similarly, I doubt that a bunch of Tyrannosaurus rexes would be able to achieve technological sophistication even if they had genius-level IQs.

      The prevalence of other primates who could clearly benefit from higher levels of intelligence makes me doubt this theory, but perhaps all primates are atypical in this way and Homo sapiens just happened to be at the leading edge. Were there dinosaurs that would have been able to realistically leverage a human level of intelligence into creating a society? Are there other animals today? If nothing else, it seems plausible that the set of animals who could leverage intelligence effectively to create a technological society would be predominantly (and perhaps almost entirely) restricted to land animals.

      (This also might explain why even if life is prevalent in many circumstances, that only life in highly specific environments could develop enough to go interstellar. Maybe there are exotic life forms hanging out in the upper layers of gas giants in other solar systems, but if they can’t ever industrialize, they won’t expand.)

      • theodidactus says:

        Hello it’s me, back much earlier than I’d planned, and BOY are these comments what I was hoping for.

        It’s funny that the last comment most closely matches my own take. I write sci-fi, and my handwavy explanation (more for flavor than as an actual thing I put tons of stock in, though it does seem like the best answer to the paradox) is that humans are really unique in that they have both the DESIRE to do industrious things like build factories and the capability to do so.

        There’s an instinct, something beyond the desire to reproduce, eat well, and obtain shelter, that goes into a lot of really critical things humans do. Invention, and the systematization of invention, both seem like things that require a REALLY SPECIFIC brain quirk on top of the really specific chemical quirks that give rise to life at all.

        As my favorite example: the ancient invention known as the shadoof/shaduf: https://www.youtube.com/watch?v=bZ9gJAWvHxo

        I’ve always been fascinated by the fact that this tool, which is immensely labor-saving, probably took a staggering amount of time and effort to invent in the first place. Imagine how weird you have to be to actually sit down and build something like this, on a sunny hot day, rather than, I dunno, haul water like the rest of your friends.

        • Nancy Lebovitz says:

          That’s an interesting theory, but we’d have a hard time observing inventiveness in species that don’t have hands.

          As for the Fermi paradox, it may be that intelligence + good manipulators is a big enough barrier. That combination has only happened once on earth.

          It’s possible that intelligence and hands were driving each other.

          I’ve seen a theory that human intelligence is a matter of using the high capacity we need for accurate throwing for other things when we aren’t throwing. Is this at all plausible?

  29. J Mann says:

    Is anyone interested in an adversarial collaboration on allied involvement in the Libyan civil war? I post on the A C thread, but I’d be interested in working on something along the lines of:

    The allied intervention in the Libyan civil war was based on unjustified assertions of an impending massacre in Benghazi, and was illegal under international law. Whether through negligence or intention, allied leaders misled the public about the imminence of the threat and whether the mission was aimed at toppling Qaddafi and winning the civil war for the rebels.

    Note – by “allied,” I’m intending to focus mostly on Britain and the US, because I don’t speak French, but I’m open to expanding the scope.

    • Wrong Species says:

      I’m not at a point where I could dedicate myself to this but I do have a request if someone takes your offer: leave off the bit about international law. No one cares about it outside of using it as a rhetorical weapon to hit the other side when they already disagree with them. It’s a distraction from the other issues.

      • J Mann says:

        Noted. (It’s an area I don’t actually know anything about, so it’s easy to leave off). Would US or British law be helpful?

        • Wrong Species says:

          I don’t think people care about Libya or whatever because of the law. Laws can be changed. Let the lawyers hash out that debate.

      • albatross11 says:

        I’m also too busy to do a collaboration. But the biggest problem with the Libyan intervention, to my mind, was the message it sent w.r.t. nonproliferation.

        After Libya handed over their WMD program and did their best to stop being enemies with the US, they were establishing a kind of pattern: instead of doing the Saddam thing (let your WMD program rust because it’s too expensive, but bluff everyone into thinking you have them) or the Kim thing (develop nukes so the rest of the world won’t screw with you), a country could make peace with the US and its allies, get rid of its WMDs, and that could end well for the leaders of that country.

        If we’d stood aside for the civil war in Libya, that message would have stood. But when we intervened to help topple their government, it sent the message that getting rid of the WMDs was a terrible mistake that led Gadaffi to a really awful death.

        I have to guess that a lot of people in Tehran, Damascus, Pyongyang, etc., were watching how that turned out, and presumably they’re very clear on what lesson they should take from it. I think we’ll be paying the price for that screw-up for decades to come.

        • J Mann says:

          I agree that was a serious consequence (along with the current political situation in Libya) but IMHO, first we have to answer the question of whether we had justifiable reason to believe that Qadafi was about to initiate Srebrenica-style mass murders or worse.

          (Or I guess my theoretical counterpart could argue that we didn’t justify war on that basis).

          If Qadafi was engaged in something that other despots could avoid, then you could still say “Well, as long as you give up your WMD and refrain from planning to engage in horrific mass killings, we’ll leave you alone.” My current position, FWIW, is that he wasn’t doing anything that similar despots wouldn’t do to try to maintain control in a civil war, and it’s not even clear he was doing anything worse than the rebels.

          But I still see people routinely arguing that we made a reasonable decision at the time, because we had reason to believe that Qadafi was about to initiate a wave of mass civilian killings in Benghazi. I’d love to work on something to explore that.

          • Thomas Jørgensen says:

            Libya is mostly not on the US either way.

            The French and the UK kicked down that door. And it was mostly about the fact that major repression campaigns on Europe’s doorstep tend to result in a lot of boats crossing the Med, with a minor side order of “Boy, do we ever have a laundry list of scores to settle with this asshole”.

          • albatross11 says:

            Thomas:

            How’s that “intervene in Libya to keep boatfuls of refugees from crossing the Med” thing working out?

          • Thomas Jørgensen says:

            It’s an African country; people want to leave it, but it’s not like Syria, which lost a major fraction of its entire population to people voting against all sides in the struggle with their feet. So… “kind of worked?” Though that does rest on the implicit assumption that Qaddafi would have been very heavy-handed in victory. But, well, see “laundry list of scores” – people had very low expectations regarding what he would and would not do.

          • Douglas Knight says:

            Thomas Jørgensen,
            Where do you get that belief about the French/British cause? Do you have some source?

            I suspect that you’re confused about the fact that Gaddafi made the opposite threat/prediction.

          • J Mann says:

            @Douglas Knight – I would buy Thomas Jørgensen’s theory.

            As far as I can tell, both (a) the justification that Britain and the US offered for involvement, and (b) the statement of intended scope were simply false. (Possibly negligently false rather than intentionally, but if so, it was a major failure of decision making).

            The story that the British parliamentary inquiry eventually came up with was “yes, there was no reason to believe there would be the massacre that Britain offered as a justification, but the decision makers at the time were focused on Srebrenica.”

            Thomas’s story makes sense in terms of incentives – the worst thing from a European perspective would be a drawn-out war, and while Qadafi wasn’t actually threatening to massacre the population of Benghazi, he did tell the population to flee to get out of the path of the war with the rebels. Fleeing isn’t the massacre that officials represented, but it’s not any good for Europe.

        • Nornagest says:

          +1. Gadaffi and Saddam give up their WMDs (Saddam being cagier about it for face-saving reasons) and both end up getting killed, mainly thanks to NATO airpower. Kim gives the West the finger, develops nukes, and ends up getting his ass kissed in exchange for relatively minor concessions. If I was a tinpot dictator, I’d definitely be thinking “hmm…” about now.

          I’m not even sure kissing Kim’s ass is the wrong thing to do at this stage of the game, since my city might be on the target list and I enjoy not being a cloud of radioactive ash, but the incentives we’re creating by doing it — and more importantly by screwing around with Gadaffi and Saddam a few years ago — are really, really bad.

          • albatross11 says:

            One thing I wish we (the world, not just the US) could establish is some kind of luxurious consequence-free home for wayward dictators. Make it clear that whatever crimes you may have committed, there’s a point during the uprising/civil war in your country where we offer you a nice exit point–you and everyone in your family/close set of political associates get safe passage out of the country and a guaranteed life of luxury far away from your current country where the peasants are a little cranky about the whole secret police reign of terror/mass looting of the treasury thing you’ve been doing for the last few years. We promise we won’t let anyone fly you to The Hague for any of those tiresome crimes against humanity trials, or hand you back to your successor government in exchange for some diplomatic advantage or something. We’ll even throw in some security for you and your family so you don’t have to worry so much about assassins sent by the families of all the people you had disappeared. But you have to go *now*.

            Wielded properly, it seems like that might lead to much better outcomes in the world – civil wars ending early with the dictator boarding a one-way flight to the Bahamas, rather than a grinding war of attrition because the dictator and his cronies know they’re dead men if they lose power.

            The first difficulty here is that it’s really hard to swallow giving the Pol Pots and Pinochets and Mugabes and Pahlavis a luxury home on the beach and a generous living allowance, rather than a blindfold and a cigarette.

            The second difficulty is that once we’ve made the promise, there will always be a temptation to back out in this one special case where the dictator was especially unsavory or the new regime is especially important to court.

          • Randy M says:

            You think we could keep Napoleon on Elba this time?

    • mtl1882 says:

      I’m interested. I don’t know a ton about the issue, but I’m good at in-depth research. So basically, you would argue it was unjustified, due to the reasoning above, and I would argue that it was? I have a feeling your side is a lot stronger, but I can definitely make the strongest argument possible for the other side. Or is your argument specifically that the public was misled as to the real goals? I consider that almost certain – that’s how these things work IMO. Not really sure I could plausibly argue their claimed motives were sincerely held or that they were honest with the public.

  30. baconbits9 says:

    In the last links thread I linked my NBA playoff preview and might have done so incorrectly. Correct link

  31. Relenzo says:

    Is there going to be a ‘Comment Responses’ article for the UBI post? A lot went down in those comments and I at least would love some additional processing for that.

    • Randy M says:

      I hope so. I haven’t taken the time to go through them, but the article was interesting and trying to load the comments now would be difficult even if I felt up to reading the tomes written in response.

    • A1987dM says:

      +1

    • Nabil ad Dajjal says:

      I’d be pretty interested in that as well.

      I didn’t make a particularly good showing in that thread. I allowed myself to be infuriated, and fury in text form is more sad than convincing. But there were other people who expressed similar points in a calmer and more lucid way.

      Another thing I would like to see a clarification from Scott on is this point by 1soru1:

      > Because people don’t like being poor, same as today. UBI + no job = poverty.

      This discussion really really needs to stop using the same word for two completely different things.

      Poverty-UBI, aka alt-welfare, has more or less zero properties in common with post-scarcity-UBI. Consequently, every argument for or against either can be trivially countered by silently switching to the other meaning of the word.

      Scott’s enthusiasm for UBI, to the point of calling it utopian, only makes sense with post-scarcity UBI. But post-scarcity UBI depends on us having a post-scarcity economy, which we absolutely don’t have today and likely never will. It makes sense as an idea to keep in our collective back pocket but not as a policy objective in the near term.

      A poverty UBI is still problematic. It would inevitably be voted up into a post-scarcity UBI, because our leaders on both sides of the aisle are mad millenarian cultists. But it’s much more defensible, especially in an authoritarian state which could credibly promise not to increase it beyond bare-minimum survival needs.

      • eterevsky says:

        If you told a person from the 18th century about today’s world, they would almost certainly answer that you are already living in a post-scarcity world. I mean, no one is dying of hunger; what else do you want?

        The criteria for poverty move together with society’s prosperity, so it might be that we’ll always perceive ourselves as far off from post-scarcity.

      • christianschwalbach says:

        Part of the large appeal of UBI is a simplification of existing welfare programs, expanding the reach but consolidating the application, processing, etc. We have changed elements of our welfare system (the US, I mean), but it has existed for a decent part of the last century. If UBI is implemented, the political will for it to remain will be quite strong, but I do see potential adjustments needing to take place depending on economic conditions at the time. That being said, it’s somewhat an example of the ability to implement welfare in a scarcity-based economy.

  32. Are there any other SSC Aussies out there who think that Australia is in a kinda bad position at the moment? Have we stupidly positioned ourselves right in the middle of the two world superpowers, who are increasingly aggressive with each other, with a huge chunk of our trade with one and lots of strong ties with the other? I don’t want to talk or hear about the geopolitics of it – geopolitics is a cancer that eats worthwhile human endeavours, and doing anything worthwhile for humanity means trying to stay clear of it! I just want to know if there are others out there who feel we’re trapped in a really strategically bad position with no way out, and how worried we should be. I have no grudge against either country, but sometimes I feel like I’m the only person who notices that our situation sorta resembles Poland before WWII? 🙁

    • Wrong Species says:

      I don’t think Australia is important enough on the global scale to be another Poland.

    • engleberg says:

      I think the ‘dagger pointed at the heart of Antarctica’ thing will keep you relatively safe. Poland is between too many other countries.

      • Paul Zrimsek says:

        On the other hand, there’s Australia’s long history of being partitioned between the US and China to reckon with.

    • WashedOut says:

      First of all, it isn’t obvious why our positioning is “stupid”. We are politically aligned with the US, which has given us military-diplomatic benefits, and we are economically aligned with China (and the rest of SE Asia for that matter), which has given us economic benefits. Being able to leverage the US military is no joke, and during the 2007/2008 GFC our mineral exports to China are partly what saved us from the same magnitude of downturn as many other OECD countries. For the last 10 years or so China has been increasing its militarisation of the South China Sea, but it is surrounded by rivals in that space, so not every example of Chinese encroachment amounts to a 1:1 ramping up of tensions with the US.

      Diplomatically I think we do quite well – Julie Bishop has proven her skill in this area and will probably be the next Prime Minister if the Coalition don’t lose too many more seats.

      Your question seems to assume that the likelihood of massive military conflict between the US and China is very high. Whilst I agree they are in tension, I think it’s more likely the US will just economically fizzle out on its own before China decides to take any major action outside of naval posturing.

      • I wasn’t really thinking of a conventional military conflict. It’s more that both have a lot of influence here, and we are very dependent on them both, so I think we pay the cost when those influences clash. Sure, Australian diplomats/foreign affairs/forces do a great job, but it’s about more than just diplomacy – e.g. look at the issue of influence in elections around the world in recent years. I just resent any instance of my country’s domestic affairs being dragged into the cancer of geopolitics at any level. I, and I think most Aussies, value independence and really don’t like that.

    • Protagoras says:

      Poland seems about the darkest comparison you could make. Why not compare the situation with Sweden? The Swedes did all right playing both sides in WWII, and Australia is more off to the side than squeezed between America and China, so it’s unclear why that isn’t a better comparison than Poland.

      • quaelegit says:

        Maybe a better WWII comparison than Poland would be Argentina? They had stronger cultural ties to Germany due to immigration (plus being a past – and future – rival of the UK), but stronger economic ties to the Allies. The US ultimately used economic pressure to force them to side with the Allies, but they held on to neutrality until 1944.

        And as Protagoras points out Sweden stayed neutral all the way through (and sort of for the Cold War also I think?)

    • Reasoner says:

      From my perspective as an American who admires China, this sounds like a good thing, because you can play peacemaker.

  33. Oleg S. says:

    Does anyone know if there is a medical term for the inability to formulate a correct Google search request?

  34. alveolartap says:

    Do meetups run regularly in Phoenix? I’m in the area until mid-July.

    • andrewflicker says:

      Nope- we’ve had one that I’m aware of in the last few years, though there is an EA meetup this Friday.

  35. Daniel Frank says:

    In order to predict GDP per capita ranking for the year 2100, what factors do you think would be most important?

    Some that come to mind:
    – current GDP per capita
    – historical level of civilization/greatness
    – median IQ
    – form of government/political stability
    – immigration policy
    – neighbouring countries

    • BlindKungFuMaster says:

      Demographic change concerning IQ, clannishness, religion, etc. That’s mostly past and future immigration + differential fertility.

    • rlms says:

      Proportion of land x inches below sea level.

    • SamChevre says:

      Nothing will beat political stability in my view. “Is an active war zone” has been and will be terrible. (Visiting Venice really drove that point home – it’s an island in a swamp, but it was not invadable, and so it is incredibly rich per square foot.)

      I’ll put current GDP per capita second.

    • Jon S says:

      Current rate of growth of GDP per capita is quite important too. Needs to be averaged over the last N years.

    • Wrong Species says:

      If there’s a singularity, that would drown out every other factor.

    • Thomas Jørgensen says:

      ….. being magic? Seriously, that is too far out for forecasts to mean anything whatsoever.

    • albatross11 says:

      Assuming the next century looks more-or-less like the previous couple, I’d expect the core issues to be median IQ, form of government (particularly avoiding some catastrophically bad form of government that gives you Great Leaps Forward or Final Solutions), and neighboring countries (if you’re surrounded by aggressive, powerful neighbors, your next century may be all about war or occupation even if you’d like it to be about peace and prosperity).

  36. bean says:

    Naval Gazing begins looking at auxiliary ships today, starting with the stories of the first refueling at sea, or at least the first operational use of refueling at sea in the USN.

  37. GranderDelusion says:

    Declaration of intent: I’m going to collaborate with a fellow Hotel Concierge+Samz[]dat fan, and a layperson, and attempt to distil each of their posts into an easily digestible format. Exact methodology may change, but initially we’ll seek to:

    1. Work through the article and convert it into bullet points
    2. Identify the key argument(s)
    3. Restructure around the key arguments
    4. Drop tangents into a separate section

    Is this something people would like to see? Happy to link drafts or progress as we make it. It won’t be fast, but both of these people are amazing thinkers and I’d like to make them accessible to a broader audience.

    • Anon. says:

      I just find it amusing that obscure bloggers both require exegesis and actually have a readership willing to do it. Maybe next year we will form factions based on competing interpretations!

    • BlindKungFuMaster says:

      Maybe unreadable writers just shouldn’t be read. But go ahead, maybe that’ll allow me to give both of them a second shot.

      • Freddie deBoer says:

        There are no unreadable writers, only bad relationships between reading styles and writing styles. Not everyone needs to read the same things and that’s OK.

        • Aapje says:

          r u sure?

        • Brian Young says:

          There are objective, if not necessarily quantitative, criteria by which writing can be ranked (for example, succinctness). It follows that difficulties can be caused by things other than stylistic mismatches.

    • The Element of Surprise says:

      As a non-native English speaker (and slow reader), I would be very grateful for this! Especially Samz[]dat takes a lot of mental energy for me and still leaves me with the feeling of not actually getting what he is trying to say.

    • Nabil ad Dajjal says:

      Since this seems to be building on my previous attempt with one of Hotel Concierge’s posts, a few thoughts:

      Samzdat and Hotel Concierge have similar writing styles and erudition, but their subject matter is IMO very different. They both tie it back to the everyday dysfunction of the modern world, but Samzdat seems much more concerned with how we understand things and Hotel Concierge with how we relate to other people.

      Samzdat’s last update lays this out with uncharacteristic clarity. From what I remember he is concerned that our worldview is changing, or has changed, and that this change in context makes it hard or impossible to understand the world in the same terms as before. This is his diagnosis for the increasing use and misuse of quantification; we have lost or are losing our ability to understand the world in qualitative terms. Just like the Jewish people demanded kings replace judges, today their descendants demand that statistics replace heuristics (misnamed as “cognitive bias”) in all walks of life. Heuristics deal with Knightian uncertainty much better than formal statistical methods, and uncertainty is more common in ordinary life than Knightian risk. But, like the prophet Samuel, heuristics just aren’t as authoritative or pretty to look at.

      I would also suggest that if people wanted to do this systematically, The Last Psychiatrist would be a good starting point. He’s by far the clearest writer of the three and in his later posts he very rarely strays far from his central theme.

    • helloo says:

      Why don’t you include SSC ? 😛

      Regardless of the language, the length pushes it out of “easily digestible” territory, and not a lot of the TL;DRs, if they’re even included, are good enough to be considered a short summary rather than a shortened conclusion.

    • Nick says:

      I’d be happy to see it. More happy to see sam[]zdat than hotelconcierge, because I find the former more difficult to read.

    • mtl1882 says:

      I’d be interested – and even would be interested in helping to distill them if you ended up needing more help in that area.

    • RavPapaBigKasha says:

      Hey there, I’ll work through Samzdat articles with you. How should we start collaborating?

  38. Baeraad says:

    Name the following moral philosophy:

    There are two moral frameworks, X and Y. Moral framework X is objectively correct – leave aside for now where it gains its correctness from. Moral framework Y is the perfect compromise between the opinions of all currently living human beings in the world about what is right and wrong.

    Each person has a moral obligation to act in accordance with framework Y.

    Each person has a moral obligation to hold moral opinions in accordance with framework X.

    If each person does their duty in believing in framework X, then X and Y will be the same thing, and all will be well. However, in practice, many people will hold incorrect moral opinions, thus pushing Y out of sync with X. This does not, however, change the rules as stated above: it is moral to act in accordance with Y, even when that means acting contrary to X. It is very sad when a person who correctly understands X is morally compelled to act against it, and much to the shame of the people whose incorrect moral opinions have put him in that unhappy position, but his moral duty is nonetheless clear: he must be true to the consensus, even though he personally knows better.

    As strange as it may sound, this is, as near as I can tell, what I actually believe. And I don’t think it’s completely insane. I am sure I have heard about people who believed that they had an obligation to follow the laws of the land, even when they personally disagreed with them. And let’s face it, anything anyone can think of, someone else has probably already written a bunch of heavy books about, so I’m absolutely sure that there must already be a name for thinking this way.

    I don’t think I’ve ever heard of it, though. Any well-read person here who might enlighten me?

    • skef says:

      I doubt it has a name because the way you’ve put things sounds contradictory:

      Moral framework X is objectively correct

      it is moral to act in accordance with Y, even when that means acting contrary to X.

      If X is objectively correct, how could it be moral to act contrary to it?

      The way philosophers usually talk about these questions is in terms of ideal and non-ideal theories. An ideal theory describes what everyone should (ideally) be doing. A non-ideal theory addresses what to do given that some people will be acting unethically. A truly objectively correct theory would presumably give the right answer for any case (assuming there are such answers).

      The thing your description asserts that the dichotomy between ideal and non-ideal theories does not is that actions called for by the ideal theory are always morally correct. (Or if you aren’t asserting that, it’s something in the same neighborhood.) But there’s no reason to think that. Either it is right, and an objectively correct moral theory would therefore tell you to do that in that case, or it isn’t, in which case your action may be “socially correct” but morally wrong. (e.g. you did it so as to avoid the social consequences of acting in a different way even though you shouldn’t have.)

      • Baeraad says:

        If X is objectively correct, how could it be moral to act contrary to it?

        Because going against the public will is so wrong that it offsets the rightness of the action.

        • beleester says:

          If that’s the case, then shouldn’t X say “Don’t go against the public will in this case?”

    • BlindKungFuMaster says:

      What would it even mean for a moral framework to be objectively correct?

    • So we could crudely simplify this as: do what your community feels is good, but argue for what you think is objectively good? (I take it you’re not meant to just keep your ideas of objective morality in your head, but instead try to convince others to shift the norm?)

      I have some casual interest in moral philosophy, and I can’t think of anything that formulates things quite like this. I agree its not completely insane – I would probably be comfortable around people holding this view.

      I think one possible practical flaw is that it lacks persuasive power if the person is not willing to deviate at least a little from the average views – to not just argue for, but set an example of, what they believe. But doing the opposite seems ineffective (and probably dangerous too), so it’s an interesting thought that seems to isolate a real issue in applied ethics.

      At a more theoretical level, we can postulate something like this that maps nicely to some people’s moral intuitions, but it’s not clear why this amounts to anything more than a generalization about some people’s moral intuitions. It’s not doing much heavy lifting in actually helping us decide what is right or wrong. That being said, I think a lot of moral philosophy is guilty of this, and it’s not that easy to avoid.

      • Baeraad says:

        So we could crudely simplify this as: do what your community feels is good, but argue for what you think is objectively good?

        Yes, that sounds about right. The example that comes to mind is that grumpy dwarf from Prince Caspian – “I know the difference between giving advice and following orders. I have given my advice; now I will follow orders.”

        I agree its not completely insane – I would probably be comfortable around people holding this view.

        Well, you could be reasonably sure that they’d act sanely even if they were insane, since they’d consider themselves morally bound by what the sane majority considered proper behaviour, so there’s that.

        I have some casual interest in moral philosophy, and I can’t think of anything that formulates things quite like this.

        Hmm. Surprising. So I’m actually a bold and original thinker? Or, from a less charitable perspective, a complete fruitcake. :p

    • BeefSnakStikR says:

      I’m not super well-read, but I did some digging, so here it goes. It sounds like some sort of “moral fictionalism,” where moral beliefs are a useful fiction and serve some other function (in this case, allegedly they facilitate moral actions).

      i.e. you should believe “killing is wrong” even though sometimes you should kill. [Self-defense, etc.]

      Though you haven’t really specified any function by which this occurs, so that name could just as easily be the opposite of what you’re describing: that acting according to arbitrary moral rules facilitates us having objectively true beliefs about the world.

      i.e. our capacity to kill gives us the capacity to assert the truth that killing is wrong. [See also “It wouldn’t be illegal if no one did it.”]

    • Protagoras says:

      I think this view ends up being closer to Kant than may appear at first glance. Kant requires us to treat everyone as rational agents, even though obviously most people aren’t being rational most of the time; while the actions which are required out of respect for the pretended rationality of irrational people are not the same as the compromise you describe, there seems to me to be something similar going on.

    • carvenvisage says:

      The word solidarity comes to mind. By acting in line with the moral consensus you prioritise human trust and harmony (and others’ investment in morality) over what someone else would see as their integrity.

      Is it not ultimately motivated by utilitarianism – a belief in the practical value of (the ascendancy of) faith in morality/moral consensus?

    • Obligation is more than one thing, because everything is more than one thing.

      There is a sense in which you are obliged to follow “the law of the land”, just because you are expected to, held accountable if you don’t and so on. That sense doesn’t have much to do with morality.

      There is a sense in which you are abstractly obligated to do what is ultimately right. That has everything to do with morality, but not much to do with what people expect of you.

  39. MaxWeaver says:

    Request for help from AI Risk researchers/enthusiasts:
    I work in the military and will be briefing a fairly high-level general in about a month. This man controls many millions of dollars that can go towards research projects. In previous briefings he’s expressed that current scientists and experts don’t believe that AI will ever be creative. My briefing to him will be on a tangential topic, current AI algorithm uses, but I will probably have a brief opportunity to sneak in some one-on-one time and attempt to make the AI risk case. I’m planning to work in some logic, but primarily to present something like an updated “AI Researchers on AI Risk.” I figure that a lot of you have a better read on the current pulse of such things than I do. The stakes: probably nothing / a slight hit to my career, potentially Department of Defense-type grant money.

    • quanta413 says:

      So I have 0 expertise on this, but here’s my input for what it’s worth.

      I don’t think you have a snowball’s chance in hell of convincing a general about the existential risks of AI if he’s not already favorably inclined.

      You’re better off trying to connect whatever you’re interested in to something closer in time and more immediate. Like, can you cook up a relationship between whatever you’re interested in and the dangers of a vaguely plausible modern “AI” made by an enemy nation? Or the issues of making AIs as opponents in a wargame to test battle plans? Etc., etc.

      To elaborate, let’s say your goal was to fund more research on goal alignment. I’ll go with trying to link up to the second set of things.

      Having an AI that could play different sides in a wargame and properly understand the political goals at hand (not just maximizing something like KDR or territory captured, but actually, given some natural language description of the political situation, doing proper goal alignment in this scenario and then properly translating that into strategy when combined with the hard data of military resources and how combat works) would be an immense accomplishment. It’s also way more plausible sounding than existential risks. It also lets you avoid touching any preconceptions the general has that AIs can’t be “creative”. The AI will “just” be doing a little bit of understanding heterogeneous human political goals and translating them into constraints, and then “just” optimizing a strategy to fulfill these goals.

      • MaxWeaver says:

        Normally I’d agree with you that generals are lost causes, but this one seems fairly dynamic and open-minded in comparison. He’s excited about AI in general and worried about enemy nation use of AI, so that’s a natural segue. I think the avenue you mentioned looks particularly promising for now, so I’ll look into that a bit more. I think your overall approach of trying to meet my goals without challenging his beliefs has the best chance. And it’s not like my career will probably amount to much anyway; it’s worth a shot.

      • carvenvisage says:

        “To elaborate, let’s say your goal was to fund more research on goal alignment.”

        I really hope there is no such explicit goal.

        If you’re not looking at it as a potential win-win, in terms of genuine benefit from the POV of the person you’re advising (and by extension the people they represent), that is a huge red flag for communist-style entryism.

        Like, you can’t just go around conspiring to redirect funds to your favored political causes!

        It is legitimate to bring information to someone’s attention, especially if it’s relevant and they won’t have seen it or they’re misinformed, but approaching it with the mindset of “I want these funds redirected, how can I make that happen”..

        I’m not up on all the lingo, but the word treason springs to mind. (It certainly seems a bit insubordinate.)

        Anyway, my advice to OP is to look at this as a potential win-win interaction, an opportunity to inform, and not an opportunity to push your views or even to persuade.

        I think this is really good advice anyway, even from a cunning, entryist, Machiavellian perspective, but especially because if you were a smooth political manipulator you probably wouldn’t have posted this question on a public forum (a main open thread no less, lol) to crowdsource how to do it.

        • MugaSofer says:

          AI alignment isn’t a political cause, and in any event I would imagine the US military has a vested interest in the US not getting turned into paperclips.

          • carvenvisage says:

            AI alignment isn’t a political cause

            Personally I agree that it’s a legitimate threat, but putting a cause you’re worried about in a position to be written off as such is a basic extension of the same principle. “Just do it” is for job applications and dating and ‘ask cultures’ like acting and filmmaking. It’s not for air traffic control or politics. And disregarding the basic matter of honour and the attitude of an advisor or consultant: if you’re not really competent to “push” political points, you can easily end up doing more damage than good by making a concerted attempt to ‘attain your goals’ (transparently manipulate), when just being honest and helpful up front might have achieved the same thing. (Even in the short term! Of course in the long term guilelessness is way better, because it credits rather than discredits you.)

            I’m reminded of Eliezer Yudkowsky describing people at a sumptuous banquet of food eating the plates. Maybe I’ve got this all wrong, but this “goal oriented” approach seems downright crazy to me. If you have to crowdsource how best to manipulate someone, maybe you especially would be better off being upfront: “Here’s what I think and why, judge for yourself.”

            (And when I say downright crazy, I am accounting for the indifference implied by the career comment. I mean mind boggling, not merely reckless like telling your CO to fuck off)

            _

            Just to reiterate, a general is not supposed to be a loot piñata, something to extract funds from with the right inputs. It’s an alien (I won’t say fucked up, because it’s probably just naive “autism” and too much Shia LaBeouf) but really suspicious way to approach such an interaction.

            -You know that saying about how people are too busy worrying about themselves to scrutinise others? Less true around millions in military funding than elsewhere.

            Anyway, my advice again is don’t try to be clever with this shit, don’t approach a general as an obstacle or a task to be conquered. Make your case if you think it’s important, argue it if you want, but it’s ultimately supposed to be, and ultimately is, up to them. (Not a situation for taking ‘internal locus of control’ too far.)

            _

            I could be totally wrong, hopefully I am, but I feel justified/obligated posting my impression when someone is on the one hand talking about potentially tanking their career and on the other posting the idea openly on a public forum.

            (again that’s accounting for any indifference implied by that comment including (possibly) the downright nihilistic. I don’t mean only in the sense that playing russian roulette is crazy, I mean that things don’t seem to add up)

    • SaiNushi says:

      “he’s expressed that current scientists and experts don’t believe that AI will ever be creative”

      I thought Vocaloid (the J-pop synthetic-vocals act) had been AI-created for a couple of years now? I haven’t looked into it for a while, so I’m not sure on that. However, I do remember that AIs have made paintings and have written stories that are not necessarily based on algorithms. If I’m remembering these things correctly, then AI is ALREADY creative.

      Also, Google had to shut their AI’s down because the AI’s created a private language that no one could understand.

      • Hackworth says:

        Also, Google had to shut their AI’s down because the AI’s created a private language that no one could understand.

        https://www.snopes.com/fact-check/facebook-ai-developed-own-language/

        Snopes says no, unless you’re actually referring to a Google AI in a similar but separate incident (which Google itself didn’t give me as a search result) rather than a Facebook AI.

        • MugaSofer says:

          Snopes says false, yet the incident they describe is pretty much what the “myth” claims?

          • Iain says:

            There was no panic, and the project hasn’t been shut down. Our goal was to build bots that could communicate with people. In some experiments, we found that they weren’t using English words as people do — so we stopped those experiments, and used some additional techniques to get the bots to work as we wanted. Analyzing the reward function and changing the parameters of an experiment is NOT the same as “unplugging” or “shutting down AI.” If that were the case, every AI researcher has been “shutting down AI” every time they stop a job on a machine.

            Google didn’t shut their bots down because they were speaking a private language nobody could understand. Facebook fixed a bug in their bots that caused them to do a bad job speaking English to each other. (The bots were rewarded for successfully negotiating with each other, but not for doing so in proper English, so they drifted away from well-formed English towards their own idiosyncratic shorthand. This was not what the researchers intended, so they started rewarding the bots for using English correctly.)
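
            A minimal sketch of the reward tweak being described, with hypothetical inputs (this is an illustration of reward shaping, not Facebook’s actual training code):

            def shaped_reward(deal_value, english_likelihood, english_weight=0.5):
                # deal_value: how good the negotiated outcome was for this bot.
                # english_likelihood: a stand-in score (assumed, not a real API) for
                #   how well-formed the bot's utterances are as English.
                # With english_weight = 0 the bots are only paid for the deal, so they
                # can drift into private shorthand; the second term rewards staying in
                # readable English.
                return deal_value + english_weight * english_likelihood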

          • Randy M says:

            Tell me they called it Ebotics?

      • beleester says:

        Vocaloid is not AI-created. It’s a speech synthesis program that has a convincing singing voice, but a human is the one telling them what to sing. The songs are credited to the Vocaloids themselves, and the Vocaloids get stage shows like they’re real musicians, but it’s ultimately just a marketing gimmick.

        There have been some AI composers, however, like Emily Howell.

    • Reasoner says:

      I used to be a big believer in the Bostrom/Yudkowsky school of thought on AI risk. But as I acquired expertise in AI/ML, their concerns seemed increasingly underinformed. My current belief is that it’s worth considering the safety implications of future AI advances, but the best people to do this are top AI professors with a lot of domain expertise, and the best way for them to do it is to use the writings of Bostrom and Yudkowsky as more of a creative writing prompt than anything else.

      • kokotajlod@gmail.com says:

        I’d be interested to hear more about what you learned that caused this change of mind!

        • Reasoner says:

          In Superintelligence, in the section on value learning, Bostrom writes:

          The difficulty [of value learning] is compounded by the desideratum that, for reasons of safety, the correct motivation should ideally be installed in the seed AI before it becomes capable of fully representing human concepts or understanding human intentions.

          This reveals an anthropomorphic view of cognition: Highly “intelligent” things are unsafe, and “intelligence” is required to understand complex concepts like human values. Anything “intelligent” enough to fully represent human concepts or understand human intentions is also “intelligent” enough to be dangerous.

          Computer science suggests a different perspective: Instead of there being something called “intelligence”, we instead have a collection of techniques. Whether a technique is considered “intelligent” is kind of arbitrary: Object-oriented programming originated in AI research groups, the field of computer science that researches the digital counterpart to human memory is called “databases”, and a lot of machine learning techniques originated in statistics. There’s no reason to think intelligence is ontologically fundamental, but Bostrom and Yudkowsky treat it like it is all over the place. So their thinking is kinda built on a foundation of sand.

          • kokotajlod@gmail.com says:

            Thanks!

            Something like this has bothered me for a while. I think I can give a bit more of a charitable reading to Bostrom+Yudkowsky though. IIRC they explicitly acknowledge that intelligence isn’t a monolithic or fundamental thing, that really it’s shorthand for competence in a bunch of different important domains. So the question is, can their arguments be reconstructed with that in mind, or do they fall apart when we think of intelligence that way?

            Here’s my attempt at a reconstruction:

            While in principle it is possible to have a system that is human-level at understanding human intentions and using human concepts while being subhuman at world-modelling and decision-making, in practice it’s likely that a system we build via a normal method (try things, pick the things that work best, try some variations of them, repeat) with the goal of making something understand human intentions, concepts, etc. will be human-level competent at goal-oriented decision-making as well.
            Thus, *in practice* anything “intelligent” enough to fully represent human concepts or understand human intentions is probably also “intelligent” enough to be dangerous.

            Analogy: “Understanding human intentions” isn’t monolithic either. Technically it’s a bunch of sub-competencies: understanding my intentions, understanding Bob’s, understanding Julia’s… Yet we believe (for good reason) that “by default” something capable of understanding mine is going to be capable of understanding Bob’s as well. It would be foolish to say “Don’t worry, this thing won’t have any idea what Bob intends–we are just trying to make something capable of understanding your intentions, you see.”

            Thoughts?

          • Reasoner says:

            @kokotajlod, creating models of the world (which is the ability you want in order to understand human intentions) and goal-oriented decision making are fairly distinct research areas. Very roughly, the first is “ML”, the second is “AI”.

            I think creating accurate models of the world is necessary for effective goal-oriented decision making, but I don’t think effective goal-oriented decision making is necessary for creating accurate models of the world. So I’m pretty optimistic that FAI is solvable.

            Re: your analogy. Yes, I expect that once we have the technology to model your intentions, that technology could be applied to model the intentions of any human. But your concerns still sound frustratingly anthropomorphic to my ears as an engineer. When I’m creating a system, if I don’t understand how it is supposed to work, I am almost certainly not going to successfully create it in the first place. So all this talk about me creating a system which unexpectedly acquires capabilities I didn’t design sounds like magical thinking.

        • Not quite ontologically fundamental, but they do tend to treat it as One Weird Trick that can solve anything.

    • Loris says:

      I am not an expert. But.

      In previous briefings he’s expressed that current scientists and experts don’t believe that AI will ever be creative.

      Presumably your general is using “creativity” essentially as a synonym for “ability to succeed by unconventional means”.

      A) There already exists a system AlphaGo, which beats the best human Go player in the world. It uses moves which surprise skilled human players. Go was the last game to fall.

      B) possible slide? : xkcd
      I like this article, and I’ve linked to it before.
      The salient part this time starts: “In 1981, a computer scientist from Stanford University named Doug Lenat …”
      In brief, in 1981 a computer program could generate winning strategies against motivated human opponents, basically just by disregarding human convention.
      Computers have come on a little since then. If an AI has at least the intelligence of a very smart human – even if it can’t find a winning strategy “natively” – it could scrape together the computational resources to run such a program and find one.

      It doesn’t seem like creativity will be a sticking point for an AI.

    • sty_silver says:

      I think you should definitely definitely try, just to voice the contrarian view.

      So on the question ‘what’s the most high-impact thing you can say in a very limited time’ my best guess is something like this: mention the most credible survey, and simultaneously undersell your own confidence a bit because that’ll make you sound more serious. Then suggest a good source to read about it more, like Superintelligence.

      I don’t think that’ll work, but it seems more likely to work than the direct approach. This [what’s expressed in this post] is a fairly low-confidence view, though.

  40. Nancy Lebovitz says:

    Aging might be *partially* cured.

    My guess is that slowed aging – the kind that folks with good genes get, staying healthy into their nineties – is going to be relatively easy. Going by feel, I think that’s going to be available to people in general in 50 years.

    Aging is a process, not just intrinsic deterioration. Progeria is evidence. It may be possible to turn aging off, but it wouldn’t surprise me if there’s slower aging which will be much harder to solve.

    • Nabil ad Dajjal says:

      The state of the field in aging / longevity research really doesn’t support that optimism.

      I was very interested in longevity when I entered graduate school and my first rotation was in an aging lab. In the process I read a fair amount of the literature and saw the kind of results they consider impressive. That led to my decision not to continue studying aging.

      I am convinced that we will eventually uncover the mechanisms involved in aging and develop treatments which meaningfully increase longevity in model organisms and humans. But I’m not convinced that it will happen in either of our lifetimes.

      • johan_larson says:

        Yeah, the past is not always a reliable guide to the future, but what it is telling us is to expect a slow rise in life expectancy during our lifetimes.

        https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_life_expectancy

        • albatross11 says:

          Intuitively, if aging is mostly a “program running out” kind of problem, then there won’t be one cure, there will be a hundred different places where things need to get fixed.

          By contrast, if aging is mostly an evolved-in program for shutting down old people to keep them from dominating the genepool or something, then it will probably be pretty easy to cure by “breaking” that mechanism.

    • engleberg says:

      @Aging might be partly cured.

      I’d like to see old age homes equipped with centrifuges. A flabby geezer gets in, the centrifuge gives him three days at two G, a buff geezer emerges. At least that’s how I’d make the ads look. The market for silly expensive exercise equipment is huge and centrifuges have that mad science look jocks love.

  41. Dutch Nightingale says:

    A couple of weeks ago in the classified thread I put up a message saying “Hey, I work in machine learning and live in London. Does anyone similar want to get a drink some time?” I got three responses and met three people and these were all positive experiences. It turns out that “people who do similar work to you on a forum you like” is a pretty powerful filter for finding cool people to talk to one-on-one.

    I relate this tale to you for the following reasons:

    1) I still work in ML and still live in London, and does anyone similar want to get a drink some time?

    2) Perhaps you should consider doing something similar.

    • mtl1882 says:

      I live in Boston and work in education and historical research (with a background in law), if anyone is interested. I’d also be interested in meeting people who work in other fields.

      • powerfuller says:

        I live in Boston and work in education and (somewhat) in history. I’d be willing to meet if you’d like. You can drop me a line at powerfuller [at] tutanota [dot] com.

  42. Clarence says:

    In a complete coincidence, China will remove barriers to foreigners registering businesses. I’m sure this has nothing to do with Trump. The US government would have been fine with the status quo of China enriching themselves at our expense. If Trump gets credit for any good deeds, he might get re-elected, a tragedy of immense proportions. Thus we must do our utmost to sabotage North Korea’s peace initiatives and anything else he threatens to accomplish.

    • Anonymous says:

      You’re sarcastic, right?

      • Nicholas Conrad says:

        Sounds more like sarcastic alt-right?

        • Anonymous Bosch says:

          Being sarcastic is a rhetorical tactic to shift the burden that would otherwise be on them to show why China “simplifying business registration procedures for foreign-invested enterprises” is due to Trump. Personally I’m pretty unclear how rules making it easier for American capital to support businesses and jobs in China jibes with Trumpist nationalism.

          • Personally I’m pretty unclear how rules making it easier for American capital to support businesses and jobs in China jibes with Trumpist nationalism.

            I think the answer is that Trump is thinking of it not as American capital vs American labor, where either importing labor (immigrants or imports of labor intensive goods produced by foreigners) or exporting capital benefits capital at the expense of labor.

            He is thinking of it as Americans vs Chinese. When we import their stuff their producers gain, ours lose (of course, our consumers gain, but he’s ignoring that). When we export our capital to them, American businesses gain by the new business opportunities, Chinese businesses face more competition within China and lose.

            From his standpoint, he has gotten them to do something that helps us. He’s a nationalist who sees the issue as America vs China, not a left winger who sees it as American labor vs American capital. He’s also not an economist–or at least, if he does understand the economics, the people he wants to convince don’t.

          • Jiro says:

            of course, our consumers gain, but he’s ignoring that

            He is within his rights to ignore that because, when comparing businesses with Americans in China and businesses with Chinese, both sides of the comparison have benefits to our consumers, but only one to our producers. So the consumers drop out of the comparison.

          • @Jiro:

            My comment about our consumers was with regard to his criticism of Chinese imports, not with regard to China being more open to American business.

          • Jiro says:

            Comparing Chinese imports to products made in the US, again both sides have benefits to our consumers but only one to our producers.

          • @Jiro:

            If the Chinese spend the money they get from exporting to us on importing U.S. goods, that benefits U.S. producers, just not the producers of the goods they are exporting.

            If they spend the money buying U.S. securities that someone else would have bought otherwise, then either the (foreign) someone else uses his money to buy U.S. exports or the (foreign and domestic) someone else has money to invest in the U.S.

            If they simply pile the money up in a bank vault, they are giving the U.S. an interest free loan.

            In thinking about these issues, it’s worth remembering that if someone is buying Chinese money with dollars in order to buy Chinese goods, someone else is buying dollars with Chinese money in order to do something with them.

        • toastengineer says:

          “How many levels of irony are you on?”

          “Like,, maybe five or six right now, my dude.”

          “You are like a baby. Watch this:”

          If Trump gets credit for any good deeds, he might get re-elected, a tragedy of immense proportions. Thus we must do our utmost to sabotage North Korea’s peace initiatives and anything else he threatens to accomplish.

  43. userfriendlyyy says:

    Another article against your straw man of JG.

    • skef says:

      Phase 1: Figure out how to get accountability out of community-level democratic processes
      Phase 2: ?
      Phase 3: Universal employment

    • Tim van Beek says:

      Would you need to change any laws to implement any of this in practice on a local level?

    • Brett says:

      That’s an A for effort, especially since I can believe that essentially crowd-sourcing ideas for where Job Guarantee work might be done is probably going to come up with a lot of useful stuff that relying on a traditional bureaucracy might not have found. I’m not convinced it will be any better at implementing it.

  44. Sniffnoy says:

    Since people were talking about airplane seatbelts in the other thread — why are airplane seatbelts backwards compared to car seatbelts, anyway? (In that you have to move the “female” component towards the “male” component, rather than vice versa as in a car.)

    And on the topic of the safety briefings about how to put them on, I’m not sure I’ve ever actually heard an airplane safety briefing that did in fact explain the above, which is (based on my own experience) what I would expect to be the confusing part for anyone who hasn’t flown before!

    • bean says:

      I think it’s because you can’t use a take-up reel on an airline seat (almost certainly for regulatory reasons, although it might be cost/weight). The male end is easy to make out of a single piece of metal, while the female end is going to be larger, so it’s easier to package the adjuster in with it.

      • Sniffnoy says:

        Interesting! I’m confused as to why the lack of a take-up reel would affect this, though?

        • massivefocusedinaction says:

          Without a take up reel, the adjustment mechanism would also need to be a lock mechanism (else it would expand to its maximum length in a crash). So you only need one lock if the lock is also the adjuster.

          Old car seat belts (my 69 AH, and my grandfather’s 60s era GMC truck) are duplicates of airplane seat belts (where the lock portion is also the length adjustment attaching to a fixed male end).

          • AlphaGamma says:

            The oldest car seatbelts I’m familiar with are those on the Volvo PV544, introduced 1959. The adjustment and attachment mechanisms are different.

            The attachment mechanism looks like this, and the adjustment is similar to a backpack strap.

          • skef says:

            Those are the oldest three-points. Many cars before 1959 had two point lap belts similar to those in planes.

      • BBA says:

        When I was a kid, school buses in one of the states where I lived had seat belts without reels where the male end had the strap. (In the other state they didn’t have belts at all.) Trying to find pictures of them, I see that now they use retracting shoulder/lap belts on school buses, which seems excessive to me.

  45. hapablap says:

    What are the most wasteful safety measures in terms of $ per life saved in your industry?

    One of my pet peeves is the health and safety that gets tacked on to things with no thought to the actual cost and benefit. And it is by no means just government regulation. Private businesses impose requirements on each other all the time that have no benefit aside from box-ticking, and even though everyone involved agrees they are useless, there is no way to stop it. The actual Health and Safety regulation is not even that onerous, but it takes on a life of its own in industry as everyone scrambles to cover their asses. New jobs get created to deal with it, then those jobs need to justify themselves, and the insanity just compounds in a feedback loop.

    Meanwhile the costs of safety theatre are imposing an enormous drag on the economy that impacts real safety. Less funds for investment in new equipment or productive staff at my company means things are just slightly less safe than they would be otherwise. This fact is acknowledged by anyone not currently giving you one cut out of your thousand cut death. But if someone is trying to cut you, the argument of “I don’t want to implement this safety stuff because it costs me money,” is never going anywhere.

    Unfortunately there does not seem to be a way to fight back against it. The incentive structure is entirely one-sided. If a client says they need you to fill out forms or get a pointless certification to keep their business, you can’t say no. If your new company Safety Manager emails you that they want to add an extra form for every employee to fill out before each job, the employees can’t say no and the boss can’t either, out of fear of how the refusal will look at a wrongful death lawsuit down the track, and they may not want to because “more forms and data make us look good to the regulator.”

    A solution could be for safety management systems to encourage a more numbers-based approach to risk. If you google “safety management systems risk matrix” you’ll see that most of the likelihoods are defined as “probable” or “unlikely” instead of “once per year” or “once per thousand hours of operation.” This is intentional, so that people who aren’t good at math can still use the charts and define risk (I’ve been told as much at government safety management seminars). But it doesn’t work that way, of course. If you’ve subjectively defined the likelihood and the consequence and the resulting box gives you a risk of “25”, it doesn’t mean anything. “25” is a red box so you need to take action. Well, how much is it worth spending? There’s no way to know.
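
    A toy sketch of what the numbers-based version might look like, with entirely made-up figures (nothing here comes from any real safety management system):

    def expected_annual_loss(events_per_year, cost_per_event):
        # Replace "probable"/"unlikely" with an actual frequency estimate.
        return events_per_year * cost_per_event

    def mitigation_worthwhile(events_per_year, cost_per_event,
                              mitigation_cost_per_year, risk_reduction):
        # risk_reduction: fraction of the expected loss the measure removes (0 to 1).
        avoided = expected_annual_loss(events_per_year, cost_per_event) * risk_reduction
        return avoided > mitigation_cost_per_year

    # Invented example: an incident expected once per thousand operating hours,
    # 2,000 hours flown per year, $50,000 per incident, versus a $20,000/year
    # control that halves the risk.
    print(mitigation_worthwhile(2000 / 1000, 50_000, 20_000, 0.5))  # True: $50k avoided > $20k

    Unlike a “25 = red box” score, the output is in dollars, so it can be compared directly with the cost of the control.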

    The only way to clean things up would be to change the safety culture to be one of cold-calculating rationality instead of the one-sided combination of “damn the cost! this is safety!” and “if it saves just one life…” and “think of how this will look in court after an accident if we don’t do this” that we have now.

    *spoiler* One of the themes in Neal Stephenson’s Seveneves was that a society subconsciously chooses what type of progress and structure it wants to spend its resources of time and money on. He contrasts our current society’s obsession with social media with that of a future society that spends its resources on fantastic space architecture. I think western countries’ growing obsession with safety theatre is enormously, fantastically expensive.

    Here is a list of proposed and current safety measures adopted with absolutely no regard for cost-benefit (a rough sketch of the arithmetic follows the list):
    Armed guards at all schools: 35,000 schools x $200,000 per school / 25 lives saved per year, $280,000,000 per life saved

    Helicopter pads at a local airport: ordinary heli pads cost about ~$200 to put in (just gravel, and a few boards if you want to get fancy). This remote airport is on National Park land, and the regulator requires their own special contractor put them in at a cost of $30,000+ for four helipads. $29,600 / 0 lives saved = infinity dollars per life saved

    Federal Air Marshal Service: “protects” <1% of US flights. Cost: $800,000,000 per year / ~0.1 lives saved = $8,000,000,000+ per life saved
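
    As promised above, a rough sketch of the arithmetic; the inputs are simply the figures quoted in the list, not official statistics:

    def cost_per_life_saved(annual_cost, lives_saved_per_year):
        # Zero lives saved (the helipad case) means an infinite cost per life.
        if lives_saved_per_year == 0:
            return float("inf")
        return annual_cost / lives_saved_per_year

    print(cost_per_life_saved(35_000 * 200_000, 25))  # armed guards: 280,000,000.0
    print(cost_per_life_saved(29_600, 0))             # helipads: inf
    print(cost_per_life_saved(800_000_000, 0.1))      # air marshals: ~8e9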

    • The point of money is to spend it on things you value. Are you saying that society doesn’t value safety/life, or that it should value something else?

      • Colonel Hapablap says:

        In most cases the entities mandating expensive and pointless safety measures are not the ones paying for it.

        For the safety measures that are actually effective, valuing safety and life is fine to a point, but it becomes absurd when society spends $250,000,000 saving an American life when you can spend $3,000 and save a life in the third world.

        It is also likely that spending hundreds of billions of our resources to save thousands of lives puts a general drag on our economy that kills tens of thousands in ways that aren’t measurable.

        • 1soru1 says:

          I think this is more or less inevitable so long as the enforcement of regulations is organised along profit-making lines. Law firms want to make money, which drives up the overall size of the market.

          To reverse the incentives, give the job of enforcing regulations to the public sector, who will do it inefficiently, reluctantly and cheaply.

        • Colonel Hapablap says:

          Law firms might play a part, but the problem would still be there without them. They are likely a bigger factor in places that are more litigious, like the states.

          The justifications for safety theatre that I hear often:
          -fear of prosecution for negligence “so and so didn’t do this and they got in the shit when they had an accident, even though it wasn’t a causal factor”
          -fear of insurance claim denial “I know it doesn’t make sense, but if we don’t get all our electrical appliances inspected each year insurance won’t pay out if there’s a fire”
          -wanting to look good to the regulator “technically we don’t need to do this but it looks good during the audit if we show that we’re doing stuff like this”
          -required by a client “the client said we need to fill out these forms with questions like ‘have any of your employees ever bribed a government official'” or “they are only accepting bids from companies with ISO 3283285 certification, that’ll require us to hire two people just for compliance, but the contract is worth x so we might as well”

          This last one is a bit like college diploma signalling: a cheap way for the client to find a good contractor, an expensive way for a contractor to signal that they are a good contractor.

      • Randy M says:

        Society clearly values multiple, occasionally conflicting things, and given that, it cannot allocate infinite resources to any value (and even if safety was all, there are multiple ways to increase safety), and therefore it is useful to know the cost of safety measures and determine which are efficient and which are not.
        I don’t think I’m saying anything insightful here, so am I missing something about your comment?

        • Colonel Hapablap says:

          The point is really that it isn’t useful to know the cost of safety measures, since there is no way to roll them back. I did provide a solution there but not a practical one.

          I should have expanded on this more in the original comment. The time and resources it takes to decide to implement a safety measure is some tiny fraction of the time and resources it takes to decide to repeal it.

    • bean says:

      Oxygen masks on airliners. I don’t know what the compliance cost is, but they’ve never actually saved anyone, and once put an airliner into a swamp.
      (I don’t work there any more, but I also don’t have a good example from military aviation.)

      • Colonel Hapablap says:

        Great example, Bean.

        I had a hard time finding the weight of chemical oxygen generators, but this one: http://www.o2pak.com/ weighs 1.36kg for 22 minutes of O2. FAA requires 10 minutes of O2 per passenger, but with the aircraft fittings and mask door hinges etc. it’s probably safe to assume that the generator adds 1kg per seat.

        About half the 30,000 active jets in the world are 737 size with 140 seats, and half are larger up to A380 with 500. So assume average is 250 seats. Airlines are required to carry an extra 10% generators, so that will be 275 generators per aircraft. 275 generators means 275kg of extra weight per aircraft.

        Those 30,000 jets fly an average of 3000 hours per year, for 90,000,000 hours. An extra 275kg of weight means an average increase of about 7kg per hour fuel burn, over 90,000,000 hours is an extra 618,750,000 kg of jet fuel per year. This means the direct cost in fuel is $445,500,000 per year.

        The current carbon footprint cost calculated by world governments is $39 per ton, others estimate the real cost to be as much as $900 per ton. Burning 618,750 tons of jet fuel creates 1,900,000 tons of CO2. This has carbon footprint cost of between $74,000,000 and $1,707,000,000. Presumably that cost includes human lives from global warming.

        So if Bean is right and there is no increase in safety, then we are spending $445,000,000 per year in extra fuel, and speeding up global warming to the tune of $74,000,000 to $1,707,000,000 for no good reason at all.
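
        The same back-of-the-envelope estimate as a small script, so the assumptions are easy to tweak. Every input is a rough figure from the paragraphs above; the fuel price and CO2 factor are the values implied by the totals, not authoritative numbers:

        # All numbers are rough assumptions restating the estimate above.
        jets = 30_000
        seats_per_jet = 250                  # fleet average
        spare_generators = 0.10              # required 10% extra
        kg_per_generator = 1.0               # generator plus fittings, per seat
        hours_per_jet_per_year = 3_000
        extra_burn_per_kg_per_hour = 0.025   # kg fuel per kg carried per hour (implied by ~7 kg/hr for 275 kg)
        fuel_price_per_kg = 0.72             # implied by the $445.5M total
        co2_per_kg_fuel = 3.07               # implied by the 1.9M t figure; ~3.16 is the usual factor

        extra_kg_per_jet = seats_per_jet * (1 + spare_generators) * kg_per_generator  # 275 kg
        fleet_hours = jets * hours_per_jet_per_year                                   # 90,000,000
        extra_fuel_kg = extra_kg_per_jet * extra_burn_per_kg_per_hour * fleet_hours   # ~618,750,000 kg
        fuel_cost = extra_fuel_kg * fuel_price_per_kg                                 # ~$445,500,000
        co2_tonnes = extra_fuel_kg * co2_per_kg_fuel / 1_000                          # ~1,900,000 t

        print(f"{extra_fuel_kg:,.0f} kg fuel, ${fuel_cost:,.0f}, {co2_tonnes:,.0f} t CO2 per year")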

        There’s another way to reduce weight on every airliner by 275kg, which would net the same drop in $445MM fuel costs: reducing fuel reserve requirements by 3 minutes. There hasn’t been a fuel starvation accident since 1990 that may have been prevented by having more fuel reserves.

        • bean says:

          You left out the maintenance burden of having to purchase all of the stuff and keep it certified. If that runs to $200/generator/year (aerospace equipment and labor is expensive) then you’re looking at $1.65 billion in direct costs, plus the indirect fuel costs. $2 billion/year total for no lives is probably a reasonable middle-of-the-road estimate.

      • Doesntliketocomment says:

        I assume you mean oxygen masks for passengers, as oxygen for pilots has undoubtedly saved hundreds if not thousands of lives.

        • Colonel Hapablap says:

          Yep

        • bean says:

          Yes. Pilot oxygen is very important if flying at high altitudes, even if you have pressurization. In practice, you can always get the plane low enough to keep the passengers alive unless you’re flying over the Himalayas. I’d say tens of thousands easily. (10,000 passengers is only ~40 planes.)

    • Froolow says:

      This is one of my favourite papers – partly because it is so useful to carry around to explain the concept of cost-effectiveness, and partly because it sparked off a very interesting adversarial collaboration on accounting for the cost of climate change that really helped shape my thinking on the nature of ‘costs’.

      https://www.ncbi.nlm.nih.gov/pubmed/7604170

      There used to be a free copy on Google Scholar but it seems to have disappeared recently, sorry. It’s also well out of date at this point!

      There seem to be a few candidates for the most wasteful activity, but my impression overall is that interventions designed to save one specific life are less wasteful than interventions designed to lengthen lives generally (eg pollution control), which is the opposite of my intuition of where people would waste money.

  46. Aevylmar says:

    There’s something that may be a historical law that I have observed, and I’m trying to find all the exceptions I can:

    “No republic that has been stable for more than X years has ever stopped being a republic due to anything other than foreign military intervention.”

    By ‘republic,’ I mean that at least 10% of the population living under the rule of the government has the vote, and that unelected officials do not have the de facto ability to refuse power to elected officials; I count the UK as a democracy despite its status as a constitutional monarchy. Similarly, by ‘stable’ I mean multiparty elections in which at least two parties actually have a shot at winning, without any military coups interfering with these elections.

    The strong version of the claim is that X=50. The weak version is that X=100.

    Apparent exceptions:

    For X=100, the Roman Republic is an exception, but an interesting and complicated one! Most of the things that went wrong with Rome – in particular, the population from which the army was drawn lacking, in large part, the ability to vote – haven’t gone wrong with most of the countries in the western world.

    I definitely think it is worth intensively studying Rome to figure out what the precise limitations are on this theory, but are there any other states we can learn from, that also had very long periods of democracy?

    For X=50, Rome still counts, and Turkey looks like an exception; in fact multiparty elections did not start until 1945 and there were coups in 1960, 1971, and 1980, so Erdogan becoming a dictator should not be as surprising as it is.

    Venezuela also looks like an exception, but Wikipedia says “Much of Venezuela’s 19th-century history was characterized by political turmoil and dictatorial rule,” and from its very, very short article, yup, that looks accurate, with regular coups and military governments straight through not just the 19th century, but also the 20th.

    The Kingdom of Italy may or may not be an exception, I’m not qualified to say. Wikipedia says that, shortly after unification, “only a small percent of wealthy Italians had the right to vote”, and mentions a good deal of corruption, but I can’t tell just how bad that is – all societies complain about corruption. Unification was finished by 1870, and the Fascists took power in 1922, slightly more than fifty years later.

    The book I read on WW2 Japan said that, in order for the Diet to form a government, the army and the navy both had to give approval, and multiple governments the Diet elected were rejected by either the army or the navy. So, in fact, the elected officials had to bow to the will of the military.

    What is the lowest number X can be and still have the Roman Republic be the only exception? Are there more exceptions I’m missing?

    • Anonymous says:

      By ‘republic,’ I mean that at least 10% of the population living under the rule of the government has the vote, and that unelected officials do not have the de facto ability to refuse power to elected officials; I count the UK as a democracy despite its status as a constitutional monarchy. Similarly, by ‘stable’ I mean multiparty elections in which at least two parties actually have a shot at winning, without any military coups interfering with these elections.

      Are you conflating ‘republic’ and ‘democracy’?

      I think, in general, that this observation suffers from much the same problem as the “no two democracies ever went to war with each other” idea – either the sample size is extremely low (with strict definitions), or it’s incorrect (with less strict definitions).

      • Aevylmar says:

        Yes, I am. When I try to use terms like ‘democracy’, everyone goes, ‘oh, Rome wasn’t perfectly democratic, the UK isn’t democratic, the US isn’t democratic’. I try to use ‘republic’, and now… 😛

        Basically, I’m trying for ‘natural’ definitions. That is to say, ‘it is actually a republic/democracy, and it has actually lasted for a while, and it has not just been faking being democratic, USSR-style.’ I think those are the actual requirements, and everything else is just me trying to write them down rigorously.

    • WarOnReasons says:

      If my memory is correct there were several Republics in medieval Italy which stopped being Republics according to your definition (i.e. the effective size of the voting population dropped below 10%).

      • Aevylmar says:

        Excellent! I know very little about medieval Italian republics, so they provide an excellent test case for my theory. Can you tell me more?

        • WarOnReasons says:

          Each republic had a different history but the general trend was a gradual transfer of power from general assemblies (which included most adult males) to various councils elected by the patrician class. Here are a couple of links about how it happened in Venice:
          Serrata_del_Maggior_Consiglio
          Concio

          If my memory is correct, similar process also occurred in medieval republics outside of Italy (e.g., Hansa city-states and the Novgorod Republic). I’m not a specialist though, and if you are really interested you should not trust me on this.

          Btw, you were discussing exceptions to the rule, but how many republics are there that do satisfy it for X=100? According to the 10% definition, even the UK barely qualifies (before 1885 less than 10% of its people had suffrage). At first glance it does not seem to me that the rule is statistically significant.

          • SamChevre says:

            Each republic had a different history but the general trend was a gradual transfer of power from general assemblies (which included most adult males) to various councils elected by the patrician class.

            Isn’t this what we observe happening currently in the US and the EU? I would say that in the last century, a huge amount of power has been transferred from elected bodies (local school boards, town councils, state governments, the US Congress) to appointed/professional bodies (the Department of Education, the EPA, HUD, the state courts, the US courts, especially the Supreme Court).

    • Eric Rall says:

      At a high level, the problem with the Roman Republic was that the power centers of the state shifted away from the groups represented by the Republic’s institutions. The Republic initially represented the landed gentry and yeomanry (anachronistic terms, but the concepts fit) of Rome itself, these being the economic muscle of the state as well as the effective military population, and was revised at various times to add representation for new power centers, usually after a period of civil unrest or outright war.

      The big shift that broke the Republic was the combination of the acquisition of rich possessions outside of Italy (a big source of revenue outside of the economy of Rome proper, plus a place that needed soldiers, governors, and generals stationed outside of direct supervision of the Senate and Consuls for extended periods of time) and the Marian reforms (replacing the Republican army made mainly of levies from among the gentry and yeomanry, represented in the Senate and the Assembly, with long-serving professionals drawn from the lower classes and reliant on their generals to advance their political interests). The latter was largely necessitated by the former, as the Empire increased both manpower demands and deployment times beyond what the Republican military system could sustain.

      • Douglas Knight says:

        Rome ceased to be democratic by conquering enough people to reduce it below 10% suffrage, thus removing the protection of democracy.

        • Douglas Knight says:

          That was a joke, but I guess you already responded to it by saying that only people resident in Rome should count as the denominator for suffrage, because people outside can’t, logistically.

    • Douglas Knight says:

      Sharp thresholds, like 10% and 100 years, are arbitrary. Why would you expect a threshold effect?
      I’m confused about your definition of “stable.” It appears gerrymandered to count America as stable 1860-1865.
      Also, how can you tell the difference between fake elections and a party that just keeps winning? Do you have to make a judgement call about whether to trust the people who say that the 1994 Mexican election was open, or do you, as some people advocate, take the objective criterion of a new party winning power, as only happened in 2000?
      (Instead of defining democracy by 10% suffrage, why not just define non-voters as “an outside force”? So if a cardinal deposes the pope, that’s a coup. But if a bishop does, that’s conquest.)

      This is heavily confounded by modernity. Maybe modern states are just more stable. If you think that democracy is more stable, you should compare like with like, changing only democracy. You should compare democracies to non-democracies among modern nation-states and among Renaissance city-states.

      Sometimes people say that democratic medieval Iceland was conquered by Norway, but it looks to me more like a breakdown of democracy.

      • Eric Rall says:

        In that framework, how do you handle situations where the overthrow is conducted by a coalition of voters and non-voters?

        • Douglas Knight says:

          Yeah, I meant to add something about that and how it might become absurd in the extremely small examples, because individuals can’t take big actions alone. But usually tiny electorates represent people with military strength. If a Prince Elector conquers, the action should be attributed to the voter, not to his soldiers. The College of Cardinals could be more difficult to adjudicate, because Cardinals often represent the interests of a House but are not its head. These are both silly examples, but the point was to push it as far as possible as part of asking how far.

          • Eric Rall says:

            I was thinking of situations where someone inside the system (a “voter” or similar) recruits those outside the system (either foreigners or disenfranchised domestic constituencies) for support. For example, Henry Tudor (an English noble with an arguable claim to the throne) launching an invasion of England with military and financial support from France and Brittany.

            Or the superficially similar situation of a foreign conqueror installing a locally-recruited figurehead, like the Third Reich installing Vidkun Quisling as the head of government of occupied Norway.

            There are also situations where the overthrow is conducted by and on behalf of members of the existing domestic power structure, but is driven by circumstances created by “non-voters”. For example, the replacement of the French Third Republic by Petain’s Vichy regime: Petain was the legally appointed Prime Minister, and he was granted the power to dictate a new constitution by a vote of the existing legislature, but none of it would have happened except in the face of France’s military defeat by Germany.

          • Douglas Knight says:

            Sure, that’s a problem, but it’s a problem for Aevylmar. Since you addressed me, I thought you meant that it’s a problem that gets worse as the suffrage decreases.

            I agree that those examples are formally similar and hard to distinguish by objective criteria, even though it seems pretty clear that we should credit the conquests to a single agent: Henry, Hitler, and Hitler.

      • Sometimes people say that democratic medieval Iceland was conquered by Norway, but it looks to me more like a breakdown of democracy.

        It was a very interesting system, but it wasn’t a democracy in any sense we would recognize. You are correct, however, that it was breakdown rather than conquest, although the breakdown may have been encouraged by the Norwegian crown.

  47. Le Maistre Chat says:

    Let’s postulate what a think tank focused on AI, X risks, far future technologies, virtue ethics and the like would be like.
    Because I’m so tired of utilitarianism being treated as “futuristic ethics” or whatever the exact implication is.

    • Nick says:

      I imagine a big question might be how an AI learns a virtue like justice or charity, provided we take a view that virtues are habits.

      • Wrong Species says:

        I don’t know if this is viable or not, but I’ve been wondering what would happen if you had people fill out a large number of questions related to morality. It wouldn’t just be moral dilemmas; it would include something as banal as “should you kill someone because they didn’t answer your question?” Aggregate enough of these responses, taking into account how often people disagreed with each other, feed it into a neural network, and you should get an approximation of how humans behave. I’m sure there is some reason it’s more complicated than that, but it seems to be a better starting point than trying to solve ethics.
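
        A minimal sketch of that aggregation idea, with invented toy data and a simple TF-IDF plus ridge regression stand-in where you say neural network (the aggregation step is the same; scikit-learn is just an assumed convenience):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline

        # Invented toy data: each question is paired with the fraction of surveyed
        # people who answered "morally permissible", which also encodes disagreement.
        questions = [
            "Should you kill someone because they didn't answer your question?",
            "Is it acceptable to lie to spare a friend's feelings?",
            "Is it acceptable to steal food to feed your starving child?",
        ]
        fraction_permissible = [0.01, 0.62, 0.78]

        model = make_pipeline(TfidfVectorizer(), Ridge())
        model.fit(questions, fraction_permissible)

        # The trained model maps a new moral question to a predicted consensus score.
        print(model.predict(["Should you kill someone for cutting in line?"]))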

        • Aapje says:

          you should get an approximation of how humans behave

          No, you’d get an approximation of how humans want to behave.

          • HowardHolmes says:

            No, you would get an approximation of how humans want others to think they would behave.

          • toastengineer says:

            No, you’d get an approximation of how humans want to behave.

            No, you would get an approximation of how humans want others to think they would behave.

            Label this one “not a bug.”

      • Nabil ad Dajjal says:

        Telos is a lot easier for an AI than a human, because you can directly ask their creator(s):

        Butter Robot: What is my purpose?

        Rick: You pass butter.

        [Butter Robot looks dejected]

        Butter Robot: Oh my god…

        Rick: Yeah, welcome to the club pal.

      • Wrong Species says:

        @Nabil

        The problem is that the AI might not care what you say its purpose is. Humans are designed to have children, but that isn’t going to change the mind of someone who is adamantly against it.

        • Nabil ad Dajjal says:

          Ok, but by that logic it could just as easily disagree with its utilitarian creators that human utility is worth pursuing.

          My point was that the telos of an AI, like that of any other manufactured object, is trivial to determine. Whether or not the AI would actually act in accordance with its telos is a different and much harder question. We can describe an AI which does as virtuous and one which doesn’t as vicious, with much less ambiguity than we would when judging a human being.

        • Wrong Species says:

          I don’t think the telos of humans is that hard to figure out. It’s to have offspring. If you think it is more complicated than that, then I think an AI would probably think the same thing.

    • I think the numberiness of utilitarianism comes from non-metaphysical assumptions. For instance, the proofs by John von Neumann, Oskar Morgenstern, and John Harsanyi don’t really make any assumptions about what “utility” is to “prove” that we should optimize the sum of individual utility [1]. The same proofs follow if utility is “virtues” or “rules-followed-in-universe-branch” or “pleasure-minus-pain”.

      So, (I think?) the idea that we should be optimizing for the sum of individual “goodness” doesn’t really rely on any metaphysical assumptions – just plain ethical ones.
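
      For reference, the result I take this to be pointing at is Harsanyi’s aggregation theorem (stated from memory, so treat the exact conditions with care): if each individual i has a von Neumann–Morgenstern utility function u_i, the social preference also satisfies the vNM axioms, and Pareto indifference holds, then the social utility function is an affine combination of the individual ones,

      \[
        U(x) = \sum_{i=1}^{n} w_i \, u_i(x) + c ,
      \]

      with the weights non-negative under a stronger Pareto condition. Nothing in the statement constrains what each u_i measures, which is the point above.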

      I personally don’t know of any clearly persuasive arguments that take us from this kind of sum-itarianism to utilitarianism, and I (agree?) that this is probably the weakest link in the utilitarian argument.

      [1] shameless self-promotion: http://compressingreality.com/Ethics/

  48. fredtwilight says:

    Just a little test for the chess series to see if I can embed lichess studies into the comments here.

    EDIT:
    Seems I can’t. Could anyone help me out here?
    Here is a link to the study embedding, and here is the study
    Is it possible to put something like an iframe in a comment here?

  49. eterevsky says:

    A random question then. Which of the following events will happen first, and with which probability:

    – A self-sustaining colony on Mars will be established,
    – Aging will be cured one way or another,
    – Humanity will die out.

    I thought about it some time ago and it seems like the first outcome is the least likely.

    • The second event will happen first, probability better than .5, probably within fifty years.

      • Hackworth says:

        Does that include the trivial case, i.e. (self-)annihilation of the human race?

      • Eponymous says:

        The second event will happen first, probability better than .5, probably within fifty years.

        Why do you think this?

        (I’m rather interested in this claim since it might significantly affect my decisions.)

    • userfriendlyyy says:

      Humanity will die out long before the other two are remotely plausible. Climate change will make our present level of economic activity absolutely impossible by 2050. It isn’t a question of if sea level will rise, but when. The best case scenario has one foot of rise by 2100; we look likely to hit that by 2050. There will not be a functioning economy with Miami, NYC, Bangladesh, and a billion other people displaced by sea level rise. The question is really how many will survive that and what kind of life they will have. I don’t plan on sticking around to find out.

      • Anonymous says:

        I’d bet you considerable money if betting were legal where I am. Maybe I should tip off Bryan Caplan!

      • eterevsky says:

        While I do obviously believe in climate change, I think you grossly exaggerate its effect on humanity. While it might become unpleasant, it doesn’t constitute an existential risk in at least the next century.

        And if worst comes to worst, we do have technologies to lower the global temperature, like releasing SO2 in the stratosphere.

        • Anonymous says:

          And if worst comes to worst, we do have technologies to lower the global temperature, like releasing SO2 in the stratosphere.

          Or if civilization collapses, that’ll solve anthropogenic emissions too.

      • Carl Milsted says:

        One foot of sea rise wiping out humanity?! Worst case scenario: romantic comedies take place in Charlotte instead of New York City. Otherwise, humanity could survive quite a bit of water rise. (Go south of NYC and most of the cities are already built away from the coast. The Fall Line is where many cities lie, Norfolk/Virginia Beach being the big exception until you get all the way down to Florida.)

        Note that the Chinese have demonstrated the technology to build new cities with incredible speed. Cities can be moved.

        • arlie says:

          Do you realize that global climate change includes rather more than just sea level rise? It seems to me that some of the other effects will be at least as hard to deal with.

          • It includes other effects, good and bad. Doubling CO2 concentration in the atmosphere increases the yield of C3 crops, which is everything major except maize, sugarcane, and sorghum, by about thirty percent, increases the C4 yield as well but not by as much. That’s a huge increase in food production.

            Warming means more people dying in hot summers and fewer in cold winters–and currently, global deaths from cold are much larger than from heat. It means the loss of a small amount of very valuable land through sea level rise, and the gain of a much larger amount of less valuable land as temperature contours move towards the poles, increasing the amount of land suitable for human use.

            The one negative effect which I see as very uncertain but possibly very bad is the reduction in the pH of the ocean–as best I can tell, we don’t know how large the effect of that on ocean species will be.

            Your basic point, that there are many effects, is correct. What almost everyone ignores is that those include positive as well as negative effects, and that the size of the effects is quite uncertain, with the result that we don’t know whether the net effect on humans will be positive or negative.

          • mrthorntonblog says:

            Human migration could be disruptive in unpredictable ways.

        • christianschwalbach says:

          David Friedman, you mention nothing about the effects of drought, or the fact that it’s not just global warming but climate change, which includes freak weather patterns not conducive to consistent crop production. Also, the nations with populations affected by sea level rise in some ways have less ability to absorb the disruption, especially in Asia. (The Netherlands could engineer their way out of it, but that’s an exception.)

          • In the fourth IPCC report they claimed climate change was causing drought. In the fifth report they retracted that claim.

            Why would you expect it to cause drought? If it’s a little warmer evaporation from land increases, but so does evaporation from ocean, which is where rain largely comes from.

            Also, one of the effects of CO2 fertilization is to reduce water requirements, since plants don’t have to pass as much air through their leaves to get the carbon they need. So for a given level of water availability problems of drought are reduced.

            What do you think the scale is of land loss due to sea level rise in the countries you mention? At the high end of the high-emissions scenario for 2100, sea level rise is about a meter–more plausibly something like half that. There are a few very low-lying coasts where that would be a problem, but most of Asia is considerably more than a meter above sea level.

            And remember that we are talking about changes over most of a century. Fifty years ago China was dirt poor. Now it isn’t.

        • Cities can be moved.

          But not for free.

          • MB says:

            A city gets pretty much entirely rebuilt once every few centuries. If the change is sufficiently slow it will be almost business as usual.

      • baconbits9 says:

        1. The world has multiple cities that are below sea level which function, sea level rise can be managed without abandoning cities.

        2. The average projection for the economic impact of global warming through 2050 is net positive.

        • rlms says:

          2. The average projection for the economic impact of global warming through 2050 is net positive.

          Paging Controls Freak

          • Controls Freak says:

            Hiiiiii. Sorry, but I’m very slow to these threads these days. In any event, here goes.

            All the projections of economic impacts of global warming through, say, 2050 suck. They use static economic models based on today’s economy, which definitely isn’t valid at timesteps along the way. (You are letting your fast economic subsystem converge before stepping forward in the slow climate subsystem, right? Oh, you don’t even actually have a dynamic economic model…? Oops.)

            Think about the fact that economists agree that it’s hard to forecast economic effects of policy choices, such that projections can “differ substantially from outcomes”. Now, think about the fact that the CBO doesn’t even try to project more than ten years in advance (…and then maybe ask your friendly local economist about how much they trust the CBO’s dynamic scoring…). We simply don’t have an economic model with any sort of necessary precision for 2030, much less 2050! Forget about conditioning those models that don’t exist on a slow climate parameter.

            This much should be pretty non-controversial, even before we get to trying to really understand how adaptation timescales work. I think that both the guy who thinks that CC ruins the economy and the guy who thinks that economic projections prove that CC is double-plus-good are basing their positions on a house of cards. We really have no bloody clue.

          • Unfortunately, doing nothing is not standing still.

          • If you want predictable *costs*, the only way is to do what the consensus advises and spend so many dollars a year on incremental efforts. You can’t predict how much it will cost to remove cities, house refugees and so on.

          • Controls Freak says:

            Unfortunately, doing nothing is not standing still.

            I’m not sure what you mean or why you would believe that I hold an opposing view. Proceeding with or without climate mitigation actions will result in the progression of a timescale-separated dynamical system (the slow system being climate, the fast system being politics/economics/etc). It’s pretty tautologically impossible to stand still, and I actually think that understanding these dynamics is something that the, uh, pro-mitigation folks (I don’t have a good term here that isn’t politically-charged) are failing to get. They hold the fast system fixed (they stand it still), take a huge step in the slow system (imagine a big change in climate were to happen kind of immediately, under current conditions), and therefore end up with a completely theoretically indefensible conclusion.
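            If the timescale-separation point is hard to picture, here is a toy numerical sketch. To be clear, the dynamics, rates, and numbers below are all made up for illustration and come from no climate or economic model whatsoever; the only point is the difference between letting a fast variable track its moving equilibrium and holding it fixed while the slow variable drifts.

            ```python
            # Toy two-timescale system, purely illustrative. The "slow" variable s
            # drifts at a constant rate (stand-in for climate); the "fast" variable x
            # relaxes quickly toward an equilibrium that depends on s (stand-in for
            # the economy/politics adapting). None of this comes from a real model.

            def fast_equilibrium(s):
                """Where the fast variable settles for a given value of the slow one."""
                return 1.0 / (1.0 + s)  # arbitrary smooth dependence

            def simulate(adapt, t_end=100.0, dt=0.01, eps=0.05):
                """Euler-integrate ds/dt = 0.01 and eps*dx/dt = fast_equilibrium(s) - x.

                adapt=True lets the fast variable track its moving equilibrium (the
                defensible treatment of timescale separation). adapt=False "holds the
                fast system fixed" while stepping the slow one -- the error described
                above.
                """
                s = 0.0
                x = fast_equilibrium(s)
                t = 0.0
                while t < t_end:
                    s += 0.01 * dt                                   # slow drift
                    if adapt:
                        x += (dt / eps) * (fast_equilibrium(s) - x)  # fast relaxation
                    t += dt
                return s, x

            s_final, x_adapted = simulate(adapt=True)
            _, x_frozen = simulate(adapt=False)
            print(f"slow variable at the end:            {s_final:.2f}")
            print(f"fast variable, allowed to adapt:     {x_adapted:.3f}")
            print(f"fast variable, (wrongly) held fixed: {x_frozen:.3f}")
            print(f"equilibrium it should be tracking:   {fast_equilibrium(s_final):.3f}")
            ```

            The numbers themselves mean nothing; the point is that the “held fixed” computation quietly assumes the fast system never responds, which is exactly the move the naive thought experiments make.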

            If you want predictable *costs* the only way is to do what the consensus advises, and spend so many dollars a year on incremental efforts.

            You can’t even do that. Here’s a pro-mitigation person arguing this case. Relevant pull quote:

            So, what is the cost of an increment of solar power in, say, 2060? Ha ha. Who the frack knows that? How about biomass, or CCS, or electric, self-driving cars, or so-called smart cities? How will advances in some of these technologies affect the cost of others? How will changes in consumption patterns or land use affect their costs? No one knows.

            Fundamentally, if we’re trying to analyze effects which are mediated through a slow-timescale climate system, we have to actually engage with models that are remotely valid on those timescales. To the extent that you’d like to make a different argument, like the, “What happens if we made the world a better place for nothing,” argument, then you can avoid these timescale issues. But then your argument is completely divorced from a discussion about climate change. You would do much better to simply make the short-timeline tech argument that is standard in any other argument about which tech we should use. It’s no longer about mitigation; it’s just about current tech.

            You can’t predict how much it will cost to remove cities, house refugees and so on.

            I’m glad you kind of agree (maybe not knowing how much you’re giving up), but I’d like to continue with my standard spiel here to show why these particular concepts are often misunderstood. You’re not thinking fourth-dimensionally. I like to depoliticize it and talk about airplanes first. Airplanes have a fast subsystem (orientation/velocity/etc) and a slow subsystem (fuel usage). We know that over the course of a couple hours, the plane will consume a lot of fuel, weigh a lot less, and probably have a CG shift. If we were being so flippant, we could say, “You can’t predict how much it will cost to adapt to a large change in weight/CG location.” If we say it in the right tone, it even sounds scary and bad. Well, we actually get a little lucky in that we can model a sudden change in weight/CG location! I can do it in my flight simulator. It gives the airplane a hell of a kick. You’re definitely no longer flying at your most efficient altitude. Worst, if I do it right, I can make the plane go unstable! (…I used to actually have my students do this in their homework…)

            The point is that we would be mentally coming to exactly the wrong conclusion, precisely because we’re ignoring the timescales involved. We’re mentally holding the fast system fixed, making a big change in the slow system, and seeing it all go wrong. In reality, the fast system’s stability characteristics never give it a chance to deviate much from trim, fuel can be pumped around to slow down the changes in CG, and pilots gradually increase altitude throughout the flight to maintain the most efficient altitude.

            We can start to see the devious nature of this failure. Sure, you can talk about the “costs” of a sudden change in weight, but how are they properly considered to be “costs” in the real case? What’s the counter-factual? Instead of just kind of internally imagining something happening all at once, we have to actually construct a counter-factual that is linked to the slow-timescale system. That is, it’s sensible to talk about the “cost” of performing a step up in altitude in increments of 1000ft versus continuously increasing altitude, but you inherently need to model how the fast-system behavior differs and how this is mediated over the course of the flight.

            So, with “remov[ing] cities,” we similarly can’t just imagine everything in society staying fixed and a big change in climate happening (a climate “event”), and then wonder, “Crap! What do we do with all these cities and people?!” Instead, like a pilot slowly changing altitude, people and infrastructure will move, just kind of in the course of regular business. “Man, our building is 75 years old, and it’s getting kinda crappy. We’re going to build a new one. It’s in the hot new area that’s inland a bit. Maybe we can sell this place; maybe someone will even still want to build something here and hope they can get a couple decades out of it. Maybe we just let it get swallowed by the sea over the next 30 years. We’ve had a good run and gotten a lot of value from it.” What’s the counterfactual for how this fast system actually progresses? Does mitigation mean that they spend a bunch of money rehabilitating the old building instead? That someone knocks it down and builds a new one on the same site? What’s the actual baseline from which we’re deriving “costs”? I’m not silly enough to claim that I know what the answer is, but I’m also not silly enough to think that stray thought experiments that get the timescales the wrong way ’round and imagine us going from exactly what we have right now to a world where, suddenly, 100m of coastline is disappearing, are remotely defensible methods of going about this problem.

            The mention of refugees allows me to repeat one of my favored “not thinking fourth-dimensionally” stories, though I have to admit that someone here has challenged me on the details previously. I also admit that I haven’t had suitable free internet time to really track it down, so the particulars may be wrong (I really need to do this). Anyway, even if interpreted as a hypothetical, the method of error is illustrative. As I recall, folks were concerned about the Mosul dam, or some other dam that I believe was in the Middle East. Folks took the amount of water it was holding, projected how much land it would fill if let loose, counted the population on that land, and said, “X million people could die if this dam fails!” Well, it turns out that the dam was significantly upstream of the populated area, giving a possible time delay between the dam breaking and the water reaching its ultimate, people-killing destination. I recall reading someone (can’t remember if it was here or elsewhere… thus, I really need to find time to work Google pretty hard) doing the math and saying that a person could walk from any location in the danger area to safety in time << the time it would take for the water to reach the populated area. In this case, going from a wrong methodology to a right methodology reduces the danger from “X million deaths” to “X million refugees.” This case is obviously still very bad, but again, the key is the methodological error. You’re almost certainly imagining a “climate event” which causes a bunch of “climate refugees”, but that ignores the fact that doing nothing (to mitigate climate change) is not (people, politics, and economics) standing still.

            We need to actually model these things, having a defensible model of the fast system to give us a baseline with which to actually compare alternatives in order to compute “costs”. The proposed naive thought experiments are theoretically indefensible and don’t cut it. None of the published models in the literature cut it; they all make a fundamental error kind of like this. I want to go further and argue that we simply can’t do it right, and you actually seem to kind of agree… I just don’t think you’ve quite grasped how important these timescale effects are to making this conclusion or how they undercut your other ideas for how to conceive of “costs”.

            I really don’t think any of that should be too controversial. But I expect some form of controversy in an attempt to fight the reasoning, because the next thing I’d like to do is go one step further and say that we don’t have any idea about whether choosing mitigation or not imposes costs, benefits, or is neutral… at least as far as long timescale effects that are mitigated by the slow climate system are concerned.

          • I don’t think you’re guilty of this, but a lot of people see the costs of mitigating CC as just destroying wealth without getting any return, and the cost of doing nothing as zero. That claim has even been made on this forum, so you can expect it to be more common elsewhere.

            If your concern is the affordability of an ongoing programme to mitigate climate change, what you need to know is whether global GDP remains stable or increases; it doesn’t much matter which. That’s a very high-level index which is not critically dependent on the fine details of economic activity. I don’t find the airplane analogy to be all that analogous.

            Incidentally, if an economic collapse occurs, that will reduce carbon emissions automatically, so mitigators can safely ignore the possibility. However, non-mitigators expect future generations to pay for the effects of CC, and they might not have the money. Moving a city on the cheap will look like a refugee camp, not a space-age future.

            Even if the costs of non-mitigation are not point events, they are still deferred. Spending $X billion a year will cost $X billion a year; it’s very predictable. Of course, it is not worthwhile if it achieves nothing. Achieving a target, such as everyone on solar power by 2050, rather than spending a certain amount of money, is much less predictable. But spending a certain amount of money, and adjusting the amount up or down according to how targets are being met, is relatively controllable and predictable, because it’s based on a tight feedback loop.

            With non-mitigation, you have two variables: what the environment will do, and how much it will cost to fix it. The point of mitigation is that you only need to predict how much it will cost to fix it.

          • Controls Freak says:

            With non-mitigation, you have two variables: what the environment will do, and how much it will cost to fix it. The point of mitigation is that you only need to predict how much it will cost to fix it.

            With non-mitigation of fuel usage, you have two variables – what the fuel usage will look like, and how much it will “cost” to “fix it”. The point of mitigation is that you only need to predict how much it will cost to fix it. And, in fact, we can do this! The easy solution is that we just don’t fly the plane. And in the spirit of your view, we then only need to compute the additional costs that we encounter by moving those people via a bus or cars or whatever.

            I’m hoping it’s starting to be obvious why my view precludes this type of reasoning. We haven’t justified at all why we’re calling the fuel usage “problem” (the one that arises solely from doing the timescales the wrong way ’round, not the actual cost of the fuel) a “cost” that needs to be “fixed”. Properly conceived, it’s not really a cost at all. So, take your sentence:

            With non-mitigation, you have two variables: what the environment will do, and how much it will cost to fix it

            …and justify to me why we should consider it a “cost” that must be “fix[ed]”. I don’t think you can do this (at least not in any theoretically-defensible way), so I reject the premise of your argument.

        • arlie says:

          Re #2 – citations please.

        • christianschwalbach says:

          1. Katrina. New Orleans 2005.

          2. This is not scientifically accurate whatsoever.

          • 1. New Orleans has been a catastrophe waiting to happen for a very long time. Sea level rise of seven inches over the previous century isn’t the reason it flooded.

            2. Would you like to cite some sources and estimates? I don’t know what source Baconbits is thinking of, but in Tol 2009, Figure 1 of the corrected version shows positive effects up to about 2°. Figure 10.1 of the latest IPCC report has two estimates for 1°, one a negative effect of about .5% (eyeballing the graph), one a positive effect of about 2.5%. For about 2° it has two estimates at zero, one at about -.5%. All of those are estimates of the net effect, put in terms of the reduction in real income that would have the same effect on humans.

      • arlie says:

        I tend to agree, but with low confidence.

        I don’t think climate change will wipe out the species – I think it will at best create a lengthy (centuries?) period where resources are short, and there are far more important things to do with what resources are available than either try to solve aging or try to colonize Mars.

        At worst, conflicts over scarce resources lead to the kind of war that sterilizes the biosphere, or similar doomsday scenario.

        Most likely? A long period of scarcity, falling living standards, and far too many people to live in comfort with available resources. Population eventually reduces to the new carrying capacity, with living standards significantly down from the 20th century. While some elites still control enough extra resources to direct them to selfish ends, beyond just immediate luxuries, researching life extension is difficult and expensive, and they don’t have enough resources for success to seem plausible. Colonizing Mars is worse, because it requires larger chunks of free capital to do even baby steps.

        Maybe the species will come out of that period and move on to either goal. But not soon. And if standards fall below the ability to maintain modern technological knowledge, getting back where we were before will be a lot harder than getting there in the first place, even after all the carbon effects have worn off.

        • Most likely? A long period of scarcity, falling living standards, and far too many people to live in comfort with available resources.

          What calculation of expected effects do you base that on? Figure 10.1 of the latest IPCC report shows estimates of the net effect on humans of various amounts of warming, put in terms of the reduction in world income that would have the same effect on welfare. For warming of up to 3°C, the worst result is just under 3%.

          Currently, world GDP growth is 3.9%/year. The population growth rate is 1.09%/year. So per capita GDP is increasing at about 2.8% a year. If we assume three degrees by 2100, that means that global warming from now to then will cost us the effect of about one year of economic growth. Supposing that growth continues at the present rate, per capita GDP in 2100 will be only about 9.2 times what it is now instead of about 9.5 times.

          And that’s taking the highest estimate of cost on the figure.
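          For anyone who wants to check the compounding, here is the back-of-the-envelope in full. It simply assumes the growth rates above hold constant from now through 2100, which is of course a strong assumption; the point is only the rough scale.

          ```python
          # Back-of-the-envelope check of the figures above. Assumes current growth
          # rates hold from 2018 through 2100 -- a big assumption; only the rough
          # scale matters.
          gdp_growth = 0.039    # world GDP growth per year
          pop_growth = 0.0109   # population growth per year
          climate_cost = 0.03   # ~3% income-equivalent cost of 3 degrees C (high estimate)
          years = 2100 - 2018

          per_capita_growth = (1 + gdp_growth) / (1 + pop_growth) - 1  # ~2.8%/year

          no_cost_factor = (1 + per_capita_growth) ** years
          with_cost_factor = no_cost_factor * (1 - climate_cost)

          print(f"per capita growth rate:               {per_capita_growth:.2%}")
          print(f"2100 per capita GDP, no climate cost: {no_cost_factor:.1f}x today")
          print(f"2100 per capita GDP, with ~3% cost:   {with_cost_factor:.1f}x today")
          # A 3% hit is roughly one year's growth at ~2.8%/year.
          ```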

          It’s possible, of course, that the results will be much worse than that, or much better. But I know of no basis for confidently predicting that climate change will make us much worse off in 2100 than we are now.

          We’ve been here before. Back in the 1960s, when the global catastrophe the experts predicted was population growth rather than climate change, Paul Ehrlich confidently predicted unavoidable mass famine in the 1970s, with hundreds of millions of people starving to death. That was on the high end of bad estimates, but the general view was that unless population growth was sharply reduced, things were going to get a lot worse, at least in the poorer parts of the world.

          What happened was the opposite.