The Craft And The Codex

The rationalist community started with the idea of rationality as a martial art – a set of skills you could train in and get better at. Later the metaphor switched to a craft. Art or craft, parts of it did get developed: I remain very impressed with Eliezer’s work on how to change your mind and everything presaging Tetlock on prediction.

But there’s a widespread feeling in the rationalist community these days that this is the area where we’ve made the least progress. AI alignment has grown into a developing scientific field. Effective altruism is big, professionalized, and cash-rich. It’s just the art of rationality itself that remains (outside of the usual cognitive scientists, who have nothing to do with us and are working on a slightly different project) the province of a couple of people writing blog posts.

Part of this is that the low-hanging fruit has been picked. But I think another part was a shift in emphasis.

Martial arts does involve theory – for example, beginning fencers have to learn the classical parries – but it’s a little bit of theory and a lot of practice. Most of becoming a good fencer involves either practicing the same lunge a thousand times in ideal conditions until you could do it in your sleep, or fighting people on the strip.

I’ve been thinking about what role this blog plays in the rationalist project. One possible answer is “none” – I’m not enough of a mathematician to talk much about the decision theory and machine learning work that’s really important, and I rarely touch upon the nuts and bolts of the epistemic rationality craft. I freely admit that (like many people) I tend to get distracted by the latest Outrageous Controversy, and so spend way too much time discussing things like Piketty’s theory of inequality which get more attention from the chattering classes but are maybe less important to the very-long-run future of the world.

Any argument in my own defense is entirely post hoc. But if I can advance such an argument anyway, it would be that this kind of thing is the endless drudgery of rationality training, the equivalent of fighting a thousand bouts and honing your reflexes. Controversial things are, at least, hard problems. There’s a lot of misinformation and conflicting interpretations and differing heuristics and compelling arguments on both sides. Figuring out what’s going on with Piketty is good practice for figuring out what’s going on with deworming etc.

Looking back on the Piketty discussion, people brought up questions like “How much should you discount a compelling-sounding theory based on the bias of its inventor?” And “How much does someone being a famous expert count in their favor?” And “How concerned should we be if a theory seems to violate efficient market assumptions?” And “How do we balance arguments based on what rationally has to be true, vs. someone’s empirical but fallible data sets?”

And in the end, I think we made a lot of progress on those questions. With the help of some very expert commenters, I resolved a lot of my confusions and changed some of my conclusions. That not only gives me a different view of Piketty, but – I hope – long-term trains my thought processes to better understand which heuristics and generators-of-heuristics are reliable in which situations.

Last year, I had a conversation with a friend over how we should think about the latest round of scientific results. I said over the past few years I’d learned to trust science more; he said he’d learned to trust science less. We argued it for a while, and in the end I think we basically had the same insights and perspectives – there are certain situations where science is very definitely trustworthy, and others where it is very definitely untrustworthy. Although I could provide heuristics about which is which, they would be preliminary and much worse than the intuitions that generated them. I live in fear of someone asking something like “So, since all the prominent scientists were wrong about social priming, isn’t it plausible that all the prominent scientists are wrong about homeopathy?” I can come up with some reasons this isn’t the right way to look at things, but my real answer would have to sound more like “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”

I think by looking at a lot of complicated cases, and checking back on them after they’re solved (which sometimes happens! Just look at the Fermi Paradox paper from earlier this week!) we can refine those intuitions and get a better idea of how to use the explicit-textbook-rationality-techniques. If this blog still has value to the rationalist project, it’s as a dojo where we do this a couple of times a week and absorb the relevant results.

This is one reason I’m so grateful for everyone’s comments. I only post a Comments Highlights thread every so often, but I’m constantly making updates based on things I read there and getting a chance to double-check which of the things I think are right or wrong. This isn’t just good individual rationality practice, it’s also community rationality practice, and so far I’m pretty happy with how it’s going.


680 Responses to The Craft And The Codex

  1. yodelyak says:

    Flattery will get you everywhere.

    Relatedly, we like what you’re doing, and think you’re doing it well. Every dojo can benefit from having somebody, or several of ’em, who spar with sufficient elegance to put at the front of the class and/or on the poster.

  2. reasoned argumentation says:

    The author of this piece can’t imagine any arguments that would convince him that communism was a bad idea before every communist state went off the rails and slaughtered millions.

    He also stated that he can’t even think of any rules for rationality that would have improved his reasoning here.

    If this is the best “rationality” can do, you’re doomed.

    EDIT:

    By the way, if you want to see the exact problem you’re having it’s encapsulated perfectly here:

    https://slatestarcodex.com/2018/07/03/ssc-journal-club-dissolving-the-fermi-paradox/#comment-645249

    You make a post about the Fermi paradox and the Drake equation – Steve Sailer makes an observation, directly related to the discussion at hand, that makes progressives uncomfortable: namely, that a giant factor in predicting mass behavior (such as developing technology that would be remotely detectable) is systematically overlooked because it undermines progressive orthodoxy, and that progressives responded to people who pointed out this factor not by disagreeing but by calling them heretics. You then do the exact same thing by banning him for his comment!

    There’s got to be a name for that type of error.

    EDIT 2: It appears I’m locked out of replies. Oh well.

    • thevoiceofthevoid says:

      I don’t recall anyone ever being called a “heretic” for their opinions on the possible implications or shortcomings of the Drake equation, nor have I seen any serious discussion of it having anything to do with “progressive orthodoxy” in any way. Seems to me like Steve Sailer was shoehorning in an argument about Bush’s housing policy that, whether or not it was correct, looks like less of an attempt at understanding the issue of the Fermi paradox and more like trying to start a “culture war”-ish argument.

      • reasoned argumentation says:

        There’s a missing parameter in the Drake equation for the second human speciation event.

        An intelligent enough species has to evolve, and then some members of that species have to move to novel environments, because wherever the initially intelligent species arose is going to have predators, prey, and microorganisms that all co-evolved and adapted to that species, which (on the margin) puts survival pressure on immune systems, bone density, and running speed rather than on intelligence. To break out of this trap, the species needs to move to a new, isolated environment that then selects for enough intelligence to develop more advanced technology.

        This adds another constraint that needs to be modeled – that some human groups don’t have the intelligence necessary to develop remotely detectable technology. Of course – as Steve pointed out – it’s treated as immoral to consider the actual implications of different human groups having different average levels of intelligence. The reaction to the hint wasn’t to ask for clarification, or even to consider it; it was to go berserk about “culture war”.

        thevoiceofthevoid

        “I don’t recall anyone ever being called a “heretic” for their opinions on the possible implications or shortcomings of the Drake equation, nor have I seen any serious discussion of it having anything to do with “progressive orthodoxy” in any way.”

        How about now? Have you now read an argument about the Drake equation that violates progressive orthodoxy in such a way that it can’t be considered? If you haven’t then why not bring this argument up the next time someone discusses the Drake Equation?
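
        For reference, here is a minimal sketch of where such a factor would slot into the Drake equation. The equation itself – N = R* · fp · ne · fl · fi · fc · L – is standard; the extra term f_iso, its name, and all the numbers below are illustrative assumptions, not values anyone in this thread proposed.

            # Standard Drake equation with one hypothetical extra factor, f_iso,
            # standing in for the proposed geographic-isolation bottleneck.
            # All parameter values here are placeholders for illustration only.
            def drake(R_star, f_p, n_e, f_l, f_i, f_c, L, f_iso=1.0):
                """Expected number of detectable civilizations in the galaxy."""
                return R_star * f_p * n_e * f_l * f_i * f_c * L * f_iso

            # With f_iso = 1.0 this reduces to the ordinary Drake equation;
            # f_iso < 1.0 models the extra "second speciation event" constraint.
            print(drake(R_star=1.0, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.1, L=10_000, f_iso=0.1))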

        • Evan Þ says:

          If you haven’t then why not bring this argument up the next time someone discusses the Drake Equation?

          Because it isn’t an argument. It’s an attempted analogy which – even if correct – would be far more distracting than helpful.

          • reasoned argumentation says:

            That you think it’s a “distraction” is a giant sign that you’re emotionally committed to a false factual belief.

            Steve’s comment was a perfect example of how what you find “distracting” is very, very important in a highly related area – how differences in intellectual traits between different human groups affect behavior.

          • Evan Þ says:

            Right now, I’m talking about the broader culture, which – I’m sure we can both agree – is emotionally committed to a particular belief on that subject. To them, Steve’s analogy was far more distracting than helpful. (And, I don’t see how being emotionally committed to a belief is a sign that belief is false – it’s a sign it isn’t consciously based on logic, but the culture as a whole rarely consciously bases beliefs on logic.)

          • thevoiceofthevoid says:

            @reasoned argumentation
            I’m not sure I agree with you on the implications of certain statistical trends in human intelligence, but I don’t find your position emotionally repulsive to the point that it should never be mentioned and any research that might partially support it should be stopped. (I suspect you might be going a bit too far and/or reinforcing some prejudices of your own, but that’s beside the point.)
            In any case, my own beliefs on the subject are uncertain, and I’m definitely not emotionally committed to them. Regardless, I still think Steve’s comment was genuinely distracting from the subject at hand. I disagree that Bush-era housing policy is at all related, let alone “highly related”, to the Fermi paradox. I don’t count “someone came to the wrong conclusion because they overlooked a factor”, because that applies to virtually every third event in human history.
            Honestly, it’s common sense: any mention of [banned term]-sounding things will derail a thread into a culture war faster than you can say “bell curve.”

          • Luke the CIA Stooge says:

            Any emotional reaction to protect a belief is a MASSIVE red flag that the belief is FALSE.
            The set of potential beliefs that one could hold on any subject is HUGE!

            The fact that, out of the thousands of potential beliefs you could hold about metaphysics, or God, or morality, or the inherent justness of the universe, or the relative merits of man, you just so happen to hold the ONE belief that is most pleasing to your moral and aesthetic sensibilities is a sure sign that you are not assessing the haystack of potential worlds and isolating the needle of the one true one, but instead are grabbing the straw you like best.

            THE FACT THAT YOU LIKE AN IDEA IS STRONG EVIDENCE IT IS FALSE!!!
            Whereas the fact that you hate an idea and find it threatening is a sure sign that it is the most likely of potential ideas to be true.

          • albatross11 says:

            Note that at the same time as Steve’s threadjacking[1] comment, there has been a discussion on affirmative action in university admissions going on in another thread. In that thread, test score and academic performance differences by race have been mentioned several times, without causing any kind of mass-revulsion or shutdown of the discussion.

            [1] That’s how it looked to me, too. Drake’s equation is about universal things that might go wrong with an industrial civilization, thus explaining why it didn’t expand out to the stars so we could see it; it’s really hard to see how the push for less strict lending guidelines to inflate the housing bubble under the Bush administration informs such a discussion.

          • albatross11 says:

            Luke:

            I think your claim proves too much. Often, we first recognize what reality looks like and then can find beauty or meaning in it.

            There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

            I don’t think Darwin’s ability to find beauty and grandeur in the theory of evolution makes the theory less likely to be true.

          • Paul Zrimsek says:

            If you have an emotional reaction to protect a belief, I think that’s likelier to be a fact about yourself than a fact about the belief. In all likelihood, if you’d happened to form the opposite belief instead, you’d have an emotional reaction to protect that.

          • Nancy Lebovitz says:

            Luke, what about the belief that one’s life is of value?

          • Luke the CIA Stooge says:

            If Darwin found beauty in evolution then that, I think, is a sign of cognitive dissonance.

            Evolution is completely alien not only to 19th-century sensibilities but to every conceivable sensibility humans might hold. And I’m not just talking about viruses, or wasps laying their eggs in live hosts, or the reason the human penis is mushroom-shaped.
            I mean the actual mechanism is inherently offensive and alien to any values or sensibilities any human would come to.
            In Rationality: From AI to Zombies, Yudkowsky goes through the story of the quest for group selection. The theory was that if resources were scarce, animals would naturally start to limit their offspring so as to let them and their fellows continue to survive on scarce resources. Group selection theorists actually believed that this is what they would find, because it’s how they would want to overcome the problem. It took years for someone to finally point out the problem with group selection theory, and when, years after that, they finally were able to simulate the selection pressure that was supposed to create this phenomenon of cooperation, the results were horrifying: the animals cannibalized the offspring of their rivals rather than limit their own reproduction.

            This world is Lovecraftianly alien to us, and for any given question there are a million hypothetical solutions that present themselves.
            There are infinite possible hypotheses of the world you could have, and the subset of plausible hypotheses you could hold is a smaller but likewise infinite amount. Your sensibilities, on the other hand, are incredibly limited. If we were to take every moral philosophy and its ideal world that you might find pleasing, it would probably add up to fewer than 100 possible worlds. Taking the question of intelligence and which group is naturally dominant, there are really only two answers that are not horrifying: either everyone is equal or the group you identify with is dominant; any other possibility is horrifying and we never see people propose it. Maybe people with freckles are just naturally 30 IQ points more intelligent than everyone else for some reason, and you happen to have freckles; you wouldn’t like this outcome because it’s cruelly arbitrary, and a lot of people you like are revealed to be part of a natural lower class and you don’t want them to be. And yet the answer to the IQ-and-nature question is way more likely to be something entirely random from the standpoint of our moral and aesthetic sensibilities than it is to be one of the two possibilities, out of the millions of possible orderings of human intelligence, that people like to entertain on aesthetic grounds (that either everyone’s equal or [identity group I identify with most] is naturally dominant).

            Once again, if you are aesthetically drawn to a hypothesis, that is strong evidence the hypothesis is wrong!
            There are a million different potential hypotheses, the vast majority of which are alien to your sensibilities, and you will maybe entertain 5 or 10 when you are trying to solve a problem. So if some hypothesis you found appealing somehow made the shortlist, then it is far more likely your thumb was on the scale than that the universe just so happens to agree with your aesthetics!
            It does occasionally happen that the universe agrees with our aesthetics: running a gramophone needle over the scratches it recorded miraculously happens to recreate the noise that was recorded; this feels like how it should work (it agrees with our aesthetics), and any skeptic in the 19th century should not have expected it.

            But anyone who aspires to rationality has to accept the Lovecraftian alienness of the world around us.

            The idea that God created us in his image should have been a big hint that we had instead imagined God in our image. I mean, what were the odds we’d get so lucky as to be born into a universe that agrees with us?

            Not being suspicious of a hypothesis you like is like looking at the billions of stars in the sky and saying “That one star there! The one that has stories I like about it! That’s where the aliens are!”

            It’s so painfully obvious that you aren’t assessing for likelihood.

            Whereas if there’s an idea that no one likes or holds, like moral nihilism or atheism or atomism, and yet somehow it continues to survive down the centuries and remain a top-ten hypothesis, like a bad penny we can’t get rid of: then it’s probably the truth.

            Reality is the world’s most annoying insect: you can’t ignore it, it bites, and it will never go away.

            The fact that you like an idea is evidence against it. The fact that you hate an idea is evidence for it.

            If there is a God, he is a troll.

          • Deiseach says:

            THE FACT THAT YOU LIKE AN IDEA IS STRONG EVIDENCE IT IS FALSE!!!

            So the fact that Sailer likes the idea that letting stupid people have mortgages had bad effects on housing policy means it’s wrong? And the fact that we like the idea that it’s wrong means we’re wrong? So everyone is wrong, Sailer included?

            I’m really confused now!

            running a gramophone needle over the scratches it recorded miraculously happens to recreate the noise that was recorded; this feels like how it should work

            Um, no it doesn’t? Running an object over scratches doesn’t ordinarily result in sounds (other than the “nails on a blackboard” type of sound), so ‘make scratches in wax and then run a needle over the grooves and it will play voices and music’ really is counter-intuitive to our experience. I can see what you’re trying to argue, but your examples are not supporting your case.

            Also, if I agree with your conclusion, I should immediately hold it to be false; you like the “the universe is a troll” explanation more than alternative explanations, but your liking it means that it is more likely to be false, ergo – you see?

          • Nick says:

            Man, if Luke is right, I feel so vindicated by all those folks saying the Old Testament God is horrifying.

          • theredsheep says:

            I’d say it’s more that a strong emotional attachment to an idea indicates that it’s axiomatic for you, or tied into your axioms. That is, your identity needed a foundation, but the is-ought problem is intractable, so you picked an unprovable assertion as your hill to die on. We all do it. It doesn’t make your axiom true or false or anything. It just means you’re treading through an uncomfortable space.

          • Lambert says:

            Mathematical beauty, and its physical manifestation in the natural sciences, is something different.
            It’s a heuristic corresponding to a solution or hypothesis with low complexity compared to its predictive power.
            The theory of Darwinian evolution is beautiful because it follows from only four principles (excess offspring, heritable variation, survival of the fittest, speciation) yet explains vast swathes of biology.
            Keplerian dynamics replaces epicycle upon epicycle with a mere six orbital parameters, and is thus beautiful.

          • Jaskologist says:

            Everything Luke said, but for our rational faculties.

          • Toby Bartels says:

            @ Luke the CIA Stooge :

            If Darwin found beauty in evolution then that, I think, is a sign of cognitive dissonance.

            Is your theory of human aesthetics falsifiable?

          • Mary says:

            “Any emotional reaction to protect a belief is a MASSIVE red flag that the belief is FALSE.”

            What an emotional reaction.

          • Luke the CIA Stooge says:

            Lambert

            In that case I wouldn’t fault Darwin. An elegant explanation can be praised as elegant regardless of whether it accords with our moral and metaphysical instincts or not.
            If he had seen “justice” or “the hand of God” or “the beauty of nature” in it, I would be worried that he was flinching away from what was implied.

            Toby
            My theory of human aesthetics is kinda falsified by Lambert, in that he has shown a specific mechanism whereby the aesthetic appeal of an idea would correspond to the truth value/accuracy of an idea. Simplicity and elegance are both aesthetically appealing and markers of useful theories.

            That being said, this specific exception does not negate the broader point: how an idea makes us feel, or how it accords with our moral sensibilities, tells us nothing about its truth value. And we should be suspicious if, out of the field of millions of potential hypotheses, one of the few that appeals to our sentiments made the shortlist of 5 to 10.

            There are vastly more horrifying and morally offensive possibilities than there are “just” and “beautiful” possibilities. There are infinite potential powerful elder beings or entities that could be responsible for the universe, but the one we believe in just so happens to be exactly like us, only better, and to have nothing but love and benevolence for us?

            If there are ten theories on our shortlist and one makes us feel good while the other nine make us feel indifferent or are kinda horrifying, then that’s a big warning sign that wishful thinking and social desirability bias probably let it in while the others got in on merit. We shouldn’t dismiss it on that basis, but we should hold it to a higher standard, because it’s something we want to be true.

            If you can propose a mechanism whereby the universe should resemble our moral and aesthetic instincts, I’d be happy to listen. (Maybe God is doing it and he does love us.)
            But the universe seems horrifyingly indifferent to us and what we would like, and we should expect that trend to continue.

          • yodelyak says:

            A belief that is attractive can be attractive because it is true (e.g. elegance of a theory’s explanatory power is one metric for the theory’s truth, a-la-Occam’s-razor), or the belief can be attractive because a beneficial intelligence conditioned one to be attracted to the belief (e.g. my parents raised me to feel attracted to some beliefs, such as a belief that a general policy of being honest will serve well), or the belief can be attractive because a hostile intelligence has worked to make it attractive (e.g. after too much time at a used car dealership, you may find yourself attracted to the idea that a particular car is a good deal, even if it costs a little more than you’d planned to spend on a car… and even though when you get home, you’ll feel you overpaid for a car that wasn’t a good fit for your needs, and eventually admit to yourself you were somehow tricked into thinking the dealer was your very good friend who had your interests at heart).

            Overall, I think people who are attracted to pleasant beliefs are at a disadvantage to those who have the toughness to look directly at unpleasant facts. But it can be taken too far–and adopting a “if I like it then it is false” view is definitely too far.

          • Baeraad says:

            Luke the CIA Stooge:

            Your claims are simplistic and barely even function as a rule of thumb. Yes, of course we should be suspicious if the world appears to be too much like how we want it. However, here’s the thing: I have believed many different things in my life, and no matter what I believed, I always found conflicting claims to be threatening and unpleasant. This is because they implied that I was wrong, and if I was wrong it meant that I was vulnerable and would remain vulnerable until I had gone through a long process of unlearning the things I thought I knew. That is an unpleasant possibility to have to contemplate, more than enough to cause a kneejerk reaction.

            Likewise, whatever I have believed, I have almost always found it to be beautiful to some extent. The mind adapts to our view of the world – it finds things to love about reality the way it perceives it to be, because living in a reality you hate is just too painful in the long run. (consider it a form of existential Stockholm Syndrome, if you will. :p )

            That we don’t like to be told we’re wrong says nothing about the nature of our opinions. It just means that we’re human.

        • BlindKungFuMaster says:

          “An intelligent enough species has to evolve, and then some members of that species have to move to novel environments …”

          There is nothing about ancestral environments per se that makes the development of higher intelligence impossible. It doesn’t make sense to generalise from (badly understood) details of human evolution to alien species.

          When we discuss the Fermi paradox, we discuss scenarios in which millions of years, more or less, don’t make much of a difference. You are talking about effects that even small differences in selection pressure can easily produce in a couple of thousand years.

          There is not really any connection between group differences and the Fermi paradox. That’s the reason why I will not bring this argument up the next time someone discusses the Drake Equation.

          Also, Scott has a series of posts that touch on group differences in intelligence. Your self-righteous anger about “progressive orthodoxy” is somewhat misplaced. Those are great posts with great discussions. But shoehorning the topic in whenever possible is just destructive.

          • reasoned argumentation says:

            >There is nothing about ancestral environments per se that makes the development of higher intelligence impossible.

            We have one example of intelligence arising in an ancestral environment, and intelligence sufficient for distant detection of technology didn’t develop in that ancestral environment.

            >When we discuss the Fermi paradox, we discuss scenarios in which millions of years, more or less, don’t make much of a difference.

            If the conjecture is correct that a land mass of at least continental size – physically isolated, but still reachable by migrants – is necessary, then millions of years for life isn’t the relevant measure. The relevant measure is whether the right geographical barriers and other continents happen to exist at the same time. Here’s your planet, here’s its geological history – oh look, it never develops a Sahara desert in the right spot, oh well – on to the next candidate.

            Intelligence isn’t always maximally selected either and geographic / climatological conditions can go from favoring intelligence to not favoring intelligence – in the same way that different conditions on Earth favored different levels of average intelligence. That’s a direct link between group differences and the Drake equation.

            There’s a world of difference between merely admitting, behind a firewall, that some things that are disturbing under your worldview are true, and actually treating them as important facts about the world that have to be accounted for in all reasonable discussion. This is an important and often overlooked element of thinking well.

          • BlindKungFuMaster says:

            “If the conjecture is correct that a land mass of at least continental size – physically isolated, but still reachable by migrants – is necessary, then millions of years for life isn’t the relevant measure. The relevant measure is whether the right geographical barriers and other continents happen to exist at the same time.”

            My point is that this is a completely post hoc hypothesis. You are arguing like Jared Diamond, if you know what I mean. There are hundreds of possible scenarios for how the IQ of a group in the ancestral environment might be bumped up a standard deviation or two.

            There is nothing special about the IQ threshold to modern technology. Once a group reaches it, you might get modern technology. But there is no reason whatsoever why groups in the ancestral environment are doomed to forever languish just below the threshold.

          • HaraldN says:

            (Hope I am replying in the right hierarchy)

            I don’t think there’s enough arguing like Jared Diamond. Humans in the ancestral environment did invent farming (they couldn’t import it, because crops from the Fertile Crescent/Asia fail in sub-Saharan Africa). They didn’t do it as quickly or as efficiently as the rest of the world, but had Africa been surrounded by kilometer-high walls on all sides, it doesn’t seem likely that electronics would never have been invented. Just later.

          • vV_Vv says:

            Humans in the ancestral environment did invent farming (they couldn’t import it, because crops from the Fertile Crescent/Asia fail in sub-Saharan Africa).

            Maybe the sub-Saharan Africans couldn’t use the same crops, but took the general idea from the North Africans and Middle Easterners?

            They didn’t do it as quickly or as efficiently as the rest of the world, but had Africa been surrounded by kilometer-high walls on all sides, it doesn’t seem likely that electronics would never have been invented. Just later.

            Why?

            Even assuming that farming in sub-Saharan Africa was invented independently, inventing farming intuitively seems easier than inventing electronics: some smart guy buries some seeds in the ground for storage, then an edible plant of the same type appears. The guy notices the pattern and understands the practical application of the process and boom: agriculture. From then on, it’s a matter of incremental improvement.

            Electronics, on the other hand, is the end result of a long process of scientific discovery, done to a large extent by professional researchers working at specialized institutions, who for a long time didn’t even have practical applications in mind. It took centuries of people systematically rubbing amber, stimulating frog legs, tinkering with pieces of metal, acidic solutions, magnets, etc., at each step carefully recording their observations, preserving these records over time and studying them. None of this happened in Africa. Even if somebody there made an interesting observation about electricity and was smart enough to notice a pattern, that knowledge was lost. The giant died before anybody could stand on their shoulders.

            Could uncontacted sub-Saharan Africans have figured it out, eventually? Perhaps, but it doesn’t seem obvious.

          • christhenottopher says:

            Could uncontacted sub-Saharan Africans have figured it out, eventually? Perhaps, but it doesn’t seem obvious.

            This is the rub for me: that’s putting the burden of proof on the wrong side. If we’re making an argument for using a particular new constraining factor for an outcome, the burden of proof is on the person saying that it’s a real constraining factor, not on those questioning it. This seems pretty basic to me, and to show why, consider someone who lives in a town where all coffee shops are only one story. I can come up with plenty of constraints like “coffee shops don’t make enough revenue to pay for multiple stories’ worth of temperature controls” or “people on the upper stories won’t hear when their order is ready on the lower stories” and then try to generalize that coffee shops can only have one story. Then I go to a major dense city and see multi-story coffee shops.

            We really need to keep in mind just how constrained our knowledge of how technological species arise is. We have one example, which has existed in one continental configuration for a few tens of thousands of years, out of the hundreds of millions for which complex life has lived on this planet; there are probably a lot of non-obvious possibilities for technology to arise that we haven’t seen.

            Could sub-Saharan-Africa-type areas develop electronics? No idea. So, since I have no idea, saying “you need something like Eurasia on your planet to make this” seems like wildly overgeneralizing from extremely limited evidence. I keep seeing “x must happen” when talking about absurdly complex events and systems that we have a sample size of one on. Other options may not be obvious, but that’s way different from saying they’re impossible.

          • Evan Þ says:

            @reasoned argumentation, was there in fact a Sahara Desert in place at the right time in the evolutionary timescale? I seem to recall cave paintings before the desert grew to its present size?

            @BlindKungFuMaster, I’m afraid reasoned argumentation might have a point here: pretty much all our theorizing about development of intelligence is generalizing from one example. If we know something contributed to human intelligence, it’s hard to throw it away and say it wasn’t necessary.

            (You still need to show it did significantly contribute to human intelligence, though; I’m not convinced in this particular case.)

          • vV_Vv says:

            This seems pretty basic to me, and to show why, consider someone who lives in a town where all coffee shops are only one story. I can come up with plenty of constraints like “coffee shops don’t make enough revenue to pay for multiple stories’ worth of temperature controls” or “people on the upper stories won’t hear when their order is ready on the lower stories” and then try to generalize that coffee shops can only have one story. Then I go to a major dense city and see multi-story coffee shops.

            If there is a town where only one-story coffee shops exist, then some constraint must be in place. The question is how general the constraint is. Before observing multi-story coffee shops you shouldn’t conclude that they are impossible, but you should still assign a somewhat low probability to them, unless you have a very good theory of coffee shops that predicts multi-story coffee shops with high probability in large cities.

            We don’t have a good theory of technological development. We can just intuitively generalize from historical data about one planet.

            What we observe is that for most of the history of life on earth, technological level was limited to things like “bash rock on nut/clam to open it”; then, for some unknown reason, a population of chimp-like animals evolved into a species that could make stone tools (*), but this species fell short of making space-detectable technology until a sub-population of it appeared.

            This suggests that some evolutionary roadblock may exist between a stone-age tech species and a space tech species. The roadblock may not be “you need a Eurasia”, but it does imply that there is a non-trivial probability that a technological species that never gets past the stone age can evolve on a different planet.

            (* Some claim that there is evidence of independently developed copper or iron smelting in sub-Saharan Africa, but the evidence is contested, and even if these technologies existed at some point, they were not developed enough to sustain a bronze age or iron age.)

          • christhenottopher says:

            We don’t have a good theory of technological development. We can just intuitively generalize from historical data about one planet.

            Did you mean to say “can’t just intuitively generalize”? Because we really, really can’t. Intuitive generalizing is how you get things like “well, lemons cured scurvy and we don’t really understand the mechanism, but we can generalize to citrus fruits, so let’s serve cheaper limes”. The real answer should be to step back and just acknowledge we don’t have the data to really answer the question yet.

            This suggests that some evolutionary roadblock may exist between a stone age tech species and a space tech species. The roadblock may not be “you need an Eurasia”, but it does imply that there is a non-trivial probability that a technological species that never gets past the stone age can evolve on a different planet.

            I think we can agree that the existence of probable roadblocks does not mean any particular proposed roadblock is correct. Which is my whole complaint about where this thread started. We really can’t confidently assert the following based on the sample size we have (pulled from reasoned argumentation earlier in the thread):

            An intelligent enough species has to evolve, and then some members of that species have to move to novel environments, because wherever the initially intelligent species arose is going to have predators, prey, and microorganisms that all co-evolved and adapted to that species, which (on the margin) puts survival pressure on immune systems, bone density, and running speed rather than on intelligence. To break out of this trap, the species needs to move to a new, isolated environment that then selects for enough intelligence to develop more advanced technology.

            That seems intuitive given how we develop technology, but if we don’t truly understand the full, general process of how species move from the stone age to space, then we should not generalize from one planet and a brief span of time relative to the full lifetime of the universe. And I would say we should avoid a definitional argument such as “well, intelligence is the capacity to make technology”, since the universe is a massive place and we’re talking about systems that are WAY above the basic axioms of how our universe works. We don’t even know how to predict a weather system more than a week or so in advance, so we shouldn’t assume we know what a species will do a million years from its start.

            To sum up, we shouldn’t let the limitations of our imagination become a substitute for limitations determined by an actually generalizable sample size.

          • HaraldN says:

            Reasons why science/technology could have happened in Africa:
            1) Humans get knowledge from observing the world around them. Africa has the same laws of physics and biology as the rest of the world.
            2) Africans may have lower IQ on average; this would make “Einsteins” more rare, but not non-existent.
            3) As far as I know, Africa does not critically lack the raw materials needed for basic technology (i.e. ‘easy’ access to farmable land, bronze/iron materials). It’s a worse place to start a civilization than most other places on earth because it has more hostile animals.

            This map https://en.wikipedia.org/wiki/Center_of_origin indicates farming did originate in Africa once. And farming seems to be one of two big hurdles to clear, since that increase in food density is what allows artisans.

            The second big hurdle is writing (once you have writing and artisans down, it would seem to be a matter of time before you unlock the secrets of the universe). I can’t quickly find any good information on the origin of written languages in Africa, though.

          • Nancy Lebovitz says:

            How is Africa for access to fossil fuels?

          • HeelBearCub says:

            Nigeria is the 6th largest producer of oil in the world. South Africa is 7th for coal.

            Not sure whether it’s all deep, though. But long enough time frames probably make that irrelevant. Nigeria was smelting metal by 1000 BC, which suggests that mining and mineral production would eventually lead to full-scale fossil fuel development.

    • Egregious says:

      If you believe you have a general way to convince someone of the dangers of communism before the historical evidence and the principles gained from that evidence, I’m sure we’d be glad to hear it. Especially as I believe few intellectuals made such predictions before the horrors occurred. We would like to be less doomed.

      EDIT: Yeah, kinda showing my ignorance by responding before going over the facts myself. No real opinion about the communism arguments in particular, but the art and this blog are, I think, net very beneficial.

      • reasoned argumentation says:

        Saying it was a mystery and that it’s impossible to predict in advance is absurd because there were people who were able to predict it in advance. Want to know what those were? Just look them up. Scott’s reaction to reading those arguments was “well, all those were bad arguments” which is a bit odd because those are the people who were making correct predictions.

        The problem is that the arguments that were made prior to communism about what a bad idea it was also have implications about policy today that are totally unacceptable to consider (for an orthodox progressive), so Scott doesn’t let himself understand those arguments and instead casts them as ridiculous – no doubt in the name of “steelmanning”, because no one could sincerely make an argument that denies progressive axioms.

        • Evan Þ says:

          You make a very good point. In Scott’s defense, it is possible for someone to sometimes get a correct answer from wrong premises or reasoning. But on the other hand, I’m nowhere near convinced that was the case for the people predicting Communism would be a disaster.

        • MawBTS says:

          Scott’s reaction to reading those arguments was “well, all those were bad arguments” which is a bit odd because those are the people who were making correct predictions.

          Making a correct prediction doesn’t mean you have good arguments, just like Springfield’s nuclear power plant failing to melt down doesn’t make Homer Simpson a good nuclear safety engineer.

          Bad processes sometimes lead to good outcomes.

        • Murphy says:

          There’s a trick to perfectly predicting football matches.
          For every match you make two predictions, one that each side will win.
          Then you delete the one which turned out to be incorrect.

          Is this useful for predicting the future? Unfortunately not.

          There’s a variation on this scheme where you start with a million screaming crazy people. They each scream out their predictions for the future based on a call to rand(), the pattern of their belly button lint that day, or the patterns of light they see when they take hallucinogenic drugs.

          A few years later you come back and pick out the ones which were correct.

          Then you hail the ones who got it right as prophets and master predictors.

          Some of them will be correct no matter how bad their method for coming to predictions is.

          If there are people who have been consistently correct about major economic forecasts far beyond the ability of most pundits then that’s interesting, though we have to make sure to factor in how many major predictions they’ve got correct and how big the pool of predictors we’ve picked them from is.

          After all, if you start with a million people and ask them to predict a fair coin toss and they each choose randomly then even after 20 tosses we can expect a couple of them to have been correct on every single toss.

          • DavidFriedman says:

            After all, if you start with a million people and ask them to predict a fair coin toss and they each choose randomly then even after 20 tosses we can expect a couple of them to have been correct on every single toss.

            On average, a little under one of them.
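
            A minimal sketch of the arithmetic behind that correction, in Python; the million-people and 20-toss figures are from the thread, and the rest is just the binomial expectation.

                # Expected number of people, out of 1,000,000 random guessers,
                # who call all 20 fair coin tosses correctly: n * (1/2)**20.
                n_people = 1_000_000
                n_tosses = 20

                expected = n_people * 0.5 ** n_tosses
                print(expected)  # ~0.954, i.e. "a little under one of them"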

          • Murphy says:

            @DavidFriedman

            That teaches me to go off memory for powers of 2. For some reason I was thinking it was the 21st doubling that passed the million mark.

        • vV_Vv says:

          Want to know what those were? Just look them up. Scott’s reaction to reading those arguments was “well, all those were bad arguments” which is a bit odd because those are the people who were making correct predictions.

          You can make correct predictions from weak arguments.

          The type of arguments against communism that were available before actual communist revolutions were things like the Pope saying that it was against the Bible. It sounds silly, but could perhaps be steelmanned into a generic argument from precautionary conservatism, along the lines of “the status quo stood the test of time; revolutionary social engineering is risky”. That is still quite weak, especially considering that at the time Das Kapital was published, the transition from aristocratic monarchy to universal-suffrage democracy was recent, if not still underway, in many parts of Europe, and it seemed to yield good results – which was evidence for revolutionary social engineering creating positive outcomes.

          The strongest theoretical arguments against communism come from game theory, which was invented mostly between the 1930s and 1960s by von Neumann, Nash and the whizzes at the RAND corporation who were working precisely to study the outcomes of a total war against the then already existing communist superpower.

          But more importantly, you can come up with theoretical arguments against capitalism too. Without empirical evidence, they don’t sound any less compelling than those against communism.

          Even in the 1980s, when the shortcomings of communism were already well known, most Western political analysts still couldn’t predict the imminent collapse of the Soviet Union. Claiming a posteriori that the failure of communism was easy to predict is a textbook example of hindsight bias, a fallacy that those who seek to pursue rational thinking should try to avoid.

          • Hanfeizi says:

            The Pope?

            Eugen Böhm von Bawerk – who, I might add, was the Finance Minister of the Austro-Hungarian Empire – published his definitive refutation of Marx, “Karl Marx and the Close of His System”, in 1896 – over two decades before the October Revolution.

            The Austrian Economists – who were intellectual elites by any definition of the word – already foresaw the failure of Marxism in purely secular and rational terms.

          • baconbits9 says:

            The Austrian Economists – who were intellectual elites by any definition of the word – already foresaw the failure of Marxism in purely secular and rational terms.

            I have found it curious that this site, which has a high percentage of people at least willing to concede that rationalism has good points, and some outright rationalists, has so few self-proclaimed Austrians. If you wanted to select an economic school of thought to follow, in a Bayesian way, by looking at their track records of predictions and outcomes, I don’t know how you would end up with any other school. Mises’ economic calculation paper, predicting some of the issues that the Soviets would face in production and distribution, is at least a home run on the prediction front.

          • Carl Milsted says:

            Communism had been tried long before Marx added his pseudo-scientific spin on it. The early Christians gave it a whirl. The early English colonies in North America were originally communes. Utopian communist experiments were going on in the 1800s.

            To the extent that Marx made any descriptions of his predicted utopia (most of his works were criticism of Evil Capitalism), he predicated it upon there being plenty of Capital being accumulated during the Capitalist phase of world history. A truly scientific socialist would call for some experiments: take a well-capitalized factory and turn it over to the workers. Can they effectively self-govern? Will they maintain the machines?

            If it works, repeat the process with mature industry after mature industry. No violent revolution necessary. If not, go back to the drawing board.

            —-
            In the interest of fairness, the same argument applies to the anarcho-capitalists of the Rothbard School. They, like Marx, expend the bulk of their energies criticizing The Enemy (the State, versus the Capitalists). From what I can tell by arguing with members of both schools, the Rothbardian anarchists do put somewhat more work into figuring out how their utopia would function. But not nearly enough.

            If a zone of anarchy can coexist with surrounding State-run nations, why not gather up some private arms and get rid of the government of a smallish nation? How about Cuba? They have this really nice island…

            If not, then anarcho-capitalism suffers the same intellectual cheat as Marxism: “We need to apply it to the entire planet to see if this works.”

            May I suggest the Dangerous Experiment filter?

          • baconbits9 says:

            Communism had been tried long before Marx added his pseudo-scientific spin on it. The early Christians gave it a whirl. The early English colonies in North America were originally communes. Utopian communist experiments were going on in the 1800s.

            One of Marx’s contentions was that industrialization and capitalism made it possible for large scale communism to work, that production could be ‘scientifically’ measured and thus scientifically distributed. His labor theory of value relies on this concept.

            To the extent that Marx made any descriptions of his predicted utopia (most of his works were criticism of Evil Capitalism), he predicated it upon there being plenty of Capital being accumulated during the Capitalist phase of world history.

            Marx specifically predicted that in the long run capitalists would only pay their workers enough wages to survive and produce more workers. This is not just a justification of how and why communism would come about, but a logical inference from his postulates that capitalists could exploit their workers and that economic classes were perpetually in conflict with each other. You can’t have Marxism without exploitation, and you can’t have rising wages with perpetual exploitation.

          • Carl Milsted says:

            baconbits9 wrote:

            One of Marx’s contentions was that industrialization and capitalism made it possible for large scale communism to work, that production could be ‘scientifically’ measured and thus scientifically distributed. His labor theory of value relies on this concept.

            I’ll accept the correction, but still contend that this should be a testable hypothesis. If labor value is scientifically measurable without a market, then a commune could operate using these scientific principles.

            All it takes is a few wealthy socialists to try out the experiment. (And yes, they existed before Hollywood. Many of the old Utopian Socialists were wealthy.)

            Marx specifically predicted that in the long run capitalists would only pay their workers enough wages to survive and produce more workers….

            This is an example of the Capitalism Bad stuff that Marxists argue to this day. Proving something is bad is an insufficient case for a world revolution. You have to have something sufficiently better to justify said revolution. And if we are talking about violent revolution, you need to be damned sure that the solution will actually work.

            —-
            As a bit of a digression, it is interesting to me how few people today realize how much Adam Smith wrote about wage rates. Smith claimed that the price of Labor with respect to Capital [aka Stock] and Land is set by the quantity available of each. He predicted that the return on Stock should go down and wages go up as Stock accumulated.

            I have seen such arguments mentioned when people discuss Piketty, but wonder why this isn’t a major populist cause. When the government runs a trillion dollar deficit, this is a trillion dollar price support program for Capital at the expense of Labor. (The Birchers made this point, but they had to mix it in with a wacky conspiracy theory.)

          • ec429 says:

            As a bit of a digression, it is interesting to me how few people today realize how much Adam Smith wrote about wage rates.

            This is because apparently no-one these days actually reads Smith. My go-to example of this is that whoever designed the current (series F) £20 note clearly only bothered to read the first chapter of the Wealth of Nations and was left with the impression that the “division of labour in pin manufacturing” was a sufficiently important part of the book to use to represent it (thereby, incidentally, solidifying public misconceptions of economics in general and Smith in particular).

          • The strongest theoretical arguments against communism come from game theory, which was invented mostly between the 1930s and 1960s by von Neumann, Nash and the whizzes at the RAND corporation

            I would have said that the strongest theoretical arguments on the economic unworkability of central planning were made during the Calculation Controversy, by Mises et al., starting in the 1920s, by which time Russia was already communist.

            But it isn’t clear that those arguments cover all possible versions of communism. Tito’s Yugoslavia had what it described as communism, but it was largely a decentralized system with workers’ co-ops instead of firms. It didn’t work terribly well, but it didn’t fail catastrophically. And the other side of the Calculation Controversy, Abba Lerner et al., attempted to describe possible versions of market communism in order to get around the problems raised by Mises.

            I haven’t read the Böhm-Bawerk work cited, but it sounds from descriptions of it as though it was refuting Marx’s value theory, not proving that any attempt to implement a communist system would fail catastrophically.

          • ec429 says:

            @Carl Milsted:

            If a zone of anarchy can coexist with surrounding State run nations, why not gather up some private arms and get rid of the government of a smallish nation?

            There is the unfortunate problem that when you do this, other governments tend to see this as a threat to their own long-term viability (which, of course, it is), meaning that you need to be sufficiently large that they can’t just invade you (a constraint that does not apply to nations, at least not ones the international community recognises, since invading those gets the invader into trouble with other nations).

            If not, then anarcho-capitalism suffers the same intellectual cheat as Marxism: “We need to apply it to the entire planet to see if this works.”

            Not the entire planet, but going directly to an-cap would indeed call for a certain minimum size/amount of guns, as per the above. Fortunately, the state can be dis-integrated in small pieces (both horizontally and vertically), meaning that a more gradual approach (with no point at which other nations can decide you’re not a state any more and invade you) is possible. This seems to be the goal of e.g. the Free State Project: to move an existing state in a libertarian direction, rather than (say) trying to initiate an-cap through a violent revolution (after all, those rarely end well).

          • Carl Milsted says:

            @ec429
            A gradualist approach to reforming a superpower or two may create a world environment in which anarcho-capitalism is possible somewhere. Or, at least, one can try the experiment.

            But this requires some pro-anarcho-capitalist activists deviating from the restraints of the Rothbard School enough to win enough elections in a superpower to make said reforms. It requires becoming “corrupted by activism.”

      • At a slight tangent to this thread, “dangers of communism” is ambiguous–it can mean at least two different things. One of the things wrong with communist systems is that they worked poorly–Russia is almost certainly a much less prosperous country today than it would be if the revolution had never happened and it had developed along more or less classical liberal lines. Another thing wrong is that communist regimes have tended to engage in mass murder to a degree in which no other system I know of, with the exception of Nazi Germany, a sample size of one, did.

        Economists could have predicted the former and did, at least as early as a few years after the Bolshevik revolution. I’m not sure if anyone predicted the latter.

        • Jaskologist says:

          Dostoevsky did. I expect many of the contemporary Christian critics of atheistic communism gestured in this direction, and still do to this day when they say that without God there is no foundation for morality (as D himself famously claimed).

          Man can only be so designed through terrible violence, his placement under dreadful systems of spying and the continuous control of a most despotic power.

          (also Dostoevsky)

          I generally agree that modern day Rationalists have not given this case study nearly the thought it needs. If your current reasoning would have led you to support communism a century ago, your current reasoning is genocidally bad. Even if all the opposite side has is “the Pope said it’s against the Bible,” that reasoning turned out to be far more correct. It would be instrumentally rational for us to weight “the Pope said so” much more strongly relative to the Sequences, to about the extent we think it is worth avoiding another Soviet Russia.

          • LadyJane says:

            The problem is, for people who don’t believe in Christianity, the fact that the Christians happened to be correct about communism is nothing more than sheer dumb luck. They might’ve been right, but they were right for all the wrong reasons, which means their opinions should not be considered particularly useful for making accurate predictions in general. At best, you can say that maybe Christianity contains some useful intuitions about human nature and the dangers of eschewing tradition, but even that seems like extreme steelmanning.

            Do you have any examples of people before the Bolshevik Revolution criticizing the morality (as opposed to the mere inefficiency) of proposed communist systems for purely secular reasons? If not, then I’d agree with David Friedman’s suggestion that no one could have reasonably predicted the egregious human rights violations that would occur under communist governments.

          • albatross11 says:

            It’s not before the Bolshevik revolution, but Hayek’s _The Road to Serfdom_ looked to me like an entirely secular prediction that socialism led to a massive loss of freedom. And Ayn Rand was still later, but had a moral condemnation of communism that I don’t think was based on religion at all.

          • LadyJane says:

            The Road To Serfdom was published in 1944, well after the horrors of Leninism and Stalinism had already occurred and been fairly well-documented, and Ayn Rand lived those horrors firsthand. As such, their insights are less predictions and more observations. I specified “before the Bolshevik Revolution” because the fact that socialism could lead to a loss of freedom became rather obvious shortly after the Marxist-Leninists took power.

          • Jaskologist says:

            The problem is, for people who don’t believe in Christianity, the fact that the Christians happened to be correct about communism is nothing more than sheer dumb luck. They might’ve been right, but they were right for all the wrong reasons, which means their opinions should not be considered particularly useful for making accurate predictions in general.

            Is that how Rationalism is supposed to work? You take your priors, see that they supported the wrong conclusion, and then update them to be precisely what they were before, because whatever disagrees with your priors was obviously just dumb luck?

          • Nornagest says:

            Do you have any examples of people before the Bolshevik Revolution criticizing the morality (as opposed to the mere inefficiency) of proposed communist systems for purely secular reasons?

            I don’t have any examples off the top of my head, but I get the strong impression that criticisms of communism before the late 20th century — never mind the Bolshevik Revolution — were overwhelmingly moral as opposed to economic. The strongest economic argument against communism — the calculation problem — wasn’t even articulated until after the Bolshevik Revolution, wasn’t very well-developed until Hayek tackled it mid-century, and didn’t have any solid empirical evidence behind it until the Eastern Bloc started falling apart in the late Eighties.

          • LadyJane says:

            Is that how Rationalism is supposed to work? You take your priors, see that they supported the wrong conclusion, and then update them to be precisely what they were before, because whatever disagrees with your priors was obviously just dumb luck?

            Realistically, what conclusions are non-Christians supposed to draw from this?

            That the tenets of Christianity are actually true? That seems like far too extreme of a change in one’s metaphysical worldview to be rationally justified by Christians being right about one issue that’s not even related to their religion’s central dogma.

            That regardless of whether Christianity’s metaphysics are correct, its rules for human conduct and its view of human psychology and its intuitions about human morality are all highly accurate, and we should adhere to them to prevent further disasters? This one is more palatable for non-Christians, but it still seems premature to draw such a sweeping conclusion from a single example, even one of this scale. Does the evidence show that Christian values tend to produce good societal outcomes in general, or that Christian claims about human nature match modern psychological findings? Did 19th century Christian warnings about the dangers of other then-revolutionary ideologies like democracy and capitalism and social liberalism turn out to be true? Or were Christians simply right about communism and fascism being awful because they saw those ideologies as threats to their own ideological system, in much the same way that communists and fascists were both right about each other being awful?

        • liquidpotato says:

          @DavidFriedman

          Another thing wrong is that communist regimes have tended to engage in mass murder to a degree in which no other system I know of, with the exception of Nazi Germany, a sample size of one, did.

          I would be interested to know what you think about the deaths due to famine in India under British colonial rule. The Bengal famine of 1943 alone caused the death of a million Indians from starvation and disease combined.

          • albatross11 says:

            Didn’t about a Holocaust’s worth of people die in the Belgian Congo under Leopold’s benevolent colonial rule?

          • I don’t know a lot about the Bengal famine–was that a case of the government taking food away from the farmers and exporting it on a large scale, as was I think true of both the Ukraine famine and the Great Leap Forward famine, or only of the government doing an inadequate job of preventing the famine? The former strikes me as mass murder, the latter at most negligent loss of life.

            The Belgian Congo is a pretty clear example of mass murder, analogous to that under the communists. But there was only one Belgian Congo out of a very large number of colonial projects. Communism has a record of three very large mass murders in three independent communist states (USSR, China, Cambodia).

            How many independent communist states didn’t engage in mass murder? A lot of people were killed in Yugoslavia in the civil struggle during and just after WWII, but I’m not sure that can be blamed on communism, and while Cuba was pretty bad I don’t think Castro’s government did anything on the scale of the three I mentioned. If you lump the satellites in with the USSR as one system, that’s three out of five for communism, as opposed to one out of what, a hundred?, for colonialism.

          • quaelegit says:

            More independent communist states:

            * Surely Vietnam counts?

            * Do any of the African states count or are they just Soviet satellites? (Not sure what the rules are for counting). I know Ethiopia had a communist phase (also some pretty bad famines in the 70s or 80s, but I don’t know if that was under the communists or can be blamed on them). I think the smaller Congo had a communist government too.

            * I just found This template on Wikipedia which lists 30 communist states. Some of them should probably be counted as the same (North Vietnam, post-1975 Vietnam), and some shouldn’t count as independent (Poland etc.). I don’t know how many caused mass loss of life of civilians. I can try to look into it and make up a table this weekend, if I remember.

            And if we’re comparing to colonialism I’d want to give that a closer look than just assuming Belgian Congo and possibly the Bengal Famine are the only examples of mass loss of life. That might be harder to define plus open up the can of worms that is American natives (also Australians?).

          • John Schilling says:

            I don’t know a lot about the Bengal famine–was that a case of the government taking food away from the farmers and exporting it on a large scale, as was I think true of both the Ukraine famine and the Great Leap Forward famine, or only of the government doing an inadequate job of preventing the famine?

            I don’t believe there was much, if any, outright confiscation of land or food, but there was a great deal of restraint of free trade imposed by the British Empire. No matter how much money the starving Bengalis offered, they were simply prohibited from buying rice from Burma or wheat from Australia, or from paying British farmers in India to grow rice for local sale rather than wheat for export to Europe. Of course, the reasons for this were that Burma was occupied by the Japanese, who were the enemy, and you don’t trade with the enemy in wartime; Australia was on the far side of the Indian Ocean and every available ship had been requisitioned to support the war; and the British farmers in India had been tasked with making sure Britain didn’t starve on account of the war. So it’s not clear how much of the blame can be apportioned to colonialism, or to capitalism, as opposed to just the war.

            Absent colonialism, I suppose we could just say that, since there wouldn’t have been any British ships or planes or men to stop the advance of Japanese armies into Bengal, it would all be Japan’s fault if there were a famine or to their credit if there weren’t. And the Japanese claim to have been all about defending the oppressed peoples of Greater East Asia from the evils of colonialism…

          • albatross11 says:

            The reason this is interesting is because we might be in one of two worlds:

            a. Communism is uniquely inclined to lead to megadeaths.

            b. Industrial technology and concentrations of power are inclined to lead to megadeaths, and it just happens that in the 20th century, this often coincided with Communist governments.

            [ETA] If (b) is true, and the megadeaths were just because a bunch of really awful messed-up places ended up with Communist governments due to social forces, then if they’d all ended up with theocracies or god-emperordoms[1] or colonial governments run by distant kings, there still would have been those megadeaths, but they’d have been done by the East India Company or the Inquisition instead.

            [1] N Korea and China under Mao?

          • Viliam says:

            Industrial technology and concentrations of power are inclined to lead to megadeaths, and it just happens that in the 20th century, this often coincided with Communist governments.

            Blaming industrial technology for Communist mass murders seems to me like blaming the Holocaust on “well, Germany just happened to be a country with a lot of gas”.

            Just because a country has a lot of whatever-resource-Soviet-Russia-had-a-lot-of, it doesn’t automatically follow that the resource will be used to exterminate the population. Current USA probably has much more of it, and doesn’t use it to build death camps.

            Concentration of power seems more likely, especially when concentrated in the hands of literally one person or a small circle of people. Especially combined with utter disregard for individual human life. Then it’s worth spending a fraction of your resources on exterminating anyone who could be a potential problem (plus a few million innocents, just to make sure that everyone is properly scared). But both of these are quite fundamental for Communism, so I guess in the end, Communism is still to blame.

    • Montfort says:

      Less of this kind of post, please.

      • Adrian says:

        Less of this kind of post, please.

        This kind of drive-by admonition is getting quite frequent recently and is rather condescending when the post in question isn’t obviously bad. reasoned argumentation has a point, even if their tone might be a bit abrasive (the linked comment by Steve Sailer is kinda bad, though).

        • Montfort says:

          Without getting too much into it, I disagree. Tone and other such concerns aside, RA’s post doesn’t really belong under this blog entry about how SSC might help develop readers’ or commenters’ “rationality” skills or serve some other community purpose. I’d be more charitably inclined in an OT or some other post about communism, Scott’s personal “rationality” skills, comment moderation, etc. Responding to off-topic grievance-airings with an in-depth post only derails things.

          Perhaps in retrospect, I could have been slightly more specific, but I really did (and do) think it’s obvious.

        • bean says:

          I’m with Montfort on this one. The tone wasn’t “kind of abrasive”, it was a fairly direct attack on Scott, and definitely the sort of thing we should have less of.

        • Confusion says:

          Thirding Montfort and bean. To me it looks like plain trolling.

        • Lapsed Pacifist says:

          @bean

          In the dojo, we willingly subject ourselves to attack so that we can test our defensive skills. If this post is too harsh, why? Is it factually incorrect? Does it attack a strawman? I think that it’s perfectly acceptable. Please feel free to elaborate why it’s not appropriate.

          • Paul Zrimsek says:

            Most of us want to practice defending our arguments– not our motives.

          • bean says:

            1. There’s no link to these supposed statements of Scott’s. We’re supposed to assume that reasoned argumentation is characterizing this accurately. If you’re going to make that kind of negative claim about someone, give the audience a chance to judge it for themselves.
            2. Assuming I accurately guessed where this was taken from (a mix of the reviews of Chronicles of Wasted Time and the history of the Fabian Society), it doesn’t match what Scott actually wrote. In the Fabian Society post, he points out that a lot of the intellectual tools of modern capitalism hadn’t been invented yet, most notably prices as a coordination mechanism. So if those tools didn’t exist, and the failures of socialism weren’t on display for all to see, what reason would you have not to be a socialist? And I also read his point as being less about the object-level of socialism/communism, and more about the fact that these are really hard problems.

          • John Schilling says:

            In the dojo, we willingly subject ourselves to attack so that we can test our defensive skills.

            In this dojo, we segregate our discussions by explicitly-specified top-level subjects, so that we may be prepared to have informed discussions, and ideally ones which are not so hostile as to be properly characterized as “attack” and “defense”. Some level of topic drift then inevitably occurs, and this is generally a good thing.

            If the topic does not “drift”, but is dragged forcefully off course because someone can’t even wait until the next open thread to talk about his favorite hobby horse, that derails the discussion of the topic at hand and is not a good thing. If, because one person said something another person finds offensive, for example regarding the a priori obviousness of communism’s failures, they may then be forced to defend themselves against random attacks on that topic whenever the offended party feels like it, that is very much not a good thing. And I should very much like the master of the dojo to invite such “contributors” out the front door.

        • Adrian says:

          @Montfort, @bean, @Confusion: In the comments of other articles here, I regularly read replies (to posters other than Scott) which are harsher “direct attacks” than the one by reasoned argumentation above, and they’re rarely called out. Should Scott be treated more gently than the average poster? Does he want to be?

          At least the first part of reasoned argumentation’s post (before “EDIT”) is very much on-topic, and the second part seems to have sparked an interesting debate – though more by accident, I guess. Definitely doesn’t feel like “plain trolling”, in any case. It deserved more than a lazy “Less of this, please.”

          But whatever, this is not the hill I want to die on, so I won’t push the issue any further.

          • Confusion says:

            Whether a comment is top level or a deeply nested response matters. Whether it digs up old skeletons matters. Whether it does so on topic matters.

            Attacking someone in a top-level post, restarting an old argument, stating the opponent’s position this ungraciously, by tenuously relating it to the topic of a new post, is without any merit. The outcome of any ‘discussion’ after such an opening post is known beforehand.

          • bean says:

            @Confusion

            That’s exactly it, articulated much better than I was able to.

        • Brad says:

          The tone wasn’t “kind of abrasive”, it was a fairly direct attack on Scott, and definitely the sort of thing we should have less of.

          That’s his whole schtick.

    • reasoned argumentation says:

      Update: my replies appear to be going through again although I’ve had to strip some HTML tags like blockquotes.

    • pdbarnlsey says:

      You make a post about the Fermi paradox and the Drake equation – Steve Sailer makes an observation that makes progressives uncomfortable that’s directly related to the discussion at hand

      For example, the Bush Administration put together a sort of Drake Equation for their demand for 5.5 million additional minority homeowners, but the Bushies refused to consider a factor that perhaps Hispanic immigrants weren’t as good credit risks as white Americans. And indeed, immigrants defaulted at about three times the rate of white natives. But that fact wasn’t allowed in the equation because it would be Immoral to consider such realities

      Oh buddy.

    • Ron Unz Article says:

      Communism is terrible and people who egged it on in 1920 “with good intentions” deserve some of the blame for what happened. But you are giving a very tendentious reading of Scott’s ideas about what it would have been like to be an intellectual at the time. I take him as saying “there but for the grace of God…”

    • Nancy Lebovitz says:

      Link or quote for what Scott said about communism?

      • bean says:

        I believe it’s from the review of Chronicles of Wasted Time or possibly the post on the Fabian society. In either case, the OP is a gross mischaracterization of Scott’s position, which is that mapping his general positions back in time to that era, he would have been at high risk of being a communist, and it’s the failures of communism which have made that unattractive.

        Edit:
        It’s from the Fabian Society post. And the “no good arguments against communism” was more a description of the intellectual atmosphere at the time. A lot of the change is a new understanding of things like prices as signals that they just didn’t have then.

        • ec429 says:

          understanding of things like prices as signals that they just didn’t have then.

          Marshall published his Principles of Economics in 1890, which contains essentially modern marginal economics, supply and demand curves, the whole kit and caboodle. While this is six years after the Fabian Society formed, it’s still before any actual communist revolutions.
          And even Smith in 1776 recognised the existence of a co-ordination problem and that markets solved it (that’s what the “invisible hand” bit is about).
          And while it’s technically after the Bolshevik Revolution, Mises published the calculation problem in 1920, which I think is before you can reasonably argue that Communist outcomes influenced him.

          I’m pretty sure that the whole “Red Plenty” argument is wrong: the people who understood and believed classical economics were saying “no, really, you need the price system” right from the start; the 1950s, when “everything seemed to be going right for Russia”, is also when Henry Hazlitt published his novel Time Will Run Back, which is basically an economist’s treatise on why private property in the means of production is essential to solving the co-ordination problem and why this in turn is the central problem that any economy has to solve. Therefore, all the people who were saying anything even slightly nice about Communism were the people who either didn’t understand, or understood but rejected, classical economics, and were therefore people who went around saying things like “planned economies will outcompete free ones”.

          • bean says:

            I’m reporting Scott’s conclusions. I am not an economic historian, and while I was privately betting that David Friedman was going to pop up with a long treatise on how all of this was invented in Medieval Iceland or something like that, this kind of thinking can take a long time to filter out into the broader discourse. Scott was reporting his interpretation of a socialist writer’s report. Saying “it’s the bubble, not the actual knowledge” is a valuable contribution, but it wasn’t made by reasoned argumentation.

          • ec429 says:

            @bean:

            I’m reporting Scott’s conclusions.

            Yes; I hope my reply didn’t come across as attacking you; what you wrote is, AFAICT, an accurate summary of what Scott wrote, which in turn may well have been an accurate summary of how the world looked to the Fabians.

            I probably should have been more explicit that I was ‘saying “it’s the bubble”‘; as with physical bubbles, intellectual bubbles can hold very different atmospheres inside and out.

            Btw, having just noticed who I’m replying to, let me just say, Naval Gazing is awesome 🙂

          • bean says:

            Yes; I hope my reply didn’t come across as attacking you; what you wrote is, AFAICT, an accurate summary of what Scott wrote, which in turn may well have been an accurate summary of how the world looked to the Fabians.

            It felt a bit that way, but re-reading, that was my fault, not yours. I’m really tired today. Sorry.

            Btw, having just noticed who I’m replying to, let me just say, Naval Gazing is awesome 🙂

            Thank you very much.

          • J Mann says:

            Naval Gazing is awesome

            I can’t believe that I just got that pun.

        • jaimeastorga2000 says:

          It’s from his Tumblr.

    • Freddie deBoer says:

      Dude he literally tried to turn a discussion about the possible existence of aliens into an argument about the racial dynamics of Bush-era housing policy.

      • J Mann says:

        Worse, it totally missed the point of Scott’s post. If he had said that Bush-era housing policy had made the same kind of mathematical mistake that Sandberg et al. identified, then at least that would have been interesting.

        • Randy M says:

          I like Sailer, but I agree that he was trying to shoehorn his hobby horse into that thread, in a way that wasn’t terribly more relevant than reasoned’s post above.

        • Edward Scizorhands says:

          When I saw his first post “hmm, what else was like this where people were wrong?” I laughed because I said “oh I know what this is about” but figured that that was the end of it. People who know his shtick will know what he’s subtweeting about, the conversation doesn’t get derailed, everyone moves on. But he kept at it. Again. And again.

          If “people were wrong about X” is an excuse to bring up “oh oh oh, sensei, there was the time people were wrong about Y!!!” we’ll never get anywhere.

      • Nick says:

        This. The only way Steve’s comment was “directly related” is if lizard people run the government.

      • rlms says:

        You fool, they’re both about aliens.

        • The Nybbler says:

          How do you know they’re not native lizard people who have been living among us for millennia?

      • phil says:

        Scott is the one who turned it into a discussion about the housing crisis (in a direct reply to Steve). Steve has a well known theory on the housing crisis, one that happens to directly contradict the point that Scott was trying to posit.

        • LadyJane says:

          Perhaps the Drake Equation needs more factors than I included in my calculations 43 years ago?

          For example, the Bush Administration put together a sort of Drake Equation for… [stuff about the housing crisis that has absolutely nothing to do with the Drake Equation except by way of clumsy and forced analogy]

          Not really seeing how it’s Scott’s fault that the topic changed to the housing crisis. Or what relevance Sailer’s point about the Bush Administration had to the discussion about the Drake Equation.

        • John Schilling says:

          Scott is the one who turned it into a discussion about the housing crisis

          Looking over the relevant subthread right now, and this claim appears to be absolutely false.

        • christhenottopher says:

          Scott’s post said literally nothing about housing. The comment Sailer replied to had nothing to do with housing. The housing point wasn’t disagreeing with Scott but a digression into a current political controversy. You’ve even acknowledged this elsewhere in this comments section:

          Why do you provide your free labor here?

          Why does it matter?

          It wasn’t that far off the somewhat tangential point.

          The fact that this comment section can’t tolerate a clumsy culture war tangent (which as best I can tell only even inspired 1 follow up) is a strike against it

          (emphasis mine)

          It was a tangent not a point and culture war topics cause non-productive controversy a lot because they activate everyone’s “attack the enemy” receptors (toxoplasmosis of rage type stuff).

          There are still tons of conservative commenters here. Steve has his own writing and platform, he’s fine. You’ve made your argument and I haven’t seen anyone here be convinced conservative or not. Maybe it’s time to move on.

          • phil says:

            Steve will be fine obviously,

            I enjoy reading his comments,

            I’m disappointed

            ______

            Liberal/conservative isn’t the only dynamic at play here

            Just because there are other conservatives here doesn’t mean they have Steve’s perspective covered

    • RC-cola-and-a-moon-pie says:

      I think a big part of the problem is tone. Everyone, including you, is heavily influenced by his larger worldview and has blind spots based on his presuppositions and web of beliefs. I think that is a fair point but can be made in a less accusatory and politically charged way.

    • Deiseach says:

      Having been banned (temporarily and permanently) from places, got into my share of screaming rows, and have been accused of all manner of dreadful faults, flaws and sins, I think I can present myself as The Right-Wing Bigot that your putative progressives would excoriate.

      And I think that Sailer comment is for the birds. It’s got nothing to do with the Drake Equation*/Fermi Paradox, that is used merely as the starting-gate out of which he can ride his hobbyhorse to exhaustion and collapse, then continue on flogging the dead carcass.

      If Sailer wants to talk about “Group Here of humans who remained in the ancestral environment are genetically doomed to be dumber than Group There of humans who struck out as bold explorers in challenging environments”, let him do so. But let him not pretend this has anything to do with why there may/may not be alien civilisations out there – humans have colonised all available environments on our planet (except for as-yet-infeasible ones like the sea beds), so even the Peak Brave Explorers are now going to hit that environmental chokepoint and not be able to adapt to max out intelligence anymore. So too with the aliens; if the argument is “you can’t have a starfaring civilisation if everyone is staying in the same environment and adapting for other reasons than intelligence”, then we’ve hit that point now and will never be smart enough to be starfarers (is this why people are pinning their hopes on AI? It can scale the barrier to bigger brains that we can’t because now we’re optimising for “predators, prey and microorganisms that all co-evolved and adapted to that species which is (on the margin) going to put survival pressure on immune systems, bone density and running speed”?)

      Yes, even superior Smart Western Civilisation Genes are hitting that barrier of “we’ve now run out of new challenging environments to force us to adapt for brains not running fast”, so his argument over “and this is why some cultures aren’t visible in the galaxy” applies to us right this minute. We’re not going to be visible because we don’t have a new challenging environment (unless Musk’s Mars colonisation gets off the ground, and how long will we need that pressure to make enough of a difference for our descendants to get that much smarter that they will develop the starfaring civilisation that is visible in the galaxy? I have a feeling that it would take longer than it would for Martian terraforming and settlement to make the environment so much less challenging, the selective adaption for greater intelligence won’t have time to really get up and running).

      So you’re out of line for trying to peddle the old “one bold brave truth-speaker is yet again stifled for trying to tell it like it really is” on here.

      *I’m mildly surprised there is so much comment over the Drake equation; I’m old enough to remember being introduced to it by Sagan, being impressed by the seeming scientifically-rigorous nature of it, then later reading and hearing comments that no, it was mostly pulled out of thin air to serve a particular purpose and while it looked good, there were all the problems with it that have been pointed out, mostly that each step depends very heavily on the step before, and it’s all guesstimates. So that any debunking/dissolving is a big deal is, as I said, mildly surprising to me.
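
      To make the “each step depends very heavily on the step before, and it’s all guesstimates” point concrete, here is a minimal sketch in Python of the difference between multiplying one guess per factor through the Drake equation and carrying each guess’s uncertainty along instead. Every range below is invented purely for illustration; none of these numbers come from Drake, Sagan, or anyone in this thread.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Order-of-magnitude uncertainty: sample each factor log-uniformly
      # over a hypothetical range (all ranges made up for illustration).
      def loguniform(lo, hi):
          return 10 ** rng.uniform(np.log10(lo), np.log10(hi), n)

      factors = [
          loguniform(1, 100),    # R*: stars formed per year
          loguniform(0.1, 1),    # fp: fraction of stars with planets
          loguniform(0.1, 10),   # ne: habitable planets per such star
          loguniform(1e-6, 1),   # fl: fraction that develop life
          loguniform(1e-3, 1),   # fi: fraction that develop intelligence
          loguniform(1e-2, 1),   # fc: fraction that become detectable
          loguniform(1e2, 1e8),  # L: years a civilisation stays detectable
      ]
      N = np.prod(factors, axis=0)  # detectable civilisations, per sample

      # One multiplied-through "typical" guess looks confidently large...
      print("product of medians:", np.prod([np.median(f) for f in factors]))
      # ...while the full distribution leaves real probability on an empty galaxy.
      print("P(N < 1) =", np.mean(N < 1))
      ```

      The seemingly rigorous single number hides how the guesswork compounds; treated as distributions, the very same guesses are consistent with there being nobody out there at all.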

      • engleberg says:

        @I’m mildly surprised there is so much comment over the Drake equation-

        Me too; even in the sixties when Lem wrote Summa Technologia his treatment of the Drake equation was like a man hitting a boy. But after all, it looks sciency, and it’s fun to speculate about aliens.

      • Maznak says:

        Obviously you don’t believe that there might be some bioengineering to take over from natural selection – I believe that it is very likely, right around the corner. In geological time, it is practically NOW.
        On the other hand, maybe the smartest human geniuses alive today have hit the ceiling of possible human intelligence, having the best (for intelligence) available combination of genes from the gene pool. So… what about AI? 500 years from now, save some apocalypse, AI will probably be so smart that it is impossible to even imagine today. Again, from the point of view of geology or even history, 500 years is practically NOW already.

    • phil says:

      Does Scott think Steve will come back after 2 months?

      Does he think the comment section will be better if he doesn’t?

      ——

      No offense to anyone else who posts comments here, but Steve is among the small fraction of highest-profile commenters here.

      He usually gets paid by the word he generates.

      It’s not obvious on its face why he’d continue to provide free labor.

      • bean says:

        It’s also not obvious why he was providing free labor in the first place. If he changes his mind in the next two months, the comment section might be slightly poorer, but I don’t think that’s a reason not to ban him. His attempted thread hijacking was clumsy and directly culture war. This is not something that should be tolerated, even if the poster is way more prominent than Steve.

        • phil says:

          Why do you provide your free labor here?

          Why does it matter?

          It wasn’t that far off the somewhat tangential point.

          The fact that this comment section can’t tolerate a clumsy culture war tangent (which as best I can tell only even inspired 1 follow up) is a strike against it

          That it loses one its more unique commenters for at least two months due to that intolerance, is a real disappointment

          • bean says:

            It wasn’t that far off the somewhat tangential point.

            Yeah, it kind of was. There were two posts, both of which read to me as “You know what I like? Bush-era housing policy. I’m going to shoehorn that in to fit, and throw in culture war, too.”

            The fact that this comment section can’t tolerate a clumsy culture war tangent (which as best I can tell only even inspired 1 follow up) is a strike against it

            It’s not that we can’t tolerate it, it’s that we shouldn’t have to. If Steve wanted to talk about Bush-era housing policy in a culture war way, we have 3 open threads every two weeks where he’s more than welcome to do so. So why did he think it necessary to suddenly start talking about it in a thread on aliens, using the flimsiest of excuses? If he’d done a better job of tying his stuff into the existing thread, and not talked about race, he would have been fine. Anyway, it’s not like he’s gone from the internet.

          • phil says:

            As I mentioned above the first person to make the connection between the topic at hand and the housing crisis of 2008 was Scott.

            The first of Steve’s posts was a direct response to Scott’s post; the one that got him banned was roughly an expansion on the point made in the first.

            _______

            Ultimately this is Scott’s site, he can moderate it how he sees fit, I’m legitimately disappointed that he chose to moderate it like this though,

            You don’t seem to have gotten much out of Steve’s posts, you’re entitled to your opinion,

            I enjoy reading his comments though, he is one of <10 commenters whose comments I specifically search for,

            I'm disappointed I won't get to read his comments, this makes reading this blog a tangibly worse experience for me

      • The Nybbler says:

        Does Scott think Steve will come back after 2 months?

        He’ll be back. It is his destiny. He can’t resist popping in to comment on his favorite things now and then

      • Brad says:

        Good riddance.

      • rlms says:

        Highest profile doesn’t imply best.

      • moscanarius says:

        IIRC he has been banned before, and returned after the ban expired. I think he’ll be back soon.

  3. Montfort says:

    I can sort of buy the idea of the comments as a dojo – but there’s no personalized instruction or membership fees or belts or anything. The learning here is extremely self-directed, success is hard to judge objectively, and commenters come and go all the time. Still, even just an empty building with some mats where people can show up and practice is something.

    That is, if you wanted to design a place to practice “rationality” skills from the ground up, I’m not sure it would look like this. But the blog and comment section can serve multiple purposes at once.

    • Dominik Tujmer says:

      Agreed. You would probably have a curriculum, a timeline, tests, a system to update the curriculum depending on the actual results, different teachers for different things. But learning rationality is still a very young and very self-directed pursuit, so it’s understandable.

    • Nancy Lebovitz says:

      Not all martial arts have belts. They’re (mostly?) a Japanese thing.

      https://en.wikipedia.org/wiki/Black_belt_(martial_arts)

    • Ninety-Three says:

      I’m not sure dojo is the right metaphor here. SSC is more like a group of people who play pickup games of basketball in the evenings. They’re mostly showing up because it’s fun, and hey, maybe they’ll get a bit better at basketball.

    • Paul Zrimsek says:

      We are building a fighting force of extraordinary magnitude. We forge our spirits in the tradition of our ancestors. You have our gratitude.

    • HeelBearCub says:

      success is hard to judge objectively

      This is the entire problem, and the roots of the rationality movement both obfuscate and exacerbate said problem.

      Think for a second about practicing, whether it’s a martial art or math: the practice depends on being able to judge whether your attempt is correct, and on being able to repeat the attempt.

      Rationality was predicated not on the idea that you could properly calculate Bayesian statistical probabilities, but on the idea that you could arrive at the correct answer to novel problems. It doesn’t concern itself with settled issues, and therefore has an inability to generate replicable practice. Otherwise, there would just be an esoteric course of academic study that combined logic, statistics and rhetoric.
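
      For contrast with “novel problems”, the settled, drillable kind of exercise looks like this: a textbook Bayesian update with a single answer you can check and repeat. A minimal sketch in Python (the numbers are the standard mammography-style illustration, not anything from this thread):

      ```python
      def posterior(prior, sensitivity, false_positive_rate):
          """P(condition | positive test) via Bayes' theorem."""
          p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
          return sensitivity * prior / p_positive

      # 1% base rate, 80% sensitivity, 9.6% false positive rate:
      # the answer is surprising, but verifiable and repeatable.
      print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078
      ```

      Exercises like this come with an answer key, which is exactly what practice on novel problems lacks.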

  4. romeostevens says:

    In much the same way that scientific methods underspecify how you should search for and prioritize among promising classes of hypotheses, I fear this method does little to inform you about what areas are promising to pay attention to (scope sensitivity, etc.). I.e., if one can sharpen their rationality chops on anything, are some things much better than others, either for leveling the skill more quickly or for useful object-level results?

    • Deiseach says:

      The trouble is, the methods of rationality/rationalism being showcased rely very heavily on mathematical intuition/ability. So those like me who haven’t that are doomed to forever be irrational.

      On the other hand, even innumerate mouth-breathers like me can have a good time in the comments here! And there is generally such a wide range of interesting topics and people with their own areas of expertise, that don’t require you to be able to Do Hard Sums, that there really is something for everyone.

  5. pontifex says:

    SSC is the only place where I feel like my comments sometimes drag down the average comment quality. I feel guilty about that sometimes.

    • Dominik Tujmer says:

      You’re not alone. I regularly feel like an idiot in this setting, but that’s good cause it means I’m growing.

    • BlindKungFuMaster says:

      My feeling is that comment quality has been declining (probably because I’m posting more). So don’t worry. You’ll be a real boon to the community soon, if you aren’t already.

    • MawBTS says:

      Also, does anyone else have an IQ below the blog average (~140) and just feel like they’re polluting these hallowed halls with dumbness?

      • Montfort says:

        IMHO, SSC asks for a certain style of comment/argument and has a standard of conduct a little different from other sites, but raw intelligence is not really a prerequisite, except as necessary to adhere to them. If you’re reading and enjoying the posts, I’m fairly confident you’re smart enough.

      • xXxanonxXx says:

        That’s a little excessive, honestly. It’s the sort of attitude Scott was pushing back against here. There’s always someone smarter than you. I understand the sentiment though, and I’ve seen a number of comments in open threads to the effect of “I don’t have anything to contribute, so I just lurk.”

        I stick to anecdotes mostly, or interject to steer a conversation in a direction I’m interested in, just to see what people have to say.

      • MasteringTheClassics says:

        OTOH Deiseach has a self-diagnosed IQ of around 100*, and she’s probably the best commenter on here in terms of what the community would lose if any single commenter left. Meanwhile, the atmosphere has only been improved by less of the great and enlightened Vinay Gupta, brilliant though he be. IQ isn’t everything around here.

        *yeah, I don’t believe it either, but I’ll not be crossing her.

        • John Schilling says:

          yeah, I don’t believe it either, but I’ll not be crossing her

          And now I’m morbidly curious to see the Deiseach post citing, correctly and in appropriate context, authorities from St. Augustine of Hippo to G.K. Chesterton to back up a scorching argument of the form, “Deiseach really is a not-smart and you all are fools for doubting her!”

          I mean, my head will probably explode like an AI after a Kirk-speech, but it will be fun while it lasts.

        • Deiseach says:

          Deiseach has a self-diagnosed IQ of around 100

          ‘Scuse me, according to that one Raven’s Matrices site, it’s 99 and proud of it! And if I believe Richard Lynn, I’m straggling along with a bare 93 at best (like my fellow Southern Irish ingrate rebels against the glorious British Empire and Protestant Church).

          This person defends Lynn’s results, but if you believe the conclusions, the theory we are asked to accept is that we were dumb as stumps up till the 70s/80s, then in about two decades suddenly leaped up to be as normal as the English, making up that 13-point gap out of nowhere. (May I also say that if you’re going to rope in Hans Eysenck as your character witness for the reliability of the results, God help us all).

          Just looking at the studies that post uses, in 1990 we only managed to score 87, but in 1991 we got as high as 96. Cursed by pride at this lofty achievement of jumping 9 points in one year, in 1993 we dropped to 93 and even worse, 91 in the same year but presumably testing a different bunch of potato-exhuming red-faced leprechaun-botherers. We managed a respectable 95 in 2000, finally achieved a normal average of 100 in 2009 but backslid to 92 in 2012.

          Those results seem all over the place to me, but that’s my ignorance of statistics speaking. How we got smart, then got stupid again, then smartened back up – I have no idea.
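
          One way to see why those swings look fishy is the standard error of a sample mean: with IQ scaled to a standard deviation of 15, it shrinks quickly with sample size. A minimal sketch in Python, with purely hypothetical sample sizes:

          ```python
          import math

          # Standard error of a mean IQ score (SD = 15) at hypothetical sample sizes.
          for n in (30, 100, 1000, 3000):
              print(f"n = {n:>4}: standard error of the mean = {15 / math.sqrt(n):.1f} IQ points")
          ```

          Even at n = 30 the standard error is under 3 points, so jumps of 8 or 9 points between years suggest the studies differed in population, test, or norming, not just in the luck of the draw.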

          I can’t speak for Lynn’s most recent work but the original one that gets the much-quoted “Irish national IQ of 93 (or 96)” relied on conflating a grand total of 2 studies, one carried out in 1972 on about three thousand primary-age schoolchildren which got a depressing result of 87 (allegedly) and one a few years later carried out on a small number of adults which did better, so by combining the two he got an average of 93.
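
          If “combining” means a simple unweighted average, the arithmetic forces the adult study’s mean: it must have been about 99. A sketch of why weighting by sample size matters here (the adult sample size below is a made-up illustrative figure, since the thread doesn’t give one):

          ```python
          # Unweighted: (87 + x) / 2 = 93 pins the adult study mean at 99.
          child_mean, combined = 87, 93
          adult_mean = 2 * combined - child_mean
          print(adult_mean)  # 99

          # Weighted by sample size (adult n invented for illustration),
          # the three-thousand-child study dominates:
          child_n, adult_n = 3000, 75
          weighted = (child_mean * child_n + adult_mean * adult_n) / (child_n + adult_n)
          print(round(weighted, 1))  # 87.3
          ```

          An unweighted average counts the small adult sample as heavily as the three thousand schoolchildren, which is one more reason to treat the much-quoted 93 sceptically.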

          Lynn didn’t do the tests himself, he just lumped together the results of other people’s studies, so I’m not so sure this is gospel. But it gets quoted everywhere, even in Irish media.

          • bean says:

            There are apparently people working on estimating IQ from a writing sample, and I’d dearly love to see what they’d make of your writing. I suspect that the IQ tests you took were skewed by your issues with math, because this sort of thing does not look even vaguely like the work of someone who has an IQ below 100.

            Also, weirdly, googling “IQ from writing sample” gave me a bunch of hits about the CIA’s style guide for writing.

          • Deiseach says:

            I’d dearly love to see what they’d make of your writing

            The ellipsis abuse alone would drive them to drink 🙂

            The quoted studies are why I’m generally sceptical of what IQ tests measure, if they’re measuring it at all, and exactly how useful they are outside a particular cultural milieu (I know the Raven’s Matrices try to be culture-neutral, but there’s still some underlying mathematical tricks to solve problems such that schooling, practice and exposure to such problems would help increase correct answers and hence slant the IQ test away from ‘measure of natural untutored whatsit’). A nine point increase in IQ in just one year must mean that the two sample groups were so vastly different there’s no useful point of comparison, or that the methodology of one or both studies was banjoed. How we then lost eight points of IQ over three years is another puzzler.

            I’m certainly not claiming we’re a race of unacknowledged geniuses, but the claim that we would be so consistently below the average, especially as compared with the English, sounds a tiny bit fishy to me. There’s a useful study here which helpfully breaks down pupils attending English schools by ethnicity; the “white Irish” (not “Traveller of Irish heritage”) compare very well with the “white British” educational results. The argument could be made that this data comes from the 2000s decade when we Irish had suddenly gotten smarter, so it means little, but something must be going on if Irish in Britain can hold their own with British but are testing as stupider at home:

            Percentage of all students achieving 5EM by ethnic group, 2004-2013, where 5EM means “5+ GCSE A*-C or equivalent including English & Mathematics” (these are grades in the GCSE exam for 16-year-olds):

            Ethnic group    2004  2005  2006  2007  2008  2009  2010  2011  2012  2013
            White British   41.6  42.9  44.3  46.1  48.4  50.9  55.0  58.2  58.9  60.5
            Irish           46.7  50.7  50.1  52.6  57.0  58.0  63.4  65.9  66.9  68.8

          • Just looking at the studies that post uses, in 1990 we only managed to score 87, but in 1991 we got as high as 96. Cursed by pride at this lofty achievement of jumping 9 points in one year, in 1993 we dropped to 93 and even worse, 91 in the same year but presumably testing a different bunch of potato-exhuming red-faced leprechaun-botherers.

            You aren’t allowing for the short term effects on national IQ of the response of leprechauns to being bothered.

      • theredsheep says:

        I’m about there, but I’m still frequently mystified by the sheer aggregate of jargon, references, in-jokes, etc. that prevail here. Which is fine; it’s normal and healthy for a community to build up its own slang. I’m not here for the rationalism at all, not being what one would consider a rationalist. I’m not into transhumanism or AI, have no head for statistics, have only a vague notion of who the devil this Yudkowsky person is.

        I hang out here anyway, because it’s one of the most open-minded places I’ve run into. Someone could post “I think Hitler didn’t go far enough” in the comments, and the first reply would be a polite request for supporting evidence. The third or fourth would be an admonition to “steelman,” and by reply ten we’d have Wiki links to an obscure class of Gypsy bankers in one part of Bavaria. People would entirely forget the original argument to discuss the statistical distribution of mathematical talent as influenced by generations of horse-trading. The original troll would grumble, then wander off bewildered.

        Honestly, I’m here for the diversions, digressions, etc. that touch on what I know. When I come across a thread where I can follow what’s going on, I seize my chance. And I comment when I feel like I have something sort of relevant to say, and if it isn’t really steeped in communal lore, I figure it probably won’t hurt, as long as I’m not spamming.

        • Deiseach says:

          Someone could post “I think Hitler didn’t go far enough” in the comments

          Ouch. We did have that one person, but they turned out to be trying to troll us. Though in terms of bait, I suppose this is one of the few places where “watch this four-hour video” would garner enough “okay, to give you a fair chance to make your argument” responses to be worth it 🙂

          • bean says:

            No, the response to “watch this video” is always a request for a transcript, followed by general agreement that text is superior for this purpose to video. But “read the transcript of this 4-hour video” would get responses.

          • theredsheep says:

            Didn’t run into that one. I was actually thinking of that time somebody called SSC and its commenters “insufferably autistic” or something, and Scott posted it, and thousands of dispassionate words were typed arguing what exactly “autistic” was meant to imply in this particular context. Nobody, AFAICT, bothered to get offended.

          • Jiro says:

            That itself is a sign of being “autistic” in the likely intended sense, even if not literally autistic. It’s actually important to respond to social cues. Treating an insult as if it’s solely a logical argument and ignoring its function as an insult is a bad idea.

          • Berna says:

            @Jiro:

            Treating an insult as if it’s solely a logical argument and ignoring its function as an insult is a bad idea.

            Why is it bad, and what should one do instead?

          • Deiseach says:

            Treating an insult as if it’s solely a logical argument and ignoring its function as an insult is a bad idea.

            Though there is always the social gambit of ignoring an insult and letting spectators draw the inference that your status is unassailable, that the person attempting to insult you is the equivalent of a drunk crazy homeless person yelling obscenities on the street, and that by refusing to engage with them you are denying them the satisfaction and power of having the upper hand to make you sputter and fume and attempt to deny the charge, or get into a mutual slanging match.

            It’s tough to do and does heavily rely on you being able to pull off “I am too high status for a petty fleabite like this to affect me”, but if you can do the de haut en bas bit it’s effective.

            Granted, in some contexts the only acceptable and effective response is a punch in the snoot (literal, metaphorical or verbal) to the person insulting you, but oftentimes the person wants to make you angry/sad/run around like a headless chicken overcome by emotional response, and apparently clueless missing-the-point literalism (‘no, I was trying to say you’re a big poopy head’) deflates their fun.

          • theredsheep says:

            Jiro, that’s what made it so funny, at least to me and my wife. But also awesome. Codex DGAF.

          • carvenvisage says:

            @Jiro

            That itself is a sign of being “autistic” in the likely intended sense, even if not literally autistic. It’s actually important to respond to social cues. Treating an insult as if it’s solely a logical argument and ignoring its function as an insult is a bad idea.

            As usual with people advising others to studiously avoid incidental trappings of autism, this is a pretty ‘autistic’ way of looking at things:

            1. Treating an insult as beneath notice/contempt usually functions as a response on the level of the insult (whether intended to or not).

            2. ‘autistic’ people have a comparative advantage for dealing with insults in this and other dismissive/stonewalling/contemptuous ways

            3. advising people to defang themselves so they can more neatly fit into your cosy paradigm is a gross provocation – the likes of which might be excused by extreme youth or actual autism but certainly isn’t the mark of a worldly-wise socialite to throw randomly about.

          • Jiro says:

            Treating an insult as beneath notice/contempt usually functions as a response on the level of the insult.

            That only works if you obviously understand how serious an insult it is but are ignoring it anyway. If you seem to be ignoring the insult out of a lack of the social skills needed to deal with insults, that won’t work.

          • Toby Bartels says:

            Even if that's so, it's not what was happening here. Scott said ‹Here's an insult that I got.›, and then we all analysed it. So we obviously knew what it was; we just didn't care.

          • Jiro says:

            Stating “this is an insult” doesn’t count as really understanding that you’ve been insulted. Understanding that you’ve been insulted doesn’t mean “I am capable of classifying sentences into insults and non-insults”; it means reacting as a human normally does to an understood insult.

            This does not mean you need to insult them back or anything like that. There’s a wide range of ways in which normal humans react. But “I dispassionately consider the merits of the proposition literally expressed by the insulting sentence” is not one of them.

          • theredsheep says:

            What’s the desired outcome that results from handling an insult “correctly”? In this case, nobody here was upset or demoralized, and they got an interesting discussion out of it. What were they supposed to do, call them doodyfaces?

          • Jiro says:

            Okay, let me rephrase. It’s a bad idea if you don’t want people to reasonably think of you as autistic.

          • carvenvisage says:

            Understanding that you’ve been insulted doesn’t mean “I am capable of classifying sentences into insults and non-insults”; it means reacting as a human normally does to an understood insult.

            If stepping outside normal reactions causes you to fear for your status, then it’s already catastrophically low, and if you’re proceeding from that as a base assumption, what are you doing dispensing social advice?

            reacting normally =/= reacting effectively

            _

            There is a big difference between being an actual designated scapegoat and a potential one, and if your advice is aimed at skulking around on that unfortunate borderland, it might have some twisted highly indirect merit (I’d still argue against it, but maybe), but advising everyone to act as if any proclivity towards social obliviousness relegates them to said borderland is just defeatism on other people’s behalf.

            _

            (Surplus argument: If you’re 6 foot 5 and weigh 300 lbs of muscle, a normal person is going to accidentally scare people all the time, and you might e.g. try to be more restrained and aware of the effect your presence has. Giving so little a shit about petty nonsense that you have trouble perceiving insults isn’t ‘normal’ either.)

          • Deiseach says:

            There’s a wide range of ways in which normal humans react. But “I dispassionately consider the merits of the proposition literally expressed by the insulting sentence” is not one of them.

            So you are saying the only permissible way to react to an insult is to be outraged and angry and to hit back? Yes, that is one way of doing it, and this is how honour cultures handled it (to the extent that even what was perceived as a mild or unintentional insult could not be ignored or smoothed over by an apology, so both parties had to duel and you could end up with one person dead over saying ‘I don’t like the colour blue’ when the other party was wearing a blue suit).

            But ignoring/playing Logical Positivist (“you say I am a big poopy head? hmm, but is it possible for a cranium to be made exclusively out of excrement? let us see if there are any examples in nature to back up your claim”) is also a way of dealing with an insult. If I know that the person hopes to evoke anger and upset in me, then by denying them such a response I am taking away their power to affect me. And by causing them to dance with frustration over “no, no, you’re not reacting correctly, you’re supposed to be upset!”, I am the one who is controlling this interaction and exhibiting superior social power.

            I really don’t see why “you were insufficiently upset by that insult! why didn’t you break his legs for saying that!” should be the one-size-fits-all response. And if that means that most of us on here are not “normal humans”, then so be it!

            Jiro, you sound like an Enneagram Type Eight: the person for whom anger and its expression is authenticity, and when others say (in a situation where an Eight sees the only possible reaction as anger honestly expressed) “No, I’m not angry” or “I am angry but expressing it differently”, they see that as deceitful, dishonest, dissembling, and untrustworthy – the ‘proper’ response is to hit back when you’ve been hit, what kind of weak sauce response is this, are you too dumb to see you’ve been insulted or are you too sly and pulling some con?

          • quanta413 says:

            Okay, let me rephrase. It’s a bad idea if you don’t want people to reasonably think of you as autistic.

            Sure, but why should we care what people think about that?

            Lots of things here are going to scream “autistic” to people who don’t know what autism is and think it means something like “nerds who are way too into boring technicalities about boring things”. It’s ok if these people think everyone here is autistic! As long as that makes them feel better. I mean, I guess I’m not concerned if it makes them feel worse, but I hope it doesn’t in the same way I hope it’s sunny yet not too warm each day.

          • Nancy Lebovitz says:

            Have a story about ignoring insults.

            Once or twice on usenet, I got good results by ignoring insults, but I believe it was a matter of a poster (possibly two) with bad manners rather than trolls.

            In any case, what I did was ignore insults and only reply to content, and the person (or two) eventually dropped the insults. There may well have been other things going on, but I like to think I trained them by only giving reinforcement for the behavior I wanted.

            It was a good bit of work, and as I recall it was more work to think of intelligent responses to content than to ignore the insults.

          • Jiro says:

            So you are saying the only permissible way to react to an insult is to be outraged and angry and to hit back?

            No, there are other things you can do. In a lot of contexts you can just ignore it.

            But that’s different from being told “Your mother is a whore!” and responding by steelmanning his position and trying to gather evidence that may or may not indicate whether your mother is really a whore.

          • quanta413 says:

            But that’s different from being told “Your mother is a whore!” and responding by steelmanning his position and trying to gather evidence that may or may not indicate whether your mother is really a whore.

            And your father smelled of..?

            I couldn’t steelman insults to family or friends on the off chance that things propagate from the internet to the real world and I hurt their feelings. But I might steelman the claim that I’m a whore. Or a prude. Or a prudish whore. If it was interesting enough.

          • Jiro says:

            But I might steelman the claim that I’m a whore.

            Assuming a central set of circumstances under which you’ve been called a whore, that’s “autistic”, even if you’re not literally autistic. Normal people don’t act that way.

            Also, reacting that way is considered a win for the person insulting you, and has corresponding consequences (encourages further insults, lowers your status, etc.)

        • LadyJane says:

          I remember there was one poster a while back who kept saying that the Kulaks deserved their fate, which comes pretty close to “Hitler didn’t go far enough” in my view. People still engaged with him in good faith, and the ensuing political argument didn’t get particularly heated.

          More recently, someone was arguing that U.S. forces should start sabotaging water supplies near the Mexican border so that illegal immigrants will die in the desert instead of making it into the country. That’s not quite as bad as “Hitler didn’t go far enough” or “Kulaks deserved it,” but I don’t consider it that much better, except perhaps in terms of sheer scale.

          • Jiro says:

            By that reasoning, if I own a grocery store and I lock it up every night so that starving people can’t steal the food, and someone dies of starvation because they can’t steal the food, I’m a murderer.

            If you’re required to let thieves steal merely because they need the thing they are stealing, you destroy the ability of people to own property at all. What if instead of taking water, they were stealing cars and selling the cars for money to buy water? Would we have to allow that too?

          • Lambert says:

            Whose property is the water?
            It’s not stealing from you if you never owned it.

            Or were you making an argument on a more metaphorical level, where the ‘property’ is the USA itself?

          • Jiro says:

            Yes, the US’s water belongs to the US. The US has no obligation to keep it easy for someone who doesn’t belong to come in and help themselves to some.

          • theredsheep says:

            You say that as though we were undergoing a desperate water shortage and the little drinks given to Mexicans caused us lasting harm. Water falls from the sky. In some parts of the country, it’s tight due to excessive agricultural use, but if it’s that tight, I’m surprised they haven’t invented Dune-style stillsuits.

          • Jiro says:

            I’m pretty sure that one or two food items stolen from a grocery store also won’t cause lasting harm. That still doesn’t mean the grocery store is acting immorally if it keeps starving people from stealing its food.

            Furthermore, the Mexicans would not need the water if they weren’t trying to violate the law; if they violate the law, don’t run into any water, and therefore die, that’s their own fault. We are not required to keep them from killing themselves by making sure our water is available for them.

          • CatCube says:

            The water in question is usually set out by organizations specifically attempting to facilitate crossing. So it wasn’t stolen from anywhere in any meaningful sense. There’s a case to be made for littering, I guess.

          • Lambert says:

            It belongs to the people who bought the water, not the US gov’t.
            Would it be different if the water came from a well in Mexico?

          • LadyJane says:

            @Jiro: I think you’re misunderstanding the situation. The issue is not that immigrants are stealing water from the government and the government is trying to stop them from doing so. The issue is that private citizens and organizations are leaving out water supplies to prevent immigrants from dying, and the government is removing or destroying those water supplies to ensure that immigrants are at higher risk of dying.

            The government is not passively allowing people to die in order to protect its property rights. It’s violating property rights in an active attempt to cause people to die, or at least to discourage their behavior by making their risk of death significantly higher.

          • quanta413 says:

            There’s no property right to stick your stuff in the middle of the desert and not have it interfered with if you don’t own that desert. Especially with the explicit purpose of aiding and abetting breaking the law.

            There are valid moral arguments against possibly causing more deaths. There are valid arguments that current immigration law is pretty subpar. But property rights to just leave your stuff sitting around wherever don’t really come into it.

            If I left a case of water just sitting in the middle of the sidewalk for a day, I don’t really have any right to expect it to be there when I come back.

          • John Schilling says:

            There’s no property right to stick your stuff in the middle of the desert and not have it interfered with if you don’t own that desert.

            Yes, there is. If I park my jeep in a bit of federally-owned public desert that isn’t explicitly posted “no trespassing” or the like, I have a right to not have anyone – not even the federal government – drive or tow it away or vandalize it in place. If I set up my tent and leave it there while I go on a day hike, it had better be there when I get back. And if I decide to leave a cache of food and water for when I get back from my hike – or for my friends when they get back from their hike – that’s still my food and water, to do with as I please and not be stolen or vandalized. Property rights, while harder to enforce in practice, are not voided in principle by the property being left untended in a public area.

            At least, not in the general case. You are trying to make a special case that property intended to prevent illegal immigrants from dying in the desert is exempt from this general respect for private property and can be vandalized. You are trying to justify this by noting that the immigrants in question are committing an actual misdemeanor, which you would rather they die than get away with. And you are proposing to take active measures to kill them, in defiance both of their right to life and of your fellow American citizens’ right to dispose of their private property as they see fit. Because this is different, this is special.

            And you really don’t understand how everyone who isn’t a hardcore Trumpist looks at you and sees only something especially monstrous?

          • LadyJane says:

            @John Schilling: Yes, exactly. Thank you.

            I’m always amazed by how many otherwise libertarian-minded people will suddenly abandon all of their principles when it comes to immigration, and then go through all sorts of bizarre and contrived mental gymnastics to explain how they’re not really abandoning their principles. (I have my suspicions as to why, of course, but perhaps this isn’t the best place or time to discuss them.)

          • Toby Bartels says:

            @ Jiro :

            So your position is that the federal government owns the country, and without this being recognized, ‘you destroy the ability of people to own property at all’, right? How is this different from the Soviet Union?

          • HeelBearCub says:

            @John Schilling:
            Cogent, coherent and well argued. Huzzah.

          • quanta413 says:

            @John Schilling

            Yes, there is. If I park my jeep in a bit of federally-owned public desert that isn’t explicitly posted “no trespassing” or the like, I have a right to not have anyone – not even the federal government – drive or tow it away or vandalize it in place. If I set up my tent and leave it there while I go on a day hike, it had better be there when I get back. And if I decide to leave a cache of food and water for when I get back from my hike – or for my friends when they get back from their hike – that’s still my food and water, to do with as I please and not be stolen or vandalized. Property rights, while harder to enforce in practice, are not voided in principle by the property being left untended in a public area.

            At least, not in the general case. You are trying to make a special case that property intended to prevent illegal immigrants from dying in the desert is exempt from this general respect for private property and can be vandalized. You are trying to justify this by noting that the immigrants in question are committing an actual misdemeanor, which you would rather they die than get away with. And you are proposing to take active measures to kill them, in defiance both of their right to life and of your fellow American citizens’ right to dispose of their private property as they see fit. Because this is different, this is special.

            Property rights do not have infinite shelf life upon abandonment of property. All of your attempted counterexamples involve specific property that it is understood you will go back and retrieve, or that you have formed an explicit agreement with a particular person to retrieve. It is not the general case that you can just leave things wherever for an unspecified amount of time.

            Also note that if you park your jeep somewhere that wasn’t marked for public parking or an area where the government explicitly allows it, your jeep can and would be towed, likely the same day.

            Upon abandonment of your property for someone else’s use without any legal contract or common law, all bets are off. Your property rights become irrelevant because you ceded them.

            If the people who take your water are the border patrol or someone smuggling immense amounts of cocaine into the U.S. instead of the intended down-on-their-luck illegal immigrants, then tough luck. You abandoned it.

            You are begging the question by assuming that property rights apply in cases that legally would not be recognized. The government wouldn’t legally recognize an unsanctioned loan agreement you had with the mafia even if the agreement was an improvement over the status quo for both you and the mafia. But this is even worse, because you don’t even have a specific agreement with a particular person you’ve left the water to.

            And you really don’t understand how everyone who isn’t a hardcore Trumpist looks at you and sees only something especially monstrous?

            Cool off. I do not condone slicing open water caches. But I don’t accept incoherent arguments for why it’s wrong to cut open water caches either. Shoehorning everything into a libertarian framework needs to be less lazy than “I can just drop stuff wherever I like for whatever purpose for an unspecified amount of time never to be picked up by myself again without making any arrangements that would be understood as a contract (verbal or otherwise) and then have a right to expect that stuff is used how I would like.”

            That’s why I explicitly said there are other valid arguments against cutting water caches. But a libertarian property rights argument against it is bullshit. The property rights argument for cutting is stupid too. Property rights don’t dictate that you should cut a cache; they just say that you can without violating property rights. But I can do lots of terrible things without violating property rights.

            @LadyJane

            I’m always amazed by how many otherwise libertarian-minded people will suddenly abandon all of their principles when it comes to immigration, and then go through all sorts of bizarre and contrived mental gymnastics to explain how they’re not really abandoning their principles. (I have my suspicions as to why, of course, but perhaps this isn’t the best place or time to discuss them.)

            (A) I’m only vaguely libertarian.
            (B) I’m irritated that your argument is terrible and shoehorns things into a libertarian framework in a way that makes no sense.
            (C) Make a coherent moral argument instead of jamming everything into a property rights framework. Make an actual argument for why my claim is mental gymnastics and yours is not.

          • Matt M says:

            I’m always amazed by how many otherwise libertarian-minded people will suddenly abandon all of their principles when it comes to immigration, and then go through all sorts of bizarre and contrived mental gymnastics to explain how they’re not really abandoning their principles.

            It’s not “bizarre and contrived” at all.

            Under our current laws and system, the federal government is the rightful “owner” of the lands that constitute the US border, and therefore, it has the right to determine who is allowed to cross the border and who is not allowed to cross the border.

            Libertarians can (correctly) argue that it shouldn’t own said land, but the current reality is that it does.

            To the extent that libertarianism requires a healthy respect for property rights, the only relevant question for border enforcement is “who owns this land?” And as of today, the federal government owns this land.

            I welcome any leftists, neocons, or anyone else who would like to start a discussion on transferring these lands (the border, and all other federally managed “public” lands) to state, local, or better still, private control. You guys up for that?

          • Toby Bartels says:

            @ Matt M :

            Under our current laws and system, the federal government is the rightful “owner” of the lands that constitute the US border,

            That's not true, or at least it's not true if you remove the scare quotes, so what exactly are you claiming here?

          • quanta413 says:

            That’s not true, or at least it’s not true if you remove the scare quotes, so what exactly are you claiming here?

            It’s not literally true but it’s a pretty good approximation for the discussion at hand. The sovereignty they exercise is more powerful than regular property rights (and probably less just). The government can decide what is valid public use, where, when and whether people are allowed to enter that land from the other side of the border, etc.

            And they can more easily search you within what… 50 or 100 miles of the border or some such? I can’t as easily extend my rights 5 feet outside of my house.

          • Jiro says:

            And if I decide to leave a cache of food and water for when I get back from my hike – or for my friends when they get back from their hike – that’s still my food and water, to do with as I please and not be stolen or vandalized.

            And if you decide to leave a bomb there so that your friends can pick it up later and toss it into a bank vault, expect the government to take the bomb away. Illegal immigration is, by definition, against the law, and if you leave things on public property to be used in helping people violate the law on that property, expect them to be confiscated.

          • CatCube says:

            @John Schilling

            And you really don’t understand how everyone who isn’t a hardcore Trumpist looks at you and sees only something especially monstrous?

            It’s interesting to drag Trump into this, as the POV which kicked off that particular donnybrook in 105.0 is one I’ve held since Trump was a Democrat, donating to Hillary Clinton. It’s totally orthogonal to the man, since I didn’t vote for him in the last election, and probably won’t pull the lever for him in the next one–ideally, he’d get primaried, and any of the other candidates in the last election would be preferable, but you’d already see rumblings if that was going to happen. (I’d also at least consider it if the current court-packing idiocy looks like it’s gaining steam.)

            Would Trump agree with the Border Patrol dumping out water, anyway? We know he’s got a chubby for large, expensive civil works on the border, but this is a different matter. I don’t follow what the guy says, so it’s possible that there was a water dumping incident recently and he defended the agents caught doing it, but all the ones I recall date back some 5-8 years.

          • Toby Bartels says:

            @ quanta413 :

            The sovereignty they exercise is more powerful than regular property rights (and probably less just).

            Agreed, so how is this different from the Soviet Union?

          • quanta413 says:

            @Toby Bartels

            Agreed, so how is this different from the Soviet Union?

            I’m not arguing about whether or not the system is moral. I’m arguing over whether or not the system violates property rights in the case of confiscating or cutting water caches being used to help people get across the border. I say it does not, and the question of morality is orthogonal.

            I would be well within my property rights to move water someone else put on a property I owned in the desert near the border into a lost and found box inside my house, with a sign where the water had been saying “please ring bell outside gated house with guard dogs to claim the water you accidentally left on my property, kind stranger! Also tell me the container it was in so I can know I’m returning it to its rightful owner.” Only a fool would then come ring my doorbell, for fear I might call the border patrol on them. Even if I was just clueless and really did think someone had forgotten something, they couldn’t know that.

            This might end up with people dying by the same logic as cutting or confiscating water caches that happen to be on public land. This might be morally wrong behavior on my part if I did this to make crossing the border dangerous and wasn’t clueless, but not because it violates property rights. After all, clueless me didn’t violate any property rights by moving things somewhere safe for whoever accidentally forgot their water. I don’t see how property rights hinge on my intent here.

            And since I’m not a hardcore libertarian, I support the state having some powers that private citizens don’t have. National defense, running the legal system, border control, the ability to collect taxes, etc. I accept a certain amount of abuse will happen as being the least possible evil. If you want to have an argument like “anything besides anarcho-capitalism is the moral equivalent of mass starvation and gulags”, I guess we could have that? But I’m not super interested in that debate unless you’ve got a really compelling argument I haven’t seen before.

          • Toby Bartels says:

            @ quanta413 :

            I’m arguing over whether or not the system violates property rights in the case of confiscating or cutting water caches being used to help people get across the border.

            Yes, and you conclude that it doesn't, in part by arguing that government control that is exercised so strictly that it amounts to de-facto ownership acquires a property right nullifying that of the putative owners. (Well, Matt M seemed to argue that, and you defended it.) By this reasoning, dekulakization also did not violate property rights; the Soviet government exercised such control over the uncollectivized peasants' farms that the peasants did not really own them, but were merely recalcitrant renters refusing to comply with the terms of the revised lease, resulting in their forcible eviction. (I keep bringing up the Soviet Union because that's how we got into this discussion; LadyJane suggested that sabotaging water caches was similar to dekulakization except for scale.)

            Incidentally, although I am a hardcore libertarian, I am not an anarcho-capitalist, because I am a left-libertarian. So I don't think that property rights should be absolute, and I wouldn't be at all happy with a landowner sabotaging water caches on their land either. But at least someone who defended that on the basis of property rights would be using ‘property’ in its usual sense, and I would be able to see a distinction between their position and that of the Soviet Union. ETA: Actually, you seem to have shifted to this yourself, based on the examples in your latest comment, away from the position that I inferred as Matt M's. I agree with you that this is not really the same kind of thing as dekulakization (much less the Holocaust, which Lady Jane also mentioned), even ignoring scale.

          • ana53294 says:

            I was wondering if you could base the legality of the water caches on the obligation of witnesses to do what they reasonably can to rescue somebody in danger (this usually means that the minimum requirement is to call emergency services to deal with it; but if you pass by a woman getting beaten in the street, don’t call the police, and a witness saw you pass by, you can be prosecuted in Spain).

            But it seems that there is no such obligation in the US, except for Florida, Massachusetts, Minnesota, Ohio, Rhode Island, Vermont, Washington, and Wisconsin, and none of them are at the border.

            When the obligation to help was explained to me, I was told it was based on Roman law. I couldn’t find any reference to it in the UK, either, so I guess this is one of those occasions where Anglo-Saxon law differs from European law (Germany, France, Italy, Greece, Portugal and even Russia have this legal principle).

          • bean says:

            If I park my jeep in a bit of federally-owned public desert that isn’t explicitly posted “no trespassing” or the like, I have a right to not have anyone – not even the federal government – drive or tow it away or vandalize it in place. If I set up my tent and leave it there while I go on a day hike, it had better be there when I get back. And if I decide to leave a cache of food and water for when I get back from my hike – or for my friends when they get back from their hike – that’s still my food and water, to do with as I please and not be stolen or vandalized. Property rights, while harder to enforce in practice, are not voided in principle by the property being left untended in a public area.

            These aren’t absolute rights, though. Yes, I can absolutely leave my jeep while I go for a day hike. I can leave it for a week or two, although I’ll put a note in it for the rangers so they know when to expect me back. I can leave my tent for the day. I can leave a cache of food for a while. (I’m not up on caching etiquette, but I suspect those get notes, too.) But if I park my jeep and come back 6 months later, I shouldn’t be surprised that it’s gone. And leaving a cache for some random passerby who may or may not ever come along seems very different from leaving one for my friends to pick up next week. If I dumped random empty jugs around on public land, I’d be arrested for littering, and rightly so. Why does that change if I fill them with water and write “cache” on the side?

          • Jiro says:

            I was wondering if you could base the legality of the water caches on the obligation of witnesses to do what they reasonably can to rescue somebody in danger

            If that’s legitimate, then it would really be an obligation. People would be legally required to leave out caches of water for illegal immigrants. Are you sure you want to go this route?

          • Matt M says:

            Yes, and you conclude that it doesn’t, in part by arguing that government control that is exercised so strictly that it amounts to de-facto ownership acquires a property right nullifying that of the putative owners. (Well, Matt M seemed to argue that, and you defended it.)

            I am merely responding to the assertion (made here by LadyJane, but frequently made by others in all sorts of venues as well) that libertarianism defaults to an open-borders position, and that anyone who does not favor open-borders cannot properly call themselves libertarians.

            To be clear, I think this is complete and total nonsense.

            Libertarianism defaults to “open borders” in a simple and elegant manner – of respecting property rights. If I own my property, I have the right to host whichever guests I choose, and the state has no business telling me who I can or cannot have on my property. That is the libertarian argument for “open borders” and it is one I happen to agree with.

            That said, I cannot demand that anyone else make their property available to assist my preferred guests in reaching my property. If Undocumented Jose wants to visit me, he is free to do so with my permission. But he must also obtain the permission of each and every property owner along the route he intends to travel to reach me.

            In the US, as currently constructed, the government maintains a claim of ownership or sovereignty over all of the routes Jose might take to reach me. Anyone other than full-blown AnCaps would seem to respect those claims. Because after all – who would build the roads?

            I, however, happen to actually be a full-blown AnCap. I don’t think this claim of ownership or sovereignty by the state is legitimate. I see no particular reason to respect it.

            But even with that concession, it is non-obvious to me that “open borders” would still be the default. Because even if we strike out the state’s claim to ownership of border property, airports, public roads, etc., the actual owner then becomes unclear. Ideally all of these assets would be privatized and clarity would be established – but that isn’t happening any time soon.

            If we were to treat these lands and assets as “unowned,” that doesn’t necessarily help the situation either. I suppose one could make an argument that immigrants and coyotes are homesteading unowned lands by leaving water caches, which then makes those caches their rightful property which nobody else has the right to disturb. Of course, I think if you opened those particular floodgates, there is no shortage of right-wing militia types who might choose to “homestead” these lands in a manner you might find slightly less pleasant.

            If nobody owns the land in question, your right to leave water caches is not any more significant than someone else’s right to destroy them.

            Note that this is one of the reasons I am an AnCap in the first place. Establishing clear property rights is a great and easy and effective way to solve many such political disputes, including this one.

          • LadyJane says:

            @Matt M:

            In the US, as currently constructed, the government maintains a claim of ownership or sovereignty over all of the routes Jose might take to reach me. Anyone other than full-blown AnCaps would seem to respect those claims. Because after all – who would build the roads?

            The brilliant Jeffrey Tucker addresses this exact argument in his article on libertarian brutalism. (While Tucker is an anarcho-capitalist, his argument here is not framed in anarcho-capitalist terms and is just as applicable within minarchist, classical liberal, and moderate libertarian frameworks.)

            I’ve heard many libertarians postulate that public spaces ought to be managed in the same way private spaces are. So, for example, if you can reasonably suppose that a private country club can exclude people based on gender, race, and religion – and they certainly have that right – then it is not unreasonable to suppose that towns, cities, or states, which would be private in absence of government, should be permitted to do the same.

            In fact, it has been claimed, the best kind of statesmen are those who manage their realm the same way a CEO manages a corporation or the head of a family runs a household.

            What is wrong with this thinking? It is perhaps not obvious at first. But consider where you end up if you keep pursuing this: there are no more limits on the state at all. If a state can do anything that a private home, a house of worship, a country club, or a shopping center can do, any state can impose arbitrary rules, conditions of inclusion, or codes of speech, dress, and belief, including every manner of mandate and prohibition, the same as any private entity does. Such a position essentially belittles 500 years of struggle to restrain the state with general rules, from Magna Carta to the latest rollbacks in the war on drugs.

            The whole idea of the liberal revolution is that states must stay within strict bounds – punishing only transgressions against person and property – while private entities must be given maximum liberality in experimentation with rules. This distinction must remain if we are to keep anything that has been known as freedom since the High Middle Ages. Through long struggle, we managed to erect walls between the state and society, and the struggle to keep that wall high never ends. The notion that public actors should behave as if they are private owners is an existential threat to everything that liberalism ever sought to achieve.

          • Matt M says:

            I’ve read that article. It was probably the last straw that sent me from being a huge fan of Tucker (I own several of his books and have donated to him before, actually) to considering him a threat to freedom and liberty.

            He, like many other left-libertarians, totally jumped the shark of anti-Trump virtue signaling to the extent that he’s totally turned on the most consistent group of libertarians on Earth (the Mises crowd) and is now happily smearing them as Nazis. No thanks.

          • LadyJane says:

            By the Mises crowd, do you mean the Rothbard/Rockwell/Hoppe paleo-libertarian crowd that’s been dragging poor Ludwig’s name through the mud? I have a great amount of respect for Mises himself, and for some of Rothbard’s early work. However, Rothbard took a hard turn to the cultural right in his later years, attempting to wed libertarian economics to social conservatism and nationalism while abandoning the globalist, cosmopolitan, and socially liberal principles of his mentor. As a result, the Mises Institute has become a paleo-libertarian think tank promoting views that Mises himself likely would’ve been disgusted by. (See: http://thejacknews.com/featured/libertarians-alt-right-ludwig-von-mises-would-not-like-institute/)

            I’m also a little baffled that you consider Tucker to be a “left-libertarian,” unless you’re just using that as a catch-all snarl term for libertarians you disagree with. The term left-libertarian has traditionally referred to libertarians who reject capitalism in favor of far-left economic systems like socialism or syndicalism, but lately I’ve seen paleo-libertarians use it to refer to right-libertarians (i.e. free-market capitalist libertarians) who are liberal or progressive on social and cultural issues, which basically dilutes the term into meaninglessness. Tucker is about as far to the economic right as you can possibly get, and seems fairly neutral on culture war issues, so calling him a left-libertarian seems misguided at best.

            Finally, and most importantly, which of the views expressed in Tucker’s article do you actually disagree with? Why do you reject Tucker’s interpretation of libertarianism to the point where you consider him a threat to freedom and liberty? (I’m interested in hearing criticisms of his actual object-level views, not just “I’m pissed at him because he’s pandering to the Blue Tribe and telling the Red Tribe to fuck off, and I like the Red Tribe better.”)

          • Matt M says:

            As a result, the Mises Institute has become a paleo-libertarian think tank promoting views that Mises himself likely would’ve been disgusted by.

            Uh huh, sure. I’ll take the opinions of his wife, friends, and students over some random journalist whose major complaint (much like Tucker’s) seems to be: But those guys are raaaaaaaaaaaaaaaacist!!!

            The Mises Institute offers no declaration of which social values one must follow, which is what makes them legitimate libertarians. It’s Tucker and the other left-libertarians who insist that libertarianism requires SJW stances on various culture war issues.

            Hard pass.

          • quanta413 says:

            @Toby Bartels

            Yes, and you conclude that it doesn’t, in part by arguing that government control that is exercised so strictly that it amounts to de-facto ownership acquires a property right nullifying that of the putative owners. (Well, Matt M seemed to argue that, and you defended it.) By this reasoning, dekulakization also did not violate property rights; the Soviet government exercised such control over the uncollectivized peasants’ farms that the peasants did not really own them, but were merely recalcitrant renters refusing to comply with the terms of the revised lease, resulting in their forcible eviction. (I keep bringing up the Soviet Union because that’s how we got into this discussion; LadyJane suggested that sabotaging water caches was similar to dekulakization except for scale.)

            It is stronger than property ownership in most ways and more easily abused, but that does not mean it isn’t valid. The U.S. government has a military and often abuses this power, but it doesn’t mean I think the U.S. government can’t or shouldn’t have a military.

            The commons are not a shelf you can leave things on for eternity, especially if the whole point of leaving those things there is to violate some other rule about how the commons work. You don’t have a property right to abandon your stuff on the commons and not have it interfered with. There is no sane libertarian scheme of how property rights work on public land if it doesn’t acknowledge that the government has significant ability to restrict how that land is used. If it would be within property rights for a private owner to confiscate the water (and even dispose of it if it’s taking too much space or imposing some other burden on the rightful owner of the property), then the government may also do so. Especially when the purpose of that property is being undermined by the actions of those who don’t own it.

            Property rights spring from informal and traditional customs and agreements that were found to usually be beneficial. Sometimes they are codified or even invented in order to improve market efficiency, but the invention case is rare. They do not spring forth fully formed from anarchocapitalist axioms except for the rare libertarian deontologist. The Soviets violated well understood traditions of who owned what. There is no well understood tradition that your property rights remain valid if you abandon your stuff off of your own property in order to violate the law. Because you had no property right to leave your stuff somewhere you don’t own in order to undermine the rightful owner (roughly speaking).

            Also, scale affects essentially all moral and legal ideas. The idea that cutting water caches that violate government rights to control the border is equivalent to confiscating enough grain to starve even one person is ridiculous (I realize that’s not your position, but the comparison to kulaks except for scale is inaccurate enough to bother me). At best, the moment the government drops a reliable, easy-to-use emergency transmitter for border patrol pickup, the scenario is basically identical to a kind property owner moving things to the lost and found. At worst, if it just drains a cache or carries it away forever, I think it’s roughly comparable to a park ranger chopping a bolt on a climb because it violates the natural beauty of the area, potentially making the climb less safe. Note, people have died in areas where there were fights about this issue. I think it’s morally wrong for the park to cut reasonably placed bolts on public land when there’s no clear understanding about whether or not bolting is allowed, but it’s not because of a property right that climbers have to place bolts just wherever on public land. Ordinary property rights don’t come into it.

          • LadyJane says:

            @Matt M:

            I’ll take the opinions of his wife, friends, and students over some random journalist whose major complaint (much like Tucker’s) seems to be: But those guys are raaaaaaaaaaaaaaaacist!!!

            Rothbard and Mises had similar views while Mises was alive, but Rothbard’s views changed after Mises died. Rothbard became a hardcore paleo-conservative, while Mises had always been a classical liberal. So I don’t find it particularly unlikely that Margit may have allowed one of her husband’s most well-known disciples to use his name only because she didn’t realize how radically this disciple’s views had changed.

            And are you making light of the racism allegations because you find it ridiculous to think that any of the people in question are actually racist? Or is it simply that you don’t care if they’re racist, and wouldn’t consider that to be incompatible with their purported libertarian views?

            The Mises Institute offers no declaration of which social values one must follow, which is what makes them legitimate libertarians. It’s Tucker and the other left-libertarians who insist that libertarianism requires SJW stances on various culture war issues.

            I think you’re conflating two different things here, specifically which ideological views would be allowed in a libertarian social order (namely, all of them), and which ideological views are required for one to be considered libertarian in any meaningful way.

            Libertarian philosophy concedes that people have the right to be xenophobic, racist, sexist, homophobic, transphobic, or otherwise bigoted, just as it concedes that people have the right to be fascists or communists or feudalists. But the fascist and the communist and the feudalist cannot themselves be libertarians, because the ideologies they espouse are fundamentally incompatible with libertarianism. Likewise, bigots can’t be libertarians, no matter how much they might like to claim they are, no matter how much they might agree with libertarians on purely economic issues, no matter how much they happen to dislike the U.S. federal government. Bigotry itself is fundamentally incompatible with libertarianism just like communism is; they’re both nothing more than particularly nasty forms of collectivism, and thus diametrically opposed to the radical individualism at the heart of libertarian thought.

            The people who think that libertarianism is a purely economic philosophy – or worse, that it’s simply about opposing the current government – do not understand what libertarian philosophy is actually about on a fundamental level.

          • Martin says:

            LadyJane,

            The term left-libertarian has traditionally referred to libertarians who reject capitalism in favor of far-left economic systems like socialism or syndicalism, but lately I’ve seen paleo-libertarians use it to refer to right-libertarians (i.e. free-market capitalist libertarians) who are liberal or progressive on social and cultural issues

            There are pro-free market, pro-property rights libertarians who consider themselves to be left-libertarian.

          • DavidFriedman says:

            The term left-libertarian has traditionally referred to libertarians who reject capitalism in favor of far-left economic systems like socialism or syndicalism

            In current usage, it refers to several other things as well.

          • DavidFriedman says:

            Bigotry itself is fundamentally incompatible with libertarianism just like communism is; they’re both nothing more than particularly nasty forms of collectivism, and thus diametrically opposed to the radical individualism at the heart of libertarian thought.

            I’m not sure how you are defining bigotry. I don’t think the belief that some group – women, say, or blacks – have a very different distribution of abilities than some other group is inconsistent with libertarianism, although the belief may not be true.

          • albatross11 says:

            I don’t see why a belief in average group differences would conflict with libertarianism. [ETA] I mean, we all already accept that there are individual differences – I’m not as smart as Terence Tao, but I’m smarter than my building’s janitor. That doesn’t mean we can’t interact together productively in a market. (Comparative advantage says I can interact productively with Terence Tao, even if he’s better at *everything* than I am.)

            On the other hand, reading _The Bell Curve_ made me *way* more amenable to both programs to help the people at the bottom, and somewhat paternalistic laws and social norms to give guidance to not-very-smart people. If you’re on the bottom because of your bad choices, it’s easy to accept that; if you’re on the bottom because you rolled a 3 for your Intelligence score (thanks largely to the lousy genes and upbringing provided by your parents), that makes me a lot more sympathetic to your plight.

          • Toby Bartels says:

            @ Matt M :

            If I understand you correctly, you're now only claiming property rights for the federal government over federal land, rather than over the entire border, so I'll no longer compare your position to that of the Soviets. However, as a matter of historical interest, you might like to know that

            The Soviets violated well understood traditions of who owned what.

            is not really true. Those ‘traditions’ had been invented just a decade or so earlier, in 1906. While they didn't ‘spring forth fully formed from anarchocapitalist axioms’, they were a deliberate copy of western Europe and might fairly be said to have sprung from liberal axioms (in a broad but classical sense).

            So if you object to dekulakization because of its violation of private property rights in land, then that's coming from your advocacy of such rights (whatever your reasons for that may be), not defending a longstanding Russian tradition or Chesterton fence. (I object to dekulakization for different reasons.)

          • Edward Scizorhands says:

            Scott wrote a nice essay where he considered what the arguments from the left and right would be if they swapped sides on this specific argument:

            https://slatestarcodex.com/2014/10/16/five-case-studies-on-politicization/

            IQ differences are purely genetic, a matter of luck for which neither the individual nor his community bears any responsibility. We should be generous with welfare and assistance towards those who lost out in the genetic lottery, just as we are towards the sick and disabled. Conservatives who argue that the IQ differences are ‘man-made’ are looking to shift the blame to the victims and excuse their own bigotry towards the least privileged.

            vs.

            IQ differences are purely environmental. According to evidence, the main environmental difference among blacks and whites is culture. We therefore need to replace the multicultural free-for-all that is hurting American children with traditional American family values, and limit immigration from countries with low-IQ cultures. We need to promote monogamy by cutting support for single mothers. We should probably outlaw hip-hop music too, just in case rap is what lowers black IQ.

          • Matt M says:

            Toby,

            Just to clarify, I haven’t been arguing about the kulaks; I don’t know enough about the history there to comment intelligently.

            What I would say is that my argument about the border is essentially: “If you accept that the federal government has any legitimacy at all, then it clearly owns and/or holds sovereignty over the border, and has the right to restrict the usage thereof.”

            (If you do NOT accept that the federal government has any legitimacy at all, I think it may be the case that the lands it currently holds become a free-for-all for anyone to use, but I don’t think that is necessarily/obviously true)

            I think this argument holds, even in the analogy with the Soviet Union. If you accept that the Soviet government is legitimate, then they do, in fact, have the right to confiscate private land, because their entire system of government is sort of contingent upon being able to do that sort of thing.

            To object to the confiscation of lands by the Soviet government is to question the entire legitimacy of said government. Which I am, you know, totally in favor of. Just as I am in favor of questioning the legitimacy of the US government. But I think the left-wing position wherein the government has the authority to regulate individuals’ lives and behaviors in almost any way imaginable EXCEPT exercising control over who is allowed to physically enter the geographic borders of the nation is bizarre and incomprehensible. And it doesn’t magically become less bizarre and incomprehensible just because maybe you favor gay marriage and legalized pot and slightly lower tax rates (which is basically the only thing that separates left-libertarians from democrats and/or republicans).

          • LadyJane says:

            @Martin, @DavidFriedman:

            There are pro-free market, pro-property rights libertarians who consider themselves to be left-libertarian.

            In current usage, it refers to several other things as well.

            Fair enough, but I’m a cranky old political theorist and I’m very pedantic about the precise meanings of terms like that. I’ve gotten into plenty of arguments with Trump opponents about how Trump is not actually a fascist, and plenty of arguments with Bernie supporters and opponents alike about how Bernie is not actually a socialist even if he erroneously claims to be.

            I find it especially annoying when political terms are redefined in vague and overly broad ways, because that just muddies the waters and leads to confusion about what exactly people are talking about, which greatly hinders political discourse. At best, it leads to people wasting a lot of time clarifying that they really mean X when they talk about Y. At worst, it leads to people shouting past each other because they don’t realize what the other person is actually talking about.

          • LadyJane says:

            @DavidFriedman, @albatross11:

            I’m not sure how you are defining bigotry. I don’t think the belief that some group – women, say, or blacks – have a very different distribution of abilities than some other group is inconsistent with libertarianism, although the belief may not be true.

            I don’t see why a belief in average group differences would conflict with libertarianism.

            Believing that group differences exist is not incompatible with libertarian individualism. Believing that people should be judged and treated differently on the basis of their group membership – whether it takes the form of “these people are naturally less intelligent, so we shouldn’t keep trying to help them” or “these people are naturally less intelligent, so we should give them more help than everyone else” or just “these people are naturally less intelligent, so I’d prefer to not be friends with them, serve them, or hire them” – is incompatible with libertarian individualism. However, the former belief tends to make people more likely to support the latter belief.

          • Randy M says:

            Isn’t “we should keep trying to help” itself rather anti-libertarian? Unless you mean we as in individual charities.

          • LadyJane says:

            @Randy M: I was referring more to a certain mentality than to any particular stance on policy, though yes, “help” could entail private charity as well as government assistance.

            That said, there’s also a very important distinction to be made between “the government should reduce or cut off all social services in general” and “the government should keep providing social services to green-skinned people, but should reduce or cut off social services for blue-skinned people.” The first position is compatible with libertarianism, the second is not.

          • DavidFriedman says:

            That said, there’s also a very important distinction to be made between “the government should reduce or cut off all social services in general” and “the government should keep providing social services to green-skinned people, but should reduce or cut off social services for blue-skinned people.” The first position is compatible with libertarianism, the second is not.

            Doesn’t that depend in part on how big you think the difference in distribution of abilities is? Do you regard treating chimpanzees differently from humans as raising the same problems?

            In making decisions under conditions of imperfect information, one normally uses proxies. Suppose the difference in distribution is large enough so that the best proxy available for some characteristic relevant to the decision is race. Is using it unlibertarian? Wrong?

            Imagine a world where the distribution is so different that the top 1% of blue people are about as smart as the bottom 1% of green people. You don’t think it would make sense, given the existence of government and laws, for race to be an input to decisions?

          • LadyJane says:

            @DavidFriedman: It’s an interesting thought experiment! If there were an inherent racial difference of that magnitude, then yes, I would say that treating the races differently would be justifiable. Though any attempt at a libertarian social order would still require some minimal degree of equality under the law, even if it’s limited to very basic things like “you can’t murder, assault, or steal from anyone, whether they’re green or blue” and “the government can’t just lock up blue people for no reason, or for exercising their right to free speech and assembly.” (Of course, you could probably come up with scenarios in which even that level of racial equality wouldn’t be viable – for instance, if there were also a race of red-skinned people that was inherently prone to violent and anti-social behavior to an extreme degree – but then we’re getting even further removed from reality.)

            The practical applications of this thought experiment are limited, though. In real life, I don’t think even the most hardcore racist could convincingly claim that “the top 1% of [race X] is only as smart as the bottom 1% of [race Y]” about any human racial groups; that is literally the level of difference between humans and gorillas. Libertarianism is a human ideology that won’t necessarily work for beings with extremely different mindsets or levels of intelligence than baseline humans.

            It’s akin to arguing that communism could work for a species of sapient insectoids that evolved to function collectively as a hive. Well, sure, it could, but what does that prove about us?

          • albatross11 says:

            Libertarianism is consistent with freedom of association – that is, allowing private actors to discriminate at will. It’s hard for me to imagine a libertarian government imposing or maintaining some kind of racial caste system, though.

          • Matt M says:

            It’s hard for me to imagine a libertarian government imposing or maintaining some kind of racial caste system, though.

            Provided they could work out a reasonable way of entry and exit, there could easily be a libertarian racist society. Hell, even a libertarian communist society.

            Suggesting that libertarians can’t be racist is akin to suggesting that libertarians can’t be into BDSM or whatever (because hitting people is aggression, dontcha know). But if people consent, it’s not aggression. My setting up a whites-only enclave does not violate the rights of any blacks, because they have no right to live on my property in the first place.

          • albatross11 says:

            You’re right–libertarian government is 100% compatible with a lot of private discrimination, including all-white enclaves. But I wouldn’t think of a government that imposed some kind of apartheid-like system as libertarian.

          • DavidFriedman says:

            It’s akin to arguing that communism could work for a species of sapient insectoids that evolved to function collectively as a hive.

            Done by a good economist, that could make for an interesting sf background. They would still face the coordination problem, and would probably have to find some decentralized structure, perhaps along market socialist lines, to deal with it. But they might not face the problem of actors with inconsistent objectives, or at least not nearly as much of it as we would.

            There is a Kipling poem that describes a (fictional) attempt by the Kaiser to persuade the working men of the world to some sort of socialist economy, where everyone works together for the common good. He fails because of individual self-interest problems, represented in the poem in terms of men wanting to court, marry, and jointly prosper with women. The poem was written in 1890 and shows no evidence that Kipling had thought of the coordination problems that socialism faces even with altruistic participants.

          • Martin says:

            Matt M,

            And it doesn’t magically become less bizarre and incomprehensible just because maybe you favor gay marriage and legalized pot and slightly lower tax-rates (which is basically the only thing that separates left-libertarians from democrats and/or republicans)

            I’m really curious to know which “left-libertarians” you’re referring to here.

      • Deiseach says:

        I have no idea what my official IQ might be, but I do know that it’s not near high enough 🙂

      • Thegnskald says:

        The average commenter is somewhere between 110 and 125, if my usually good intuition about intelligence is right.

        I’d guess there is some conflation happening between childhood and adult IQs. My childhood IQ was at least 80 points higher than my adult IQ, to give some indication of how different the measurements are.

        ETA:

        Also: Us really smart people are pretty freaking dumb, too. DavidFriedman, for example, reminds me with every post how little I know about economics, an area where I think I’m about average for a commenter here. I have played deliberately ignorant with him once or twice just to see what gems of insight he’d provide. Most of us know a lot about something, and the aggregate is far more intimidating than the average on any given subject.

        • albatross11 says:

          Is there research somewhere showing how well you can do at estimating IQs from writing samples? This would only work with a fair bit of range restriction (comparable education, native speakers, similar age, etc.), but you can kind-of imagine it working. Still, I’m skeptical about how accurate it would be…
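
          (One way to make that closing skepticism concrete: a small simulation in Python with numpy. Every number and name here is invented purely for illustration, not taken from any study.)

          import numpy as np

          rng = np.random.default_rng(0)
          n = 100_000
          iq = rng.normal(100, 15, n)              # "true" tested IQ
          writing = iq + rng.normal(0, 15, n)      # noisy writing-quality signal

          full_r = np.corrcoef(iq, writing)[0, 1]
          pool = iq > 120                          # a restricted, already-bright pool
          narrow_r = np.corrcoef(iq[pool], writing[pool])[0, 1]
          print(round(full_r, 2), round(narrow_r, 2))
          # roughly 0.71 vs 0.38: the same signal discriminates far more weakly
          # within a narrow band, so rankings inside one forum stay noisy.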

        • alwhite says:

          An 80-point difference? What scale was that? On the typical scale (mean 100, SD = 15) [the Wechsler scale], a change of 80 is a severe-head-trauma kind of change.

          • Brad says:

            Childhood IQs aren’t based on deviation. One or the other should really have a different name.

          • alwhite says:

            @Brad

            I don’t think that’s correct. The Woodcock-Johnson is used for kids from 2 to 14 and has a mean of 100 and an S.D. of 15. Again, what scale are you referring to?

          • DavidFriedman says:

            I don’t know what scale he is referring to. I thought child IQ was defined as the ratio of intellectual age to biological age, so an eight-year-old who did as well on the test as the average twelve-year-old would have an IQ of 150. Am I mistaken? Is that an earlier definition that has now been abandoned?

          • alwhite says:

            @DavidFriedman

            I think you’re partially correct. There is an age/grade equivalence that is given, but it doesn’t appear that the equivalence is the same as the score. Here’s an example.

          • Viliam says:

            I thought child IQ was defined as the ratio of intellectual age to biological age, so an eight-year-old who did as well on the test as the average twelve-year-old would have an IQ of 150. Am I mistaken? Is that an earlier definition that has now been abandoned?

            Yes, it’s the old abandoned definition.

            The problem with the old definition was that you could apply it to kids, but not to adults. A smart 8-year-old can be compared to an average 12-year-old, but what age would you compare a 30-year-old Einstein to?

            So the new definition is “smarter than p% of people of the same age”, mathematically adjusted to a bell curve whose mean and sigma were chosen to make the new numbers for kids as similar as possible to the old numbers. That is, the mean is 100, and the sigma is… depending on whom you ask, either 15, 16, or 20.

            To be safe, whenever you talk about IQ, say explicitly which sigma you use. (For example, when Mensa says “IQ 130”, they mean sigma = 15.) I think sigma = 15 is typically used in Europe; I’m not sure about the rest of the world.
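
            (To make the two definitions concrete, a minimal sketch in Python, using scipy’s inverse normal CDF; the helper names and example numbers are illustrative, not from any particular test.)

            from scipy.stats import norm

            def ratio_iq(mental_age, chronological_age):
                # Old definition: 100 * mental age / chronological age
                return 100 * mental_age / chronological_age

            def deviation_iq(percentile, sigma=15):
                # New definition: rank among same-age peers, mapped onto a
                # bell curve with mean 100 and the stated sigma
                return 100 + sigma * norm.ppf(percentile)

            print(ratio_iq(12, 8))               # 150.0 -- the eight-year-old above
            print(deviation_iq(0.98, sigma=15))  # ~130.8
            print(deviation_iq(0.98, sigma=20))  # ~141.1

            The last two lines are the same percentile; only the sigma convention differs, which is exactly why stating your sigma matters.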

      • BBA says:

        Personally I have a very high tested IQ [MY USUAL DIGRESSION QUESTIONING VALUE OF IQ REDACTED] and I have full confidence that I’m bringing the comment quality down everywhere I post. Here I just feel like I’m bringing it down in a different way. So don’t feel too bad about it, doods.

      • I can keep up except when it’s anything involving more than rudimentary maths; then I’m doomed.

      • Baeraad says:

        Well, I don’t agree with the local conventional wisdom that IQ is super-important and that maximising it is the key to just about everything, so I’m feeling less guilty about it, but… yes, I often feel acutely aware that I’m below average here even in the areas where I’m strong (non-verbal logic, somewhere around 135), and a bottomless pit of stupidity in the areas where I’m weak (intellectual multi-tasking, somewhere around 90).

      • moscanarius says:

        Yes, but I comment anyway. Just can’t avoid it.

    • Nancy Lebovitz says:

      I feel as though I’m not as smart– and certainly not as diligent– as some of the people here, but I don’t feel as though I’m making the place worse.

      • MawBTS says:

        Being dumb, we would think that, wouldn’t we?

      • Nancy Lebovitz says:

        So, why do I not believe I’m making the place worse? It’s partly that it’s just not a conclusion I jump to, but also I don’t believe I’m posting enough to make a big difference directly and I’m not combative enough to get other people to make the place worse.

        I suppose I could be making the place *slightly* worse but I like my comments too much to believe that.

        • Iain says:

          Personally, I suspect that the reason that you don’t believe that you’re making the place worse is that you are a consistently thoughtful poster who actively makes the place better.

          • albatross11 says:

            +1

            Nancy, I’ve seen you as a participant in a couple of online spaces (here and Making Light[1]), and you’ve consistently added value to the conversations and communities you’ve participated in.

            [1] Were you around on alt.callahans for a while, too?

        • Nancy Lebovitz says:

          Iain and albatross11, thanks very much.

          I probably posted in alt.callahans, but I was more active in rec.arts.sf.written and rec.arts.sf.fandom. I was also a fairly frequent poster at Balko’s The Agitator.

        • Bugmaster says:

          I’ve long given up on trying to make forums better. I’ll never be smart enough, nor ideologically pure enough, to elevate any kind of an online forum, on any topic. The best I can do is voice my thoughts as honestly and clearly as I’m able. I think this should be enough for most people.

          • HeelBearCub says:

            That’s what makes comment sections better.

            This comment section is not a listserv of academics discussing topics relevant to their field of expertise. I’m not sure why people seem to think that is what is required.

    • J Mann says:

      The group needs us white belts to keep growing – the challenge is learning to recognize when someone else’s comments are better, and when we’ve fallen on our faces.

    • Nancy Lebovitz says:

      For those who are concerned that they’re lowering the average comment quality, do you have specific concerns or is it a generalized worry?

  6. nameless1 says:

    The reason I hang out here but don’t consider myself part of the rationalist community is the glaring contradiction between Yud teaching people how to think skeptically and then throwing it all away to start an entirely unskeptical and irrational “friendly AI research” cult, brainwashing people into donating their money to him. All this because rationalists tend to be nerds who fetishize the word “intelligence”, unable to think skeptically about it, unable to see that intelligence is not in itself a thing but the outcome, the effect, or the measure of a whole lot of things in the human mind and social interaction; thus artificial intelligence is about as meaningful as artificial profit.

    Considering intelligence an optimization ability is something that gives bullshit a bad name. Successful optimization is obviously an outcome, not a thing in itself. Hence the profit parallel. One company makes a profit by putting an unusually good camera on a phone. The other makes a profit by firing some unnecessary employees. And so on.

    If you look at Spearman’s g, the two main factors are eductive ability (think clearly, make sense of complexity) and reproductive ability (store and retrieve information). The second is something computers are already excellent at, and the first is not hard either; a TensorFlow model can do it. These abilities, understood as intelligence, make sense only for measuring the problem-solving ability of humans, because these two things are something our brains are generally terrible at; they are a bottleneck. We have a gazillion other cognitive abilities that work well and are entirely necessary for even the simplest task, but they are not a bottleneck. No amount of eductive and reproductive ability alone will make a computer able to tie a shoelace. Computers have entirely different bottlenecks.

    And this is something every sensible person understands intuitively; this is why there are so many of us who are grateful for his sequences yet consider his AI fetish a cult for milking gullible people. Gullible people who are high-IQ nerds, who therefore tend to fetishize intelligence, and whose dream is building a friendly 1000-IQ machine. As I said above, computers already max out reproductive ability, and giving one eductive ability gives you merely an expert system, not anything you would consider a mind. It will not rule the world, because it is something equivalent to a savant, just incredibly more so. It gives you correct answers to questions. It does not DO anything. It does not go and convince you somehow to let it out of a box or fire missiles at Moscow. The problem is that intelligence fetishists think correct answers to questions are the most powerful thing ever, because this is what made homo sapiens take over the planet. No. This, combined with all the other abilities of a primate, made homo sapiens take over the planet.

    • Mr Mind says:

      Your thesis, if I understand it correctly, is that today’s AIs lack many of the cognitive abilities that make a human really intelligent, and thus potentially dangerous. So today’s AIs cannot possibly do any of the things that make an unfriendly AI a menace.

      But this is not Yudkowsky’s thesis. He talks about the danger of a self-evolving general AI, to distinguish it from what we have today. In your worldview, this will happen when we add to expert systems all the different abilities that separate their intelligence from ours. Thus, unless you’re saying that it’s impossible to add those requirements to a piece of software (why?) or that even a gAI cannot be more intelligent than a human (??), there’s no contradiction between what you say and Yudkowsky’s view.

      • Faza (TCM) says:

        The problem with Yudkowsky’s thesis is that it glosses over questions of AI teleology, interface and comprehension – or rather, it has the propensity to give whatever answer is expedient at the moment.

        Taking them in turn:

        1. Teleology – You can have a paperclip maximizer (fixed teleology) or an autonomous being (self-determined teleology), but not both. So which is it? If it’s a maximizer – not a problem, because “maximize paperclips” isn’t a realistic specification of teleology programmed into any kind of system (a solver of this kind would have constraints specified up front). If autonomous – potentially a problem, but unlikely to directly conflict in a destructive way (because the existential concerns of computers, if they have them, are fundamentally different than ours).

        2. Interface – Thinking isn’t the same as doing. Anybody who doesn’t believe me is invited to reflect on why nerds continue to get pushed around (as discussed often enough here). An AI needs to be able to act in order to pose a threat, and it is not necessarily apparent what a realistic scope of action is. The answer I see to such questions is essentially: superintelligent AIs are magic and can do anything they want. Seriously?

        3. Comprehension – AIs function in Game-Space. What do I mean by that? We create AIs to operate within various game families that we create as part of our model-crafting mode of thinking: logic is one such game, mathematics is another, natural language is another still. Hell, we can even make AIs that play “proper” games like chess and Go.

        Computers are good at playing games because they can explore the rule set much faster and more broadly than an individual human. However, the game also constrains the world of the computer.

        The games (taken in the broad, existential sense; language is such a game) we play as humans are mental maps (models) we make, but we exist in the territory and our concerns are with the territory. There are many different games we can overlay on the world as we know it that serve the same purpose comparably well. For the most part, a Young Earth Creationist has no more problems in day-to-day life than someone who accepts evolution through natural selection. The theory of evolution (and related concepts like genetics, etc.) is an improvement over YEC if you want to do some things, but you’ll only know that if you try to apply both to practical ends.

        It’s not impossible for a self-evolving AI to fashion games of its own to model its existence in the world, but that would require it to function fully autonomously and affect the world without human mediation. This isn’t a given.

        Unlike humans (who exist before they begin to play), games are an AI’s “natural environment” (AIs being created to deal with issues of language, mathematics, chess, etc.). So I find Stanisław Lem’s vision of “digital philosophers” (AIs exploring the bounds of existing games, games that may conceivably be created, and related ontological and epistemological concerns, as described in GOLEM XIV) a whole lot more convincing than all of Yudkowsky’s demons.

        Which brings me to the main complaint. Every time I try to make sense of what the AI threat crowd are saying, I’m confronted with a parade of genies designed solely for the purpose of being scary. When I say “genies”, I mean that the postulated superintelligent AI seems to be capable of working whatever wonders are necessary to make the point – regardless of whether we have any good reason to believe such wonders are even possible.

        This is, at its core, a theological argument. Superintelligent AIs are a problem – say Yudkowsky et al. – because they are (tacitly) defined as omnipotent. If this is the case, we’re already doomed.

        That isn’t rationality, however. It’s begging the question. Unless we specify at the start what manner of superintelligent AI we’re talking about: enumerate its capabilities and – more crucially – its limitations, we’re not dealing with an argument, but with unsupported assertions of whatever conclusion the arguer finds compelling.

        I suppose some folks find it fun and rewarding to speculate on such things (scholastic theology was big back in the day), but I see no compelling reason to treat it as anything other than wild flights of fancy.

    • MawBTS says:

      Considering intelligence an optimization ability is something that gives bullshit a bad name. Successful optimization is obviously an outcome, not a thing in itself. Hence the profit parallel. One company makes a profit by putting an unusually good camera on a phone. The other makes a profit by firing some unnecessary employees. And so on.

      I don’t follow. Bench pressing 405 pounds is an outcome, hence there’s no way to improve my weight-lifting ability? Cleaning my bedroom is an outcome, hence there’s no way to improve my ability to pick dirty clothes off the floor?

      The rest of your post sounds a lot like “intelligence is magical and impossible to understand etc”, which was what people used to say about chess, painting pictures, and so forth.

      thus artificial intelligence is about as meaningful as artificial profit.

      “Artificial intelligence” means intelligence in a machine, versus the naturally evolved intelligence of biological organisms.

      I’m not aware of a similar dichotomy that would let us separate “natural profit” from “artificial profit”, but if there was, I’d definitely talk about artificial profit.

      • Bugmaster says:

        As I said before, the main problem I see with the “AI FOOM” scenario — other than the laws of physics preventing it, of course — is that no AI, no matter how smart, can just think its way to victory.

        For example, if it wants to make novel scientific discoveries (which is a necessity for maintaining its exponential growth), it will have to actually run experiments. Last time we humans wanted to learn something significant, we had to build a laser interferometer with arms about 4 km long (two of them, actually). The AI can’t magic up such things overnight, and it can’t just simulate everything and come up with the right answer, for obvious reasons.

        • Murphy says:

          We’re currently at the point of fully automated bio labs hooked up to AIs that can automatically generate hypotheses, design experiments to falsify the maximum possible number of those hypotheses, and then run the experiments in large numbers and with great precision.

          Currently the AIs used for that are glorified SAT solvers, but if something genuinely much more inventive and bright than a human got involved, I strongly suspect it could very quickly learn a vast amount about biology, physics, chemistry, etc., which opens the door to a lot of interesting resources that probably wouldn’t require a 4 km construct.

          Foom or not-foom seems to come down to whether going from 100 intelligence (measured by whatever metric) to 200, using a mind with 100 intelligence, is harder than going from 200 to 300 for a mind with 200 intelligence, by whatever methods.

          If adding each effective IQ point gets progressively harder relative to the capability gained, then not-foom.

          If it’s actually really easy, and humans just happen to be as smart as we currently are because we’re at the minimum intelligence needed to build computers, then foom is much more likely.

          Personally I think all the “boxing” stuff is kinda stupid. AIs don’t get built in boxes or with bombs attached to them. If something was actually smart and wanted to get resources, then the world is full of the kind of people who fall for the claims of Nigerian princes.
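
          (A toy numerical version of the foom/not-foom dichotomy above, in Python; the growth law, the helper name, and every constant are invented purely for illustration.)

          # dI/dt = rate * (I/100)**k: with k > 1, each point of intelligence
          # makes the next one cheaper (foom); with k < 1, each gets dearer.

          def effort_to_reach(target, start=100.0, k=1.0, rate=1.0, dt=0.01):
              # Crude Euler integration; returns abstract units of "effort".
              intelligence, elapsed = start, 0.0
              while intelligence < target:
                  intelligence += rate * (intelligence / 100.0) ** k * dt
                  elapsed += dt
              return elapsed

          for k in (0.5, 1.0, 1.5):
              print(k, round(effort_to_reach(300.0, k=k)))
          # roughly 146, 110, 85: sublinear returns stretch out, superlinear
          # returns compress -- and for k > 1 the total effort to reach
          # arbitrarily high intelligence is bounded, which is the foom intuition.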

          • Bugmaster says:

            We’re currently at the point of fully automated bio labs hooked up to AIs that can automatically generate hypotheses…

            Yes, and there’s absolutely nothing the AI can do to make that yeast grow any faster. It might help generate some promising leads, but it won’t be overturning any scientific paradigms overnight. Even if the AI was 1000x smarter, yeast still grows at a fixed rate.

            but if something genuinely much more inventive and bright than a human got involved, I strongly suspect it could very quickly learn a vast amount about biology, physics, chemistry

            How? You can’t just wave your hands and say, “oh, well, it would think of a way”. That’s not intelligence; that’s clairvoyance, or possibly divine revelation. The only way to learn anything about any of those sciences you mentioned is to run experiments. Experiments take time. Some experiments, e.g. those for detecting gravity waves, take a lot of time and gigantic buildings… which also take time to build. That is less of a FOOM and more of a regular crawl. The whole point of running experiments, by the way, is that you don’t already know the answer, so just running lots of simulations is not enough.

            On a slightly different topic: “if adding each effective IQ point gets progressively harder relative to the capability gained, then not-foom.” That’s pretty much the case, yes. You can’t just keep sticking CPUs on a motherboard ad infinitum.

          • christhenottopher says:

            EDIT: Dang it, I see that this point came up elsewhere in the thread.

            How? You can’t just wave your hands and say, “oh, well, it would think of a way”. That’s not intelligence; that’s clairvoyance, or possibly divine revelation. The only way to learn anything about any of those sciences you mentioned is to run experiments. Experiments take time. Some experiments, e.g. those for detecting gravity waves, take a lot of time and gigantic buildings… which also take time to build. That is less of a FOOM and more of a regular crawl. The whole point of running experiments, by the way, is that you don’t already know the answer, so just running lots of simulations is not enough.

            Doesn’t this depend on what the nature of the hurdle to learning is? Specifically, whether the issue is lack of data or lack of integrating/understanding the data? To give an example of the latter, a major finding of the 9/11 Commission was that the information various intelligence organizations had wasn’t being shared and integrated. That’s the sort of problem where we would expect higher-intelligence/faster AI to be an improvement on humans. Indeed, we currently use computers to sift through data at much faster rates than humans can manage. To the extent that a scientific/engineering problem is hard due to integrating data, AI will work a lot faster, even if AIs can only run experiments at the same speed as humans. The total number of academic papers has risen drastically over the past few centuries, so insofar as that is a proxy for the amount of scientific data available, I think there’s a potential argument that there are a lot of problems where the data exists but is hard to sort and integrate (though I don’t really know how many such problems exist).

          • ec429 says:

            Bugmaster:

            That’s not intelligence; that’s clairvoyance, or possibly divine revelation.

            Just to check — you have read That Alien Message, right? Care to explain what’s wrong with it?

            Experiments don’t have to take very long; our giant particle colliders and LIGOs are there to make up for our paucity of computational power, since we can’t just do large-scale calculations on fundamental-level physics and see which ones reproduce our macro-scale observations. Large chunks of science are about regularities we’ve observed on larger scales that we aren’t yet able to tie back to the fundamental physics, because we haven’t got the mathematics to do it, and some of the biggest advances in science have come from learning how to make such a linkage (e.g. when Gibbs discovered statistical mechanics and thus linked phenomenological thermodynamics back to the behaviour of particles, or when the application of the Schrödinger equation to molecular orbitals opened up the field of quantum chemistry).

          • Bugmaster says:

            @ec429:

            Just to check — you have read That Alien Message, right? Care to explain what’s wrong with it?

            You mean, other than the fact that it’s completely fictional, and that it not only postulates that The Simulation Argument is true, but also does so in a way that is contrived to push the intended “moral” along? I guess I don’t exactly see the relevance of the story to the scenario we’re currently talking about.

            Experiments don’t have to take very long; our giant particle colliders and LIGOs are there to make up for our paucity of computational power…

            First of all, how long did it take us to build those devices? Are you saying that our current particle accelerators, LIGOs, space telescopes, etc., are the pinnacle of what we’ll ever need?

            Secondly, there’s no amount of computational power that will allow you to detect e.g. gravity waves. Your problem is not just lack of computational power, but lack of data. Being able to run millions of different models very quickly does not help you figure out which of them is actually true (or, at least, not completely). You can’t tell the shape of the elephant by sitting in the dark and meditating really hard; at some point, you have to go out and touch it. As far as I’m aware, all of the examples you bring up were experimentally verified.

          • ec429 says:

            @Bugmaster

            You mean, other than the fact that it’s completely fictional, and that it not only postulates that The Simulation Argument is true, but also does so in a way that is contrived to push the intended “moral” along? I guess I don’t exactly see the relevance of the story to the scenario we’re currently talking about.

            The point of the story is not that it “could be true”, but that it demonstrates computation being used to economise on (not replace entirely!) data. All the Simulation guff and other contrivances are just a framing device to try and get people to think about the issue rather than just regurgitating whatever their cached thoughts about superintelligent AI are (which is what would happen if Eliezer had just said “a superintelligence could deduce GR from three frames of video” and left it at that).

            Are you saying that our current particle accelerators, LIGOs, space telescopes, etc., are the pinnacle of what we’ll ever need?

            No, I’m saying that we only need them because we can’t analytically solve the Schrödinger equation for an entire galaxy. If we could, then the differences between “galaxy where gravity waves exist” and “galaxy where they don’t” would be blindingly obvious to us from one clear picture of the night sky.

            You can’t tell the shape of the elephant by sitting in the dark and meditating really hard; at some point, you have to go out and touch it.

            Again, in case you missed it: computational power allows you to economise on experimental data, in the limit allowing every bit to eliminate half of the hypothesis space (weighted by prior probability). The *humans* in That Alien Message, without any magic (just doing the same kind of science we do, but for longer), end up deducing GR from three frames of video — and even they do not come close to this informational/entropic limit. This is the real moral of That Alien Message: that that limit, while finite, is far, far beyond human scale.

            No-one is saying computational power can entirely replace experimental data; that would be stupid.

          • Bugmaster says:

            @ec429:

            Again, in case you missed it: computational power allows you to economise on experimental data…

            Well, in that case, it seems like we disagree primarily about the amount of data you can save through computation. It sounds like you believe the answer is “almost all of it”, whereas my estimate would be somewhat closer to “almost none of it”. I find That Alien Message completely unpersuasive because it’s not only fictional, but also explicitly contrived to be as convenient for the author’s thesis as possible; real life is not so accommodating.

            You say that we only need expensive LIGOs and such because “we can’t analytically solve the Schrödinger equation for an entire galaxy”. Technically, this is true; but even more technically, it’s kind of meaningless. The only way to accurately simulate the entire galaxy is to build a computer exactly the size of the galaxy; this isn’t a useful avenue of pursuit for anyone, no matter how smart he or it happens to be. On top of that, you’re sneaking the Schrödinger equation into your argument as though it were some sort of axiomatic truth — as opposed to an experimentally verified model that was arrived at by conventional means. Is the Schrödinger equation the very last equation that we need to fully understand the world?

            The problem with simulating things and writing down equations in a vacuum is that the search space is incredibly large. You can invent all kinds of beautiful mathematical models for how you think the world ought to work; but, at the end of the day, at most one of them is going to be correct. You can’t find that needle in your haystacks by piling up more straw.

          • ec429 says:

            explicitly contrived to be as convenient for the author’s thesis as possible

            Nonsense. That would be “oh yes these people have access to Solomonoff induction and maybe a halting oracle as well, and a pony”. Instead the premise is contrived to be inconvenient: the *humans* are only permitted levels of intelligence that (a few) humans have actually demonstrated in reality; their advantage is merely one of speed (or, viewed the other way round, that they have a long time to keep applying human-level scientific inquiry to the problem). As EY points out near the end, they do not even come close to making “efficient use of experimental data”.

            The only way to accurately simulate the entire galaxy is to build a computer exactly the size of the galaxy

            Only if physics is, in some sense, incompressible. There are definitely physical systems with analytic solutions; more generally, the regularities in physical law can be used to reduce the amount of computation required. If the AI is smarter than us, it will be better at doing this.

            On top of that, you’re sneaking the Schrödinger equation into your argument, as though it was some sort of an axiomatic truth

            No, I’m saying that if you have it as a hypothesis, you can evaluate it by either of two methods:
            (1) experiments that extract signal from the behaviour of a simple system, which you compare to the solution of the equation for that simple system, or
            (2) observations of a complex system, which you then compare to the solution for that complex system.
            Or, really, any point on a continuum between these two.
            Substitute any other fundamental hypothesis for the Schrödinger equation, and one finds that either (1) or (2) can identify (and then verify) which hypothesis is correct. The fact that we don’t know whether the S.e. is “the very last equation” actually helps to illustrate my argument: if we solved it for the Galaxy and found it produced a galaxy that looked nothing like ours, we would thereby discover that it wasn’t the correct hypothesis.

            The problem with simulating things and writing down equations in a vacuum is that the search space is incredibly large.

            This is where Solomonoff induction comes in; in principle, it takes only as many bits of data to locate a hypothesis as the Kolmogorov complexity of that hypothesis. With sufficient computational power, each observed bit cuts the search space in half, and 2^n is also incredibly large.
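
            (A toy illustration of that last bit of arithmetic, in Python; real Solomonoff induction ranges over all computable hypotheses weighted by 2^-length, which this drastically simplifies down to a finite, invented table of candidate “laws”.)

            from itertools import product

            # Candidate "laws": every boolean prediction rule over four settings.
            hypotheses = list(product([0, 1], repeat=4))   # 2^4 = 16 candidates
            true_law = (1, 0, 1, 1)                        # hidden ground truth

            for setting in range(4):
                bit = true_law[setting]                    # one observed data bit
                hypotheses = [h for h in hypotheses if h[setting] == bit]
                print(f"after bit {setting}: {len(hypotheses)} hypotheses left")
            # 8, 4, 2, 1: n well-chosen bits suffice to pick one law out of 2^n.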

          • Bugmaster says:

            @ec429:

            Nonsense. … their advantage is merely one of speed…

            That, and the relative simplicity of the message (as compared to real-world problems), are advantages so massive that they completely invalidate the story. This actually ties into my next point:

            With sufficient computational power, each observed bit cuts the search space in half…

            I think you are vastly overestimating the amount of computing power that can even theoretically become available to us humans (to say nothing of practical limits); vastly underestimating the size of the problem; or both. Solomonoff induction is a wonderful theoretical concept, just like a Turing machine; but in practice, you wouldn’t want to build a Turing machine to do your taxes.

            Only if physics is, in some sense, incompressible.

            If you want to accurately simulate the entire Universe, as you’ve implied, then it is. You can’t just throw away the insides of all the rocks, unless you don’t really care about what goes on inside of rocks — in which case, your simulation is no longer accurate.

            I will grant you that a vast AI that runs on a Dyson Sphere composed of what the Solar System used to be will be able to draw valid conclusions from a smaller amount of data than present-day human scientists. However: a) that amount of data will be nowhere near zero; b) assuming that AI-powered Dyson Spheres are even physically possible, someone would need to figure out how to build them, which gets you into a chicken-and-egg problem; and c) if you choose to collect more data points the conventional way, you will no longer need to build Dyson Spheres — an Excel spreadsheet would suffice.

            Let me put it this way. Let’s say you’re a super-smart AI who doesn’t know anything about physics or cosmology or astronomy. Someone sends you a single cellphone picture of the night sky, as seen from Randomville, Kentucky. That’s all you have to go on. Can you reliably calculate the orbits of all the moons of Saturn? What if you got two pictures? 100 pictures? Would that make things better?

          • Murphy says:

            You can design better experiments when you’re smarter and more skilled at doing science.

            Practical proof of that can be seen in the interactions between dimmer undergrads and better professors.

            But first, to be clear: are you arguing only against “5-minutes-ago-nothing, now everything is computation” foom, or also against “oh shit, we booted this up a few years ago and didn’t pay enough attention, and now oh-shit” foom?

            Show a bright monkey the publicly available data from the LHC and they’ll gain nothing from it. Show a smart physicist and they might gain something; show a very, very smart physicist the same data and they may generate better hypotheses about the universe.

            Given the same input data, John von Neumann could probably derive a hell of a lot more useful information than a random person on the street.

            I find it vexing when people posit deity-level powers attributed to AI – for example, solving wave equations for galaxies, as above – but there’s a much more reasonable version where we merely posit something very bright, with a moderate advantage over some of the brightest humans in history.

            Science isn’t just a pure grind; quality matters as well. A very smart person can sometimes falsify a hypothesis using mundane observations that a less capable person would demand major resources to address.

            We live in a world where George Dantzig could look at unsolved problems and mistake them for homework assignments, and where even he later ran into someone who could take one look at the problems he couldn’t solve and immediately see the path to a solution.

            I regularly encounter new techniques in my own field where people have figured out how to use standard commodity hardware, with a few tweaks, to extract data that was previously ridiculously hard to get.

            Just as an example: how certain are you that something somewhat smarter than von Neumann couldn’t advance the existing field of DNA computing to a useful degree, in a fairly short time period, given access to all existing data on the subject and a well-equipped biolab?

            That’s not magic or clairvoyance; that’s normal real-world smart people, who are often fundamentally better at linking together datapoints and coming up with novel ways to falsify hypotheses.

            Even if there’s an upper bound of, say, 10x the cognitive capability of John von Neumann, which doesn’t really get into the super/hyper/deity intelligence stuff… that’s still extremely capable, and someone with that level of smartness (intentionally using vague language here) could be dangerous in the normal human world if they happened to also be a complete psychopath or had their mind set on some unpleasant goal.

          • Bugmaster says:

            @Murphy:
            When I talk about “FOOM”, I mean “an exponentially accelerating change that happens too quickly for any human to even notice, let alone prevent”. I was under the impression that this was more or less the accepted definition, but I could be wrong. Anything slower, or less significant, would not, IMO, count as a “FOOM”, because such changes happen all the time in our current world — markets are disrupted, wars break out, political administrations change, etc.

            I find it vexing when people posit deity-level powers attributed to AI

            You and me both! That said, if you posit an unbounded exponential increase in intelligence, and if you believe that enhanced intelligence automatically leads to the same level of enhancement in practical capabilities, then you’ve pretty much come up with a working definition of “deity”. I don’t subscribe to either of these beliefs, personally.

            Just for an example: how certain are you that something somewhat smarter than Von Neumann couldn’t advance the existing field of DNA computing to a useful degree in a fairly short time period given access to all existing data on the subject and a well equipped biolab.

            That depends: how short a period of time are we talking about? The problem with the state of biological research right now is not just that we don’t have enough computational resources to process all the data we’re generating; it’s that we don’t even have enough data, and we don’t even know what kind of data to look for, because most of the time we don’t know what’s going on inside the cell. A “well-equipped biolab” would definitely be a huge boon (ask any scientist ever, and he’ll probably agree), but no amount of equipment will help you grow corn (or mice, or especially humans!) 1000x faster than it does naturally. This means that, no matter how brilliant your experimental designs are, you’re going to have to wait.

          • ec429 says:

            I find it vexing when people posit deity-level powers attributed to AI – for example, solving wave equations for galaxies, as above

            Just to be absolutely explicit, I was setting up a theoretical endpoint for a sliding scale of computation versus observation (the other endpoint would be something like “observe every particle in the Universe, have no unifying theory of their behaviour at all”), not claiming that a FOOMing AI would be anywhere near that endpoint.
            The point is that computation can be traded off against observation, and our confidence intervals for the exchange rate are really rather wide.

            Only if physics is, in some sense, incompressible.

            If you want to accurately simulate the entire Universe, as you’ve implied, then it is.

            Not so, for two reasons.

            1. I can create a toy model of gravitational physics in which I have 2n point masses in a Klemperer rosette; no matter how high I set n, I need only a few numbers to specify the entire model, so this toy ‘universe’ is compressible (see the sketch at the end of this comment).

            2. What is needed is not a simulation of absolute perfect precision, but rather one with, as it were, a good condition number, so that the errors introduced by the approximations the simulation makes do not swamp the resulting ‘signal’. For evidence that physics is compressible in this manner, note that before developing GR and QM we were able to understand most things through Newtonian gravity and classical mechanics, rather than finding ourselves in a chaotic and totally incomprehensible Universe, as would be the case if one couldn’t (metaphorically) understand the outsides of rocks without accurately modelling their insides.

            Computation of this kind is simply a way of applying ‘filters’ to our data to improve its signal-to-noise ratio; experiment design approaches the same problem by reducing sources of noise.
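
            (A sketch of point 1, in Python, to make the compressibility concrete; the function name, masses, and radius are arbitrary choices, not anything canonical.)

            import math

            def rosette(n, radius=1.0, m_light=1.0, m_heavy=2.0):
                # 2n point masses, alternating light/heavy, evenly spaced on a
                # circle: the whole "universe" follows from four parameters.
                bodies = []
                for k in range(2 * n):
                    theta = math.pi * k / n
                    mass = m_heavy if k % 2 == 0 else m_light
                    bodies.append((mass, radius * math.cos(theta),
                                         radius * math.sin(theta)))
                return bodies

            print(len(rosette(10_000)))  # 20000 bodies; the description stays tiny

            However high n goes, the description length of this toy physics stays constant, which is the sense in which it is compressible.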

          • Murphy says:

            I’ve seen a few definitions of “Foom”. It being too fast for us to really cope with or effectively oppose by the time people notice would probably be the more reasonable one.

            Even if it takes 5 years of something chugging away in a lab somewhere for it to crack solutions that get it access to excessive computation and resources, the clock on “foom” (the potential disaster scenario) only really starts when people notice something happening.

            Also, you seem to be stuck on the assumption that the only way to get useful knowledge is with a 25 km particle accelerator.

            Human comprehension is a major bottleneck in biology. If something had just human-level intelligence, but broader, such that it could keep 10,000 interactions in mind at the same time while thinking about a problem, rather than the human limit of <10, then lots of problems would probably look like open books.

          • albatross11 says:

            Murphy:

            There’s an interesting scaling question here. If I can make one super-von-Neumann, and use it to get more resources, I may or may not be able to scale up from super-von-Neumann to super-duper-von-Neumann. But I can certainly scale from one super-von-Neumann to N super-von-Neumanns, and I can certainly scale the resources I give them to discover stuff. That’s basically how we get improvements to science and math now – we throw lots of smart people at problems, accumulate knowledge and techniques, and sometimes you get what has happened in computing or biology from, say, 1940 to now.

            It may also be possible to speed up your super-von-Neumann–someone just a little smarter than von Neumann who runs ten times as fast and never sleeps can accumulate a *lot* of knowledge and discoveries about the world pretty quickly.

          • Bugmaster says:

            @ec429:

            I can create a toy model of gravitational physics … What is needed is not a simulation of absolute perfect precision, but rather one with, as it were, a good condition number, so that the errors introduced by the approximations the simulation makes do not swamp the resulting ‘signal’.

            Yes, but I contend that if your goal is to learn about our actual universe, and if you want to do so without performing lots of lengthy experiments or spending lots of time on building projects, then you’ll need to simulate the known Universe with near-perfect fidelity. At that point, it would be cheaper to just build another space telescope. And, of course, you can’t improve the signal-to-noise ratio without any signal at all…

            Just to clarify, I’m working off of the assumption that your AI wants to make significant — perhaps even paradigm-altering — scientific discoveries. It would need to do so in order to transcend our current technological limitations, which (somewhat inconveniently) prevent superintelligent AIs from existing in the first place. If all your AI wanted to do was to develop a slightly more efficient cellphone antenna, I’d have no objections to your approach.

          • Bugmaster says:

            @albatross11:
            Currently, we can create precisely zero super-von-Neumanns, and it may in fact be the case that super-von-Neumanns are impossible to create at all (depending on your definition of “super”). Assuming that they are at least possible (which is already a massive assumption, IMO), it would either take a super-von-Neumann to figure out how to do it, or just lots and lots of time. That’s the opposite of a “FOOM”.

          • Bugmaster says:

            @Murphy:

            If something had just human-level intelligence, but broader, such that it could keep 10,000 interactions in mind at the same time while thinking about a problem, rather than the human limit of <10, then lots of problems would probably look like open books.

            To borrow a quote, “everything you said in that sentence is false”. First of all, we already have something that has a human intelligence but can keep 10,000 interactions in mind; it’s called “a human with a database”. The problem is that there aren’t 10,000 potential interactions; there are trillions, and, what’s worse, most of them have not even been discovered yet. The problem is not just that we lack data, or that we lack CPU power; the problem is that we don’t even know what to look for. The only way to figure that out is to look at actual organisms in real life, which is what lots and lots of people are working on right now.

          • Murphy says:

            “we already have something that has a human intelligence but can keep 10,000 interactions in mind; it’s called “a human with a database”.”

            No, just no.

            [Having access to a database you can run queries against] is to [actually being able to keep things in mind]

            as

            [being innumerate but with access to a calculator] is to [being a savant]

            If you give someone with anterograde amnesia a pencil and a notebook, they aren’t suddenly on a par with someone with a good memory.

            Handing someone a dictionary does not make them automatically fluent.

            And having access to a database is not equivalent to being able to keep large quantities of information in the forefront of your mind.

          • albatross11 says:

            Suppose the best you can do with AI is to create a mind at the high end of human intelligence[1]–a Gauss/von Neumann/Einstein/Archimedes/Newton level thinker, say. And suppose that doing this takes $X worth of resources.

            You don’t then get an explosive expansion of intellect. But you can probably use the first high-end human-level intellect to start funding the resources for more high-end human-level intellects. At some point, you have a community of a million intellects that are all about as smart as the three or four smartest people on Earth. If they’re all aligned in terms of goals, they’ll probably have very little trouble taking over the planet.

            [1] This is the world we’d have if we lived in the Zones of Thought universe, and Gauss-level intellect was the limit of our zone for both biological and electronic minds.

          • Bugmaster says:

            @albatross11:

            Suppose the best you can do with AI is to create a mind at the high end of human intelligence…

            In practice, you run into problems with this scenario right away. Currently, we have absolutely no way to even begin researching anything remotely like that. Furthermore, such intellects may not even be desirable in the first place. I don’t need my car autopilot to write poetry or solve abstract math problems as well as the smartest human can; but I do need it to drive 1000x better than any human ever could. The latter is much easier to achieve, and is more profitable. That said, even if I were to grant you this premise, additional problems remain:

            You don’t then get an explosive expansion of intellect.

            Then how is this scenario different from what is already happening in our world all the time? Why is this scenario a special case that we need to fear especially hard?

            But you can probably use the first high-end human-level intellect to start funding the resources for more high-end human-level intellects.

            Where are the funds coming from? How are they different from funds accrued by, say, McDonalds (~$25 billion/year)?

            At some point, you have a community of a million intellects…

            How much money, electrical power, physical space, and logistical support does each intellect take to run? How are these intellects coordinated? By comparison, a million humans can’t agree on anything, and organizations with millions of members tend to be supremely inefficient (that’s why disruptive startups pop up all the time, for example).

            If they’re all aligned in terms of goals, they’ll probably have very little trouble taking over the planet.

            What do you mean by “take over”; why would they want to do that; and how are they categorically worse than powerful human organizations — such as the US, China, Microsoft, the Catholic Church, or, well, McDonalds?

        • moridinamael says:

          Brains consume 20 W of energy and weigh 1.4 kg, and they can do intelligence. There are probably laws of physics that put an upper bound on conceivable intelligence, but we aren’t anywhere near them.

        • Confusion says:

          I used to think that. Then I learned I was much more naive than I thought.

          You are overlooking a few things:
          * There is already a vast amount of experimental data available that humans have been unable to ‘integrate’, meaning to understand it in the context of all other data. There are undoubtedly many technological inventions and even fundamental scientific discoveries possible based on integrating the available data. No need to experimentally verify it if it already postdicts unexplained experimental results.

          * What makes you think the AI FOOM scenario presupposes a disembodied intelligence? If the AI hasn’t already been given ways of perceiving, and interacting with, the real world when it fooms, it will certainly quickly gain those abilities if its reward function cares about the real world.

          * It’s easy to come up with boring, inert AIs that cannot escape their boxes and exasperatedly explain that they obviously cannot escape their boxes. The question is whether interesting AIs, designed to help us understand the real world, with abilities to interact with the real world and to do inventing for us in the real world, would ‘run away’. How about one designed by an evil madman who wants to destroy humanity? What if he gives the code an arsenal of robots to control, both nano and macro, the ability to modify itself both at the software and the hardware level, unlimited resources in terms of crude materials, vast fabrication and experimentation facilities, and the explicit goal of only improving itself for its purpose for a year before executing its plan? Is it a foom if it happens after 6 months of relatively slow progress, building up faster experimentation facilities?

          I’m not convinced foom is possible. I am convinced that somewhat intelligent AI is possible, and that it could be at least as dangerous as cats that could reproduce at the rate factories produce cars, could wield guns, and saw humans as a threat. Could, not will.

          • Bugmaster says:

            There are undoubtedly many technological inventions and even fundamental scientific discoveries possible based on integrating the available data.

            I seriously doubt that. I’m sure there are some inventions that we’re missing, but I’d need to see some evidence before I can accept your (much stronger) claim.

            What makes you think the AI FOOM scenario presupposes a disembodied intelligence?

            Nothing; I never said that. My point was that an embodied intelligence is restricted to real-world speeds whenever it wants to actually affect the real world in some way. Even if it can calculate the perfect architectural design for a skyscraper in the blink of an eye (which, by the way, it can’t — not without surveying the site), it still won’t be able to build said skyscraper in the blink of an eye. This is a huge problem for the FOOM scenario, because the same limitations apply to everything the AI would have to achieve in order to become superintelligent, such as inventing nanotechnology (assuming that is at all possible, which it very likely isn’t).

            The question is whether interesting AIs, designed to help us understand the real world, with abilities to interact with the real world and to do inventing for us in the real world, would ‘run away’.

            Yes, absolutely, they do so all the time. The Flash Crash was one such example. My own little algorithm did that yesterday, when it consumed all my RAM and crashed (due to a bug, obviously).

            What if he gives the code an arsenal of robots to control, both nano and macro…

            Firstly, he won’t be able to achieve the “nano” part, since nanotechnology does not (and, I’d argue, cannot) exist (outside of obvious exceptions such as living cells). Secondly, “unlimited” resources don’t exist, either. In practical terms, such a madman actually exists already; his name is Kim Jong Un — or Uncle Sam, if you prefer to look at things from the other side. You’d deal with the AI the same way.

          • Thegnskald says:

            Nanotechnology cannot exist?

            You just mentioned nanotechnology that does exist.

            Granted, protein folding isn’t exactly friendly to read-write operations – it’s pretty much a ROM program – but… well, even if we can’t build nanomachines as we imagine them now, a protein that “triggers” another protein by releasing an enzymatic ion molecule when there is an electromagnetic gradient of some specific value is almost certainly a viable operation.

            You only need a few dozen operations like that before nanotechnology is viable at the scale of a virus.

            (I won’t get into the “fundamental scientific discoveries” thing, because I’m probably a crackpot, but I disagree on that as well. Effectively, however, this is a claim that can always be made; if I -did- have a working theory of everything, as soon as it comes out, the claim can be made again. There is no point, no matter how much data we have, or how many theories we do or do not have, where exactly this claim cannot be made – because any discovery that is made is evidence that there are fewer discoveries -to- be made, as there is now clearly one less.)

          • Lambert says:

            Do you suppose that cell biology is the only type of nanomachinery that can possibly exist?
            And that it doesn’t count as nanotechnology?
            One example of genetic engineering is already becoming able to save half a million lives per annum (golden rice).
            What’s to stop it killing that many in the wrong hands?

          • Bugmaster says:

            @Lambert, @Thegnskald:

            Do you suppose that cell biology is the only type of nanomachinery that can possibly exist?

            It is certainly starting to look that way; at least, as long as you’re talking about self-replicating molecular nanotechnology. The reason for this is that water-based chemistry is incredibly flexible and efficient. There does not appear to be a viable way to make some equivalent of a protein out of e.g. silicon. Even if you could, you run into massive issues with energy requirements, heat dissipation, oxidation, and so on.

            Unfortunately, biological cells have some pretty serious limitations. They grow relatively slowly; they are fragile; and they can only move around a very limited set of chemicals. All this adds up to saying, you can grow ironwood (over the span of many years), but you’ll probably never be able to grow iron, not to mention diamond.

            a protein that “triggers” another protein by releasing an enzymatic ion molecule when there is an electromagnetic gradient of some specific value is almost certainly a viable operation.

            As far as I know, what you’ve just said is essentially impossible to achieve (assuming I understand you correctly), but I could be wrong — can you show me some evidence to the contrary ?

            genetic engineering is already becoming able to save half a million lives per annum (golden rice). What’s to stop it killing that many in the wrong hands?

            Don’t get me started on genetic engineering. As it turns out, genetic engineering is really, really hard. There is a handful of genes that can be easily transformed or knocked out to achieve a useful effect; beyond that, you are dealing with vast networks of genes (and intragenic regions) that all interact with each other in ways that are extremely difficult to understand, and may be impossible to manipulate. If you wanted to kill a bunch of people, you’d be way better off with good old-fashioned smallpox. If you wanted to do something actually useful, like growing a skyscraper overnight, no amount of genetic engineering would ever help you.

          • Thegnskald says:

            Bugmaster –

            If I could point to the specific proteins that we’d need to identify in order to have a protein-based nanofactory, I’d already be a significant way toward creating a protein-based nanofactory. But yes, we have, for example, probably identified proteins which appear to change the behavior of chemical reactions based on magnetic fields. (Google “birds eyes magnets” for some articles discussing this at the 30,000-foot level.) This isn’t a nanofactory – but it is the suggestion that a nanofactory might be possible. You don’t need a comprehensive understanding of how proteins work in order to get a nanofactory up and running – you just need a Turing-complete set of instructions, which is a surprisingly small instruction set. We can build abstractions on top of that. New proteins enhance our capabilities by adding new operations we can perform. Maybe we can grow carbon nanotubes in a vat out of sugar, potassium, and graphite. This sort of nanotechnology looks almost inevitable to me.

            Almost, because I am not saying it is definitely possible – maybe we’re missing the protein equivalent of a GOTO (well, probably not, IIRC RNA/ribosomes do in fact have a GOTO operation). But certainly, when you start looking at it as a boring industrial process, it starts to look a lot less like the “magic” nanotechnology prevalent in scifi.

          • Bugmaster says:

            @Thegnskald:
            What do you mean by “nanofactory” ? In a purely technical sense, a zucchini is a nanofactory. It takes elements from the air and soil, energy from the sun, and combines them to produce more zucchinis. However, there are some very serious limitations on what such a factory can produce, and magnetic-sensitive proteins won’t get you there. Proteins (and DNA/RNA) simply don’t work the way you think they do. Their behavior is a stochastic process, not a set of linear instructions. In living organisms, they form vast networks of interactions whose behaviours are incredibly complex and poorly understood; phenotypes based on a single allele, such as sickle-cell anemia, are the exception rather than the rule.

            What’s worse, water-based chemistry is pretty limited in what it can do. You won’t be building synthetic diamonds or titanium plating with living cells (at least, not nearly quickly enough), because the energies required are just too high. You couldn’t even flash-build a fully-grown biological organism, such as a pine tree. Sure, you could grow a pine tree over decades in the conventional way — but you couldn’t do it overnight. Even if you somehow managed to encode the genetics for it (using magic, presumably), the cells would immediately fry themselves as soon as you tried it.

      • Lapsed Pacifist says:

        405 pounds of metal being moved 18 inches away from the surface of the earth is an outcome. You can improve your upper-body strength, build a lifting robot, or make a ramp and have Hebrew slaves push the object upwards.

        There is bodybuilding, but optimizing for bodybuilding might not be the gold standard for achieving your desired results.

    • thevoiceofthevoid says:

      Much confusion comes from everyone using the words “artificial intelligence” to refer to either:
      A. Computer systems today which can e.g. decide what products on Amazon or videos on Youtube to suggest to you.
      B. Entirely hypothetical machines (“General AI” or “superintelligences”) that might be developed in the future, which would be able to model the world and take general actions to achieve a wide variety of instrumental goals.
      I don’t believe that B is physically or scientifically impossible (since humans exist and are non-magical). However, whether it will be remotely technically feasible in our lifetimes is a matter of intense debate. Yudkowsky himself admits that even with a supercomputer the size of Jupiter, he would have no idea how to build an AI that could accomplish a task as simple as putting a single strawberry on a plate. Today’s systems don’t even come close to general goal-oriented behavior; that still doesn’t mean it’s physically impossible for a machine to reach that threshold.
      Whether you could intentionally build an AI that can answer questions but doesn’t try to achieve any goals beyond that (intended or otherwise) is again controversial among experts.

      • Bugmaster says:

        I am not even convinced that superintelligences are physically possible. Surely, very smart intelligences are possible; but there’s a huge gulf between “very smart” and “effectively godlike”. AI alarmists tend to ignore that gulf. By analogy, tall people do exist, and we can build ladders much taller than any human — but we will never be able to build a ladder all the way to Alpha Centauri.

        • thevoiceofthevoid says:

          That’s a fair point. A more nuanced argument would be: We don’t know if superintelligence is physically possible or technically feasible, but we haven’t proved that it’s impossible. Since it could be possible, and could be existentially dangerous if it comes to exist, it might be worth devoting some resources to precautionary research in the area. And I’m sure you’ve seen the arguments about possible “fooms”, which boil down to “when the ladder starts building itself taller, it might be too late to worry about how tall it is.”

          • Bugmaster says:

            The problem with this logic is that it can be used to justify absolutely anything; it’s basically Pascal’s Wager. Sure, Hell probably doesn’t exist, but it’s possible that it does, so why aren’t you on your knees praying every day ?

          • thevoiceofthevoid says:

            @Bugmaster

            Reasonable people differ on how probable they think it is and how much that justifies spending on the problem. Very few say “LITERALLY EVERYTHING” and advocate for funding AI research to the exclusion of all else. (I’d be wary of those who did.)

            It is a bit Pascal’s Wager-y, but slightly more grounded in reality. I don’t think of Hell as “probably not existing”, I think of it as “making no sense within my current conception of physical reality.” But going with the example regardless, I think there’s a step between “Hell might exist” and “on my knees praying every day”, namely, “research the specifics of Hell, determine whether it’s a real threat, and figure out how I can avoid it if so” (yeah I’m stretching this). That research ultimately led me to my current “not physically possible or coherent” position on Hell, and I’ve decided not to spend any more resources on the issue.

            Should we also devote resources to, say, preliminary planning for how we might be able to combat an imminent asteroid strike? Important specifics aside, I’d say sure.

          • Bugmaster says:

            @thevoiceofthevoid:
            Well, I was just going by your original sentence:

            We don’t know if superintelligence is physically possible or technically feasible, but we haven’t proved that it’s impossible

            Now you’re saying that “reasonable people differ on how probable they think it is”, which IMO is a huge step up — at the very least, you’re ruling out the scenario where superintelligence is outright impossible.

            Should we also devote resources to, say, preliminary planning for how we might be able to combat an imminent asteroid strike?

            There’s a huge difference between asteroid strikes and UFAI. We know asteroids exist. We’ve seen them. We know they strike planets, all the time. We have seen the craters. Some of those craters exist on our own planet, and small meteorites rain down on it all the time. Furthermore, if we were able to detect an incoming asteroid early enough, there are at least a few things we could do about it that could reasonably work. AFAICT, literally none of that is true of UFAI (though obviously your opinion may differ).

    • Scott Alexander says:

      I continue to be baffled by everyone’s constant insistence on the theory that spending years writing hundreds of thousands of words, running a struggling nonprofit, founding an entire new field and convincing a bunch of scientists to enter it, then bucking the consensus of his own new field just as it started to get popular — that all of this was just a really long con so a guy who could otherwise probably land a programming job with Google could make a five-digit-a-year salary and live in a one-bedroom apartment in Berkeley.

      I am glad you think computers can already e.g. cure cancer; would you please share the cure you’ve developed on your laptop with the rest of us?

      • dank says:

        He doesn’t want to be rich. He wants to be famous/important enough that someone will bother to thaw out his brain in the future. (Only half joking).

      • drossbucket says:

        bucking the consensus of his own new field

        Interesting, how has his thinking changed?

      • rlms says:

        A cult can have a true believer leading it.

      • LadyJane says:

        Wealth isn’t the only form of power, and there are plenty of charlatans who seek fame and/or influence rather than just money. The big city stock broker with a seven-figure bank account is less powerful, in many significant ways, than the backwoods cult leader who has hundreds of dollars to his name and hundreds of followers willing to do whatever he wants. Although it’s worth noting that not all cult leaders are charlatans, as some genuinely do believe what they’re saying.

        I also think you’re rather overstating the importance and impact of the rationalist movement with the first part of your description.

        • sty_silver says:

          This seems like a fairly meaningless objection. Either you claim that Eliezer is lying about everything he believes in or you don’t. If you don’t, then it doesn’t matter to what degree power is attractive; you’re just accusing him of being factually mistaken.

          • LadyJane says:

            Someone can earnestly believe that their espoused views are 100% correct, and still be driven to promote those views out of a desire for wealth, fame, or influence, and still engage in manipulative and exploitative behavior in their pursuit of those goals. In fact, that can even be true if some of the views in question are actually correct.

            And I didn’t make any claims about Eliezer one way or another. I’m just making general observations about human behavior and social dynamics; draw what conclusions you will.

        • watsonbladd says:

          What can the cult leader do that the stock broker can’t?

          • LadyJane says:

            Have a rival murdered, for starters. Maybe the stock broker could use his wealth to hire a professional assassin, but in all likelihood he wouldn’t know where to find one or even where to start looking. Finding a hit man to accept his money would be a very difficult and time-consuming process for him. It would also be extremely risky since he would be forced to deal with violent criminals who might want to take his money by force, plus he could potentially get swindled by a con artist or arrested by an undercover cop. The cult leader, on the other hand, could simply order one of his devout followers to do the deed.

          • rlms says:

            Have sex with lots of people without paying for it.

      • AG says:

        But given the combo of “AI research is the most important thing” and “making all of the money to fund the most important thing is a key part of EA,” shouldn’t he have founded Google, or at least taken that Google programming job, instead of assuming he has the unique intelligence to do the actual research rather than funding it? The current situation is that, instead, other entities like Google are indeed doing the research, and faster, without the ethical oversight that he wanted.

        So no, “it’s a long con to funnel money” doesn’t seem to be true. But it doesn’t match his own insistence that it’s about winning, either, as he’s not actually taking the most effective route to his goals.

        • sty_silver says:

          Are you actually claiming that not doing any research himself would have been a more effective strategy to the goal of getting alignment research done?

          • AG says:

            Yes. If he had founded Google, then he could pay that many more people to do the alignment research, whereas those same people instead are doing AI development for our-world Google, without considering alignment in such depth.

      • Ilya Shpitser says:

        “founding an entire new field”

        This is not how founding a new field works. Most fields don’t have a clear founder (for example, causal inference does not, and machine learning does not). There are exceptions: Claude Shannon is widely credited with starting information theory, because he published a seminal paper, defined most of the relevant concepts, and solved half of it.

        “a guy who could otherwise probably land a programming job with Google”

        Minor technical point: there is no way this would happen, because he can’t program. It’s not actually entirely straightforward to get a programming job at Google.

        “just a really long con.”

        I think EY is trying (and succeeding) to be a guru, rather than a grifter. Gurus definitely are satisfying a demand, so gurudom is somewhere on a continuum between grifting and “legitimate business,” depending on the exact nature of the arrangement.

        Personally, I don’t hold gurus in very high esteem, but opinions may differ.

        I don’t think EY is trying (?very hard?) to be a scientist or a mathematician, because those people communicate with the outside world by publishing publishable things.

      • Ilya Shpitser says:

        (Followup): a lot of the objections will go away if MIRI stops asking impressionable youngsters for money, and starts doing what research institutes typically do, namely getting their funding through successful grant proposals to government agencies or foundations. (I know MIRI is moving more in that direction, but they are still doing their “drives”).

    • Deiseach says:

      I think that’s a little unkind. God knows, I’ve seen enough examples of writing that make me go “He really does think the sun shines out of his backside, doesn’t he?” but calling it a cult is going a bit far. There are all the dangers of a charismatic (well, some people seem to find him so) and dominant personality being a big fish in a small pond and the appeal of “our little inside group knows a big secret that will change the world”, but it’s not quite on the level of Indian (fake) gurus and David Koresh (leaving aside Waco and how the FBI handled that, which was extraordinarily awful and completely the wrong way; I don’t think Koresh’s movement was innocuous, as it seems to have started turning in on itself and warping dangerously).

      I think there’s enough “herding cats” involved with the kind of people you describe that people have started naturally moving on, growing away, finding and developing other interests within the broader EA and rationality sphere, and even the pet AI project seems to have been taken up by and developed by others who are more influential/recognised names.

      • HeelBearCub says:

        but it’s not quite on the level of Indian (fake) gurus and David Koresh

        Unless this was intended as sarcasm, it falls in the category of “damning with faint praise”.

        • quanta413 says:

          I’m pretty sure Deiseach is damning with faint praise. She’s never been super fond of EY as far as I know.

        • theredsheep says:

          I think it’s more that, if the word “cult” has any meaning other than “belief system I disapprove of,” it refers to things like David Koresh, the Bhagwan, or Scientology–groups with an identifiable pattern of abusive and exploitative behavior.

          • HeelBearCub says:

            Imagine someone saying:

            It’s almost a cult, but not quite.

            Yes, the obsequious fawning requested by the leader and the uncritical thinking it promotes are antithetical to a healthy organization, but most of them aren’t being abused or exploited. Therefore it is a bit too far to refer to them as a cult.

  7. Bugmaster says:

    “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”

    Intuition can be a powerful tool. It can grant you correct answers very quickly and efficiently. Unfortunately, intuition is also entirely subjective. There’s no way for someone to check your math to see if you’ve got the right answer, and there’s no way for you to convince someone else that your intuition is right and theirs is wrong. Intuition is explicitly irrational.

    I get the metaphor about martial arts and dojos, but the promise of The Rationality Project is not just an enhanced sense of intuition; it’s the ability to arrive at correct answers in a way which is repeatable, objective, verifiable, and legible (or, at least, more so than the alternatives). Personally, I’ve always been somewhat skeptical about this promise, and, if what you say is true, then my skepticism was justified. That’s what my intuition is telling me, anyway.

    • thevoiceofthevoid says:

      I think a well-calibrated intuition can still be useful and “rational” (Yudkowsky definition) in cases where it’s more important for you to come to the correct answer than it is to convince others of that answer. As you said, in the case where you’re trying to convince anyone else, you’d better be able to show more work than “based on my prior experience, it feels right.” But I’d argue intuition is fine for “should I buy these homeopathic pills at the drug store?” and falls short on “should we ban drug stores from selling homeopathic pills next to traditional medicine?”

      • Deiseach says:

        But I’d argue intuition is fine for “should I buy these homeopathic pills at the drug store?” and falls short on “should we ban drug stores from selling homeopathic pills next to traditional medicine?”

        If there’s serious argument that depression medication works mostly as a placebo because after a while depressive episodes get better by themselves, then it should be okay for chemists’ shops to sell homeopathic remedies along with the OTC medicines and perfume and fake tan and the rest of the bits’n’pieces.

        The danger is when people ignore serious symptoms and proper medicine and try and treat themselves/family members with homeopathy, but the kind of person who is anti-vaccination isn’t going to be persuaded that vaccines are okay by banning the sale of Bach’s Rescue Remedies alongside proper medicines like Addyi. Mostly it’s a case of “it won’t help but it won’t do any harm either, and if it makes you feel psychologically reassured to take it, why not?”

        • thevoiceofthevoid says:

          And that’s exactly why we need more than first-pass intuition for the latter question.

        • theredsheep says:

          If taking water instead of actual medicine causes the symptoms to get worse/the disease to progress, I would argue that that constitutes harm. Likewise if people unwittingly take fake medicine instead of the real kind because they assume anything the drugstore sells must have been vetted by Science somehow. Especially if the medicine has a sciencey-sounding name like “oscilloccoccinum” or however they spell it.

          If actual approved medicines are rubbish–I’ve heard bad things about benzonatate and oseltamivir–that’s a separate issue.

      • Bugmaster says:

        If you cannot legibly demonstrate how you arrived at the answer, then how do you know your answer is correct, and not just self-delusion ? The obvious answer is, “well, my intuition is usually right, so I’ll trust it this time too”. This approach works extremely well until it fails spectacularly. Usually, this happens when you encounter some genuinely difficult question the likes of which you’ve never faced before… i.e., exactly the kind of problem that rationality is supposed to solve. See the luminiferous aether, for example.

      • BBA says:

        A few years ago, a homeopathic “low-dilution” (i.e., actually present) zinc nasal spray marketed as a cold remedy caused several people to permanently lose their senses of smell. Its marketing had been slick enough that very few people realized it wasn’t “real” medicine, but was untested and based on junk science. (Certainly I had thought it was just another cold medicine, and it made me question my then-smug libertarian views on consumer protection laws.) So I’d say we should ban low-dilution homeopathy from the drugstore shelves and only allow those remedies that are diluted to sufficient “potency” to be chemically indistinguishable from placebos.

        • I’d say that with this comment you brought added value to this thread, so I reject the claim that you are “bringing the comment quality down” by posting here.

          You’d bring even more added value by providing some link/reference to this zinc nasal spray story. Would love to read more.

          Also, it’s my first post – I’ve been lurking for about 1.5 years, so hello everybody.

          • BBA says:

            Here’s a WebMD piece on it. The makers deny the claims. Beware the man of one study and all that. Zicam oral lozenges have not been linked to anosmia and continue to be widely sold.

            As critical as we are of the FDA around here, I still don’t know that letting these medicines escape FDA oversight by claiming to be ineffective homeopathy is a good thing.

    • Scott Alexander says:

      I think you’re arguing someone is only a Scotsman if their genes encode the Scots language without them having to learn it, then using that definition to conclude there is no such thing as a Scotsman and everyone claiming to be Scottish is lying to you.

      Less snarky edit: Or compare medicine. You can write medical textbooks saying “the rules of medicine”, and lots of people do. But the average person who’s just read a medical textbook (or Wikipedia, or WebMD) will do a terrible job diagnosing and curing disease compared to a doctor who’s worked in the field all their lives. This doesn’t mean medicine is wishy-washy and not based on objective principles. But it does mean it’s not some simple algorithm anyone can apply.

      Even less snarky edit: Maybe I’m bad at explaining this. Have you read David Chapman on meta-rationality?

      • Bugmaster says:

        Maybe I’m bad at explaining this. Have you read David Chapman on meta-rationality?

        I think I did, but I’m pretty bad at human names — can you provide a link ? I want to make sure I’m reading the exact same thing you are.

        Or compare medicine. You can write medical textbooks saying “the rules of medicine”, and lots of people do. But the average person who’s just read a medical textbook (or Wikipedia, or WebMD) will do a terrible job diagnosing and curing disease compared to a doctor…

        And yet, when you ask the doctor, “why should I subject myself to painful and potentially deadly chemotherapy ?”, his response is not, “I’m a doctor, I just know these things”. Rather, it is, “I’m a doctor, I know these things because there’s this shadow on your MRI, and several of your protein levels are elevated, and your original symptoms are consistent with this statistical model, and…” Even if you are not a doctor yourself, you could take all that data, show it to another doctor, and have him interpret the results.

        The doctor surely would’ve used his intuition to arrive at his conclusion — but intuition is just the first step (or at least, one would hope so). After that, he’s going to meticulously follow the rules written down in the Big Book of Medicine, which are objective (or, at least, as objective as humanly possible), reproducible, and legible. Any doctor who’s any good will follow these rules regardless of how strongly his intuition is screaming “cancer !”, because the stakes are quite high.

        What about Rationality ? Are the stakes high enough ?

        • Nick says:

          I think I did, but I’m pretty bad at human names — can you provide a link ? I want to make sure I’m reading the exact same thing you are.

          Chapman writes about this stuff at Meaningness. I think Scott has some of his blog posts in mind, like this one.

      • MasteringTheClassics says:

        You’re kind of describing metis, both here and in the original post.

  8. k48zn says:

    Looking back on the Piketty discussion, people brought up questions like “How much should you discount a compelling-sounding theory based on the bias of its inventor?” And “How much does someone being a famous expert count in their favor?” And “How concerned should we be if a theory seems to violate efficient market assumptions?” And “How do we balance arguments based on what rationally has to be true, vs. someone’s empirical but fallible data sets?”

    Clearly you should not choose the wine in front of me.

  9. MawBTS says:

    The rationalist community started with the idea of rationality as a martial art

    That reminds me of something godawful.

    There was/is a pick-up artist called Erik von Markovic who went by the handle of “Mystery”. Once he was the most famous of his kind – he was a central character in Neil Strauss’s book The Game, and even had his own reality TV show. Now he’s destitute and forgotten.

    He viewed pick-up artistry in the same terms. “If do right, no can defense!”

    He planned on opening a dojo, where aspiring PUAs could “train” in what he called the venusian arts (Mars is the god of war, and Venus is the goddess of…). The entryway would contain a life-sized poster of Bruce Lee and Erik von Markovic standing side by side, along with the text “the king of martial arts and the king of venusian arts welcome you to this dojo!”

    Unfortunately for cringeseekers worldwide, the dojo never went ahead. He did start a company called the Venusian Arts, which he was summarily forced out of by his own students.

    You might suspect that this comment is off-topic and pointless. Your suspicion is correct.

    • Toby Bartels says:

      While we're on the topic of off-topic trivia: ‘Venusian’ is analogous to ‘Martian’, a strictly modern term referring to the planet. The classical term analogous to ‘martial’ is ‘venereal’. I'm not sure that I'd want to study at a dojo on the ‘venereal arts’, but at least I would respect its command of the English language.

      • Randy M says:

        It’s one of those cases where you only hear a word with a negative modifier–disease in this case–so it acquires a negative connotation it doesn’t actually denote on its own.

        • ec429 says:

          And that’s why astronomers use “cytherean”. (Apparently “aphrodisial” was too close to “aphrodisiac”.)

          (Ok, so many astronomers nowadays just use “Venusian”, and to hell with etymological soundness; but Κύθηρα lives on in the terms peri- and apocytherion.)

      • quaelegit says:

        I’d call it a respect for Latin language, or romance etymology or something. Noun “X” –> adjective “Xian” is perfectly valid English derivational morphology. You can still pan him for ignorance of Latin inflection if you want to though 😛

        • Toby Bartels says:

          I’d call it a respect for Latin language, or romance etymology or something.

          That too, I suppose. But the etymology is not the point; it's not as if he was coining a new word. Both words ‘venereal’ and ‘venusian’ existed beforehand, and ‘venereal’ already had the meaning that he was going for, while ‘venusian’ did not.

    • beleester says:

      It’s a funny story regardless. But if you want it to have a point, perhaps “Calling something a martial art is no guarantee that you can actually train people in it”?

  10. Ron Unz Article says:

    Scott makes good, or “well-calibrated”, predictions. I think he’s wasting his talents by showing these predictions off only once a year. If I understand, he’s resolved to do it even less often. I’m disappointed. Kavanaugh or someone else for the Supreme Court next week? Democrats or Republicans in the Senate this fall? But maybe those are fraught topics in this space.

    Still, instead of applying rationality to optimising the far future, or to giving measured advice about newsy outrages, I would like to see it applied to “fun” things. Probably others can come up with better “fun” ideas to analyze. But what about old conspiracy theories?

    Did Lee Harvey Oswald kill JFK, with a single bullet? I don’t need to be convinced that the answer is “probably.” But it would be fun and instructive to see it quantified. Is a rationalist 90% sure of this? 99 and a half percent sure?
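
    For what it’s worth, “well-calibrated” has a standard quantitative reading. Here is a minimal sketch (with made-up numbers, not Scott’s actual track record) of how stated confidences like 90% or 99.5% could be scored, using the Brier score:

    ```python
    # Brier score: mean squared error between stated probabilities and outcomes.
    # Lower is better; 0 is a perfectly confident, always-correct forecaster.

    def brier_score(forecasts):
        """forecasts: list of (stated_probability, event_happened) pairs."""
        return sum((p - float(happened)) ** 2
                   for p, happened in forecasts) / len(forecasts)

    # Being 99.5% sure instead of 90% helps only slightly when you're right...
    print(brier_score([(0.90, True)]))    # ~0.01
    print(brier_score([(0.995, True)]))   # ~0.000025
    # ...but costs dearly when you're wrong.
    print(brier_score([(0.995, False)]))  # ~0.99
    ```

    Calibration itself is then checked by binning: of all the predictions made at 90%, roughly 90% should come true.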

    • Tarpitz says:

      Is the FSB arranging for Russia to overperform at the World Cup? How much stronger should this suspicion be if they go on to win it?

      • Deiseach says:

        Well, Belgium just beat Brazil to go up against France in one of the semifinals, so do we think Russia can beat Croatia? And which of England/Sweden to go through?

        If we’re looking at a Russia-Belgium final, certainly something weird will have happened, but it may just be football and not bribery and corruption 🙂

  11. oiscarey says:

    The idea of rationality as a martial art is a rich metaphor. It is fruitful and interesting to look at the ways in which this metaphor is apt, e.g. looking at training, evidence-based practices, etc. The ways in which the metaphor falls apart give interesting indications for the future of the craft.

    A key defining feature of a martial art is a two-person competition in which there is a demarcated winner and loser, usually as judged by an impartial 3rd party (referee).

    There are challenges in identifying clear winners and losers in martial arts, e.g. close boxing matches, fake martial arts. They have developed rulesets and cultural practices to guard against this, and these are effective within the bounds of human subjectivity.

    One comparison to note is Brazilian Jiu-Jitsu, a submission-oriented martial art that dominated the UFC initially. In Jiu-Jitsu, there is no confusion about the winner or loser, as the loser submits/taps, or else they go unconscious/break a limb. A serious system for judging expertise only arose quite recently, and it isn’t uncommon for upsets to occur where the expert is beaten by someone with less expertise.

    In comparison, Aikido is a movement and momentum-based martial art that focuses on dodges, throws, and limb-locks. It has a strict and detailed system of competition for the judgement of winners and losers, but when applied to real-world situations or applied in competition with other styles it completely falls apart. It is so ritualised in its ruleset as to be worse than useless.

    Rationality training has no school, no agreed-upon set of skills to be utilized in ascertaining truth. Tetlock’s system for superforecasters is probably the closest, but there is no general form (to my knowledge). Thus, training is ambiguous.

    Rationality competition is highly ambiguous. How long should a debate last before the match is stopped? What happens when one person gives up and leaves? When the game is over and both people claim to have won, how is the winner identified? Furthermore, how is the competition structured so as to provide a hierarchy of rationality? I’m fairly confident that there are few that would relish being labelled a “black-belt in rationality”.

    To be truly comparable, rationality would require institutional backing. This would involve developing training, and a ruleset for competition. It would require impartial 3rd parties to judge the outcomes of competition.

    To my eye, such a rationality community would need a ‘canon’ to compete over. The aim of victory would be to change and update the canon, defining what the community must see as the rational perspective.

    However, most pressing would be the need for the referees to judge the conduct of competitors and declare the outcomes of competitions. How rational was the argument, how convincing, what percentage movement would be required, etc. There would need to be a ritualistic definition of arguments and premises, the factual requirements for changing positions, probably percentage confidence levels for the truth of arguments and premises, etc.

    Probably wouldn’t be as fun as just arguing with people on the internet…

    • Bugmaster says:

      Well, one way to bypass all those problems would be to settle on some objective metric, such as e.g. money. Have both contestants in the rationality battle put up $1000 (or however much) of their own money (or have someone sponsor them). Have them play the stock market however they want — day trading, high frequency trading, long-term investment, whatever. Wait a year, then see who’s made the most money; that guy is the winner. Play best 2 out of 3 if you want to reduce the influence of chance.

      • thevoiceofthevoid says:

        Counterpoint: the stock market is a uniquely bad place to practice one’s prediction skills, due to its inherent volatility and unpredictability. [insert any argument for why index funds outperform mutual funds or any other guided investment schemes] I assume if you actually tried this you’d either have the contestants winning or losing on mostly pure luck, or investing in index funds and ending up within a few dollars of each other.

        • Bugmaster says:

          I never said you were limited to only looking at the stock prices and nothing else. If you wanted to, you could research some promising companies, and invest in them because you believe in their products (or services, or IP, or whatever). I completely agree that an ordinary person would be better off investing in index funds — but we’re talking about Rationalists here, and the whole point of the movement is that they should perform much better than ordinary people, right ?

        • 天可汗 says:

          Weak. My mother outperforms index funds.

      • eccdogg says:

        I would think a better competition would be to create a list of outcomes to be predicted and then split them in half. For each question, one person would be on offense and the other on defense. The person on defense names a probability p of the event happening, and the person on offense decides to buy or sell at that probability. Once the event resolves, the seller collects p points if the event did not happen, and the buyer collects 1-p points if it did.

        You could publish the list of questions ahead of time so contestants could research.

        Add up the points at the end and declare a winner.
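
        As a rough illustration, here is a minimal sketch of that scoring rule in Python (function and variable names are mine, not eccdogg’s):

        ```python
        # Sketch of the proposed offense/defense prediction game.
        # Defense quotes a probability p; offense chooses to BUY (bet the
        # event happens at price p) or SELL (bet against it at that price).

        def score_question(p, offense_buys, event_happened):
            """Return (offense_points, defense_points) for one question."""
            if offense_buys:
                # Offense pays p for a contract worth 1 if the event happens.
                offense = (1 - p) if event_happened else -p
            else:
                # Offense sells the contract: collects p, pays 1 if it happens.
                offense = -(1 - p) if event_happened else p
            return offense, -offense  # zero-sum between the two players

        # Example: defense quotes 70%; offense sells; the event fails to happen.
        print(score_question(0.7, offense_buys=False, event_happened=False))  # (0.7, -0.7)
        ```

        The appealing property of any such market-style rule is that the defense’s best move is to quote its honest probability: quote too high and the offense profits by selling, too low and it profits by buying.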

    • Nancy Lebovitz says:

      Thanks for the detailed analysis, but I believe there is no escape from Goodhart’s Law (measurements which are used to guide action become corrupt), though some efforts to escape it are better than others.

      The challenge is that rationality is person versus the universe, and victory is even less well-defined than it is for martial arts. How can you tell how much someone was lucky vs. how much they were right? Or how much someone did as well as possible in the face of bad luck?

      • AG says:

        Not quite Goodhart’s Law, but rather, I was about to comment that the community would then just get mired in increasing levels of meta debates, and never settle on any stable standards (and standards for the standards, etc.) in the first place.

        At work, we recently tried to quantify employee performance a little bit less subjectively, got into an extensive definitions discussion for the standards, but concluded that this would lead to the definitions discussion being interminable, and so reverted to declaring a certain fuzzy area as subject to “engineering judgement,” for the sake of being able to move ahead on the object level. Not unlike Scott concluding that sometimes you just gotta trust that intuition.

    • Confusion says:

      A key defining feature of a martial art is a two-person competition in which there is a demarcated winner and loser, usually as judged by an impartial 3rd party (referee).

      That seems like a strange criterion to me. There are martial arts that truly don’t have any rules, because in the real world there are no rules. Such a martial art is about incapacitating your opponent(s) quickly and decisively. You can’t have competitions, because combatants would break limbs, be knocked unconscious, suffer permanent injury, or die. Even MMA has rules: it forbids e.g. kicks to the groin, punches to the throat, eye gouging, elbows to the spine, and kicking an opponent when he’s down. There are martial arts where you train those things as well. Of course you cannot practice destructive techniques as well as non-destructive ones, and you don’t truly know how well they will work, but the fact that they are forbidden in MMA gives a clue.

      • John Schilling says:

        There are martial arts that truly don’t have any rules, because in the real world there are no rules

        In the real world there are always rules.

        If you e.g. actually gouge someone’s eyes out and leave them permanently blind, the rules are going to say that something really bad happens to you next. If your excuse is that you were teaching or practicing a martial art for use in the “real world”, something really bad is going to happen to you next. If your excuse is that you were actually defending yourself against an unprovoked lethal attack, you might get off, but the enforcers will be asking hard questions about how you could be sure the attack would have been lethal and whether you might have been able to defend yourself by some less drastic means like just killing the guy.

        So I’m kind of skeptical about the existence of martial arts that truly don’t have any rules even at the “don’t actually gouge people’s eyes out” level.

        • Confusion says:

          Ah sorry, I hadn’t realized you could interpret it like that. Gouging someone’s eyes out is not the goal of the exercise (much too difficult to do deliberately). It’s just useful to use the eye sockets for leverage (the head can be used to steer the body: move the head and the body follows) on e.g. someone bald (otherwise the hair is usually more convenient to use). Any damage to the eyes is circumstantial. The same goes for the other things I mentioned: they are not goals, but means as part of techniques. Usually to distract or force a certain response.

      • wysinwygymmv says:

        What are these martial arts? Who practices them? Where? Most importantly, how? Like, who agrees to let you gouge their eyes to drill your eye gouging technique? If you don’t drill it, how do you get good at it? And if you do drill it, doesn’t everyone wind up being much worse martial artists than when they started because now they’ve all had their eyes gouged out?

        The rules around “tapping” are not a good model for fighting in the “real world”, but if you don’t learn to tap you will not become a good martial artist, because you will become permanently brain-damaged by having the blood supply to your brain cut off too frequently. Reproducing real-world conditions is not necessarily the best way to get better at dealing with real-world conditions.

        Another example: it’s probably suboptimal to train your swimming technique on a shoreline with 5 foot swells and a deadly riptide, even if that’s the kind of environment that would most effectively test for extraordinary swimming ability.

    • rlms says:

      One can join the Good Judgment Open (GJO) or its sequel, the Hybrid Forecasting Competition (HFC). I’ve mentioned them before, but apparently no other commenters signed up.

    • AG says:

      Competitive debate formats do already exist, you know. (And I wish more people in this group would learn some of the jargon from them, as it’s useful for better organizing arguments and counter-arguments.)

      (Really, though, they boil down to how you can best employ the dark arts to sway the judge to your side. And because the judge also awards “speaker points,” they’re keeping an eye on the meta level too: how well you employ the dark arts counts as a plus.)

      • Peffern says:

        I agree that my experience with competitive debate has made it much easier for me to understand things on SSC.

        I think you’re wrong on the parenthetical though. In my experience, speaker points were awarded independently from who won the debate, and didn’t change the outcome. This is because debating events were shared with speech events where oratory skill mattered, so the debate events would give speaker awards independently of the actual round wins/losses.

        Also in my experience, the judges are either experienced debaters themselves or attended judge training that (in theory) helped to inoculate them against certain unsavory strategies. While this doesn’t mean people aren’t winning by employing the dark arts, it takes more effort than you would expect to do so, to the point where for most debaters it’s not worth it. And, if a hypothetical debater is so competent that they could win even with a judge watching out for their tactics, then they are probably smart enough to win the normal way anyway.

        • thevoiceofthevoid says:

          The very form of competitive debate lends itself to intrinsically dark-art-ish motivations and techniques, though. It isn’t remotely about truth-seeking; it’s about coming up with as many post-hoc rationalizations as you can for the side that you’re assigned (not even what you truly believe!). I’d agree it’s probably useful for learning rhetoric, persuasion, and even research; but is terrible for training rationality. IIRC Yudkowsky used a hypothetical “good arguer” paid to write evidence and reasons that supported a presupposed conclusion as a fable to demonstrate how “rationalization” is opposed to “rationality.”
          ETA: Found it.

          • AG says:

            Yes, thevoiceofthevoid articulates what I was getting at.

            Like, if I use a particularly clever rhetorical trick to answer an argument via definitions debate, the judge may award me the win, because I was the better debater, on the meta level of evaluating my cleverness skills, rather than on the actual content of the debate.

            My debate team kind of specialized in winning by derailing from the original topics into either obscure political horse trading knowledge, or claiming that going extreme hedgehog with [ideology] solved everything in the world, which the opposition does not do. The best teams are horseshoe experts, employing the exact same small stable of arguments no matter which side of the topic they’re assigned to.

  12. HeirOfDivineThings says:

    One of the things that’s helped me when studying rationality re: trusting science is to not treat science as one unified field.

    There are obviously different scientific fields with varying degrees of good/poor methodology and data. So e.g., comparing homeopathy to priming is more or less comparing apples to oranges. They are different fields and so it isn’t the same scientists failing both.

    Then there’s the difference between science and academia. There are facets of academia I trust a lot less than I did a few years ago, and some that I trust more. I think I’ll just leave it at that.

    • Scott Alexander says:

      I think what you’re saying (and what other people have said) is part of the solution. But I wouldn’t want to have to explain it on the fly to a hostile audience in 140 characters.

  13. jrdougan says:

    Years ago I read the saying (very paraphrased) “A wild goose chase is good exercise, and sometimes you catch the goose.” It sounds to me like that is a fair summary of what you are doing, except the geese are a bit more catchable.

    (I would appreciate if someone could point me back to where I got this)

  14. J Mann says:

    I am thankful for this blog and the commenters. I don’t see myself as a rationalist, but I do think the blog makes me smarter and more engaged.

    Scott, if you want a suggestion for rationalism evangelism, I would probably react well to a few short posts a month going through some of the core concepts of rationalism (or just strongly endorsing a given LW post at the top of your link thread). I’m not sure if I’d catch rationalism fever or not, but I’m willing to expose myself to it, especially if there’s a lively comment discussion.

    • Mark V Anderson says:

      I agree with this. I do not consider myself a rationalist, and I am a bit skeptical of some of the things I’ve heard about it. But I am certainly interested in learning more, so I’ll know better what I like and dislike about the field.

  15. Ilya Shpitser says:

    “I’ve been thinking about what role this blog plays in the rationalist project. One possible answer is “none” – I’m not enough of a mathematician to talk much about the decision theory and machine learning work that’s really important, and I rarely touch upon the nuts and bolts of the epistemic rationality craft.”

    This induced a bit of whiplash in me, especially in light of the fencer analogy.

    There are two parts to making positive changes: (a) knowing what to do (this involves learning from books/blogs/etc), and (b) actually trying to do it (the latter part involves motivation, repetition, social support, trying things empirically, error correction, etc.)

    In the internet age, I think we are oversaturated with (a), and limited by (b). I think the marginal value of more blogging about rationality is probably zero, while the marginal value of getting more social and cultural infrastructure in place to enable (b) is high.

    Re: ML and decision theory work that you think “is really important.” How did you decide, not being enough of a mathematician as you say, that it is really as important as you say?

    • Scott Alexander says:

      “Re: ML and decision theory work that you think ‘is really important.’ How did you decide, not being enough of a mathematician as you say, that it is really as important as you say?”

      How does one decide that global warming is important, if one isn’t a climatologist? How does a diabetic decide it’s important to take her insulin, if she isn’t a doctor?

      • rlms says:

        How does a diabetic decide it’s important to take her insulin, if she isn’t a doctor?

        Easy — she stops taking it, feels sick, changes her mind.

        How does one decide that global warming is important, if one isn’t a climatologist?

        Much more difficult. Some possible approaches: go with the consensus view; take the assumption that global warming is real and do relatively simple reasoning about the consequences of it (e.g. analogising climate change to acute changes in temperature, looking at the effects of previous climate change). These aren’t really available for AI, because it’s never happened before (well, you can try analogising to human intelligence but I think that favours the opposite conclusion — human intelligence hasn’t foomed).

        • Jiro says:

          Easy — she stops taking it, feels sick, changes her mind.

          If you substitute antibiotics for insulin, doctors have to keep reminding patients to finish up their prescription even if they feel better. There is a reason for this.

      • Ilya Shpitser says:

        > How does one decide that global warming is important, if one isn’t a climatologist?

        Presumably listen to climatologists. What do your decision theorist and machine learning friends say?

        • ec429 says:

          As a general rule, listening to practitioners of X is not a good way to find out if X is important.
      Theologians will tell you theology is important. Bankers will tell you that their bank is so important you have to bail them out. I will tell you that your entire civilisation depends on Linux kernel developers. None of these should be particularly strong Bayesian evidence to you.

          Climatologists are one source for whether global warming is happening, what’s causing it etc. But personally, I decided whether it’s important by listening to economists. (Specifically, David Friedman.)

          • The Nybbler says:

            Beware the man of one economist.

          • John Schilling says:

            Economists are generally quite good at quantifying how important something is, presuming it is true in the first place. And there’s definitely much to be learned along the lines of, “If IPCC5, then we still muddle through”.

            But if you want to know whether e.g. IPCC5 can be relied on in the first place, then you probably want to talk to scientists who are specifically not climatologists.

          • makj says:

            (Specifically, David Friedman.)

            (Who’s also a physicist.)

          • Ilya Shpitser says:

            I like your “follow the money” heuristic.

            That said, deciding whether climate change is important is probably going to involve evaluating claims about climate change, which is very difficult indeed to do without input from actual climatologists. Perhaps their place is as oracles for answering factual queries only.

            I don’t self-identify as “an ML researcher”, but I can probably do a reasonable job not actually lying about factual claims about ML.

          • quanta413 says:

            I think it’s a good idea for scientists to function as oracles for factual queries. But scientists are also people so they should try to act morally. This creates some grey areas where different people lean different ways on whether or not to answer a factual query at all and whether or not to change how you communicate depending on the moral content of the query.

            I lean strongly towards “scientists should try to restrict their behavior as scientists to answering factual questions in as direct and literal a manner as possible”. I think that answering more complex non-factual queries involves expertise that scientists don’t necessarily have or often involves adding facts from outside their field.

            It doesn’t come up a lot because most of the questions a scientist will answer will be the ones they personally chose. It’s not clear to me what the best way is, within the scientific process, to give non-scientists influence over which factual questions are worth answering. Obviously, they usually won’t have very useful things to say about what to look at specifically. But broad brush, maybe. For example, the NIH pushes scientists one way and the NSF another.

          • Ilya Shpitser says:

            To be explicit about the “between the lines” stuff that I am sure Scott got already, the way Scott talks about this stuff makes me think he’s way overconfident about what’s important.

          • albatross11 says:

            I’d say the most important job of scientists (detectives, journalists, historians) is to try to get as good a picture of reality as possible and share it honestly. When you’re in your role as a scientist, you should be trying to make sure you are telling the whole truth as you understand it, with all appropriate modifiers. (Yes, the journalists will turn your “slowed down tumor growth by 4% in a mouse model for ovarian cancer” into “cured cancer”, but *you* should be telling the truth.) The truth you tell then becomes an input into a million other people’s thinking, which you can’t possibly know or predict or understand because you don’t have their expertise.

            This is one reason I am so opposed to the “noble lie” idea–the notion that sometimes, scientists, journalists, detectives, etc.. should misreport the truth to achieve some valuable social goal. You have no idea who will be hearing your noble lie or how they’ll be applying it, and you *can’t*–you don’t know what all those people know[1]. (Some of those people are in the future, making decisions based on the noble lie you taught them in high school or college because they’ve never revisited the subject since then.)

            Scientists also have opinions of their (our) own, sometimes on social issues, sometimes on scientific issues. But I think it’s really important to distinguish between “this is the best currently available picture of reality” and “this is what I think might be going on” and “this is how I hope things will be so that my desired kind of society may arise.”

            [1] ETA: This is basically parallel to von Mises’ argument about the impossibility of running a fully planned economy. No one person or organization knows enough to set the production quotas for everything – that knowledge is distributed across all the people in the society, and includes stuff like individual preferences between work and leisure, detailed engineering tradeoffs involving alternative materials that won’t be visible until someone building generators or radios finds out there’s no copper available, etc. It’s basically the same problem here, but it’s messier, because instead of prices, we have more general knowledge moving through the society.

          • HeelBearCub says:

            Specifically, David Friedman.

            If you think this means that you are “listening to economists”, I submit that you should go back a step or three. What you are most likely doing is specifically rejected by theoretical Rationality (if not the actual practice) as rationalization.

            Substitute any sufficiently ideologically motivated economist for Friedman and I will tell you the same.

          • Specifically, David Friedman.

            If you think this means that you are “listening to economists”, I submit that you should go back a step or three.

            If I correctly understand him, his point is that the arguments I offer he finds more convincing than the arguments other people offer in the other direction, and I happen to be an economist, not that the fact that my arguments are being made by an economist is what makes them convincing, which would be an error.

          • HeelBearCub says:

            @David Friedman:
            He’s taking your opinion to be that of “economists”, arguing that you represent the consensus view of the field vis-a-vis climate change.

          • ec429 says:

            He’s arguing that he’s taking your opinion to be that of “economists”, that you represent the consensus view of the field vis-a-vis climate change.

            Nothing of the sort. I don’t rely on “consensus view”. Rather, I have read a sufficient amount of economics that I believe myself able to evaluate (though not, with any degree of confidence, to initiate) arguments on the subject. I have read David’s economic arguments on the global warming issue, and found them convincing; whether others with the label ‘economist’ agree with those arguments is not really relevant. (Consensus in economics is generated neither by an efficient market nor by incontrovertible experimental results, thus (per EY) there is no reason for epistemic modesty.)

            Beware the man of one economist.

            I have read other economists; but for some reason Adam Smith never mentioned the IPCC’s AR5. You can imagine how disappointed I was to slog through the entire Wealth of Nations and not see a single sentence about the impacts of cap and trade on long-term growth… 😉

            … /me waits for David to point at a passage from Smith that can be interpreted in precisely those terms…

          • HeelBearCub says:

            @ec429:
            Then you aren’t deciding whether global warming is “important” by listening to “economists”. You are deciding by listening to David Friedman. Those are two quite different things, and you should be clear on the difference.

            If you think that listening to David Friedman is the same thing as listening “to economists” you are making a big error. If you think that this is generalizable, that you can simply pick and choose which people count as experts in a field, you are making an even bigger error.

          • ec429 says:

            @HeelBearCub

            Then you aren’t deciding whether global warming is “important” by listening to “economists”. You are deciding by listening to David Friedman. Those are two quite different things, and you should be clear on the difference.

            Clarification: I decide by listening to economics (I initially misspoke when I said economists); one of my main sources for economic arguments about GW is David Friedman (who writes in terms of economic arguments, on account of his being an economist), and I then evaluate those arguments against my own understanding of economics.
            This is not the same as just believing X because David Friedman, Who Is An Economist Doncha Know, says X.

            If you think that this is generalizable, that you can simply pick and choose which people count as experts in a field, you are making an even bigger error.

            That’s not what I claim to do. I don’t care who’s an “expert”, I care what their arguments are. Admittedly, once someone has built up a record of (from my perspective) reasoning correctly, that becomes Bayesian evidence that their future positions will also be correctly reasoned — but evidence which is screened-off once I actually read their arguments for those positions.

            It is those who choose to defer to the “consensus” of “experts” who believe they can reliably identify the experts in a field. On this subject at least, I believe I am better able to evaluate arguments than persons.
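            (A toy numeric sketch of the “screening off” claim above may help, with all probabilities invented for illustration: a good track record raises the chance that an argument is sound, but once the argument’s soundness has been evaluated directly, the track record no longer shifts the conclusion.)

```python
# Toy model: T = arguer has a good track record, A = the argument is
# sound, C = the conclusion is correct. C depends only on A, so T is
# "screened off" once A is observed. All numbers are illustrative.
from itertools import product

p_T = 0.5
p_A_given_T = {True: 0.9, False: 0.4}   # track record makes soundness likelier
p_C_given_A = {True: 0.95, False: 0.3}  # conclusion depends only on soundness

joint = {}
for t, a, c in product([True, False], repeat=3):
    p = p_T if t else 1 - p_T
    p *= p_A_given_T[t] if a else 1 - p_A_given_T[t]
    p *= p_C_given_A[a] if c else 1 - p_C_given_A[a]
    joint[(t, a, c)] = p

def cond(c_val, **given):
    """P(C = c_val | the variables fixed in `given`)."""
    keep = [((t, a, c), p) for (t, a, c), p in joint.items()
            if all({'t': t, 'a': a}[k] == v for k, v in given.items())]
    return sum(p for (t, a, c), p in keep if c == c_val) / sum(p for _, p in keep)

print(round(cond(True, t=True), 3))          # 0.885: track record alone shifts belief
print(round(cond(True, a=True), 3))          # 0.95: argument evaluated as sound
print(round(cond(True, a=True, t=True), 3))  # 0.95: track record now adds nothing
```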

          • HeelBearCub says:

            You are still picking and choosing. In addition, your initial statement makes even less sense substituting “economics” for “economists”, as Friedman is most definitely not economics as a whole.

            The proper way to do this would be to understand multiple general arguments within the field of economics about how “important” AGW is. Listening only to Friedman doesn’t get you there, although Friedman potentially might point you at other relevant arguments. Still, Friedman has a well-known bias on AGW, so you really need to take what he says on it with the proverbial grain of salt and make sure you evaluate multiple independent sources.

            As to whether Friedman is trustworthy, I find him to be highly disingenuous when it comes to arguments, especially about climate change. Among other things, he continually switches from the motte of “climate change is most likely something we can adapt to at a low enough cost” to the bailey of “and you can’t trust climate scientists anyway, so you should doubt whether climate change is occurring”.

          • David Friedman says:

            Among other things, he continually switches from the motte of “climate change is most likely something we can adapt to at a low enough cost” to the bailey of “and you can’t trust climate scientists anyway, so you should doubt whether climate change is occurring”.

            Would you like to provide support for that claim by quoting me saying that you should doubt whether climate change is occurring? If you cannot do so, you might want to rethink the reliability of your model of the world.

          • David Friedman says:

            On the question of the views of other economists, you might find the case of Nordhaus of interest.

          • HeelBearCub says:

            @David Friedman:
            I don’t care to go quote dredging, but one of your favorite things to do is: a) state that whether you trust someone on their statements should depend on whether you find claims that they have made to be true in the past, b) state that the 97% figure (or something else) is a lie, and c) say (or merely imply) that you leave it up to the readers to decide whether various climate scientists are trustworthy on the science. It’s an inference chain that you employ frequently.

          • eccdogg says:

            I certainly have not read everything David has written on the subject. But I have read quite a lot, here and in other places. And most often I see him quoting directly from the IPCC when it comes to predictions of temperature changes and sea level rises.

            That certainly seems pretty far from “you can’t trust climate scientists, so you should doubt whether climate change is occurring”.

            I certainly have seen him go after individual climate scientists he believes to have done shoddy, misleading work, but I have never seen him transition from there to implicating the whole idea of climate change.

            But I certainly would change my mind on the subject if you could provide some quotes about what you are talking about.

          • HeelBearCub says:

            @eccdogg:

            And most often I see him quoting directly from the IPCC when it comes to predictions of temperature changes and sea level rises.

            As long as we are asking for references, can you point me to a place where he is employing those quotes in agreement with the IPCC?

            I really don’t want to have to maintain a file of Friedman quotes with links. It’s boring. And it’s precisely this very careful dancing between motte and bailey that makes him so untrustworthy on this subject to my mind.

            Go back a step to my example. Assuming that inference chain is one that he walks people down frequently when the subject of climate change comes up, how would you classify it? Have you ever seen him employ it?

          • eccdogg says:

            Sure, see down further in this very thread.

            “Currently it’s about a foot higher than it was a century ago, and the high end of the IPCC projection is for about a meter by the end of the century.” — David Friedman

            I have not seen him walk folks down that inference chain.

            ETA: I guess this is what you are referencing

            http://daviddfriedman.blogspot.com/2014/02/a-climate-falsehood-you-can-check-for.html

            But it mainly seems to call into question one particular scientist, his claims, and the claims of those closely associated with him, not the whole idea of AGW.

            Also, I guess the quote I gave you would not necessarily put him in agreement with the IPCC, because he is using those projections as an upper bound.

          • J Mann says:

            @HeelBearCub – I don’t see evidence of motte and bailey, and think you might be misusing it. It’s reasonable for David to make more than one assertion in his life – he can simultaneously say (a) that the 97% figure is misrepresented; (b) that climate change is occurring; (c) that the costs are likely to be manageable;* and (d) that readers should judge for themselves whether any particular piece of science is reliable.

            The essence of motte and bailey is that when challenged in the bailey, you retreat to the motte – I don’t see any evidence that David flees from any of his opinions.

            * I don’t know if David actually says (c) – he well might, but I don’t want to put words in his mouth.

            @DavidFriedman – I’m mostly interested in the use of “motte and bailey” here, and am not used to discussing someone’s work in front of them – if this isn’t something you’re comfortable with, let us know.

          • HeelBearCub says:

            @eccdogg:
            That is not him agreeing with the IPCC. That is him picking out one prediction from the IPCC to use as a logical club. Note that you don’t find him endorsing a belief that they are correct in predicting a 1 meter rise, merely an implied claim that a one meter rise is not concerning. If you put the phrase “Even if this were true” into his post, it doesn’t change the meaning of it.

            The blog post you linked is one example of him promoting the inference chain.

            And here is him explicitly endorsing a statement very near the end of the inference chain:

            Since, as a prominent supporter of the position that warming is primarily due to humans and a very serious threat, Cook is taken seriously and quoted by other supporters of that position, one should reduce one’s trust in those others as well. Either they too are dishonest or they are over willing to believe false claims that support their position.

          • eccdogg says:

            But he also clearly says this.

            “That Cook misrepresents the result of his own research does not tell us whether AGW or CAGW is true. It does not tell us if it is true that most climate scientists endorse AGW or CAGW.”

            “The fact that one prominent supporter of a position is dishonest does not prove that the position is wrong. For all I know, there may be people on the other side who could be shown to be dishonest by a similar analysis. But it is a reason why those who support that side because they trust its proponents to tell them the truth should be at least somewhat less willing to do so.”

            Which I think is totally fair.

            The folks I see him calling into question are not climate scientists or the IPCC, but Cook, his website, and those who unquestioningly quote Cook.

          • HeelBearCub says:

            @eccdog:
            He is very careful in his dance with the edge of the motte. He does debate the issue for sport. But the most you can say about that statement is that he doesn’t preclude the possibility that AGW is occurring. What he doesn’t do is agree that it IS occurring. All of his statements push the logical inference that we should doubt the conclusions of the IPCC and other bodies similar to them and that we should presume it to be an open question whether AGW is in fact occurring.

            What I don’t think you can find is Friedman saying “I believe the consensus scientific position that AGW is occurring and is, in fact, anthropogenic, and will continue to accelerate commensurate with the amount of greenhouse gasses added to the atmosphere. This acceleration will continue well past the moment we cease to add greenhouse gasses to the atmosphere. I believe that climatologists as a whole are credible and that their scientific conclusions are broadly valid.”

          • PeterDonis says:

            What I don’t think you can find is Friedman saying “I believe the consensus scientific position that AGW is occurring and is, in fact, anthropogenic, and will continue to accelerate commensurate with the amount of greenhouse gasses added to the atmosphere. This acceleration will continue well past the moment we cease to add greenhouse gasses to the atmosphere. I believe that climatologists as a whole are credible and that their scientific conclusions are broadly valid.”

            You just conflated several very different claims, which is precisely the sort of thing that I often see David Friedman saying that people should not be doing.

            The first claim is “climate change is occurring and human activities are significantly contributing to it”. I think this claim is true. (I suspect Friedman does too, but I’ll let him give his own opinion if he wants.)

            The second claim is “the primary way that human activities contribute to climate change is through human GHG emissions”. I think this claim is debatable; if nothing else, it leaves out the obvious alternative factor of human land use and consequent alteration of the Earth’s average albedo, which I personally believe is a more significant contribution (for one thing, it’s been going on for millennia, whereas GHG emissions have only been significant for a century or so, depending on how you count).

            The third claim is “climatologists as a whole are credible and their scientific conclusions are broadly valid”. I think this claim is false. Climate science does not have good predictive power; the mismatch between climate models and actual data is now so large that even the IPCC has to admit it. Also, every IPCC report since AR1 has included a table that gives the level of scientific understanding of the key factors that might affect the climate. That table looks the same in AR5 as it did in AR1, indicating that climate scientists have not increased their understanding of any of those key factors in 25 years. That’s not what you would expect from a reliable scientific field with obvious public policy implications. And given the above, the fact that climate scientists continue to claim that they can make reasonably accurate predictions of future climate is evidence that they are not credible.

            And a fourth claim, which you don’t state but which you clearly imply (at least by the definition of “clearly imply” that you appear to be using in criticizing Friedman’s statements), is “human GHG emissions are a serious problem and we should take high-cost actions now to drastically reduce them”. I think this claim is not only false, but dangerously false, since it commits us to high-cost, near-term actions whose consequences we do not understand and which have a much too high probability of making us net worse off instead of better. Which is basically the position that I think Friedman often argues. And a key component of that argument is to point out obvious ways in which the third claim, above, is false, since the vast majority of people who believe the fourth claim believe it because climate scientists say so. So pointing out ways in which climate scientists are obviously not credible ought to decrease the general level of confidence in the fourth claim.

          • Paul Zrimsek says:

            That table looks the same in AR5 as it did in AR1, indicating that climate scientists have not increased their understanding of any of those key factors in 25 years.

            If that conclusion is true (I have my doubts), it’s actually bad news for us skeptics. The case for undertaking expensive mitigation starting right now is driven largely by high-cost, low-probability scenarios which we aren’t able to rule out entirely – but might be able to pretty soon, if our knowledge is advancing. If our knowledge isn’t advancing, that undermines the skeptical case for waiting to gather more information.

          • PeterDonis says:

            If that conclusion is true (I have my doubts)

            Read the reports and see.

            The case for undertaking expensive mitigation starting right now is driven largely by high-cost, low-probability scenarios which we aren’t able to rule out entirely

            This is one version of the argument, but by no means the only one. Nor do I think it’s the one that’s driving the political debate. The argument that is driving the political debate is basically “the science is settled, we’re doomed unless we take drastic action now.”

            Also, the argument based on high-cost, low-probability scenarios which we aren’t able to rule out entirely is a very weak one in any case. There are always high-cost, low-probability scenarios which we aren’t able to rule out entirely. And one can always pick numbers to make it look like the expected benefit of taking drastic action to mitigate such a scenario outweighs the cost. But the uncertainty in those numbers is so high that any such calculation is just hand-waving. Of course one should understand that such possibilities exist, but the best you can do about them is to increase the general robustness and resilience of our society. Which is precisely what taking high-cost drastic actions to mitigate some imaginable scenario about which we actually know very little prevents us from doing.

            If our knowledge isn’t advancing, that undermines the skeptical case for waiting to gather more information.

            It does no such thing. Observing that mainstream climate science is not advancing our knowledge is very different from saying it is impossible to advance our knowledge. The obvious policy prescription if mainstream climate science is not advancing our knowledge is to shift funding from mainstream climate science to other lines of research that might do better at advancing our knowledge. It’s not to just shrug our shoulders and say we might as well spend trillions of dollars on CO2 mitigation because mainstream climate science can’t do any better.

          • David Friedman says:

            I don’t care to go quote dredging,

            Prudent, since if you did you would discover that you made a demonstrably false statement about me. I don’t know if that concerns you—perhaps not.

            but one of your favorite things to do is: a) state that whether you trust someone on their statements should depend on whether you find claims that they have made to be true in the past, b) state that the 97% figure (or something else) is a lie, and c) say (or merely imply) that you leave it up to the readers to decide whether various climate scientists are trustworthy on the science.

            You ought to try reading for comprehension.

            I have argued, in some detail, that the second Cook article contains a lie–that the 97% figure in the first Cook article is for abstracts holding that humans are a cause of warming, while the second article claims it is for abstracts holding that humans are the main cause of warming. You could, if you wished, read the post and decide for yourself if it is correct.

            I have concluded that Cook is dishonest and ought not to be trusted. He is not, as it happens, a climate scientist, at least not unless I am, since he has an undergraduate degree in the same field in which I have a doctorate. He is a propagandist.

            I have a fairly detailed comparison of the predictions of the first few IPCC reports to what happened, from which I conclude that the first report badly overestimated future warming and the later reports tended to project somewhat high. I don’t suggest anywhere in it that warming is not happening. The final conclusion of the post is:

            Looking at a webbed graph of the data and fitting by eye, the slope of the line from 1910, when current warming seems to have started, to 1990, when the first IPCC report came out, is about .12 °C/decade. That gives a better prediction of what happened after 1990 than any of the IPCC reports.

            I have another post pointing out that not all AGW alarmists are the same, contrasting one who is pretty clearly a flake with another who seems like an intelligent person with views that disagree with mine.

            I have multiple posts trying to look at likely consequences of warming based mostly on IPCC projections, including one with the title “Climate Nuts vs the IPCC.”
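            (For readers who want the arithmetic behind the eyeball fit quoted above, here is a minimal sketch. Only the ~.12 °C/decade slope comes from the comment; the anomaly endpoints below are invented assumptions chosen to reproduce it.)

```python
# Back-of-the-envelope version of the trend-line argument quoted above.
# The ~0.12 degC/decade slope is from the comment; the anomaly values
# here are illustrative assumptions, not real data.
t_1910 = -0.40   # assumed temperature anomaly in 1910 (degC)
t_1990 = 0.56    # assumed temperature anomaly in 1990 (degC)

decades = (1990 - 1910) / 10                  # 8 decades
slope = (t_1990 - t_1910) / decades           # 0.96 / 8 = 0.12 degC/decade
print(f"fitted slope: {slope:.2f} degC/decade")

# Extrapolating that pre-1990 trend forward is then just a straight line:
anomaly_2020 = t_1990 + slope * 3             # three decades after 1990
print(f"extrapolated 2020 anomaly: {anomaly_2020:.2f} degC")
```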

          • David Friedman says:

            All of his statements push the logical inference that we should doubt the conclusions of the IPCC and other bodies similar to them and that we should presume it to be an open question whether AGW is in fact occurring.

            If all of my statements push that inference you should be able to find at least one to quote that does so. But checking to see whether what you say is true is apparently too much trouble.

            What I don’t think you can find is Friedman saying “I believe the consensus scientific position that AGW is occurring and is, in fact, anthropogenic, and will continue to accelerate commensurate with the amount of greenhouse gasses added to the atmosphere. This acceleration will continue well past the moment we cease to add greenhouse gasses to the atmosphere. I believe that climatologists as a whole are credible and that their scientific conclusions are broadly valid.”

            Is this sufficiently close for your purposes?

            My actual view was and is intermediate between the two ends of the dispute. I think it is reasonably clear that global temperatures have been trending up unusually fast for the past century or so, and the most plausible explanation I have seen is the effect of human production of carbon dioxide. On the other hand, I do not think there are good reasons to predict that warming on the scale suggested by the IPCC reports for the next century or so will have large net negative effects, a point I have discussed here in the past.

            That’s a little less unambiguous than what you want, since I try to be careful not to say things that might not be true, and climate is a complicated system. Climate sensitivity is very much an open question, and the physics, at least, are consistent with net negative feedback, although I don’t think it is likely, so the effect of CO2 could be small and something else could be responsible for the warming. I’m not sure if you realize that the IPCC only claims clear human responsibility for the warming since the mid-20th century, leaving the possibility that the first thirty years or so was due to something else.

          • HeelBearCub says:

            Prudent, since if you did you would discover that you made a demonstrably false statement about me.

            Ah, the umbrage gambit. You like this one.

            Here you are mischaracterizing what I said. You will point at something I put in quotes (which is clearly a paraphrase) as if it was intended to be read as a precise quote.

            No, I am merely characterizing statements you have made. This is of course opinion and can be argued over. Hell, we can argue about whether your characterization of my statement is correct. (We are arguing over it.)

            These are the kinds of debate games you play on these issues, constantly.

            I don’t particularly care to go through all your links, I will simply note that you link to your claims about Mann being a liar. Said claims were already covered above. I’ve already made my case as to why this is you harvesting crops in the bailey.

            Playing the “my honor has been impeached” card simply reveals the underlying weakness in your arguments, IMO.

          • PeterDonis says:

            Playing the “my honor has been impeached” card

            He’s not complaining that you have impeached his honor. He’s impeaching yours. And justifiably, as far as I can see.

          • Thegnskald says:

            This is popcorn-worthy. This is the first time I have seen somebody motte-and-bailey a motte-and-bailey accusation.

            Seriously. Certain participants should step back and ask themselves what position they are actually arguing for, and specifically, whether that position is true, kind, or necessary – or more specifically, whether that position can even meaningfully be said to be true or not, never mind kind or necessary. If you are being unkind and unnecessary, and then proceed to admit that your position can’t even be evaluated as true, since it is just opinion – hey, maybe consider the possibility that you are in the wrong, here.

          • the_the says:

            @ HBC 7:48am

            I really don’t want to have to maintain a file of Friedman quotes with links.

            But if you are going to make strong claims regarding a fellow commenter, then perhaps you should. It’s not like you need detailed records, but even one clear example would go a long way to establishing your credibility. Otherwise, this seems like a poor strategy for arguing, particularly with Friedman, who tends (in my experience) to make limited, defensible claims.

          • Paul Zrimsek says:

            @PeterDonis: I already know from reading the ARs that the IPCC evaluation of scientific certainty hasn’t changed much. I’m just not convinced that they’re right about that: I think it likelier that they were underestimating the uncertainties at the time of AR1, and I know of at least a few gaps in our knowledge which are filling in– for example, we’re starting to see reasonable physics-based estimates of sulfate cooling, and will not much longer have to assume that the effect must be large because that’s what it takes to make GCMs backtest successfully.

            By “The case for undertaking expensive mitigation” I mean of course the most convincing case. “The science is settled, we’re doomed unless we take drastic action now” is a weakman, however popular it may be politically; I’m no more interested in beating up on it than our opponents should be in beating up on “It’s all a socialist hoax.”

            It would be nice if the case for wait-and-see could be based on the possibility of somehow improving climate science. But unless you believe, as I do, that progress is still being made despite the pathological state of parts of the current establishment, what we’re going to need is a strong likelihood of improving it – preferably backed by a plan more definite than “spend more money some-unspecified-where else”. (Or we may need to not rely on wait-and-see, which after all is my pet argument – it doesn’t necessarily have to be yours.)

        • J Mann says:

          @HeelBearCub

          Even assuming arguendo that your quote is indisputably true and that DF hasn’t said it, I don’t think that’s motte and bailey. (Also, your statement has enough clauses that I’m betting it’s not indisputably true, but I don’t know).

          Let’s hypothetically assume that DF only publishes accurate information that’s on the skeptic side of the GW debate. That’s not motte and bailey, because he isn’t arguing for something he isn’t willing to defend.

          I do agree that if it were true that someone only published accurate information on one side of a debate, then it wouldn’t be good to use them as your sole or primary source on that question.

          • HeelBearCub says:

            Since, as a prominent supporter of the position that warming is primarily due to humans and a very serious threat, Cook is taken seriously and quoted by other supporters of that position, one should reduce one’s trust in those others as well. Either they too are dishonest or they are over willing to believe false claims that support their position.

            Again, what is the inference chain being promulgated here?

            If one puts tiny pebbles on one side of a balance, over and over, and none upon the other, it’s no good to point to the size of the pebbles as evidence you don’t intend to make the scale tip.

            Friedman is making an argument here, one he strengthens by questioning little pieces of various reports over and over. There is no good-faith effort to analyze the data and the conclusions as a whole and point out what is strong, what is less strong, and what is weaker. This is not good-faith debate about climate science.

            To put the shoe on the other foot, in what I hope will not be a digression: what I frequently see in commentary from left-leaning pundits about the Mueller probe and various pieces of information about it are statements somewhat like the following: “Certainly we don’t know yet the conclusion of the Mueller investigation, and it may ultimately turn out that Trump did not cooperate with the Russians in order to influence the election.” This in no way means that the rest of the article isn’t making the argument that we should see it as likely that Trump DID seek to influence the election. If I were to tell you that “they aren’t making an argument that we should think it likely he collaborated,” you would rightly laugh at me.

            ETA: and I would also note that, vis-a-vis my original statement, it would be nearly as ridiculous to think that you could simply listen to, say, Krugman as representative of economic thought on AGW. Although I don’t think Krugman has made AGW a hobby horse where he goes and looks for bad arguments against the theory on Facebook so he can argue against them, nonetheless, he would still be poor as a sole source simply because he is so ideologically fervent.

          • J Mann says:

            @HeelBearCub

            I’ve been googling DF and climate change, and my impression is (a) DF is extremely careful and accurate in his discussion of the evidence, to the point where I find his comments extremely helpful to build a model of some of the lukewarmer side of the debate; (b) DF does not publish much if anything making the warmist case. I think that’s reasonable, and in any case isn’t “motte and bailey.” Apologies for the pedantry.

            As to your question of whether DF has ever stated a belief in warming directly, how’s this?

            But my best guess, from watching the debate, is that the first half of the argument is correct, that global climate is warming and that human action is at least an important part of the cause.

          • HeelBearCub says:

            I already acknowledged that was his position of retreat to the motte:

            climate change is most likely something we can adapt to at a low enough cost

            You finding a “lukewarm” endorsement of the idea that the climate is warming and that human activity is some significant contributor is completely consistent with what I originally said. I know he says that. I said he says that.

            But you haven’t actually directly addressed my question about inference chains. Argument via implication is still argument.

            And let’s taboo “motte and bailey” for a moment. Is he making statements in the manner of someone interested in making people generally aware of the state of the science? Or is he attempting to simply win a debate?

          • J Mann says:

            @HeelBearCub

            Thanks for engaging on this, although I will admit that the proper use of “motte and bailey” was what really interested me. 🙁

            I think that DF reads like a proponent of a specific position – that at least the moderate global warming hypothesis seems likely but that the situation is complicated enough that he doesn’t have a high degree of certainty, and that based on the IPCC predictions, extraordinary remedies are not justified. (This fits within the positions that are often referred to as “lukewarmist”).

            I think he’s very careful to identify what he knows and doesn’t know, and I generally think he’s a reliable source for his position. (In other words, if DF says a fact, it’s likely to be well supported). I’d want to read some similarly reliable statements of alternative positions to see if there is another side, but I don’t have a problem with the way DF lays out his ideas. I particularly don’t think he’s implying more than he’s saying.

          • David Friedman says:

            HBC writes:

            You finding a “lukewarm” endorsement of the idea that the climate is warming and that human activity is some significant contributor is completely consistent with what I originally said. I know he says that. I said he says that.

            On the contrary, you said:

            But the most you can say about that statement is that he doesn’t preclude the possibility that AGW is occurring.

            You see no difference between “Saying that X is probably true” and “not denying the possibility that X might be true”?

          • HeelBearCub says:

            David, you really outdo yourself.

            You managed to put two unrelated statements next to each other, as these two quotes are referencing separate statements, not the same one.

            Specifically, the first quote is referencing my original statement about the motte.

            The second quote is referring to the section that contains:

            The fact that one prominent supporter of a position is dishonest does not prove that the position is wrong. For all I know, there may be people on the other side who could be shown to be dishonest by a similar analysis.

            In the post in question you are in the bailey. At other times you are in the motte.

            Again, my original statement of the motte was:

            climate change is most likely something we can adapt to at a low enough cost

            IOW, I pre-stated that you will agree that climate change is occurring. This is implied by that (very loose) framing.

            @J Mann:
            Looks like we are back to the discussion you wanted to have.

          • David Friedman says:

            But you haven’t actually directly addressed my question about inference chains. Argument via implication is still argument.

            Indeed it is. And the conclusion I am implying is that if you read a claim on skepticalscience.com, which Cook runs, you should not believe it unless you have checked the references and arguments with reasonable care. If you see a claim in the popular literature about climate change you should be similarly skeptical.

            The implication of what I have written about the IPCC is that it is attempting to offer answers to a hard question where calculations involve a lot of judgement calls, that its conclusions are probably biased in the direction of overestimating the rate of warming but are nonetheless pretty much the best we have, hence I am willing to draw conclusions about the consequences of warming based on their estimates.

            Note that you don’t find him endorsing a belief that they are correct in predicting a 1 meter rise

            I can’t endorse the belief that they are correct in predicting a 1 meter rise because they don’t predict a 1 meter rise. 1 meter is about the high end of the range of outcomes by 2100 on the high emissions scenario.

            I think the IPCC is if anything likely to overestimate effects, so 1 meter is a reasonable upper bound for how much sea level rise might plausibly occur by 2100.

          • David Friedman says:

            In the post in question you are in the bailey. At other times you are in the motte.

            Clearly I should have also quoted the paragraph after the one I did quote:

            What he doesn’t do is agree that it IS occurring. All of his statements push the logical inference that we should doubt the conclusions of the IPCC and other bodies similar to them and that we should presume it to be an open question whether AGW is in fact occurring.

            “All of his statements” presumably covers both motte and bailey. And I took “what he doesn’t do” not as “what he doesn’t do in the passage just quoted” but “what he doesn’t do.”

            So far as warming is concerned, I have repeatedly agreed that it is occurring. I don’t claim with certainty that the main cause is human action because I don’t know it is, although it seems likely. I do not believe that is the conclusion anyone would reach from what you wrote.

          • J Mann says:

            @HeelBearCub and @DavidFriedman

            I hope this discussion doesn’t feel confrontational. I enjoy reading both your posts, so if this gets uncomfortable for anybody, let me know and we can drop it.

            @HBC – this is actually why I think motte and bailey was unhelpful. I think David clearly expresses a specific opinion. If you think he’s wrong because he understates the risks or overstates the costs or because you think some other writer makes a great case for a position that’s incompatible with his, then just say that, and we can discuss it, and we can all get smarter.

            Within our geographical metaphor, I don’t think DF ever retreats to the motte. I think he’s standing in his bailey, and you are criticizing him for not stepping out of the bailey to stand in the curtilage.* Which is perfectly reasonable, but then it would be helpful to provide an argument we can evaluate. It’s not dishonest** for DF to take a consistent lukewarmist position – at most, it’s mistaken.

            * Not quite the right word – if anyone knows the name for land outside a castle wall, please let me know.

            ** Again, not exactly the right word. “Illegitimate?”

  16. Lapsed Pacifist says:

    What is the best demonstration of rationality? I would love to see some Rationalist Masters demonstrate their own prowess before accepting that there is value in studying under them.

    I actually do study martial arts. There is a kung fu school in my city, where I went and found that none of the students could throw me. I discovered that their teacher did not believe in competition, and that his students were not allowed to resist the techniques in training. I asked to fight or spar with the teacher, and he demurred.

    Can anyone show me a concrete instance of ‘Rationalist’ technique being effective, such that I can show a third person and have them understand it? If not, you may want to consider that your kung fu is not good enough to be taught, and that teaching it may not be responsible.

    • andrewflicker says:

      I believe Tetlock’s superforecasting tournaments are basically this, LP, and in general I think well-calibrated predictions over diverse domains where the “predictor” lacks significant domain expertise are a good signifier of rationality skills.
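      (For the curious: the standard way such tournaments score forecasters is a proper scoring rule such as the Brier score. A minimal sketch, with made-up forecasts purely for illustration:)

```python
# Brier score: mean squared error between stated probabilities and
# what actually happened (1 = yes, 0 = no). Lower is better; always
# answering 50% earns exactly 0.25, so beating that over many diverse
# questions is evidence of real calibration skill.

def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: probabilities assigned to five yes/no questions.
probs = [0.9, 0.7, 0.2, 0.6, 0.95]
truth = [1, 1, 0, 0, 1]

print(brier_score(probs, truth))       # ~0.10, better than chance
print(brier_score([0.5] * 5, truth))   # 0.25, the coin-flip baseline
```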

    • moridinamael says:

      I have observed that “rationality techniques that work” often cease to be called rationality techniques.

      Is staying hydrated and getting enough sleep a rationality technique? Well, it’s certainly the kind of thing that any Rationalist would recommend, but Rationalists hardly have a lock on this kind of basic lifestyle advice.

      Another thing Rationalists recommend is meditation. But they didn’t invent it, they just identified it as empirically useful.

      When you think about epistemic rationality, well, I think people with a bit of practice in this area do outperform the norm, but it’s very difficult to screen that effect off from high intelligence. And it’s even more difficult to filter out the inherent noise related to the fact that any problem domain which will qualitatively reveal the benefit of high skill at epistemic rationality will by design be an unusually confusing and perhaps poorly defined problem domain.

      It’s all a bit like if you invented a martial art called “do what works fu”, and you steal all the most effective strikes and grapples and stances from every other martial art. This is indeed what “MMA” is – but MMA doesn’t take credit for these techniques, MMA isn’t a “thing” in the same way jiu-jitsu is, it is a meta-process that takes all available techniques as inputs and yields functional combinations of those techniques as output. MMA tournaments serve as the force which selects on the techniques.

      • Bugmaster says:

        I understand that there are lots of complexities involved. However, at the end of the day, you can always call a big tournament on a secluded island somewhere, and observe first-hand whose kung-fu and/or meta-process is strongest. This doesn’t seem to work for rationality techniques. Sure, some people are very smart, and some of them achieve great things; but their probability of success does not seem to be related to their practice of rationality techniques. In fact, Scott himself even states that he just goes on intuition most of the time.

        I can’t speak for Lapsed Pacifist, but personally I’d like to see some actual evidence that disproves what I said above; I have a feeling he’s asking for the same thing.

        By analogy, I want to see some evidence that you won because your fighting style was superior, and not just because you’re 7 feet tall and naturally muscular, fighting a couch potato like myself. In that situation, it doesn’t matter what style you use, you’re going to win pretty much regardless.

        • Luke the CIA Stooge says:

          I think the test for rationalist master would be the same as the test for kung fu master.
          You are dropped on a vast deserted island with survival equipment.
          Slightly less deadly battle royale rules: Outwit your opponents to kill them or force their surrender.
          (oh and there’s a ton of money as a prize to compensate for the risk (a rational person needs more than glory))
          The winner is the most rational, the survivors are the second most rational, and the dead are the irrational. (repeat until we accept a God emperor)

          If rationality is just systematic winning, then the test works.
          If however we don’t expect rationality to be the deciding factor, then that kinda discredits rationality.

          Personally I expect the test to favor ruthlessness, immorality, and nihilism. So I think it’s a good test for rationality ;b

          • Luke the CIA Stooge says:

            @zqed

            I think your interpretation gets the formal implications of decision theory right, like that is what it does and does not imply, but within the broader rationality community I think the broader definition of rationality is the one that’s used.

            Like Yudkowsky’s final exam in HPMOR was literally “this is an impossibly one sided fight, how can rational Harry win against these odds?”, and the “rational” answer was just the most plausible way he could still win the fight.

            Like my glib joke “if rationality is just systematic winning then we can just decide who’s rational with a formal death battle” is actually what’s implied by how the rationalist community, and Yudkowsky in particular, envision rationality: a fully general correct-decision maximiser.

            And to be fair to them I don’t see where they’re wrong to define rationality that way, and I don’t think my reductio discredits them. Like, the kind of rationality they want is the kind that would maximize your chances in a death battle, or a war, or a career, or life.

    • sty_silver says:

      It’s an interesting question, but I’m not sure what proof of effectiveness you have in mind. Suppose rationalism were as effective as people claim; what would a demonstration look like?

      In terms of the single most impressive thing rationalists have done, I would say these two papers by MIRI, but you could argue they don’t count as rationality outputs (also you’d have to read them to judge their quality, and then I suppose it might still be debatable). The next most impressive thing would be convergence on truth; I think this is demonstrated by this survey and by the focus on AI, but this won’t impress you if you disagree with the conclusions. I think the sequences are impressive and useful, but this won’t impress you if you disagree or don’t want to read them. The focus on signaling is impressive and incredibly useful in daily life, but this won’t impress you if you don’t agree. That’s true for all epistemic results.

      CFAR teaches techniques on how to better proceed in disagreements, but what would a demonstration there look like? People telling you they changed their minds? Or statistics?

      Generally, studying rationalism is supposed to make you better at 1. achieving your goals, 2. figuring out what’s true. I don’t know how you would measure either.

      • Bugmaster says:

        Suppose rationalism were as effective as people claim; what would a demonstration look like?

        I don’t know, do rationalists ever make any specific claims? Martial artists do: “I can kick your ass”, or, more formally, “I can win this tournament by points or knockout”.

        Generally, studying rationalism is supposed to make you better at 1. achieving your goals, 2. figuring out what’s true.

        As I’d mentioned above, money features prominently as an instrumental sub-goal in most goals; so if rationalists were significantly richer than the general population; and if their wealth could be directly linked to their decision-making processes (as opposed to, say, inheritance); then I’d take it as evidence for their position.

        • thevoiceofthevoid says:

          Many rationalists have an annoying tendency to donate loads of their time, money, and/or focus to weird things like MIRI or wild-animal suffering research or bed nets, which might complicate that metric.
          Though you might still find a good number of them making pretty good money as software engineers, and it’s your choice whether to count that as a rationality win.

          • LadyJane says:

            There’s a very large number of non-rationalists making “pretty good money” as software engineers (and plenty of people using their intelligence, education, and skill sets to make much greater amounts of money in other fields) so I wouldn’t consider that a “rationality win” in any meaningful way.

        • sty_silver says:

          As I’d mentioned above, money features prominently as an instrumental sub-goal in most goals; so if rationalists were significantly richer than the general population; and if their wealth could be directly linked to their decision-making processes (as opposed to, say, inheritance); then I’d take it as evidence for their position.

          That really would be zero evidence for anything. If you take a look at the survey I linked, you see that the population is highly non-representative of average USA citizens. You’d have to control for a thousand things, and then there’s the fact that maximizing personal wealth is not something you’re actually supposed to do. In many cases, I’d consider a luxurious lifestyle as evidence against someone being a good rationalist.

          Many rationalists make lots of specific claims, like “god doesn’t exist” or “Many worlds is true” or “the existential risk of AI is really high” or “you should invest in bitcoin”.

          • Bugmaster says:

            In many cases, I’d consider a luxurious lifestyle as evidence against someone being a good rationalist.

            I was talking purely in terms of income, not how the person decides to spend it.

            Many rationalists make lots of specific claims, like “god doesn’t exist” or “Many worlds is true” or “the existential risk of AI is really high” or “you should invest in bitcoin”.

            Those are all factual claims, and not claims about their capabilities (with possible exception of the last one). Martial artists can also claim that “god doesn’t exist”, but that tells us nothing about their fighting skills, just as it tells us nothing about the rationalists’ intellectual capabilities. On the other hand, when a martial artist says “my kung-fu is best, and it allows me to defeat up to 10 opponents at a time”, this is a claim about his fighting prowess that is reasonably specific and readily verifiable.

    • e.samedi says:

      “What is the best demonstration of rationality?” First, reject the notion that the pursuit of rationality is something new. Off the top of my head, the best demonstrations I can think of are: Plato’s Socratic dialogues, the medieval quaestio format (see the form of argumentation used by Aquinas in the Summa Theologica), and the Encyclopedia of Diderot. Logic, grammar, and rhetoric remain the foundation and sine qua non of “rationalist” technique.

      We are fortunate that we have new tools to layer on this foundation, the most important of which in my view, are ones mentioned in “The Martial Art of Rationality”: probability and statistics, and cognitive psychology. The latter is especially interesting because it informs us of the limitations of the tool we use for reasoning, i.e. the human mind.

    • Wrong Species says:

      If you change your mind on some subject on which you are heavily biased, and it’s not because of peer pressure, then you are in the top 1% of rational people. Even if you are wrong, it’s a strong signal about your ability to overcome bias.

      • Aapje says:

        There are many ways to change your mind:
        – Decide that the bias is wrong, for example: due to new evidence, I no longer believe that snails are being discriminated against in our society
        – Give up a claim that you used to make in favor of your bias for a different claim that also favors it, like: this IAT test administered to snails seems too unreliable, but police statistics show that snail stomping happens a lot.
        – Adopt a claim that goes against your bias, while still believing that the balance of evidence favors it.
        – Weaken a claim to some extent.

        Some of these are much stronger than others and most can also differ in magnitude. I doubt that only 1% of people are capable of the weaker ones.

  17. rahien.din says:

    [Note: this may be overlong, but I have been thinking about this a lot lately. Forgive it or hide it.]

    If this blog still has value to the rationalist project, it’s as a dojo where we do this a couple of times a week and absorb the relevant results.

    What distinguishes a dojo from a street fight is not expertise, not fighting spirit, but a particular kind of civility.

    Sparring is at its most effective when it is full-speed, when you aren’t thinking about how hard (not) to hit. Sparring partners take your force so you can learn to hit, and you do the same for them. You risk injury for each other, and that injury is neither intentional nor unintentional. The civility underlying sparring is “Even if I/you do things that could injure you/me so that we can learn to fight, we know that neither of us enjoys the thought of the other being injured.”

    Think of Lawrence Taylor’s (#56) reaction when he notices that he broke Joe Theismann’s leg. That’s a particular kind of civility. [Ed : Don’t watch the whole thing unless you want to see Joe Theismann’s tibia snap.] Or, think of “We need emotional content… not anger!”

    We’ve all been part of internet communities where people were genuinely trying to damage each other. I have, both as a mouth-agape spectator, and as one of the people inflicting damage. I learned an awful lot from those interactions because few things will hone your ideas better than a full-speed opponent – whether that’s in the dojo or in the street. But, I’m not part of those communities, anymore. I burnt the house down around me. Can’t learn anything more.

    SSC is different. This place has that warrior civility. We’re not exactly looking out for one another… but the goal isn’t to murder each other, either.

    I have sometimes wondered why. Sure, Scott excises the vicious and the saboteurs, but often he doesn’t even have to. When the newbies get out of hand, it’s usually some of the other posters admonishing them. I myself have compulsively defended this civil atmosphere – to a weird degree. Sure, I really enjoy how we can have controversial discussions here, but the degree of compulsion I felt was not explained by my intellectual enjoyment here.

    This civility is what’s sometimes missing from the rationalsphere in general. Rationality, at its very heart, is incivility.

    On one hand, that’s by design. Immobility is the chief resource that an algorithm brings to the table – an algorithm doesn’t and shouldn’t care about when it is wrong, because it assumes that its wrongness occurs at the optimal rate. It’s drilling down to the inviolable and indisputable and non-conscious source code of deciding, and standing immovable even when the world cracks around it.

    On the other hand, you have The Prophet Eliezer intellectually ripping people in half at cocktail parties. Yes, you sure can “have some fun with people” that way. But not forever. And if you excise people in that manner (if, like me, you burn a house down around you without realizing it) you have no recourse when your rationality slides into the various analogues of scientific forestry.

    This is not unique to the rationalsphere – the most visible fiefdom of incivility is fundamentalist religion. Both use the same currency. For instance, that’s why Sam Harris’ thought processes resemble those of his opponents. He reads the Koran and draws the same uncivil conclusions as the Islamists. Harris is better because he can see, at least, that we should not abide by those conclusions, but he’s still shackled by them and to them. From the other direction, consider that the Parable of the Good Samaritan is not about hypocrisy, but actually about abandoning formalism.

    It is for this that the Book of Cold Rain says one must never take the shortest path between two points.

    Within the rationalsphere, there is a diverse, blind-men-vs-elephant nomenclature for this kind of civility. “Sportsmanship.” “Wisdom.” “Schelling point.” “Epistemic humility.” “Chesterton fence.” “Clarity didn’t work, trying mysterianism.” But these are just magic spells we cast on ourselves.

    The whole point is, if we want to conduct ourselves in the best manner possible for conscious beings, the unblinking eye of rationality is both necessary and insufficient. Sometimes, the algorithm doesn’t work. Sometimes, you build a system that inappropriately increases uncertainty, or worse, inappropriately reduces certainty. Conversely, sometimes instead, the way out is through. How do you know when to abandon that system, or when to cleave to it?

    A priori, you can tell yourself “Okay, sure, it’s just a formal system, and boy don’t we know about formal systems,” or, “Oh yeah, like how our best AI’s operate by maximizing their future options,” or, “Intelligence is knowing a tomato is a fruit, wisdom is knowing that ketchup isn’t a smoothie.” But in the heat of the moment, it’s not something you can rationally decide. You have to be able to switch systems when it’s appropriate to do so. The thing that made Lawrence Taylor the Platonic ideal of an outside linebacker is not the thing that moved in him upon seeing Joe Theismann’s ruined leg.

    The incantation is “I have recognized an opportunity to apply one of {Schelling point mechanism, epistemic humility mechanism, Chesterton fence mechanism, …},” but the actual effect is “I am going to cast this illusion on myself, which will allow me to switch systems without entirely abandoning the idea of rationality.”

    It’s a mild and adaptive form of self-hypnosis.

    This is why AI is not and will never be conscious, any more than a stone. Why it is ever the golem in silica. Why it is scary, but also its vulnerability to conscious beings. “I’ll just pull the plug” means “I possess a counterspell, whereby I may abandon a deranged formalism.” The people who are worried about AI safety are actually worried that they don’t have a general counterspell for a deranged formalism.

    (Probably they should be, because they tend to rip people in half at cocktail parties.)

    I’m not even talking metis-vs-Lysenko. Metis is also insufficient and blind. For every miscarried forest of sterile evenly-spaced fir trees swept clean by a fire, there is a vibrant forest of microscopic transistors on an integrated circuit ablaze with calculation. We have to be capable of abandoning metis, and just as you can’t rational your way out of rationality, you can’t metis your way out of metis.

    We think that what we do here is training ourselves in applied rationality, or, solving problems, or, providing checks and balances. We’re not. (Moreover, the rest of the internet does that better – nothing prunes one’s fighting system more effectively than a sucker punch or a Glasgow smile.) We might even be consciously grateful for the civility here – as refreshing as cool petrichor in a green wood after a lightning storm – and how it permits us to go full-speed and yet remain a community. That’s not exactly correct, either.

    We are here not because the civility allows us to work through hard problems. We are here because [working through hard problems with civility] is the paired kata.

    It has to be a kata, and it has to be a paired kata.

    The skill that paired kata trains – the very attribute that everyone is here to acquire – is system switching. It’s counterspells. It’s knowing how to flinch when you can’t say exactly why you flinched. It’s being able to say “When there is an opportunity, ‘I’ do not change, ‘It’ changes all by itself.”

    • J Mann says:

      That judo bit in the sequences makes me really mad. I don’t have much opinion on whether EY is leading a sex cult,* but that sequence leads me to believe that he’s either a gnomic instructor out of zen parables who likes slapping people to supposedly induce enlightenment, or something of a myopic jerk.

      • Randy M says:

        I’m waiting on your footnote

        • J Mann says:

          Sorry – I’m near the bottom on editing quality, and will work to improve!

          Originally, I had written the following:

          * Specifically, I think it’s unlikely as I understand the terms, but with low confidence.

          But then I decided that “don’t have much opinion” captured that sufficiently and forgot to delete the *.

        • Nick says:

          Not J Mann, but my opinion these days on whether EY is leading a sex cult is “Folks saying it need to put up or shut up already.”

          • Randy M says:

            Eh, I don’t really have an opinion about that, but dangling asterisks leave me in suspense.

          • Nick says:

            Are you the guy who always has to close other folks’ parentheticals in a chat? Sometimes I leave them open just for kicks. 😀

          • carvenvisage says:

            @Nick I’m not sure what other stuff (if any) prompts people to say that, but his old OKCupid profile is probably still up somewhere.

          • Aapje says:

            Here it is.

            Fun to read/cringe at.

          • J Mann says:

            In defense of EY’s profile, I have to say that respondents know exactly what they are getting, which seems pretty rational.

          • Viliam says:

            Seems like these days all you need to become a sex cult leader is:

            1) to be polyamorous, and

            2) to announce it on OkCupid.

            I wonder why more guys don’t do this. Is this secret PUA technique patented?

          • John Schilling says:

            In order to be recognized as a sex cult leader, you first have to be recognized as a cult leader generally. That, plus overt polyamory, will likely result in such a reputation.

            EY’s reputation as a cult leader generally hinges on his:

            A: having developed or claimed to develop a new and superior way of understanding the universe
            B: having promoted this as a way to achieve a significant level of self-improvement
            C: having made profound eschatological claims on the basis of this understanding
            D: having written scripture to teach all of this, with extensive jargon impenetrable to outsiders
            E: having collected followers in communal living arrangements to receive and evangelize his teachings
            F: having directed his followers to donate significant sums of money to particular causes, including a research institute he directs and which pays his salary.

            This does at least vaguely match most not-explicitly-religious definitions of a “cult”, specifically what Bruce Campbell would call a service-oriented instrumental cult, but a vague pattern-match probably doesn’t merit using a term with the pejorative implications of “cult”.

      • Bugmaster says:

        That entire bit belongs on /r/thathappened ; all it’s missing is the standard “and then everyone clapped” coda. Don’t get me wrong, I understand that it’s supposed to be allegorical, but still — even the New Testament is written in a less arrogant style…

      • thevoiceofthevoid says:

        A classic example of how being clever does not protect oneself from being an @$$hole.

    • Scott Alexander says:

      Frick, don’t quote the Book of Cold Rain at me, I forgot I disclosed that part of my inner mythology somewhere and you freaked me out.

    • rlms says:

      I fervently hope you are an outlier in including Sam Harris in the rationalsphere.

      • rahien.din says:

        Intentionally left ambiguous as to whether he is fundamentalist, rationalist, or both.

      • christhenottopher says:

        A spectre is haunting the rationalsphere – the spectre of Sam Harris.

        But actually I’ve been seeing Harris at the periphery of every online community I hang around, and I’m never impressed by him. I kind of wonder what it says about a person when the groups they keep joining all start referencing the same not-really-that-great figure?

    • Bugmaster says:

      On a side note, “don’t think, feel!” seems like great advice for martial arts, but kind of exactly the opposite of what rationality is supposed to be…

      • Toby Bartels says:

        You want to practise to the point that you can feel rather than think (because thinking is too slow). Reaching out with your feelings on your very first day is only for movies.

  18. Cerastes says:

    I live in fear of someone asking something like “So, since all the prominent scientists were wrong about social priming, isn’t it plausible that all the prominent scientists are wrong about homeopathy?” I can come up with some reasons this isn’t the right way to look at things, but my real answer would have to sound more like “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”

    I may be biased because of my field (which could loosely be called experimental animal physiology), but one of my major criteria for “should I believe this study” (among many) is a simple question: “What is the mechanism?” If a study (or several) says that X is linked to Y in people, and we already know that there’s a hormone or protein or system which is capable of both X and Y, I’m way, way more likely to believe it than if the mechanism is speculative, or worse, unknown.

    If someone points to a study and says “this drug interacts with leptin receptors etc. to reduce subjects’ calorie intake and leads to weight loss”, I’m pretty good with accepting that at face value given what we specifically know about how leptin works. If another study says “People who eat this herb lose weight because of appetite suppression,” I’ll be skeptical but open to the possibility, because plants are crazy chemical factories and it’s entirely possible that one happens to produce something which interacts with the various appetite hormones in some useful way. But if a study says “Acupuncture results in appetite suppression and weight loss”, I will be immensely skeptical, because no matter what statistical method you used, there’s quite simply no known, well-established mechanism behind accupunture, and without that, it’s way more likely to just be statistical errors or somesuch.

    I’m not closed to the reality of empirical phenomena for which we don’t know the mechanism, but demonstrating their reality requires much more, and more definitive, evidence than something which follows well from known principles. It’s a bit like the “extraordinary claims require extraordinary evidence” concept, but applied even to mild claims – the magnitude of evidence required to convince me is inversely proportional to the plausibility of the proposed mechanism.
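    In Bayesian terms, that last sentence says: for a fixed target level of posterior confidence, the strength of evidence (Bayes factor) you need is inversely proportional to the prior odds the proposed mechanism earns the claim. A minimal sketch of that reading, with purely illustrative priors:

```python
# A minimal sketch of "evidence required scales inversely with the
# plausibility of the mechanism". The priors are illustrative guesses,
# not measured quantities.

def required_bayes_factor(prior, target_posterior=0.95):
    """Bayes factor needed to raise `prior` to `target_posterior`.

    posterior_odds = prior_odds * bayes_factor, so
    bayes_factor = target_odds / prior_odds.
    """
    prior_odds = prior / (1 - prior)
    target_odds = target_posterior / (1 - target_posterior)
    return target_odds / prior_odds

for claim, prior in [("known leptin pathway", 0.30),
                     ("unknown herb compound", 0.05),
                     ("no known mechanism", 0.001)]:
    print(f"{claim}: Bayes factor needed ≈ "
          f"{required_bayes_factor(prior):,.0f}")
```

    On these made-up priors, the no-mechanism claim needs evidence roughly four hundred times stronger than the known-pathway claim to reach the same confidence.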

    • Scott Alexander says:

      And I may be biased, because in my field nobody has any idea what causes anything, and in fact often reasons back from “Well, this intervention worked, so maybe the underlying territory looks like this…”

      But biases aside, I don’t think mechanism is a good way to solve this problem. I can come up with a plausible biological mechanism for why vaccines cause autism (which they don’t), but not for why oily fish affects health outcomes much more than fish oil (which it does).

      • Deiseach says:

        why oily fish affects health outcomes much more than fish oil (which it does)

        Hmm, that’s interesting.

        (1) Could it be that people who make a decision to eat oily fish also change other aspects of their diet as well, whereas people who take fish oil supplements may have less healthy diets and just toss down a few capsules in the hopes it will do something for them?

        (2) Argh, where was this evidence *coughcough* years ago when my mother was dosing us with cod liver oil? Yes, it does taste as horrible out of the bottle as you’ve heard.

        (3) The Vatican II Church should never have made fish on Friday voluntary instead of retaining the compulsory nature of the fast 🙂

      • Bugmaster says:

        because in my field nobody has any idea what causes anything

        Have you considered that perhaps your field is not as rational as you thought it might be? 🙂

        • Aapje says:

          @Bugmaster

          Dealing with black boxes is not irrational, but it tends to require a lot of trial and error.

          • Bugmaster says:

            All of nature is a black box, though…

          • Aapje says:

            No, there are plenty of things in nature where we know the detailed mechanisms at work and can predict things based on that actual knowledge.

    • Doctor Mist says:

      Your varied spelling of “acupuncture” spurred me to wonder why it has only one “c”. Though I knew the spelling, I think subconsciously I’d always assumed the meaning was “accurate”, “precise” — getting the needle in exactly the correct spot. But no, “acu” is from Latin “with a needle”.

      (Now I’m wondering why Latin had such a short particle for “with a needle”.)

  19. helloo says:

    I think this is a rather… missed? metaphor.

    One of the biggest reasons why martial artists practice so much is that there are quite a few places where you should do something AGAINST your “natural instincts”.

    A lot of the repetition and focus on form is to cement the response and prevent the instinct from happening.
    This is more true for some than others (I’ve been told that fencing is often quite unnatural in how you’re supposed to respond).

    It is possible to link it to rationality, but not in the way you’re using it and almost certainly not by intuition – at least if said intuition hasn’t changed based on the training.
    At best this is training mental skills by going through battles – mock battles perhaps, but battles nonetheless.
    Without really shaping the style and practicing the format – and given that you’ve admitted your improved intuition is not really transferable – this really doesn’t fit as a martial arts class metaphor.

    • Spookykou says:

      Isn’t understanding and working around our biases very similar to the idea of training yourself against your “natural instincts”? Our biases are our “natural intuitions” when confronted with new information or an argument, instead of an attacker. I believe when Scott is talking about his intuitions he is talking about the intuitions that he has worked up through rationalist practice, not his “natural intuitions”.

      After years of looking into this kind of thing

      • helloo says:

        I saw that as more experience than training or drills.

        A veteran might be able to intuitively know what to do or what are some tips and tricks, but those generally aren’t considered to be benefits from training.

        Might those skills and knowledge be collected and then formalized into a martial art?
        Yes, but that’s not a step which is described here.

    • Confusion says:

      I’d rather say that your instincts often aren’t very helpful. A lot of techniques are subtle: they work reasonably well when executed mediocrely and seem to work as well as they should when executed well, but then your teacher gives a few minor pointers and you feel how much more is possible. Even after 1500 hours of training (10 years, 3 hours per week) you’re still lacking the instincts to come up with those corrections yourself; to feel how you could improve your execution. And it takes repeating the technique many times to make those minor corrections a reliable part of your execution.

    • Scott Alexander says:

      I think the instinct one struggles against here is the instinct to assume you already know everything worth knowing, you’re definitely right, your opponents have nothing to teach you, there’s no reason you should change your mind, etc.

      But there’s also a part of martial arts which isn’t *quite* about overcoming your instincts. How do you know which of two different kinds of kicks to use at a specific point in one fight? I think that’s more about developing instincts than overcoming existing ones, and corresponds to questions like “is this one of those situations where I should trust the experts, or not?”

      • helloo says:

        If you need to decide between options, that tends to imply consciousness and thus some type of decision making. That can be learned or taught (always use A except on Mondays), but isn’t really particular to a martial art.

        In fact, that’s rather AGAINST the metaphor of martial arts.

        Martial artists often do not have time to make these decisions. They NEED that split second to block or dodge or whatever.
        You train and drill to use those moves instinctively (or to gain teamwork/be able to do something without question – see military drills). Conscious deliberation goes against this.

        It is not like there aren’t instincts that are against rationality that should be “trained away”. Besides the ones you listed, some others include:
        Not considering multiple sources / one’s own bias
        Focusing on what makes “more sense” over what might be more accurate (and not looking at data to check)
        Assuming the opposing side is an attacker/wrong and trying to prove/defend accordingly
        Not understanding what your/their assumptions/givens/postulates are.

        But these aren’t trained away just by having arguments. Otherwise, the way Facebook and other social media have “encouraged” and increased arguments would have considerably reduced these types of behavior by now.

        You might be putting together how this blog and arguments and such are in fact being used to suppress these instincts, but that’s not really what you’ve written in this post besides the single mention of theories. It’s not like I’m doubting that your rationality has improved, just that the metaphor doesn’t fit (though it might be interesting to think of ways to make it fit better).

      • PeterDonis says:

        “is this one of those situations where I should trust the experts, or not?”

        I’m not sure “whether or not to trust the experts” is the right way to frame this question. Personally, I think that if you care enough about the answer to a given question to even wonder whether you should trust the experts, you shouldn’t trust them; you should learn enough to make your own independent assessment. So the only situation where trusting the experts would even be an option would be where you don’t care about the answer anyway.

        To go with the martial arts analogy some more: part of martial arts training is learning to use your own unique attributes. The right way for you to act in a particular fight might not be quite the same as anyone else’s. Sure, there are general techniques and principles that everybody should learn, but in the end it’s you in the fight, not those who taught you.

        Similarly, part of rationality is learning to use your own unique set of attributes, and those include your goals and preferences. Your own goals and preferences won’t be exactly the same as anyone else’s. Rationality does not consist in finding the One Right Answer, because in most of the interesting cases, there isn’t one.

  20. RC-cola-and-a-moon-pie says:

    Part of the problem, of course, is that for this sort of thing to be really effective as practice, there has to be a way to know at the end of the exercise whether you ended up being right or wrong, which is impossible on controversial issues by the nature of the case. I think the broader idea of trying to create a science or art or craft of applied rationality is a fool’s errand. Maybe part of it is going to break down along the lines of those of us who think Yudkowsky’s essays in this vein are interesting and useful and those who do not (I’m personally in the latter group). I love this web site but not for any contribution to a “rationalist community.”

    • David Friedman says:

      Part of the problem, of course, is that for this sort of thing to be really effective as practice, there has to be a way to know at the end of the exercise whether you ended up being right or wrong, which is impossible on controversial issues by the nature of the case.

      Much of the time, the question is not whether your conclusion is correct but whether your argument for it is, and you can indeed discover that it is not. I’m pretty sure that one or more of my climate exchanges here resulted in someone concluding that the particular argument he was using was not correct, although not necessarily that the conclusion was not.

      • Kelley Meck says:

        Right. Although my object-level view of climate change hasn’t changed, or not much, my view of how difficult it is to make persuasive arguments on the subject very much has changed.

        • David Friedman says:

          Did that affect your confidence in your object-level view? Given that your best estimate of the effects of AGW is still about the same, is your subjective probability that the estimate is correct still the same?

          If not, has the probability of deviations from that estimate changed in both directions–have you concluded both that it is more likely than you previously thought that you had overestimated the negative consequences and that you had underestimated them?

          • Kelley Meck says:

            Hm.

            By far the biggest thing I gained was an appreciation of how different minds have different “stopping points”–sort of like stop codons in DNA/RNA.

            Here’s an anecdote to point at what I mean about ‘stopping points’… as a college student, I once wrote a paper about “sustainability” as a slogan. I don’t still have the paper, but I know I framed it under an occasion/position-type topic sentence, where the occasion was “obvious puffery word ‘sustainability’ is basically a perfect tool for liars” and my position was “but even so, it’s better to use the slogan and work to populate it carefully with facts and our true values, than to make no attempt to unite people whose collective action will require simple rallying cries as labels for more detailed values and commitments.” As I saw it, because of problems like the tragedy of the commons and (I didn’t have this brilliant phrase for it yet, but I tried to grope at it) scope insensitivity, nobody could credibly show a revealed preference for caring about the environment/climate/polar bears/what-have-you without immediately rendering themselves indigent (because the problem is too big and there are free riders) or rendering themselves vulnerable to charges of hypocrisy (because they say they care, but aren’t sacrificing in proportion with the scale of the problem). By starting with vague, slogany commitments and building commitment collectively, these problems can be (somewhat, haltingly, and not without problems of Machiavellians rising to the tops of movements) tackled without anyone making caring about the environment self-destructive.

            My paper was at least 2x as hard to write as it would have been to cleverly criticize the brainlessness of the word “sustainability” and write, “Sustainability: Empty Rhetoric or a Bad Idea?“. It was 10x harder to write than a brainless “rah rah sustainability or otherwise the future will not be sustained!” All I needed was a passable paper with no discernible omissions or typos… why did I try to write a theory of collective action into a short paper in an environmental economics survey class?

            The answer, I think, has to do with stopping points. I was not comfortable ‘stopping’ until the idea I planned to write felt “done”–just like I would feel unhappy turning in a math problem that had a fraction that maybe hadn’t been reduced to its lowest terms.

          • Kelley Meck says:

            At the material-science level, my view of AGW hasn’t changed, nor have my error bars. E.g., still happening, still back-loaded, still pretty tightly matching what models have predicted, still more dramatic a change than anything in the history of the planet since photosynthesis put oxygen in the atmosphere. I still have it flagged to try and find out more about water vapor feedbacks… but I feel somewhat doubtful that there’s something there to find. (Many much more qualified physical sciences people than me have looked, and nobody seems to come back with serious doubts.)

            At the ecological-impacts level, I still think this is an extinction event for double-digit percentages of every phylum in animalia, even assuming humans adopt concerted and aggressive mitigation as a response to the problem. That seems to just follow directly from the material science level and the general sensitivity of ecological systems to this kind of change. Error bars haven’t changed much.

            At the social-impacts level, the “stopping points” way of organizing my thoughts about the disagreements I’ve encountered opens me up to either making very big updates myself, or thinking that many of the people disagreeing with me are missing something pretty big themselves, and *about* themselves. Certainly I’ve very much upped my interest in finding the time to significantly expand the set of facts from which I draw my understanding of how climate will affect human systems, and do not know what I’ll find. Do climate changes cause famines? Do famines cause political unrest? Has CO2 in the atmosphere affected plant productivity positively? Have warmer temperatures meant wetter weather? Have warmer ocean surface temperatures meant heavier ocean-fed storm systems, including wetter hurricanes and more expensive storm damages? Why does the U.S. have more climate deniers than elsewhere? Do governments generally underreact or overreact to tragedy-of-the-commons situations, and are the world’s governments more likely to underreact or overreact to this one?

            I am not sure what I’ll find as I find time to track down answers to these questions. It would be nice to be wrong, and have climate not be a major problem facing the world. I am not very hopeful.

            Edited to add: or anyway, I do not place much hope in the idea that climate isn’t a big problem. I am, in the main, a pretty optimistic/hopeful person.

          • David Friedman says:

            still pretty tightly matching what models have predicted, still more dramatic a change than anything in the history of the planet since photosynthesis put oxygen in the atmosphere.

            On tightness of matching, I have a summary of the performance of the first few IPCC reports, written a few years back.

            On more dramatic a change, are you claiming that what has happened so far is more dramatic than the repeated glaciation/interglacial cycles during the current ice age? You might compare either by change in sea level or change in how much of the Earth’s surface is inhabitable. For the former:

            During the last glacial maximum, 21,000 years ago, the sea level was about 125 meters (about 410 feet) lower than it is today.

            (Wiki)

            Currently it’s about a foot higher than it was a century ago, and the high end of the IPCC projection is for about a meter by the end of the century.

        • RC-cola-and-a-moon-pie says:

          The main climate argument of David’s that I would very much like to see answered is his contention that there are high-cost, low-probability outcomes to averting climate change that may well roughly offset the high-cost, low-probability dangers associated with allowing climate change to proceed, all on the assumption of rough IPCC-level probability distributions. That, to me, seems like one of the crucial points that needs to be answered by proponents of incurring significant expense to avert climate change. A ton of the bottom-line rationales I see on that side boil down to “better safe than sorry,” but that attitude completely goes away if the balance of risks is in rough parity.

          • Thomas Jørgensen says:

            But that is obviously wrong, because climate change can be solved at a net savings over present practice.

            Step one: Stop pretending the electricity market works. Build TVA- and EDF-style quasi-government entities to run it. Order them to build standard reactors by the hundreds. This should cause the price of electricity to converge to around 4-6 cents/kWh long term. Not too cheap to meter, but cheap.

            More importantly, it is a grid that does not externalize pollution onto public health budgets, so it is a much greater savings than it appears.

            Second: The market in cars does work, so we do not want to abolish it. But we can lean on it. Announce a long-term plan to tax the sale of gasoline cars harder every year, with the take being applied to a flat rebate on electric car sales. (This tax should self-eliminate. Therefore we do not want it in the general budgets.)

            That leaves shipping and aviation. Taking shipping nuclear is also cheaper than present practice, though it does mean you have to end most current shipping practices – very few shipping operators could be trusted with a reactor; they are too prone to cutting corners.

            Aviation… well, ammonia synthesized from electricity is a viable aviation fuel.

          • bean says:

            Where are you getting the claim that nuclear shipping is cheaper than current practice? Because I’d love to see a cite for that. NS Savannah was only competitive at 70s oil prices because most of the reactor costs weren’t counted. Naval nuke people are insane. They’re very good at safety, not so good at getting things done.

          • Thomas Jørgensen says:

            Uhm. Math. Writing off the cost of a reactor, extra crew, and decommissioning comes to a whole lot less than the lifetime cost of fuel for a high-seas freighter. Emma Maersk – which was designed from the keel up for fuel efficiency – burns 64 million dollars worth of fuel per year.
            Your discount rate has to be insanely high for just building a nuclear boat to not be cheaper than that. (A naval reactor that would move the Emma should cost about 200 mil.)

            This is not just my math, it is the result everyone gets.

            Savannah got retired right before OPEC, and the cost of intermediate fuel oil tracks the price of crude very faithfully.

            Everyone who has done the math at modern oil prices agrees it is not even a contest – nuclear reactors would be cheaper. By millions per year, or more relevantly, by quite high fractions of the freight rate.

            The cost of fuel is the dominant consideration for modern merchant marine operations, and despite the widespread adoption of sailing extremely slowly, it is very high. A nuclear ship would cost more, but you would also be able to carry much more cargo per year for a given tonnage by the simple expedient of laughing at the concept of slow steaming, so the higher investment is not nearly as big a deal as it looks – a ship with twice the price tag that goes three times as fast is a cheaper mode of moving goods. A ship that goes three times as fast while having a fraction of the fuel cost puts rival shipping companies out of business.

            The issue has never been the economics – but rather, the politics of getting permission to dock nuclear-powered ships in all the relevant freight ports.

            It does set a lower bound on the economic size of freighters – because while the math is drool-worthy for the big boats, the economics of a 15 MW plant in a tramp freighter… not so much.
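            For what it’s worth, here is the present-value arithmetic this claim implies, using only the figures offered in this exchange ($64M/yr of fuel, a ~$200M reactor, and the ~$12M/yr extra operating cost cited later) – figures which other commenters go on to dispute. A rough sketch, not a real cost model:

```python
# Back-of-the-envelope NPV of "nuclear freighter vs. annual fuel bill",
# using only the (disputed) figures from this exchange. Every input is
# an assumption from the thread, not an established cost.

def npv(annual_cashflow, years, rate):
    """Present value of a constant annual cashflow over `years`."""
    return sum(annual_cashflow / (1 + rate) ** t
               for t in range(1, years + 1))

fuel_saved = 64e6       # $/yr: the Emma Maersk fuel bill per the comment
extra_opex = 12e6       # $/yr: the Savannah-derived figure cited below
reactor_capex = 200e6   # $ up front, per the comment
hull_life = 25          # years, assumed service life

for rate in (0.02, 0.07, 0.15):
    net = npv(fuel_saved - extra_opex, hull_life, rate) - reactor_capex
    print(f"discount rate {rate:.0%}: NPV ≈ ${net / 1e6:,.0f}M")
```

            On these inputs the NPV stays positive even at a 15% discount rate, which is the claim being made – but the result is only as good as the capex and opex figures, which is exactly what the replies below contest.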

          • John Schilling says:

            Emma Maersk – which was designed from the keel up for fuel efficiency – burns 64 million dollars worth of fuel per year.

            This “math” thing that you speak so highly of, at least to my limited understanding, requires that you also have the annual cost of ownership of a marine nuclear powerplant to include in your calculations. That number does not appear anywhere in your rather lengthy post. Could you provide it, please?

          • Thomas Jørgensen says:

            Savannah had extra costs of 2 million/year (in 1970 dollars, so about 12 mil today). This is an extremely pessimistic upper bound, because it is the cost of being both the sole user of all the supporting infrastructure and a floating luxury exhibition ship, where a nuclear fleet would be able to share the first between many ships and not have to do the second at all.
            And it is not enough to move the needle at all. You can google some of the proposals people have made over the years if you care to – the estimate of professionals in the shipping business is an overall cost saving of 30-40 percent per tonne-kilometer for large freighters or oil carriers. The numbers get even more ridiculous if you run them with oil at a hundred dollars per barrel, which has happened before and is likely to happen again.

          • RC-cola-and-a-moon-pie says:

            Thanks, Thomas. Very interesting response. I have two reactions. First, I’m surprised to hear that there would be no cost to solving warming. Aren’t there tons of estimates for the costs of, say, Kyoto that show large positive numbers? And Kyoto would barely dent warming. Are all the estimates of this sort just completely wrong and they are missing obvious answers?

            Second, even if the costs of preventing the warming were, in fact, zero, or even slightly negative, I think that David’s argument would not necessarily be refuted. His argument, as I take it, is that the actual results of the warming are not known to be net negative, and could be net positive, and that even factoring in small-chance risks, they too cut in both directions. If this is right then I think it would follow that even if we could push a button and eliminate the warming we should be highly uncertain whether to push it.

          • Thomas Jørgensen says:

            The standard cost estimates all contain several unspoken qualifiers.

            First qualifier: “What would it cost to solve global warming, without recourse to unholy fission”.

            That qualifier is bloody stupid.

            Fission has direct costs slightly higher than minehead coal, and external costs that are hard to distinguish from zero (because things like waste disposal are internalized costs – the reactor operator pays for them).

            Since the external costs of coal are enormous – and this does not count global warming, just straight up the costs of air-pollution and poisoning people – switching to nuclear is a large net saving for the nation as a whole.

            Same goes for the shipping thing. And the cars. Going electric only solves the problem given abundant low-carbon electricity supplies.

            Second qualifier: “And also we must not offend against free market orthodoxy!” – This is why plans for decarbonizing the grid involve so many subsidies and tax breaks.

            This is also a stupid, stupid qualifier.

            Doing this is expensive, and also just does not work. The state is a hammer; state interventions that are extremely blunt generally work much better than trying to be delicate.

            The main drawback of nuclear power is that it is extremely capital intensive. Now, there is a very noteworthy feature of our current economic situation, which is that governments currently have extremely low costs of capital – the interest on government bonds is 0-2 percent, depending on which OECD nation you are looking at.

            Thus: Valley Authorities. Do not try to bribe private actors into building the grid you want. They will take you to the cleaners and just not do it.

            Instead set up semi-independent state owned entities, give them the pile of cash to just out-right build reactors by the dozens and their marching orders. The long term return on investment will likely be very high.

            And if it is not, that means someone invented a clean energy source which is genuinely better than fission, in which case, the state coffers can afford to take the hit because the economy will be booming.

          • Thomas Jørgensen says:

            Yes. They all have the unspoken assumption that you are not going to just go out there and solve the problem by using treasuries to finance 300 reactors at a cost of capital of under 2 percent.

            Well, actually, they have the more basic assumption that trying to prevent the end of the world as we know it is not a good enough reason to violate the taboo against nuclear power.

            … Or the taboo against dirigisme. Note that this plan does involve just flat out ruining a really large number of coal barons, gas tycoons and energy traders by undercutting them. It would also, in very short order, destabilize the Middle East very hard by turning off the money faucet.

            That is the basic assumption underlying all the cost projections: that climate change is not an emergency, and that no actions actually commensurate with the scale of the problem will be taken.

          • bean says:

            @Thomas

            I’d want to double-check those numbers. The Wiki article specifically says that she would have cost the same as a conventional freighter post-oil crisis not including maintenance and disposal of the reactor. I’d assume those came out of a separate budget, and who knows how much that was. If you have a cite on this, I’d love to see it, but I do know that nuclear power isn’t even really considered cost-competitive for destroyers by the Navy, and they have a lot more incentive to avoid refueling than merchies do.

            Re speed, 3x merchant speed is really, really fast. Like LCS fast. Which isn’t practical for a merchie, nuclear power or no. And it ignores the bit where you have to load and unload, and that takes the same amount of time no matter how fast your ship is.

          • RC-cola-and-a-moon-pie says:

            Wait, just to be clear we’re on the same page, treasury expenditures are positive costs that would be incurred to end warming, right? How important something is to do is a separate question from the cost of doing it. (And, of course, David’s argument contends that it is highly uncertain whether we should want to end warming at all.)

          • Thomas Jørgensen says:

            re: Bean. No, it was comparable before OPEC. Post-OPEC, it is not even in the same ballpark. The navy does not see near as much benefit in economic terms because they do not spend very much time under steam, compared to a freighter. A freighter can easily spend 80% of a year going places at top speed, so has much, much greater fuel spend than a warship which spends most of its time sitting someplace being intimidating.

            RC-cola: Investment, not cost. Nuclear reactors frontload the cost of the electricity they produce.

            The fuel is barely an entry on the ledger, but the 12000 person-years of skilled labor to build one, those do cost.

            But… with treasuries bearing zero percent interest and plentiful available construction labor, that becomes a joke. (The second part matters. State demand for labor can crowd out the market… but not when unemployment is what it currently is.) Take out loan, build reactor, sell electricity, pay loan back, and then your children are probably pretty happy – after all, current-build reactors are expected to run for a century. The maturity on the loan is 20 years. So that generation can cut prices, or use the income stream to cut taxes.

            Sure, the headline sticker price is a high number, but you need to compare that number to the price of 20 years of buying coal. Which is higher. Let alone the century perspective.

            Which must, admittedly, be taken with a grain of salt or three – on that time horizon, the chance of technological surprise obsoleting the things gets pretty high.
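            A loan-amortization sketch makes the frontloading point concrete. The plant figures here ($6B overnight cost, 1 GW, 85% capacity factor, 20-year loan) are illustrative assumptions rather than numbers from the thread; the point is only how strongly the per-MWh capital charge depends on the interest rate:

```python
# Illustrative sketch of how the cost of capital dominates a
# capital-intensive plant. The $6B / 1 GW / 85% capacity factor
# inputs are assumptions for illustration only.

def annual_loan_payment(principal, rate, years):
    """Level annual payment on a fully amortizing loan."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

capex = 6e9                           # $: assumed overnight cost
mwh_per_year = 1_000 * 8_760 * 0.85   # 1 GW at 85% capacity factor

for rate in (0.00, 0.02, 0.07, 0.12):
    pay = annual_loan_payment(capex, rate, 20)
    print(f"{rate:.0%} interest: ${pay / 1e6:,.0f}M/yr "
          f"(${pay / mwh_per_year:,.0f}/MWh in capital charge alone)")
```

            On these assumptions, treasury-like rates put the capital charge around $40-50/MWh, while private-sector rates make it two to three times that, before fuel or operations.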

          • HeelBearCub says:

            but not when unemployment is what it currently is)

            Uh, what? Prime-age labor participation rate still isn’t quite back to its pre-2008 peak, but it’s close. Unemployment is quite low.

            More to the point, the specific skills needed to construct sea going nuclear reactors aren’t just laying around.

            The final, most important piece of the puzzle is “how many nuclear events due to sub-standard construction, maintenance or operation will the market bear?” “How many thefts of nuclear fuel by terrorist organizations will the market bear?” The answer is “far fewer than the expected number”.

            There are so many, many forces pushing against nuke power, but especially mobile nuke, and most especially sea-going nuke.

          • bean says:

            The navy does not see near as much benefit in economic terms because they do not spend very much time under steam, compared to a freighter. A freighter can easily spend 80% of a year going places at top speed, so has much, much greater fuel spend than a warship which spends most of its time sitting someplace being intimidating.

            The fact that you’re looking at this only in economic terms makes it extremely hard for me to take you seriously. Yes, a freighter works harder than a warship, which should mean more focus on fuel economy. Which is why freighters use diesels and destroyers use gas turbines. But a freighter goes from point to point. It never has to dash halfway around the world to respond to a crisis. Freighters don’t UNREP. Warships do it all the time. Not having to unrep fuel would be really nice, saving all the money currently spent on tankers, and have major tactical advantages as well. The USN had nuclear cruisers, and walked away. They have a very active and very powerful nuclear propulsion community, and I haven’t heard of CGNs recently. Merchant lines have none of that, and they have no chance.

            And I’d like cites on Savannah’s operating costs. Wiki disagrees with you, and I’ll take them unless you can give me something solid.

          • Thomas Jørgensen says:

            The dirigiste play is for fixing the grid – I thought the references to the Tennessee Valley Authority and Électricité de France made that clear? And the skills for those mostly are just lying around – the things that eat man-years like a maw there are plumbing, construction and electrical work, and the double-checking of the above. Which are… very ubiquitous skillsets.

            Nautical reactors would be nowhere near as labor intensive, because we would be talking about series production in shipyards – which is the kind of thing that lends itself to automation, as does the obvious desire for consistency of build.

            RE: only viewing it in economic terms.

            No. I view addressing global warming and the general pollution from fossil fuels as an actual priority. I do not want to virtue signal, I want the problem solved.

            That means I care about the cost of solving it, because any solution which is expensive runs the risk of getting undone by a lizard-person with a bad toupee. A solution which is genuinely cheaper overall, by contrast, is going to stick come hell or high water.
            It also means I care about how feasible various solutions have proven in practice. People have been promising the renewable grid for over 40 years without delivering, while nuclear grids have existence proofs.

            Thus, if you are actually committed to changing the status quo – reactors.

            Nautical transport is technically easy but politically hard – port access was a bitch and a half for the Otto Hahn and the Savannah, even though the underlying tech is simple.

            The general problem here is that people are insane. Nuclear gets held to a standard which nobody applies to the status quo. Which, let us be clear about this, is murdering us all by the hundreds of thousands.

            Never mind global warming, just flat out poisons from fossil fuel burning and extraction operations are an ongoing massacre that makes every nuclear accident ever combined look like a spill in aisle 4.

          • bean says:

            Nautical transport is technically easy but politically hard – port access was a bitch and a half for the Otto Hahn and the Savannah, even though the underlying tech is simple.

            If it’s so cost-effective, why did the USN stop building nuclear surface ships in the 70s? Because everything I know of naval operations (which is a great deal) suggests that naval requirements are better-suited for nuclear propulsion than merchant requirements. Merchant ships go from one place that has fuel to another place that has fuel. They do not spend four months off the coast of someone who doesn’t like them very much, getting fueled at sea by expensive tankers. The USN going nuclear would let them cut the tankers, and their operating costs.

          • Thomas Jørgensen says:

            Well, unless you want to put reactors in destroyers… – because going off the list of ships commissioned into service, all the USN ever builds are subs (nuclear), carriers (nuclear), and destroyers (not nuclear).
            I mean, I kind of would like to see an all-nuclear navy, but I can see why the navy balks. It would tend to oversize the destroyers even more than they already are.

          • bean says:

            Well, unless you want to put reactors in destroyers… – because going off the list of ships commissioned into service, all the USN ever builds are subs (nuclear), carriers (nuclear), and destroyers (not nuclear).
            I mean, I kind of would like to see an all-nuclear navy, but I can see why the navy balks. It would tend to oversize the destroyers even more than they already are.

            The USN used to have nuclear cruisers. They were very useful for escorting carriers, and most weren’t that much bigger than current destroyers. There were serious plans for a nuclear-powered AEGIS cruiser, which got cancelled for fiscal reasons. I don’t have data to hand to give me size comparisons (I get numbers for the Belknap-Truxtun delta of both 700 and 1200 tons from wiki alone), but it’s a very significant cost. That’s why we haven’t seen them come back.

            Also, there’s no danger of an all-nuclear USN. The USN builds a lot of ships that aren’t nuclear or destroyers. Neither the amphibs nor the auxiliaries are likely to go nuclear any time soon.

          • John Schilling says:

            Because going off the list of ships commissioned into service, all the USN ever builds are subs (nuclear), carriers (nuclear), and destroyers (not nuclear)

            The United States Navy did in fact commission two nuclear-powered “destroyer leaders”, USS Truxtun and USS Bainbridge. Both of which were subsequently reclassified as “frigates” and then “cruisers”, because the Navy couldn’t make up its mind about ship classifications. But both were designed to fill the same tactical role as the Burke-class destroyers, and both were slightly smaller than the current flight of Burke-class destroyers.

            Both were failures. As Bean notes, large-ish warships are a much better match for nuclear propulsion than any merchant ship, but the nuclear destroyers were too expensive to operate and it was cheaper to just pay for the damn fuel even if you had to build a milspec tanker to deliver it to you at sea. Same deal with the nuclear-powered cruisers. Only the aircraft carriers, at ~100000 tons and with the unique requirement to steam off at thirty knots in a random direction every few hours, ever really benefited from nuclear power in a surface ship application.

            And possibly a few Russian icebreakers, because they operate in an environment where even military-grade tankers couldn’t reliably deliver them fuel.

          • bean says:

            The United States Navy did in fact commission two nuclear-powered “destroyer leaders”, USS Truxtun and USS Bainbridge. Both of which were subsequently reclassified as “frigates” and then “cruisers”, because the Navy couldn’t make up its mind about ship classifications. But both were designed to fill the same tactical role as the Burke-class destroyers, and both were slightly smaller than the current flight of Burke-class destroyers.

            John, you’re slipping. They were commissioned as frigates, with the hull symbol DLGN, which came from destroyer leader, guided nuclear. (Welcome to mid-century USN ship designations.) Later, they were reclassified as CGNs. And they were only slightly smaller than the later California and Virginia-class DLGNs/CGNs, which filled the same role. All of which were in the same band as the Ticos and Burkes we have today.

            Same deal with the nuclear-powered cruisers.

            The only nuclear ship that was clearly a classical cruiser (as opposed to the lesser so-called cruisers we have today) was Long Beach.

          • ana53294 says:

            Nuclear powered ships make sense for icebreakers. But only Russia bothers to build them, probably because they have the largest frozen coast.

            Canada still uses diesel powered icebreakers, even though breaking ice consumes huge amounts of fuel. If it mostly doesn’t make sense to use nuclear energy for a ship that requires huge amounts of fuel, why bother with anything else?

          • bean says:

            I read the chapter in Friedman’s US Destroyers on nuclear surface ships, and it looks like the nuclear escort was more a victim of politics than anything else. Basically, it didn’t make a lot of sense in the early days, when the ships were relatively cheap and the reactor was thus expensive. As ships got more expensive, the cost delta for the reactor went down. The most relevant number was a nuclear Aegis ship vs a Tico, which was $1.2 billion vs $800 million. However, the nuclear ship (of basically comparable combat power) was the class leader, while the Tico was a follow-on, so I’d guess that the true cost of the nuclear escort is more like $1 billion (FY80 dollars, I believe). I honestly have no clue what the numbers would look like today. The combat systems have only gotten more expensive, but we may not have a good surface reactor available, which is going to drive up price.

            That said, the operational advantages of nuclear power for naval use are really compelling, and a merchie has nothing like Aegis driving up cost. So it’s still a no for them unless you give me hard numbers otherwise.

  21. AI alignment has grown into a developing scientific field. … It’s just the art of rationality itself that remains (outside the usual cognitive scientists who have nothing to do with us and are working on a slightly different project) a couple of people writing blog posts.

    My impressions are almost the opposite. AI alignment has gotten good publicity, but in terms of useful results, it’s barely better than a couple of people writing blog posts.

    Whereas CFAR made much more valuable progress in 2012-2015, and I suspect it is continuing to make more progress than MIRI. But the martial art analogy is apt – much of what CFAR does well is training our System 1s, and that translates poorly into blog posts or other marketing efforts. CFAR alumni seem to lead more satisfying lives than pre-CFAR rationalists, but that’s easy to overlook because it doesn’t lead them to talk more about rationality.

  22. Donnie Clapp says:

    We once thought the world was flat. Then, through some scientific inquiry, we discovered it was a sphere. Then, we discovered it was not exactly a sphere: It bulges here and there. It’s not even a constant shape: the land has tides!

    An anti-science zealot points to each of these breakthroughs and says, “See! Every time science thinks it knows the truth, someone proves it wrong. Putting faith in today’s scientific consensus is just as foolhardy as it was to put faith in the idea that the world was flat.”

    But in fact, these discoveries were not absolute negations. They were refinements of our understanding.

    Over time, scientific study gets us asymptotically closer to the truth. When a theory gets “disproved”, most of the time we are not replacing it with its opposite—we are replacing it with a more nuanced theory that’s a step closer to the truth.

    The intuition that you’re struggling to identify is an intuition about where the asymptote is for any given subject or question. They were wrong about social priming, but that’s not earth-shattering because the asymptote of absolute truth is still a long way off in the field of psychology. It’s harder to believe that we’re still a long way off from understanding whether decreasing the amount of chemical in a medicine can increase its effects on the body.

  23. Jayson Virissimo says:

    There is an analogy between kumite and argument, so in that sense SSC is like a dojo. But we don’t really have the equivalent of kata (although adding a daily logic/probability/decision theory exercise could work).

    IMO though, SSC fulfills a role much more like the kinds of coffee shops the logical positivists used to hang out in than any kind of dojo.

    • watsonbladd says:

      The coffee shops provided the fundamental input (caffeine) required to do math or analytic philosophy. I am not so sure blogs replace that, although they do provide a social space for ideas to develop, which is also extremely useful.

      • 天可汗 says:

        IME, it’s a lot easier to develop ideas IRL than online. It can be done online, but it helps to at least have meetups or IRL groups who share the online context.

  24. MartMart says:

    Like many, my first introduction to SSC was the “tolerate everything except the outgroup” post. It’s probably the one most likely to attract right-of-center libertarians (I think being right of center meant different things then). But what kept me around was the clarity of thought. I had long thought that it was important to pass an ideological Turing test, but it never occurred to me to extend that idea to steelmanning arguments. This was particularly apparent in the reactionaries-in-a-nutshell post, where the author was being extremely charitable to an idea he clearly disagreed with. To me, this was new. People just do not do that.
    I think I spent the next several months reading as far back into the archives as I could. Sometimes I learned things. Sometimes I agreed with conclusions. Other times I disagreed, and felt much more certain about disagreeing. (If I had been asked a few years ago, I would have said that consequentialism was probably the optimal moral philosophy, once someone explained what it meant. Having heard this blog argue for deontology – which I was biased against, mostly because it sounds like something stuffy old religious people believe – I’ve come to think of it as being far more optimal.)
    I’ve long wanted to read something more about how to think, rather than seeing those principles applied to the latest controversy (unless it’s a controversy I deeply care about, of course). This blog keeps pointing me toward Eliezer Yudkowsky’s work, but for one reason or another I absolutely cannot tolerate his writing style (apologies to him if he is reading this). Getting through it is a chore, one I tend to put off for “later” far too often.
    All that is to say that, at least in a sample of 1, this blog has done quite a lot to spread the idea of thinking rationally.
    That said, the latest writings don’t have the same feel to them as the earlier ones. Maybe it’s inevitable as I’m learning more, maybe Scott is getting caught up in a life of his own (how dare he!), or maybe there really is less of the whole “here is how you think clearly” examples I valued so much.

  25. wanda_tinasky says:

    I live in fear of someone asking something like “So, since all the prominent scientists were wrong about social priming, isn’t it plausible that all the prominent scientists are wrong about homeopathy?”

    Really? My apologies if I’m taking a throwaway example too seriously, but isn’t the obvious response there that that’s an equivocation fallacy w/r/t the term ‘prominent scientists’? Social science isn’t chemistry (or even medicine), and the reliability of results in the former is demonstrably less than the reliability of results in the latter. There are also justifiably different priors between “subtle, anti-intuitive effect of fuzzy complex system [brain]” and “violates settled principles of a hard science.”

      • wanda_tinasky says:

        I get your point, which is that you’d like to have a pure Outside View approach where you can point and say “this is science, believe the result” in a consistent way. And I think you can have it, but you have to add a dimension for complexity/confidence. The simpler the system, the better we can model it. The better we can model it, the more confidently we can say “your proposed mechanism is incompatible with our model, therefore it’s false” (e.g. perpetual motion, faster-than-light travel). The more complicated the system (brains, economies, cultures), the less likely we are to have Laws and the more likely to have vague platitudes like “our derivatives-pricing model works sorta well most of the time” (but if you’re convinced there’s a huge housing bubble that’s gonna destroy the economy, then who am I to argue with you). I feel like there’s a metaphor or isomorphism to Occam’s Razor here somewhere, but I’m not quite sure how to make it.

        My point is that non-replicating medical studies seem to be more like the latter and homeopathy more like (a violation of) the former. When a medical study is overturned, we can shrug our shoulders and say “what do you want, the body is a complicated mess and we can’t control for everything” without feeling too bad about ourselves. But when something is diluted to the point where there’s less than one molecule of ‘active ingredient’ per dose, we can pretty confidently say “we know how molecules work, there’s just no way.”
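        The dilution arithmetic behind that last point is easy to check. A typical homeopathic “30C” preparation is thirty successive 1:100 dilutions; even granting a full mole of active ingredient at the start (a generous assumption), the expected number of molecules per dose is effectively zero:

```python
# The arithmetic behind "less than one molecule per dose".
# 30C = thirty successive 1:100 dilutions = a factor of 10^60.

AVOGADRO = 6.022e23            # molecules per mole

dilution_factor = 100 ** 30    # 30C dilution: 1e60
starting_molecules = AVOGADRO  # generously assume one full mole

expected_per_dose = starting_molecules / dilution_factor
print(f"expected molecules per dose: {expected_per_dose:.1e}")
# ≈ 6.0e-37: you'd need ~10^36 doses to expect one surviving molecule.
```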

  26. Radu Floricica says:

    A complaint and a possible solution. Low-hanging fruits are Not picked. The election in the US was/is controversial (for example, I’d have voted for Trump if I was American), so maybe this helps sidetrack this kind of talk – but there are plenty of situations in the world where people vote … wrong. In my Romania, for example, we’re close to getting a Robber Party dictatorship, which has no redeeming qualities about it. And they got (fairly) voted into it.

    And we have no recipe to stop it from happening again in 2 years. So don’t talk to me about low hanging fruits being picked.

    And this is not the only scenario. We have concepts like steelmanning, which you can get on the 5th read if you’re around 120 IQ points – and maybe even use once or twice in real life – but nothing even resembling a manual for people in Uganda to stop voting for people who think gay people bring drought.

    • Scott Alexander says:

      You think that convincing everyone to agree on who the president should be is a low hanging fruit?

      I don’t mean “low-hanging fruit” in the sense of “now everyone is rational”, I mean it in the sense of “all the easy insights that you can get by thinking about the problem for five minutes have already been developed”.

      • Radu Floricica says:

        Nonono. The issue is that the US was _complicated_, which both sides seem to hilariously miss. But in many parts of the world, the decisions should be obvious. Stuff like “things are awful, so I’m not voting anymore” – a literal example I heard two hours ago. Or, well, Brexit.

        And we have nothing even resembling a tutorial. I’m better off with Nisbett’s Mindware than with the rationalist community. This is the real low-hanging fruit – not optimizing smart people, but making super basic tools for average Joe. Voting average Joe, if the incentive is not enough.

        • ec429 says:

          Or, well, Brexit.

          Hmm, I haven’t seen any discussion of Brexit on SSC. As a Brit, I’d be interested to hear what you think the obvious decision is, and why you think it’s obvious. Scott, feel free to delete/veto this if you think it’d get too CW-ey.

          • Radu Floricica says:

            Don’t mind hearing arguments pro-Brexit, actually.

          • ec429 says:

            @Radu Floricica:

            Don’t mind hearing arguments pro-Brexit, actually.

            You asked for it, you got it…

            Above all is the democratic argument. The referendum was already held, Leave won, and if the government does not deliver on that result it will put a great strain on our constitutional dispensation at a time when trust in our governing institutions is already at an historic low. It is with no relish at all that I say the results could make the Poll Tax Riots look like a Sunday-school expedition.

            Leaving that process issue aside, there are several substantive arguments. EU membership is antithetical to national sovereignty, with both the highest legislators and the highest court being essentially foreign powers. The only consistent positions are national sovereignty or EU statehood; it is clear that the EU aspires to the latter and has ratcheted that union ever closer through Maastricht and Lisbon, and I do not believe there is any reason why a nation, particularly one with such capabilities and talents as the UK, should allow itself to be made a vassal state of this euro-empire.

            Economically, the EU is a source of great harm. The Common External Tariff leads to needlessly high prices for consumers (as Jacob Rees-Mogg MP likes to point out, this includes heavy tariffs on food, clothing and footwear which fall disproportionately on the poorest in society), while the vast thicket of Single Market regulations stifles productivity and enables large corporations (who can afford the compliance costs) to strangle their smaller, more efficient competitors (this is why Airbus, the CBI, etc. keep briefing in favour of Remain). The CAP and CFP encourage inefficient and wasteful methods of agriculture and fishing, which incidentally also damage the natural resources on which these sectors depend (the CFP in particular has been an ecological catastrophe and a perfect case study in the tragedy of the commons); moreover, the combination of the CAP and barriers to trade in agri-foods keep African producers out of European markets, which otherwise could do so much to help lift Africa out of poverty (an ounce of trade, in my opinion, is worth a pound of aid. Though under the Weights and Measures Directive I’m probably supposed to convert that to grams). At least since the financial crisis, the EU has looked enviously at the financial business of the City of London and seeks, through new regulations and taxes on financial transactions, to break that industry in the hope that other European nations might capture it (a vain hope, since in such an eventuality the trade would most likely move to, say, New York or Singapore, not Paris or Frankfurt). It should also be noted that the UK taxpayer has been a major contributor to the EU budget.

            The system of law gives a peculiarly British reason to leave: historically both England and Scotland (at least; I am not certain of the Welsh or Irish history here) have operated on the Common Law, and that has certainly been the legal system of the United Kingdom throughout its existence; but the European Union runs on the Continental system of Civil, or Roman, Law. These two legal systems are incompatible (don’t ask me the details; I’m not a lawyer), making it unclear whether the rights of Englishmen anciently held, some first recorded in Magna Carta, still hold (one can argue, for instance, that some applications of the European Arrest Warrant, in extraditing Britons without a fair trial to a jurisdiction where they will not receive one, have violated the ‘lawful judgment’ (due process) clause of Magna Carta). To give a little context to this, note that many of the American revolutionaries justified their rebellion on the basis that they were being denied the rights to which Magna Carta (and other elements of the British constitution) entitled them; thus, abrogating these rights in order to trade with Europe is analogous to if the US had had to repeal the Bill of Rights in order to join NAFTA.

            Immigration is an issue often talked up by pundits, but it’s my belief that Leave voters did not object to immigration per se; rather they felt that (a) Britain did not control her own borders, essentially a flavour of the sovereignty argument and (b) it was unconscionable of us to turn away applicants from the Commonwealth in order to make room for Europeans. This latter may also have been influenced by folk memories of Canadians, Australians, New Zealanders, Indians etc. giving their lives in WWII.

            One last point is the question: why would we want to be in? We only joined in the first place because it seemed like the European system was producing unprecedented growth in its member economies. We now know that this was mainly post-war ‘catch-up’ growth (Wirtschaftswunder, Trente Glorieuses, etc.) which petered out at just about the point we joined; today the EU is a cramped and declining customs bloc (hampered by an ill-considered currency union) while all the growth and opportunity in world markets is elsewhere. If we were not already a part of the project, who on earth would be suggesting we should join, and why?

          • Radu Floricica says:

            @ec429

            > You asked for it, you got it…

            And I am very happy I did. A lot of this is new to me, and I pretty much agree with all of it (except the “empire” comment, that’s just being mean).

            My first instinct is to say: “the issue is not being in or out of the EU, but doing Brexit the way it’s likely to end up: badly for Britain”. This was likely my initial intention anyway… I think. And this at least remains solid.

            Second comment: very little of this is part of why Britons voted Brexit. The discussion is about voter decision-making skills, and in this respect the Brexit vote still looks horrible. The immigration comment is hollow, the “EU costs us money” argument sank literally the day after the vote, not to mention the pro-Brexit faction looking even more panicked than the losers.

            So if the argument was “should Britain be part of the EU, or just part of its common market in some form?”, you make a great point.

            If the argument was: will Britain actually gain more from leaving, long term? That’s undecided, and honestly half luck.

            But what the vote meant was: do we leave the EU now, with the existing process, with those guys in charge of leaving? This still looks like a very very bad decision to me.

          • ec429 says:

            except the “empire” comment, that’s just being mean

            Well, I only use that term because the EU itself uses it. For instance, in 2007, the then-President of the European Commission, Jose Manuel Barroso, said:

            We are a very special construction unique in the history of mankind. Sometimes I like to compare the EU as a creation to the organisation of empire. We have the dimension of empire. What we have is the first non-imperial empire.

            Relatedly, I feel that the sovereignty argument is backed up by the way the EU has behaved in the negotiations, for example in its attempts to impose a border between Great Britain and Northern Ireland (essentially saying that N.I. isn’t allowed to Brexit because of the land border).

            very little of this is part of why Britons voted Brexit

            I don’t think you (or anyone else) have sufficient data to make a claim of that kind. Polling that asked Leave voters for their reasons has mainly shown the sovereignty argument coming out top, with immigration/borders second (e.g. this Ashcroft poll). In the polls I’ve seen, the latter has been phrased in terms of ‘control’, which does not distinguish between the version I gave and the more ‘pull-up-the-drawbridge’ version that many pundits have attributed to them.

            The immigration comment is hollow, the “EU costs us money” argument sank literally the day after the vote, not to mention the pro-Brexit faction looking even more panicked than the losers.

            Not sure what you mean by “hollow”, nor what you think was sunk (are you referring to the £350m/NHS slogan?), and the only panic I’ve seen from Brexiteers is over the fear that the Government will produce a ‘technical Brexit’ that in practice keeps us subject to the EU’s rules, courts, etc. If anything, the Leave side was too complacent after the referendum result, shutting down most of its campaigns while Remainers kept agitating and politicking.

            do we leave the EU now, with the existing process, with those guys in charge of leaving? This still looks like a very very bad decision to me.

            I think most people assumed that the Government would deliver on the result of the referendum (don’t forget the leaflet they sent out to every household beforehand saying “the Government will implement what you decide”). I, at least, never anticipated that they would have the brazen cheek to gradually retreat from the Mansion House position (it’s not everything I wanted but I can live with it) to the Chequers agreement (this is not Brexit except in the most petty and legalistic sense). The latter does not even satisfy Remainers — when both Jacob Rees-Mogg and Peter Mandelson oppose a policy, it can’t be good! The Government’s current policy is not what Leavers voted for, nor what they expected to get if they won the plebiscite. So if we did make a “very very bad decision”, it was due to being under the mistaken impression that we lived in a democracy.

            But in any case, while the Government is currently mired in ‘fudge’, there is enough push-back from the Conservative grassroots and the ERG that the most likely result (as I see it) is that we will leave on WTO terms. It’s my belief that that would be a good thing and that the results would vindicate the decision to leave; until it’s happened I won’t, of course, be able to prove that.

  27. 天可汗 says:

    Part of this is that the low-hanging fruit has been picked.

    Disagree. I see near-zero study of instrumental rationality, which seems like low-hanging fruit. Identifying successful people whose lives are well-documented and trying to figure out what made them successful — especially if they didn’t just drift there on the back of conventional status tracks — seems like it should be fruitful, but other than Athrelon occasionally reading up on the Inklings I see no one doing this.

    Most of history is free on the internet! You can just get Ben Franklin’s autobiography from Project Gutenberg. They seem to have censored the part about how he tried to live up to the virtue of chastity by only having casual sex in moderation, but that’s beside the point.

    Probably there’s a lot of fruit that’s only low-hanging in some senses – e.g. reading a book is low-hanging in effort investment but not in time investment, and the time cost is enough that people don’t do it.

    • AG says:

      Studying successful people is more about survivorship bias than anything else.

      • maintain says:

        I feel like we should be able to figure out which cases are unlikely to be survivorship bias.

        • albatross11 says:

          How would we know if we were wrong?

          I think the only thing that would really work here would be using your study of successful people to build up a predictive model, and then using that model to try to predict success/failure of other people who hadn’t been in your training data. Otherwise, you can come up with various models that should eliminate survivorship bias given some assumptions, but it seems like they will always rely on those impossible-to-verify assumptions.
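
          A minimal sketch of that out-of-sample test, with synthetic stand-in data (the features, the “success” rule, and the library choice are all illustrative assumptions, not anyone’s actual method):

          ```python
          # Fit a "success" model on one cohort, then score it on people the model
          # never saw. All data here is synthetic; finding real, honest features
          # of successful people is the hard (and unsolved) part.
          import numpy as np
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import train_test_split

          rng = np.random.default_rng(0)
          X = rng.normal(size=(1000, 5))            # five made-up traits per person
          y = X[:, 0] + rng.normal(size=1000) > 0   # "success": one trait plus luck

          # Held-out people are the check against survivorship-style overfitting.
          X_train, X_test, y_train, y_test = train_test_split(
              X, y, test_size=0.25, random_state=0)

          model = LogisticRegression().fit(X_train, y_train)
          print("out-of-sample accuracy:", model.score(X_test, y_test))
          ```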

      • 天可汗 says:

        Weak.

        There’s always an excuse. Hikkis who haven’t seen sunlight in years have excuses that are as compelling to them as yours are to you.

        • Montfort says:

          Why don’t you join the French Foreign Legion? Are those reasons (“excuses”) more compelling to you than a hikki’s are to them?

        • AG says:

          It’s just as easy to argue that hikkis who successfully leave that lifestyle didn’t do it by replicable means. Either their minds are stronger than those of the people left behind, and/or they happen to have a strong support system (which can’t be replicated for others, because you can’t force feelings) – and there are just as many cases where the material circumstances were the same but the person has not broken away. So generalizing from the successful ex-hikki does nothing for other hikkis. The low-hanging fruit of instrumental rationality is “everyone is a special snowflake.”

    • Jaskologist says:

      Why study individuals when egregores are where the real action is?

      (This genre already exists.)

      • Anon256 says:

        Because I’m an individual and not an egregore? I care about my own outcomes and control my own actions, whereas I don’t care about any egregores’ outcomes (except as they impact mine) and generally have negligible influence over their actions.

        • Jaskologist says:

          ‘The eye cannot say to the hand, “I don’t need you!” And the head cannot say to the feet, “I don’t need you!”’

          The most successful individuals got there riding atop egregores.

          • Jiro says:

            But to co-opt a feminist saying, the fish can say to the bicycle, “I don’t need you”.

            Proof by analogy isn’t actually a thing.

          • Randy M says:

            Analogies, like fiction, can teach, but not prove. If you trust the person presenting it, you can gain understanding. If they aren’t trustworthy, you can be misled.

  28. apedeaux says:

    Mr. Alexander,

    If you had the intestinal fortitude to wade through Piketty, I beg you to attempt a review of Mises’ Human Action. I realize that your community exhibits an intense antipathy towards the notion of any kind of non-empirical methodology; however, I believe it behooves you to at least attempt to refute the validity of praxeology by reviewing the seminal work of its founder. I too harbored immense misgivings toward the notion that a priori synthetic logic could advance scientific knowledge, but I believe that the epistemology of Mises truly succeeds in the attempt. At the minimum, it may provide a valuable foil for what I interpret to be your empirical positivism.

  29. BPC says:

    See, it’s funny you bring this up, because the SlateStarCodex comments section and subreddit are consistently considered by people outside your sphere to be the worst thing about SlateStarCodex. As RationalWiki puts it, “As usual, you can make anything worse by adding Reddit. /r/slatestarcodex is an unofficial fan forum for the blog. […] (literally advocating the Fourteen Words will get you 40+ upvotes[33] and admiring replies).” My own experience is that said comments section has a massive in-group bias, considers feminism and the left the outgroup, and is full of racist, misogynistic, and all too often not particularly bright people. If you’re consistently updating your opinions based on those people, it does not bode well for the blog. :/

    To put it in perhaps more SSC-friendly terms: “nice libertarian paradise, shame about the witches though.”

    • The Nybbler says:

      consistently considered by people outside your sphere

      You mean like the unusually-honestly-named SneerClub? Who cares about the opinions of people who are literally one’s sworn enemies?

      • Tatterdemalion says:

        I can think of two obvious answers to that question.

        The boring one is “people whose sworn enemies have the power to hurt them” – you can only afford to ignore someone’s opinion if you’re safe from them.

        The more interesting one is “rational people, a lot of the time”: “hates me personally” does not imply “does not have opinions on other subjects I can usefully learn from”.

        • albatross11 says:

          “Hates me personally” is a pretty good indicator that engaging personally with this fellow isn’t going to work out so well for me, however. I may be able to learn some factual things from someone who hates me – indeed, this happens in every war. But it probably won’t be from a calm conversation under civil rules of discussion.

        • Baeraad says:

          I can think of a third – people are frequently very perceptive of the flaws of the things they hate. You can get a pretty good idea of a thing’s failings by listening to the foaming-at-the-mouth rantings of people who hate it. It’s when people talk about things they like that they tend to depart this dimension for a strange and wonderful one of eternal rainbows and sunshine.

          • Paul Zrimsek says:

            Indeed, they can see the flaws whether they’re there or not. That’s some solid-gold perception.

          • Baeraad says:

            Indeed, they can see the flaws whether they’re there or not. That’s some solid-gold perception.

            Oh, the flaws are always there, unless the critic is literally insane. They may be the exact same flaws that every single other person on Earth has, something that the critics artfully neglect to notice (as the brainy types here might put it, they make isolated demands for rigour). But why would anyone bother to make up fake flaws out of whole cloth, when everyone and everything has more flaws than you could list if you spent a lifetime talking about them?

    • theredsheep says:

      Given that the core of the blog (as Scott sees it) is supposed to be about rationalist stuff like paperclip AI risk, it probably doesn’t matter if the commenters believe some weird or nasty things about other subjects. Fill in your own historical example here – my favorite is Isaac Newton’s obsession with alchemy, but there are others. You can find right-wing beliefs fairly easily here, yes, but this is probably because the readership skews pretty heavily white and male, which in turn is because the larger community it draws from (software engineers and the like, AFAICT) skews pretty heavily white and male.

      Also, this blog has some renown, but is not exactly world-famous. The group of people who have heard of it, have an opinion of it, but are not part of it probably represents a somewhat biased sample. If you stumbled across SSC and liked it, there’s a fair chance you decided to comment here, or on the Reddit, or wherever. If you stumbled across it and said “I don’t get it” or “this dude needs a damn editor, I’m not reading that textwall,” you probably went off to look at something more interesting to you and forgot SSC existed. If, on the other hand, something Scott or his commenters said offended you terribly, there are plenty of places on the internet where people congregate to complain, and they tend to form communities, which grow from their hatred of other communities, etc.

      Exhibit A being RationalWiki, which in my experience tends to be edited by a bunch of snippy, self-satisfied assholes who collated their badmouthing into Wiki form for efficiency’s sake, then called it “rational” to remind themselves how clever they are. Any given RW entry tends to be “list of things wrong with this subject.” NB I’m not a rationalist, and don’t believe in any of their major bugbears like white supremacy or vaccines causing autism.

      (Finally, what group doesn’t have a massive in-group bias?)

      • BBA says:

        *sigh* The 2000s were a different time. “Rational” just meant anti-Bush.

      • HeelBearCub says:

        If you stumbled across SSC and liked it, there’s a fair chance you decided to comment here, or on the Reddit, or wherever.

        Scott has, at least in the past, hung his proverbial hat on this being decidedly untrue.

        IOW, he has said he wants to convince a liberal (left of center) audience of things, and believes himself to be doing so because of surveys that show that the vast bulk of his readership is liberal and does not comment.

        • Aapje says:

          @HeelBearCub

          People are also probably just very atypical here and defy easy categorization.

          One of the most anti-gay people here would (or did) vote for gay marriage in his state.

          • Conrad Honcho says:

            I’m not “anti-gay.” My complaint is that the rainbow-flag-waving has approached “clapping for Stalin” levels of absurdity. I have personally argued for gay marriage, voted for gay marriage, and photographed two gay weddings for hire; but because I still think homosexuality leads to poor life outcomes, and therefore do not want my children exposed to uncritical portrayals of homosexuality in mass media, I am called “anti-gay.” I am not anti-gay, I am insufficiently pro-homosexual. But calling that anti-gay is like saying an atheist who doesn’t attend Mass every day and twice on Sundays is “anti-Catholic.”

          • HeelBearCub says:

            If I didn’t want my kids (merely) exposed to uncritical portrayals of Catholics in mass media, I’d say it would be fair to describe me as anti-Catholic, despite being willing to accept them as customers at my business.

          • Conrad Honcho says:

            I’d say that’s more like “anti-pro-Catholicism.” Not wanting to watch (or have your kids watch) someone else’s propaganda is not the same as being against that thing. If you’re not particularly Catholic, or even think Catholicism or organized religion in general are not good ideas, how excited are you about your kid watching TV shows about how all the Catholics are smart, wise, and kind, and anyone who is against the Catholic characters is a moral monster driven by irrational hatred, fear, and stupidity alone? “I’d rather my kid not watch this Catholic propaganda.” “But y u hate Catholics tho?”

            Not everyone who objects to the pledge of allegiance in schools hates America (I think, anyway).

          • HeelBearCub says:

            someone else’s propaganda

            This seems like an odd way to describe merely uncritical portrayals. I suspect you have changed your description of what you object to in order to make it seem more acceptable, are applying different standards for what counts as “uncritical” depending on the subject matter, or have a very non-standard definition of propaganda.

          • theredsheep says:

            I don’t really bother to read all the coverage of gay stuff anymore, but I do have to note that there is one hell of a lot of it. Like, seriously, before I stopped following CNN et al., there would be a human-interest story on LGBT-etc. stuff about once a day. Given that such people account for less than 5% of the population, and most of the pieces were fairly insipid bits about how hard it is to be the only transgendered furry postal worker in a small town in Iowa (or something similar), it’s hard to read it as anything other than a PR drive.

            If the NYT ran positive, front-page stories about the lives of Hasidic Jews every single day for a month, you’d deem it reasonable to assume that they were trying to make the public more sympathetic to Hasidic Jews, even if the stories weren’t dishonest per se.

          • disposablecat says:

            Is it bad that I would want to read a human interest story on the only trans furry postal worker in Bumfuck, Iowa?

          • HeelBearCub says:

            @theredsheep:
            It’s called the “news” for a reason.

            Media coverage is always skewed toward either threat or novelty. LGBTQ issues are still relatively novel, in that the integration of unapologetic gay people into the mainstream is still very new.

          • Conrad Honcho says:

            I’m just saying it’s wrong to call me “anti-gay” given that I’m pro-gay or gay-neutral in my personal interactions and politics. In addition to the other opinions I’ve expressed and actions I’ve taken that I’ve already mentioned wrt gay marriage, I also:

            1) Have gay friends of the level of “dine together, have had them over to my house for parties and have been to their house for parties.”

            2) Am very grateful to the gay man who gave me a job when I very much needed it and was my project manager.

            3) Am friends with my hair stylist of 15 years, who happens to be a gay Republican who voted for Trump.

            And yet, because my pro-homosexuality has a limit (I will not go wave their flags at their parades in front of my kids or watch their propaganda shoehorned into pop-culture TV shows), I am “anti-gay.” I think this is ludicrous, and yet here we are.

            Protip: never let them see you stop waving that flag with verve and vigor or you’re an anti-gay bigot, too.

          • HeelBearCub says:

            @Conrad Honcho:
            You do you, but you sound like you have unresolved cognitive dissonance.

            Imagine:

            I’m not “anti-Republican”, I have friends who are Republican, and former bosses who are Republican, and several of the people who do work on my cars and my house are Republican. See I’m pro-Republican.

            But I won’t go to Republican party events, I wouldn’t vote for one, I don’t think they should be in government, and I don’t think my child should be able to see any media that paints Republicans in a positive light.

            But don’t tell me I’m not pro-Republican.

          • theredsheep says:

            The coverage is decidedly skewed. You will hardly ever, for example, see a mainstream outlet talk frankly about gay male promiscuity, or “monogamish” partnerships where they live together but both are free to stray. If you go to a place that doesn’t have to worry about freaking out us mundane heteros (e.g. a Dan Savage column), you will see them freely admit it, even advocate it. On CNN or BBC you’ll see devoted, faithful, button-down bourgeois couples every day. Because it’s not about presenting objective truth, but shaping public opinion.

          • Conrad Honcho says:

            and I don’t think my child should be able to see any media that paints Republicans in a positive light.

            That’s not right. It’s not “won’t let them see anything that paints them in a positive light”; it’s more like what theredsheep said. It’s the “ONLY in a positive light, and those not on board with the gay lifestyle only in a negative light” thing that’s the problem. You do not see the promiscuity, the faux monogamy, the disease, the drug culture, the depression, the suicide, etc. You do not see those stories on the mainstream news; you do not see those narratives written into Very Special Episodes of “Modern Family” or “Glee.” It’s only smart, fun, happy, healthy gays versus vile irrational hate monsters.

            You can object to the TV only presenting pro-Republican views without being anti-Republican. Hell, I am pro-Republican, and I don’t want to watch things that only present pro-Republican messages, because I don’t want to be self-blind.

            What are the mutually exclusive things I believe to be true that put me into cognitive dissonance? I believe I’m not anti-gay (and I believe that is borne out by my pro-gay political stances and personal relationships), but I also don’t want to watch, or expose my children to, super-pro-gay messaging? I don’t think those are mutually exclusive. I just have a limit to my pro-gayness. “Insufficiently pro-homosexual.”

            I keep thinking of the 50 Stalins guy from the reactionary steelmanning. I’m totally fine with Stalin. Maybe even two Stalins. I could be convinced to go as high as three Stalins. But I think the people who are screaming for 50 Stalins have gone off the deep end, I don’t want to hear it or have them influence my kids, and I don’t think being satisfied with fewer than 50 Stalins makes me an anti-Stalinist. And to the 50 Stalinists I’m just saying, watch out, because somebody’s going to come out with the 100 Stalins platform and it’s off to gulag for you.

          • disposablecat says:

            100 Stalins platform

            Some right-leaning gay men I know would say this has already happened – and that the platform in question is the recent ascendance of the T in LGBT.

            I’m of two minds. One, I of all people have no right to tell others not to be who they are. But two, I sort of feel like a movement that was a cohesive whole of “alternative sexualities don’t make you unworthy to participate in polite society” has been co-opted and redirected by something that is… not that, and that the zeal of our new ideological overlords is dragging the rest of us backwards in the eyes of society.

          • theredsheep says:

            T is radically different from LGB, because you can’t keep it private; it’s a question of public identity, not something discreetly hidden in the bedroom. It was always going to be at least a little bit of an albatross for your movement.

          • disposablecat says:

            it’s a question of public identity, not something discreetly hidden in the bedroom

            I mean, yeah, as someone who is in a committed same-sex monogamous relationship where we work at the same company and are deliberately not out to our coworkers (at least until we eventually figure out the whole kids thing and it becomes stupefyingly obvious to everyone), that’s part of it.

            The other weird thing, though, is that while the forefront of the T movement was the really obvious “I AM THE OTHER GENDER NOW WATCH ME DO INVASIVE MEDICAL THINGS TO TRY TO PROVE IT”, I didn’t much have a problem with that – I got along well with the various FTMs I knew in college, we have a couple (well-passing) MTFs at work and I’ve got no problem there either.

            But then it moved past that. Biological sex started being first decoupled from gender, then denied entirely, to the point where “there are two genders” is now as provocative a statement as “I identify as an Apache helicopter”. Males in every meaningful sense of the word started claiming to be women with zero alteration to hormones or body, then competing in women’s sporting events and dominating them. Gays and lesbians began to be called bigots for not being sexually attracted to men and women with the opposite genitalia, which is a stark contrast to previously when, you know, everyone recognized that *gay men like dicks, and probably aren’t interested in fucking people without dicks*.

            And pretty soon we ended up in $current_year where essentially the public leftist vision of Mainstream Trans People is approximately equivalent to if in 1998 the public leftist vision of Gay Rights had been the fucking Folsom Street Fair, or as I once heard it described “the limp wristed S&M rainbow leather apocalypse”, instead of Will and Grace – and I’m being dragged down into the abyss with them, as are all the reasonable, normal trans people that I know, whether they realize it or not.

          • Conrad Honcho says:

            Gays and lesbians began to be called bigots for not being sexually attracted to men and women with the opposite genitalia, which is a stark contrast to previously when, you know, everyone recognized that *gay men like dicks, and probably aren’t interested in fucking people without dicks*.

            I’ve heard of straight men being called out for not wanting to date transwomen. After all, transwomen are women. And there was the female porn star last year who killed herself after being harassed on social media for refusing to do a scene with a gay man. You would think “woman (or person for that matter) has absolute authority over who enters her body and can refuse anyone for any reason at any time” would be really high up there on the terminal values scale, but apparently not. That’s probably the 75 Stalins platform. The 100 Stalins platform will be “males can no longer identify as ‘straight’ on $PopularDatingApp and hide their profiles from other men, because only bigots would not even be open to the idea of a same-sex relationship.” I’m sure it’s out there, it’s just not mainstream yet. I thought “white privilege” was looney stuff you only heard about on tumblr and then in the 2016 primaries there’s Bernie and Hillary, the top contenders for the Democratic party fielding white privilege questions during the debates.

          • theredsheep says:

            By “discreetly hidden in the bedroom,” I didn’t just mean staying closeted; I mean that, for the most part, gay people are publicly indistinguishable from straight ones. One of the pharmacists I work with is really, really, really obviously gay (just in terms of mannerisms), and all it means is that he’s a touch flamboyant. Not hard to deal with, even if I hated gay people for whatever reason. I have no idea about what kind of sex life he has, and I don’t have to modify my behavior all that much. The most I’d be expected to do is act polite to any SO he brings to company events, and SOs at company events are always mildly intrusive nonentities. Not that hard.

            Trans people are publicly presenting as something we think they’re not, and gender is so thoroughly wrapped up in how we relate to one another that it was bound to cause friction. So newspapers can try the same spin they do quite successfully with gay people, and it, uh, doesn’t work very well outside of true-believer-land. Like, they run a story about a happily monogamous couple where the one who bore the child has a full beard now and the other partner is breastfeeding it, and all the people who’d be just fine with a photogenic gay couple in polo shirts go “WHOOAAAA THERE.”

            Which I guess maybe is what you were saying, but I don’t distinguish it (EDIT: the stuff you’re talking about, not just quixotic newspaper articles) all that sharply from extreme-left nuttery in general. I think it was last year that I stumbled across an article claiming it was misogynist to refuse to have sex with a woman during her period. The sexual revolution is eating its own children, and the inclusion of trans issues is only accelerating an existing trend.

          • albatross11 says:

            A good starting point for thinking clearly about this stuff:

            You are not obliged to feel sexual or romantic attraction toward any particular person. Nobody can legitimately demand that of you. Like what you like, not what someone angrily demands that you like.

          • John Schilling says:

            By “discreetly hidden in the bedroom,” I didn’t just mean staying closeted; I mean that, for the most part, gay people are publicly indistinguishable from straight ones.

            Is it not also the case, or at least the ideal, that trans people are publicly indistinguishable from cis people of the same professed gender?

            A trans-hetero woman – presuming she transitions, e.g. during a gap year between high school and college, and does so completely and consistently – should be indistinguishable from a cis-hetero woman to everyone but her immediate family, her MD, and maybe her actual lovers.

            An L, G, or B woman will be clearly distinguishable from a cis-hetero woman to anyone in a position to notice who she is dating. And while that can be hidden, it seems like that would require more effort than simply not mentioning that you used to have a penis five years ago.

          • Matt M says:

            You are not obliged to feel sexual or romantic attraction toward any particular person. Nobody can legitimately demand that of you.

            Your statement is not accurate though, in terms of what the culture demands.

            “I’m not attracted to black people” is considered a racist and bigoted statement, making one worthy of heaps of scorn and hatred.

            Saying “I’m not attracted to other men” isn’t quite at that level of social unacceptability yet, but I’m with Conrad – it probably will be soon.

          • J Mann says:

            @albatross11 – I’m going to challenge you a little. If your attraction principles are causing harm and you can edit them by introspection or brain hacking or whatever, then maybe you have an ethical obligation to consider it.

            – The easy case is that you’re attracted to children or some kind of emotionally vulnerable personality where you’re causing harm. A medium case would be that you’re a sadist, and on reflection, the consensual relationships you engage in are still leaving people worse off for being involved with you. (For purposes of the hypo, if you think there are some benevolent sadists, you’re the other kind.)

            – Moving on, let’s hypothesize that your attraction pattern is causing harm in the aggregate. Let’s say that Purpletonians have been the subject of historic discrimination, and as a result, 95% of the population finds ethnically identifiable Purpletonians undateable. That’s a pretty crummy result.

            In this hypo, on a balance of harms basis, we might not want to shame people into trying to adjust their tastes to include Purpletonians, but I’d argue that individually, people should probably feel a moral imperative to at least try.

          • Randy M says:

            John, I don’t think you are considering the full spectrum of people who take the “transgender” label; there are those who advocate for treatment as women/men well in advance, or even in lieu, of any alterations, let alone the ideal surgery that would leave them indistinguishable from cis people.

            Of course, I don’t know the breakdown or how much coverage of outliers skews perception of the relatively small group of trans.

            J Mann, let me point out that albatross didn’t say that who you *are* attracted to can’t be a problem, only who you aren’t. So that answers your first objection to his post, which I agree with. To your second, we don’t need to hypothesize anything – we have a group that is seen as undateable: ugly people. It’s about as easily identifiable a demographic as a racial group, with some debate about the edges but not the central cases. Our cultural consensus seems to be that whether one has an obligation to look past outer ugliness to find inner beauty is gender-dependent. I’ve seen arguments, based for example on average ratings on dating sites, that suggest men tend to have lower standards, so this is exactly what we’d expect, I suppose.

            Regardless, whether preferences should be altered depends on the cost of doing so, but even without any side effects, I’m not a consequentialist here; I think one has an obligation to avoid direct harms, but there is no fault in not proactively sacrificing in order to benefit a demographic, especially when the net individual benefit is zero. That is, if I hack myself to fall in love with a Purpletonian, that is transferring the benefits of my affection from a non-person-of-purple to a person of purple.

            There is a pretty good argument for monogamy in there, though. If every person has just one lover (barring distortions in the gender balance or differences in satisfaction in being alone by gender) there is someone for nearly everyone.

          • Conrad Honcho says:

            maybe her actual lovers.

            I don’t think it’s a “maybe.” As I understand it, the created vagina is very easily distinguishable from a natural vagina.

          • theredsheep says:

            Re: enforced attraction, I don’t think this whole trend can progress much further. Everybody loved gay rights in part because it was basically cost-free; it was a form of progress that required a minor rewrite of a few rules, and for some devout religious people to get their feelings hurt. If two men marry each other, it doesn’t have much bearing on the lives of their neighbors. That was a big selling point; anybody else remember those “it will not affect you in any way” memes on FB circa 2014? Regardless of the rights or wrongs of it, it was the perfect cause for the slacktivist era. Easy fix, and everybody gets to feel good about themselves.

            If people are allowed to tell you who you are/are not allowed to date, or even feel attracted to, it is no longer anything close to cost-free. That is about as personal as an imposition can possibly get. People who were fine with Fred and Steve wearing rings, or even with forcing a few isolated bakers to provide them with a cake, are going to be a lot more leery of something that puts a burden on them, personally, to feign attraction to Fred or Steve. And this would affect, basically, every post-pubescent person in the country, with the possible exception of asexuals and sworn celibates.

          • John Schilling says:

            I don’t think you are considering the full spectrum of people who take the “transgender” label; there are those who advocate for treatment as women/men well in advance, or even in lieu, of any alterations,

            The claim under debate was that you can’t keep transgenderism private. For all forms of sexual behavior or desire, there will be people who choose to trumpet it as loudly as possible, and it’s annoyingly unreasonable for any of them to also say, “and none of you all are allowed to be the least bit offended by that!”

            But for the ones who want to keep it private, I do believe that the Ts have the edge over the LGBs.