
The Craft And The Codex

The rationalist community started with the idea of rationality as a martial art – a set of skills you could train in and get better at. Later the metaphor switched to a craft. Art or craft, parts of it did get developed: I remain very impressed with Eliezer’s work on how to change your mind and everything presaging Tetlock on prediction.

But there’s a widespread feeling in the rationalist community these days that this is the area where we’ve made the least progress. AI alignment has grown into a developing scientific field. Effective altruism is big, professionalized, and cash-rich. It’s just the art of rationality itself that remains (outside the usual cognitive scientists who have nothing to do with us and are working on a slightly different project) a couple of people writing blog posts.

Part of this is that the low-hanging fruit has been picked. But I think another part was a shift in emphasis.

Martial arts does involve theory – for example, beginning fencers have to learn the classical parries – but it’s a little bit of theory and a lot of practice. Most of becoming a good fencer involves either practicing the same lunge a thousand times in ideal conditions until you could do it in your sleep, or fighting people on the strip.

I’ve been thinking about what role this blog plays in the rationalist project. One possible answer is “none” – I’m not enough of a mathematician to talk much about the decision theory and machine learning work that’s really important, and I rarely touch upon the nuts and bolts of the epistemic rationality craft. I freely admit that (like many people) I tend to get distracted by the latest Outrageous Controversy, and so spend way too much time discussing things like Piketty’s theory of inequality which get more attention from the chattering classes but are maybe less important to the very-long-run future of the world.

Any argument in my own defense is entirely post hoc. But if I can advance such an argument anyway, it would be that this kind of thing is the endless drudgery of rationality training, the equivalent of fighting a thousand bouts and honing your reflexes. Controversial things are, at least, hard problems. There’s a lot of misinformation and conflicting interpretations and differing heuristics and compelling arguments on both sides. Figuring out what’s going on with Piketty is good practice for figuring out what’s going on with deworming etc.

Looking back on the Piketty discussion, people brought up questions like “How much should you discount a compelling-sounding theory based on the bias of its inventor?” And “How much does someone being a famous expert count in their favor?” And “How concerned should we be if a theory seems to violate efficient market assumptions?” And “How do we balance arguments based on what rationally has to be true, vs. someone’s empirical but fallible data sets?”

And in the end, I think we made a lot of progress on those questions. With the help of some very expert commenters, I resolved a lot of my confusions and changed some of my conclusions. That not only gives me a different view of Piketty, but – I hope – long-term trains my thought processes to better understand which heuristics and generators-of-heuristics are reliable in which situations.

Last year, I had a conversation with a friend over how we should think about the latest round of scientific results. I said over the past few years I’d learned to trust science more; he said he’d learned to trust science less. We argued it for a while, and in the end I think we basically had the same insights and perspectives – there are certain situations where science is very definitely trustworthy, and others where it is very definitely untrustworthy. Although I could provide heuristics about which is which, they would be preliminary and much worse than the intuitions that generated them. I live in fear of someone asking something like “So, since all the prominent scientists were wrong about social priming, isn’t it plausible that all the prominent scientists are wrong about homeopathy?” I can come up with some reasons this isn’t the right way to look at things, but my real answer would have to sound more like “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”

I think by looking at a lot of complicated cases, and checking back on them after they’re solved (which sometimes happens! Just look at the Fermi Paradox paper from earlier this week!) we can refine those intuitions and get a better idea of how to use the explicit-textbook-rationality-techniques. If this blog still has value to the rationalist project, it’s as a dojo where we do this a couple of times a week and absorb the relevant results.

This is one reason I’m so grateful for everyone’s comments. I only post a Comments Highlights thread every so often, but I’m constantly making updates based on things I read there and getting a chance to double-check which of the things I think are right or wrong. This isn’t just good individual rationality practice, it’s also community rationality practice, and so far I’m pretty happy with how it’s going.


681 Responses to The Craft And The Codex

  1. wiserd says:

    “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”

    I’d like to steelman a counterargument.

    1. There actually is some tentative evidence in favor of non-monotonic dose-response curves – bisphenol A in particular. I met one person who scrapped their research on bisphenol A because they got a paradoxical result (increased testes weight in the studied animals instead of reduced). Some endocrine disruptors have one effect at a high dose and the opposite effect at a low dose. And this makes some intuitive sense: a low dose is within the physiological range, and potentially a signal; a high dose is obviously noise, and may lead to shutting down certain systems. Since many plants use endocrine disruption as a defense (via phytosteroids), humans would have some history of being exposed to certain endocrine disruptors.

    https://scholarworks.umass.edu/cgi/viewcontent.cgi?referer=https://duckduckgo.com/&httpsredir=1&article=1028&context=dose_response

    Something not entirely dissimilar happens with scent: a molecule that is pleasant at small doses will smell different, and far less pleasant, at large doses.

    2. While homeopathy claims to use quantities of materials too small for detection, the actual practices used (dilution in a single container) tend to produce concentrations higher than those quantities claimed. This has been supported by some peer reviewed literature. (Though obviously the major journals don’t care one way or another.) Many chemists have reaffirmed this based on their own experience. The procedures that homeopaths claim to use would result in doses in the microgram range, not the undetectable range, because some material sticks to the edges, remains in drops, etc.

    Thus, there may be some small quantities of homeopathic remedies which offer benefit, though accepting that would require acknowledging some problems with homeopathic theory.
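The dilution arithmetic behind point 2 can be sketched as a toy model (this is my own illustrative sketch, not the commenter's calculation; the `floor` value of 1e-9 is an assumed stand-in for wall residue, not a measured figure):

```python
# Toy serial-dilution model. A homeopathic "30C" preparation is thirty
# successive 1:100 dilutions, nominally leaving 10^-60 of the starting
# material. The comment's claim is that real-world carryover (material
# adhering to the vessel, drops left behind) puts a floor under the
# achievable concentration. The floor below is an illustrative assumption.

def serial_dilution(c0, steps, factor=100.0, floor=0.0):
    """Mass fraction remaining after `steps` 1:factor dilutions,
    never dropping below an assumed wall-residue floor."""
    c = c0
    for _ in range(steps):
        c = max(c / factor, floor)
    return c

ideal = serial_dilution(1.0, steps=30)                  # nominal 30C, ~1e-60
with_residue = serial_dilution(1.0, steps=30, floor=1e-9)

print(f"ideal: {ideal:.1e}, with residue floor: {with_residue:.1e}")
```

The point of the model is that once a residue floor exists, additional dilution steps stop doing anything: the nominal 10^-60 is unreachable, and the final concentration is whatever the carryover mechanism pins it at.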

  2. Markus Karner says:

    Scott,

    hope this doesn’t get buried here. You should really check out / review the book on the argumentative theory of reasoning by Hugo Mercier and Dan Sperber – it is all about what you wrote here. It “proposes that instead of having a purely individual function, reasoning has a social and, more specifically, argumentative function. The function of reasoning would be to find and evaluate reasons in dialogic contexts—more plainly, to argue with others.” (from the authors’ site). The book is aptly called “The Enigma of Reason”.

    Bottom line, humans are really not very good at reasoning at all as individuals – hence all our biases etc. But they are superb at reasoning in a dialogue fashion, because reasoning likely evolved as a tool to convince others for social-action purposes. And from courtrooms to democracy dramas to scientific falsification, our best systems for finding “truth” always – always! – involve partisan mudfights. No human can find truth on their own.

    I would love you to review this book. It also ties in, in a remote but robust way, to the predictive processing model of the mind – the mind’s always made up already and can only be changed by “dialogue” with the environment. We only ever see what we want to see, unless we are convinced otherwise argumentatively.

  3. AwaitingCertainty says:

    There is only “one science” but unfortunately, there are many scientists. Add in reputations, previously-written works by know-it-alls, grant money, lobbying interests, insurance interests, hospitals who want sick patients (don’t kid yourself), doctors (ditto), USDA, agri-business, Big Pharma (double ditto)…

    People in the US and other developed countries have become increasingly diabetic, cancer-ridden, obese.

    I work with researchers trying to turn the “food pyramid” on its head (now ridiculously and sloppily slid onto a “plate” – while the food pyramid was completely upside down, it at least deigned to show a hierarchy).

    So! Before y’all start loving on science, consider what happens when they get it WRONG for 60 years. (I’m pretty sure low-carb/keto aka LCHF is right and HCLF is WRONG. Persecution of those holding that view has heated up as the proponents of the erroneous view start losing their hold: look up what they did/are doing to Tim Noakes (SA), Gary Fettke (Aus.), Steven Cooksey (@diabeteswarrior, USA), and Jennifer Elliott (Aus.), and see how persecution of the truth works. Fettke, who amputates diabetic limbs, was told he couldn’t counsel on diet since he wasn’t “trained”, AND, just to kick him in the teeth, he was further told by his country’s medical overlords that even if [his beloved] LCHF becomes the standard, he STILL can’t counsel his patients.)

    Tolstoy put it well:

    http://www-stat.wharton.upenn.edu/~steele/Rants/MindChanging.html
    Changing Someone’s Mind — a Non-starter

    “I know that most men, including those at ease with problems of the greatest complexity, can seldom accept the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their life.” — Count Leo Tolstoy

    (from Chapter 14 of his incredible and contentious essay “What is Art?” — with thanks to Ricky Der for finding this source)

    William Davis, MD (cardiologist), knows whereof he speaks in “Undoctored: How Health Care Has Failed You and How You Can Become Smarter Than Your Doctor” (2017). A great book. It mentions my boss, Richard D. Feinman, PhD (blog: feinmantheother.com), who is coming out with a book called “Nutrition in Crisis”.

  4. Paul Zrimsek says:

    I’m going to stop saying “Checkmate, atheists!” and start saying “Ha ha, now you die.”

    • J Mann says:

      Just wipe the blood off your face with your thumb, taste it, and then make the “come on, then” gesture with your free hand.

  5. Yaleocon says:

    Waaaay late to the game here, but for what it’s worth, there’s a better (or at least more commonplace) word for “x as a martial art”, a good thing that you cultivate in yourself through practice. It’s virtue.

    • yodelyak says:

      In that vein, you could say: “Virtue is to church/temple as karate is to dojo as rationality is to LW/SSC-adjacent-internet.”

      • Yaleocon says:

        That use of the word is older than churches. For an origin point, maybe look to Aristotle’s Nicomachean Ethics.

        And I think you might be able to do better than an analogy; you might be able to refine Scott’s characterization of rationalism. Rather than relying on the metaphor to martial arts, you could more clearly say: “Rationalism started out as those who valued the virtue of rationality highly, and came together to pursue it.”

  6. mariadroujkova says:

    I’ve been observing the rationalist networks for many years. I too hope to see their progress. Past the good times at the dojo, where are the networks going with their honed skills? What is it really all about? What will they make? What difference will they make?

    Sharing is an important first step. Creating a shared language, a shared craft as you put it, is necessary. I think the rationalists have worked a lot on that, and that’s admirable. Then, I hope, comes collaboration, making things together. And then, purposeful action, changing the world together.

    I’ll call that progress.

  7. Quixote says:

    Reading this post made me think of The Checklist Manifesto by Atul Gawande (noted surgeon, Harvard med school professor, bestselling author, MacArthur Fellowship winner etc etc).

    In it, he argues that extremely complex disciplines (he uses aircraft flight and surgery as his two main examples) benefit greatly from the introduction of checklists as there is simply too much complexity for any person, no matter how practiced, well trained, and experienced, to hold in their head without missing some steps some of the time. He also says that in many cases every step is so vital that if you miss any steps you might as well skip the whole enterprise (e.g. if you do surgery perfectly but forget to ensure your lines are sterile, there is a good chance the patient dies anyway).

    If rationality is so complex, and relies so much on training, experience, etc., then it’s the kind of field that would benefit from working on checklist development.

    • raemon777 says:

      Well, there is this. 🙂

      • Quixote says:

        Thanks for reminding me of that. That list is a good thing to have (and I wish CFAR had produced more things in that vein rather than going down the low-reach seminar rabbit hole). But that list isn’t what I’m thinking of; it’s more a collection of loosely associated good things across a wide variety of problems and domains. You couldn’t encounter a problem, follow that list, and feel confident you had hit most of the major steps in addressing it. It lacks ‘ordered’-ness, lacks comprehensiveness, and isn’t focused.

        I think it’s unlikely that a first, or even second or third, try at a practical rationality checklist would be that great, but the lists would be tangible things that could be iterated on and incrementally improved. They would be susceptible to the ratchet of progress. Currently there are lots of floating tips and tricks, plus the occasional best practice, applied on an ad hoc basis as people remember them.

    • HeelBearCub says:

      IMO, checklists are for performing the same task, over and over again. They work best in application to problems that are already solved with discrete tasks which can easily be assessed as complete.

      I’m not sure that really applies well to rationality.

      • Quixote says:

        I think you may be significantly underrating both the effectiveness of checklists and the complexity of tasks for which checklists have been demonstrated to significantly improve results.

        Anyway, check the book out from the library and read it for an hour – it’s only 200 pages, so if you read at a decent clip you’ll get about a third of the way through. That’s enough to extract many of the major insights and potentially become impressed enough to finish the book (or alternatively become bored enough to put it down with certainty that you at least gave it a fair shake). Best case, you get a great tool for improving performance across a wide variety of domains. Worst case, you hear some cool stories about surgery, flying military bombers, building skyscrapers, and other fun subjects but don’t learn anything that useful. Not the worst way to wager an hour of time.

        • HeelBearCub says:

          I’m not saying anything about how great (or not) checklists are. Things can be enormously complex but still be repeated.

          I’m saying that checklists work well on non-novel problems and much less so on novel problems.

          Can you name a novel problem that the author thinks there is a checklist for?

          • AG says:

            Try looking at the “Root Cause Analysis” processes used by various companies to investigate safety incidents. Related are FMEA or 5-Whys processes.

            And as I stated above, it seems like more organized argumentation (structured as per competitive debate argument structures) can help investigate issues in a more systematic way. A lot of the “ships passing in the night” arguing I see in these comments could be clarified by knowing that one party is attacking the internal link while the other is trying to run an impact calculus, etc.

          • HeelBearCub says:

            Processes aren’t checklists, though?

            Certainly process can be extremely helpful in answering novel questions. The scientific method itself can be described in this way. But I wouldn’t call it a checklist.

          • AG says:

            They follow certain investigative procedures, though. Create a timeline, create a causal chart, identify causal factors. Then there is actually a checklist-like process of categorizing the identified causal factors using a (helpfully provided by the training company, ofc) root cause map, in order to determine what kind of solutions to address the problem are more relevant.

            “Who, what, when, where, why” is a checklist. I’ve seen Critical Thinking checklists: just general questions that one should remember to ask when investigating any situation.

    • AG says:

      Wasn’t Arbital supposed to do this? Well-known discussion paths getting documented so people can stop reinventing the argument. Instead, it turned into just another blog, if I remember correctly.

      • Nornagest says:

        I said this when Arbital started, and now I’m going to say it again: I’ve seen nerds try to solve politics by mapping out all the common logical fallacies and blind alleys that people fall for about two dozen times now, and it never works, and that never stops the next nerd from trying. At best you get something that your ingroup can point to and be smug about, as per RationalWiki: nerds have no appreciation for how subjective these things are.

  8. magicalbendini says:

    As a person who took this up this cause last year, and has been working on it ever since, I’ll give you my current, up-to-date theory:

    Despite being in the sequences, Craft was (and is still) not well understood

    Perhaps due to a background in self-improvement, I got that this was an obvious unmet need the first time I read the sequences. Whatever the mechanism, I had completely ruled out the possibility that prominent figures who were there since the beginning were yet to understand this. When Scott questioned me about this last year, the possibility that he could have missed it was so ruled out in my mind that I responded assuming he was being deliberately evasive. Like there was just no way someone who had taught me so much could miss something that seemed so obvious.

    This also might be why there are multiple “rationality dojos” that are basically rationality meetups but instead of sitting around talking about fanfiction and transhumanist news, they implement a scaled-down version of CFAR.

    Blogging is useful, but does little to develop craft
    For a certain type of person (whom I’m rather envious of), blogging is fun and low-effort. This is probably why community blogs still happen, and why the posts are of such good quality. But as Scott rightly points out, he ends up blogging about shiny things rather than drilling down into important but boring topics.

    To make it clear, I think community blogs are still very much worthwhile. On just the quantifiable metrics, I’d be surprised if less than 6 figures of donations to EA causes could be attributed to its promotion here.

    Craft is both difficult and resource-intensive to develop
    The marginal production costs of craft are moderate, but the resources and original research needed for the first version are immense in terms of time and background knowledge. The best example of craft I’ve seen so far is Obermot’s cheesecake recipe. That was just one very narrow, proof-of-concept application, and that still took months to develop and document.

    I also don’t think the model of people contributing over the internet in small chunks can get enough traction. Adding information to Wikipedia works well enough, but developing something like Wikipedia for craft is not something that can be done with layman volunteers making small and irregular contributions.

    No short-term business model
    While craft needs the talent resources that would ordinarily be provided by a business, craft cannot pay for them. There is little ability for someone who would work on this to capture any of the value created in the first several years, nor is there a path to monetization large enough to interest venture capital. This leaves grants and donations – see next point.

    Craft development lacks units of caring
    Perhaps there would be funding for this if there were adequate awareness; maybe the shift in emphasis towards X-risk and EA sucked all the oxygen away from craft development. Either way, there is a minuscule amount of external resources currently thrown at this problem.

    Up until my declaration of intent, there could have been a chicken-and-egg effect, in the sense that the kind of people who had the skillset to make it happen were not interested given a lack of available resources, so potential funders saw nothing that was worth funding. I find it hard to blame these potential leaders. The kind of people who were both interested and had the skills and connections to succeed would have had a lot of other options*. Faced with the choice between working at Google, founding a startup, or working on this – when they consider finances, prestige, and how they’d describe their job at dinner parties – it would be completely nuts to pick the third one.

    * I didn’t have any legible track record or certification, so my opportunity cost was far lower, yet I still felt I was taking a massive risk.

    Due to a lack of resources, current efforts are heavily talent and finance constrained
    One of my objections to everything being in Berkeley was that a project like this would be very difficult to start there. I’ve heard things like “funding isn’t an issue, anyone with an idea and a team can get money”, which I find hard to believe, especially in light of REACH’s last-minute funding plea.

    While there are fewer financial constraints in Manchester, where the cost of living is ~30% of Berkeley’s, funding is an ongoing struggle. The project’s total expenses, including my food and shelter, are only about $800 a month, but that still has to come from somewhere*. The original plan was to run my business part-time, but it didn’t scale down well. Even on part-time hours, the need to deal with unscheduled client phone calls 8-10pm, 7 days a week, remained. This didn’t mix with the deep/cognitively heavy work the project required, so I tried to make other arrangements.

    The last attempt to secure a funding stream that wouldn’t derail the project almost killed it. The renovation due to take place last year to house Manchester’s rationalist community ended rather badly. The business decision made sense given my construction experience, but relations between the project members and the investor deteriorated to the point of doxxing relatives and attempted blackmail. Tallying up the lost time spent on this, the disruption caused and the funding gap it abruptly left, it probably caused around 3 months of setbacks.

    In terms of talent, right now we have 3 regular volunteers together in a grouphouse. Hours contributed are variable, as with the vast majority of volunteers. Due to a lack of management experience on my part, I had no idea how much to ask for, and went with the default hero-power strategy of asking for very little, working on things publicly, and hoping people would opt to do more. It turned out I underestimated willingness while vastly overestimating the obviousness of what needed to be done and how to go about doing it.

    This situation is improving, and organizational capacity is being built, but our current volunteer situation could well be above average for projects running on shoestring budgets – they may not have been able to generate the kind of publicity I did.

    * Obligatory Patreon (www.Patreon.com/bendini), which also gives a summary of what Kernel has in the works. As a general rule, the more oddly specific the description, the further along a thing is in development. If you don’t want to donate but could instead put me in touch with Eliezer to discuss implementation of craft, that would also be very useful.

    • magicalbendini says:

      Correction: Obermot’s key lime pie

    • HeelBearCub says:

      From the linked post:

      We’ll be ignoring those, of course. A store-bought crust?! Absurd. Is it rational to use an inferior, mass-produced crust, instead of making one from scratch with your own two hands? I submit that it is not rational.

      It is pretty amazing to see someone near instantly invalidate not only their own work, but the work of anyone who references their work approvingly, in such a blithe manner.

      And not because store-bought crust is necessarily the equal of, let alone superior to, home made crust.

    • quanta413 says:

      People who think brownies don’t taste as good right after baking as they do after several hours in the fridge should not be trusted (joking).

    • Drew says:

      Despite being in the sequences, Craft was (and is still) not well understood

      Perhaps due to a background in self-improvement, I got that this was an obvious unmet need the first time I read the sequences.

      I’m very happy that other people feel this way. It is odd to read such an inspiring “rationality is about winning!” call to action and then contrast it with the apparent result.

      The bits of the community I’ve seen don’t seem bad, precisely. But they seem no more or less successful than what I’d expect of a community found among any similarly-nerdy pursuit.

      While craft needs the talent resources that would ordinarily be provided by a business, craft cannot pay for them. There is little ability for someone who would work on this to capture any of the value created in the first several years, nor is there a path to monetization large enough to interest venture capital.

      I have a heuristic with investing advice that seems to apply here: “if this person were as skilled as they say, would they really have the time to take me as a client?”

      There are probably some investment brokers who are so skilled (/connected) that they could beat index funds. But any broker who can prove this would be making vast sums of money and probably not interested in me.

      Life coaching seems like it might be similar; if someone had the secret to career success I’d think they could turn that into some personal victories (to establish credibility) or just open a life-coaching business targeting corporate clients.

      So, I’m entirely in favor of community efforts to figure craft out, but I’m suspicious of the idea that anyone has the answer, especially if they don’t already have a track record.

  9. Vergence says:

    I think the martial arts analogy can be extended to reveal two possible mistakes of the rationality movement. When people think of martial arts, they usually think of extreme levels of skill. But in fact, in history, most martial artists (i.e., soldiers) had only a small amount of training. Giving them extreme levels of skill was rightly viewed as a waste of resources. This is the first mistake: Spending resources to develop extreme levels of skill in a few people instead of moderate levels of skill in many people. High levels of skill and exotic techniques seem to get a lot of attention in the rationalist movement, but working to spread the basics to as many people as possible would probably be more effective in the long run.

    People also think of technical ability when it comes to martial arts, maybe because they imagine symmetrical one-on-one dueling like you see in combat sports. But again, technical ability is not actually very important in a broader martial context. What’s more important is tactical or strategic ability – knowing how to use your resources for maximum effect. This is the second mistake: Developing a skill without also developing the ability to decide when and how to use it.

    Arguably the effective altruism movement avoids the second mistake, but I’m not sure it’s doing so well avoiding the first one.

    • Nornagest says:

      But in fact, in history, most martial artists (i.e., soldiers) had only a small amount of training.

      This is only true for relatively recent history, i.e. after the development of mass conscription. The average member of a feudal warrior class trained to levels that’d make 99% of modern black belts look half-assed and uncommitted, although clannishness and bad communications often meant that the breadth of their knowledge wasn’t so hot by modern standards. Even after lightweight firearms and better metallurgy meant it was no longer possible for one well-trained and well-equipped rich guy to kill an arbitrary number of feckless peasants, armies were normally built around a hard core of trained and experienced veterans. (For example, longbows, sometimes cited as the beginning of the end for chivalry, are so strenuous to use that we can tell who the longbowmen in period burials were by their skeletal structure.)

      On the other hand, it’s true that technical ability is only a moderate advantage in martial arts. What counts for a lot more, and which you’re leaving out, is conditioning — both mental and physical. That takes a lot of time and has to be maintained constantly.

      • Vergence says:

        Is conditioning important in developing rationality though? I think the analogy might break at that point.

        • Nornagest says:

          Well, the honest answer is that I have no idea, because no one’s come up with a demonstrably effective system of developing rationality or a set of best practices for it. But I could definitely tell a story where rationality practice was more analogous to conditioning than to technique.

  10. johan_larson says:

    From the outside looking in, the rationality movement has its Jesus, someone who could get things off the ground and running, but hasn’t found its Paul, someone who can take the movement to the masses. Perhaps what’s called for is a more approachable adaptation, something a bit more self-helpy and middlebrow. I’m picturing a series of books, something like this:

    Be Right (logic, statistics, accounting)
    Do Not Be Wrong (fallacies, common misconceptions, cognitive blind-spots)
    Be Persuasive (rhetoric, psychology, manners)
    Be A Force of One (goal setting, self-discipline, motivation)
    Be a Force of Many (recruiting, organizing, inspiring, leading)

  11. PeterDonis says:

    Looking back on the Piketty discussion, people brought up questions like “How much should you discount a compelling-sounding theory based on the bias of its inventor?” And “How much does someone being a famous expert count in their favor?” And “How concerned should we be if a theory seems to violate efficient market assumptions?” And “How do we balance arguments based on what rationally has to be true, vs. someone’s empirical but fallible data sets?”

    There are certain situations where science is very definitely trustworthy, and others where it is very definitely untrustworthy. Although I could provide heuristics about which is which, they would be preliminary and much worse than the intuitions that generated them.

    I posted a fairly late comment in the previous scientific consensus thread:

    https://slatestarcodex.com/2017/04/17/learning-to-love-scientific-consensus/#comment-494537

    The gist of that comment is relevant here:

    “I disagree with your whole approach regarding scientific consensus (to be fair, it’s not really “your” approach since lots of people advocate a similar one). Science doesn’t work by consensus. It works by constructing models that make predictions that match reality. So if you want to know how reliable a given piece of science is, you look at how well its predictions match reality. You don’t need to look at anything else, and in fact you shouldn’t look at anything else, because anything else is just a distraction from the only thing that matters.”

    Three of the four questions you mention in the Piketty discussion are irrelevant; they just distract you from the only question that matters, which is “Can this model make accurate predictions?” The fourth question at least touches on that, since you do have to also critically evaluate the data, not just the ability of a model to make predictions that match the data.

    As for “heuristics” and “intuitions” about when science is trustworthy and when it isn’t, again, those might be helpful in cases where you have to critically evaluate both the data and the models, but at the end of the day, either the models make accurate predictions or they don’t. If you can’t trust the data well enough to make that comparison, that is functionally equivalent to a “no”. It might justify a concerted effort to improve the quality of the data, but it doesn’t justify going with the model simply because it looks nice and you can’t trust the data enough to test it.
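The “can this model make accurate predictions?” standard can be made operational with a proper scoring rule. A minimal sketch in Python using the Brier score, the rule Tetlock’s forecasting work popularized (the probabilities and outcomes below are invented for illustration, not taken from any study):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and what
    actually happened (outcomes coded 1 = happened, 0 = didn't).
    Lower is better."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A "model" that assigns probabilities to yes/no questions,
# scored against observed outcomes.
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes = [1, 1, 0, 0]
print(brier_score(forecasts, outcomes))  # 0.125
```

A model that always answers 50% scores exactly 0.25, so beating that floor is the minimum bar for claiming any predictive skill at all.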

    Or, as I said in my comment in the previous thread:

    “Social scientists often make the argument that, since they can’t run controlled experiments, they shouldn’t be held to the standard I have just described, at least not the same way that, for example, physics is. But that’s not an argument for loosening the standards of science; it’s an argument for limiting how much we are willing to allow that the social sciences are sciences. If your ability to make predictions that match reality is limited, then it’s limited, and that’s just an unfortunate fact about your discipline. Talking about “consensus” does not change that.”

    This doesn’t just apply to the social sciences; it’s a general inconvenient fact about many disciplines that are called sciences.

    • LadyJane says:

      Social scientists often make the argument that, since they can’t run controlled experiments, they shouldn’t be held to the standard I have just described, at least not the same way that, for example, physics is. But that’s not an argument for loosening the standards of science; it’s an argument for limiting how much we are willing to allow that the social sciences are sciences. If your ability to make predictions that match reality is limited, then it’s limited, and that’s just an unfortunate fact about your discipline. Talking about “consensus” does not change that.

      And this mentality is why we have alt-rightists who believe that ethno-nationalism is a viable option in the 21st century, populists who think Trump can just turn back the clock on automation and outsourcing and international trade, leftists who seriously believe that communism will work this time, and dictators like Mugabe and Chavez who’ve run their countries’ GDPs and life expectancies and standards of living into the ground because they don’t understand basic economic principles.

      The social sciences may be ‘softer’ than the physical sciences, in the sense that they deal with far more complicated systems. Thus, they can’t make predictions with the same degree of pinpoint accuracy as physics or chemistry, and there’s a lot more disagreement between experts over precisely how some of these complicated systems function. (Climate science is similarly controversial for the same reason; it’s hard for experts to reach a consensus or make exact predictions because the system being studied is so complicated, with so many different variables involved that they can’t all be accounted for and largely need to be viewed in terms of simplified low-resolution abstractions.)

      Nonetheless, the social sciences have empirically-verifiable laws that can’t be broken. To use one of the most basic examples I can think of, any government that continually prints more money at a rapid rate will become subject to hyperinflation. That is an absolute certainty, just like knocking a cup off a table will cause it to fall on the floor unless something intercepts it. There’s no way around it, as the people of Zimbabwe learned the hard way.
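The textbook mechanism behind this claim is the equation of exchange (this formalism is the standard quantity-theory identity, not anything from the comment itself):

```latex
% Equation of exchange (quantity theory of money)
M V = P Y
% M: money supply, V: velocity of circulation,
% P: price level, Y: real output
```

If $M$ grows far faster than real output $Y$ while velocity $V$ holds roughly steady, the price level $P$ must rise. Note that the “while $V$ holds steady” clause is itself an empirical assumption; a collapse in velocity can offset money-supply growth.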

      • albatross11 says:

        Lady Jane:

        One thing that undermines your point is that in a lot of experimental psychology/social psychology, it seems like a lot of what was being taught a decade ago as ironclad rules of human behavior has turned out to be just wrong. (Some of that was actual fraud, but I think most of it was honest error plus bad incentives.) This ought to make us pretty careful about accepting future claims from the same field.

        I definitely agree there are genuine insights from the social sciences. But I also think they’re a lot harder to identify than we’d like them to be. Partly that’s because experimental verification is hard; partly it’s because it’s very hard to know whether a given set of results is really universal to all humankind, or just true of a specific subset of people at a particular time and place.

        And anyone who finds himself in a position to dictate what is true or false about society also has a lot of opportunity to put a thumb on the scales in the direction of his desired political policies. When it’s very difficult to point to an empirical verification of the statements of, say, an educational psychologist or a criminologist or a macroeconomist, it’s also very hard to know whether he’s passing along hard-won knowledge about a difficult-to-study subject, or whether he’s simply pushing his political agenda by means of making claims about universal laws. *He* may not even know which he is doing.

      • Ilya Shpitser says:

        I agree. Social science just needs good methods, like all other empirical disciplines. These methods exist; social scientists just need to learn them better and/or partner with better methodologists/statisticians.

        • quanta413 says:

          What about their funding schemes? From talking to people in at least some social sciences, it seems like they spread funding too thin per scientist and they’d be better off with the same amount of money spread across fewer scientists. Or with more scientists collaborating on fewer, larger experiments.

          I’m presuming a field where the method of measurement itself is valid, though, so that increasing the scale of experiments also makes it possible to apply statistical techniques that can make good use of domain knowledge and a serious model, instead of just null-hypothesis testing. I guess a bigger issue might be cases where the measurement itself has little validity in the real world.

          • Ilya Shpitser says:

            I don’t think funding is the issue. I think the issue is crooked incentives that heavily favor hype and positive findings, not being aware of good methods and/or not collaborating with methodologists enough, and generally not being sufficiently careful.

            (I am not a social scientist, this is an outsider critique, and may be invalid to an extent. I deal with social scientists sometimes who are thoughtful people, and very generous with their time.)

      • PeterDonis says:

        this mentality is why we have alt-rightists who believe that ethno-nationalism is a viable option in the 21st century, populists who think Trump can just turn back the clock on automation and outsourcing and international trade, leftists who seriously believe that communism will work this time, and dictators like Mugabe and Chavez who’ve run their countries’ GDPs and life expectancies and standards of living into the ground because they don’t understand basic economic principles.

        No, we have all those things because so many people *refuse* to accept the mentality I described–because they refuse to accept that, when you’ve tried something and it doesn’t work (where “doesn’t work” is really a gentle euphemism for “failed catastrophically, killing huge numbers of people and sending huge numbers more into abject poverty”), you should stop doing it and try something else. You shouldn’t keep doing it because some nice-sounding theory or some plausible-sounding social science model says it should work.

        the social sciences have empirically-verifiable laws that can’t be broken. To use one of the most basic examples I can think of, any government that continually prints more money at a rapid rate will become subject to hyperinflation. That is an absolute certainty

        Sorry, this is already falsified by the US Federal Reserve, which printed a lot of money between 2008 and now, while inflation remained almost zero.

        • Mark Atwood says:

          while inflation remained almost zero

          As long as you are not buying housing, medical care, education, skilled labor, Euros, gold, or hand farmed organic produce with those dollars.

          I love how the “inflation rate” is based on a “basket” of items that carefully exclude all the things that actually do get more expensive that people notice.

          • PeterDonis says:

            As long as you are not buying housing, medical care, education, skilled labor, Euros, gold, or hand farmed organic produce with those dollars.

            The QE programs of the US Federal Reserve (i.e., printing large amounts of money) ran from late 2008 to late 2014.

            Average US housing prices were lower when QE ended than when it started.

            Average US health care prices, so far as I can make out, were roughly flat during the QE time period.

            US education prices have been rising for decades, not because of the Fed printing money, but because of the decades long expansion of US student loans, which continued unchanged after 2008.

            I’m not sure what you mean by “skilled labor”, but from what I can make out, US wages have been mostly flat.

            I also can’t find any useful figures for “hand farmed organic produce”. My personal experience is that organic food in the US, at least in reasonably urban or suburban areas, has been growing cheaper and more widely available since 2008. I can shop for organic food now at my local Wal-Mart; that would have been unthinkable in 2008.

            Euros and gold are not good examples since nobody actually consumes them, they are only traded.

        • LadyJane says:

          when you’ve tried something, and it doesn’t work (where “doesn’t work” is really a gentle euphemism for “failed catastrophically killing huge numbers of people and sending huge numbers more into abject poverty”), you should stop doing it and try something else

          Congratulations, you just applied the scientific method to social systems. That is exactly what social science is. If you support the belief that we can make empirical judgments about social systems based on observable results, even if they’re as broad as “fascism and communism killed tens of millions of people, we shouldn’t try them again,” then you support the social sciences.

          You’re making the exact mistake that Scott points out at the beginning of “Yes, We Have Noticed The Skulls,” except you’re applying it to all of the social sciences instead of just economics.

          • PeterDonis says:

            Congratulations, you just applied the scientific method to social systems.

            No, what you’re describing isn’t “the scientific method”. It’s only part of it–the part about collecting past data in order to try and build a model. See below.

            That is exactly what social science is.

            To the extent that social science actually documents “this thing that people tried didn’t work” (and I suspect that’s a smaller extent than you do), yes, I agree. But that’s not prediction; that’s data collection prior to model building, as I said above. “Science” means making empirical predictions about things people *haven’t* tried; if people have already tried it and it didn’t work, saying “if you try that, it won’t work” isn’t a prediction.

          • PeterDonis says:

            You’re making the exact mistake that Scott points out at the beginning of “Yes, We Have Noticed The Skulls,”

            No, I’m not. What Scott is describing there has nothing whatever to do with whether the field actually makes accurate predictions. It only has to do with whether the critics are accurately describing the workings of the field–what the field’s proponents say and do. I completely agree that many critics of social sciences have a highly inaccurate understanding of what social scientists say and do. But I’m not criticizing what social sciences say and do; I’m simply pointing out that, whatever they say and do, they can’t make accurate predictions. “Noticing the skulls” does not magically give you the ability to make accurate predictions.

      • PeterDonis says:

        Climate science is similarly controversial for the same reason; it’s hard for experts to reach a consensus or make exact predictions because the system being studied is so complicated, with so many different variables involved that they can’t all be accounted for and largely need to be viewed in terms of simplified low-resolution abstractions.

        All of which is just a fancy-sounding way of saying “no, these models cannot make accurate predictions”. Which, as I said, is simply an unfortunate fact about such disciplines, one that no amount of trumpeting about “consensus” will make go away.

  12. Worley says:

    As David Sloan Wilson noted, “The human mind is an organ for survival and reproduction, not an organ for perceiving the truth; and many deeply held beliefs simply do not correspond to the structure of the real world.” OTOH, the brain is a fine organ for saying, not the truth, but whatever will enhance one’s survival and reproduction. So there are going to be real limits to the success of the rationalist enterprise.

  13. BPC says:

    See, it’s funny you bring this up, because the SlateStarCodex comments section and subreddit are consistently considered by people outside your sphere as the worst thing about SlateStarCodex. As RationalWiki puts it, “As usual, you can make anything worse by adding Reddit. /r/slatestarcodex is an unofficial fan forum for the blog. […] (literally advocating the Fourteen Words will get you 40+ upvotes[33] and admiring replies).” My own experience is that said comments section has a massive in-group bias, considers feminism and the left the outgroup, and is full of racist, misogynistic, and all too often not particularly bright people. If you’re consistently updating your opinions based on those people, it does not bode well for the blog. :/

    To put it in perhaps more SSC-friendly terms: “nice libertarian paradise, shame about the witches though.”

    • The Nybbler says:

      consistently considered by people outside your sphere

      You mean like the unusually-honestly-named SneerClub? Who cares about the opinions of people who are literally one’s sworn enemies?

      • Tatterdemalion says:

        I can think of two obvious answers to that question.

        The boring one is “people whose sworn enemies have the power to hurt them” – you can only afford to ignore someone’s opinion if you’re safe from them.

        The more interesting one is “rational people, a lot of the time”: “hates me personally” does not imply “does not have opinions on other subjects I can usefully learn from”.

        • albatross11 says:

          “Hates me personally” is a pretty good indicator that engaging personally with this fellow isn’t going to work out so well for me, however. I may be able to learn some factual things from someone who hates me–indeed, this happens in every war. But it probably won’t be from a calm conversation under civil rules of discussion.

        • Baeraad says:

          I can think of a third – people are frequently very perceptive of the flaws of the things they hate. You can get a pretty good idea of a thing’s failings by listening to the foaming-at-the-mouth rantings of people who hate it. It’s when people talk about things they like that they tend to depart this dimension for a strange and wonderful one of eternal rainbows and sunshine.

          • Paul Zrimsek says:

            Indeed, they can see the flaws whether they’re there or not. That’s some solid-gold perception.

          • Baeraad says:

            Indeed, they can see the flaws whether they’re there or not. That’s some solid-gold perception.

            Oh, the flaws are always there, unless the critic is literally insane. They may be the exact same flaws that every single other person on Earth has, something that the critics artfully neglect to notice (as the brainy types here might put it, they make isolated demands for rigour). But why would anyone bother to make up fake flaws from whole cloth, when everyone and everything has more flaws than you could list if you spent a lifetime talking about it?

    • theredsheep says:

      Given that the core of the blog (as Scott sees it) is supposed to be about rationalist stuff like paperclip AI risk, it probably doesn’t matter if the commenters believe in some weird or nasty things about other subjects. Fill in your own historical example here–my favorite is Isaac Newton’s obsession with alchemy, but there are others. You can find right-wing beliefs fairly easily here, yes, but this is probably because the readership skews pretty heavily white and male, which in turn is because the larger community it draws from (things like software engineers, AFAICT) skews pretty heavily white and male.

      Also, this blog has some renown, but is not exactly world-famous. The group of people who have heard of it, have an opinion of it, but are not part of it, probably represents a somewhat biased sample. If you stumbled across SSC and liked it, there’s a fair chance you decided to comment here, or on the Reddit, or wherever. If you stumbled across and said “I don’t get it” or “this dude needs a damn editor, I’m not reading that textwall,” you probably went off to look at something more interesting to you and forgot SSC existed. If, on the other hand, something Scott or his commenters said offended you terribly, there are plenty of places on the internet where people congregate to complain, and they tend to form communities, which grow from their hatred of other communities, etc.

      Exhibit A being RationalWiki, which in my experience tends to be edited by a bunch of snippy, self-satisfied assholes who collated their badmouthing into Wiki form for efficiency’s sake, then called it “rational” to remind themselves how clever they are. Any given RW entry tends to be “list of things wrong with this subject.” NB I’m not a rationalist, and don’t believe in any of their major bugbears like white supremacy or vaccines causing autism.

      (Finally, what group doesn’t have a massive in-group bias?)

      • BBA says:

        *sigh* The 2000s were a different time. “Rational” just meant anti-Bush.

      • HeelBearCub says:

        If you stumbled across SSC and liked it, there’s a fair chance you decided to comment here, or on the Reddit, or wherever.

        Scott has, at least in the past, hung his proverbial hat on this being decidedly untrue.

        IOW, he has said he wants to convince a liberal (left of center) audience of things, and believes himself to be doing so because of surveys that show that the vast bulk of his readership is liberal and does not comment.

        • Aapje says:

          @HeelBearCub

          People are also probably just very atypical here and defy easy categorization.

          One of the most anti-gay people here would (or did) vote for gay marriage in his state.

          • Conrad Honcho says:

            I’m not “anti-gay.” My complaint is that the rainbow-flag waving has approached “clapping for Stalin” levels of absurdity: I have personally argued for gay marriage, voted for gay marriage, and photographed two gay weddings for hire, but because I still think homosexuality leads to poor life outcomes and therefore do not want my children exposed to uncritical portrayals of homosexuality in mass media, I am called “anti-gay.” I am not anti-gay; I am insufficiently pro-homosexual. But calling that anti-gay is like saying an atheist who doesn’t attend Mass every day and twice on Sundays is “anti-Catholic.”

          • HeelBearCub says:

            If I didn’t want my kids (merely) exposed to uncritical portrayals of Catholics in mass media, I’d say it would be fair to describe me as anti-Catholic, despite being willing to accept them as customers to my business.

          • Conrad Honcho says:

            I’d say that’s more like “anti-pro-Catholicism.” Not wanting to watch (or have your kids watch) someone else’s propaganda is not the same as being against that thing. If you’re not particularly Catholic, or even think Catholicism or organized religion in general are not good ideas, how excited are you about your kid watching TV shows about how all the Catholics are smart, wise, and kind, and anyone who is against the Catholic characters is a moral monster driven by irrational hatred, fear, and stupidity alone? “I’d rather my kid not watch this Catholic propaganda.” “But y u hate Catholics tho?”

            Not everyone who objects to the pledge of allegiance in schools hates America (I think, anyway).

          • HeelBearCub says:

            someone else’s propaganda

            This seems like an odd way to describe merely uncritical portrayals. I suspect you have either changed your description of what you object to in order to make it seem more acceptable, are applying different standards for what counts as “uncritical” depending on the subject matter, or have a very non-standard definition of propaganda.

          • theredsheep says:

            I don’t really bother to read all the coverage of gay stuff anymore, but I do have to note that there is one hell of a lot of it. Like, seriously, before I stopped following CNN et al, there would be a human-interest story on LGBTetc stuff about once a day. Given that such people account for less than 5% of the population, and most of the pieces were fairly insipid bits about how hard it is to be the only transgendered furry postal worker in a small town in Iowa (or something similar), it’s hard to read it as anything less than a PR drive.

            If the NYT ran a month of positive, front-page stories about the lives of Hasidic Jews every single day, you’d deem it reasonable to assume that they were trying to make the public more sympathetic to Hasidic Jews, even if the stories weren’t dishonest per se.

          • disposablecat says:

            Is it bad that I would want to read a human interest story on the only trans furry postal worker in Bumfuck, Iowa?

          • HeelBearCub says:

            @theredsheep:
            It’s called the “news” for a reason.

            Media coverage is always skewed toward either threat or novelty. LGBTQ issues are still relatively novel, in that the integration of unapologetic gay people into the mainstream is still very new.

          • Conrad Honcho says:

            I’m just saying it’s wrong to call me “anti-gay” given that I’m pro-gay or gay-neutral in my personal interactions and politics. In addition to the other opinions I’ve expressed and actions I’ve taken with regard to gay marriage, which I’ve already mentioned, I also:

            1) Have gay friends of the level of “dine together, have had them over to my house for parties and have been to their house for parties.”

            2) Am very grateful to the gay man who gave me a job when I very much needed it and was my project manager.

            3) Am friends with my hair stylist of 15 years, who happens to be a gay Republican who voted for Trump.

            And yet, because my pro-homosexuality has a limit (will not go wave their flags at their parades in front of my kids or watch their propaganda shoehorned into pop culture TV shows), I am “anti-gay.” I think this should be ludicrous, and yet here we are.

            Protip: never let them see you stop waving that flag with verve and vigor or you’re an anti-gay bigot, too.

          • HeelBearCub says:

            @Conrad Honcho:
            You do you, but, you sound like you have unresolved cognitive dissonance.

            Imagine:

            I’m not “anti-Republican”, I have friends who are Republican, and former bosses who are Republican, and several of the people who do work on my cars and my house are Republican. See I’m pro-Republican.

            But I won’t go to Republican party events, I wouldn’t vote for one, I don’t think they should be in government, and I don’t think my child should be able to see any media that paints Republicans in a positive light.

            But don’t tell me I’m not pro-Republican.

          • theredsheep says:

            The coverage is decidedly skewed. You will hardly ever, for example, see a mainstream outlet talk frankly about gay male promiscuity, or “monogamish” partnerships where they live together but both are free to stray. If you go to a place that doesn’t have to worry about freaking out us mundane heteros (e.g. a Dan Savage column), you will see them freely admit it, even advocate it. On CNN or BBC you’ll see devoted, faithful, button-down bourgeois couples every day. Because it’s not about presenting objective truth, but shaping public opinion.

          • Conrad Honcho says:

            and I don’t think my child should be able to see any media that paints Republicans in a positive light.

            That’s not right. It’s not “won’t let them see anything that paints them in a positive light” it’s more like what theredsheep said. It’s the “ONLY in a positive light and those not on board with the gay lifestyle only in a negative light” thing that’s the problem. You do not see the promiscuity, faux monogamy, disease, the drug culture, depression, suicide, etc. You do not see those stories on the mainstream news, you do not see those narratives written into Very Special Episodes of “Modern Family” or “Glee.” It’s only smart, fun, happy healthy gays versus vile irrational hate monsters.

            You can object to the TV only presenting pro-Republican views without being anti-Republican. Hell I am pro-Republican and I don’t want to watch things that only present pro-Republican messages because I don’t want to be self-blind.

            What are the mutually exclusive things I believe to be true that put me into cognitive dissonance? I believe I’m not anti-gay (and I believe that bears out by my pro-gay political stances and personal relationships), but I also don’t want to watch or expose my children to super pro-gay messaging? I don’t think those are mutually exclusive. I just have a limit to my pro-gayness. “Insufficiently pro-homosexual.”

            I keep thinking of the 50 Stalins guy from the reactionary steelmanning. I’m totally fine with Stalin. Maybe even two Stalins. I could be convinced to go as high as three Stalins. But I think the people who are screaming for 50 Stalins have gone off the deep end, I don’t want to hear it or have them influence my kids, and I don’t think being satisfied with fewer than 50 Stalins makes me an anti-Stalinist. And to the 50 Stalinists I’m just saying, watch out, because somebody’s going to come out with the 100 Stalins platform and it’s off to gulag for you.

          • disposablecat says:

            100 Stalins platform

            Some right-leaning gay men I know would say this has already happened – and that the platform in question is the recent ascendance of the T in LGBT.

            I’m of two minds. One, I of all people have no right to tell others not to be who they are. But two, I sort of feel like a movement that was a cohesive whole of “alternative sexualities don’t make you unworthy to participate in polite society” has been co-opted and redirected by something that is… not that, and that the zeal of our new ideological overlords is dragging the rest of us backwards in the eyes of society.

          • theredsheep says:

            T is radically different from LGB, because you can’t keep it private; it’s a question of public identity, not something discreetly hidden in the bedroom. It was always going to be at least a little bit of an albatross for your movement.

          • disposablecat says:

            it’s a question of public identity, not something discreetly hidden in the bedroom

            I mean, yeah, as someone who is in a committed same-sex monogamous relationship where we work at the same company and are deliberately not out to our coworkers (at least until we eventually figure out the whole kids thing and it becomes stupefyingly obvious to everyone), that’s part of it.

            The other weird thing, though, is that while the forefront of the T movement was the really obvious “I AM THE OTHER GENDER NOW WATCH ME DO INVASIVE MEDICAL THINGS TO TRY TO PROVE IT”, I didn’t much have a problem with that – I got along well with the various FTMs I knew in college, we have a couple (well-passing) MTFs at work and I’ve got no problem there either.

            But then it moved past that. Biological sex started being first decoupled from gender, then denied entirely, to the point where “there are two genders” is now as provocative a statement as “I identify as an Apache helicopter”. Males in every meaningful sense of the word started claiming to be women with zero alteration to hormones or body, then competing in women’s sporting events and dominating them. Gays and lesbians began to be called bigots for not being sexually attracted to men and women with the opposite genitalia, which is a stark contrast to previously when, you know, everyone recognized that *gay men like dicks, and probably aren’t interested in fucking people without dicks*.

            And pretty soon we ended up in $current_year where essentially the public leftist vision of Mainstream Trans People is approximately equivalent to if in 1998 the public leftist vision of Gay Rights had been the fucking Folsom Street Fair, or as I once heard it described “the limp wristed S&M rainbow leather apocalypse”, instead of Will and Grace – and I’m being dragged down into the abyss with them, as are all the reasonable, normal trans people that I know, whether they realize it or not.

          • Conrad Honcho says:

            Gays and lesbians began to be called bigots for not being sexually attracted to men and women with the opposite genitalia, which is a stark contrast to previously when, you know, everyone recognized that *gay men like dicks, and probably aren’t interested in fucking people without dicks*.

            I’ve heard of straight men being called out for not wanting to date transwomen. After all, transwomen are women. And there was the female porn star last year who killed herself after being harassed on social media for refusing to do a scene with a gay man. You would think “woman (or person for that matter) has absolute authority over who enters her body and can refuse anyone for any reason at any time” would be really high up there on the terminal values scale, but apparently not. That’s probably the 75 Stalins platform. The 100 Stalins platform will be “males can no longer identify as ‘straight’ on $PopularDatingApp and hide their profiles from other men, because only bigots would not even be open to the idea of a same-sex relationship.” I’m sure it’s out there, it’s just not mainstream yet. I thought “white privilege” was looney stuff you only heard about on tumblr and then in the 2016 primaries there’s Bernie and Hillary, the top contenders for the Democratic party fielding white privilege questions during the debates.

          • theredsheep says:

            By “discreetly hidden in the bedroom,” I didn’t just mean staying closeted; I mean that, for the most part, gay people are publicly indistinguishable from straight ones. One of the pharmacists I work with is really, really, really obviously gay (just in terms of mannerisms), and all it means is that he’s a touch flamboyant. Not hard to deal with, even if I hated gay people for whatever reason. I have no idea about what kind of sex life he has, and I don’t have to modify my behavior all that much. The most I’d be expected to do is act polite to any SO he brings to company events, and SOs at company events are always mildly intrusive nonentities. Not that hard.

            Trans people are publicly presenting as something we think they’re not, and gender is so thoroughly wrapped up in how we relate to one another that it was bound to cause friction. So newspapers can try the same spin they do quite successfully with gay people, and it, uh, doesn’t work very well outside of true-believer-land. Like, they run a story about a happily monogamous couple where the one who bore the child has a full beard now and the other partner is breastfeeding it, and all the people who’d be just fine with a photogenic gay couple in polo shirts go “WHOOAAAA THERE.”

            Which I guess maybe is what you were saying, but I don’t distinguish it (EDIT: the stuff you’re talking about, not just quixotic newspaper articles) all that sharply from extreme-left nuttery in general. I think it was last year that I stumbled across an article claiming it was misogynist to refuse to have sex with a woman during her period. The sexual revolution is eating its own children, and the inclusion of trans issues is only accelerating an existing trend.

          • albatross11 says:

            A good starting point for thinking clearly about this stuff:

            You are not obliged to feel sexual or romantic attraction toward any particular person. Nobody can legitimately demand that of you. Like what you like, not what someone angrily demands that you like.

          • John Schilling says:

            By “discreetly hidden in the bedroom,” I didn’t just mean staying closeted; I mean that, for the most part, gay people are publicly indistinguishable from straight ones.

            Is it not also the case, or at least the ideal, that trans people are publicly indistinguishable from cis people of the same professed gender?

A trans-hetero woman, presuming she transitions e.g. during a gap year between high school and college and does so completely and consistently, should be indistinguishable from a cis-hetero woman to everyone but her immediate family, her MD, and maybe her actual lovers.

An L, G, or B woman will be clearly distinguishable from a cis-hetero woman to anyone in a position to notice who she is dating. And while that can be hidden, it seems like that would require more effort than simply not mentioning that you used to have a penis five years ago.

          • Matt M says:

            You are not obliged to feel sexual or romantic attraction toward any particular person. Nobody can legitimately demand that of you.

            Your statement is not accurate though, in terms of what the culture demands.

            “I’m not attracted to black people” is considered a racist and bigoted statement, making one worthy of heaps of scorn and hatred.

            Saying “I’m not attracted to other men” isn’t quite at that level of social unacceptability yet, but I’m with Conrad – it probably will be soon.

          • J Mann says:

            @albatross11 – I’m going to challenge you a little. If your attraction principles are causing harm and you can edit them by introspection or brain hacking or whatever, then maybe you have an ethical obligation to consider it.

            – The easy case is that you’re attracted to children or some kind of emotionally vulnerable personality where you’re causing harm. A medium case would be that you’re a sadist, and on reflection, the consensual relationships you engage in are still leaving people worse off for being involved with you. (For purposes of the hypo, if you think there are some benevolent sadists, you’re the other kind.)

            – Moving on, let’s hypothesize that your attraction pattern is causing harm in the aggregate. Let’s say that Purpletonians have been the subject of historic discrimination, and as a result, 95% of the population finds ethnically identifiable Purpletonians undateable. That’s a pretty crummy result.

            In this hypo, on a balance of harms basis, we might not want to shame people into trying to adjust their tastes to include Purpletonians, but I’d argue that individually, people should probably feel a moral imperative to at least try.

          • Randy M says:

John, I don’t think you are considering the full spectrum of people who take the “transgender” label; there are those who advocate for treatment as women/men well in advance, or even in lieu, of any alterations, let alone the ideal surgery that leaves them indistinguishable from cis people.

Of course, I don’t know the breakdown or how much coverage of outliers skews perception of the relatively small group of trans people.

J Mann, let me point out that albatross didn’t say that who you *are* attracted to can’t be a problem, only who you aren’t. So that answers your first objection to his post, which I agree with. To your second, we don’t need to hypothesize anything–we have a group that is seen as undateable–ugly people. It’s about as easily identifiable a demographic as a racial group, with some debate about the edges but not the central cases. Our cultural consensus seems to be that whether one has an obligation to look past outer ugly to find inner beauty is gender dependent. I’ve seen arguments based on, for example, average ratings on dating sites that suggest men tend to have lower standards, so this is exactly what we’d expect, I suppose.

            Regardless, whether preferences should be altered depends on the cost to doing so, but even without any side effects, I’m not consequentialist here; I think one has obligation to avoid direct harms but there is no fault in not proactively sacrificing in order to benefit a demographic, especially when the net individual benefit is zero. That is, if I hack myself to fall in love with a purpletonian, that is transferring the benefits of my affection from a non-person of purple to a person of purple.

            There is a pretty good argument for monogamy in there, though. If every person has just one lover (barring distortions in the gender balance or differences in satisfaction in being alone by gender) there is someone for nearly everyone.

          • Conrad Honcho says:

            maybe her actual lovers.

            I don’t think it’s a “maybe.” As I understand it the created vagina is very easily distinguishable from a natural vagina.

          • theredsheep says:

            Re: enforced attraction, I don’t think this whole trend can progress much further. Everybody loved gay rights in part because it was basically cost-free; it was a form of progress that required a minor rewrite of a few rules, and for some devout religious people to get their feelings hurt. If two men marry each other, it doesn’t have much bearing on the lives of their neighbors. That was a big selling point; anybody else remember those “it will not affect you in any way” memes on FB circa 2014? Regardless of the rights or wrongs of it, it was the perfect cause for the slacktivist era. Easy fix, and everybody gets to feel good about themselves.

            If people are allowed to tell you who you are/are not allowed to date, or even feel attracted to, it is no longer anything close to cost-free. That is about as personal as an imposition can possibly get. People who were fine with Fred and Steve wearing rings, or even with forcing a few isolated bakers to provide them with a cake, are going to be a lot more leery of something that puts a burden on them, personally, to feign attraction to Fred or Steve. And this would affect, basically, every post-pubescent person in the country, with the possible exception of asexuals and sworn celibates.

          • John Schilling says:

            I don’t think you are considering the full spectrum of people who take the “transgender” label; there are those who advocate for treatment as women/men well in advance, or even in lieu, of any alterations,

            The claim under debate was that you can’t keep transgenderism private. For all forms of sexual behavior or desire, there will be people who choose to trumpet it as loudly as possible, and it’s annoyingly unreasonable for any of them to also say, “and none of you all are allowed to be the least bit offended by that!”.

            But for the ones who want to keep it private, I do believe that the Ts have the edge over the LGBs.

          • LadyJane says:

            Gays and lesbians began to be called bigots for not being sexually attracted to men and women with the opposite genitalia, which is a stark contrast to previously when, you know, everyone recognized that *gay men like dicks, and probably aren’t interested in fucking people without dicks*.

            I’ve heard of straight men being called out for not wanting to date transwomen. After all, transwomen are women.

            Don’t make sweeping assumptions based on what you’ve seen a few internet crackpots say. I’m trans, I know plenty of other trans people, and I can say with clear certainty that this is not the mainstream view among the trans community at all. If someone’s not attracted to me because I’m trans, then I’d rather not be with them anyway. There are enough people who like me for who I am, I don’t want or need anyone feigning attraction out of misplaced guilt.

            I’ve yet to meet a single trans person who believes that anyone should feel obligated to be attracted to trans people. There’s a significant percentage of straight men and lesbians who just can’t feel attraction towards anyone with a dick, just like there’s a significant percentage of straight women and gay men who just can’t feel attraction towards anyone with a vagina, and most of us are well aware of that fact. In fact, I’ve known some trans lesbians who themselves weren’t attracted to pre-op trans women.

            At most, I’ve seen some trans people argue that it’s transphobic for bisexuals to refuse to date trans people, or that there’s no real reason for a straight man/lesbian to be opposed to dating a post-op trans woman (unless it’s a straight man who really cares about having children and wouldn’t date an infertile cis woman either). Personally, I don’t agree with either of those arguments – I think there are a lot of subtle nuances to human attraction that we really don’t understand yet – but they’re still a far cry from some of the ridiculous claims being made here.

            The 100 Stalins platform will be “males can no longer identify as ‘straight’ on $PopularDatingApp and hide their profiles from other men, because only bigots would not even be open to the idea of a same-sex relationship.”

            I have literally never seen anyone make that claim, not even internet crackpots.

          • theredsheep says:

JS: I imagine that, with enough surgery and chemicals, you could make a pretty fair replica, but how many trans people go that far? My experience (from photos I’ve seen) is that most trans women can go far enough to look like, uh, somewhat mannish women. Like, I wouldn’t guess they were trans, but only because trans people are rare; I’d take them for really unlucky and unattractive women. I’m in no position to evaluate the pulchritude of trans men, though the ones I’ve seen looked kind of doughy, like Zach Galifianakis.

            Now, in terms of actual personal experience, I once briefly encountered a gay man who was planning to transition, and as a substitute teacher I met a teen who’d had a girl’s name scratched out and replaced with a boy’s. Said teen had not transitioned physically and was quite obviously female under those boy clothes. So my experience is admittedly pretty worthless. Dunno.

          • Conrad Honcho says:

            @LJ:

            I have literally never seen anyone make that claim, not even internet crackpots.

            I know. I said that’s what the 100 Stalins position will be in the future. You say you’ve already heard the “transphobic for a straight man to rule out transwomen” argument. Next is “homophobic to rule out same-sex relationships.” After all, It’s Just a Bro-job. No Homo.

          • Randy M says:

            The claim under debate was that you can’t keep transgenderism private.

That was what theredsheep led with, granted. Though, he went on to mention the couple where “the one who bore the child has a full beard now and the other partner is breastfeeding it” so I think he had more in mind the cases where there is a flagrant mismatch between claimed identity and appearance. The point I saw being made was that the gay and transgender movements are fighting for different things; gay rights being more passive “just let us do our own thing and mind your own business” and trans being more demanding, “you must positively affirm my essential equivalence”. I don’t know if that’s necessarily the most salient difference or even accurate but it seems like there is a point there.

            For all forms of sexual behavior or desire, there will be people who choose to trumpet it as loudly as possible

            Do you think that a person with a male body wanting to be known as a female without any body modification is doing it for attention? That seems like a socially discouraged position to hold.

          • LadyJane says:

            @theredsheep: There’s a bit of selection bias going on, since your image of trans people is based solely on the people you’ve known were trans. Maybe you’ve encountered other trans people who’ve looked indistinguishable from cis people of their gender, but since you didn’t realize they were trans, they wouldn’t have affected the way you see trans people in general.

            Just going by my personal experience, most of the people I encounter in passing don’t seem to realize I’m trans. (Obviously I can’t know for sure, but people always address me using female pronouns, straight dudes constantly hit on me and catcall me, and I don’t get the weird glares that I used to get all the time when I first started transitioning.) Judge for yourself, if you’d like: https://i.imgur.com/rvkRiIc.jpg

            That said, people who deal with me on a day-to-day basis usually figure it out soon enough, mostly due to my voice, posture, and mannerisms (all of which I’m working on improving). Likewise, people who are familiar with trans women are a lot more likely to suspect, since they know what tells to look for.

          • LadyJane says:

            @Conrad Honcho: Forgive me if I don’t put much stock in your prediction then. I’m also not entirely sure where you’re going with the “100 Stalins” metaphor. Stalin’s successors were all more moderate than him (even the reactionary Brezhnev was more like 0.75 Stalins, compared to Khrushchev’s 0.50 Stalins and Gorbachev’s 0.25 Stalins), as were Mao’s, so the idea that revolutionary ideologies keep getting more extreme ad infinitum isn’t borne out by history.

            @Randy M:

            Do you think that a person with a male body wanting to be known as a female without any body modification is doing it for attention?

            Assuming you’re talking about people who don’t have any intention of medically transitioning (as opposed to people who want to medically transition and simply haven’t started yet), I think it’s an uncommon enough scenario that you can’t really make concrete judgments about it or draw conclusions from it.

          • HeelBearCub says:

            there’s a bit of selection bias going on, since your image of trans people is based solely on the people you’ve known were trans.

            [cough] Shawn Stinson [/cough]

          • Conrad Honcho says:

            @LJ

            I’m also not entirely sure where you’re going with the “100 Stalins” metaphor.

            It’s just a thought experiment from Scott’s steelmanning of reactionary philosophy before he posted his anti-reactionary FAQ. It doesn’t have anything to do with the actual history of the USSR.

            Also, I don’t think an awful lot of the culture war around LGBT stuff has to do with LGBT people. There are many gays who are not on board with the leftist gay politics, and an awful lot of trans individuals are just trying to live their lives and figure out how to do that given the conflict between their brains and their bodies and are far more concerned with those issues than whether or not governments or employers should be punishing people who don’t use the right pronouns. An awful lot of this is the doing of the political activists who have either cynically co-opted or honestly but overzealously seized on sexual minority issues to advance their other political agendas or political careers. It’s important to make the distinction between “LGBT politics” and LGBT people.

          • rlms says:

            RE: how easy it is for trans people to pass — see ThingOfThings’ post here.

            @disposablecat (some way upthread)

            Biological sex started being first decoupled from gender, then denied entirely, to the point where “there are two genders” is now as provocative a statement as “I identify as an Apache helicopter”. Males in every meaningful sense of the word started claiming to be women with zero alteration to hormones or body, then competing in women’s sporting events and dominating them.

            I’ve never seen “there are [only] two genders” used non-ironically/provocatively. For that matter, I don’t think Apache helicopter references are that provocative — AFAIK they target (widely ridiculed) otherkin rather than trans people.

            Citation very much needed for your last sentence.

          • albatross11 says:

            J Mann:

            I don’t agree. As best I can tell, I don’t have a lot of control over what I find appealing or attractive, but even if I did, I don’t think anyone else would have any right to tell me to change it. There are many things I might want that would be bad for me (Chocolate cake for dinner every night! Hookers and blow every night!). There are things I might want that are morally wrong to actually pursue (any woman that isn’t my wife). But none of those involve a moral obligation to stop wanting what I want. There are also situations where I might try to alter my desires if I thought it would work, but I doubt that would work for sexual desires, and it wouldn’t be anyone else’s business anyway.

            When it was gays making this argument, telling those around them that they weren’t interested in being told that they *should* find women attractive and *should* want to settle down with a nice girl and start a family, I found their argument persuasive.

            When it was women making this argument, telling everyone they didn’t *owe* romantic or sexual attraction or love or sex to men who were angry at them for not falling in love with them, once again, I found it persuasive.

            I feel the same way.

            The practical result of demanding that people change whom they’re attracted to is that people lie about whom they’re attracted to. We have a few thousand years of worked examples from gays to draw on here. The gay man who was socially forced into a marriage he didn’t want didn’t stop wanting to sleep with men, he just lied about it and had an unhappy marriage that probably screwed his wife over as much as it screwed him over.

            If you are not attracted to blacks, then the best way to proceed is to not date blacks. If you are not attracted to transwomen, then the best way to proceed is to not date transwomen. If you are not attracted to women who are taller than you, or redheads, or women with tattoos, or you’re not interested in a poly relationship, or you don’t want kids, or whatever, then the right way to proceed is to tailor your dating accordingly. Acting otherwise is just going to make things worse for you and the people you date even though you’re not really interested in them.

          • Bugmaster says:

            @albatross11:
            I completely agree with you (for a change). Forcing people to be attracted to someone they are not is basically the height of insanity.

            That said, I think I can steel-man the opposing side a little bit. Their claim is that the lack of attraction to X is, at least in part, due to social conditioning that instills anti-X stigma in non-X people’s minds. The best way to break this conditioning is to have more non-X people date X people, and to do so visibly; but this is an obvious catch-22, so they are calling upon adventurous non-X people to deeply examine their preferences and at least try dating an X to see if they like it. On the flip side, the fact that almost no one wishes to date X people causes Xs to feel isolated and alone, inflicting very real psychological damage, so the stakes are high.

            In our current scenario, X stands for “trans”; but if you substitute “geek” instead, you might experience a more sympathetic emotional response.

            Just to clarify, I still believe this argument falls flat because sexual preferences basically cannot be changed, no matter how good one’s intentions are; still, I could be wrong.

          • Matt M says:

            In our current scenario, X stands for “trans”; but if you substitute “geek” instead, you might experience a more sympathetic emotional response.

            More sympathetic from who?

            Here’s what they said about the last guy who suggested that maybe people should try, for the good of society, engaging in romantic relations with people previously considered undesirable.

          • Jiro says:

            Maybe you’ve encountered other trans people who’ve looked indistinguishable from cis people of their gender, but since you didn’t realize they were trans, they wouldn’t have affected the way you see trans people in general.

That problem can be solved by noting that people can pass to a greater or lesser degree and looking at the distribution. If you’ve seen more people who can’t pass at all than people who only fail to pass after fifteen minutes of close observation, you can conclude that the peak of the distribution is at the non-passing end of the scale, and that few people pass so well that you never realize.

          • disposablecat says:

            @rlms:

Regarding “only two genders” being reacted to with intense virulence and used as ammunition by both sides of the CW, by one side as a provocation and by the other to mock those using it seriously:

            https://nomadicecologist.wordpress.com/2017/04/24/no-science-does-not-say-there-are-only-two-genders/

            https://www.theodysseyonline.com/there-are-only-two-genders

            https://www.reddit.com/r/changemyview/comments/68zqi5/cmv_there_are_only_two_genders/

            As for attack helicopters used to mock “I identify as X”, which, yes, started with otherkin but has branched out into “the opposite gender / a nontraditional gender”, I invite you to peruse r/tumblrinaction or any Leftbook meme group (Fully Automated Luxury Gay Space Communism, New Urbanist Memes for Transit Oriented Teens, anything FB suggests when browsing either of those), where you will also find plenty of the “two genders” thing.

            To my last sentence:

            https://nypost.com/2018/02/25/transgender-boy-wins-girls-state-wrestling-title-for-second-time/

            https://usatodayhss.com/2017/connecticut-transgender-sprinter-andraya-yearwood-wins-two-state-titles-amidst-controversy

            https://www.outsports.com/2018/6/14/17458696/trans-athlete-connecticut-high-school-ban-petition

            https://www.ctpost.com/highschool/article/Jeff-Jacobs-No-easy-answers-when-it-comes-to-12967306.php

          • Toby Bartels says:

@disposablecat:

            Regarding transgender athletes, the last three of your four links are about the same two athletes in Connecticut, and the first link is about an athlete in Texas who is the exact opposite of what you said. So you have found two examples.

But they are two good examples. The line being promoted by the news stories about them is that winning isn’t important in high school sports, so nobody should care; as opposed to adult sports, where hormone requirements such as those of the Olympics are reasonable. (I’m not sure that I agree with that, but that’s the argument, which the last link made explicitly.) Since people also get upset when underage people take hormones, allowing them to compete without hormones until they’re of age might be the best compromise.

          • Conrad Honcho says:

            Since people also get upset when underage people take hormones, allowing them to compete without hormones until they’re of age might be the best compromise.

            How about instead of segregating sports by gender we segregate by sex (as a proxy for hormones, muscle mass, etc). Male or female, if you have a penis you’re competing against other people with penises, and male or female, if you have a vagina you’re competing against other people with vaginas.

          • disposablecat says:

            Male or female, if you have a penis you’re competing against other people with penises, and male or female, if you have a vagina you’re competing against other people with vaginas.

            I would correct this to “man or woman”, because otherwise it is tautological – barring rare intersex conditions about which the left is endlessly happy to split hairs with “not all x” arguments they wish proved biological sex wasn’t real, if you have a penis you are male. You can then become a woman, yet remain male – but this utterly invalidates the right-think that “trans women are women”, i.e. there is no meaningful difference between an MTF person and an AFAB person. Which is transparently absurd – pretty sure an MTF person’s doctor needs to know that they have a prostate once they turn 40, for example! And yet I’ve seen activists complaining about their PCPs wanting to know their birth sex as opposed to their identified gender…

            Anyway, with that edit I would agree – segregating high school sports by biological sex is the fairest way to do it, especially for the 98 percent of youth who are not trans. By the way, an argument that winning doesn’t matter at this level is facile – precollegiate athletic success can influence life outcomes in many ways, scholarships being the most obvious.

            It’s a shame that this solution utterly violates The Narrative and is thus Evil And Transphobic.

          • albatross11 says:

            Thinking more about this: there’s a moral argument that says “you should want more of X and less of Y” that, as far as I can see, never works. However, there’s also an informational “you might think X is unappealing at first, but if you give it a try, you might find reasons Y and Z why you like it.” As an encouragement to give something you don’t currently want a try, or to at least consider it, that makes plenty of sense. But as a moral demand, I don’t think it works at all.

            FWIW, I know an apparently very happy married couple where the wife is about two inches taller than the husband. This was apparently a major hurdle to their getting together romantically, as he didn’t want to date someone taller than him. It seems pretty clear that he found his now-wife’s other many appealing properties sufficiently appealing to overcome his issues with being shorter than his then-girlfriend/now-wife. This seems perfectly reasonable to me. But I can’t imagine this working out if it started with her or someone else making a moral demand that he start finding taller women attractive.

          • John Schilling says:

            if you have a vagina you’re competing against other people with vaginas.

            At the higher levels of competition and for most sports, the winners in this league will be almost entirely people with Y chromosomes who were born with a penis and had it reshaped into a quasi-vagina. And anyone born with two X chromosomes and a vagina can basically give up on any dreams of recognized athletic excellence.

            Vagina and negligible testosterone, enforced for years prior to competition, might work.

          • disposablecat says:

            Vagina and negligible testosterone, enforced for years prior to competition, might work.

            I think this is what Conrad meant. It’s certainly what I meant. It’s more defensible for high school; not sure how we would handle it at the professional or Olympic level.

            It’s also simpler for FTM than MTF, as seems to be the case with many things – if you transition onto testosterone, that’s a performance enhancing drug as much as any other, and you need to play with the people whose bodies produce a comparable level of it.

            MTF, though – if you were male through puberty and then transitioned, are you always going to have a leg up on people who didn’t spend that time growing male levels of muscle mass? The two MTF people I know, both of whom are at the high end of female muscularity years after transition, would certainly suggest that it’s a valid concern, but the plural of anecdote isn’t data.

          • John Schilling says:

            I completely agree with you (for a change). Forcing people to be attracted to someone they are not is basically the height of insanity.

            Forcing people to say they are attracted to someone they are not, or more generally to say things that they privately know to be untrue, can be an effective form of social engineering.

          • Conrad Honcho says:

            Phrase it in whatever way gets you people with similar hormones and body frames competing against other people of similar hormones and body frames. Otherwise it’s insanely unfair to traditional girls’ sports. Go look at FloJo’s 100m dash, which has stood as the world record for women for 30 years. Now go google for random high school boys track times. Any random high school will have times almost as fast as FloJo, and an all-county meet will have times beating her, and an all-state meet will beat her by multiple tenths of a second. At any reasonably competitive level, it is impossible for cis girls to compete against people with traditional male body frames and hormones. It is deeply unfair to cis girls who want to compete at sports.

          • Edward Scizorhands says:

            I’ve heard of straight men being called out for not wanting to date transwomen. After all, transwomen are women.

            Who is the most prominent person who has done this?

            With outrage media, you can always find someone saying this and getting 30 retweets and 50 likes, but that’s nothing. (Also, outrage media encourages amplification of the worst things the other side says so many of them may be hate-likes.)

            I’ve heard this, too, but thinking back I’ve always heard it from conservative sites who dug deep on social media to find the crazy. Which is just as dishonest as liberal media sites that do the same to prove how evil the other side is.

          • Thegnskald says:

While I don’t have any dogs in this race, Ozy, to pick one locally prominent individual, made arguments that amount to “It is morally wrong to not be willing to date trans people”.

            So maybe tone it down with the “This is stuff only evil conservative sites publish” rhetoric? That stuff isn’t conducive to friendly discussion, because it alienates everyone, most of all those who legitimately hold those beliefs.

            You are, after all, calling them a right-wing caricature of leftism.

          • Toby Bartels says:

            Why mention penises and vaginas at all if hormones are what matter? A penis doesn't help you at sports, and a vagina doesn't handicap you, but testosterone helps. (Does oestrogen matter? And of course there are other hormones besides these two.)

            I like boxing's approach. Everybody can compete; we just put you in with people that match your size. Sure, the heavyweight bouts get the most attention, but everybody has a chance to play. Just extend this from weight to hormone levels. You have a medical reason to take a performance-enhancing steroid? Fine, but you'll be competing against others who have the same level. Conversely, taking hormone blockers is like losing weight; just make sure that you're below the cut-off on weigh-in day. (For hormones, you'd actually have to measure over a period of time leading up to the competition.)

            Adult sports are moving towards this; the issue with kids' sports is that time is more pressing. But if there were a broader range of hormone classes, like the weight classes in boxing, then it would be easier to make a smooth transition. (And it might give some scrawny boys more of a chance to play too.)
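The scheme above, extending boxing's weigh-in model from weight to hormone levels, could be sketched as a simple classifier. The cutoff values and class names below are entirely invented for illustration (they are not medical or regulatory figures); the one real design choice is that, as with a weigh-in, being exactly at a cutoff keeps you in the lower class:

```python
# Hypothetical sketch of the proposal: assign competitors to classes by a
# measured physical variable (here, average testosterone over a qualifying
# window), the way boxing assigns classes by weigh-in weight.
# Cutoffs and class names are made up for illustration only.
import bisect

# Inclusive upper bounds of each class, in nmol/L (invented example values).
CUTOFFS = [2.0, 5.0, 10.0, 20.0]
CLASSES = ["class A", "class B", "class C", "class D", "open class"]

def hormone_class(avg_level_nmol_per_l: float) -> str:
    """Return the competition class for a measured average hormone level.

    bisect_left means a level exactly at a cutoff stays in the lower
    class, matching the 'make sure you're below the cut-off' rule.
    """
    return CLASSES[bisect.bisect_left(CUTOFFS, avg_level_nmol_per_l)]
```

For example, `hormone_class(2.0)` lands in "class A" while `hormone_class(2.1)` lands in "class B", mirroring how a boxer who makes weight at exactly the limit stays in the lighter division.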

          • Iain says:

            @Thegnskald:

            While I don’t have any dogs in this race, Ozy, to pick one locally prominent individual, made arguments to the effect that “It is morally wrong to not be willing to date trans people”.

            [citation badly needed]

            This article, as one example, seems utterly incompatible with your claim.

          • Conrad Honcho says:

            I don’t know if there’s a consensus, but apparently the whole “is it transphobic” thing is up for debate, or else there wouldn’t have been a stink about Ginuwine.

          • Bugmaster says:

            @Thegnskald:

            …arguments to the effect that “It is morally wrong to not be willing to date trans people”.

            See, this is one of the reasons why I don’t support my own steel-manned version of the argument (as I presented it above). It always devolves into something like the following:

            “If Xs are suffering from lack of dates; and your lack of attraction to X is due to social pressures or some other factor that you can consciously ignore; then by failing to force yourself to date Xs as much as possible you’re contributing to their suffering; thus you’re an immoral monster.”

          • FWIW, I know an apparently very happy married couple where the wife is about two inches taller than the husband.

            I know a very happily married couple where the wife is considerably more than two inches taller than the husband. The only problem is that the husband’s father is going to end up with grandchildren who look down on him.

            But I can live with that.

          • Thegnskald says:

            Iain –

            We may get different things out of that article.

            It began with “Be nice when rejecting trans people” but then ended with “But not wanting to is closed-minded”, as more than one commenter noted. The request to be open-minded is not a connotatively neutral statement; in the context of modern discourse, it is a moralistic statement about those who disagree.

            (Imagine, for comparison, somebody telling lesbians to be more open-minded about dating cis men. It isn’t a morally neutral request.)

          • Edward Scizorhands says:

            The worst I get out of Ozy’s linked essay is this:

            On the other hand, if your true rejection is “I don’t see trans women as women . . . ” then you maybe have some homophobia and transphobia issues to sort through. However, you can still be polite to her while you sort through them.

            (I elided something important to Ozy’s point there, but not important to my point. Just to be clear since I don’t want to put words in her mouth.)

            The definition of “women” is not assigned from above. If someone is attracted to women, they get to decide for themselves what that means. They don’t need to “sort through” anything. Just like if someone is gay, they get to decide, for themselves, what being gay means. Saying that transwomen are women is begging the question.

            I’ll agree 100% you shouldn’t be a shit to someone who asks you out, ~~even~~ especially if you think they are low status.

          • Iain says:

            Let’s pretend I wrote an article called “Etiquette for Picky Eaters”. Here’s one section:

            Be open-minded. I know lots and lots and lots of ~~guys~~ eaters who used to not be into ~~trans girls~~ brussels sprouts. But then they ~~internalized that trans girls are women, got over their worry that it would make them gay, and found a trans girl who was really freaking cute– and suddenly they find her cock super-hot~~ tried roasting them instead of boiling them, and now they love brussels sprouts. It happens.

            That might never happen for you. It’s okay. But if it does happen, don’t freak out and say “I couldn’t possibly ~~be attracted to a woman with a penis! I’m a straight guy!~~ enjoy brussels sprouts! There are rules!” To be fair, this is almost entirely a self-punishing state of affairs, in that now there is a ~~cute person you want to date~~ delicious food you want to eat that you’re not going to get to. So I am not inclined to push it too hard.

            If you experience “That might never happen for you. It’s okay.” as moral castigation, I don’t know what to say. Certainly, if you tried to summarize my position as “Iain argued that it’s morally wrong not to eat brussels sprouts”, then I would be quite reasonably taken aback.

          • LadyJane says:

            @disposablecat: Regarding the difference between biological sex and gender, I would consider the vast majority of trans people to be neither biologically male nor biologically female, but intersex. It’s absurd to say there’s no meaningful physiological difference between a trans woman and a cis woman, but it’s also absurd to say there’s no meaningful physiological difference between a trans woman who’s medically transitioned and a cis man. The anatomical changes that accompany hormone therapy are not purely cosmetic; they affect nearly all aspects of a person’s physiology and health in a myriad of subtle and not-so-subtle ways.

            It’s important for my doctor to know I have a prostate, but it’s also important for my doctor to know I have fully-developed female breasts, and to judge my body mass and blood pressure by female rather than male standards. Trans people should always make their medical providers aware of the fact that they’re trans, because treating them as a cis person of either gender could cause complications.

            I also consider gender dysphoria itself to be a very subtle intersex condition, rooted in some kind of neurological and hormonal mismatch. So in that regard, I would consider most trans people to be intersex even before undergoing hormone therapy. That’s why trans activists use terms like “assigned male at birth” instead of just “formerly male,” and why they claim that trans women were never truly men (which admittedly sounds ridiculous if you don’t understand the context). They’re asserting that if someone wants to medically transition, it’s a result of underlying biological factors. It’s basically the “born this way” argument for trans people, though a lot of trans activists might not be great at explaining that to people who don’t have a background in psychology.

            In that regard, the left’s focus on intersex people is not an attempt to say “not all X,” but rather an attempt to say “yes, all Y.”

          • LadyJane says:

            @John Schilling:

            At the higher levels of competition and for most sports, the winners in this league will be almost entirely people with Y chromosomes who were born with a penis and had it reshaped into a quasi-vagina. And anyone born with two X chromosomes and a vagina can basically give up on any dreams of recognized athletic excellence.

            On the flip side, though, a trans woman who’s medically transitioned (i.e. gone through hormone therapy and possibly genital reassignment surgery) is almost always going to fall behind cis male athletes. Likewise, a trans man who’s medically transitioned (i.e. taken testosterone, which is considered a performance enhancer) is almost always going to outperform cis female athletes. So placing people by their assigned birth gender has problems too.

            I’m honestly not sure there’s any good solution here. Giving transgender people their own leagues would be ideal, but I doubt there are enough transgender athletes for that to be viable. Going by muscle mass and hormone levels rather than sex or gender makes sense, but I don’t know if it would be socially acceptable right now; I can see a lot of people complaining about someone who physically appears to be a woman playing in a league for men or vice-versa.

          • Thegnskald says:

            Iain –

            Well, you have shifted from moralistic tones to classist ones, given the distinct class connotations associated with foods (particularly bitter foods and vegetables) and with openness to culinary experiences.

            Do you disagree with that assertion, and/or do you understand why I am making it?

          • LadyJane says:

            And for the record, I don’t think it’s transphobic for people to argue that athletes should be categorized based on assigned biological sex rather than gender, although I have seen a lot of people express that opinion in very blatantly transphobic ways (e.g. “it’s disgusting that the Olympics have some dude in a skirt competing with girls”).

            There was this hardcore paleo-conservative I used to be Facebook friends with, and I remember he complained when a high school allowed a trans man to compete with men, but then also complained when a high school forced a trans man to compete with women. And while there may be a legitimate argument that either option would be unfair to someone, that wasn’t the argument he was making. He was just outraged by the fact that trans people were allowed to compete in any sporting events, and more broadly, by the fact that society tolerated the existence of trans people at all.

            It’s the same when it comes to the issue of dating trans people. It’s not transphobic to say “I wouldn’t date a trans woman, I can’t force myself to be attracted to them.” But it’s definitely transphobic to say something like “trans women aren’t women and no self-respecting straight man would date one, any man who fucks a tr*nny is either a closeted f*g or a desperate loser who can’t get a real girl.”

          • albatross11 says:

            Toby:

            If you put women and men in the same weight class in a sport based on strength or speed, the results will still be quite one-sided. Humans are a somewhat sexually dimorphic species–men are quite a bit bigger and stronger on average.

            The mental image to have here is that of overlapping bell curves. (Note–image is generic overlapping bell curves, not specific to male/female strength.)

            There’s a fair bit of overlap in these two curves–some people in the blue distribution are to the right of some people in the red distribution. But if you look at the combined distribution, *everyone* on the right tail is from the red distribution.

            If you’re talking about competitive sports at the college/semipro level, or even really competitive high school sports, you’re looking at the rightmost few percent of the combined distribution, in which there will be no or almost no women. At the Olympic / pro level, you’re looking at the rightmost fraction of a percent, and there will be zero women.

            Skill mitigates this in hobbyist/casual sports–a good female tennis player will beat a much less skilled male tennis player ten times out of ten. But when you’re at any serious competitive level, *everyone* is pretty well-trained and in good shape and is practicing a lot. And then, being substantially bigger, stronger, faster, etc., gives a huge practical advantage.
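The tail point above can be made concrete with a quick toy simulation. The means and standard deviation here are invented for illustration (a one-SD gap between group means, which is not real strength data); even with that much overlap, the extreme right tail of the combined pool comes almost entirely from the higher-mean group:

```python
# Toy simulation of the overlapping-bell-curves argument: two normal
# distributions with heavy overlap, then look at who occupies the top 1%
# of the combined distribution. All numbers are invented for illustration.
import random

random.seed(0)

N = 100_000
red  = [random.gauss(100, 10) for _ in range(N)]  # higher-mean group
blue = [random.gauss(90, 10) for _ in range(N)]   # lower-mean group, one SD below

combined = [(x, "red") for x in red] + [(x, "blue") for x in blue]
combined.sort(reverse=True)

# Fraction of the combined top 1% that comes from the higher-mean group.
top = combined[: len(combined) // 100]
red_share = sum(1 for _, group in top if group == "red") / len(top)
print(f"red share of top 1%: {red_share:.0%}")
```

Despite roughly 16% of the lower-mean group outscoring the higher-mean group's median, the printed share comes out around 95%: the more selective the cut, the more lopsided the tail.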

          • Toby Bartels says:

            @ albatross11 :

            I think that you misunderstand me. I'm not saying ‹Boxing has the answer: weight classes!›; I'm saying ‹Boxing has the answer: a spectrum of classes based on measurement of specific relevant physical characteristics!›. In particular, I'm recommending (when it is relevant, which is often) testosterone-level classes (or probably it could be made more sophisticated than that). The reason why the strongest men are much stronger than the strongest women is not the penis (which is basically irrelevant) or even the testes as such, but the hormones.

          • Doctor Mist says:

            testosterone-level classes

            I was under the impression that the real stumbling block here was the arms race between doping detection and designer dopes. If you can synthesize a variant hormone that gives you an advantage over the class that testing will put you in, then you win.

            I’m rather afraid the only stable solution is to say: Everybody can compete, and the fastest runner or the strongest weight-lifter or the fiercest boxer will get the prize. The really unfortunate consequence of this is that it encourages a gladiatorial culture where people dope themselves in life-endangering ways in order to get the short-term gain. The consequence I’m not so concerned about is that females will no longer win races — I know there must be things that they can do better than males.

          • Matt M says:

            The consequence I’m not so concerned about is that females will no longer win races — I know there must be things that they can do better than males.

            1. Well, putting you aside, the entire rest of the world is very concerned about that. Hell, there’s even federal legislation requiring schools to offer female-specific sports in equal number to non-female-specific sports.

            2. Are there “things” women can do better than men? Absolutely. Are there sports? Eh, I dunno. Maybe if you start reaching into the artistic/objectively scored sports like figure skating? Maybe if you include things that are exclusively about manipulating technology such as auto racing (or e-sports)?

          • Bugmaster says:

            @Iain:
            The difference is, Brussels sprouts don’t care if you eat them or not, and no one (except maybe vegans) cares about the moral status of vegetable consumption.

          • Aapje says:

            @Matt M

            Auto racing can be fairly physically challenging and men may (or may not) be better at it.

            In e-sports, reaction and movement times are probably very important. Studies show faster reaction times and faster movement in men.

            Women may be less prone to making mistakes, so a sport that heavily punishes mistakes may benefit women.

          • Conrad Honcho says:

            The consequence I’m not so concerned about is that females will no longer win races — I know there must be things that they can do better than males.

            Then you’re killing all girls’ sports because dealing with hormones is messy. You go tell the local high school girls’ track and field team that they have to race against the boys now. They’re really going to want to put in all that effort and training so they can…not even qualify.

          • Doctor Mist says:

            All please note that I didn’t say I was crazy about my prediction. If there were a gentle and reliable way to break competitors into natural classes within which to compete, I wouldn’t have any principled reason or emotional inclination to object. I just don’t see it shaking out that way.

            But I also note that I don’t see spelling bees or MacArthur Grants getting split up that way. I’m sure I would be worthy of a Nobel Prize if you would just take my natural level of ability into account.

          • Edward Scizorhands says:

            The Brussels sprouts example has been sitting in my mind for a few days and I think I’ve figured out why.

            When I was much younger, less than half the age I am now, I had the “why don’t you just try it, you might like it?” done to me with a new food. I demurred, but it was being done in public. The person was not a parent or guardian of any kind, but they had much higher social standing than I did, and sitting there quietly and politely not eating was not an option. This person saw me as a stupid kid who just couldn’t be trusted to know what he wanted for himself.

            To stop the loss of social status, I finally gave in. It turned out it contained an ingredient I was allergic to. (As small consolation, my social status was then put above his, because he had publicly poisoned me, but I’d rather it have never happened in the first place.)

            I truly don’t think Iain is doing this, but I can understand why people are misreading him as doing this. Just like the other person kept framing it as something for my own good, things done to harangue others can be framed that way as well.

            I think there is one good distinction to make: what’s the relationship between the person saying I ought to consider something new and the person receiving that advice? If I write an open letter to lesbians on the Internet and say they should consider carefully “maybe you just hate men, have you thought of that? Have sex with men to get over it” then I’m just a cad. But if my best friend is asking me for advice because they have questions about their sexual identity, then telling them to think hard about their philia is part of what happens when you ask your best friend for advice.

            PS: Roasted Brussels sprouts are awesome.

        • J Mann says:

          @albatross11

          1) I think there’s a difference between “It’s OK to pressure you to try to be attracted to Purpletonians” and “You might want to consider whether you personally find a moral reason to try to be attracted to Purpletonians.” I think personal autonomy means we need to respect your choice, while you should still consider whether your moral code suggests you should see if it’s possible to be attracted to Purpletonians. Maybe I could express it better – I’ll gladly take suggestions.

          2) It’s probably also true that there’s a Purpletonian Kinsey scale. If so, maybe you’re just a 1, or maybe you’re really a 2-4 and social cues have limited your erotic imagination, so you might or might not be a good candidate to try.

          3) Even if you don’t feel a moral incentive to read some Purpletonian erotica to see if it does anything for you, if we agree that Purpletonians get a raw deal, there might be some things we can do on a societal level to increase the erotic appeal, down to simple stuff like pressuring Hollywood to make sure that the “Purpletonian best friend” character is seen as a viable dating partner with a possibility of healthy, fulfilling relationships.

          4) @All – yes, it’s definitely true that few people are extending this principle to lonely white nerds, and that’s unfair, but I think the principle extends there.

          5) General disclosure: If I could design society to my liking, I’d probably tie attraction to socially desirable qualities to create some incentives for people to develop in healthy ways. (Peterson is good on this front, of course). I agree that there’s a knowledge problem there, and that broad social engineering is disastrous.

          6) Specific disclosure: If it helps, I’m an average looking cis white male with above average but not exceptional career success, so I doubt any romantic realignment is in my personal interest.

          • Randy M says:

            I assume that Purpletonians aren’t a stand-in for trans or homosexual people, right? Because there is an obvious reason a great many people search for mates; that is, to mate.
            If we’re just talking about friendship, which there’s not really any reason to be exclusive with, then sure, try to broaden your horizons.

          • J Mann says:

            Purpletonian is deliberately neutral.*

            Since I’m challenging people to explore their own moral code instead of telling them what to do, people are welcome to slot different things into it and see how you feel.

            Sure, if you want to raise your kids a particular religion or if your goal is to have children who are a genetic blend of your partner and you, then it matters if we slot a religion or a fertility into Purpletonian – feel free to keep exploring.

            * I’m using unattractive but otherwise healthy people as my internal proxy, but I really mean the principle to be neutral.

          • Iain says:

            Oh look, J Mann saved me most of the effort of writing my own post.

            I think the following positions can all reasonably co-exist:

            1. Nobody has an obligation to be attracted to anybody else.
            2. Nevertheless, it can be virtuous to occasionally scrutinize your own desires. Maybe you’re naturally unattracted to Purpletonians; maybe you’re just (consciously or otherwise) concerned that they’re low status.
            3. There’s a difference between being personally unattracted to Purpletonians and going around talking loudly about how gross and low-status Purpletonians are. The former is fine; the latter is unkind and unnecessary.
            4. It would be good if more people extended point 3 to lonely male nerds.
            5. It would be good if more of the people pushing for point 4 extended point 3 to people other than lonely male nerds.

          • Edward Scizorhands says:

            Those 5 points are wonderful and if we had a “wholesome meme” rating I would give it to that comment.

          • Thegnskald says:

            Iain –

            I mean, sort of.

            You manage to imply that people arguing on behalf of lonely white nerds are uniquely selfish, and uniquely need to expand their care horizon. So, basically, you single out lonely white nerds for reprimand while nominally proposing people just treat people better in general.

            Whereas the parent comment doesn’t feel the need to shit on one group in particular.

            So, it is your comment, but less needlessly culture war.

          • LadyJane says:

            @Iain: I whole-heartedly agree with most of your points, though I find your focus on lonely male nerds to be a little strange, and I can’t help but wonder if you’re using “nerd” as a proxy for something else.

            I’ve known a lot of straight male nerds, and the majority of them never had much trouble getting laid, even the ones who were fairly shy or introverted or socially awkward. Some had steady long-term relationships, some were polyamorous, some were involved in the kink scene, some were just really good at picking up women on Tinder. And when I do see straight male nerds who have a lot of difficulty finding romantic/sexual partners, it’s usually not because they’re nerds or even because they’re shy and awkward, but because they have a lot of weird psychological hangups when it comes to sex and women.

          • Iain says:

            @Thegnskald:

            You manage to imply that people arguing on behalf of lonely white nerds are uniquely selfish, and uniquely need to expand their care horizon. So, basically, you single out lonely white nerds for reprimand while nominally proposing people just treat people better in general.

            A: I am not the person who started talking about lonely nerds. I was replying to point 4 in J Mann’s post.
            B: What part of “It would be good if more people extended point 3 to lonely male nerds” do you believe implies that nerds have a unique duty to expand their care horizon?
            C: As a general principle, if you discover that somebody “managed to imply” something that they explicitly contradict elsewhere in their post, consider the possibility that you are ascribing a stance to that person that they do not hold.

            @LadyJane:
            Yeah, we’re on the same page. As I said above, I singled out nerds because J Mann had already done so. More broadly, one recurring theme at SSC is that there’s something hypocritical about how some people who complain about fat-shaming women will turn around and take shots at all the sweaty fedora-wearing neckbeards living in their mothers’ basements. While some of this complaint stems from outgroup homogeneity bias, I think there’s some truth to it, and it’s a thing that should be fixed.

            (But not the only thing that should be fixed.)

          • albatross11 says:

            Iain: +1

          • rlms says:

            @Thegnskald
            Did you read point 4?

          • Thegnskald says:

            Iain –

            The parent comment mentioned them. It didn’t focus on them, as you did. It certainly didn’t single any group out as not applying the principle generally, as you did.

            What, precisely, did your comment add, other than to imply that lonely white nerds apply the principle only to themselves and not more generally?

            Because it isn’t what you copied from somebody else’s comment, it is what you added to it, that is relevant. And while it could charitably be read as “Lonely white nerds are just as prone to the same kind of self-interest other groups engage in, in which they treat their own suffering as more morally relevant”, that was already implied in the parent comment, because no exemption was carved out for them in the preceding clause.

            So what you have effectively done is reiterate point 3, but aimed it only at lonely white nerds. That is singling them out as unique offenders.

            “People shouldn’t kill other people. Our society often overlooks it when black people get killed, and that is bad. But people arguing that black people shouldn’t be killed should remember that people shouldn’t kill other people, as well.” That last clause isn’t neutral. Neither is your equivalent.

          • albatross11 says:

            I recommend applying a charitable reading to other peoples’ posts. If there is some ambiguity between “he’s saying something really offensive and wrong” or “he’s being a little careless and insensitive in his wording,” erring toward option #2 leads to more productive conversations than erring toward option #1.

          • Thegnskald says:

            Albatross11 –

            Sure. Which is why I am focusing on what he is implying, and telling him how and why he is implying it, rather than calling him a bad person who hates lonely white nerds.

            ETA:

            Okay, now I am wondering: what exactly do people think I am arguing for here? What theory of my mind has me believe Iain is expressing hatred for lonely white nerds, and then, in response, point out that he is expressing hatred of lonely white nerds?

          • J Mann says:

            Iain – thanks for expressing that position so clearly.

            @all – since I seem to have gotten Iain into some trouble on the subject of eroticizing white male nerds:

            I think I was misreading some of the discussion upthread as arguing that proponents of erotic egalitarianism expressed it for other erotically marginalized groups but not for white male nerds. (I also think that’s true, FWIW).

            I think all of Iain’s principles are true and well stated. If you think that white male nerds get a raw deal and deserve some sympathy, then it would be good to think about other romantically marginalized groups and vice versa. And as Iain says better than I was able to, it’s not compulsory to examine your erotic desires, but it can be valorous.

          • Bugmaster says:

            @Iain:
            I disagree with point #2. What do you mean by “virtuous” ? If you mean something like “morally preferable”, then it implicitly contradicts point #1; also, no one died and made you the Sex Police. If you mean, “instrumentally useful to the advancement of your core goals”, then it all depends on what your goals are, and on whether the resource investment is worth the benefits. Just because some experience is an acquired taste, does not automatically mean you have to try your best to acquire it.

          • Matt M says:

            I think all of Iain’s principles are true and well stated. If you think that white male nerds get a raw deal and deserve some sympathy, then it would be good to think about other romantically marginalized groups and vice versa.

            But other romantically marginalized groups do get sympathy. There’s a large gap between “sympathy” and demands to date them.

            While many people would not date a transwoman, I think that “transwomen are worthy of affection and it would be very nice if they find partners” is a completely mainstream idea. Anyone who argued the opposite would be dismissed as a bigoted homophobe.

            But for white male nerds, that is not the case. There are large swaths of society who openly declare these people to fully deserve their loneliness and isolation. It’s not just that lots of people won’t date them, it’s that lots of people declare them to be undeserving of basic human affection entirely.

          • albatross11 says:

            Matt M:

            I suspect that you’re a victim of your bubble here, or perhaps the distribution of megaphones in our society. My guess is that if you looked around in the big wide world, you’d find way, way more people who think “transpeople are disgusting and deserve loneliness and abuse” than “white male geeks are disgusting and deserve loneliness and abuse.”

          • Iain says:

            @Bugmaster:

            Consider the same argument structure in another context:
            1. Nobody has an obligation to donate large sums of money to charity.
            2. Nevertheless, it can be virtuous to donate to a worthy charity.

            Something can be good to do without being an obligation. We have lots of opportunities to do good things in our lives; nobody pursues all of them, and that’s fine. I don’t believe that it’s humanly possible to live your life under the onus of doing every possible good thing, and I suspect that any attempts to do so would rapidly result in burnout.

            I am not claiming to be the Sex Police. Note that I was not pushing the idea that you should try to modify your own desires; just that it is, in general, a good thing to occasionally think about them. I really don’t think that should be very controversial.

          • albatross11 says:

            One thing I’d add to Iain’s comment is motivation.

            Usually, deciding to expand your willingness to try things is based on the idea that you might find that you like some stuff you don’t currently think you’d like. Maybe if you actually went out with that transwoman, you’d find her really pleasant and appealing and want to spend more time with her. The motivation is that you might find out more about what you like/want. (Presumably if she’s also interested, this is a win/win, but the decision isn’t primarily about what will make her happy, it’s about whether or not you can be happy with her.)

            Similarly, if someone suggests that you give crab soup a try, or that you try to be open to the possibility that you might like being a parent, they’re suggesting you try something so you may better discover what you want. Or if you tell someone “hey, Joe may not be all that good-looking or athletic, but honestly, go out with him and you’ll discover he’s super fun to hang around and you may end up wanting to go out with him again,” you’re trying to help them discover something they may want but don’t know it yet.

            That feels very different from telling someone to try something, not because they may find they like it, but in order to make life better for the unwanted, or because it is morally wrong *not* to want it.

    • albatross11 says:

      To the extent the “sphere” is regular participants in the comment threads, I imagine there’s a pretty obvious reason why comment threads are more highly regarded within than without that sphere.

      Personally, I feel like I learn from reading the comment threads, including from people whose basic values or assumptions about the world are rather fundamentally different from my own. YMMV.

    • grendelkhan says:

      Huh. And to me, the comments sections, fractious and flawed as they are, are the best thing about this community.

      What would a rationality dojo be, if not a place where people who honestly disagree face the toughest opponents they can conjure, where opponents aren’t practitioners but problems, and do their damnedest to get it right where everyone else is getting it wrong? Isn’t that precisely the Culture War thread?

      From “Guided by the Beauty of Our Weapons”:

      I keep trying to keep “culture war”-style political arguments from overrunning the blog and subreddit, and every time I add restrictions a bunch of people complain that this is the only place they can go for that. Think about this for a second. A heavily polarized country of three hundred million people, split pretty evenly into two sides and obsessed with politics, blessed with the strongest free speech laws in the world, and people are complaining that I can’t change my comment policy because this one small blog is the only place they know where they can debate people from the other side.

      If you’re managing to inspire and/or run something this rare and this valuable to people, isn’t that an indication that you’re doing something special?

      (I’ll say that, for me, I’ve definitely gotten better at trying to cash out my beliefs with bets, make predictions as expectations rather than signals, and do a modicum of scholarship before making any bold statements. Like this kind of thing; I couldn’t have had that elsewhere.)
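      (The bookkeeping behind “make predictions as expectations rather than signals” can be as simple as a Brier score: state each prediction as a probability, record what happened, and average the squared gaps. This sketch is my illustration, not grendelkhan’s; the forecasts in it are made up.)

```python
# Toy Brier-score bookkeeping (the numbers are invented for illustration).
# Each entry: (stated probability, outcome as 0/1). Lower is better:
# always saying 50% scores 0.25, perfect foresight scores 0.0.
forecasts = [
    (0.9, 1),  # said 90%, it happened
    (0.7, 0),  # said 70%, it didn't
    (0.2, 0),  # said 20%, it didn't
]

# Mean squared difference between stated probability and outcome.
brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
# (0.01 + 0.49 + 0.04) / 3 = 0.18
```

      Kept over time, the running score tells you whether your 70%s really come true about 70% of the time.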

  14. apedeaux says:

    Mr. Alexander,

    If you had the intestinal fortitude to wade through Piketty, I beg you to attempt a review of Mises’ Human Action. I realize that your community exhibits an intense antipathy towards the notion of any kind of non-empirical methodology; however, I believe it behooves you to at least attempt to refute the validity of praxeology by reviewing the seminal work of its founder. I too harbored immense misgivings toward the notion that a priori synthetic logic could advance scientific knowledge, but I believe that the epistemology of Mises truly succeeds in the attempt. At the minimum, it may provide a valuable foil for what I interpret to be your empirical positivism.

  15. 天可汗 says:

    Part of this is that the low-hanging fruit has been picked.

    Disagree. I see near-zero study of instrumental rationality, which seems like low-hanging fruit. Identifying successful people whose lives are well-documented and trying to figure out what made them successful — especially if they didn’t just drift there on the back of conventional status tracks — seems like it should be fruitful, but other than Athrelon occasionally reading up on the Inklings I see no one doing this.

    Most of history is free on the internet! You can just get Ben Franklin’s autobiography from Project Gutenberg. They seem to have censored the part about how he tried to live up to the virtue of chastity by only having casual sex in moderation, but that’s beside the point.

    Probably there’s a lot of fruit that’s only low-hanging in some senses — e.g. reading a book is low-hanging in effort investment but not in time investment, and the time investment is too long for people to actually do it.

    • AG says:

      Successful people are more about survivorship bias than anything else.

      • maintain says:

        I feel like we should be able to figure out which cases are unlikely to be survivor bias.

        • albatross11 says:

          How would we know if we were wrong?

          I think the only thing that would really work here would be using your study of successful people to build up a predictive model, and then using that model to try to predict success/failure of other people who hadn’t been in your training data. Otherwise, you can come up with various models that should eliminate survivorship bias given some assumptions, but it seems like they will always rely on those impossible-to-verify assumptions.
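          (A toy sketch of that out-of-sample test — mine, not from the thread, with all numbers invented: if success is pure luck, a model built by studying only the successful looks plausible on the people it was built from, yet does no better than the base rate on fresh data.)

```python
import random

random.seed(0)

def simulate(n, base_rate=0.1):
    """Each person gets a random 'trait' score; success is pure luck,
    statistically independent of the trait."""
    return [{"trait": random.random(), "success": random.random() < base_rate}
            for _ in range(n)]

train, test = simulate(10_000), simulate(10_000)

# Survivorship-style model: study only the successful, and declare anyone
# above their average trait level 'likely to succeed'.
survivors = [p for p in train if p["success"]]
threshold = sum(p["trait"] for p in survivors) / len(survivors)

# Out-of-sample check: among fresh people the model flags, how many succeed?
flagged = [p for p in test if p["trait"] > threshold]
precision = sum(p["success"] for p in flagged) / len(flagged)
base = sum(p["success"] for p in test) / len(test)
# precision lands near the base rate: the 'lesson from the successful'
# predicted nothing, because the trait never mattered in the first place.
```

          The point is the direction of the test: a model that only retrodicts its own survivors has not yet shown anything.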

      • 天可汗 says:

        Weak.

        There’s always an excuse. Hikkis who haven’t seen sunlight in years have excuses that are as compelling to them as yours are to you.

        • Montfort says:

          Why don’t you join the French Foreign Legion? Are those reasons (“excuses”) more compelling to you than a hikki’s are to them?

        • AG says:

          It’s just as easy to argue that hikkis who successfully leave that lifestyle didn’t do it by replicable means. Either their minds are stronger than those of the ones left behind, or they happen to have a strong support system (which can’t be replicated for others, because you can’t force feelings) — and there are just as many cases where the material circumstances were the same but they have not broken away. So generalizing from the successful ex-hikki does nothing for other hikkis. The low-hanging fruit of instrumental rationality is “everyone is a special snowflake.”

    • Jaskologist says:

      Why study individuals when egregores are where the real action is?

      (This genre already exists.)

      • Anon256 says:

        Because I’m an individual and not an egregore? I care about my own outcomes and control my own actions, whereas I don’t care about any egregores’ outcomes (except as they impact mine) and generally have negligible influence over their actions.

        • Jaskologist says:

          ‘The eye cannot say to the hand, “I don’t need you!” And the head cannot say to the feet, “I don’t need you!”’

          The most successful individuals got there riding atop egregores.

          • Jiro says:

            But to coopt a feminist saying, the fish can say to the bicycle, “I don’t need you”.

            Proof by analogy isn’t actually a thing.

          • Randy M says:

            Analogies, like fiction, can teach, but not prove. If you trust the person presenting it, you can gain understanding. If they aren’t trustworthy, you can be misled.

  16. Radu Floricica says:

    A complaint and a possible solution. The low-hanging fruit is not picked. The US election was and is controversial (for example, I’d have voted for Trump if I were American), so maybe that helps sidetrack this kind of talk – but there are plenty of situations in the world where people vote … wrong. In my Romania, for example, we’re close to getting a Robber Party dictatorship, which has no redeeming qualities whatsoever. And they got (fairly) voted into it.

    And we have no recipe to stop it from happening again in two years. So don’t talk to me about the low-hanging fruit being picked.

    And this is not the only scenario. We have concepts like steelmanning, which you might grasp on the fifth read if you’re around 120 IQ points – and maybe even use once or twice in real life – but nothing even resembling a manual for people in Uganda to stop voting for people who think gay people bring drought.

    • Scott Alexander says:

      You think that convincing everyone to agree on who the president should be is low-hanging fruit?

      I don’t mean “low-hanging fruit” in the sense of “now everyone is rational”; I mean it in the sense of “all the easy insights that you can get by thinking about the problem for five minutes have already been developed”.

      • Radu Floricica says:

        No no no. The issue is that the US was _complicated_, which both sides seem to hilariously miss. But in many parts of the world, the decision should be obvious. Stuff like “things are awful, so I’m not voting anymore” – a literal example I heard two hours ago. Or, well, Brexit.

        And we have nothing even resembling a tutorial. I’m better off with Nisbett’s Mindware than with the rationalist community. This is the real low-hanging fruit – not optimizing smart people, but making super-basic tools for the average Joe. The voting average Joe, if that incentive is not enough.

        • ec429 says:

          Or, well, brexit.

          Hmm, I haven’t seen any discussion of Brexit on SSC. As a Brit, I’d be interested to hear what you think the obvious decision is, and why you think it’s obvious. Scott, feel free to delete/veto this if you think it’d get too CW-ey.

          • Radu Floricica says:

            Don’t mind hearing arguments pro Brexit, actually.

          • ec429 says:

            @Radu Floricica:

            Don’t mind hearing arguments pro Brexit, actually.

            You asked for it, you got it…

            Above all is the democratic argument. The referendum was already held, Leave won, and if the government does not deliver on that result it will put a great strain on our constitutional dispensation at a time when trust in our governing institutions is already at an historic low. It is with no relish at all that I say the results could make the Poll Tax Riots look like a Sunday-school expedition.

            Leaving that process issue aside, there are several substantive arguments. EU membership is antithetical to national sovereignty, with both the highest legislators and the highest court being essentially foreign powers. The only consistent positions are national sovereignty or EU statehood; it is clear that the EU aspires to the latter and has ratcheted that union ever closer through Maastricht and Lisbon, and I do not believe there is any reason why a nation, particularly one with such capabilities and talents as the UK, should allow itself to be made a vassal state of this euro-empire.

            Economically, the EU is a source of great harm. The Common External Tariff leads to needlessly high prices for consumers (as Jacob Rees-Mogg MP likes to point out, this includes heavy tariffs on food, clothing and footwear which fall disproportionately on the poorest in society), while the vast thicket of Single Market regulations stifles productivity and enables large corporations (who can afford the compliance costs) to strangle their smaller, more efficient competitors (this is why Airbus, the CBI, etc. keep briefing in favour of Remain). The CAP and CFP encourage inefficient and wasteful methods of agriculture and fishing, which incidentally also damage the natural resources on which these sectors depend (the CFP in particular has been an ecological catastrophe and a perfect case study in the tragedy of the commons); moreover, the combination of the CAP and barriers to trade in agri-foods keep African producers out of European markets, which otherwise could do so much to help lift Africa out of poverty (an ounce of trade, in my opinion, is worth a pound of aid. Though under the Weights and Measures Directive I’m probably supposed to convert that to grams). At least since the financial crisis, the EU has looked enviously at the financial business of the City of London and seeks, through new regulations and taxes on financial transactions, to break that industry in the hope that other European nations might capture it (a vain hope, since in such an eventuality the trade would most likely move to, say, New York or Singapore, not Paris or Frankfurt). It should also be noted that the UK taxpayer has been a major contributor to the EU budget.

            The system of law gives a peculiarly British reason to leave: historically both England and Scotland (at least; I am not certain of the Welsh or Irish history here) have operated on the Common Law, and that has certainly been the legal system of the United Kingdom throughout its existence; but the European Union runs on the Continental system of Civil, or Roman, Law. These two legal systems are incompatible (don’t ask me the details; I’m not a lawyer), making it unclear whether the rights of Englishmen anciently held, some first recorded in Magna Carta, still hold (one can argue, for instance, that some applications of the European Arrest Warrant, in extraditing Britons without a fair trial to a jurisdiction where they will not receive one, have violated the ‘lawful judgment’ (due process) clause of Magna Carta). To give a little context to this, note that many of the American revolutionaries justified their rebellion on the basis that they were being denied the rights to which Magna Carta (and other elements of the British constitution) entitled them; thus, abrogating these rights in order to trade with Europe is analogous to if the US had had to repeal the Bill of Rights in order to join NAFTA.

            Immigration is an issue often talked up by pundits, but it’s my belief that Leave voters did not object to immigration per se; rather they felt that (a) Britain did not control her own borders, essentially a flavour of the sovereignty argument and (b) it was unconscionable of us to turn away applicants from the Commonwealth in order to make room for Europeans. This latter may also have been influenced by folk memories of Canadians, Australians, New Zealanders, Indians etc. giving their lives in WWII.

            One last point is the question: why would we want to be in? We only joined in the first place because it seemed like the European system was producing unprecedented growth in its member economies. We now know that this was mainly post-war ‘catch-up’ growth (Wirtschaftswunder, trentes glorieuses etc.) which petered out at just about the point we joined; today the EU is a cramped and declining customs bloc (hampered by an ill-considered currency union) while all the growth and opportunity in world markets is elsewhere. If we were not already a part of the project, who on earth would be suggesting we should join, and why?

          • Radu Floricica says:

            @ec429

            > You asked for it, you got it…

            And I am very happy I did. A lot of this is new to me, and I pretty much agree with all of it (except the “empire” comment – that’s just being mean).

            My first instinct is to say: the issue is not being in the EU or not, but doing Brexit the way it’s likely to end up – badly for Britain. This was likely my intention initially anyway… I think. And this at least remains solid.

            My second comment is that very little of this is part of why Britons voted Brexit. The discussion is about voters’ decision-making skills, and in this respect the Brexit vote still looks horrible. The immigration argument is hollow, the “EU costs us money” argument sank literally the day after the vote, not to mention the pro-Brexit faction looking even more panicked than the losers.

            So if the argument was: should Britain be part of the EU, or just part of its common market in some form – you make a great point.

            If the argument was: will Britain actually gain more from leaving, long term? That’s undecided, and honestly half luck.

            But the vote’s meaning was: do we leave the EU now, with the existing process, with those guys in charge of leaving? That still looks like a very, very bad decision to me.

          • ec429 says:

            except the “empire” comment, that’s just being mean

            Well, I only use that term because the EU itself uses it. For instance in 2007, the then President of the European Commission, Jose Manuel Barroso, said:

            We are a very special construction unique in the history of mankind. Sometimes I like to compare the EU as a creation to the organisation of empire. We have the dimension of empire. What we have is the first non-imperial empire.

            Relatedly, I feel that the sovereignty argument is backed up by the way the EU has behaved in the negotiations, for example in its attempts to impose a border between Great Britain and Northern Ireland (essentially saying that N.I. isn’t allowed to Brexit because of the land border).

            very little of this is part of why Britons voted Brexit

            I don’t think you (or anyone else) have sufficient data to make a claim of that kind. Polling that asked Leave voters for their reasons has mainly shown the sovereignty argument coming out top, with immigration/borders second (e.g. this Ashcroft poll). In the polls I’ve seen, the latter has been phrased in terms of ‘control’, which does not distinguish between the version I gave and the more ‘pull-up-the-drawbridge’ version that many pundits have attributed.

            The immigration comment is hollow, the “EU costs us money” sunk literally the next day after the vote, not to mention the pro brexit faction looking even more panicked than the losers.

            Not sure what you mean by “hollow”, nor what you think was sunk (are you referring to the £350m/NHS slogan?), and the only panic I’ve seen from Brexiteers is over the fear that the Government will produce a ‘technical Brexit’ that in practice keeps us subject to the EU’s rules, courts, etc. If anything, the Leave side was too complacent after the referendum result, shutting down most of its campaigns while Remainers kept agitating and politicking.

            do we leave EU now, with the existing process, with those guys in charge of leaving? This still looks like a very very bad decision to me.

            I think most people assumed that the Government would deliver on the result of the referendum (don’t forget the leaflet they sent out to every household beforehand saying “the Government will implement what you decide”). I, at least, never anticipated that they would have the brazen cheek to gradually retreat from the Mansion House position (it’s not everything I wanted but I can live with it) to the Chequers agreement (this is not Brexit except in the most petty and legalistic sense). The latter does not even satisfy Remainers — when both Jacob Rees-Mogg and Peter Mandelson oppose a policy, it can’t be good! The Government’s current policy is not what Leavers voted for, nor what they expected to get if they won the plebiscite. So if we did make a “very very bad decision”, it was due to being under the mistaken impression that we lived in a democracy.
            But in any case, while the Government is currently mired in ‘fudge’, there is enough push-back from the Conservative grassroots and the ERG that the most likely result (as I see it) is that we will leave on WTO terms. It’s my belief that that would be a good thing and that the results would vindicate the decision to leave; until it’s happened I won’t, of course, be able to prove that.

  17. wanda_tinasky says:

    I live in fear of someone asking something like “So, since all the prominent scientists were wrong about social priming, isn’t it plausible that all the prominent scientists are wrong about homeopathy?”

    Really? My apologies if I’m taking a throwaway example too seriously, but isn’t the obvious response there that that’s an equivocation fallacy w/r/t the term ‘prominent scientists’? Social science isn’t chemistry (or even medicine), and the reliability of results in the former is demonstrably less than the reliability of results in the latter. There are also justifiably different priors between “subtle, anti-intuitive effect of fuzzy complex system [brain]” and “violates settled principles of a hard science.”

      • wanda_tinasky says:

        I get your point, which is that you’d like to have a pure Outside View approach where you can point and say “this is science, believe the result” in a consistent way. And I think you can have it, but you have to add a dimension for complexity/confidence. The simpler the system, the better we can model it. The better we can model it, the more confidently we can say “your proposed mechanism is incompatible with our model, therefore it’s false” (e.g. perpetual motion, faster-than-light travel). The more complicated the system (brains, economies, cultures), the less likely we are to have Laws and the more likely we are to have vague platitudes like “our derivatives-pricing model works sorta well most of the time” (but if you’re convinced there’s a huge housing bubble that’s gonna destroy the economy, then who am I to argue with you?). I feel like there’s a metaphor or isomorphism to Occam’s Razor here somewhere, but I’m not quite sure how to make it.

        My point is that non-replicating medical studies seem to be more like the latter, and homeopathy more like (a violation of) the former. When a medical study is overturned, we can shrug our shoulders and say “what do you want, the body is a complicated mess and we can’t control for everything” without feeling too bad about ourselves. But when something is diluted to the point where there’s less than one molecule of ‘active ingredient’ per dose, we can pretty confidently say “we know how molecules work, there’s just no way.”

  18. MartMart says:

    Like many, my first introduction to SSC was the post about tolerating everything except the outgroup. It’s probably the one most likely to attract right-of-center libertarians (I think being right of center meant different things then). But what kept me around was the clarity of thought. I had long thought it was important to pass an ideological Turing test, but it never occurred to me to extend that idea to steelmanning arguments. This was particularly apparent in the reactionaries-in-a-nutshell post, where the author was extremely charitable to an idea he clearly disagreed with. To me, this was new. People just do not do that.
    I think I spent the next several months reading as far back into the archives as I could. Sometimes I learned things. Sometimes I agreed with conclusions. Other times I disagreed, and felt much more certain about disagreeing. (If I had been asked a few years ago, I would have said that consequentialism was probably the optimal moral philosophy, once someone explained what it meant. Having heard this blog argue against deontology – which I was biased against, mostly because it sounds like something stuffy old religious people believe – I’ve come to think of it as far more optimal.)
    I’ve long wanted to read something more about how to think, rather than seeing those principles applied to the latest controversy (unless it’s a controversy I deeply care about, of course). This blog keeps pointing me toward Eliezer Yudkowsky’s work, but for one reason or another I absolutely cannot tolerate his writing style (apologies to him if he is reading this). Getting through it is a chore, one I tend to put off for “later” far too often.
    All that is to say that, at least in a sample of 1, this blog has done quite a lot to spread the idea of thinking rationally.
    That said, the latest writings don’t have the same feel as the earlier ones. Maybe that’s inevitable as I learn more, maybe Scott is getting caught up in a life of his own (how dare he!), or maybe there really are fewer of the “here is how you think clearly” examples I valued so much.

  19. Jayson Virissimo says:

    There is an analogy between kumite and argument, so in that sense SSC is like a dojo. But we don’t really have the equivalent of kata (although adding a daily logic/probability/decision theory exercise could work).

    IMO though, SSC fulfills a role much more like the kinds of coffee shops the logical positivists used to hang out in than any kind of dojo.

    • watsonbladd says:

      The coffee shops provided the fundamental input (caffeine) required to do math or analytic philosophy. I am not so sure blogs replace that, although they do provide a social space for ideas to develop, which is also extremely useful.

      • 天可汗 says:

        IME, it’s a lot easier to develop ideas IRL than online. It can be done online, but it helps to at least have meetups or IRL groups who share the online context.

  20. Donnie Clapp says:

    We once thought the world was flat. Then, through some scientific inquiry, we discovered it was a sphere. Then, we discovered it was not exactly a sphere: It bulges here and there. It’s not even a constant shape: the land has tides!

    An anti-science zealot points to each of these breakthroughs and says, “See! Every time science thinks it knows the truth, someone proves it wrong. Putting faith in today’s scientific consensus is just as foolhardy as it was to put faith in the idea that the world was flat.”

    But in fact, these discoveries were not absolute negations. They were refinements of our understanding.

    Over time, scientific study gets us asymptotically closer to the truth. When a theory gets “disproved”, most of the time we are not replacing it with its opposite—we are replacing it with a more nuanced theory that’s a step closer to the truth.

    The intuition that you’re struggling to identify is an intuition about where the asymptote is for any given subject or question. They were wrong about social priming, but that’s not earth-shattering because the asymptote of absolute truth is still a long way off in the field of psychology. It’s harder to believe that we’re still a long way off from understanding whether decreasing the amount of chemical in a medicine can increase its effects on the body.

  21. AI alignment has grown into a developing scientific field. … It’s just the art of rationality itself that remains (outside the usual cognitive scientists who have nothing to do with us and are working on a slightly different project) a couple of people writing blog posts.

    My impressions are almost the opposite. AI alignment has gotten good publicity, but in terms of useful results, it’s barely better than a couple of people writing blog posts.

    Whereas CFAR made much more valuable progress in 2012-2015, and I suspect is continuing to make more progress than MIRI. But the martial art analogy is apt – much of what CFAR does well is training our System 1’s, and that translates poorly into blog posts or other marketing efforts. CFAR alumni seem to lead more satisfying lives than pre-CFAR rationalists, but that’s easy to overlook because it doesn’t lead them to talk more about rationality.

  22. RC-cola-and-a-moon-pie says:

    Part of the problem, of course, is that for this sort of thing to be really effective as practice, there has to be a way to know at the end of the exercise whether you ended up being right or wrong – which is impossible on controversial issues by the nature of the case. I think the broader idea of trying to create a science or art or craft of applied rationality is a fool’s errand. Maybe part of it is going to break down along the lines of those of us who think Yudkowsky’s essays in this vein are interesting and useful and those who do not (I’m personally in the latter group). I love this web site, but not for any contribution to a “rationalist community.”

    • Part of the problem, of course, is that for this sort of thing to be really effective as practice, there has to be a way to know at the end of the exercise whether you ended up being right or wrong – which is impossible on controversial issues by the nature of the case.

      Much of the time, the question is not whether your conclusion is correct but whether your argument for it is, and you can indeed discover that it is not. I’m pretty sure that one or more of my climate exchanges here resulted in someone concluding that the particular argument he was using was not correct, although not necessarily that the conclusion was not.

      • Kelley Meck says:

        Right. Although my object-level view of climate change hasn’t changed, or not much, my view of how difficult it is to make persuasive arguments on the subject very much has changed.

        • Did that affect your confidence in your object-level view? Given that your best estimate of the effects of AGW is still about the same, is your subjective probability that the estimate is correct still the same?

          If not, has the probability of deviations from that estimate changed in both directions–have you concluded both that it is more likely than you previously thought that you had overestimated the negative consequences and that you had underestimated them?

          • Kelley Meck says:

            Hm.

            By far the biggest thing I gained was an appreciation of how different minds have different “stopping points”–sort of like stop codons in DNA/RNA.

            Here’s an anecdote to point at what I mean about ‘stopping points’… as a college student, I once wrote a paper about “sustainability” as a slogan. I don’t still have the paper, but I know I framed it under an occasion/position-type topic sentence, where the occasion was “obvious puffery word ‘sustainability’ is basically a perfect tool for liars” and my position was “but even so, it’s better to use the slogan and work to populate it carefully with facts and our true values, than to make no attempt to unite people whose collective action will require simple rallying cries as labels for more detailed values and commitments.” As I saw it, because of problems like the tragedy of the commons and (I didn’t have this brilliant phrase for it yet, but I tried to grope at it) scope insensitivity, nobody could credibly show a revealed preference for caring about the environment/climate/polar bears/what-have-you without immediately rendering themselves indigent (because the problem is too big and there are free riders) or rendering themselves vulnerable to charges of hypocrisy (because they say they care, but aren’t sacrificing in proportion with the scale of the problem). By starting with vague, slogany commitments and building commitment collectively, these problems can be (somewhat, haltingly, and not without problems of Machiavellians rising to the tops of movements) tackled without anyone making caring about the environment self-destructive.

            My paper was at least 2x as hard to write as it would have been to cleverly criticize the brainlessness of the word “sustainability” and write “Sustainability: Empty Rhetoric or a Bad Idea?”. It was 10x harder than it would have been to write a brainless “rah rah sustainability, or otherwise the future will not be sustained!” All I needed was a passable paper with no discernible omissions or typos… why did I try to write a theory of collective action into a short paper in an environmental economics survey class?

            The answer, I think, has to do with stopping points. I was not comfortable ‘stopping’ until the idea I planned to write felt “done”–just like I would feel unhappy turning in a math problem that had a fraction that maybe hadn’t been reduced to its lowest terms.

          • Kelley Meck says:

            At the material-science level, my view of AGW hasn’t changed, nor have my error bars. E.g., still happening, still back-loaded, still pretty tightly matching what models have predicted, still more dramatic a change than anything in the history of the planet since photosynthesis put oxygen in the atmosphere. I still have it flagged to try and find out more about water vapor feedbacks… but I feel somewhat doubtful that there’s something there to find. (Many much more qualified physical sciences people than me have looked, and nobody seems to come back with serious doubts.)

            At the ecological-impacts level, I still think this is an extinction event for double-digit percentages of every phylum in animalia, even assuming humans adopt concerted and aggressive mitigation as a response to the problem. That seems to just follow directly from the material science level and the general sensitivity of ecological systems to this kind of change. Error bars haven’t changed much.

            At the social-impacts level, the “stopping points” way of organizing my thoughts about the disagreements I’ve encountered opens me up to either making very big updates myself, or thinking that many of the people disagreeing with me are missing something pretty big themselves, and *about* themselves. Certainly I’ve very much upped my interest in finding the time to significantly expand the set of facts from which I draw my understanding of how climate will affect human systems, and do not know what I’ll find. Do climate changes cause famines? Do famines cause political unrest? Has CO2 in the atmosphere affected plant productivity positively? Have warmer temperatures meant wetter weather? Have warmer ocean surface temperatures meant heavier ocean-fed storm systems, including wetter hurricanes and more expensive storm damage? Why does the U.S. have more climate deniers than elsewhere? Do governments generally underreact or overreact to tragedy-of-the-commons situations, and are the world’s governments more likely to underreact or overreact to this one?

            I am not sure what I’ll find as I find time to track down answers to these questions. It would be nice to be wrong, and have climate not be a major problem facing the world. I am not very hopeful.

            Edited to add: or anyway, I do not place much hope in the idea that climate isn’t a big problem. I am, in the main, a pretty optimistic/hopeful person.

          • still pretty tightly matching what models have predicted, still more dramatic a change than anything in the history of the planet since photosynthesis put oxygen in the atmosphere.

            On tightness of matching, I have a summary of the performance of the first few IPCC reports, written a few years back.

            On more dramatic a change, are you claiming that what has happened so far is more dramatic than the repeated glaciation/interglacial cycles during the current ice age? You might compare either by change in sea level or change in how much of the Earth’s surface is inhabitable. For the former:

            During the last glacial maximum, 21,000 years ago, the sea level was about 125 meters (about 410 feet) lower than it is today.

            (Wiki)

            Currently it’s about a foot higher than it was a century ago, and the high end of the IPCC projection is for about a meter by the end of the century.

        • RC-cola-and-a-moon-pie says:

          The main climate argument of David’s that I would very much like to see answered is his contention that there are high-cost, low-probability outcomes from averting climate change that may well roughly offset the high-cost, low-probability dangers associated with allowing climate change to proceed, all on the assumption of rough IPCC-level probability distributions. That, to me, seems like one of the crucial points that needs to be answered by proponents of incurring significant expense to avert climate change. A ton of the bottom-line rationales I see on that side boil down to “better safe than sorry,” but that attitude completely goes away if the balance of risks is in rough parity.

          • Thomas Jørgensen says:

            But that is obviously wrong, because climate change can be solved at a net savings over present practice.

            Step one: Stop pretending the electricity market works. Build TVA- and EDF-style quasi-governmental entities to run it. Order them to build standard reactors by the hundreds. This should cause the price of electricity to converge to around 4–6 cents/kWh long term. Not too cheap to meter, but cheap.

            More importantly, it is a grid that does not externalize pollution onto public health budgets, so it is a much greater savings than it appears.

            Second: The market in cars does work, so we do not want to abolish it. But we can lean on it. Announce a long-term plan to tax the sale of gasoline cars harder every year, with the take being applied to a flat rebate on electric car sales. (This tax should self-eliminate; therefore we do not want it in the general budgets.)

            That leaves shipping and aviation. Taking shipping nuclear is also cheaper than present practice, though it does mean you have to end.. most current shipping practices – very few shipping operators could be trusted with a reactor; they’re too prone to cutting corners.

            Aviation.. Well, ammonia synthesized from electricity is a viable aviation fuel.

          • bean says:

            Where are you getting the claim that nuclear shipping is cheaper than current practice? Because I’d love to see a cite for that. NS Savannah was only competitive at 70s oil prices because most of the reactor costs weren’t counted. Naval nuke people are insane. They’re very good at safety, not so good at getting things done.

          • Thomas Jørgensen says:

            .. Uhm. Math. Writing off the cost of a reactor, extra crew, and decommissioning comes to a whole lot less than the lifetime cost of fuel for a high-seas freighter. Emma Maersk – which was designed from the keel up for fuel efficiency – burns 64 million dollars worth of fuel per year.
            Your discount rate has to be insanely high for just building a nuclear boat not to be cheaper than that. (A naval reactor that would move the Emma should cost about 200 mil.)

            This is not just my math, it is the result everyone gets.
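
            The discount-rate claim can be sanity-checked with a quick present-value sketch. The $64M/year fuel bill and $200M reactor price are the figures from this thread; the $12M/year extra operating cost and 25-year ship life are illustrative assumptions, not claims from any source:

```python
def present_value(annual_cost, rate, years):
    """Discounted sum of a constant annual cost stream (ordinary annuity)."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

fuel_per_year = 64e6   # Emma Maersk's annual fuel bill, per the comment
reactor_capex = 200e6  # claimed up-front cost of a suitable naval reactor
extra_opex = 12e6      # assumed extra annual crew/maintenance cost (illustrative)
ship_life = 25         # assumed service life in years (illustrative)

for rate in (0.03, 0.07, 0.15):
    fuel_pv = present_value(fuel_per_year, rate, ship_life)
    nuclear_pv = reactor_capex + present_value(extra_opex, rate, ship_life)
    print(f"discount rate {rate:.0%}: fuel PV ${fuel_pv/1e6:,.0f}M "
          f"vs nuclear PV ${nuclear_pv/1e6:,.0f}M")
```

            On these illustrative numbers, the fuel stream is worth more than the reactor-plus-overhead at any ordinary discount rate; break-even only happens at a rate of roughly 25 percent, which is the sense in which the rate would have to be “insanely high.”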

            Savannah got retired right before OPEC, and the cost of intermediate fuel oil tracks the price of crude very faithfully.

            Everyone who has done the math at modern oil prices agrees it is not even a contest – nuclear reactors would be cheaper. By millions per year, or more relevantly, by quite high fractions of the freight rate.

            The cost of fuel is the dominant consideration for modern merchant marine operations, and despite the widespread adoption of sailing extremely slowly, it is very high. A nuclear ship would cost more, but you would also be able to carry much more cargo per year for a given tonnage by the simple expedient of laughing at the concept of slow steaming, so the higher investment is not nearly as big a deal as it looks – a ship with twice the price tag that goes 3 times as fast is a cheaper mode of moving goods. A ship that goes three times as fast while having a fraction of the fuel cost puts rival shipping companies out of business.

            The issue has never been the economics – but rather, the politics of getting permission to dock nuclear-powered ships in all the relevant freight ports.

            It does set a lower bound on the economic size of freighters – because while the math is drool-worthy for the big boats, the economics of a 15 MW plant in a tramp freighter.. not so much.

          • John Schilling says:

            Emma Maersk – Which was designed from the keel up for fuel efficiency -burns 64 million dollars worth of fuel per year.

            This “math” thing that you speak so highly of, at least to my limited understanding, requires that you also have the annual cost of ownership of a marine nuclear powerplant to include in your calculations. That number does not appear anywhere in your rather lengthy post. Could you provide it, please?

          • Thomas Jørgensen says:

            Savannah had extra costs of 2 million/year (in 1970 dollars, so 12 mil). This is an extremely pessimistic upper bound, because it is the cost of being both the sole user of all the supporting infrastructure and a floating luxury exhibition ship, where a nuclear fleet would be able to share the first between many ships and not have to do the second at all.
            And it is not enough to move the needle at all. You can google some of the proposals people have made over the years if you care to – the estimate of professionals in the shipping business is an overall cost saving of 30–40 percent per tonne-kilometer for large freighters or oil carriers. The numbers get even more ridiculous if you run them with oil at a hundred dollars per barrel, which has happened before and is likely to happen again.

          • RC-cola-and-a-moon-pie says:

            Thanks, Thomas. Very interesting response. I have two reactions. First, I’m surprised to hear that there would be no cost to solving warming. Aren’t there tons of estimates for the costs of, say, Kyoto that show large positive numbers? And Kyoto would barely dent warming. Are all the estimates of this sort just completely wrong and they are missing obvious answers?

            Second, even if the costs of preventing the warming were, in fact, zero, or even slightly negative, I think that David’s argument would not necessarily be refuted. His argument, as I take it, is that the actual results of the warming are not known to be net negative, and could be net positive, and that even factoring in small-chance risks, they too cut in both directions. If this is right then I think it would follow that even if we could push a button and eliminate the warming we should be highly uncertain whether to push it.

          • Thomas Jørgensen says:

            The standard cost estimates all contain several unspoken qualifiers.

            First qualifier: “What would it cost to solve global warming, without recourse to unholy fission?”

            That qualifier is bloody stupid.

            Fission has direct costs slightly higher than minehead coal, and external costs that are hard to distinguish from zero (because things like waste disposal are internalized costs – the reactor operator pays for them).

            Since the external costs of coal are enormous – and this does not count global warming, just straight up the costs of air-pollution and poisoning people – switching to nuclear is a large net saving for the nation as a whole.

            Same goes for the shipping thing. And the cars. Going electric only solves the problem given abundant low-carbon electricity supplies.

            Second qualifier: “And also we must not offend against free-market orthodoxy!” – This is why plans for decarbonizing the grid involve so many subsidies and tax breaks.

            This is also a stupid, stupid qualifier.

            Doing this is expensive, and it also just does not work. The state is a hammer; state interventions that are extremely blunt generally work much better than trying to be delicate.

            The main drawback of nuclear power is that it is extremely capital intensive. Now, there is a very noteworthy feature of our current economic situation, which is that governments currently have extremely low costs of capital – the interest on government bonds is 0-2 percent, depending on which oecd nation you are looking at.

            Thus: Valley Authorities. Do not try to bribe private actors into building the grid you want. They will take you to the cleaners and just not do it.

            Instead set up semi-independent state-owned entities, give them the pile of cash to just outright build reactors by the dozens, and their marching orders. The long-term return on investment will likely be very high.

            And if it is not, that means someone invented a clean energy source which is genuinely better than fission, in which case, the state coffers can afford to take the hit because the economy will be booming.

          • Thomas Jørgensen says:

            Yes. They all have the unspoken assumption that you are not going to just go out there and solve the problem by using treasuries to finance 300 reactors at a cost of capital of under 2 percent.

            Well, actually, they have the more basic assumption that trying to prevent the end of the world as we know it is not a good enough reason to violate the taboo against nuclear power.

            … Or the taboo against dirigisme. Note that this plan does involve just flat-out ruining a really large number of coal barons, gas tycoons, and energy traders by undercutting them. It would also, in very short order, destabilize the Middle East very hard by turning off the money faucet.

            That is, the basic assumption underlying all the cost projections is that climate change is not an emergency and that no actions actually commensurate with the scale of the problem will be taken.

          • bean says:

            @Thomas

            I’d want to double-check those numbers. The Wiki article specifically says that she would have cost the same as a conventional freighter post-oil crisis not including maintenance and disposal of the reactor. I’d assume those came out of a separate budget, and who knows how much that was. If you have a cite on this, I’d love to see it, but I do know that nuclear power isn’t even really considered cost-competitive for destroyers by the Navy, and they have a lot more incentive to avoid refueling than merchies do.

            Re speed, 3x merchant speed is really, really fast. Like LCS fast. Which isn’t practical for a merchie, nuclear power or no. And it ignores the bit where you have to load and unload, and that takes the same amount of time no matter how fast your ship is.

          • RC-cola-and-a-moon-pie says:

            Wait, just to be clear we’re on the same page, treasury expenditures are positive costs that would be incurred to end warming, right? How important something is to do is a separate question from the cost of doing it. (And, of course, David’s argument contends that it is highly uncertain whether we should want to end warming at all.)

          • Thomas Jørgensen says:

            re: Bean. No, it was comparable before OPEC. Post-OPEC, it is not even in the same ballpark. The navy does not see nearly as much benefit in economic terms because they do not spend very much time under steam compared to a freighter. A freighter can easily spend 80% of a year going places at top speed, so it has a much, much greater fuel spend than a warship, which spends most of its time sitting someplace being intimidating.

            RC-cola: Investment, not cost. Nuclear reactors frontload the cost of the electricity they produce.

            The fuel is barely an entry on the ledger, but the 12000 person-years of skilled labor to build one, those do cost.

            But.. with treasuries bearing zero percent interest and plentiful available construction labor, that becomes a joke. (The second part matters. State demand for labor can crowd out the market.. but not when unemployment is what it currently is.) Take out loan, build reactor, sell electricity, pay loan back, and then your children are probably pretty happy – after all, currently built reactors are expected to run for a century. The maturity on the loan is 20 years. So that generation can cut prices, or use the income stream to cut taxes.

            Sure, the headline sticker price is a high number, but you need to compare that number to the price of 20 years of buying coal. Which is higher. Let alone the century perspective.

            Which must, admittedly, be taken with a grain of salt or three – on that time horizon, the chance of technological surprise obsoleting the things gets pretty high.
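
            The “frontloaded cost” point is really a claim about financing: for a capital-intensive plant, the interest rate dominates the per-kWh cost. A minimal annuity sketch can show the sensitivity. The 20-year loan maturity is from this thread; the $8B build cost, 1 GW output, and 90% capacity factor are illustrative assumptions, not figures from any source:

```python
def annual_payment(principal, rate, years):
    """Level annual payment that amortizes a loan (standard annuity formula)."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

capex = 8e9                      # assumed build cost of one large reactor
annual_mwh = 1000 * 8760 * 0.90  # assumed 1 GW output at a 90% capacity factor
loan_years = 20                  # loan maturity, per the comment

# Compare a treasury-like cost of capital with private financing rates.
for rate in (0.02, 0.07, 0.12):
    capital_per_mwh = annual_payment(capex, rate, loan_years) / annual_mwh
    print(f"cost of capital {rate:.0%}: capital component ≈ ${capital_per_mwh:.0f}/MWh")
```

            On these assumed numbers, a 2 percent cost of capital puts the capital component around 6 cents/kWh, in the neighborhood of the 4–6 cents/kWh claimed upthread, while private-sector rates push it substantially higher – which is why the cheap-treasury-financing assumption carries the whole argument.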

          • HeelBearCub says:

            but not when unemployment is what it currently is)

            Uh, what? Prime-age labor participation rate still isn’t quite at its pre-2008 peak, but it’s close. Unemployment is quite low.

            More to the point, the specific skills needed to construct sea going nuclear reactors aren’t just laying around.

            The final, most important, piece of the puzzle is: “How many nuclear events due to sub-standard construction, maintenance, or operation will the market bear?” “How many thefts of nuclear fuel by terrorist organizations will the market bear?” The answer is “far fewer than the expected number”.

            There are so many, many forces pushing against nuke power, but especially mobile nuke, and most especially sea going nuke.

          • bean says:

            The navy does not see nearly as much benefit in economic terms because they do not spend very much time under steam compared to a freighter. A freighter can easily spend 80% of a year going places at top speed, so it has a much, much greater fuel spend than a warship, which spends most of its time sitting someplace being intimidating.

            The fact that you’re looking at this only in economic terms makes it extremely hard for me to take you seriously. Yes, a freighter works harder than a warship, which should mean more focus on fuel economy. Which is why freighters use diesels and destroyers use gas turbines. But a freighter goes from point to point. It never has to dash halfway around the world to respond to a crisis. Freighters don’t UNREP. Warships do it all the time. Not having to unrep fuel would be really nice, saving all the money currently spent on tankers, and have major tactical advantages as well. The USN had nuclear cruisers, and walked away. They have a very active and very powerful nuclear propulsion community, and I haven’t heard of CGNs recently. Merchant lines have none of that, and they have no chance.

            And I’d like cites on Savannah’s operating costs. Wiki disagrees with you, and I’ll take them unless you can give me something solid.

          • Thomas Jørgensen says:

            The dirigiste play is for fixing the grid – I thought the references to the Tennessee Valley Authority and Électricité de France made that clear? And the skills for those mostly are just lying around – the things that eat man-years like a maw there are plumbing, construction, and electrical work, and the double-checking of the above. Which are.. very ubiquitous skillsets.

            Nautical reactors would be nowhere near as labor intensive, because we would be talking about series production in shipyards – Which is the kind of thing that lends itself to automation, as does the obvious desire for consistency of build.

            RE: only viewing it in economic terms.

            No. I view addressing global warming and the general pollution from fossil fuels as an actual priority. I do not want to virtue signal, I want the problem solved.

            That means I care about the cost of solving it, because any solution which is expensive runs the risk of getting undone by a lizard-person with a bad toupee. A solution which is genuinely cheaper overall, by contrast, is going to stick come hell or high water.
            It also means I care about how well various solutions have been proven possible in practice. People have been promising the renewable grid for over 40 years without delivering, while nuclear grids have existence proofs.

            Thus, if you are actually committed to changing the status quo – reactors.

            Nautical transport is technically easy but politically hard: port access was a bitch and a half for the Otto Hahn and the Savannah, but the underlying tech is simple.

            The general problem here is that people are insane. Nuclear gets held to a standard which nobody applies to the status quo. Which, let us be clear about this, is murdering us all by the hundreds of thousands.

            Never mind global warming, just flat out poisons from fossil fuel burning and extraction operations are an ongoing massacre that makes every nuclear accident ever combined look like a spill in aisle 4.

          • bean says:

            Nautical transport is technically easy but politically hard: port access was a bitch and a half for the Otto Hahn and the Savannah, but the underlying tech is simple.

            If it’s so cost-effective, why did the USN stop building nuclear surface ships in the 70s? Because everything I know of naval operations (which is a great deal) suggests that naval requirements are better-suited for nuclear propulsion than merchant requirements. Merchant ships go from one place that has fuel to another place that has fuel. They do not spend four months off the coast of someone who doesn’t like them very much, getting fueled at sea by expensive tankers. The USN going nuclear would let them cut the tankers, and their operating costs.

          • Thomas Jørgensen says:

            Well, unless you want to put reactors in destroyers… – because going off the list of ships commissioned into service, all the USN ever builds are subs (nuclear), carriers (nuclear), and destroyers (not nuclear).
            I mean, I kind of would like to see an all-nuclear navy, but I can see why the navy balks. It would tend to oversize the destroyers even more than they already are.

          • bean says:

            Well, unless you want to put reactors in destroyers… – because going off the list of ships commissioned into service, all the USN ever builds are subs (nuclear), carriers (nuclear), and destroyers (not nuclear).
            I mean, I kind of would like to see an all-nuclear navy, but I can see why the navy balks. It would tend to oversize the destroyers even more than they already are.

            The USN used to have nuclear cruisers. They were very useful for escorting carriers, and most aren’t that much bigger than current destroyers. There were serious plans for a nuclear-powered Aegis cruiser, which got cancelled for fiscal reasons. I don’t have data to hand to give me size comparisons (I get numbers for the Belknap-Truxtun delta of both 700 and 1200 tons from wiki alone) but it’s a very significant cost. That’s why we haven’t seen them come back.

            Also, there’s no danger of an all-nuclear USN. The USN builds a lot of ships that aren’t nuclear or destroyers. Neither the amphibs nor the auxiliaries are likely to go nuclear any time soon.

          • John Schilling says:

            Because going off the list of ships commissioned into service, all the USN ever builds are subs (nuclear), carriers (nuclear), and destroyers (not nuclear)

            The United States Navy did in fact commission two nuclear-powered “destroyer leaders”, USS Truxtun and USS Bainbridge. Both of which were subsequently reclassified as “frigates” and then “cruisers”, because the Navy couldn’t make up its mind about ship classifications. But both were designed to fill the same tactical role as the Burke-class destroyers, and both were slightly smaller than the current flight of Burke-class destroyers.

            Both were failures. As Bean notes, large-ish warships are a much better match for nuclear propulsion than any merchant ship, but the nuclear destroyers were too expensive to operate and it was cheaper to just pay for the damn fuel even if you had to build a milspec tanker to deliver it to you at sea. Same deal with the nuclear-powered cruisers. Only the aircraft carriers, at ~100000 tons and with the unique requirement to steam off at thirty knots in a random direction every few hours, ever really benefited from nuclear power in a surface ship application.

            And possibly a few Russian icebreakers, because they operate in an environment where even military-grade tankers couldn’t reliably deliver them fuel.

          • bean says:

            The United States Navy did in fact commission two nuclear-powered “destroyer leaders”, USS Truxtun and USS Bainbridge. Both of which were subsequently reclassified as “frigates” and then “cruisers”, because the Navy couldn’t make up its mind about ship classifications. But both were designed to fill the same tactical role as the Burke-class destroyers, and both were slightly smaller than the current flight of Burke-class destroyers.

            John, you’re slipping. They were commissioned as frigates, with the hull symbol DLGN, which came from “destroyer leader, guided missile, nuclear.” (Welcome to mid-century USN ship designations.) Later, they were reclassified as CGNs. And they were only slightly smaller than the later California- and Virginia-class DLGNs/CGNs, which filled the same role. All of which were in the same band as the Ticos and Burkes we have today.

            Same deal with the nuclear-powered cruisers.

            The only nuclear ship that was clearly a classical cruiser (as opposed to the lesser so-called cruisers we have today) was Long Beach.

          • ana53294 says:

            Nuclear powered ships make sense for icebreakers. But only Russia bothers to build them, probably because they have the largest frozen coast.

            Canada still uses diesel powered icebreakers, even though breaking ice consumes huge amounts of fuel. If it mostly doesn’t make sense to use nuclear energy for a ship that requires huge amounts of fuel, why bother with anything else?

          • bean says:

            I read the chapter in Friedman’s US Destroyers on nuclear surface ships, and it looks like the nuclear escort was more a victim of politics than anything else. Basically, it didn’t make a lot of sense in the early days, when the ships were relatively cheap and the reactor was thus expensive. As ships got more expensive, the cost delta for the reactor went down. The most relevant number was a nuclear Aegis ship vs a Tico, which was $1.2 billion vs $800 million. However, the nuclear ship (of basically comparable combat power) was the class leader, while the Tico was a follow-on, so I’d guess that the true cost of the nuclear escort is more like $1 billion (FY80 dollars, I believe). I honestly have no clue what the numbers would look like today. The combat systems have only gotten more expensive, but we may not have a good surface reactor available, which is going to drive up the price.

            That said, the operational advantages of nuclear power for naval use are really compelling, and a merchie has nothing like Aegis driving up cost. So it’s still a no for them unless you give me hard numbers otherwise.

  23. helloo says:

    I think this is a rather… missed? metaphor.

    One of the biggest things regarding why martial arts practice a lot is that there are quite a few places where you should do something AGAINST your “natural instincts”.

    A lot of the repetition and focus on form is to cement the response and prevent the instinct from happening.
    This is more true for some than others (I’ve been told that fencing is often quite unnatural in how you’re supposed to respond).

    It is possible to link it to rationality, but not in the way you’re using it and almost certainly not by intuition – at least if said intuition hasn’t changed based on the training.
    At best this is training mental skills by going through battles – mock battles perhaps, but battles nonetheless.
    Without really shaping the style and practicing the format, and given that you’ve admitted your improved intuition is not really transferable, this really doesn’t fit as a martial-arts-class metaphor.

    • Spookykou says:

      Isn’t understanding and working around our biases very similar to the idea of training yourself against your “natural instincts”? Our biases are our “natural intuitions” when confronted with new information or an argument, instead of an attacker. I believe when Scott is talking about his intuitions he is talking about the intuitions that he has worked up through rationalist practice, not his “natural intuitions”.

      After years of looking into this kind of thing

      • helloo says:

        I saw that as more experience than training or drills.

        A veteran might be able to intuitively know what to do or what are some tips and tricks, but those generally aren’t considered to be benefits from training.

        Might those skills and knowledge be collected and then formalized into a martial art?
        Yes, but that’s not a step which is described here.

    • Confusion says:

      I’d rather say that your instincts often aren’t very helpful. A lot of techniques are subtle: they work reasonably well when executed mediocrely and seem to work as well as they should when executed well, but then your teacher gives a few minor pointers and you feel how much more is possible. Even after 1500 hours of training (10 years, 3 hours per week) you’re still lacking the instincts to come up with those corrections yourself; to feel how you could improve your execution. And it takes repeating the technique many times to make those minor corrections a reliable part of your execution.

    • Scott Alexander says:

      I think the instinct one struggles against here is the instinct to assume you already know everything worth knowing, you’re definitely right, your opponents have nothing to teach you, there’s no reason you should change your mind, etc.

      But there’s also a part of martial arts which isn’t *quite* about overcoming your instincts. How do you know which of two different kinds of kicks to use at a specific point in one fight? I think that’s more about developing instincts than overcoming existing ones, and corresponds to questions like “is this one of those situations where I should trust the experts, or not?”

      • helloo says:

        If you need to decide between options, that tends to imply consciousness and thus some type of decision making. That can be learned or taught (always use A except on Mondays), but isn’t really particular to a martial art.

        In fact, that’s rather AGAINST the metaphor of martial arts.

        Martial artists often do not have time to make these decisions. They NEED that split second to block or dodge or whatever.
        You train and drill to use those moves instinctively (or to gain teamwork / be able to do something without question – see military drills). And deliberating goes against this.

        It is not like there aren’t instincts that are against rationality that should be “trained away”. Besides the ones you listed, some also include:
        Not considering multiple sources / bias
        Focusing on what makes “more sense” rather than on what might be more accurate (and not looking at data to check)
        Assuming the opposing side is an attacker/wrong and trying to prove/defend it
        Not understanding what your/their assumptions/givens/postulates are.

        But these aren’t trained away just by having arguments. Otherwise, the way that Facebook and other social media have “encouraged” and increased arguments would have considerably reduced these types of behaviors by now.

        You might be putting together how this blog and arguments and such are in fact being used to suppress these instincts, but that’s not really what you’ve written in this post, besides the single mention of theories. It’s not that I doubt your rationality has improved, just that the metaphor doesn’t fit (though it might be interesting to think of ways to make it fit better).

      • PeterDonis says:

        “is this one of those situations where I should trust the experts, or not?”

        I’m not sure “whether or not to trust the experts” is the right way to frame this question. Personally, I think that if you care enough about the answer to a given question to even wonder whether you should trust the experts, you shouldn’t trust them; you should learn enough to make your own independent assessment. So the only situation where trusting the experts would even be an option would be where you don’t care about the answer anyway.

        To go with the martial arts analogy some more: part of martial arts training is learning to use your own unique attributes. The right way for you to act in a particular fight might not be quite the same as anyone else’s. Sure, there are general techniques and principles that everybody should learn, but in the end it’s you in the fight, not those who taught you.

        Similarly, part of rationality is learning to use your own unique set of attributes, and those include your goals and preferences. Your own goals and preferences won’t be exactly the same as anyone else’s. Rationality does not consist in finding the One Right Answer, because in most of the interesting cases, there isn’t one.

  24. Cerastes says:

    I live in fear of someone asking something like “So, since all the prominent scientists were wrong about social priming, isn’t it plausible that all the prominent scientists are wrong about homeopathy?” I can come up with some reasons this isn’t the right way to look at things, but my real answer would have to sound more like “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”

    I may be biased because of my field (which could loosely be called experimental animal physiology), but one of my major criteria for “should I believe this study” (among many) is a simple question: “What is the mechanism?” If a study (or several) says that X is linked to Y in people, and we already know that there’s a hormone or protein or system which is capable of both X and Y, I’m way, way more likely to believe it than if the mechanism is speculative, or worse, unknown.

    If someone points to a study and says “this drug interacts with leptin receptors etc. to reduce subjects’ calorie intake and leads to weight loss”, I’m pretty good with accepting that on face value given what we specifically know about how leptin works. If another study says “People who eat this herb lose weight because of appetite suppression,” I’ll be skeptical but open to the possibility, because plants are crazy chemical factories and it’s entirely possible that one happens to produce something which interacts with the various appetite hormones in some useful way. But if a study says “Acupuncture results in appetite suppression and weight loss”, I will be immensely skeptical, because no matter what statistical method you used, there’s quite simply no known, well-established mechanism behind accupunture, and without that, it’s way more likely to just be statistical errors or somesuch.

    I’m not closed to the reality of empirical phenomena for which we don’t know the mechanism, but demonstrating their reality requires much more, and more definitive, evidence than something which follows well from known principles. It’s a bit like the “extraordinary claims require extraordinary evidence” concept, but even for mild claims – the magnitude of evidence required to convince me is inversely proportional to the plausibility of the proposed mechanism.

    • Scott Alexander says:

      And I may be biased, because in my field nobody has any idea what causes anything, and in fact often reasons back from “Well, this intervention worked, so maybe the underlying territory looks like this…”

      But biases aside, I don’t think mechanism is a good way to solve this problem. I can come up with a plausible biological mechanism for why vaccines cause autism (which they don’t), but not for why oily fish affects health outcomes much more than fish oil (which it does).

      • Deiseach says:

        why oily fish affects health outcomes much more than fish oil (which it does)

        Hmm, that’s interesting.

        (1) Could it be that people who make a decision to eat oily fish also change other aspects of their diet as well, whereas people who take fish oil supplements may have less healthy diets and just toss down a few capsules in the hopes it will do something for them?

        (2) Argh, where was this evidence *coughcough* years ago when my mother was dosing us with cod liver oil? Yes, it does taste as horrible out of the bottle as you’ve heard.

        (3) The Vatican II Church should never have made fish on Friday voluntary instead of retaining the compulsory nature of the fast 🙂

      • Bugmaster says:

        because in my field nobody has any idea what causes anything

        Have you considered that perhaps your field is not as rational as you thought it might be? 🙂

        • Aapje says:

          @Bugmaster

          Dealing with black boxes is not irrational, but it tends to require a lot of trial and error.

          • Bugmaster says:

            All of nature is a black box, though…

          • Aapje says:

            No, there are plenty of things in nature where we know the detailed mechanisms at work and can predict things based on that actual knowledge.

    • Doctor Mist says:

      Your varied spelling of “acupuncture” spurred me to wonder why it has only one “c”. Though I knew the spelling, I think subconsciously I’d always assumed the meaning was “accurate”, “precise” — getting the needle in exactly the correct spot. But no, “acu” is from Latin “with a needle”.

      (Now I’m wondering why Latin had such a short particle for “with a needle”.)

  25. rahien.din says:

    [Note: this may be overlong, but I have been thinking about this a lot lately. Forgive it or hide it.]

    If this blog still has value to the rationalist project, it’s as a dojo where we do this a couple of times a week and absorb the relevant results.

    What distinguishes a dojo from a street fight is not expertise, not fighting spirit, but a particular kind of civility.

    Sparring is at its most effective when it is full-speed, when you aren’t thinking about how hard (not) to hit. Sparring partners take your force so you can learn to hit, and you do the same for them. You risk injury for each other, and that injury is neither intentional nor unintentional. The civility underlying sparring is “Even if I/you do things that could injure you/me so that we can learn to fight, we know that neither of us enjoys the thought of the other being injured.”

    Think of Lawrence Taylor’s (#56) reaction when he notices that he broke Joe Theismann’s leg. That’s a particular kind of civility. [Ed: Don’t watch the whole thing unless you want to see Joe Theismann’s tibia snap.] Or, think of “We need emotional content… not anger!”

    We’ve all been part of internet communities where people were genuinely trying to damage each other. I have, both as a mouth-agape spectator, and as one of the people inflicting damage. I learned an awful lot from those interactions because few things will hone your ideas better than a full-speed opponent – whether that’s in the dojo or in the street. But, I’m not part of those communities, anymore. I burnt the house down around me. Can’t learn anything more.

    SSC is different. This place has that warrior civility. We’re not exactly looking out for one another… but the goal isn’t to murder each other, either.

    I have sometimes wondered why. Sure, Scott excises the vicious and the saboteurs, but often he doesn’t even have to. When the newbies get out of hand, it’s usually some of the other posters admonishing them. I myself have compulsively defended this civil atmosphere – to a weird degree. Sure, I really enjoy how we can have controversial discussions here, but the degree of compulsion I felt was not explained by intellectual enjoyment alone.

    This civility is what’s sometimes missing from the rationalsphere in general. Rationality, at its very heart, is incivility.

    On one hand, that’s by design. Immobility is the chief resource that an algorithm brings to the table – an algorithm doesn’t and shouldn’t care about when it is wrong, because it assumes that its wrongness occurs at the optimal rate. It’s drilling down to the inviolable and indisputable and non-conscious source code of deciding, and standing immovable even when the world cracks around it.

    On the other hand, you have The Prophet Eliezer intellectually ripping people in half at cocktail parties. Yes, you sure can “have some fun with people” that way. But not forever. And if you excise people in that manner (if, like me, you burn a house down around you without realizing it) you have no recourse when your rationality slides into the various analogues of scientific forestry.

    This is not unique to the rationalsphere – the most visible fiefdom of incivility is fundamentalist religion. Both use the same currency. For instance, that’s why Sam Harris’ thought processes resemble those of his opponents. He reads the Koran and draws the same uncivil conclusions as the Islamists. Harris is better because he can see, at least, that we should not abide by those conclusions, but he’s still shackled by them and to them. From the other direction, consider that the Parable of the Good Samaritan is not about hypocrisy, but actually about abandoning formalism.

    It is for this that Book of Cold Rain says one must never take the shortest path between two points.

    Within the rationalsphere, there is a diverse, blind-men-vs-elephant nomenclature for this kind of civility. “Sportsmanship.” “Wisdom.” “Schelling point.” “Epistemic humility.” “Chesterton fence.” “Clarity didn’t work, trying mysterianism.” But these are just magic spells we cast on ourselves.

    The whole point is, if we want to conduct ourselves in the best manner possible for conscious beings, the unblinking eye of rationality is both necessary and insufficient. Sometimes, the algorithm doesn’t work. Sometimes, you build a system that inappropriately increases uncertainty or, worse, manufactures inappropriate certainty. Conversely, sometimes the way out is through. How do you know when to abandon that system, or when to cleave to it?

    A priori, you can tell yourself “Okay, sure, it’s just a formal system, and boy don’t we know about formal systems,” or, “Oh yeah, like how our best AIs operate by maximizing their future options,” or, “Intelligence is knowing a tomato is a fruit, wisdom is knowing that ketchup isn’t a smoothie.” But in the heat of the moment, it’s not something you can rationally decide. You have to be able to switch systems when it’s appropriate to do so. What made Lawrence Taylor the Platonic ideal of an outside linebacker is not what moved in him upon seeing Joe Theismann’s ruined leg.

    The incantation is “I have recognized an opportunity to apply one of {Schelling point mechanism, epistemic humility mechanism, Chesterton fence mechanism, …},” but the actual effect is “I am going to cast this illusion on myself, which will allow me to switch systems without entirely abandoning the idea of rationality.”

    It’s a mild and adaptive form of self-hypnosis.

    This is why AI is not and will never be conscious, any more than a stone. Why it is ever the golem in silica. Why it is scary, but also why it is vulnerable to conscious beings. “I’ll just pull the plug” means “I possess a counterspell, whereby I may abandon a deranged formalism.” The people who are worried about AI safety are actually worried that they don’t have a general counterspell for a deranged formalism.

    (Probably they should be, because they tend to rip people in half at cocktail parties.)

    I’m not even talking metis-vs-Lysenko. Metis is also insufficient and blind. For every miscarried forest of sterile evenly-spaced fir trees swept clean by a fire, there is a vibrant forest of microscopic transistors on an integrated circuit ablaze with calculation. We have to be capable of abandoning metis, and just as you can’t rational your way out of rationality, you can’t metis your way out of metis.

    We think that what we do here is training ourselves in applied rationality, or, solving problems, or, providing checks and balances. We’re not. (Moreover, the rest of the internet does that better – nothing prunes one’s fighting system more effectively than a sucker punch or a Glasgow smile.) We might even be consciously grateful for the civility here – as refreshing as cool petrichor in a green wood after a lightning storm – and how it permits us to go full-speed and yet remain a community. That’s not exactly correct, either.

    We are here not because the civility allows us to work through hard problems. We are here because [working through hard problems with civility] is the paired kata.

    It has to be a kata, and it has to be a paired kata.

    The skill that paired kata trains – the very attribute that everyone is here to acquire – is system switching. It’s counterspells. It’s knowing how to flinch when you can’t say exactly why you flinched. It’s being able to say “When there is an opportunity, ‘I’ do not change, ‘It’ changes all by itself.”

    • J Mann says:

      That judo bit in the sequences makes me really mad. I don’t have much opinion on whether EY is leading a sex cult,* but that sequence leads me to believe that he’s either a gnomic instructor out of zen parables who likes slapping people to supposedly induce enlightenment, or something of a myopic jerk.

      • Randy M says:

        I’m waiting on your footnote

        • J Mann says:

          Sorry – I’m near the bottom on editing quality, and will work to improve!

          Originally, I had written the following

          * Specifically, I think it’s unlikely as I understand the terms, but with low confidence.

          But then I decided that “don’t have much opinion” captured that sufficiently and forgot to delete the *.

        • Nick says:

          Not J Mann, but my opinion these days on whether EY is leading a sex cult is “Folks saying it need to put up or shut up already.”

          • Randy M says:

            Eh, I don’t really have an opinion about that, but dangling asterisks leave me in suspense.

          • Nick says:

            Are you the guy who always has to close other folks’ parentheticals in a chat? Sometimes I leave them open just for kicks. 😀

          • carvenvisage says:

            @Nick I’m not sure what other stuff (if any) prompts people to say that, but his old OKCupid profile is probably still up somewhere.

          • Aapje says:

            Here it is.

            Fun to read/cringe at.

          • J Mann says:

            In defense of EY’s profile, I have to say that respondents know exactly what they are getting, which seems pretty rational.

          • Viliam says:

            Seems like these days all you need to become a sex cult leader is:

            1) to be polyamorous, and

            2) to announce it on OkCupid.

            I wonder why more guys don’t do this. Is this secret PUA technique patented?

          • John Schilling says:

            In order to be recognized as a sex cult leader, you first have to be recognized as a cult leader generally. That, plus overt polyamory, will likely result in such a reputation.

            EY’s reputation as a cult leader generally hinges on his:

            A: having developed or claimed to develop a new and superior way of understanding the universe
            B: having promoted this as a way to achieve a significant level of self-improvement
            C: having made profound eschatological claims on the basis of this understanding
            D: having written scripture to teach all of this, with extensive jargon impenetrable to outsiders
            E: having collected followers in communal living arrangements to receive and evangelize his teachings
            F: having directed his followers to donate significant sums of money to particular causes, including a research institute he directs and which pays his salary.

            This does at least vaguely match most not-explicitly-religious definitions of a “cult”, specifically what Bruce Campbell would call a service-oriented instrumental cult, but a vague pattern-match probably doesn’t merit using a term with the pejorative implications of “cult”.

      • Bugmaster says:

        That entire bit belongs on /r/thathappened; all it’s missing is the standard “and then everyone clapped” coda. Don’t get me wrong, I understand that it’s supposed to be allegorical, but still — even the New Testament is written in a less arrogant style…

      • thevoiceofthevoid says:

        A classic example of how being clever does not protect oneself from being an @$$hole.

    • Scott Alexander says:

      Frick, don’t quote the Book of Cold Rain at me, I forgot I disclosed that part of my inner mythology somewhere and you freaked me out.

    • rlms says:

      I fervently hope you are an outlier in including Sam Harris in the rationalsphere.

      • rahien.din says:

        Intentionally left ambiguous as to whether he is fundamentalist, rationalist, or both.

      • christhenottopher says:

        A spectre is haunting the rationalsphere – the spectre of Sam Harris.

        But actually I’ve been seeing Harris at the periphery of every online community I hang around and I’m never impressed by him. I kind of wonder what that says about a person who finds they keep joining groups and those groups start referencing the same not-really-that-great figure?

    • Bugmaster says:

      On a side note, “don’t think, feel!” seems like great advice for martial arts, but kind of exactly the opposite of what rationality is supposed to be…

      • Toby Bartels says:

        You want to practise to the point that you can feel rather than think (because thinking is too slow). Reaching out with your feelings on your very first day is only for movies.

  26. Lapsed Pacifist says:

    What is the best demonstration of rationality? I would love to see some Rationalist Masters demonstrate their own prowess before accepting that there is value in studying under them.

    I actually do study martial arts. There is a Kung fu school in my city, where I went and found that none of the students could throw me. I discovered that their teacher did not believe in competition, and that his students were not allowed to resist the techniques in training. I asked to fight or spar with the teacher, and he demurred.

    Can anyone show me a concrete instance of ‘Rationalist’ technique being effective, such that I can show a third person and have them understand it? If not, you may want to consider that your kung fu is not good enough to be taught, and that teaching it may not be responsible.

    • andrewflicker says:

      I believe Tetlock’s superforecasting tournaments are basically this, LP, and in general I think well-calibrated predictions over diverse domains where the “predictor” lacks significant domain expertise are a good signifier of rationality skills.
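      The calibration criterion above can be made concrete with a proper scoring rule. Here is a minimal sketch using the Brier score; the function name and the example numbers are illustrative, not drawn from Tetlock’s actual tournaments:

```python
# Brier score: mean squared error between probabilistic forecasts
# and binary outcomes. Lower is better; always guessing 0.5 scores 0.25.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, well-calibrated forecaster beats an uninformative hedger:
sharp = brier_score([0.9, 0.8, 0.1], [1, 1, 0])   # 0.02
hedged = brier_score([0.5, 0.5, 0.5], [1, 1, 0])  # 0.25
```

      Tetlock’s tournaments score forecasters on many such questions across diverse domains, which is what makes a persistently low score hard to attribute to luck or narrow expertise.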

    • moridinamael says:

      I have observed that “rationality techniques that work” often cease to be called rationality techniques.

      Is staying hydrated and getting enough sleep a rationality technique? Well, it’s certainly the kind of thing that any Rationalist would recommend, but Rationalists hardly have a lock on this kind of basic lifestyle advice.

      Another thing Rationalists recommend is meditation. But they didn’t invent it, they just identified it as empirically useful.

      When you think about epistemic rationality, well, I think people with a bit of practice in this area do outperform the norm, but it’s very difficult to screen that effect off from high intelligence. And it’s even more difficult to filter out the inherent noise related to the fact that any problem domain which will qualitatively reveal the benefit of high skill at epistemic rationality will by design be an unusually confusing and perhaps poorly defined problem domain.

      It’s all a bit like if you invented a martial art called “do what works fu”, and stole all the most effective strikes and grapples and stances from every other martial art. This is indeed what “MMA” is – but MMA doesn’t take credit for these techniques. MMA isn’t a “thing” in the same way jiu-jitsu is; it is a meta-process that takes all available techniques as inputs and yields functional combinations of those techniques as output. MMA tournaments serve as the force which selects among the techniques.

      • Bugmaster says:

        I understand that there are lots of complexities involved. However, at the end of the day, you can always call a big tournament on a secluded island somewhere, and observe first-hand whose kung-fu and/or meta-process is strongest. This doesn’t seem to work for rationality techniques. Sure, some people are very smart, and some of them achieve great things; but their probability of success does not seem to be related to their practice of rationality techniques. In fact, Scott himself even states that he just goes on intuition most of the time.

        I can’t speak for Lapsed Pacifist, but personally I’d like to see some actual evidence that disproves what I said above; I have a feeling he’s asking for the same thing.

        By analogy, I want to see some evidence that you won because your fighting style was superior, and not just because you’re 7 feet tall and naturally muscular, fighting a couch potato like myself. In that situation, it doesn’t matter what style you use, you’re going to win pretty much regardless.

        • Luke the CIA Stooge says:

          I think the test for rationalist master would be the same as the test for kung fu master.
          You are dropped on a vast deserted island with survival equipment.
          Slightly-less-deadly battle royale rules: outwit your opponents to kill them or force their surrender.
          (Oh, and there’s a ton of money as a prize to compensate for the risk – a rational person needs more than glory.)
          The winner is the most rational, the survivors are the second most rational, and the dead are the irrational. (Repeat until we accept a God-Emperor.)

          If Rationality is just systematic winning, then the test works.
          If however we don’t expect rationality to be the deciding factor, then that kinda discredits rationality.

          Personally I expect the test to favor ruthlessness, immorality, and nihilism. So I think it’s a good test for rationality ;b

          • zqed says:

            @Luke the CIA Stooge:

            If Rationallity is just systematic winning then the test works. If however we don’t expect rationality to be the deciding factor, then that kinda discredits rationality.

            I’m sure you’re joking, but let me explain where this goes wrong.

            Rationality as systematic winning arises in a specific decision-theoretic context, when two agents play the same role in the same fixed game with the aim of achieving a high score. It means that if agent X is more rational than agent Y, then X should not systematically and predictably achieve a lower score than Y.

            In your proposal, the subjects X and Y are playing two different roles/games. X is playing “survive against Y” and Y is playing “survive against X”. Since the games are different, rationality as winning does not mandate that the more rational agent should achieve a higher score.

          • Luke the CIA Stooge says:

            @zqed

            I think your interpretation gets the formal implications of decision theory right – that is what it does and does not imply – but within the broader rationality community I think the broader definition of rationality is the one that’s used.

            Like Yudkowsky’s final exam in HPMOR was literally “this is an impossibly one sided fight, how can rational Harry win against these odds?”, and the “rational” answer was just the most plausible way he could still win the fight.

            Like my glib joke “if rationality is just systematic winning then we can just decide who’s rational with a formal death battle” is actually what’s implied by how the rationalist community, and Yudkowsky in particular, envision rationality: a fully general correct decision maximizer.

            And to be fair to them, I don’t see where they’re wrong to define rationality that way, and I don’t think my reductio discredits them. Like the kind of rationality they want is the kind that would maximize your chances in a death battle, or a war, or a career, or life.

    • sty_silver says:

      It’s an interesting question, but I’m not sure what proof of effectiveness you have in mind. Suppose rationalism was as effective as people claim – what would a demonstration look like?

      In terms of the single most impressive thing rationalists have done, I would say these two papers by MIRI, but you could argue they don’t count as rationality outputs (also you’d have to read them to judge their quality, and then I suppose it might still be debatable). The next most impressive thing would be convergence on truth; I think this is demonstrated by this survey and by the focus on AI, but this won’t impress you if you disagree with the conclusions. I think the sequences are impressive and useful, but this won’t impress you if you disagree or don’t want to read them. The focus on signaling is impressive and incredibly useful in daily life, but this won’t impress you if you don’t agree. That’s true for all epistemic results.

      CFAR teaches techniques on how to better proceed in disagreements, but what would a demonstration there look like? People telling you they changed their minds? Or statistics?

      Generally, studying rationalism is supposed to make you better at 1. achieving your goals, 2. figuring out what’s true. I don’t know how you would measure either.

      • Bugmaster says:

        Suppose rationalism was as effective as people claim – what would a demonstration look like?

        I don’t know, do rationalists ever make any specific claims? Martial artists do: “I can kick your ass”, or, more formally, “I can win this tournament by points or knockout”.

        Generally, studying rationalism is supposed to make you better at 1. achieving your goals, 2. figuring out what’s true.

        As I’d mentioned above, money features prominently as an instrumental sub-goal in most goals; so if rationalists were significantly richer than the general population; and if their wealth could be directly linked to their decision-making processes (as opposed to, say, inheritance); then I’d take it as evidence for their position.

        • thevoiceofthevoid says:

          Many rationalists have an annoying tendency to donate loads of their time, money, and/or focus to weird things like MIRI or wild-animal suffering research or bed nets, which might complicate that metric.
          Though you might still find a good number of them making pretty good money as software engineers, and it’s your choice whether to count that as a rationality win.

          • LadyJane says:

            There’s a very large number of non-rationalists making “pretty good money” as software engineers (and plenty of people using their intelligence, education, and skill sets to make much greater amounts of money in other fields) so I wouldn’t consider that a “rationality win” in any meaningful way.

        • sty_silver says:

          As I’d mentioned above, money features prominently as an instrumental sub-goal in most goals; so if rationalists were significantly richer than the general population; and if their wealth could be directly linked to their decision-making processes (as opposed to, say, inheritance); then I’d take it as evidence for their position.

          That really would be zero evidence for anything. If you take a look at the survey I linked, you see that the population is highly non-representative of average USA citizens. You’d have to control for a thousand things, and then there’s the fact that maximizing personal wealth is not something you’re actually supposed to do. In many cases, I’d consider a luxurious lifestyle as evidence against someone being a good rationalist.

          Many rationalists make lots of specific claims, like “god doesn’t exist” or “Many worlds is true” or “the existential risk of AI is really high” or “you should invest in bitcoin”.

          • Bugmaster says:

            In many cases, I’d consider a luxurious lifestyle as evidence against someone being a good rationalist.

            I was talking purely in terms of income, not how the person decides to spend it.

            Many rationalists make lots of specific claims, like “god doesn’t exist” or “Many worlds is true” or “the existential risk of AI is really high” or “you should invest in bitcoin”

            Those are all factual claims, and not claims about their capabilities (with possible exception of the last one). Martial artists can also claim that “god doesn’t exist”, but that tells us nothing about their fighting skills, just as it tells us nothing about the rationalists’ intellectual capabilities. On the other hand, when a martial artist says “my kung-fu is best, and it allows me to defeat up to 10 opponents at a time”, this is a claim about his fighting prowess that is reasonably specific and readily verifiable.

    • e.samedi says:

      “What is the best demonstration of rationality?” First, reject the notion that the pursuit of rationality is something new. Off the top of my head, the best demonstrations I can think of are: Plato’s Socratic dialogues, the medieval quaestio format (see the form of argumentation used by Aquinas in the Summa Theologica), and the Encyclopedia of Diderot. Logic, grammar, and rhetoric remain the foundation and sine qua non of “rationalist” technique.

      We are fortunate that we have new tools to layer on this foundation, the most important of which in my view, are ones mentioned in “The Martial Art of Rationality”: probability and statistics, and cognitive psychology. The latter is especially interesting because it informs us of the limitations of the tool we use for reasoning, i.e. the human mind.

    • Wrong Species says:

      If you change your mind on some subject about which you are heavily biased, and it’s not because of peer pressure, then you are in the top 1% of rational people. Even if you are wrong, it’s a strong signal about your ability to overcome bias.

      • Aapje says:

        There are many ways to change your mind:
        – Decide that the bias is wrong, for example: due to new evidence, I no longer believe that snails are being discriminated against in our society
        – Give up on one claim made in favor of your bias, replacing it with a different claim supporting that bias, for example: this IAT test administered to snails seems too unreliable, but police statistics show that snail stomping happens a lot.
        – Adopt a claim that goes against your bias, while still believing that the balance of evidence favors it.
        – Weaken a claim to some extent.

        Some of these are much stronger than others and most can also differ in magnitude. I doubt that only 1% of people are capable of the weaker ones.

  27. Ilya Shpitser says:

    “I’ve been thinking about what role this blog plays in the rationalist project. One possible answer is “none” – I’m not enough of a mathematician to talk much about the decision theory and machine learning work that’s really important, and I rarely touch upon the nuts and bolts of the epistemic rationality craft.”

    This induced a bit of a whiplash in me, especially in light of the fencer analogy.

    There are two parts to making positive changes: (a) knowing what to do (this involves learning from books/blogs/etc), and (b) actually trying to do it (the latter part involves motivation, repetition, social support, trying things empirically, error correction, etc.)

    In the internet age, I think we are oversaturated with (a), and limited by (b). I think the marginal value of more blogging about rationality is probably zero, while the marginal value of getting more social and cultural infrastructure in place to enable (b) is high.

    Re: ML and decision theory work that you think “is really important.” How did you decide, not being enough of a mathematician as you say, that it is really as important as you say?

    • Scott Alexander says:

      “Re: ML and decision theory work that you think ‘is really important.’ How did you decide, not being enough of a mathematician as you say, that it is really as important as you say?”

      How does one decide that global warming is important, if one isn’t a climatologist? How does a diabetic decide it’s important to take her insulin, if she isn’t a doctor?

      • rlms says:

        How does a diabetic decide it’s important to take her insulin, if she isn’t a doctor?

        Easy — she stops taking it, feels sick, changes her mind.

        How does one decide that global warming is important, if one isn’t a climatologist?

        Much more difficult. Some possible approaches: go with the consensus view; take the assumption that global warming is real and do relatively simple reasoning about the consequences of it (e.g. analogising climate change to acute changes in temperature, looking at the effects of previous climate change). These aren’t really available for AI, because it’s never happened before (well, you can try analogising to human intelligence but I think that favours the opposite conclusion — human intelligence hasn’t foomed).

        • Jiro says:

          Easy — she stops taking it, feels sick, changes her mind.

          If you substitute antibiotics for insulin, doctors have to keep reminding patients to finish up their prescription even if they feel better. There is a reason for this.

      • Ilya Shpitser says:

        > How does one decide that global warming is important, if one isn’t a climatologist?

        Presumably listen to climatologists. What do your decision theorist and machine learning friends say?

        • ec429 says:

          As a general rule, listening to practitioners of X is not a good way to find out if X is important.
          Theologians will tell you theology is important. Bankers will tell you that their bank is so important you have to bail them out. I will tell you that your entire civilisation depends on Linux kernel developers. None of these should be particularly strong Bayesian evidence to you.

          Climatologists are one source for whether global warming is happening, what’s causing it etc. But personally, I decided whether it’s important by listening to economists. (Specifically, David Friedman.)

          • The Nybbler says:

            Beware the man of one economist.

          • John Schilling says:

            Economists are generally quite good at quantifying how important something is, presuming it is true in the first place. And there’s definitely much to be learned along the lines of, “If IPCC5, then we still muddle through”.

            But if you want to know whether e.g. IPCC5 can be relied on in the first place, then you probably want to talk to scientists who are specifically not climatologists.

          • makj says:

            (Specifically, David Friedman.)

            (Who’s also a physicist.)

          • Ilya Shpitser says:

            I like your “follow the money” heuristic.

            That said, deciding whether climate change is important is probably going to involve evaluating claims about climate change, which is very difficult indeed to do without input from actual climatologists. Perhaps their place is as oracles for answering factual queries only.

            I don’t self-identify as “an ML researcher”, but I can probably do a reasonable job not actually lying about factual claims about ML.

          • quanta413 says:

            I think it’s a good idea for scientists to function as oracles for factual queries. But scientists are also people so they should try to act morally. This creates some grey areas where different people lean different ways on whether or not to answer a factual query at all and whether or not to change how you communicate depending on the moral content of the query.

            I lean strongly towards “scientists should try to restrict their behavior as scientists to answering factual questions in as direct and literal a manner as possible”. I think that answering more complex non-factual queries involves expertise that scientists don’t necessarily have or often involves adding facts from outside their field.

            It doesn’t come up a lot because most of the questions a scientist will answer will be the ones they personally chose. It’s not clear to me what the best way is to give non-scientists influence over which factual questions the scientific process treats as worth answering. Obviously, they usually won’t have very useful things to say about what to look at specifically. But broad brush, maybe. For example, the NIH pushes scientists one way and the NSF another.

          • Ilya Shpitser says:

            To be explicit about the “between the lines” stuff that I am sure Scott got already, the way Scott talks about this stuff makes me think he’s way overconfident about what’s important.

          • albatross11 says:

            I’d say the most important job of scientists (detectives, journalists, historians) is to try to get as good a picture of reality as possible and share it honestly. When you’re in your role as a scientist, you should be trying to make sure you are telling the whole truth as you understand it, with all appropriate modifiers. (Yes, the journalists will turn your “slowed down tumor growth by 4% in a mouse model for ovarian cancer” into “cured cancer”, but *you* should be telling the truth.) The truth you tell then becomes an input into a million other people’s thinking, which you can’t possibly know or predict or understand because you don’t have their expertise.

            This is one reason I am so opposed to the “noble lie” idea–the notion that sometimes, scientists, journalists, detectives, etc. should misreport the truth to achieve some valuable social goal. You have no idea who will be hearing your noble lie or how they’ll be applying it, and you *can’t*–you don’t know what all those people know[1]. (Some of those people are in the future, making decisions based on the noble lie you taught them in high school or college because they’ve never revisited the subject since then.)

            Scientists also have opinions of their (our) own, sometimes on social issues, sometimes on scientific issues. But I think it’s really important to distinguish between “this is the best currently available picture of reality” and “this is what I think might be going on” and “this is how I hope things will be so that my desired kind of society may arise.”

            [1] ETA: This is basically parallel to von Mises’ argument about the impossibility of running a fully planned economy. No one person or organization knows enough to set the production quotas for everything; that knowledge is distributed across all the people in the society, and includes stuff like individual preferences between work and leisure, detailed engineering tradeoffs involving alternative materials that won’t be visible until someone building generators or radios finds out there’s no copper available, etc. It’s basically the same problem here, but it’s messier, because instead of prices, we have more general knowledge moving through the society.

          • HeelBearCub says:

            Specifically, David Friedman.

            If you think this means that you are “listening to economists”, I submit that you should go back a step or three. What you are most likely doing is specifically rejected by theoretical Rationality (if not the actual practice) as rationalization.

            Substitute any sufficiently ideologically motivated economist for Friedman and I will tell you the same.

          • Specifically, David Friedman.

            If you think this means that you are “listening to economists”, I submit that you should go back a step or three.

            If I correctly understand him, his point is that the arguments I offer he finds more convincing than the arguments other people offer in the other direction, and I happen to be an economist, not that the fact that my arguments are being made by an economist is what makes them convincing, which would be an error.

          • HeelBearCub says:

            @David Friedman:
            He’s arguing that he’s taking your opinion to be that of “economists”, that you represent the consensus view of the field vis-a-vis climate change.

          • ec429 says:

            He’s arguing that he’s taking your opinion to be that of “economists”, that you represent the consensus view of the field vis-a-vis climate change.

            Nothing of the sort. I don’t rely on “consensus view”. Rather, I have read a sufficient amount of economics that I believe myself able to evaluate (though not, with any degree of confidence, to initiate) arguments on the subject. I have read David’s economic arguments on the global warming issue, and found them convincing; whether others with the label ‘economist’ agree with those arguments is not really relevant. (Consensus in economics is generated neither by an efficient market nor by incontrovertible experimental results, thus (per EY) there is no reason for epistemic modesty.)

            Beware the man of one economist.

            I have read other economists; but for some reason Adam Smith never mentioned the IPCC’s 5AR. You can imagine how disappointed I was to slog through the entire Wealth of Nations and not see a single sentence about the impacts of cap and trade on long-term growth… 😉

            … /me waits for David to point at a passage from Smith that can be interpreted in precisely those terms…

          • HeelBearCub says:

            @ec429:
            Then you aren’t deciding whether global warming is “important” by listening to “economists”. You are deciding by listening to David Friedman. Those are two quite different things, and you should be clear on the difference.

            If you think that listening to David Friedman is the same thing as listening “to economists” you are making a big error. If you think that this is generalizable, that you can simply pick and choose which people count as experts in a field, you are making an even bigger error.

          • ec429 says:

            @HeelBearCub

            Then you aren’t deciding whether global warming is “important” by listening to “economists”. You are deciding by listening to David Friedman. Those are two quite different things, and you should be clear on the difference.

            Clarification: I decide by listening to economics (I initially misspoke when I said economists), one of my main sources for economic arguments about GW is David Friedman (who writes in terms of economic arguments, on account of his being an economist), and I then evaluate those arguments against my own understanding of economics.
            This is not the same as just believing X because David Friedman, Who Is An Economist Doncha Know, says X.

            If you think that this is generalizable, that you can simply pick and choose which people count as experts in a field, you are making an even bigger error.

            That’s not what I claim to do. I don’t care who’s an “expert”, I care what their arguments are. Admittedly, once someone has built up a record of (from my perspective) reasoning correctly, that becomes Bayesian evidence that their future positions will also be correctly reasoned — but evidence which is screened-off once I actually read their arguments for those positions.

            It is those who choose to defer to the “consensus” of “experts” who believe they can reliably identify the experts in a field. On this subject at least, I believe I am better able to evaluate arguments than persons.

          • HeelBearCub says:

            You are still picking and choosing. In addition, your initial statement makes even less sense substituting economics for economists, as Friedman is most definitely not economics as a whole.

            The proper way to do this would be to understand multiple general arguments within the field of economics about how “important” AGW is. Listening only to Friedman doesn’t get you there, although Friedman potentially might point you at other relevant arguments. Still, Friedman has a well known bias on AGW, so you really need to take what he says on it with the proverbial grain of salt and make sure you evaluate multiple independent sources.

            As to whether Friedman is trustworthy, I find him to be highly disingenuous when it comes to arguments, especially about climate change. Among other things, he continually switches from the motte of “climate change is most likely something we can adapt to at a low enough cost” to the bailey of “and you can’t trust climate scientists anyway, so you should doubt whether climate change is occurring”.

            Among other things, he continually switches from the motte of “climate change is most likely something we can adapt to at a low enough cost” to the bailey of “and you can’t trust climate scientists anyway, so you should doubt whether climate change is occurring”.

            Would you like to provide support for that claim by quoting me saying that you should doubt whether climate change is occurring? If you cannot do so, you might want to rethink the reliability of your model of the world.

          • On the question of the views of other economists, you might find the case of Nordhaus of interest.

          • HeelBearCub says:

            @David Friedman:
            I don’t care to go quote dredging, but one of your favorite things to do is: a) state that whether you trust someone on their statements should depend on whether you find claims that they have made to be true in the past, b) state that the 97% figure (or something else) is a lie, and c) say (or merely imply) that you leave it up to the readers to decide whether various climate scientists are trustworthy on the science. It’s an inference chain that you employ frequently.

          • eccdogg says:

            I certainly have not read everything David has written on the subject. But I have read quite a lot here and other places. And most often I see him quoting directly from the IPCC when it comes to predictions of temperature changes and sea level rises.

            That certainly seems pretty far from “you should doubt climate scientists or whether climate change is occurring”.

            I certainly have seen him go after individual climate scientists he believes to have done shoddy, misleading work, but I have never seen him transition from there to implicating the whole idea of climate change.

            But I certainly would change my mind on the subject if you could provide some quotes about what you are talking about.

          • HeelBearCub says:

            @eccdog:

            And most often I see him quoting directly from the IPCC when it comes to predictions of temperature changes and sea level rises.

            As long as we are asking for references, can you point me to a place where he is employing those quotes in agreement with the IPCC?

            I really don’t want to have to maintain a file of Friedman quotes with links. It’s boring. And it’s precisely this very careful dancing between motte and bailey that makes him so untrustworthy on this subject to my mind.

            Go back a step to my example. Assuming that inference chain is one that he walks people down frequently when the subject of climate change comes up, how would you classify it? Have you ever seen him employ it?

          • eccdogg says:

            Sure, see down further in this very thread.

            “Currently it’s about a foot higher than it was a century ago, and the high end of the IPCC projection is for about a meter by the end of the century.” — David Friedman

            I have not seen him walk folks down that inference chain.

            ETA: I guess this is what you are referencing

            http://daviddfriedman.blogspot.com/2014/02/a-climate-falsehood-you-can-check-for.html

            But it mainly seems to call into question one particular scientist, his claims, and the claims of those closely associated with him, not the whole idea of AGW.

            Also I guess the quote I gave you would not necessarily put him in agreement with IPCC because he is using those as an upper bound.

          • J Mann says:

            @HeelBearCub – I don’t see evidence of motte and bailey, and think you might be misusing it. It’s reasonable for David to make more than one assertion in his life – he can simultaneously say (a) that the 97% figure is misrepresented; (b) that climate change is occurring; (c) that the costs are likely to be manageable;* and (d) that readers should judge for themselves whether any particular piece of science is reliable.

            The essence of motte and bailey is that when challenged in the bailey, you retreat to the motte – I don’t see any evidence that David flees from any of his opinions.

            * I don’t know if David actually says (c) – he well might, but I don’t want to put words in his mouth.

            @DavidFriedman – I’m mostly interested in the use of “motte and bailey” here, and am not used to discussing someone’s work in front of them – if this isn’t something you’re comfortable with, let us know.

          • HeelBearCub says:

            @eccdogg:
            That is not him agreeing with the IPCC. That is him picking out one prediction from the IPCC to use as a logical club. Note that you don’t find him endorsing a belief that they are correct in predicting a 1 meter rise, merely an inferred statement which implies that a one meter rise is not concerning. If you put the phrase “Even if this were true” into his post, it doesn’t change the meaning of it.

            The blog post you linked is one example of him promoting the inference chain.

            And here is him explicitly endorsing a statement very near the end of the inference chain:

            Since, as a prominent supporter of the position that warming is primarily due to humans and a very serious threat, Cook is taken seriously and quoted by other supporters of that position, one should reduce one’s trust in those others as well. Either they too are dishonest or they are over willing to believe false claims that support their position.

          • eccdogg says:

            But he also clear says this.

            “That Cook misrepresents the result of his own research does not tell us whether AGW or CAGW is true. It does not tell us if it is true that most climate scientists endorse AGW or CAGW.”

            “The fact that one prominent supporter of a position is dishonest does not prove that the position is wrong. For all I know, there may be people on the other side who could be shown to be dishonest by a similar analysis. But it is a reason why those who support that side because they trust its proponents to tell them the truth should be at least somewhat less willing to do so.”

            Which I think is totally fair.

            The folks I see him calling into question are not climate scientists or the IPCC, but Cook, his website, and those who unquestioningly quote Cook.

          • HeelBearCub says:

            @eccdog:
            He is very careful in his dance with the edge of the motte. He does debate the issue for sport. But the most you can say about that statement is that he doesn’t preclude the possibility that AGW is occurring. What he doesn’t do is agree that it IS occurring. All of his statements push the logical inference that we should doubt the conclusions of the IPCC and other bodies similar to them and that we should presume it to be an open question whether AGW is in fact occurring.

            What I don’t think you can find is Friedman saying “I believe the consensus scientific position that AGW is occurring and is, in fact, anthropogenic, and will continue to accelerate commensurate with the amount of greenhouse gasses added to the atmosphere. This acceleration will continue well past the moment we cease to add greenhouse gasses to the atmosphere. I believe that climatologists as a whole are credible and that their scientific conclusions are broadly valid.”

          • PeterDonis says:

            What I don’t think you can find is Friedman saying “I believe the consensus scientific position that AGW is occurring and is, in fact, anthropogenic, and will continue to accelerate commensurate with the amount of greenhouse gasses added to the atmosphere. This acceleration will continue well past the moment we cease to add greenhouse gasses to the atmosphere. I believe that climatologists as a whole are credible and that their scientific conclusions are broadly valid.”

            You just conflated several very different claims, which is precisely the sort of thing that I often see David Friedman saying that people should not be doing.

            The first claim is “climate change is occurring and human activities are significantly contributing to it”. I think this claim is true. (I suspect Friedman does too, but I’ll let him give his own opinion if he wants.)

            The second claim is “the primary way that human activities contribute to climate change is through human GHG emissions”. I think this claim is debatable; if nothing else, it leaves out the obvious alternative factor of human land use and consequent alteration of the Earth’s average albedo, which I personally believe is a more significant contribution (for one thing, it’s been going on for millennia, whereas GHG emissions have only been significant for a century or so, depending on how you count).

            The third claim is “climatologists as a whole are credible and their scientific conclusions are broadly valid”. I think this claim is false. Climate science does not have good predictive power; the mismatch between climate models and actual data is now so large that even the IPCC has to admit it. Also, every IPCC report since AR1 has included a table that gives the level of scientific understanding of the key factors that might affect the climate. That table looks the same in AR5 as it did in AR1, indicating that climate scientists have not increased their understanding of any of those key factors in 25 years. That’s not what you would expect from a reliable scientific field with obvious public policy implications. And given the above, the fact that climate scientists continue to claim that they can make reasonably accurate predictions of future climate is evidence that they are not credible.

            And a fourth claim, which you don’t state but which you clearly imply (at least by the definition of “clearly imply” that you appear to be using in criticizing Friedman’s statements), is “human GHG emissions are a serious problem and we should take high-cost actions now to drastically reduce them”. I think this claim is not only false, but dangerously false, since it commits us to high-cost, near-term actions whose consequences we do not understand and which have a much too high probability of making us net worse off instead of better. Which is basically the position that I think Friedman often argues. And a key component of that argument is to point out obvious ways in which the third claim, above, is false, since the vast majority of people who believe the fourth claim believe it because climate scientists say so. So pointing out ways in which climate scientists are obviously not credible ought to decrease the general level of confidence in the fourth claim.

          • Paul Zrimsek says:

            That table looks the same in AR5 as it did in AR1, indicating that climate scientists have not increased their understanding of any of those key factors in 25 years.

            If that conclusion is true (I have my doubts), it’s actually bad news for us skeptics. The case for undertaking expensive mitigation starting right now is driven largely by high-cost, low-probability scenarios which we aren’t able to rule out entirely– but might be able to pretty soon, if our knowledge is advancing. If our knowledge isn’t advancing, that undermines the skeptical case for waiting to gather more information.

          • PeterDonis says:

            If that conclusion is true (I have my doubts)

            Read the reports and see.

            The case for undertaking expensive mitigation starting right now is driven largely by high-cost, low-probability scenarios which we aren’t able to rule out entirely

            This is one version of the argument, but by no means the only one. Nor do I think it’s the one that’s driving the political debate. The argument that is driving the political debate is basically “the science is settled, we’re doomed unless we take drastic action now.”

            Also, the argument based on high-cost, low-probability scenarios which we aren’t able to rule out entirely is a very weak one in any case. There are always high-cost, low-probability scenarios which we aren’t able to rule out entirely. And one can always pick numbers to make it look like the expected benefit of taking drastic action to mitigate such a scenario outweighs the cost. But the uncertainty in those numbers is so high that any such calculation is just hand-waving. Of course one should understand that such possibilities exist, but the best you can do about them is to increase the general robustness and resilience of our society. Which is precisely what taking high cost drastic actions to mitigate some imaginable scenario about which we actually know very little prevents us from doing.

            If our knowledge isn’t advancing, that undermines the skeptical case for waiting to gather more information.

            It does no such thing. Observing that mainstream climate science is not advancing our knowledge is very different from saying it is impossible to advance our knowledge. The obvious policy prescription if mainstream climate science is not advancing our knowledge is to shift funding from mainstream climate science to other lines of research that might do better at advancing our knowledge. It’s not to just shrug our shoulders and say we might as well spend trillions of dollars on CO2 mitigation because mainstream climate science can’t do any better.

          • I don’t care to go quote dredging,

            Prudent, since if you did you would discover that you made a demonstrably false statement about me. I don’t know if that concerns you—perhaps not.

            but one of your favorite things to do is: a) state that whether you trust someone on their statements should depend on whether you find claims that they have made to be true in the past, b) state that the 97% figure (or something else) is a lie, and c) say (or merely imply) that you leave it up to the readers to decide whether various climate scientists are trustworthy on the science.

            You ought to try reading for comprehension.

            I have argued, in some detail, that the second Cook article contains a lie–that the 97% figure in the first Cook article is for abstracts holding that humans are a cause of warming, while the second article claims it is for abstracts holding that humans are the main cause of warming. You could, if you wished, read the post and decide for yourself if it is correct.

            I have concluded that Cook is dishonest and ought not to be trusted. He is not, as it happens, a climate scientist, at least not unless I am, since he has an undergraduate degree in the same field in which I have a doctorate. He is a propagandist.

            I have a fairly detailed comparison of the predictions of the first few IPCC reports to what happened, from which I conclude that the first report badly overestimated future warming and that the later reports tended to project somewhat high. I don’t suggest anywhere in it that warming is not happening. The final conclusion of the post is:

            Looking at a webbed graph of the data and fitting by eye, the slope of the line from 1910, when current warming seems to have started, to 1990, when the first IPCC report came out, is about .12 °C/decade. That gives a better prediction of what happened after 1990 than any of the IPCC reports.

            I have another post pointing out that not all AGW alarmists are the same, contrasting one who is pretty clearly a flake with another who seems like an intelligent person with views that disagree with mine.

            I have multiple posts trying to look at likely consequences of warming based mostly on IPCC projections, including one with the title “Climate Nuts vs the IPCC.”

            All of his statements push the logical inference that we should doubt the conclusions of the IPCC and other bodies similar to them and that we should presume it to be an open question whether AGW is in fact occurring.

            If all of my statements push that inference you should be able to find at least one to quote that does so. But checking to see whether what you say is true is apparently too much trouble.

            What I don’t think you can find is Friedman saying “I believe the consensus scientific position that AGW is occurring and is, in fact, anthropogenic, and will continue to accelerate commensurate with the amount of greenhouse gasses added to the atmosphere. This acceleration will continue well past the moment we cease to add greenhouse gasses to the atmosphere. I believe that climatologists as a whole are credible and that their scientific conclusions are broadly valid.”

            Is this sufficiently close for your purposes?

            My actual view was and is intermediate between the two ends of the dispute. I think it is reasonably clear that global temperatures have been trending up unusually fast for the past century or so, and the most plausible explanation I have seen is the effect of human production of carbon dioxide. On the other hand, I do not think there are good reasons to predict that warming on the scale suggested by the IPCC reports for the next century or so will have large net negative effects, a point I have discussed here in the past.

            That’s a little less unambiguous than what you want, since I try to be careful not to say things that might not be true, and climate is a complicated system. Climate sensitivity is very much an open question, and the physics, at least, are consistent with net negative feedback, although I don’t think it is likely, so the effect of CO2 could be small and something else could be responsible for the warming. I’m not sure if you realize that the IPCC only claims clear human responsibility for the warming since the mid-20th century, leaving the possibility that the first thirty years or so was due to something else.

          • HeelBearCub says:

            Prudent, since if you did you would discover that you made a demonstrably false statement about me.

            Ah, the umbrage gambit. You like this one.

            Here you are mischaracterizing what I said. You will point at something I put in quotes (which is clearly a paraphrase) as if it was intended to be read as a precise quote.

            No, I was merely characterizing statements you have made. This is of course opinion and can be argued over. Hell, we can argue about whether your characterization of my statement is correct. (We are arguing over it.)

            These are the kinds of debate games you play on these issues, constantly.

            I don’t particularly care to go through all your links, I will simply note that you link to your claims about Mann being a liar. Said claims were already covered above. I’ve already made my case as to why this is you harvesting crops in the bailey.

            Playing the “my honor has been impeached” card simply reveals the underlying weakness in your arguments, IMO.

          • PeterDonis says:

            Playing the “my honor has been impeached” card

            He’s not complaining that you have impeached his honor. He’s impeaching yours. And justifiably, as far as I can see.

          • Thegnskald says:

            This is popcorn worthy. This is the first time I have seen somebody motte-and-bailey a motte-and-bailey accusation.

Seriously. Certain participants should step back and ask themselves what position they are actually arguing for, and specifically, whether that position is true, kind, or necessary – or more specifically, whether that position can even meaningfully be said to be true or not, never mind kind or necessary. If you are being unkind, unnecessarily, and then proceed to admit that your position can’t even be evaluated as true, since it is just opinion – hey, maybe consider the possibility that you are in the wrong here.

          • the_the says:

            @ HBC 7:48am

I really don’t want to have to maintain a file of Friedman quotes with links.

            But if you are going to make strong claims regarding a fellow commenter, then perhaps you should. It’s not like you need detailed records, but even one clear example would go a long way to establishing your credibility. Otherwise, this seems like a poor strategy for arguing, particularly with Friedman who tends (in my experience) to make limited, defensible claims.

          • Paul Zrimsek says:

            @PeterDonis: I already know from reading the ARs that the IPCC evaluation of scientific certainty hasn’t changed much. I’m just not convinced that they’re right about that: I think it likelier that they were underestimating the uncertainties at the time of AR1, and I know of at least a few gaps in our knowledge which are filling in– for example, we’re starting to see reasonable physics-based estimates of sulfate cooling, and will not much longer have to assume that the effect must be large because that’s what it takes to make GCMs backtest successfully.

            By “The case for undertaking expensive mitigation” I mean of course the most convincing case. “The science is settled, we’re doomed unless we take drastic action now” is a weakman, however popular it may be politically; I’m no more interested in beating up on it than our opponents should be in beating up on “It’s all a socialist hoax.”

            It would be nice if the case for wait-and-see could be based on the possibility of somehow improving climate science. But unless you believe, as I do, that progress is still being made despite the pathological state of parts of the current establishment, what we’re going to need is a strong likelihood of improving it– preferably backed by a plan more definite than “spend more money some-unspecified-where else”. (Or we may need to not rely on wait-and-see, which after all is my pet argument– it doesn’t necessarily have to be yours).

        • J Mann says:

          @HeelBearCub

          Even assuming arguendo that your quote is indisputably true and that DF hasn’t said it, I don’t think that’s motte and bailey. (Also, your statement has enough clauses that I’m betting it’s not indisputably true, but I don’t know).

          Let’s hypothetically assume that DF only publishes accurate information that’s on the skeptic side of the GW debate. That’s not motte and bailey, because he isn’t arguing for something he isn’t willing to defend.

          I do agree that if it were true that someone only published accurate information on one side of a debate, then it wouldn’t be good to use them as your sole or primary source on that question.

          • HeelBearCub says:

Since, as a prominent supporter of the position that warming is primarily due to humans and a very serious threat, Cook is taken seriously and quoted by other supporters of that position, one should reduce one’s trust in those others as well. Either they too are dishonest or they are overly willing to believe false claims that support their position.

            Again, what is the inference chain being promulgated here?

            If one puts tiny pebbles on one side of a balance, over and over, and none upon the other, it’s no good to point to the size of the pebbles as evidence you don’t intend to make the scale tip.

Friedman is making an argument here, one he strengthens by questioning little pieces of various reports over and over. There is no good faith effort to analyze the data and the conclusions as a whole and point out what is strong, what is less strong, and what is weaker. This is not good faith debate about climate science.

To put the shoe on the other foot, in what I hope will not be a digression: what I frequently see in commentary from left-leaning pundits about the Mueller probe and various pieces of information about it are statements somewhat like the following: “Certainly we don’t know yet the conclusion of the Mueller investigation, and it may ultimately turn out that Trump did not cooperate with the Russians in order to influence the election.” This in no way means that the rest of the article isn’t making the argument that we should see it as likely that Trump DID seek to influence the election. If I were to tell you that “they aren’t making an argument that we should think it likely he collaborated,” you would rightly laugh at me.

            ETA: and I would also note that, vis-a-vis my original statement, it would be near as ridiculous to think that you could simply listen to, say, Krugman as representative of economic thought on AGW. Although I don’t think Krugman has made AGW a hobby horse where he goes and looks for bad arguments against the theory on facebook so he can argue against them, nonetheless, he would still be poor as a sole source simply because he is so ideologically fervent.

          • J Mann says:

            @HeelBearCub

            I’ve been googling DF and climate change, and my impression is (a) DF is extremely careful and accurate in his discussion of the evidence, to the point where I find his comments extremely helpful to build a model of some of the lukewarmer side of the debate; (b) DF does not publish much if anything making the warmist case. I think that’s reasonable, and in any case isn’t “motte and bailey.” Apologies for the pedantry.

            As to your question of whether DF has ever stated a belief in warming directly, how’s this?

            But my best guess, from watching the debate, is that the first half of the argument is correct, that global climate is warming and that human action is at least an important part of the cause.

          • HeelBearCub says:

I already acknowledged that was his position of retreat to the motte:

            climate change is most likely something we can adapt to at a low enough cost

You finding a “lukewarm” endorsement of the idea that the climate is warming and that human activity is some significant contributor is completely consistent with what I originally said. I know he says that. I said he says that.

            But you haven’t actually directly addressed my question about inference chains. Argument via implication is still argument.

            And let’s taboo “motte and bailey” for a moment. Is he making statements in the manner of someone interested in making people generally aware of state of the science? Or is he attempting to simply win a debate?

          • J Mann says:

            @HeelBearCub

            Thanks for engaging on this, although I will admit that the proper use of “motte and bailey” was what really interested me. 🙁

            I think that DF reads like a proponent of a specific position – that at least the moderate global warming hypothesis seems likely but that the situation is complicated enough that he doesn’t have a high degree of certainty, and that based on the IPCC predictions, extraordinary remedies are not justified. (This fits within the positions that are often referred to as “lukewarmist”).

            I think he’s very careful to identify what he knows and doesn’t know, and I generally think he’s a reliable source for his position. (In other words, if DF says a fact, it’s likely to be well supported). I’d want to read some similarly reliable statements of alternative positions to see if there is another side, but I don’t have a problem with the way DF lays out his ideas. I particularly don’t think he’s implying more than he’s saying.

          • HBC writes:

You finding a “lukewarm” endorsement of the idea that the climate is warming and that human activity is some significant contributor is completely consistent with what I originally said. I know he says that. I said he says that.

            On the contrary, you said:

            But the most you can say about that statement is that he doesn’t preclude the possibility that AGW is occurring.

            You see no difference between “Saying that X is probably true” and “not denying the possibility that X might be true”?

          • HeelBearCub says:

            David you really outdo yourself.

            You managed to put two unrelated statements next to each other, as these two quotes are referencing separate statements, not the same one.

Specifically, the first quote is referencing my original statement about the motte.

            The second quote is referring to the section that contains:

            The fact that one prominent supporter of a position is dishonest does not prove that the position is wrong. For all I know, there may be people on the other side who could be shown to be dishonest by a similar analysis.

            In the post in question you are in the bailey. At other times you are in the motte.

            Again, my original statement of the motte was:

            climate change is most likely something we can adapt to at a low enough cost

            IOW, I pre-stated that you will agree that climate change is occurring. This is implied by that (very loose) framing.

            @J Mann:
            Looks like we are back to the discussion you wanted to have.

          • But you haven’t actually directly addressed my question about inference chains. Argument via implication is still argument.

            Indeed it is. And the conclusion I am implying is that if you read a claim on skepticalscience.com, which Cook runs, you should not believe it unless you have checked the references and arguments with reasonable care. If you see a claim in the popular literature about climate change you should be similarly skeptical.

            The implication of what I have written about the IPCC is that it is attempting to offer answers to a hard question where calculations involve a lot of judgement calls, that its conclusions are probably biased in the direction of overestimating the rate of warming but are nonetheless pretty much the best we have, hence I am willing to draw conclusions about the consequences of warming based on their estimates.

            Note that you don’t find him endorsing a belief that they are correct in predicting a 1 meter rise

            I can’t endorse the belief that they are correct in predicting a 1 meter rise because they don’t predict a 1 meter rise. 1 meter is about the high end of the range of outcomes by 2100 on the high emissions scenario.

            I think the IPCC is if anything likely to overestimate effects, so 1 meter is a reasonable upper bound for how much sea level rise might plausibly occur by 2100.

          • In the post in question you are in the bailey. At other times you are in the motte.

            Clearly I should have also quoted the paragraph after the one I did quote:

            What he doesn’t do is agree that it IS occurring. All of his statement push the logical inference that we should doubt the conclusions of the IPCC and other bodies similar to them and that we should presume it to be an open question whether AGW is in fact occurring.

            “All of his statement” presumably covers both motte and bailey. And I took “what he doesn’t do” not as “what he doesn’t do in the passage just quoted” but “what he doesn’t do.”

            So far as warming is concerned, I have repeatedly agreed that it is occurring. I don’t claim with certainty that the main cause is human action because I don’t know it is, although it seems likely. I do not believe that is the conclusion anyone would reach from what you wrote.

          • J Mann says:

            @HeelBearCub and @DavidFriedman

            I hope this discussion doesn’t feel confrontational. I enjoy reading both your posts, so if this gets uncomfortable for anybody, let me know and we can drop it.

            @HBC – this is actually why I think motte and bailey was unhelpful. I think David clearly expresses a specific opinion. If you think he’s wrong because he understates the risks or overstates the costs or because you think some other writer makes a great case for a position that’s incompatible with his, then just say that, and we can discuss it, and we can all get smarter.

            Within our geographical metaphor, I don’t think DF ever retreats to the motte. I think he’s standing in his bailey, and you are criticizing him for not stepping out of the bailey to stand in the curtilage.* Which is perfectly reasonable, but then it would be helpful to provide an argument we can evaluate. It’s not dishonest** for DF to take a consistent lukewarmist position – at most, it’s mistaken.

            * Not quite the right word – if anyone knows the name for land outside a castle wall, please let me know.

            ** Again, not exactly the right word. “Illegitimate?”

  28. J Mann says:

    I am thankful for this blog and the commenters. I don’t see myself as a rationalist, but I do think the blog makes me smarter and more engaged.

    Scott, if you want a suggestion for rationalism evangelism, I would probably react well to a few short posts a month going through some of the core concepts of rationalism (or just strongly endorsing a given LW post at the top of your link thread). I’m not sure if I’d catch rationalism fever or not, but I’m willing to expose myself to it, especially if there’s a lively comment discussion.

    • Mark V Anderson says:

      I agree with this. I do not consider myself a rationalist, and I am a bit skeptical of some of the things I’ve heard about it. But I am certainly interested in learning more, so I’ll know better what I like and dislike about the field.

  29. jrdougan says:

    Years ago I read the saying (very paraphrased) “A wild goose chase is good exercise, and sometimes you catch the goose.” It sounds to me like that is a fair summary of what you are doing, except the geese are a bit more catchable.

    (I would appreciate if someone could point me back to where I got this)

  30. HeirOfDivineThings says:

    One of the things that’s helped me when studying rationality re: trusting science is to not treat science as one unified field.

    There are obviously different scientific fields with varying degrees of good/poor methodology and data. So e.g., comparing homeopathy to priming is more or less comparing apples to oranges. They are different fields and so it isn’t the same scientists failing both.

    Then there’s the difference between science and academia. There are facets of academia I trust a lot less than I did a few years ago, and some that I trust more. I think I’ll just leave it at that.

    • Scott Alexander says:

      I think what you’re saying (and what other people have said) is part of the solution. But I wouldn’t want to have to explain it on the fly to a hostile audience in 140 characters.

  31. oiscarey says:

    The idea of rationality as a martial art is a rich metaphor. It is fruitful and interesting to look at the ways in which this metaphor is apt, e.g. looking at training, evidence-based practices, etc. The ways in which the metaphor falls apart give interesting indications for the future of the craft.

    A key defining feature of a martial art is a two-person competition in which there is a demarcated winner and loser, usually as judged by an impartial 3rd party (referee).

    There are challenges in identifying clear winners and losers in martial arts, e.g. close boxing matches, fake martial arts. They have developed rulesets and cultural practices to guard against this, and these are effective within the bounds of human subjectivity.

    One comparison to note is Brazilian Jiu Jitsu, a submission-oriented martial art that dominated the UFC initially. In Jiu-Jitsu, there is no confusion about the winner or loser, as the loser submits/taps, or else they go unconscious/break a limb. A serious system for judging expertise only arose quite recently, and it isn’t uncommon for upsets to occur where the expert is beaten by someone with less expertise.

    In comparison, Aikido is a movement and momentum-based martial art that focuses on dodges, throws, and limb-locks. It has a strict and detailed system of competition for the judgement of winners and losers, but when applied to real-world situations or applied in competition with other styles it completely falls apart. It is so ritualised in its ruleset as to be worse than useless.

    Rationality training has no school, no agreed-to set of skills to be utilized in ascertaining truth. Tetlock’s system for superforecasters is probably the closest, but there is no general form (to my knowledge). Thus, training is ambiguous.
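For what it’s worth, the closest thing to a standard yardstick in Tetlock’s forecasting tournaments is a proper scoring rule for probabilistic predictions, such as the Brier score. A minimal sketch (the forecasts and outcomes here are made up purely for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes. Lower is better: 0 is perfect, 1 is maximally wrong."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters on the same three yes/no questions:
outcomes = [1, 0, 1]
print(brier_score([0.8, 0.3, 0.9], outcomes))  # confident and mostly right: low score
print(brier_score([0.5, 0.5, 0.5], outcomes))  # hedging everything: 0.25
```

Because the rule is proper, a forecaster minimizes their expected score by reporting their honest probabilities, which is what makes it usable as a training and ranking metric without a referee judging argument quality.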

    Rationality competition is highly ambiguous. How long should a debate last before the match is stopped? What happens when one person gives up and leaves? When the game is over and both people claim to have won, how is the winner identified? Furthermore, how is the competition structured so as to provide a hierarchy of rationality? I’m fairly confident that there are few that would relish being labelled a “black-belt in rationality”.

To be truly comparable, rationality would require institutional backing. This would involve developing training, and a ruleset for competition. It would require impartial 3rd parties to judge the outcomes of competition.

    To my eye, there is a need for ‘canon’ in the competition in such a rationality community. This would be the aim for victory, changing and updating the canon to define what the community must see as a rational perspective.

    However, most pressing would be the need for the referees to judge the conduct of competitors and declare the outcomes of competitions. How rational was the argument, how convincing, what percentage movement would be required, etc. There would need to be a ritualistic definition of arguments and premises, the factual requirements for changing positions, probably percentage confidence levels for the truth of arguments and premises, etc.

    Probably wouldn’t be as fun as just arguing with people on the internet…

    • Bugmaster says:

Well, one way to bypass all those problems would be to settle on some objective metric, such as e.g. money. Have both contestants in the rationality battle put up $1000 (or however much) of their own money (or have someone sponsor them). Have them play the stock market however they want — day trading, high-frequency trading, long-term investment, whatever. Wait a year, then see who’s made the most money; that guy is the winner. Play best 2 out of 3 if you want to reduce the influence of chance.

      • thevoiceofthevoid says:

        Counterpoint: the stock market is a uniquely bad place to practice one’s prediction skills, due to its inherent volatility and unpredictability. [insert any argument for why index funds outperform mutual funds or any other guided investment schemes] I assume if you actually tried this you’d either have the contestants winning or losing on mostly pure luck, or investing in index funds and ending up within a few dollars of each other.

        • Bugmaster says:

I never said you were limited to only looking at the stock prices and nothing else. If you wanted to, you could research some promising companies, and invest in them because you believe in their products (or services, or IP, or whatever). I completely agree that an ordinary person would be better off investing in index funds — but we’re talking about Rationalists here, and the whole point of the movement is that they should perform much better than ordinary people, right?

        • 天可汗 says:

          Weak. My mother outperforms index funds.

      • eccdogg says:

I would think a better competition would be to create a list of outcomes to be predicted and then split them in half. For each question, one person would be on offense and the other on defense. The person on defense names a probability p for the event happening, and the person on offense decides to buy or sell at that probability. When the question resolves, the seller gets the agreed-upon p if the event does not happen, and the buyer gets 1 - p if it does.

        You could publish the list of questions ahead of time so contestants could research.

        Add up the points at the end and declare a winner.
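The payoff rule eccdogg describes can be sketched in a few lines (the function and argument names are mine, purely for illustration):

```python
def score_question(p, offense_buys, occurred):
    """Score one question of the offense/defense game.

    The defender has named probability p for the event; the attacker
    ("offense") chose to buy (bet it happens) or sell (bet it doesn't)
    at that price, with the defender as counterparty. Per the stated
    rule, the seller collects p when the event fails to happen, and
    the buyer collects 1 - p when it happens; the loser gets nothing.
    Returns (offense_points, defense_points).
    """
    if offense_buys:
        # Offense is the buyer, defense is the seller.
        return (1 - p, 0.0) if occurred else (0.0, p)
    else:
        # Offense is the seller, defense is the buyer.
        return (p, 0.0) if not occurred else (0.0, 1 - p)

# Defender names 0.9, attacker sells, event doesn't happen:
print(score_question(0.9, offense_buys=False, occurred=False))  # (0.9, 0.0)
```

One consequence of this payoff structure: an attacker facing a defender who names probabilities far from the true odds can always pick the profitable side, so over many questions the totals reward calibration.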

    • Nancy Lebovitz says:

      Thanks for the detailed analysis, but I believe there is no escape from Goodhart’s Law (measurements which are used to guide action become corrupt), though some efforts to escape it are better than others.

      The challenge is that rationality is person versus the universe, and victory is even less well-defined than it is for martial arts. How can you tell how much someone was lucky vs. how much they were right? Or how much someone did as well as possible in the face of bad luck?

      • AG says:

        Not quite Goodhart’s Law, but rather, I was about to comment that the community would then just get mired in increasing levels of meta debates, and never settle on any stable standards (and standards for the standards, etc.) in the first place.

        At work, we recently tried to quantify employee performance a little bit less subjectively, got into an extensive definitions discussion for the standards, but concluded that this would lead to the definitions discussion being interminable, and so reverted to declaring a certain fuzzy area as subject to “engineering judgement,” for the sake of being able to move ahead on the object level. Not unlike Scott concluding that sometimes you just gotta trust that intuition.

    • Confusion says:

      A key defining feature of a martial art is a two-person competition in which there is a demarcated winner and loser, usually as judged by an impartial 3rd party (referee).

That seems like a strange criterion to me. There are martial arts that truly don’t have any rules, because in the real world there are no rules. Such a martial art is about incapacitating your opponent(s) quickly and decisively. You can’t have competitions, because combatants would break limbs, be knocked unconscious, suffer permanent injury or die. Even MMA has rules: it forbids e.g. kicks to the groin, punches to the throat, eye gouging, elbows to the spine, and kicking an opponent when he’s down. There are martial arts where you train those things as well. Of course you cannot practice destructive techniques as well as non-destructive techniques, and you don’t truly know how well they will work, but the fact that they are forbidden in MMA gives a clue.

      • John Schilling says:

        There are martial arts that truly don’t have any rules, because in the real world there are no rules

        In the real world there are always rules.

        If you e.g. actually gouge someone’s eyes out and leave them permanently blind, the rules are going to say that something really bad happens to you next. If your excuse is that you were teaching or practicing a martial art for use in the “real world”, something really bad is going to happen to you next. If your excuse is that you were actually defending yourself against an unprovoked lethal attack, you might get off, but the enforcers will be asking hard questions about how you could be sure the attack would have been lethal and whether you might have been able to defend yourself by some less drastic means like just killing the guy.

        So I’m kind of skeptical about the existence of martial arts that truly don’t have any rules even at the “don’t actually gouge people’s eyes out” level.

        • Confusion says:

          Ah sorry, I hadn’t realized you could interpret it like that. Gouging someone’s eyes out is not the goal of the exercise (much too difficult to do deliberately). It’s just useful to use the eye sockets for leverage (the head can be used to steer the body: move the head and the body follows) on e.g. someone bald (otherwise the hair is usually more convenient to use). Any damage to the eyes is circumstantial. The same goes for the other things I mentioned: they are not goals, but means as part of techniques. Usually to distract or force a certain response.

      • wysinwygymmv says:

        What are these martial arts? Who practices them? Where? Most importantly, how? Like, who agrees to let you gouge their eyes to drill your eye gouging technique? If you don’t drill it, how do you get good at it? And if you do drill it, doesn’t everyone wind up being much worse martial artists than when they started because now they’ve all had their eyes gouged out?

        The rules around “tapping” are not a good model for fighting in the “real world”, but if you don’t learn to tap you will not become a good martial artist because you will become permanently brain damaged by having the blood cut off to your brain too frequently. Reproducing real-world conditions is not necessarily the best way to get better at dealing with real-world conditions.

        Another example: it’s probably suboptimal to train your swimming technique on a shoreline with 5 foot swells and a deadly riptide, even if that’s the kind of environment that would most effectively test for extraordinary swimming ability.

    • rlms says:

      One can join GJO or the sequel HFC. I’ve mentioned them before but apparently no other commenters signed up.

    • AG says:

      Competitive debate formats do already exist, you know. (And I wish more people in this group would learn some of the jargon from them, as they’re useful for better organizing arguments and counter-arguments.)

      (Really, though, they boil down to how can you best employ dark arts to sway the judge to your side. And because the judge also awards “speaker points,” they’re keeping an eye on the meta level for how well you employ dark arts as a plus.)

      • Peffern says:

I agree that my experience with competitive debate has made it much easier for me to understand things on SSC.

I think you’re wrong on the parenthetical though. In my experience speaker points were awarded independently from who won the debate, and didn’t change the outcome. This is because debating events were shared with speech events where oratory skill mattered, so the debate events would give speaker awards independently of the actual round wins/losses.

Also in my experience, the judges are either experienced debaters themselves or attended judge training that (in theory) helped to inoculate them against certain unsavory strategies. While this doesn’t mean people aren’t winning by employing the dark arts, it takes more effort than you would expect to do so, to the point where for most debaters it’s not worth it. And if a hypothetical debater is so competent that they could win even past a judge watching out for their tactics, then they are probably smart enough to win the normal way anyway.

        • thevoiceofthevoid says:

The very form of competitive debate lends itself to intrinsically dark-art-ish motivations and techniques, though. It isn’t remotely about truth-seeking; it’s about coming up with as many post-hoc rationalizations as you can for the side that you’re assigned (not even what you truly believe!). I’d agree it’s probably useful for learning rhetoric, persuasion, and even research, but it is terrible for training rationality. IIRC Yudkowsky used a hypothetical “good arguer” paid to write evidence and reasons that supported a presupposed conclusion as a fable to demonstrate how “rationalization” is opposed to “rationality.”
          ETA: Found it.

          • AG says:

            Yes, thevoiceofthevoid articulates what I was getting at.

            Like, if I use a particularly clever rhetorical trick to answer an argument via definitions debate, the judge may award me the win, because I was the better debater, on the meta level of evaluating my cleverness skills, rather than on the actual content of the debate.

            My debate team kind of specialized in winning by derailing from the original topics into either obscure political horse trading knowledge, or claiming that going extreme hedgehog with [ideology] solved everything in the world, which the opposition does not do. The best teams are horseshoe experts, employing the exact same small stable of arguments no matter which side of the topic they’re assigned to.

  32. Ron Unz Article says:

Scott makes good, or “well-calibrated,” predictions. I think he’s wasting his talents by showing these predictions off only once a year. If I understand correctly, he’s resolved to do it even less often. I’m disappointed. Kavanaugh or someone else for Supreme Court next week? Democrats or Republicans in the Senate this fall? But maybe those are fraught topics in this space.

    Still, instead of applying rationality to optimising the far future, or to giving measured advice about newsy outrages, I would like to see it applied to “fun” things. Probably others can come up with better “fun” ideas to analyze. But what about old conspiracy theories?

    Did Lee Harvey Oswald kill JFK, with a single bullet? I don’t need to be convinced that the answer is “probably.” But it would be fun and instructive to see it quantified. Is a rationalist 90% sure of this? 99 and a half percent sure?

    • Tarpitz says:

      Is the FSB arranging for Russia to overperform at the World Cup? How much stronger should this suspicion be if they go on to win it?

      • Deiseach says:

        Well, Belgium just beat Brazil to go up against France in one of the semifinals, so do we think Russia can beat Croatia? And which of England/Sweden to go through?

        If we’re looking at a Russia-Belgium final, certainly something weird will have happened, but it may just be football and not bribery and corruption 🙂

  33. MawBTS says:

    The rationalist community started with the idea of rationality as a martial art

    That reminds me of something godawful.

    There was/is a pick-up artist called Erik von Markovic who went by the handle of “Mystery”. Once he was the most famous of his kind – he was a central character in Neil Strauss’s book The Game, and even had his own reality TV show. Now he’s destitute and forgotten.

    He viewed pick-up artistry in the same terms. “If do right, no can defense!”

    He planned on opening a dojo, where aspiring PUAs could “train” in what he called the venusian arts (Mars is the god of war, and Venus is the goddess of…). The entryway would contain a life-sized poster of Bruce Lee and Erik von Markovic standing side by side, along with the text “the king of martial arts and the king of venusian arts welcome you to this dojo!”

    Unfortunately for cringeseekers worldwide, the dojo never went ahead. He did start a company called the Venusian Arts, which he was summarily forced out of by his own students.

    You might suspect that this comment is off-topic and pointless. Your suspicion is correct.

    • Toby Bartels says:

      While we're on the topic of off-topic trivia: ‘Venusian’ is analogous to ‘Martian’, a strictly modern term referring to the planet. The classical term analogous to ‘martial’ is ‘venereal’. I'm not sure that I'd want to study at a dojo on the ‘venereal arts’, but at least I would respect its command of the English language.

      • Randy M says:

It’s one of those cases where you only hear a word with a negative modifier–disease in this case–so it acquires a negative connotation it doesn’t actually denote on its own.

        • ec429 says:

          And that’s why astronomers use “cytherean”. (Apparently “aphrodisial” was too close to “aphrodisiac”.)

          (Ok, so many astronomers nowadays just use “Venusian”, and to hell with etymological soundness; but Κύθηρα lives on in the terms peri- and apocytherion.)

      • quaelegit says:

        I’d call it a respect for Latin language, or romance etymology or something. Noun “X” –> adjective “Xian” is perfectly valid English derivational morphology. You can still pan him for ignorance of Latin inflection if you want to though 😛

        • Toby Bartels says:

          I’d call it a respect for Latin language, or romance etymology or something.

That too, I suppose. But the etymology is not the point; it's not as if he was coining a new word. Both words ‘venereal’ and ‘venusian’ existed beforehand, and ‘venereal’ already had the meaning that he was going for, while ‘venusian’ did not.

    • beleester says:

      It’s a funny story regardless. But if you want it to have a point, perhaps “Calling something a martial art is no guarantee that you can actually train people in it”?

  34. k48zn says:

    Looking back on the Piketty discussion, people brought up questions like “How much should you discount a compelling-sounding theory based on the bias of its inventor?” And “How much does someone being a famous expert count in their favor?” And “How concerned should we be if a theory seems to violate efficient market assumptions?” And “How do we balance arguments based on what rationally has to be true, vs. someone’s empirical but fallible data sets?”

    Clearly you should not choose the wine in front of me.

  35. Bugmaster says:

    “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”

    Intuition can be a powerful tool. It can grant you correct answers very quickly and efficiently. Unfortunately, intuition is also entirely subjective. There’s no way for someone to check your math to see if you’ve got the right answer, and there’s no way for you to convince someone else that your intuition is right and theirs is wrong. Intuition is explicitly irrational.

    I get the metaphor about martial arts and dojos, but the promise of The Rationality Project is not just an enhanced sense of intuition; it’s the ability to arrive at correct answers in a way which is repeatable, objective, verifiable, and legible (or, at least, more so than the alternatives). Personally, I’ve always been somewhat skeptical about this promise, and, if what you say is true, then my skepticism was justified. That’s what my intuition is telling me, anyway.

    • thevoiceofthevoid says:

      I think a well-calibrated intuition can still be useful and “rational” (Yudkowsky definition) in cases where it’s more important for you to come to the correct answer than it is to convince others of that answer. As you said, in the case where you’re trying to convince anyone else, you’d better be able to show more work than “based on my prior experience, it feels right.” But I’d argue intuition is fine for “should I buy these homeopathic pills at the drug store?” and falls short on “should we ban drug stores from selling homeopathic pills next to traditional medicine?”

      • Deiseach says:

        But I’d argue intuition is fine for “should I buy these homeopathic pills at the drug store?” and falls short on “should we ban drug stores from selling homeopathic pills next to traditional medicine?”

        If there’s serious argument that depression medication works mostly as a placebo because after a while depressive episodes get better by themselves, then it should be okay for chemists’ shops to sell homeopathic remedies along with the OTC medicines and perfume and fake tan and the rest of the bits’n’pieces.

        The danger is when people ignore serious symptoms and proper medicine and try and treat themselves/family members with homeopathy, but the kind of person who is anti-vaccination isn’t going to be persuaded that vaccines are okay by banning the sale of Bach’s Rescue Remedies alongside proper medicines like Addyi. Mostly it’s a case of “it won’t help but it won’t do any harm either, and if it makes you feel psychologically reassured to take it, why not?”

        • thevoiceofthevoid says:

          And that’s exactly why we need more than first-pass intuition for the latter question.

        • theredsheep says:

          If taking water instead of actual medicine causes the symptoms to get worse/the disease to progress, I would argue that that constitutes harm. Likewise if people unwittingly take fake medicine instead of the real kind because they assume anything the drugstore sells must have been vetted by Science somehow. Especially if the medicine has a sciencey-sounding name like “oscilloccoccinum” or however they spell it.

          If actual approved medicines are rubbish–I’ve heard bad things about benzonatate and oseltamivir–that’s a separate issue.

      • Bugmaster says:

If you cannot legibly demonstrate how you arrived at the answer, then how do you know your answer is correct, and not just self-delusion ? The obvious answer is, “well, my intuition is usually right, so I’ll trust it this time too”. This approach works extremely well until it fails spectacularly. Usually, this happens when you encounter some genuinely difficult question the likes of which you’ve never faced before… i.e., exactly the kind of problem that rationality is supposed to solve. See luminiferous aether, for example.

      • BBA says:

        A few years ago, a homeopathic “low-dilution” (i.e., actually present) zinc nasal spray marketed as a cold remedy caused several people to permanently lose their senses of smell. Its marketing had been slick enough that very few people realized it wasn’t “real” medicine, but was untested and based on junk science. (Certainly I had thought it was just another cold medicine, and it made me question my then-smug libertarian views on consumer protection laws.) So I’d say we should ban low-dilution homeopathy from the drugstore shelves and only allow those remedies that are diluted to sufficient “potency” to be chemically indistinguishable from placebos.

        • I’d say that with this comment you brought added value to this thread, so that I reject the claim that you are “bringing the comment quality down” by posting here.

          You’d bring even more added value by providing some link / reference to this zinc nasal spray story. Would love to read more.

Also, it’s my first post – I’ve been lurking for about 1.5 years, so hello everybody.

          • BBA says:

            Here’s a WebMD piece on it. The makers deny the claims. Beware the man of one study and all that. Zicam oral lozenges have not been linked to anosmia and continue to be widely sold.

            As critical as we are of the FDA around here, I still don’t know that letting these medicines escape FDA oversight by claiming to be ineffective homeopathy is a good thing.

    • Scott Alexander says:

      I think you’re arguing someone is only a Scotsman if their genes encode the Scots language without them having to learn it, then using that definition to conclude there is no such thing as a Scotsman and everyone claiming to be Scottish is lying to you.

      Less snarky edit: Or compare medicine. You can write medical textbooks saying “the rules of medicine”, and lots of people do. But the average person who’s just read a medical textbook (or Wikipedia, or WebMD) will do a terrible job diagnosing and curing disease compared to a doctor who’s worked in the field all their lives. This doesn’t mean medicine is wishy-washy and not based on objective principles. But it does mean it’s not some simple algorithm anyone can apply.

      Even less snarky edit: Maybe I’m bad at explaining this. Have you read David Chapman on meta-rationality?

      • Bugmaster says:

        Maybe I’m bad at explaining this. Have you read David Chapman on meta-rationality?

        I think I did, but I’m pretty bad at human names — can you provide a link ? I want to make sure I’m reading the exact same thing you are.

        Or compare medicine. You can write medical textbooks saying “the rules of medicine”, and lots of people do. But the average person who’s just read a medical textbook (or Wikipedia, or WebMD) will do a terrible job diagnosing and curing disease compared to a doctor…

        And yet, when you ask the doctor, “why should I subject myself to painful and potentially deadly chemotherapy ?”, his response is not, “I’m a doctor, I just know these things”. Rather, it is, “I’m a doctor, I know these things because there’s this shadow on your MRI, and several of your protein levels are elevated, and your original symptoms are consistent with this statistical model, and…” Even if you are not a doctor yourself, you could take all that data, show it to another doctor, and have him interpret the results.

        The doctor surely would’ve used his intuition to arrive at his conclusion — but intuition is just the first step (or at least, one would hope so). After that, he’s going to meticulously follow the rules written down in the Big Book of Medicine, which are objective (or, at least, as objective as humanly possible), reproducible, and legible. Any doctor who’s any good will follow these rules regardless of how strongly his intuition is screaming “cancer !”, because the stakes are quite high.

        What about Rationality ? Are the stakes high enough ?

        • Nick says:

          I think I did, but I’m pretty bad at human names — can you provide a link ? I want to make sure I’m reading the exact same thing you are.

          Chapman writes about this stuff at Meaningness. I think Scott has some of his blog posts in mind, like this one.

      • MasteringTheClassics says:

        You’re kind of describing metis, both here and in the original post.

  36. nameless1 says:

The reason I hang out here but don’t consider myself part of the rationalist community is the glaring contradiction between Yud teaching people how to think skeptically, then throwing it away and starting an entirely unskeptical and irrational “friendly AI research” cult brainwashing people to donate their money to him. All this because rationalists tend to be nerds who fetishize the word “intelligence”, unable to think skeptically about it, unable to see that intelligence is not in itself a thing but the outcome, the effect, or the measure of a whole lot of things in the human mind and social interaction; thus artificial intelligence is about as meaningful as artificial profit.

Considering intelligence an optimization ability is something that gives bullshit a bad name. Successful optimization is obviously an outcome, not a thing in itself. Hence the profit parallel. One company makes profit by putting an unusually good camera on a phone. The other makes profit by firing some unnecessary employees. And so on.

If you look at Spearman’s g, the two main factors are eductive ability (think clearly, make sense of complexity) and reproductive ability (store and retrieve information), the second being something computers are already excellent at, and the first is not hard either – TensorFlow can do it. These abilities, understood as intelligence, make sense only for measuring the problem-solving ability of humans, because these two things are something our brains are generally terrible at; they are a bottleneck. We have a gazillion other cognitive abilities that work well and are entirely necessary for even the simplest task, but they are not a bottleneck. No amount of eductive and reproductive ability alone will make a computer able to tie a shoelace. They have entirely different bottlenecks.

And this is something every sensible person understands intuitively; this is why there are so many of us who are grateful for his sequences yet consider his AI fetish a cult for milking gullible people. Gullible people who are high-IQ nerds, and therefore tend to fetishize intelligence, and building a friendly 1000-IQ machine is their dream. As I said above, computers already max out reproductive ability, and giving them eductive ability gives you merely an expert system, not what you would consider anything like a mind. It will not rule the world, because it is something equivalent to a savant, just incredibly more so. It gives you correct answers to questions. It does not DO anything. It does not go and convince you somehow to let it out of a box or fire missiles at Moscow. The problem is intelligence fetishists think correct answers to questions is the most powerful thing ever, because this is what made homo sapiens take over the planet. No. This, while also having all the other abilities of a primate, made homo sapiens take over the planet.

    • Mr Mind says:

Your thesis, if I do understand it correctly, is that today’s AIs lack many cognitive abilities that make a human really intelligent, and thus potentially dangerous. So today’s AIs cannot possibly do any of the things that make an unfriendly AI a menace.

But this is not Yudkowsky’s thesis. He talks about the danger of a self-evolving general AI, to distinguish it from what we have today. In your worldview, this will happen when we add to expert systems all the different abilities that separate their intelligence from ours.
Thus, unless you’re saying that it’s impossible to add those requirements to a piece of software (why?) or that even a gAI cannot be more intelligent than a human (??), then there’s no contradiction between what you say and Yudkowsky’s view.

      • Faza (TCM) says:

        The problem with Yudkowsky’s thesis is that it glosses over questions of AI teleology, interface and comprehension – or rather, it has the propensity to give whatever answer is expedient at the moment.

        Taking it in turn:

        1. Teleology – You can have a paperclip maximizer (fixed teleology) or an autonomous being (self-determined teleology), but not both. So which is it? If it’s a maximizer – not a problem, because “maximize paperclips” isn’t a realistic specification of teleology programmed into any kind of system (a solver of this kind would have constraints specified up front). If autonomous – potentially a problem, but unlikely to directly conflict in a destructive way (because the existential concerns of computers, if they have them, are fundamentally different than ours).

        2. Interface – Thinking isn’t the same as doing. Anybody who doesn’t believe me is invited to reflect on why nerds continue to get pushed around (as discussed often enough here). An AI needs to be able to act in order to pose a threat and it is not necessarily apparent what a realistic scope of action is. The answer I see to such questions is essentially: superintelligent AIs are magic and can do anything they want. Seriously?

        3. Comprehension – AIs function in Game-Space. What do I mean by that? We create AIs to operate within various game families that we create as part of our model-crafting mode of thinking: logic is one such game, mathematics is another, natural language is another still. Hell, we can even make AIs that play “proper” games like chess and Go.

Computers are good at playing games because they can explore the rule set much faster and more broadly than an individual human. However, the game also constrains the world of the computer.

The games (taken in the broad, existential sense; language is such a game) we play as humans are mental maps (models) we make, but we exist in the territory and our concerns are with the territory. There are many different games we can overlay on the world as we know it that serve the same purpose comparably well. For the most part, a Young Earth Creationist has no more problems in day-to-day life than someone who accepts evolution through natural selection. The theory of evolution (and related concepts like genetics, etc.) is an improvement over YEC if you want to do some things, but you’ll only know if you try to apply both to practical ends.

        It’s not impossible for a self-evolving AI to fashion games of its own to model its existence in the world, but that would require it to function fully autonomously and affect the world without human mediation. This isn’t a given.

Unlike humans (who exist before they begin to play), AIs have games as their “natural environment” (being created to deal with issues of language, mathematics, chess, etc.). That is why I find Stanisław Lem’s vision of “digital philosophers” (AIs exploring the bounds of existing games, games that may conceivably be created, as well as related ontological and epistemological concerns, as described in GOLEM XIV) a whole lot more convincing than all of Yudkowsky’s demons.

        Which brings me to the main complaint. Every time I try to make sense of what the AI threat crowd are saying, I’m confronted with a parade of genies designed solely for the purpose of being scary. When I say “genies”, I mean that the postulated superintelligent AI seems to be capable of working whatever wonders are necessary to make the point – regardless of whether we have any good reason to believe such wonders are even possible.

This is, at its core, a theological argument. Superintelligent AIs are a problem – say Yudkowsky et al. – because they are (tacitly) defined as omnipotent. If this is the case, we’re already doomed.

        That isn’t rationality, however. It’s begging the question. Unless we specify at the start what manner of superintelligent AI we’re talking about: enumerate its capabilities and – more crucially – its limitations, we’re not dealing with an argument, but with unsupported assertions of whatever conclusion the arguer finds compelling.

        I suppose some folks find it fun and rewarding to speculate on such things (scholastic theology was big back in the day), but I see no compelling reason to treat it as anything other than wild flights of fancy.

    • MawBTS says:

Considering intelligence an optimization ability is something that gives bullshit a bad name. Successful optimization is obviously an outcome, not a thing in itself. Hence the profit parallel. One company makes profit by putting an unusually good camera on a phone. The other makes profit by firing some unnecessary employees. And so on.

      I don’t follow. Bench pressing 405 pounds is an outcome, hence there’s no way to improve my weight-lifting ability? Cleaning my bedroom is an outcome, hence there’s no way to improve my ability to pick dirty clothes off the floor?

      The rest of your post sounds a lot like “intelligence is magical and impossible to understand etc”, which was what people used to say about chess, painting pictures, and so forth.

      thus artificial intelligence is about as meaningful as artificial profit.

      “Artificial intelligence” means intelligence in a machine, versus the naturally evolved intelligence of biological organisms.

      I’m not aware of a similar dichotomy that would let us separate “natural profit” from “artificial profit”, but if there was, I’d definitely talk about artificial profit.

      • Bugmaster says:

        As I said before, the main problem I see with the “AI FOOM” scenario — other than the laws of physics preventing it, of course — is that no AI, no matter how smart, can just think its way to victory.

For example, if it wants to make novel scientific discoveries (which is a necessity for maintaining its exponential growth), it will have to actually run experiments. Last time we humans wanted to learn something significant, we had to build a laser interferometer with arms about 4 km long (two of them, actually). The AI can’t magic up such things overnight, and it can’t just simulate everything and come up with the right answer, for obvious reasons.

        • Murphy says:

We’re currently at the point of fully automated bio labs hooked up to AIs that can automatically generate hypotheses, then design experiments to falsify the maximum possible number of hypotheses, then run the experiments in large numbers and with great precision.

Currently the AIs used for that are glorified SAT solvers, but if something genuinely much more inventive and bright than a human got involved I strongly suspect that it could very quickly learn a vast amount about biology, physics, chemistry, etc., which opens the door to a lot of interesting resources that probably wouldn’t require a 4 km construct.

Foom or not-foom seems to come down more to whether going from 100 intelligence (measured by whatever metric) to 200 using a mind with 100 intelligence is harder than going from 200 to 300 for a mind with 200 intelligence, by whatever methods.

If it gets progressively harder, relative to the capability added, to add each effective IQ point, then not-foom.

If it’s actually really easy, and humans just happen to be as smart as we currently are because we’re at the minimum intelligence needed to build computers, then Foom is much more likely.

Personally I think all the “boxing” stuff is kinda stupid. AIs don’t get built in boxes or with bombs attached to them. If something was actually smart and wanted to get resources, then the world is full of the kind of people who fall for the claims of Nigerian princes.
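To make that foom condition concrete with a toy model (the functional form below is purely my own illustrative assumption, not anything from the AI literature):

```python
# Toy model: per-step capability gain is c ** (1 - hardness), where
# "hardness" encodes how much harder each further gain gets as
# capability c grows. Purely an illustrative assumption.
def trajectory(hardness: float, steps: int = 30) -> float:
    c = 100.0
    for _ in range(steps):
        c += c ** (1.0 - hardness)  # gain shrinks as c grows iff hardness > 1
    return c

# hardness 0 -> capability doubles every step (foom-flavoured growth);
# hardness 2 -> gains crawl and capability barely moves (not-foom).
print(trajectory(0.0), trajectory(2.0))
```

The whole foom question, in this framing, is an empirical bet about which regime the real “hardness” falls into.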

          • Bugmaster says:

We’re currently at the point of fully automated bio labs hooked up to AIs that can automatically generate hypotheses…

            Yes, and there’s absolutely nothing the AI can do to make that yeast grow any faster. It might help generate some promising leads, but it won’t be overturning any scientific paradigms overnight. Even if the AI was 1000x smarter, yeast still grows at a fixed rate.

            but if something genuinely much more inventive and bright than a human got involved I strongly suspect that it could very quickly learn a vast amount about biology, physics, chemistry

How ? You can’t just wave your hands and say, “oh, well, it would think of a way”. That’s not intelligence, that’s clairvoyance, or possibly divine revelation. The only way to learn anything about any of those sciences you mentioned is to run experiments. Experiments take time. Some experiments, e.g. those for detecting gravitational waves, take a lot of time and gigantic buildings… which also take time to build. That is less of a FOOM and more of a regular crawl. The whole point of running experiments, by the way, is that you don’t already know the answer, so just running lots of simulations is not enough.

On a slightly different topic: “if it gets progressively harder vs capability added to add each effective IQ point then not-foom.” That’s pretty much the case, yes. You can’t just keep sticking CPUs on a motherboard ad infinitum.

          • christhenottopher says:

EDIT: Dang it, I see that this point came up elsewhere in the thread.

How ? You can’t just wave your hands and say, “oh, well, it would think of a way”. That’s not intelligence, that’s clairvoyance, or possibly divine revelation. The only way to learn anything about any of those sciences you mentioned is to run experiments. Experiments take time. Some experiments, e.g. those for detecting gravitational waves, take a lot of time and gigantic buildings… which also take time to build. That is less of a FOOM and more of a regular crawl. The whole point of running experiments, by the way, is that you don’t already know the answer, so just running lots of simulations is not enough.

Doesn’t this depend on what the nature of the hurdle to learning is? Specifically, whether the issue is lack of data or lack of integrating/understanding the data? To give an example of the latter, a major finding of the 9/11 Commission was that the information various intelligence organizations had wasn’t being shared and integrated. That’s the sort of problem that we would expect higher intelligence/faster AI to be an improvement on humans for. Indeed we currently use computers to sift through data at much faster rates than humans can manage. To the extent that a scientific/engineering problem is hard due to integrating data, AI will work a lot faster, even if AIs can only run experiments at the same speed as humans. The total number of academic papers has risen drastically over the past few centuries, so insofar as that is a proxy for the amount of scientific data available, I think there’s a potential argument that there are a lot of problems where the data exists but is hard to sort and integrate (though I don’t really know how many such problems exist).

          • ec429 says:

            Bugmaster:

            That’s not intelligence, that’s clairvoyance, or possibly divine revelation.

            Just to check — you have read That Alien Message, right? Care to explain what’s wrong with it?

            Experiments don’t have to take very long; our giant particle colliders and LIGOs are to make up for our paucity of computational power, since we can’t just do large-scale calculations on fundamental-level physics and see which ones reproduce our macro-scale observations. Large chunks of science are about regularities we’ve observed on larger scales that we aren’t yet able to tie back to the fundamental physics because we haven’t got the mathematics to do it, and some of the biggest advances in science have come from learning how to make such a linkage (e.g. when Gibbs discovered statistical mechanics and thus linked the phenomenological thermodynamics back to the behaviour of particles, or when the application of the Schrödinger equation to molecular orbitals opened up the field of quantum chemistry).

          • Bugmaster says:

            @ec429:

            Just to check — you have read That Alien Message, right? Care to explain what’s wrong with it?

            You mean, other than the fact that it’s completely fictional, and that it not only postulates that The Simulation Argument is true, but also does so in a way that is contrived to push the intended “moral” along ? I guess I don’t exactly see the relevance between the story and the scenario we’re currently talking about.

            Experiments don’t have to take very long; our giant particle colliders and LIGOs are to make up for our paucity of computational power…

First of all, how long did it take us to build those devices ? Are you saying that our current particle accelerators, LIGOs, space telescopes, etc., are the pinnacle of what we’ll ever need ?

Secondly, there’s no amount of computational power that will allow you to detect e.g. gravitational waves. Your problem is not just lack of computational power, but lack of data. Being able to run millions of different models very quickly does not help you figure out which of them is actually true (or, at least, not completely). You can’t tell the shape of the elephant by sitting in the dark and meditating really hard; at some point, you have to go out and touch it. As far as I’m aware, all of the examples you bring up were experimentally verified.

          • ec429 says:

            @Bugmaster

            You mean, other than the fact that it’s completely fictional, and that it not only postulates that The Simulation Argument is true, but also does so in a way that is contrived to push the intended “moral” along ? I guess I don’t exactly see the relevance between the story and the scenario we’re currently talking about.

            The point of the story is not that it “could be true”, but that it demonstrates computation being used to economise on (not replace entirely!) data. All the Simulation guff and other contrivances are just a framing device to try and get people to think about the issue rather than just regurgitating whatever their cached thoughts about superintelligent AI are (which is what would happen if Eliezer had just said “a superintelligence could deduce GR from three frames of video” and left it at that).

            Are you saying that our current particle accelerators, LIGOs, space telescopes, etc., are the the pinnacle of what we’ll ever need ?

No, I’m saying that we only need them because we can’t analytically solve the Schrödinger equation for an entire galaxy. If we could, then the differences between “galaxy where gravitational waves exist” and “galaxy where they don’t” would be blindingly obvious to us from one clear picture of the night sky.

            You can’t tell the shape of the elephant by sitting in the dark and meditating really hard; at some point, you have to go out and touch it.

            Again, in case you missed it: computational power allows you to economise on experimental data, in the limit allowing every bit to eliminate half of the hypothesis space (weighted by prior probability). The *humans in That Alien Message, without any magic (just doing the same kind of science we do, but for longer), end up deducing GR from three frames of video — and even they do not even come close to this informational/entropic limit. This is the real moral of That Alien Message: that that limit, while finite, is far, far beyond human scale.

            No-one is saying computational power can entirely replace experimental data; that would be stupid.

          • Bugmaster says:

            @ec429:

            Again, in case you missed it: computational power allows you to economise on experimental data…

            Well, in that case, it seems like we disagree primarily about the amount of data you can save through computation. It sounds like you believe the answer is “almost all of it”, whereas my estimate would be somewhat closer to “almost none of it”. I find That Alien Message completely unpersuasive because it’s not only fictional, but also explicitly contrived to be as convenient for the author’s thesis as possible; real life is not so accommodating.

            You say that we only need expensive LIGOs and such because “we can’t analytically solve the Schrödinger equation for an entire galaxy”. Technically, this is true; but even more technically, it’s kind of meaningless. The only way to accurately simulate the entire galaxy is to build a computer exactly the size of the galaxy; this isn’t a useful avenue of pursuit for anyone, no matter how smart he or it happens to be. On top of that, you’re sneaking the Schrödinger equation into your argument, as though it was some sort of an axiomatic truth — as opposed to an experimentally verified model that was arrived at by conventional means. Is the Schrödinger equation the very last equation that we need to fully understand the world ?

            The problem with simulating things and writing down equations in a vacuum is that the search space is incredibly large. You can invent all kinds of beautiful mathematical models for how you think the world ought to work; but, at the end of the day, at most one of them is going to be correct. You can’t find that needle in your haystacks by piling up more straw.

          • ec429 says:

            explicitly contrived to be as convenient for the author’s thesis as possible

            Nonsense. That would be “oh yes these people have access to Solomonoff induction and maybe a halting oracle as well, and a pony”. Instead the premise is contrived to be inconvenient: the *humans are only permitted levels of intelligence that (a few) humans have actually demonstrated in reality; their advantage is merely one of speed (or, viewed the other way round, that they have a long time to keep on applying human-level scientific inquiry to the problem). As EY points out near the end, they do not even come close to making “efficient use of experimental data”.

            The only way to accurately simulate the entire galaxy is to build a computer exactly the size of the galaxy

            Only if physics is, in some sense, incompressible. There are definitely physical systems with analytic solutions; more generally, the regularities in physical law can be used to reduce the amount of computation required. If the AI is smarter than us, it will be better at doing this.

            On top of that, you’re sneaking the Schrödinger equation into your argument, as though it was some sort of an axiomatic truth

            No, I’m saying that if you have it as a hypothesis, you can evaluate it by either of two methods:
            (1) experiments that extract signal from the behaviour of a simple system, which you compare to the solution of the equation for that simple system, or
            (2) observations of a complex system, which you then compare to the solution for that complex system.
            Or, really, any point on a continuum between these two.
            Substitute the Schrödinger equation for any other fundamental hypothesis, and one finds that either (1) or (2) can identify (and then verify) which hypothesis is correct. The fact that we don’t know whether the S.e. is “the very last equation” actually helps to illustrate my argument: if we solved it for the Galaxy and found it produced a galaxy that looked nothing like ours, we would thereby discover that it wasn’t the correct hypothesis.

            The problem with simulating things and writing down equations in a vacuum is that the search space is incredibly large.

            This is where Solomonoff induction comes in; in principle, it takes only as many bits of data to locate a hypothesis as the Kolmogorov complexity of that hypothesis. With sufficient computational power, each observed bit cuts the search space in half, and 2^n is also incredibly large.
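            To make the halving concrete, here is a minimal toy sketch (my own illustration, not anything from the thread; the 16-bit “world” is entirely made up). Each candidate hypothesis is a 16-bit string, and every observed bit of the true world rules out exactly the half of the remaining candidates that disagree with it:

```python
# Toy illustration: hypotheses are all 16-bit strings; each observed
# bit of the "true" world eliminates exactly the half of the remaining
# candidates that disagree with it.
N = 16
true_world = 0b1011001110001101
hypotheses = set(range(2 ** N))          # 2^16 = 65536 candidates

for i in range(N):
    observed = (true_world >> i) & 1     # one bit of observation
    hypotheses = {h for h in hypotheses if (h >> i) & 1 == observed}
    assert len(hypotheses) == 2 ** (N - 1 - i)   # halved each time

assert hypotheses == {true_world}        # 16 bits pin down 1 of 65536
```

            Sixteen bits locate one hypothesis out of 65,536, which is the sense in which 2^n is “incredibly large”; the live dispute in this thread is whether real-world observations ever carry bits this cleanly.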

          • Bugmaster says:

            @ec429:

            Nonsense. … their advantage is merely one of speed…

            That, and the relative simplicity of the message (as compared to real-world problems) are advantages so massive that they completely invalidate the story. This actually ties into my next point:

            With sufficient computational power, each observed bit cuts the search space in half…

            I think you are vastly overestimating the amount of computing power that can even theoretically become available to us humans (to say nothing of reasonable practical limits); vastly underestimating the size of the problem; or both. Solomonoff induction is a wonderful theoretical concept, just like a Turing machine; but in practice, you wouldn’t want to build a Turing machine to do your taxes.

            Only if physics is, in some sense, incompressible.

            If you want to accurately simulate the entire Universe, as you’ve implied, then it is. You can’t just throw away the insides of all the rocks, unless you don’t really care about what goes on inside of rocks — in which case, your simulation is no longer accurate.

            I will grant you that a vast AI that runs on a Dyson Sphere composed of what the Solar System used to be will be able to draw valid conclusions from a smaller amount of data than present-day human scientists. However: (a) that amount of data will be nowhere near zero; (b) assuming that AI-powered Dyson Spheres are even physically possible, someone would need to figure out how to build them, which gets you into a chicken-and-egg problem; and (c) if you choose to collect more data points the conventional way, you will no longer need to build Dyson Spheres — an Excel spreadsheet would suffice.

            Let me put it this way. Let’s say you’re a super-smart AI who doesn’t know anything about physics or cosmology or astronomy. Someone sends you a single cellphone picture of the night sky, as seen from Randomville, Kentucky. That’s all you have to go on. Can you reliably calculate the orbits of all the moons of Saturn? What if you got two pictures? 100 pictures? Would that make things better?

          • Murphy says:

            You can design better experiments when you’re smarter and more skilled at doing science.

            Practical proof of that can be seen in the interactions between dimmer undergrads and their better professors.

            But first, to be clear, are you arguing only against “5-minutes-ago-nothing, now everything is computation” “foom”, or also against “oh shit, we booted this up a few years ago and didn’t pay enough attention and now oh-shit” “foom”?

            Show a bright monkey the publicly available data from the LHC and they’ll gain nothing from it. Show it to a smart physicist and they might gain something; show a very, very smart physicist the same data and they may generate better hypotheses about the universe.

            Given the same input data, John Von Neumann could probably derive a hell of a lot more useful information than a random person on the street.

            I find it vexing when people posit deity-level powers attributed to AI, for example above solving wave equations for galaxies etc, but there’s a much more reasonable version where we merely posit something very bright with a moderate advantage over some of the brightest humans in history.

            Science isn’t just a pure grind; quality matters as well. A very smart person can sometimes falsify a hypothesis using mundane observations, where a less capable person would demand major resources to do the same.

            We live in a world where George Dantzig could look at unsolved problems and mistake them for homework assignments, and where even he later ran into someone who could take one look at the problems he couldn’t solve and immediately see the path to a solution.

            I regularly encounter new techniques in my own field where people have figured out how to use standard commodity hardware with a few tweaks to extract data that was previously ridiculously hard to get.

            Just for an example: how certain are you that something somewhat smarter than Von Neumann couldn’t advance the existing field of DNA computing to a useful degree in a fairly short time period, given access to all existing data on the subject and a well equipped biolab?

            That’s not magic or clairvoyance; that’s normal real-world smart people, who are often fundamentally better at linking together datapoints and coming up with novel ways to falsify hypotheses.

            Even if there’s an upper bound of, say, 10x the cognitive capability of John Von Neumann, which doesn’t really go into the super/hyper/deity intelligence stuff… that’s still extremely capable, and someone with that level of smartness (intentionally using vague language here) could be dangerous in the normal human world if they happened to also be a complete psychopath or had their mind set on some unpleasant goal.

          • Bugmaster says:

            @Murphy:
            When I talk about “FOOM”, I mean, “an exponentially accelerating change that happens too quickly for any human to even notice, let alone prevent”. I was under the impression that this was more or less the accepted definition, but I could be wrong. Anything slower, or less significant, would not, IMO, count as a “FOOM”, because such changes happen all the time in our current world — markets are disrupted, wars break out, political administrations change, etc.

            I find it vexing when people posit deity-level powers attributed to AI

            You and me both! That said, though, if you posit unbounded exponential increase of intelligence, and if you believe that enhanced intelligence automatically leads to the same level of enhancement in practical capabilities, then you’ve pretty much come up with a working definition of “deity”. I don’t subscribe to either of these beliefs, personally.

            Just for an example: how certain are you that something somewhat smarter than Von Neumann couldn’t advance the existing field of DNA computing to a useful degree in a fairly short time period, given access to all existing data on the subject and a well equipped biolab?

            That depends: how short a period of time are we talking about? The problem with the state of biological research right now is not just that we don’t have enough computational resources to process all the data we’re generating; it’s that we don’t even have enough data, and we don’t even know what kind of data to look for, because most of the time we don’t know what’s going on inside the cell. A “well equipped biolab” would definitely be a huge boon (ask any scientist ever, and he’ll probably agree), but no amount of equipment will help you grow corn (or mice, or especially humans!) 1000x faster than it does naturally. This means that, no matter how brilliant your experimental designs are, you’re going to have to wait.

          • ec429 says:

            I find it vexing when people posit deity-level powers attributed to AI, for example above solving wave equations for galaxies etc

            Just to be absolutely explicit, I was setting up a theoretical endpoint for a sliding scale of computation versus observation (the other endpoint would be something like “observe every particle in the Universe, have no unifying theory of their behaviour at all”), not claiming that a FOOMing AI would be anywhere near that endpoint.
            The point is that computation can be traded off against observation, and our confidence intervals for the exchange rate are really rather wide.

            Only if physics is, in some sense, incompressible.

            If you want to accurately simulate the entire Universe, as you’ve implied, then it is.

            Not so, for two reasons.

            1. I can create a toy model of gravitational physics in which I have 2n point masses in a Klemperer rosette; no matter how high I set n, I need only a few numbers to specify the entire model, so this toy ‘universe’ is compressible.

            2. What is needed is not a simulation of absolute perfect precision, but rather one with, as it were, a good condition number, so that the errors introduced by the approximations the simulation makes do not swamp the resulting ‘signal’. For evidence that physics is compressible in this manner, note that before developing GR and QM we were able to understand most things through Newtonian gravity and classical mechanics, rather than finding ourselves in a chaotic and totally incomprehensible Universe, as would be the case if one couldn’t (metaphorically) understand the outsides of rocks without accurately modelling their insides.
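            The rosette example in point 1 is easy to make concrete in code. A minimal sketch (the particular values of n, radius, and mass are made up for illustration): a rosette of 2n equal point masses is fully specified by three numbers, so the description length of this toy ‘universe’ doesn’t grow with n.

```python
import math

# Toy illustration: a Klemperer rosette of 2n equal point masses is
# fully specified by just three numbers (n, radius, mass), so the
# description length of this toy "universe" doesn't grow with n.
def rosette(n, radius, mass):
    """Return 2n bodies as (x, y, mass), evenly spaced on a circle."""
    return [(radius * math.cos(math.pi * k / n),
             radius * math.sin(math.pi * k / n),
             mass)
            for k in range(2 * n)]

model = rosette(n=1000, radius=1.0, mass=5.0)  # 2000 bodies from 3 numbers
```

            Two thousand bodies, three parameters: that is compressibility in the sense meant above. The open question is how much of *actual* physics compresses this well.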

            Computation of this kind is simply a way of applying ‘filters’ to our data to improve its signal-to-noise ratio; experiment design approaches the same problem by reducing sources of noise.

          • Murphy says:

            I’ve seen a few definitions of “Foom”. It being too fast for us to really cope with or effectively oppose by the time people really notice would probably be the more reasonable one.

            Even if it takes 5 years of something chugging away in a lab somewhere for it to crack solutions that get it access to excessive computation and resources, the clock on “foom”, the potential disaster scenario, only really starts when people notice something happening.

            Also, you seem to be stuck on the assumption that the only way to get useful knowledge is with a 25 km particle accelerator.

            Human comprehension is a major bottleneck in biology. If something had just human-level intelligence but broader, such that it could keep 10,000 interactions in mind at the same time while thinking about a problem rather than the human limit of <10, then lots of problems would probably look like open books.

          • albatross11 says:

            Murphy:

            There’s an interesting scaling question here. If I can make one super-von-Neumann, and use it to get more resources, I may or may not be able to scale up from super-von-Neumann to super-duper-von-Neumann. But I can certainly scale from one super-von-Neumann to N super-von-Neumanns, and I can certainly scale the resources I give them to discover stuff. That’s basically how we get improvements to science and math now: we throw lots of smart people at problems, accumulate knowledge and techniques, and sometimes you get what has happened in computing or biology from, say, 1940 to now.

            It may also be possible to speed up your super-von-Neumann: someone just a little smarter than von Neumann who runs ten times as fast and never sleeps can accumulate a *lot* of knowledge and discoveries about the world pretty quickly.

          • Bugmaster says:

            @ec429:

            I can create a toy model of gravitational physics … What is needed is not a simulation of absolute perfect precision, but rather one with, as it were, a good condition number, so that the errors introduced by the approximations the simulation makes do not swamp the resulting ‘signal’.

            Yes, but I contend that if your goal is to learn about our actual universe; and if you want to do so without performing lots of lengthy experiments or spending lots of time on building projects; then you’ll need to simulate the known Universe with near-perfect fidelity. At that point, it would be cheaper to just build another space telescope. And, of course, you can’t improve the signal-to-noise ratio without any signal at all…

            Just to clarify, I’m working off of the assumption that your AI wants to make significant — perhaps even paradigm-altering — scientific discoveries. It would need to do so in order to transcend our current technological limitations, which (somewhat inconveniently) prevent superintelligent AIs from existing in the first place. If all your AI wanted to do was to develop a slightly more efficient cellphone antenna, I’d have no objections to your approach.

          • Bugmaster says:

            @albatross11:
            Currently, we can create precisely zero super-von-Neumanns, and it may in fact be the case that super-von-Neumanns are impossible to create at all (depending on your definition of “super”). Assuming that they are at least possible (which is already a massive assumption, IMO), it would either take a super-von-Neumann to figure out how to do it, or just lots and lots of time. That’s the opposite of a “FOOM”.

          • Bugmaster says:

            @Murphy:

            If something had just human-level intelligence but broader, such that it could keep 10,000 interactions in mind at the same time while thinking about a problem rather than the human limit of <10, then lots of problems would probably look like open books.

            To borrow a quote, “everything you said in that sentence is false”. First of all, we already have something that has a human intelligence but can keep 10,000 interactions in mind; it’s called “a human with a database”. The problem is that there aren’t 10,000 potential interactions; there are trillions, and, what’s worse, most of them have not even been discovered yet. The problem is not just that we lack data, or that we lack CPU power; the problem is that we don’t even know what to look for. The only way to figure that out is to look at actual organisms in real life, which is what lots and lots of people are working on right now.

          • Murphy says:

            “we already have something that has a human intelligence but can keep 10,000 interactions in mind; it’s called “a human with a database”.”

            No, just no.

            [Having access to a database you can run queries against] is to [actually being able to keep things in mind]

            as

            [being innumerate but with access to a calculator] is to [being a savant]

            If you give someone with anterograde amnesia a pencil and a notebook, they aren’t suddenly on a par with someone with a good memory.

            Handing someone a dictionary does not make them automatically fluent.

            and having access to a database is not equivalent to being able to keep large quantities of information in the forefront of your mind.

          • albatross11 says:

            Suppose the best you can do with AI is to create a mind at the high end of human intelligence[1]–a Gauss/von Neumann/Einstein/Archimedes/Newton level thinker, say. And suppose that doing this takes $X worth of resources.

            You don’t then get an explosive expansion of intellect. But you can probably use the first high-end human-level intellect to start funding the resources for more high-end human-level intellects. At some point, you have a community of a million intellects that are all about as smart as the three or four smartest people on Earth. If they’re all aligned in terms of goals, they’ll probably have very little trouble taking over the planet.

            [1] This is the world we’d have if we lived in the Zones of Thought universe, and Gauss-level intellect was the limit of our zone for both biological and electronic minds.

          • Bugmaster says:

            @albatross11:

            Suppose the best you can do with AI is to create a mind at the high end of human intelligence…

            In practice, you run into problems with this scenario right away. Currently, we have absolutely no way to even begin researching anything remotely like that. Furthermore, such intellects may not even be desirable in the first place. I don’t need my car autopilot to write poetry or solve abstract math problems as well as the smartest human can; but I do need it to drive 1000x better than any human ever could. The latter is much easier to achieve, and is more profitable. That said, even if I were to grant you this premise, additional problems remain:

            You don’t then get an explosive expansion of intellect.

            Then how is this scenario different from what is already happening in our world all the time? Why is this scenario a special case that we need to fear especially hard?

            But you can probably use the first high-end human-level intellect to start funding the resources for more high-end human-level intellects.

            Where are the funds coming from? How are they different from funds accrued by, say, McDonalds (~$25 billion/year)?

            At some point, you have a community of a million intellects…

            How much money, electrical power, physical space, and logistical support does each intellect take to run? How are these intellects coordinated? By comparison, a million humans can’t agree on anything, and organizations with millions of members tend to be supremely inefficient (that’s why disruptive startups pop up all the time, for example).

            If they’re all aligned in terms of goals, they’ll probably have very little trouble taking over the planet.

            What do you mean by “take over”; why would they want to do that; and how are they categorically worse than powerful human organizations — such as the US, China, Microsoft, the Catholic Church, or, well, McDonalds?

        • moridinamael says:

          Brains consume 20 W of energy and weigh 1.4 kg, and they can do intelligence. There are probably laws of physics that put an upper bound on conceivable intelligence, but we aren’t anywhere near them.

        • Confusion says:

          I used to think that. Then I learned I was much more naive than I thought.

          You are overlooking a few things:
          * there is already a vast amount of experimental data available that humans have been unable to ‘integrate’, meaning to understand it in the context of all other data. There are undoubtedly many technological inventions and even fundamental scientific discoveries possible based on integrating the available data. No need to experimentally verify it if it already postdicts unexplained experimental results.

          * What makes you think the AI FOOM scenario presupposes a disembodied intelligence? If the AI hasn’t already been given ways of perceiving, and interacting with, the real world when it fooms, it will certainly quickly gain those abilities if its reward function cares about the real world.

          * It’s easy to come up with boring, inert AIs that cannot escape their boxes and exasperatedly explain they obviously cannot escape their boxes. The question is whether interesting AIs, designed to help us understand the real world, with abilities to interact with the real world and to do inventing for us in the real world, would ‘run away’. How about one designed by an evil madman that wants to destroy humanity? What if he gives the code an arsenal of robots to control, both nano and macro, the ability to modify itself both at the software and the hardware level, unlimited resources in terms of crude materials, vast fabrication and experimentation facilities and the explicit goal to only improve itself for its purpose for a year before executing its plan? Is it a foom if it happens after 6 months of relatively slow progress, building up faster experimentation facilities?

          I’m not convinced foom is possible. I am convinced somewhat intelligent AI is possible and could be at least as dangerous as cats that could reproduce at the rate factories produce cars, could wield guns, and saw humans as a threat. Could, not will.

          • Bugmaster says:

            There are undoubtedly many technological inventions and even fundamental scientific discoveries possible based on integrating the available data.

            I seriously doubt that. I’m sure there are some inventions that we’re missing, but I’d need to see some evidence before I can accept your (much stronger) claim.

            What makes you think the AI FOOM scenario presupposes a disembodied intelligence?

            Nothing, I never said that. My point was that an embodied intelligence is restricted to real-world speeds whenever it wants to actually affect the real world in some way. Even if it can calculate the perfect architectural design for a skyscraper in the blink of an eye (which, by the way, it can’t — not without surveying the site), it still won’t be able to build said skyscraper in the blink of an eye. This is a huge problem for the FOOM scenario, because the same limitations apply to everything the AI would have to achieve in order to become superintelligent, such as inventing nanotechnology (assuming that is at all possible, which it very likely isn’t).

            The question is whether interesting AIs, designed to help us understand the real world, with abilities to interact with the real world and to do inventing for us in the real world, would ‘run away’.

            Yes, absolutely, they do so all the time. The Flash Crash was one such example. My own little algorithm did that yesterday, when it consumed all my RAM and crashed (due to a bug, obviously).

            What if he gives the code an arsenal of robots to control, both nano and macro…

            Firstly, he won’t be able to achieve the “nano” part, since nanotechnology does not (and, I’d argue, cannot) exist (outside of obvious exceptions such as living cells). Secondly, “unlimited” resources don’t exist, either. In practical terms, such a madman actually exists already, his name is Kim Jong Un — or Uncle Sam, if you prefer to look at things from the other side. You’d deal with the AI the same way.

          • Thegnskald says:

            Nanotechnology cannot exist?

            You just mentioned nanotechnology that does exist.

            Granted, protein folding isn’t exactly friendly to read-write operations – it’s pretty much a ROM program – but… well, even if we can’t build nanomachines as we imagine them now, a protein that “triggers” another protein by releasing an enzymatic ion molecule when there is an electromagnetic gradient of some specific value is almost certainly a viable operation.

            You only need a few dozen operations like that before nanotechnology is viable at the scale of a virus.

            (I won’t get into the “fundamental scientific discoveries” thing, because I’m probably a crackpot, but I disagree on that as well. Effectively, however, this is a claim that can always be made; if I -did- have a working theory of everything, as soon as it comes out, the claim can be made again. There is no point, no matter how much data we have, or how many theories we do or do not have, where exactly this claim cannot be made – because any discovery that is made is evidence that there are fewer discoveries -to- be made, as there is now clearly one less.)

          • Lambert says:

            Do you suppose that cell biology is the only type of nanomachinery that can possibly exist?
            And that it doesn’t count as nanotechnology?
            One example of genetic engineering is already becoming able to save half a million lives per annum (golden rice).
            What’s to stop it killing that many in the wrong hands?

          • Bugmaster says:

            @Lambert, @Thegnskald:

            Do you suppose that cell biology is the only type of nanomachinery that can possibly exist?

            It is certainly starting to look that way; at least, as long as you’re talking about self-replicating molecular nanotechnology. The reason for this is that water-based chemistry is incredibly flexible and efficient. There does not appear to be a viable way to make some equivalent of a protein out of e.g. silicon. Even if you could, you run into massive issues with energy requirements, heat dissipation, oxidation, and so on.

            Unfortunately, biological cells have some pretty serious limitations. They grow relatively slowly; they are fragile; and they can only move around a very limited set of chemicals. All this adds up to: you can grow ironwood (over the span of many years), but you’ll probably never be able to grow iron, not to mention diamond.

            a protein that “triggers” another protein by releasing an enzymatic ion molecule when there is an electromagnetic gradient of some specific value is almost certainly a viable operation.

            As far as I know, what you’ve just said is essentially impossible to achieve (assuming I understand you correctly), but I could be wrong — can you show me some evidence to the contrary?

            genetic engineering is already becoming able to save half a million lives per annum (golden rice). What’s to stop it killing that many in the wrong hands?

            Don’t get me started on genetic engineering. As it turns out, genetic engineering is really, really hard. There is a handful of genes that can be easily transformed or knocked out to achieve a useful effect; beyond that, you are dealing with vast networks of genes (and intragenic regions) that all interact with each other in ways that are extremely difficult to understand, and may be impossible to manipulate. If you wanted to kill a bunch of people, you’d be way better off with good old-fashioned smallpox. If you wanted to do something actually useful, like growing a skyscraper overnight, no amount of genetic engineering will ever help you.

          • Thegnskald says:

            Bugmaster –

            If I could point to the specific proteins that we’d need to identify in order to have a protein-based nanofactory, I’d already be a significant way toward creating a protein-based nanofactory. But yes, we have, for example, probably identified proteins which appear to change the behavior of chemical reactions based on magnetic fields. (Google “birds eyes magnets” for some articles discussing this at the 30,000 foot level.) This isn’t a nanofactory – but it is the suggestion that a nanofactory might be possible. You don’t need a comprehensive understanding of how proteins work in order to get a nanofactory up and running – you just need a Turing-complete set of instructions, which is a surprisingly small instruction set. We can build abstractions on top of that. New proteins enhance our capabilities by adding new operations we can perform. Maybe we can grow carbon nanotubes in a vat out of sugar, potassium, and graphite. This sort of nanotechnology looks almost inevitable to me.

            Almost, because I am not saying it is definitely possible – maybe we’re missing the protein equivalent of a GOTO (well, probably not, IIRC RNA/ribosomes do in fact have a GOTO operation). But certainly, when you start looking at it as a boring industrial process, it starts to look a lot less like the “magic” nanotechnology prevalent in scifi.

          • Bugmaster says:

            @Thegnskald:
            What do you mean by “nanofactory”? In a purely technical sense, a zucchini is a nanofactory. It takes elements from the air and soil, energy from the sun, and combines them to produce more zucchinis. However, there are some very serious limitations on what such a factory can produce, and magnetic-sensitive proteins won’t get you there. Proteins (and DNA/RNA) simply don’t work the way you think they do. Their behavior is a stochastic process, not a set of linear instructions. In living organisms, they form vast networks of interactions whose behaviours are incredibly complex and poorly understood; phenotypes based on a single allele, such as sickle-cell anemia, are the exception rather than the rule.

            What’s worse, water-based chemistry is pretty limited in what it can do. You won’t be building synthetic diamonds or titanium plating with living cells (at least, not nearly quickly enough), because the energies required are just too high. You couldn’t even flash-build a fully-grown biological organism, such as a pine tree. Sure, you could grow a pine tree over decades in the conventional way — but you couldn’t do it overnight. Even if you somehow managed to encode the genetics for it (using magic, presumably), the cells would immediately fry themselves as soon as you tried it.

      • Lapsed Pacifist says:

        405 pounds of metal being moved 18 inches away from the surface of the earth is an outcome. You can improve your upper body strength or build a lifting robot, or make a ramp and have Hebrew slaves push the object upwards.

        There is body building, but optimizing for body building might not be the gold standard for achieving your desired results.

    • thevoiceofthevoid says:

      Much confusion comes from everyone using the words “artificial intelligence” to refer to either:
      A. Computer systems today which can e.g. decide what products on Amazon or videos on Youtube to suggest to you.
      B. Entirely hypothetical machines (“General AI” or “superintelligences”) that might be developed in the future, which would be able to model the world and take general actions to achieve a wide variety of instrumental goals.
      I don’t believe that B is physically or scientifically impossible (since humans exist and are non-magical). However, whether it will be remotely technically feasible in our lifetimes is a matter of intense debate. Yudkowsky himself admits that even with a supercomputer the size of Jupiter, he would have no idea how to build an AI that could accomplish a task as simple as putting a single strawberry on a plate. Today’s systems don’t even come close to general goal-oriented behavior; that still doesn’t mean it’s physically impossible for a machine to reach that threshold.
      Whether you could intentionally build an AI that can answer questions but doesn’t try to achieve any goals beyond that (intended or otherwise) is again controversial among experts.

      • Bugmaster says:

        I am not even convinced that superintelligences are physically possible. Surely, very smart intelligences are possible; but there’s a huge gulf between “very smart” and “effectively godlike”. AI alarmists tend to ignore that gulf. By analogy, tall people do exist, and we can build ladders much taller than any human — but we will never be able to build a ladder all the way to Alpha Centauri.

        • thevoiceofthevoid says:

          That’s a fair point. A more nuanced argument would be: We don’t know if superintelligence is physically possible or technically feasible, but we haven’t proved that it’s impossible. Since it could be possible, and could be existentially dangerous if it comes to exist, it might be worth devoting some resources to precautionary research in the area. And I’m sure you’ve seen the arguments about possible “fooms”, which boil down to “when the ladder starts building itself taller, it might be too late to worry about how tall it is.”

          • Bugmaster says:

            The problem with this logic is that it can be used to justify absolutely anything; it’s basically Pascal’s Wager. Sure, Hell probably doesn’t exist, but it’s possible that it does, so why aren’t you on your knees praying every day?

          • thevoiceofthevoid says:

            @Bugmaster

            Reasonable people differ on how probable they think it is and how much that justifies spending on the problem. Very few say “LITERALLY EVERYTHING” and advocate for funding AI research to the exclusion of all else. (I’d be wary of those who did.)

            It is a bit Pascal’s Wager-y, but slightly more grounded in reality. I don’t think of Hell as “probably not existing”, I think of it as “making no sense within my current conception of physical reality.” But going with the example regardless, I think there’s a step between “Hell might exist” and “on my knees praying every day”, namely, “research the specifics of Hell, determine whether it’s a real threat, and figure out how I can avoid it if so” (yeah I’m stretching this). That research ultimately led me to my current “not physically possible or coherent” position on Hell, and I’ve decided not to spend any more resources on the issue.

            Should we also devote resources into, say, preliminary planning for how we might be able to combat an imminent asteroid strike? Important specifics aside, I’d say sure.

          • Bugmaster says:

            @thevoiceofthevoid:
            Well, I was just going by your original sentence:

            We don’t know if superintelligence is physically possible or technically feasible, but we haven’t proved that it’s impossible

            Now you’re saying that “reasonable people differ on how probable they think it is”, which IMO is a huge step up — at the very least, you’re ruling out the scenario where superintelligence is outright impossible.

            Should we also devote resources into, say, preliminary planning for how we might be able to combat an imminent asteroid strike?

            There’s a huge difference between asteroid strikes and UFAI. We know asteroids exist. We’ve seen them. We know they strike planets, all the time. We have seen the craters. Some of those craters exist on our own planet, and small meteorites rain down on it all the time. Furthermore, if we were able to detect an incoming asteroid early enough, there are at least a few things we could do about it that could reasonably work. AFAICT, literally none of that is true of UFAI (though obviously your opinion may differ).

    • Scott Alexander says:

      I continue to be baffled by everyone’s constant insistence on the theory that spending years writing hundreds of thousands of words, running a struggling nonprofit, founding an entire new field and convincing a bunch of scientists to enter it, then bucking the consensus of his own new field just as it started to get popular — that all of this was just a really long con so a guy who could otherwise probably land a programming job with Google could make a five-digit-a-year salary and live in a one-bedroom apartment in Berkeley.

      I am glad you think computers can already eg cure cancer; would you please share the cure you’ve developed on your laptop with the rest of us?

      • dank says:

        He doesn’t want to be rich. He wants to be famous/important enough that someone will bother to thaw out his brain in the future. (Only half joking).

      • drossbucket says:

        bucking the consensus of his own new field

        Interesting, how has his thinking changed?

      • rlms says:

        A cult can have a true believer leading it.

      • LadyJane says:

        Wealth isn’t the only form of power, and there are plenty of charlatans who seek fame and/or influence rather than just money. The big city stock broker with a seven-figure bank account is less powerful, in many significant ways, than the backwoods cult leader who has hundreds of dollars to his name and hundreds of followers willing to do whatever he wants. Although it’s worth noting that not all cult leaders are charlatans, as some genuinely do believe what they’re saying.

        I also think you’re rather overstating the importance and impact of the rationalist movement with the first part of your description.

        • sty_silver says:

          This seems like a fairly meaningless objection. Either you claim that Eliezer is lying about everything he believes in or you don’t, and if you don’t then it doesn’t matter to what degree power is attractive; then you’re just accusing him of being factually mistaken.

          • LadyJane says:

            Someone can earnestly believe that their espoused views are 100% correct, and still be driven to promote those views out of a desire for wealth, fame, or influence, and still engage in manipulative and exploitative behavior in their pursuit of those goals. In fact, that can even be true if some of the views in question are actually correct.

            And I didn’t make any claims about Eliezer one way or another. I’m just making general observations about human behavior and social dynamics; draw what conclusions you will.

        • watsonbladd says:

          What can the cult leader do the stock broker can’t?

          • LadyJane says:

            Have a rival murdered, for starters. Maybe the stock broker could use his wealth to hire a professional assassin, but in all likelihood he wouldn’t know where to find one or even where to start looking. Finding a hit man to accept his money would be a very difficult and time-consuming process for him. It would also be extremely risky since he would be forced to deal with violent criminals who might want to take his money by force, plus he could potentially get swindled by a con artist or arrested by an undercover cop. The cult leader, on the other hand, could simply order one of his devout followers to do the deed.

          • rlms says:

            Have sex with lots of people without paying for it.

      • AG says:

        But by the combo of “AI research is the most important thing” and “making all of the money to fund the most important thing is a key part of EA,” then shouldn’t he have founded Google, or at least earned more than that five-digit-a-year salary, instead of assuming he has the unique intelligence to do the actual research instead of funding it? The current situation is that, instead, other entities like Google are indeed doing the research, and faster, without the ethical oversight that he wanted.

        So no, “it’s a long con to funnel money” doesn’t seem to be true. But it doesn’t match his own insistence that it’s about winning, either, as he’s not actually taking the most effective route to his goals.

        • sty_silver says:

          Are you actually claiming that not doing any research himself would have been a more effective strategy to the goal of getting alignment research done?

          • AG says:

            Yes. If he had founded Google, then he could pay that many more people to do the alignment research, whereas those same people instead are doing AI development for our-world Google, without considering alignment in such depth.

      • Ilya Shpitser says:

        “founding an entire new field”

        This is not how founding a new field works. Most fields don’t have a clear founder (for example, causal inference does not, and machine learning does not). There are exceptions: Claude Shannon is widely credited with starting information theory, because he published a seminal paper, defined most relevant concepts, and solved half of it.

        “a guy who could otherwise probably land a programming job with Google”

        Minor technical point: there is no way this would happen, because he can’t program. It’s not actually entirely straightforward to get a programming job at Google.

        “just a really long con.”

        I think EY is trying (and succeeding) to be a guru, rather than a grifter. Gurus definitely are satisfying a demand, so gurudom is somewhere on a continuum between grifting and “legitimate business,” depending on the exact nature of the arrangement.

        Personally, I don’t hold gurus in very high esteem, but opinions may differ.

        I don’t think EY is trying (?very hard?) to be a scientist or a mathematician, because those people communicate with the outside world by publishing publishable things.

      • Ilya Shpitser says:

        (Followup): a lot of the objections will go away if MIRI stops asking impressionable youngsters for money, and starts doing what research institutes typically do, namely get their funding through successful grant proposals from the government or foundations. (I know MIRI is moving more in that direction, but they are still doing their “drives”).

    • Deiseach says:

      I think that’s a little unkind. God knows, I’ve seen enough examples of writing that make me go “He really does think the sun shines out of his backside, doesn’t he?” but calling it a cult is going a bit far. There’s all the dangers of a charismatic (well, some people seem to find him so) and dominant personality being a big fish in a small pond and the appeal of “our little inside group knows a big secret that will change the world”, but it’s not quite on the level of Indian (fake) gurus and David Koresh (leaving aside Waco and how the FBI handled that, which was extraordinarily awful and completely the wrong way, I don’t think Koresh’s movement was innocuous, it seems to have started turning in on itself and warping dangerously).

      I think there’s enough “herding cats” involved with the kind of people you describe that people have started naturally moving on, growing away, finding and developing other interests within the broader EA and rationality sphere, and even the pet AI project seems to have been taken up by and developed by others who are more influential/recognised names.

      • HeelBearCub says:

        but it’s not quite on the level of Indian (fake) gurus and David Koresh

        Unless this was intended as sarcasm, it falls in the category of “damning with faint praise”.

        • quanta413 says:

          I’m pretty sure Deiseach is damning with faint praise. She’s never been super fond of EY as far as I know.

        • theredsheep says:

          I think it’s more that, if the word “cult” has any meaning other than “belief system I disapprove of,” it refers to things like David Koresh, the Bhagwan, or Scientology–groups with an identifiable pattern of abusive and exploitative behavior.

          • HeelBearCub says:

            Imagine someone saying:

            It’s almost a cult, but not quite.

            Yes, the obsequious fawning requested by the leader and the uncritical thinking it promotes are antithetical to a healthy organization, but most of them aren’t being abused or exploited. Therefore it is a bit too far to refer to them as a cult.

  37. pontifex says:

    SSC is the only place where I feel like my comments sometimes drag down the average comment quality. I feel guilty about that sometimes.

    • Dominik Tujmer says:

      You’re not alone. I regularly feel like an idiot in this setting, but that’s good, because it means I’m growing.

    • BlindKungFuMaster says:

      My feeling is that comment quality has been declining (probably because I’m posting more). So don’t worry. You’ll be a real boon to the community soon, if you aren’t already.

    • MawBTS says:

      Also, does anyone else have an IQ below the blog average (~140) and just feel like they’re polluting these hallowed halls with dumbness?

      • Montfort says:

        IMHO, SSC asks for a certain style of comment/argument and has a standard of conduct a little different from other sites, but raw intelligence is not really a prerequisite, except as necessary to adhere to them. If you’re reading and enjoying the posts, I’m fairly confident you’re smart enough.

      • xXxanonxXx says:

        That’s a little excessive, honestly. It’s the sort of attitude Scott was pushing back against here. There’s always someone smarter than you. I understand the sentiment though, and I’ve seen a number of comments in open threads to the effect of “I don’t have anything to contribute, so just lurk.”

        I stick to anecdotes mostly, or interject to steer a conversation in a direction I’m interested in, just to see what people have to say.

      • MasteringTheClassics says:

        OTOH Deiseach has a self-diagnosed IQ of around 100*, and she’s probably the best commenter on here in terms of what the community would lose if any single commenter left. Meanwhile, the atmosphere has only been improved by less of the great and enlightened Vinay Gupta, brilliant though he be. IQ isn’t everything around here.

        *yeah, I don’t believe it either, but I’ll not be crossing her.

        • John Schilling says:

          yeah, I don’t believe it either, but I’ll not be crossing her

          And now I’m morbidly curious to see the Deiseach post citing, correctly and in appropriate context, authorities from St. Augustine of Hippo to G.K. Chesterton to back up a scorching argument of the form, “Deiseach really is a not-smart and you all are fools for doubting her!”

          I mean, my head will probably explode like an AI after a Kirk-speech, but it will be fun while it lasts.

        • Deiseach says:

          Deiseach has a self-diagnosed IQ of around 100

          ‘Scuse me, according to that one Raven’s Matrices site, it’s 99 and proud of it! And if I believe Richard Lynn, I’m straggling along with a bare 93 at best (like my fellow Southern Irish ingrate rebels against the glorious British Empire and Protestant Church).

          This person defends Lynn’s results, but if you believe the conclusions, the theory we are asked to accept is that we were dumb as stumps up till the 70s/80s, then in about two decades suddenly leaped up to be as normal as the English, making up that 13-point gap out of nowhere. (May I also say that if you’re going to rope in Hans Eysenck as your character witness for the reliability of the results, God help us all).

          Just looking at the studies that post uses, in 1990 we only managed to score 87, but in 1991 we got as high as 96. Cursed by pride at this lofty achievement of jumping 9 points in one year, in 1993 we dropped to 93 and even worse, 91 in the same year but presumably testing a different bunch of potato-exhuming red-faced leprechaun-botherers. We managed a respectable 95 in 2000, finally achieved a normal average of 100 in 2009 but backslid to 92 in 2012.

          Those results seem all over the place to me, but that’s my ignorance of statistics speaking. How we got smart, then got stupid again, then smartened back up – I have no idea.

          I can’t speak for Lynn’s most recent work but the original one that gets the much-quoted “Irish national IQ of 93 (or 96)” relied on conflating a grand total of 2 studies, one carried out in 1972 on about three thousand primary age schoolchildren which got a depressing result of 87 (allegedly) and one a few years later carried out on a small number of adults which did better, so by combining the two he got an average of 93.

          Lynn didn’t do the tests himself, he just lumped together the results of other people’s studies, so I’m not so sure this is gospel. But it gets quoted everywhere, even in Irish media.

          • bean says:

            There are apparently people working on estimating IQ from a writing sample, and I’d dearly love to see what they’d make of your writing. I suspect that the IQ tests you took were skewed by your issues with math, because this sort of thing does not look even vaguely like the work of someone who has an IQ below 100.

            Also, weirdly, googling “IQ from writing sample” gave me a bunch of hits about the CIA’s style guide for writing.

          • Deiseach says:

            I’d dearly love to see what they’d make of your writing

            The ellipsis abuse alone would drive them to drink 🙂

            The quoted studies are why I’m generally sceptical of what IQ tests measure, if they’re measuring it at all, and exactly how useful they are outside a particular cultural milieu (I know the Raven’s Matrices try to be culture-neutral, but there’s still some underlying mathematical tricks to solve problems such that schooling, practice and exposure to such problems would help increase correct answers and hence slant the IQ test away from ‘measure of natural untutored whatsit’). A nine point increase in IQ in just one year must mean that the two sample groups were so vastly different there’s no useful point of comparison, or that the methodology of one or both studies was banjoed. How we then lost eight points of IQ over three years is another puzzler.

            I’m certainly not claiming we’re a race of unacknowledged geniuses, but that we would be so consistently lower than the average point especially as compared with the English sounds a tiny bit fishy to me. There’s a useful study here which helpfully breaks down pupils attending English schools by ethnicity; the “white Irish” (not “Traveller of Irish heritage”) compare very well with the “white British” educational results. The argument could be made that this data comes from the 2000s decade when we Irish had suddenly gotten smarter, so it means little, but something must be going on if Irish in Britain can hold their own with British but are testing as stupider at home:

            Percentage of all students achieving 5EM by ethnic group: 2004-2013 where 5EM means “5+ GCSE A*-C or equivalent including English & Mathematics” (these are grades in the GCSE exam for 16 year olds):

            Ethnic group   2004  2005  2006  2007  2008  2009  2010  2011  2012  2013

            White British  41.6  42.9  44.3  46.1  48.4  50.9  55.0  58.2  58.9  60.5
            Irish          46.7  50.7  50.1  52.6  57.0  58.0  63.4  65.9  66.9  68.8

          • Just looking at the studies that post uses, in 1990 we only managed to score 87, but in 1991 we got as high as 96. Cursed by pride at this lofty achievement of jumping 9 points in one year, in 1993 we dropped to 93 and even worse, 91 in the same year but presumably testing a different bunch of potato-exhuming red-faced leprechaun-botherers.

            You aren’t allowing for the short term effects on national IQ of the response of leprechauns to being bothered.

      • theredsheep says:

        I’m about there, but I’m still frequently mystified by the sheer aggregate of jargon, references, in-jokes, etc. that prevail here. Which is fine; it’s normal and healthy for a community to build up its own slang. I’m not here for the rationalism at all, not being what one would consider a rationalist. I’m not into transhumanism or AI, have no head for statistics, have only a vague notion of who the devil this Yudkowsky person is.

        I hang out here anyway, because it’s one of the most open-minded places I’ve run into. Someone could post “I think Hitler didn’t go far enough” in the comments, and the first reply would be a polite request for supporting evidence. The third or fourth would be an admonition to “steelman,” and by reply ten we’d have Wiki links to an obscure class of Gypsy bankers in one part of Bavaria. People would entirely forget the original argument to discuss the statistical distribution of mathematical talent as influenced by generations of horse-trading. The original troll would grumble, then wander off bewildered.

        Honestly, I’m here for the diversions, digressions, etc. that touch on what I know. When I come across a thread where I can follow what’s going on, I seize my chance. And I comment when I feel like I have something sort of relevant to say, and if it isn’t really steeped in communal lore, I figure it probably won’t hurt, as long as I’m not spamming.

        • Deiseach says:

          Someone could post “I think Hitler didn’t go far enough” in the comments

          Ouch. We did have that one person, but they turned out to be trying to troll us. Though in terms of bait, I suppose this is one of the few places where “watch this four-hour video” would garner enough “okay, to give you a fair chance to make your argument” responses to be worth it 🙂

          • bean says:

            No, the response to “watch this video” is always a request for a transcript, followed by general agreement that text is superior for this purpose to video. But “read the transcript of this 4-hour video” would get responses.

          • theredsheep says:

            Didn’t run into that one. I was actually thinking of that time somebody called SSC and its commenters “insufferably autistic” or something, and Scott posted it, and thousands of dispassionate words were typed arguing what exactly “autistic” was meant to imply in this particular context. Nobody, AFAICT, bothered to get offended.

          • Jiro says:

            That itself is a sign of being “autistic” in the likely intended sense, even if not literally autistic. It’s actually important to respond to social cues. Treating an insult as if it’s solely a logical argument and ignoring its function as an insult is a bad idea.

          • Berna says:

            @Jiro:

            Treating an insult as if it’s solely a logical argument and ignoring its function as an insult is a bad idea.

            Why is it bad, and what should one do instead?

          • Deiseach says:

            Treating an insult as if it’s solely a logical argument and ignoring its function as an insult is a bad idea.

            Though there is always the social gambit of ignoring an insult and letting spectators draw the inference that your status is unassailable and that the person attempting to insult you is the equivalent of a drunk crazy homeless person yelling obscenities on the street; by refusing to engage with them you are denying them the satisfaction and power of having the upper hand to make you sputter and fume and attempt to deny the charge, or get into a mutual slanging match.

            It’s tough to do and does heavily rely on you being able to pull off “I am too high status for a petty fleabite like this to affect me”, but if you can do the de haut en bas bit it’s effective.

            Granted, in some contexts the only acceptable and effective response is a punch in the snoot (literal, metaphorical or verbal) to the person insulting you, but oftentimes the person wants to make you angry/sad/run around like a headless chicken overcome by emotional response, and apparently clueless, missing-the-point ‘no I was trying to say you’re a big poopy head’ literalism deflates their fun.

          • theredsheep says:

            Jiro, that’s what made it so funny, at least to me and my wife. But also awesome. Codex DGAF.

          • carvenvisage says:

            @Jiro

            That itself is a sign of being “autistic” in the likely intended sense, even if not literally autistic. It’s actually important to respond to social cues. Treating an insult as if it’s solely a logical argument and ignoring its function as an insult is a bad idea.

            As usual with people advising others to studiously avoid incidental trappings of autism, this is a pretty ‘autistic’ way of looking at things:

            1. Treating an insult as beneath notice/contempt usually functions as a response on the level of the insult. (whether intended to or not)

            2. ‘autistic’ people have a comparative advantage for dealing with insults in this and other dismissive/stonewalling/contemptuous ways

            3. advising people to defang themselves so they can more neatly fit into your cosy paradigm is a gross provocation – the likes of which might be excused by extreme youth or actual autism but certainly isn’t the mark of a worldly wise socialite to throw randomly about.

          • Jiro says:

            Treating an insult as beneath notice/contempt usually functions as a response on the level of the insult.

            That only works if you obviously understand how serious an insult it is but are ignoring it anyway. If you seem to be ignoring the insult out of a lack of the social skills needed to deal with insults, that won’t work.

          • Toby Bartels says:

            Even if that's so, it's not what was happening here. Scott said ‹Here's an insult that I got.›, and then we all analysed it. So we obviously knew what it was; we just didn't care.

          • Jiro says:

            Stating “this is an insult” doesn’t count as really understanding that you’ve been insulted. Understanding that you’ve been insulted doesn’t mean “I am capable of classifying sentences into insults and non-insults”; it means reacting as a human normally does to an understood insult.

            This does not mean you need to insult them back or anything like that. There’s a wide range of ways in which normal humans react. But “I dispassionately consider the merits of the proposition literally expressed by the insulting sentence” is not one of them.

          • theredsheep says:

            What’s the desired outcome that results from handling an insult “correctly”? In this case, nobody here was upset or demoralized, and they got an interesting discussion out of it. What were they supposed to do, call them doodyfaces?

          • Jiro says:

            Okay, let me rephrase. It’s a bad idea if you don’t want people to reasonably think of you as autistic.

          • carvenvisage says:

            Understanding that you’ve been insulted doesn’t mean “I am capable of classifying sentences into insults and non-insults”; it means reacting as a human normally does to an understood insult.

            If stepping outside normal reactions causes you to fear for your status, then it’s already catastrophically low, and if you’re proceeding from that as a base assumption, what are you doing dispensing social advice?

            reacting normally =/= reacting effectively

            _

            There is a big difference between being an actual designated scapegoat and a potential one, and if your advice is aimed at skulking around on that unfortunate borderland, it might have some twisted highly indirect merit (I’d still argue against it, but maybe), but advising everyone to act as if any proclivity towards social obliviousness relegates them to said borderland is just defeatism on other people’s behalf.

            _

            (Surplus argument: If you’re 6 foot 5 and weigh 300lbs in muscle, a normal person is going to accidentally scare people all the time and you might e.g. try to be more restrained and aware of the effect your presence has. Giving so little a shit about petty nonsense that you have trouble perceiving insults isn’t ‘normal’ either.)

          • Deiseach says:

            There’s a wide range of ways in which normal humans react. But “I dispassionately consider the merits of the proposition literally expressed by the insulting sentence” is not one of them

            So you are saying the only permissible way to react to an insult is to be outraged and angry and to hit back? Yes, that is one way of doing it, and this is how honour cultures handled it (to the extent that even what was perceived as a mild or unintentional insult could not be ignored or smoothed over by an apology, so both parties had to duel and you could end up with one person dead over saying ‘I don’t like the colour blue’ when the other party was wearing a blue suit).

            But ignoring/playing Logical Positivist (“you say I am a big poopy head? hmm but is it possible for a cranium to be made exclusively out of excrement? let us see if there are any examples in nature to back up your claim”) is also a way of dealing with an insult. If I know that the person hopes to evoke anger and upset in me, then by denying them such a response I am taking their power to affect me away from them. And by causing them to dance with frustration over “no, no, you’re not reacting correctly, you’re supposed to be upset!” then I am the one who is controlling this interaction and exhibiting superior social power.

            I really don’t see why “you were insufficiently upset by that insult! why didn’t you break his legs for saying that!” is a one-size-fits-all response. And if that means that most of us on here are not “normal humans”, then so be it!

            Jiro, you sound like an Enneagram Type Eight: the person for whom anger and its expression is authenticity, and when others say (in a situation where an Eight sees the only possible reaction as anger honestly expressed) “No, I’m not angry” or “I am angry but expressing it differently”, they see that as deceitful, dishonest, dissembling, and untrustworthy – the ‘proper’ response is to hit back when you’ve been hit, what kind of weak sauce response is this, are you too dumb to see you’ve been insulted or are you too sly and pulling some con?

          • quanta413 says:

            Okay, let me rephrase. It’s a bad idea if you don’t want people to reasonably think of you as autistic.

            Sure, but why should we care what people think about that?

            Lots of things here are going to scream “autistic” to people who don’t know what autism is and think it means something like “nerds who are way too into boring technicalities about boring things”. It’s ok if these people think everyone here is autistic! As long as that makes them feel better. I mean, I guess I’m not concerned if it makes them feel worse, but I hope it doesn’t in the same way I hope it’s sunny yet not too warm each day.

          • Nancy Lebovitz says:

            I have a story about ignoring insults.

            Once or twice on usenet, I got good results by ignoring insults, but I believe it was a matter of a poster (possibly two) with bad manners rather than trolls.

            In any case, what I did was ignore insults and only reply to content, and the person (or two) eventually dropped the insults. There may well have been other things going on, but I like to think I trained them by only giving reinforcement for the behavior I wanted.

            It was a good bit of work, and as I recall it was more work to think of intelligent responses to content than to ignore the insults.

          • Jiro says:

            So you are saying the only permissible way to react to an insult is to be outraged and angry and to hit back?

            No, there are other things you can do. In a lot of contexts you can just ignore it.

            But that’s different from being told “Your mother is a whore!” and responding by steelmanning his position and trying to gather evidence that may or may not indicate whether your mother is really a whore.

          • quanta413 says:

            But that’s different from being told “Your mother is a whore!” and responding by steelmanning his position and trying to gather evidence that may or may not indicate whether your mother is really a whore.

            And your father smelled of..?

            I couldn’t steelman insults to family or friends on the off chance that things propagate from the internet to the real world and I hurt their feelings. But I might steelman the claim that I’m a whore. Or a prude. Or a prudish whore. If it was interesting enough.

          • Jiro says:

            But I might steelman the claim that I’m a whore.

            Assuming a central set of circumstances under which you’ve been called a whore, that’s “autistic”, even if you’re not literally autistic. Normal people don’t act that way.

            Also, reacting that way is considered a win for the person insulting you, and has corresponding consequences (encourages further insults, lowers your status, etc.)

        • LadyJane says:

          I remember there was one poster a while back who kept saying that the Kulaks deserved their fate, which comes pretty close to “Hitler didn’t go far enough” in my view. People still engaged with him in good faith, and the ensuing political argument didn’t get particularly heated.

          More recently, someone was arguing that U.S. forces should start sabotaging water supplies near the Mexican border so that illegal immigrants will die in the desert instead of making it into the country. That’s not quite as bad as “Hitler didn’t go far enough” or “Kulaks deserved it,” but I don’t consider it that much better, except perhaps in terms of sheer scale.

          • Jiro says:

            By that reasoning if I own a grocery store and I lock it up every night so that starving people can’t steal the food, and someone dies of starvation because they can’t steal the food, I’m a murderer.

            If you’re required to let thieves steal merely because they need the thing they are stealing, you destroy the ability of people to own property at all. What if instead of taking water, they were stealing cars and selling the cars for money to buy water? Would we have to allow that too?

          • Lambert says:

            Whose property is the water?
            It’s not stealing from you if you never owned it.

            Or were you making an argument on a more metaphorical level, where the ‘property’ is the USA itself?

          • Jiro says:

Yes, the US’s water belongs to the US. The US has no obligation to keep it easy for someone who doesn’t belong to come in and help themselves to some.

          • theredsheep says:

            You say that as though we were undergoing a desperate water shortage and the little drinks given to Mexicans caused us lasting harm. Water falls from the sky. In some parts of the country, it’s tight due to excessive agricultural use, but if it’s that tight, I’m surprised they haven’t invented Dune-style stillsuits.

          • Jiro says:

            I’m pretty sure that one or two food items stolen from a grocery store also won’t cause lasting harm. That still doesn’t mean the grocery store is acting immorally if it keeps starving people from stealing its food.

            Furthermore, the Mexicans would not need the water if they weren’t trying to violate the law; if they violate the law, don’t run into any water, and therefore die, that’s their own fault. We are not required to keep them from killing themselves by making sure our water is available for them.

          • CatCube says:

            The water in question is usually set out by organizations specifically attempting to facilitate crossing. So it wasn’t stolen from anywhere in any meaningful sense. There’s a case to be made for littering, I guess.

          • Lambert says:

            It belongs to the people who bought the water, not the US gov’t.
            Would it be different if the water came from a well in Mexico?

          • LadyJane says:

            @Jiro: I think you’re misunderstanding the situation. The issue is not that immigrants are stealing water from the government and the government is trying to stop them from doing so. The issue is that private citizens and organizations are leaving out water supplies to prevent immigrants from dying, and the government is removing or destroying those water supplies to ensure that immigrants are at higher risk of dying.

            The government is not passively allowing people to die in order to protect its property rights. It’s violating property rights in an active attempt to cause people to die, or at least to discourage their behavior by making their risk of death significantly higher.

          • quanta413 says:

            There’s no property right to stick your stuff in the middle of the desert and not have it interfered with if you don’t own that desert. Especially with the explicit purpose of aiding and abetting breaking the law.

            There are valid moral arguments against possibly causing more deaths. There are valid arguments that current immigration law is pretty subpar. But property rights to just leave your stuff sitting around wherever don’t really come into it.

            If I left a case of water just sitting in the middle of the sidewalk for a day, I don’t really have any right to expect it to be there when I come back.

          • John Schilling says:

            There’s no property right to stick your stuff in the middle of the desert and not have it interfered with if you don’t own that desert.

Yes, there is. If I park my jeep in a bit of federally-owned public desert that isn’t explicitly posted “no trespassing” or the like, I have a right to not have anyone – not even the federal government – drive or tow it away or vandalize it in place. If I set up my tent and leave it there while I go on a day hike, it had better be there when I get back. And if I decide to leave a cache of food and water for when I get back from my hike – or for my friends when they get back from their hike – that’s still my food and water, to do with as I please and not be stolen or vandalized. Property rights, while harder to enforce in practice, are not voided in principle by the property being left untended in a public area.

            At least, not in the general case. You are trying to make a special case that property intended to prevent illegal immigrants from dying in the desert is exempt from this general respect for private property and can be vandalized. You are trying to justify this by noting that the immigrants in question are committing an actual misdemeanor, which you would rather they die than get away with. And you are proposing to take active measures to kill them, in defiance both of their right to life and of your fellow American citizens’ right to dispose of their private property as they see fit. Because this is different, this is special.

And you really don’t understand how everyone who isn’t a hardcore Trumpist looks at you and sees only something especially monstrous?

          • LadyJane says:

            @John Schilling: Yes, exactly. Thank you.

            I’m always amazed by how many otherwise libertarian-minded people will suddenly abandon all of their principles when it comes to immigration, and then go through all sorts of bizarre and contrived mental gymnastics to explain how they’re not really abandoning their principles. (I have my suspicions as to why, of course, but perhaps this isn’t the best place or time to discuss them.)

          • Toby Bartels says:

            @ Jiro :

            So your position is that the federal government owns the country, and without this being recognized, ‘you destroy the ability of people to own property at all’, right? How is this different from the Soviet Union?

          • HeelBearCub says:

            @John Schilling:
            Cogent, coherent and well argued. Huzzah.

          • quanta413 says:

            @John Schilling

Yes, there is. If I park my jeep in a bit of federally-owned public desert that isn’t explicitly posted “no trespassing” or the like, I have a right to not have anyone – not even the federal government – drive or tow it away or vandalize it in place. If I set up my tent and leave it there while I go on a day hike, it had better be there when I get back. And if I decide to leave a cache of food and water for when I get back from my hike – or for my friends when they get back from their hike – that’s still my food and water, to do with as I please and not be stolen or vandalized. Property rights, while harder to enforce in practice, are not voided in principle by the property being left untended in a public area.

            At least, not in the general case. You are trying to make a special case that property intended to prevent illegal immigrants from dying in the desert is exempt from this general respect for private property and can be vandalized. You are trying to justify this by noting that the immigrants in question are committing an actual misdemeanor, which you would rather they die than get away with. And you are proposing to take active measures to kill them, in defiance both of their right to life and of your fellow American citizens’ right to dispose of their private property as they see fit. Because this is different, this is special.

            Property rights do not have infinite shelf life upon abandonment of property. All of your attempted counterexamples involve specific properties that it is understood you will go back and retrieve or you have formed an explicit agreement with a particular person to retrieve. It is not the general case that you can just leave things wherever for an unspecified amount of time.

            Also note that if you park your jeep somewhere that wasn’t marked for public parking or an area where the government explicitly allows it, your jeep can and would be towed. Likely in the same day.

            Upon abandonment of your property for someone else’s use without any legal contract or common law, all bets are off. Your property rights become irrelevant because you ceded them.

If the people who take your water are the border patrol or someone smuggling immense amounts of cocaine into the U.S. instead of the intended down-on-their-luck illegal immigrants, then tough luck. You abandoned it.

You are begging the question by assuming that property rights apply in cases that legally would not be recognized. The government wouldn’t legally recognize an unsanctioned loan agreement you had with the mafia even if the agreement was an improvement over the status quo for both you and the mafia. But this is even worse, because you don’t even have a specific agreement with a particular person you’ve left the water to.

And you really don’t understand how everyone who isn’t a hardcore Trumpist looks at you and sees only something especially monstrous?

            Cool off. I do not condone slicing open water caches. But I don’t accept incoherent arguments for why it’s wrong to cut open water caches either. Shoehorning everything into a libertarian framework needs to be less lazy than “I can just drop stuff wherever I like for whatever purpose for an unspecified amount of time never to be picked up by myself again without making any arrangements that would be understood as a contract (verbal or otherwise) and then have a right to expect that stuff is used how I would like.”

That’s why I explicitly said there are other valid arguments against cutting water caches. But a libertarian property rights argument against it is bullshit. The property rights argument for cutting is stupid too. Property rights don’t dictate that you should cut a cache; they just say that you can without violating property rights. But I can do lots of terrible things without violating property rights.

            @LadyJane

            I’m always amazed by how many otherwise libertarian-minded people will suddenly abandon all of their principles when it comes to immigration, and then go through all sorts of bizarre and contrived mental gymnastics to explain how they’re not really abandoning their principles. (I have my suspicions as to why, of course, but perhaps this isn’t the best place or time to discuss them.)

            (A) I’m only vaguely libertarian.
            (B) I’m irritated that your argument is terrible and shoehorns things into a libertarian framework in a way that makes no sense.
            (C) Make a coherent moral argument instead of jamming everything into a property rights framework. Make an actual argument for why my claim is mental gymnastics and yours is not.

          • Matt M says:

            I’m always amazed by how many otherwise libertarian-minded people will suddenly abandon all of their principles when it comes to immigration, and then go through all sorts of bizarre and contrived mental gymnastics to explain how they’re not really abandoning their principles.

It’s not “bizarre and contrived” at all.

            Under our current laws and system, the federal government is the rightful “owner” of the lands that constitute the US border, and therefore, it has the right to determine who is allowed to cross the border and who is not allowed to cross the border.

Libertarians can (correctly) argue that it shouldn’t own said land, but the current reality is that it does.

            To the extent that libertarianism requires a healthy respect for property rights, the only relevant question for border enforcement is “who owns this land?” And as of today, the federal government owns this land.

            I welcome any leftists, neocons, or anyone else who would like to start a discussion on transferring these lands (the border, and all other federally managed “public” lands) to state, local, or better still, private control. You guys up for that?

          • Toby Bartels says:

            @ Matt M :

            Under our current laws and system, the federal government is the rightful “owner” of the lands that constitute the US border,

            That's not true, or at least it's not true if you remove the scare quotes, so what exactly are you claiming here?

          • quanta413 says:

            That’s not true, or at least it’s not true if you remove the scare quotes, so what exactly are you claiming here?

            It’s not literally true but it’s a pretty good approximation for the discussion at hand. The sovereignty they exercise is more powerful than regular property rights (and probably less just). The government can decide what is valid public use, where, when and whether people are allowed to enter that land from the other side of the border, etc.

And they can more easily search you within what… 50 or 100 miles of the border, or some such? I can’t as easily extend my rights 5 feet outside of my house.

          • Jiro says:

            And if I decide to leave a cache of food and water for when I get back from my hike – or for my friends when they get back from their hike – that’s still my food and water, to do with as I please and not be stolen or vandalized.

And if you decide to leave a bomb there so that your friends can pick it up later and toss it into a bank vault, expect the government to take the bomb away. Illegal immigration is, by definition, against the law, and if you leave things on public property to be used in helping people violate the law on that property, expect them to be confiscated.

          • CatCube says:

            @John Schilling

And you really don’t understand how everyone who isn’t a hardcore Trumpist looks at you and sees only something especially monstrous?

            It’s interesting to drag Trump into this, as the POV which kicked off that particular donnybrook in 105.0 is one I’ve held since Trump was a Democrat, donating to Hillary Clinton. It’s totally orthogonal to the man, since I didn’t vote for him in the last election, and probably won’t pull the lever for him in the next one–ideally, he’d get primaried, and any of the other candidates in the last election would be preferable, but you’d already see rumblings if that was going to happen. (I’d also at least consider it if the current court-packing idiocy looks like it’s gaining steam.)

            Would Trump agree with the Border Patrol dumping out water, anyway? We know he’s got a chubby for large, expensive civil works on the border, but this is a different matter. I don’t follow what the guy says, so it’s possible that there was a water dumping incident recently and he defended the agents caught doing it, but all the ones I recall date back some 5-8 years.

          • Toby Bartels says:

            @ quanta413 :

            The sovereignty they exercise is more powerful than regular property rights (and probably less just).

            Agreed, so how is this different from the Soviet Union?

          • quanta413 says:

            @Toby Bartels

            Agreed, so how is this different from the Soviet Union?

            I’m not arguing about whether or not the system is moral. I’m arguing over whether or not in the case of confiscating or cutting water caches being used to help people get across the border the system violates property rights. I say it does not and the question of morality is orthogonal.

            I would be well within my property rights to move water someone else put on a property I owned in the desert near the border into a lost and found box inside my house with a sign where the water was saying “please ring bell outside gated house with guard dogs to claim the water you accidentally left on my property kind stranger! Also tell me the container it was in so I can know I’m returning it to its rightful owner.” Only a fool would then come ring my doorbell for fear I might call the border patrol on them. Even if I was just clueless and really did think someone had forgotten something, they couldn’t know that.

            This might end up with people dying by the same logic as cutting or confiscating water caches that happen to be on public land. This might be morally wrong behavior on my part if I did this to make crossing the border dangerous and wasn’t clueless, but not because it violates property rights. After all, clueless me didn’t violate any property rights by moving things somewhere safe for whoever accidentally forgot their water. I don’t see how property rights hinge on my intent here.

            And since I’m not a hardcore libertarian, I support the state having some powers that private citizens don’t have. National defense, running the legal system, border control, the ability to collect taxes, etc. I accept a certain amount of abuse will happen as being the least possible evil. If you want to have an argument like “anything besides anarcho-capitalism is the moral equivalent of mass starvation and gulags”, I guess we could have that? But I’m not super interested in that debate unless you’ve got a really compelling argument I haven’t seen before.

          • Toby Bartels says:

            @ quanta413 :

            I’m arguing over whether or not in the case of confiscating or cutting water caches being used to help people get across the border the system violates property rights.

            Yes, and you conclude that it doesn't, in part by arguing that government control that is exercised so strictly that it amounts to de-facto ownership acquires a property right nullifying that of the putative owners. (Well, Matt M seemed to argue that, and you defended it.) By this reasoning, dekulakization also did not violate property rights; the Soviet government exercised such control over the uncollectivized peasants' farms that the peasants did not really own them, but were merely recalcitrant renters refusing to comply with the terms of the revised lease, resulting in their forcible eviction. (I keep bringing up the Soviet Union because that's how we got into this discussion; LadyJane suggested that sabotaging water caches was similar to dekulakization except for scale.)

            Incidentally, although I am a hardcore libertarian, I am not an anarcho-capitalist, because I am a left-libertarian. So I don't think that property rights should be absolute, and I wouldn't be at all happy with a landowner sabotaging water caches on their land either. But at least someone who defended that on the basis of property rights would be using ‘property’ in its usual sense, and I would be able to see a distinction between their position and that of the Soviet Union. ETA: Actually, you seem to have shifted to this yourself, based on the examples in your latest comment, away from the position that I inferred as Matt M's. I agree with you that this is not really the same kind of thing as dekulakization (much less the Holocaust, which Lady Jane also mentioned), even ignoring scale.

          • ana53294 says:

I was wondering if you could base the legality of the water caches on the obligation of witnesses to do what they reasonably can to rescue somebody in danger (this usually means that the minimum requirement is to call emergency services to deal with it; but if you pass by a woman getting beaten in the street, don’t call the police, and a witness saw you pass by, you can be prosecuted in Spain).

But it seems that there is no such obligation in the US, except for Florida, Massachusetts, Minnesota, Ohio, Rhode Island, Vermont, Washington, and Wisconsin, and none of them are at the border.

When the obligation to help was explained to me, I was told it was based on Roman law. I couldn’t find any reference to it in the UK, either, so I guess this is one of those occasions where Anglo-Saxon law differs from European law (Germany, France, Italy, Greece, Portugal and even Russia have this legal principle).

          • bean says:

If I park my jeep in a bit of federally-owned public desert that isn’t explicitly posted “no trespassing” or the like, I have a right to not have anyone – not even the federal government – drive or tow it away or vandalize it in place. If I set up my tent and leave it there while I go on a day hike, it had better be there when I get back. And if I decide to leave a cache of food and water for when I get back from my hike – or for my friends when they get back from their hike – that’s still my food and water, to do with as I please and not be stolen or vandalized. Property rights, while harder to enforce in practice, are not voided in principle by the property being left untended in a public area.

            These aren’t absolute rights, though. Yes, I can absolutely leave my jeep while I go for a day hike. I can leave it for a week or two, although I’ll put a note in it for the rangers so they know when to expect me back. I can leave my tent for the day. I can leave a cache of food for a while. (I’m not up on caching etiquette, but I suspect those get notes, too.) But if I park my jeep and come back 6 months later, I shouldn’t be surprised that it’s gone. And leaving a cache for some random passerby who may or may not ever come along seems very different from leaving one for my friends to pick up next week. If I dumped random empty jugs around on public land, I’d be arrested for littering, and rightly so. Why does that change if I fill them with water and write “cache” on the side?

          • Jiro says:

I was wondering if you could base the legality of the water caches on the obligation of witnesses to do what they reasonably can to rescue somebody in danger

            If that’s legitimate, then it would really be an obligation. People would be legally required to leave out caches of water for illegal immigrants. Are you sure you want to go this route?

          • Matt M says:

            Yes, and you conclude that it doesn’t, in part by arguing that government control that is exercised so strictly that it amounts to de-facto ownership acquires a property right nullifying that of the putative owners. (Well, Matt M seemed to argue that, and you defended it.)

            I am merely responding to the assertion (made here by LadyJane, but frequently made by others in all sorts of venues as well) that libertarianism defaults to an open-borders position, and that anyone who does not favor open-borders cannot properly call themselves libertarians.

            To be clear, I think this is complete and total nonsense.

Libertarianism defaults to “open borders” in a simple and elegant manner – by respecting property rights. If I own my property, I have the right to host whichever guests I choose, and the state has no business telling me who I can or can not have on my property. That is the libertarian argument for “open borders” and it is one I happen to agree with.

That said, I cannot demand that anyone else make their property available to assist my preferred guests in reaching my property. If Undocumented Jose wants to visit me, he is free to do so with my permission. But he must also obtain the permission of each and every property owner along the route he intends to travel to reach me.

            In the US, as currently constructed, the government maintains a claim of ownership or sovereignty over all of the routes Jose might take to reach me. Anyone other than full-blown AnCaps would seem to respect those claims. Because after all – who would build the roads?

I, however, happen to actually be a full-blown AnCap. I don’t think this claim of ownership or sovereignty by the state is legitimate. I see no particular reason to respect it.

            But even with that concession, it is non-obvious to me that “open borders” would still be the default. Because even if we strike out the state’s claim to ownership of border property, airports, public roads, etc. the actual owner then becomes unclear. Ideally all of these assets would be privatized and clarity would be established – but that isn’t happening any time soon.

            If we were to treat these lands and assets as “unowned” that doesn’t necessarily help the situation either. I suppose one could make an argument that immigrants and coyotes are homesteading unowned lands by leaving water caches, which then makes those caches their rightful property which nobody else has the right to disturb. Of course, I think if you opened those particular flood gates, there is no shortage of right-wing militia types who might choose to “homestead” these lands in a method you might find slightly less pleasant.

            If nobody owns the land in question, your right to leave water caches is not any more significant than someone else’s right to destroy them.

            Note that this is one of the reasons I am an AnCap in the first place. Establishing clear property rights is a great and easy and effective way to solve many such political disputes, including this one.

          • LadyJane says:

            @Matt M:

            In the US, as currently constructed, the government maintains a claim of ownership or sovereignty over all of the routes Jose might take to reach me. Anyone other than full-blown AnCaps would seem to respect those claims. Because after all – who would build the roads?

            The brilliant Jeffrey Tucker addresses this exact argument in his article on libertarian brutalism. (While Tucker is an anarcho-capitalist, his argument here is not framed in anarcho-capitalist terms and is just as applicable within minarchist, classical liberal, and moderate libertarian frameworks.)

I’ve heard many libertarians postulate that public spaces ought to be managed in the same way private spaces are. So, for example, if you can reasonably suppose that a private country club can exclude people based on gender, race, and religion – and they certainly have that right – then it is not unreasonable to suppose that towns, cities, or states, which would be private in absence of government, should be permitted to do the same.

            In fact, it has been claimed, the best kind of statesmen are those who manage their realm the same way a CEO manages a corporation or the head of a family runs a household.

            What is wrong with this thinking? It is perhaps not obvious at first. But consider where you end up if you keep pursuing this: there are no more limits on the state at all. If a state can do anything that a private home, a house of worship, a country club, or a shopping center can do, any state can impose arbitrary rules, conditions of inclusion, or codes of speech, dress, and belief, including every manner of mandate and prohibition, the same as any private entity does. Such a position essentially belittles 500 years of struggle to restrain the state with general rules, from Magna Carta to the latest rollbacks in the war on drugs.

            The whole idea of the liberal revolution is that states must stay within strict bounds – punishing only transgressions against person and property – while private entities must be given maximum liberality in experimentation with rules. This distinction must remain if we are to keep anything that has been known as freedom since the High Middle Ages. Through long struggle, we managed to erect walls between the state and society, and the struggle to keep that wall high never ends. The notion that public actors should behave as if they are private owners is an existential threat to everything that liberalism ever sought to achieve.

          • Matt M says:

            I’ve read that article. It was probably the last straw that sent me from being a huge fan of Tucker (I own several of his books and have donated to him before, actually) to considering him a threat to freedom and liberty.

            He, like many other left-libertarians, totally jumped the shark of anti-Trump virtue signaling to the extent that he’s totally turned on the most consistent group of libertarians on Earth (the Mises crowd) and is now happily smearing them as Nazis. No thanks.

          • LadyJane says:

            By the Mises crowd, do you mean the Rothbard/Rockwell/Hoppe paleo-libertarian crowd that’s been dragging poor Ludwig’s name through the mud? I have a great amount of respect for Mises himself, and for some of Rothbard’s early work. However, Rothbard took a hard turn to the cultural right in his later years, attempting to wed libertarian economics to social conservatism and nationalism while abandoning the globalist, cosmopolitan, and socially liberal principles of his mentor. As a result, the Mises Institute has become a paleo-libertarian think tank promoting views that Mises himself likely would’ve been disgusted by. (See: http://thejacknews.com/featured/libertarians-alt-right-ludwig-von-mises-would-not-like-institute/)

            I’m also a little baffled that you consider Tucker to be a “left-libertarian,” unless you’re just using that as a catch-all snarl term for libertarians you disagree with. The term left-libertarian has traditionally referred to libertarians who reject capitalism in favor of far-left economic systems like socialism or syndicalism, but lately I’ve seen paleo-libertarians use it to refer to right-libertarians (i.e. free-market capitalist libertarians) who are liberal or progressive on social and cultural issues, which basically dilutes the term into meaninglessness. Tucker is about as far to the economic right as you can possibly get, and seems fairly neutral on culture war issues, so calling him a left-libertarian seems misguided at best.

            Finally, and most importantly, which of the views expressed in Tucker’s article do you actually disagree with? Why do you reject Tucker’s interpretation of libertarianism to the point where you consider him a threat to freedom and liberty? (I’m interested in hearing criticisms of his actual object-level views, not just “I’m pissed at him because he’s pandering to the Blue Tribe and telling the Red Tribe to fuck off, and I like the Red Tribe better.”)

          • Matt M says:

            As a result, the Mises Institute has become a paleo-libertarian think tank promoting views that Mises himself likely would’ve been disgusted by.

            Uh huh, sure. I’ll take the opinions of his wife, friends, and students over some random journalist whose major complaint (much like Tucker’s) seems to be: But those guys are raaaaaaaaaaaaaaaacist!!!

            The Mises Institute offers no declaration of which social values one must follow, which is what makes them legitimate libertarians. It’s Tucker and the other left-libertarians who insist that libertarianism requires SJW stances on various culture war issues.

            Hard pass.

          • quanta413 says:

            @Toby Bartels

            Yes, and you conclude that it doesn’t, in part by arguing that government control that is exercised so strictly that it amounts to de-facto ownership acquires a property right nullifying that of the putative owners. (Well, Matt M seemed to argue that, and you defended it.) By this reasoning, dekulakization also did not violate property rights; the Soviet government exercised such control over the uncollectivized peasants’ farms that the peasants did not really own them, but were merely recalcitrant renters refusing to comply with the terms of the revised lease, resulting in their forcible eviction. (I keep bringing up the Soviet Union because that’s how we got into this discussion; LadyJane suggested that sabotaging water caches was similar to dekulakization except for scale.)

            That kind of government control is stronger than property ownership in most ways and more easily abused, but that does not mean it isn’t valid. The U.S. government has a military and often abuses this power, but that doesn’t mean I think the U.S. government can’t or shouldn’t have a military.

            The commons are not a shelf you can leave things on for eternity, especially if the whole point of leaving those things there is to violate some other rule about how the commons work. You don’t have a property right to abandon your stuff on the commons and not have it interfered with. There is no sane libertarian scheme of how property rights work on public land that doesn’t acknowledge that the government has significant ability to restrict how that land is used. If it would be within property rights for a private owner to confiscate the water (and even dispose of it if it’s taking too much space or imposing some other burden on the rightful owner of the property), then the government may also do so – especially when the purpose of that property is being undermined by the actions of those who don’t own it.

            Property rights spring from informal and traditional customs and agreements that were found to usually be beneficial. Sometimes they are codified or even invented in order to improve market efficiency, but the invention case is rare. They do not spring forth fully formed from anarchocapitalist axioms, except for the rare libertarian deontologist. The Soviets violated well understood traditions of who owned what. There is no well understood tradition that your property rights remain valid if you abandon your stuff off of your own property in order to violate the law, because you had (roughly speaking) no property right to leave your stuff somewhere you don’t own in order to undermine the rightful owner.

            Also, scale affects essentially all moral and legal ideas. The idea that cutting water caches that violate the government’s right to control the border is equivalent to confiscating enough grain to starve even one person is ridiculous (I realize that’s not your position, but the comparison to kulaks “except scale” is inaccurate enough to bother me). At best, the moment the government drops a reliable, easy-to-use emergency transmitter for border patrol pickup, the scenario is basically identical to a kind property owner moving things to the lost and found. At worst, if it just drains a cache or carries it away forever, I think it’s roughly comparable to a park ranger chopping a bolt on a climb because it violates the natural beauty of the area, potentially making the climb less safe. Note, people have died in areas where there were fights about this issue. I think it’s morally wrong for the park to cut reasonably placed bolts on public land when there’s no clear understanding about whether or not bolting is allowed, but that’s not because of a property right that climbers have to place bolts just wherever on public land. Ordinary property rights don’t come into it.

          • LadyJane says:

            @Matt M:

            I’ll take the opinions of his wife, friends, and students over some random journalist whose major complaint (much like Tucker’s) seems to be: But those guys are raaaaaaaaaaaaaaaacist!!!

            Rothbard and Mises had similar views while Mises was alive, but Rothbard’s views changed after Mises died. Rothbard became a hardcore paleo-conservative, while Mises had always been a classical liberal. So I don’t find it particularly unlikely that Margit may have allowed one of her husband’s most well-known disciples to use his name only because she didn’t realize how radically this disciple’s views had changed.

            And are you making light of the racism allegations because you find it ridiculous to think that any of the people in question are actually racist? Or is it simply that you don’t care if they’re racist, and wouldn’t consider that to be incompatible with their purported libertarian views?

            The Mises institute offers no declaration of which social values one must follow, which is what makes them legitimate libertarians. It’s Tucker and the other left-libertarians who insist that libertarianism requires SJW stances on various culture war issues.

            I think you’re conflating two different things here, specifically which ideological views would be allowed in a libertarian social order (namely, all of them), and which ideological views are required for one to be considered libertarian in any meaningful way.

            Libertarian philosophy concedes that people have the right to be xenophobic, racist, sexist, homophobic, transphobic, or otherwise bigoted, just as it concedes that people have the right to be fascists or communists or feudalists. But the fascist and the communist and the feudalist cannot themselves be libertarians, because the ideologies they espouse are fundamentally incompatible with libertarianism. Likewise, bigots can’t be libertarians, no matter how much they might like to claim they are, no matter how much they might agree with libertarians on purely economic issues, no matter how much they happen to dislike the U.S. federal government. Bigotry itself is fundamentally incompatible with libertarianism just like communism is; they’re both nothing more than particularly nasty forms of collectivism, and thus diametrically opposed to the radical individualism at the heart of libertarian thought.

            The people who think that libertarianism is a purely economic philosophy – or worse, that it’s simply about opposing the current government – do not understand what libertarian philosophy is actually about on a fundamental level.

          • Martin says:

            LadyJane,

            The term left-libertarian has traditionally referred to libertarians who reject capitalism in favor of far-left economic systems like socialism or syndicalism, but lately I’ve seen paleo-libertarians use it to refer to right-libertarians (i.e. free-market capitalist libertarians) who are liberal or progressive on social and cultural issues

            There are pro-free market, pro-property rights libertarians who consider themselves to be left-libertarian.

          • The term left-libertarian has traditionally referred to libertarians who reject capitalism in favor of far-left economic systems like socialism or syndicalism

            In current usage, it refers to several other things as well.

          • Bigotry itself is fundamentally incompatible with libertarianism just like communism is; they’re both nothing more than particularly nasty forms of collectivism, and thus diametrically opposed to the radical individualism at the heart of libertarian thought.

            I’m not sure how you are defining bigotry. I don’t think the belief that some group–women, say, or blacks–have a very different distribution of abilities than some other group, is inconsistent with libertarianism, although the belief may not be true.

          • albatross11 says:

            I don’t see why a belief in average group differences would conflict with libertarianism. [ETA] I mean, we all already accept that there are individual differences–I’m not as smart as Terence Tao, but I’m smarter than my building’s janitor. That doesn’t mean we can’t interact together productively in a market. (Comparative advantage says I can interact productively with Terence Tao, even if he’s better at *everything* than I am.)
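            The comparative-advantage point can be illustrated with a toy calculation. All of the numbers, goods, and names below are made up for illustration; the only thing the sketch shows is the standard result that even when one party is better at producing everything, the two parties face different *opportunity costs*, so each has a comparative advantage in something.

```python
# Toy comparative-advantage sketch with invented numbers: Alice produces
# more of both goods per hour than Bob, yet their opportunity costs differ,
# so both gain from specializing and trading.
# Output per hour of work: (papers, chores)
alice = {"papers": 10, "chores": 4}
bob = {"papers": 1, "chores": 2}

def opportunity_cost(person, good, other):
    """Units of `other` forgone to produce one unit of `good`."""
    return person[other] / person[good]

# Alice gives up fewer chores per paper; Bob gives up fewer papers per chore.
print(opportunity_cost(alice, "papers", "chores"))  # 0.4 chores per paper
print(opportunity_cost(bob, "papers", "chores"))    # 2.0 chores per paper
print(opportunity_cost(alice, "chores", "papers"))  # 2.5 papers per chore
print(opportunity_cost(bob, "chores", "papers"))    # 0.5 papers per chore
```

            Since Alice’s cost per paper (0.4 chores) is below Bob’s (2.0), and Bob’s cost per chore (0.5 papers) is below Alice’s (2.5), Alice should write papers and Bob should do chores, even though Alice is absolutely better at both.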

            On the other hand, reading _The Bell Curve_ made me *way* more amenable to both programs to help the people at the bottom, and somewhat paternalistic laws and social norms to give guidance to not-very-smart people. If you’re on the bottom because of your bad choices, it’s easy to accept that; if you’re on the bottom because you rolled a 3 for your Intelligence score (thanks largely to the lousy genes and upbringing provided by your parents), that makes me a lot more sympathetic to your plight.

          • Toby Bartels says:

            @ Matt M :

            If I understand you correctly, you're now only claiming property rights for the federal government over federal land, rather than over the entire border, so I'll no longer compare your position to that of the Soviets. However, as a matter of historical interest, you might like to know that

            The Soviets violated well understood traditions of who owned what.

            is not really true. Those ‘traditions’ had been invented just a decade or so earlier, in 1906. While they didn't ‘spring forth fully formed from anarchocapitalist axioms’, they were a deliberate copy of western Europe and might fairly be said to have sprung from liberal axioms (in a broad but classical sense).

            So if you object to dekulakization because of its violation of private property rights in land, then that's coming from your advocacy of such rights (whatever your reasons for that may be), not defending a longstanding Russian tradition or Chesterton fence. (I object to dekulakization for different reasons.)

          • Edward Scizorhands says:

            Scott wrote a nice essay where he considered what the arguments from the left and right would be if they swapped sides on this specific argument:

            https://slatestarcodex.com/2014/10/16/five-case-studies-on-politicization/

            IQ differences are purely genetic, a matter of luck for which neither the individual nor his community bears any responsibility. We should be generous with welfare and assistance towards those who lost out in the genetic lottery, just as we are towards the sick and disabled. Conservatives who argue that the IQ differences are ‘man-made’ are looking to shift the blame to the victims and excuse their own bigotry towards the least privileged.

            vs.

            IQ differences are purely environmental. According to evidence, the main environmental difference among blacks and whites is culture. We therefore need to replace the multicultural free-for-all that is hurting American children with traditional American family values, and limit immigration from countries with low-IQ cultures. We need to promote monogamy by cutting support for single mothers. We should probably outlaw hip-hop music too, just in case rap is what lowers black IQ.

          • Matt M says:

            Toby,

            Just to clarify, I haven’t been arguing about the kulaks, I don’t know enough about the history there to comment intelligently.

            What I would say is that my argument about the border is essentially: “If you accept that the federal government has any legitimacy at all, then it clearly owns and/or holds sovereignty over the border, and has the right to restrict the usage thereof.”

            (If you do NOT accept that the federal government has any legitimacy at all, I think it may be the case that the lands it currently holds become a free-for-all for anyone to use, but I don’t think that is necessarily/obviously true)

            I think this argument holds, even in the analogy with the Soviet Union. If you accept that the Soviet government is legitimate, then they do, in fact, have the right to confiscate private land, because their entire system of government is sort of contingent upon being able to do that sort of thing.

            To object to the confiscation of lands by the Soviet government is to question the entire legitimacy of said government. Which I am, you know, totally in favor of. Just as I am in favor of questioning the legitimacy of the US government. But I think the left-wing position wherein the government has the authority to regulate individuals’ lives and behaviors in almost any way imaginable EXCEPT exercising control over who is allowed to physically enter the geographic borders of the nation is bizarre and incomprehensible. And it doesn’t magically become less bizarre and incomprehensible just because maybe you favor gay marriage and legalized pot and slightly lower tax-rates (which is basically the only thing that separates left-libertarians from democrats and/or republicans).

          • LadyJane says:

            @Martin, @DavidFriedman:

            There are pro-free market, pro-property rights libertarians who consider themselves to be left-libertarian.

            In current usage, it refers to several other things as well.

            Fair enough, but I’m a cranky old political theorist and I’m very pedantic about the precise meanings of terms like that. I’ve gotten into plenty of arguments with Trump opponents about how Trump is not actually a fascist, and plenty of arguments with Bernie supporters and opponents alike about how Bernie is not actually a socialist even if he erroneously claims to be.

            I find it especially annoying when political terms are redefined in vague and overly broad ways, because that just muddies the waters and leads to confusion about what exactly people are talking about, which greatly hinders political discourse. At best, it leads to people wasting a lot of time clarifying that they really mean X when they talk about Y. At worst, it leads to people shouting past each other because they don’t realize what the other person is actually talking about.

          • LadyJane says:

            @DavidFriedman, @albatross11:

            I’m not sure how you are defining bigotry. I don’t think the belief that some group–women, say, or blacks–have a very different distribution of abilities than some other group, is inconsistent with libertarianism, although the belief may not be true.

            I don’t see why a belief in average group differences would conflict with libertarianism.

            Believing that group differences exist is not incompatible with libertarian individualism. Believing that people should be judged and treated differently on the basis of their group membership – whether it takes the form of “these people are naturally less intelligent, so we shouldn’t keep trying to help them” or “these people are naturally less intelligent, so we should give them more help than everyone else” or just “these people are naturally less intelligent, so I’d prefer to not be friends with them, serve them, or hire them” – is incompatible with libertarian individualism. However, the former belief tends to make people more likely to support the latter belief.

          • Randy M says:

            Isn’t “we should keep trying to help” itself rather anti-libertarian? Unless you mean “we” as in individual charities.

          • LadyJane says:

            @Randy M: I was referring more to a certain mentality than to any particular stance on policy, though yes, “help” could entail private charity as well as government assistance.

            That said, there’s also a very important distinction to be made between “the government should reduce or cut off all social services in general” and “the government should keep providing social services to green-skinned people, but should reduce or cut off social services for blue-skinned people.” The first position is compatible with libertarianism, the second is not.

          • That said, there’s also a very important distinction to be made between “the government should reduce or cut off all social services in general” and “the government should keep providing social services to green-skinned people, but should reduce or cut off social services for blue-skinned people.” The first position is compatible with libertarianism, the second is not.

            Doesn’t that depend in part on how big you think the difference in distribution of abilities is? Do you regard treating chimpanzees differently from humans as raising the same problems?

            In making decisions under conditions of imperfect information, one normally uses proxies. Suppose the difference in distribution is large enough so that the best proxy available for some characteristic relevant to the decision is race. Is using it unlibertarian? Wrong?

            Imagine a world where the distribution is so different that the top 1% of blue people are about as smart as the bottom 1% of green people. You don’t think it would make sense, given the existence of government and laws, for race to be an input to decisions?

          • LadyJane says:

            @DavidFriedman: It’s an interesting thought experiment! If there were an inherent racial difference of that magnitude, then yes, I would say that treating the races differently would be justifiable. Though any attempt at a libertarian social order would still require some minimal degree of equality under the law, even if it’s limited to very basic things like “you can’t murder, assault, or steal from anyone, whether they’re green or blue” and “the government can’t just lock up blue people for no reason, or for exercising their right to free speech and assembly.” (Of course, you could probably come up with scenarios in which even that level of racial equality wouldn’t be viable – for instance, if there were also a race of red-skinned people that was inherently prone to violent and anti-social behavior to an extreme degree – but then we’re getting even further removed from reality.)

            The practical applications of this thought experiment are limited, though. In real life, I don’t think even the most hardcore racist could convincingly claim that “the top 1% of [race X] is only as smart as the bottom 1% of [race Y]” about any human racial groups; that is literally the level of difference between humans and gorillas. Libertarianism is a human ideology that won’t necessarily work for beings with extremely different mindsets or levels of intelligence than baseline humans.

            It’s akin to arguing that communism could work for a species of sapient insectoids that evolved to function collectively as a hive. Well, sure, it could, but what does that prove about us?

          • albatross11 says:

            Libertarianism is consistent with freedom of association–that is, allowing private actors to discriminate at will. It’s hard for me to imagine a libertarian government imposing or maintaining some kind of racial caste system, though.

          • Matt M says:

            It’s hard for me to imagine a libertarian government imposing or maintaining some kind of racial caste system, though.

            Provided they could work out a reasonable way of entry and exit, there could easily be a libertarian racist society. Hell, even a libertarian communist society.

            Suggesting that libertarians can’t be racist is akin to suggesting that libertarians can’t be into BDSM or whatever (because hitting people is aggression, dontcha know). But if people consent, it’s not aggression. My setting up a whites-only enclave does not violate the rights of any blacks, because they have no right to live on my property in the first place.

          • albatross11 says:

            You’re right–libertarian government is 100% compatible with a lot of private discrimination, including all-white enclaves. But I wouldn’t think of a government that imposed some kind of apartheid-like system as libertarian.

          • It’s akin to arguing that communism could work for a species of sapient insectoids that evolved to function collectively as a hive.

            Done by a good economist, that could make an interesting sf background. They would still face the coordination problem, and would probably have to find some decentralized structure, perhaps along market socialist lines, to deal with it. But they might not face the problem of actors with inconsistent objectives, or at least not nearly as much of it as we would.

            There is a Kipling poem that describes a (fictional) attempt by the Kaiser to persuade the working men of the world to adopt some sort of socialist economy, where everyone works together for the common good. He fails because of individual self-interest problems, represented in the poem in terms of men wanting to court, marry, and jointly prosper with women. The poem was written in 1890 and shows no evidence that Kipling had thought of the coordination problems that socialism faces even with altruistic participants.

          • Martin says:

            Matt M,

            And it doesn’t magically become less bizarre and incomprehensible just because maybe you favor gay marriage and legalized pot and slightly lower tax-rates (which is basically the only thing that separates left-libertarians from democrats and/or republicans)

            I’m really curious to know which “left-libertarians” you’re referring to here.

      • Deiseach says:

        I have no idea what my official IQ might be, but I do know that it’s not near high enough 🙂

      • Thegnskald says:

        The average commenter is between 110 and 125, if my usually good intuition about intelligence is right.

        I’d guess there is some conflation happening between childhood and adult IQs. My childhood IQ was at least 80 points higher than my adult IQ, to give some indication of how different the measurements are.

        ETA:

        Also: Us really smart people are pretty freaking dumb, too. DavidFriedman, for example, reminds me with every post how little I know about economics, an area I think I am about average on for a commenter here. I have been deliberately ignorant at him once or twice just to see what gems of insight he’d provide. Most of us know a lot about something, and the aggregate is far more intimidating than the average on any given subject.

        • albatross11 says:

          Is there research somewhere showing how well you can do at estimating IQs based on seeing writing samples? This would only work with a fair bit of range restriction (comparable education, native speakers, similar age, etc.), but you can kind-of imagine it working. But I’m skeptical about how accurate this would be….

        • alwhite says:

          An 80-point difference? What scale was that? On the typical scale (avg 100, SD = 15) [the Wechsler scale] a change of 80 is a severe-head-trauma type of change.

          • Brad says:

            Childhood IQs aren’t based on deviation. One or the other should really have a different name.

          • alwhite says:

            @Brad

            I don’t think that’s correct. The Woodcock-Johnson is used for kids from 2 to 14 and has an average of 100 and an S.D. of 15. Again, what scale are you referring to?

          • I don’t know what scale he is referring to. I thought child IQ was defined as ratio of intellectual age to biological age, so an eight-year-old who did as well on the test as the average twelve-year-old would have an IQ of 150. Am I mistaken? Is that an earlier definition that has now been abandoned?

          • alwhite says:

            @DavidFriedman

            I think you’re partially correct. There is an age/grade equivalence that is given, but it doesn’t appear that the equivalence is the same as the score. Here’s an example.

          • Viliam says:

            I thought child IQ was defined as ratio of intellectual age to biological age, so an eight-year-old who did as well on the test as the average twelve-year-old would have an IQ of 150. Am I mistaken? Is that an earlier definition that has now been abandoned?

            Yes, it’s the old abandoned definition.

            The problem with the old definition was that you could apply it to kids, but not to adults. A smart 8-year-old can be compared to an average 12-year-old, but what age would you compare a 30-year-old Einstein to?

            So the new definition is “smarter than p% of people of the same age”, mathematically adjusted to a bell curve whose mean and sigma were chosen to make the new numbers for kids as similar as possible to the old numbers. That is, the mean is 100, and the sigma is… depending on whom you ask, either 15 or 16 or 20.

            To be sure, whenever you talk about IQ, say explicitly what sigma you use. (For example, when Mensa says “IQ 130”, they mean sigma = 15.) I think in Europe sigma = 15 is typically used; I’m not sure about the rest of the world.
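            The two definitions discussed above can be sketched concretely. This is a minimal illustration, not a real scoring procedure: actual tests norm against empirical samples, and the percentile and age values below are invented for the example. It shows why the old ratio definition and the modern deviation definition give different numbers, and why the choice of sigma matters.

```python
from statistics import NormalDist

def ratio_iq(mental_age, chronological_age):
    """Old 'ratio' definition: mental age / chronological age, times 100."""
    return 100 * mental_age / chronological_age

def deviation_iq(percentile, sigma=15):
    """Modern 'deviation' definition: a percentile rank mapped onto a
    bell curve with mean 100 and the chosen standard deviation."""
    z = NormalDist().inv_cdf(percentile)  # percentile -> z-score
    return 100 + sigma * z

# The eight-year-old who performs like an average twelve-year-old:
print(ratio_iq(12, 8))  # 150.0

# The same percentile rank (here, 98th) reads differently under each sigma:
print(round(deviation_iq(0.98, sigma=15), 1))  # 130.8
print(round(deviation_iq(0.98, sigma=16), 1))  # 132.9
print(round(deviation_iq(0.98, sigma=20), 1))  # 141.1
```

            The last three lines are the point about Mensa-style claims: “IQ 130” names a different percentile depending on which sigma the test uses.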

      • BBA says:

        Personally I have a very high tested IQ [MY USUAL DIGRESSION QUESTIONING VALUE OF IQ REDACTED] and I have full confidence that I’m bringing the comment quality down everywhere I post. Here I just feel like I’m bringing it down in a different way. So don’t feel too bad about it, doods.

      • I can keep up except when it’s anything involving something above rudimentary maths, then I’m doomed.

      • Baeraad says:

        Well, I don’t agree with the local conventional wisdom that IQ is super-important and that maximising it is the key to just about everything, so I’m feeling less guilty about it, but… yes, I often feel acutely aware that I’m below average here even in the areas where I’m strong (non-verbal logic, somewhere around 135), and a bottomless pit of stupidity in the areas where I’m weak (intellectual multi-tasking, somewhere around 90).

      • moscanarius says:

        Yes, but I comment anyway. Just can’t help it.

    • Nancy Lebovitz says:

      I feel as though I’m not as smart– and certainly not as diligent– as some of the people here, but I don’t feel as though I’m making the place worse.

      • MawBTS says:

        Being dumb, we would think that, wouldn’t we?

      • Nancy Lebovitz says:

        So, why do I not believe I’m making the place worse? It’s partly that it’s just not a conclusion I jump to, but also I don’t believe I’m posting enough to make a big difference directly and I’m not combative enough to get other people to make the place worse.

        I suppose I could be making the place *slightly* worse but I like my comments too much to believe that.

        • Iain says:

          Personally, I suspect that the reason that you don’t believe that you’re making the place worse is that you are a consistently thoughtful poster who actively makes the place better.

          • albatross11 says:

            +1

            Nancy, I’ve seen you as a participant in a couple of online spaces (here and Making Light[1]), and you’ve consistently added value to the conversations and communities you’ve participated in.

            [1] Were you around on alt.callahans for awhile, too?

        • Nancy Lebovitz says:

          Iain and albatross11, thanks very much.

          I probably posted in alt.callahans, but I was more active in rec.arts.sf.written and rec.arts.sf.fandom. I was also a fairly frequent poster at Balko’s The Agitator.

        • Bugmaster says:

          I’ve long given up on trying to make forums better. I’ll never be smart enough, nor ideologically pure enough, to elevate any kind of an online forum, on any topic. The best I can do is voice my thoughts as honestly and clearly as I’m able. I think this should be enough for most people.

          • HeelBearCub says:

            That’s what makes comment sections better.

            This comment section is not a listserv of academics discussing topics relevant to their field of expertise. I’m not sure why people seem to think that is what is required.

    • J Mann says:

      The group needs us white belts to keep growing – the challenge is learning when we recognize that someone’s comments are better, or when we fall on our faces.

    • Nancy Lebovitz says:

      For those who are concerned that they’re lowering the average comment quality, do you have specific concerns or is it a generalized worry?

  38. romeostevens says:

    In much the same way that scientific methods underspecify how you should search for and prioritize among promising classes of hypotheses, I fear this method does little to tell you which areas are promising to pay attention to (scope sensitivity, etc.). I.e., if one can sharpen their rationality chops on anything, are some things much better than others, either for leveling the skill more quickly or for useful object-level results?

    • Deiseach says:

      The trouble is, the methods of rationality/rationalism being showcased rely very heavily on mathematical intuition/ability. So those like me who haven’t that are doomed to forever be irrational.

      On the other hand, even innumerate mouth-breathers like me can have a good time in the comments here! And there is generally such a wide range of interesting topics and people with their own areas of expertise, that don’t require you to be able to Do Hard Sums, that there really is something for everyone.

  39. Montfort says:

    I can sort of buy the idea of the comments as a dojo – but there’s no personalized instruction or membership fees or belts or anything. The learning here is extremely self-directed, success is hard to judge objectively, and commenters come and go all the time. Still, even just an empty building with some mats where people can show up and practice is something.

    That is, if you wanted to design a place to practice “rationality” skills from the ground up, I’m not sure it would look like this. But the blog and comment section can serve multiple purposes at once.

    • Dominik Tujmer says:

      Agreed. You would probably have a curriculum, a timeline, tests, a system to update the curriculum depending on the actual results, different teachers for different things. But learning rationality is still a very young and very self-directed pursuit, so it’s understandable.

    • Nancy Lebovitz says:

      Not all martial arts have belts. They’re (mostly?) a Japanese thing.

      https://en.wikipedia.org/wiki/Black_belt_(martial_arts)

    • Ninety-Three says:

      I’m not sure dojo is the right metaphor here. SSC is more like a group of people who play pickup games of basketball in the evenings. They’re mostly showing up because it’s fun, and hey, maybe they’ll get a bit better at basketball.

    • Paul Zrimsek says:

      We are building a fighting force of extraordinary magnitude. We forge our spirits in the tradition of our ancestors. You have our gratitude.

    • HeelBearCub says:

      success is hard to judge objectively

      This is the entire problem, and the roots of the rationality movement both obfuscate and exacerbate said problem.

      Think for a second about practicing. Whether it’s a martial art or math, the practice depends on being able to judge whether your attempt is correct, and on being able to repeat the attempt.

      Rationality was predicated not on the idea that you could properly calculate Bayesian statistical probabilities, but on the idea that you could arrive at the correct answer to novel problems. It doesn’t concern itself with settled issues, and therefore has no way to generate replicable practice. Otherwise, there would just be an esoteric course of academic study combining logic, statistics, and rhetoric.

  40. reasoned argumentation says:

    The author of this piece can’t imagine any arguments that would convince him that communism was a bad idea before every communist state went off the rails and slaughtered millions.

    He also stated that he can’t even think of any rules for rationality that would have improved his reasoning here.

    If this is the best “rationality” can do, you’re doomed.

    EDIT:

    By the way, if you want to see the exact problem you’re having it’s encapsulated perfectly here:

    https://slatestarcodex.com/2018/07/03/ssc-journal-club-dissolving-the-fermi-paradox/#comment-645249

    You make a post about the Fermi paradox and the Drake equation. Steve Sailer makes an observation that makes progressives uncomfortable and that’s directly related to the discussion at hand – namely, that a giant factor in predicting mass behavior (such as developing technology that would be remotely detectable) is systematically overlooked because it undermines progressive orthodoxy, and that progressives responded to people who pointed out this factor not by disagreeing but by calling them heretics. You then do the exact same thing by banning him for his comment!

    There’s got to be a name for that type of error.

    EDIT 2: It appears I’m locked out of replies. Oh well.

    • thevoiceofthevoid says:

      I don’t recall anyone ever being called a “heretic” for their opinions on the possible implications or shortcomings of the Drake equation, nor have I seen any serious discussion of it having anything to do with “progressive orthodoxy” in any way. Seems to me like Steve Sailer was shoehorning in an argument about Bush’s housing policy that, whether or not it was correct, looks like less of an attempt at understanding the issue of the Fermi paradox and more like trying to start a “culture war”-ish argument.

      • reasoned argumentation says:

        There’s a missing parameter in the Drake equation for the second human speciation event.

        An intelligent enough species has to evolve, and then some of its members have to move to novel environments, because wherever the initially intelligent species arose is going to have predators, prey, and microorganisms that all co-evolved and adapted to that species, which (on the margin) puts survival pressure on immune systems, bone density, and running speed rather than intelligence. To break out of this trap, the species needs to move to a new, isolated environment that then selects for enough intelligence to develop more advanced technology.

        This adds another constraint that needs to be modeled – that some human groups don’t have the intelligence necessary to develop remotely detectable technology. Of course – as Steve pointed out – it’s considered immoral to examine the actual implications of different human groups having different average levels of intelligence. The reaction to the hint wasn’t to ask for clarification or even to consider it; it was to go berserk about “culture war”.

        thevoiceofthevoid

        “I don’t recall anyone ever being called a “heretic” for their opinions on the possible implications or shortcomings of the Drake equation, nor have I seen any serious discussion of it having anything to do with “progressive orthodoxy” in any way.”

        How about now? Have you now read an argument about the Drake equation that violates progressive orthodoxy in such a way that it can’t be considered? If you haven’t then why not bring this argument up the next time someone discusses the Drake Equation?
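        For concreteness: the Drake equation is just a product of factors, so any proposed extra parameter multiplies in as one more fraction and scales the estimate down proportionally. A minimal sketch of that arithmetic – the parameter values below are illustrative, not taken from the thread or from any published estimate:

        ```python
        from math import prod

        def drake(R_star, f_p, n_e, f_l, f_i, f_c, L, extra_factors=()):
            """Estimate N = R* · fp · ne · fl · fi · fc · L.

            extra_factors lets you multiply in additional proposed fractions –
            a hypothetical extension, not part of the standard equation.
            """
            return prod((R_star, f_p, n_e, f_l, f_i, f_c, L, *extra_factors))

        # Illustrative (made-up) values:
        baseline = drake(10, 0.5, 2, 1, 0.01, 0.1, 1000)

        # Any added fraction between 0 and 1 shrinks N by exactly that ratio:
        with_extra = drake(10, 0.5, 2, 1, 0.01, 0.1, 1000, extra_factors=(0.5,))
        ```

        Because the equation is a pure product, adding a factor of 0.5 halves the output regardless of what the other terms are – which is why debates about the equation tend to reduce to debates about which factors belong in it at all.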

        • Evan Þ says:

          If you haven’t then why not bring this argument up the next time someone discusses the Drake Equation?

          Because it isn’t an argument. It’s an attempted analogy which – even if correct – would be far more distracting than helpful.

          • reasoned argumentation says:

            That you think it’s a “distraction” is a giant sign that you’re emotionally committed to a false factual belief.

            Steve’s comment was a perfect example of how what you find “distracting” is very very important in a highly related area – how differences in intellectual traits between different human groups affect behavior.

          • Evan Þ says:

            Right now, I’m talking about the broader culture, which – I’m sure we can both agree – is emotionally committed to a particular belief on that subject. To them, Steve’s analogy was far more distracting than helpful. (And, I don’t see how being emotionally committed to a belief is a sign that belief is false – it’s a sign it isn’t consciously based on logic, but the culture as a whole rarely consciously bases beliefs on logic.)

          • thevoiceofthevoid says:

            @reasoned argumentation
            I’m not sure I agree with you on the implications of certain statistical trends in human intelligence, but I don’t find your position emotionally repulsive to the point that it should never be mentioned and any research that might partially support it should be stopped. (I suspect you might be going a bit too far and/or reinforcing some prejudices of your own, but that’s beside the point.)
            In any case, my own beliefs on the subject are uncertain and I’m definitely not emotionally committed to them. Regardless, I still think Steve’s comment was genuinely distracting from the subject at hand. I disagree that Bush-era housing policy is at all related, let alone “highly related”, to the Fermi paradox. I don’t count “someone came to the wrong conclusion because they overlooked a factor”, because that applies to virtually every third event in human history.
            Honestly, it’s common sense: Any mention of [banned term]-sounding things will derail a thread into a culture war faster than you can say “bell curve.”

          • Luke the CIA Stooge says:

            Any emotional reaction to protect a belief is a MASSIVE red flag that the belief is FALSE.
            The set of potential beliefs that one could hold on any subject is HUGE!

            The fact that, out of the thousands of potential beliefs you could hold about metaphysics, or God, or Morality, or the inherent justness of the universe, or the relative merits of man, you just so happen to hold the ONE belief that is most pleasing to your moral and aesthetic sensibilities is a sure sign that you are not assessing the haystack of potential worlds and isolating the needle of the one true one, but instead are grabbing the straw you like best.

            THE FACT THAT YOU LIKE AN IDEA IS STRONG EVIDENCE IT IS FALSE!!!
            Whereas the fact that you hate an idea and find it threatening is a sure sign that it is the most likely of potential ideas to be true.

          • albatross11 says:

            Note that at the same time as Steve’s threadjacking[1] comment, there has been a discussion of affirmative action in university admissions going on in another thread. In that thread, test score and academic performance differences by race have been mentioned several times, without causing any kind of mass revulsion or shutdown of the discussion.

            [1] That’s how it looked to me, too. Drake’s equation is about universal things that might go wrong with an industrial civilization, thus explaining why it didn’t expand out to the stars so we could see it; it’s really hard to see how the push for less strict lending guidelines to inflate the housing bubble under the Bush administration informs such a discussion.

          • albatross11 says:

            Luke:

            I think your claim proves too much. Often, we first recognize what reality looks like and then can find beauty or meaning in it.

            There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

            I don’t think Darwin’s ability to find beauty and grandeur in the theory of evolution makes the theory less likely to be true.

          • Paul Zrimsek says:

            If you have an emotional reaction to protect a belief, I think that’s likelier to be a fact about yourself than a fact about the belief. In all likelihood, if you’d happened to form the opposite belief instead, you’d have an emotional reaction to protect that.

          • Nancy Lebovitz says: