OT16: Avada Threadavra!

This is the semimonthly open thread. Post about anything you want, ask random questions, whatever. Also:

1. Thanks to everyone who attended the meetups earlier this month. It was nice to finally get to meet some of you. For those of you at the Google meetup, Jesse has created a relevant internal mailing list at slatestarcodex-discuss.

2. Relevant to the title: Harry Potter and the Methods of Rationality is ending Saturday. For those of you who aren’t familiar, this is a Harry Potter fanfiction by Eliezer Yudkowsky, the guy who taught me most of what I know about rationality and writing. Most people either love it or hate it in a hilarious over-the-top way; since both are interesting, I can highly recommend it to people who like that sort of thing. See also the Vice article about it. You can find the story at HPMOR.com.

3. And for those of you who are familiar, you might be interested in the HPMOR wrap parties going on Saturday in various cities around the world. Check the map here to see if there’s one near you, or get more information on this page.

4. I’m on call Saturday morning, but if it’s not too hectic I’m going to try to make the Detroit party, which looks like it’s about 20 people strong now. If anyone else in Michigan is interested, come to 1481-A Wordsworth, Ferndale, Michigan 48220 at 3 PM and hopefully I’ll see you there.

5. If you are a high school student, you might want to think about applying to SPARC, a free camp for talented children focusing on game theory, cognitive science, and statistics. Many of the instructors are friends of mine and/or Slate Star Codex readers, and one of the instructors is the aforementioned Eliezer Yudkowsky. See their website for more details and the application.

6. Comment of the week is this real-life trolley problem and the British government’s weird response to it.

7. A request for legal advice, but it’s long enough that I’ll stick it in the comments rather than write it all out here.


689 Responses to OT16: Avada Threadavra!

  1. Tom Finnie says:

    There’s an even better British real-life trolley problem that’s almost unknown outside train-enthusiast circles, since the only report of it is buried on page 17 of a 65-page incident report.

    A broken-down maintenance locomotive (RGU), with two workers on board and its brakes disabled, was being towed uphill when it broke away. It was heading towards a working section of track with running passenger trains.

    The service manager then reviewed the situation. He had no means of knowing how far the RGU would roll. He concluded that routing the RGU onto the Charing Cross branch gave the best opportunity to avoid a collision with a passenger train. It also provided two opportunities for trying to derail the RGU and the certainty that the RGU could be stopped at Kennington. The opportunities to derail the RGU were provided by trailing points which could be set against the RGU’s route at Mornington Crescent and at Charing Cross. Routing the RGU into a reversing siding at Kennington provided the opportunity to stop the RGU by sending it towards a set of buffer stops (figure 1).

    The service manager knew that there were staff on the RGU when it started to run away. When he was deciding what to do, he did not know that they had jumped off at Highgate. He appreciated the possible consequences for anyone on the RGU if it was derailed or ran into buffer stops. He decided that a collision between the RGU and a passenger train was likely to have worse consequences.

  2. Pku says:

    An interesting example of Schelling points:
    My local Swing dancing club has a free weekly dance, and a monthly dance that costs $5-10 to get in. The monthly dance is slightly longer (though most people don’t stay the whole time anyway), but otherwise basically indistinguishable from the weekly dance (which is free, and held at the same place by the same people).
    The monthly dance always has a lot more people come – about 3-4 times as many. This is probably partly because a lot of people like dancing enough to go once a month but not once a week, and partly because people only want to go when there are enough other people, which only happens once a month. This Schelling point effect is apparently enough to counter the $10 cost most people pay to get in.
    The idea of measuring how much people are willing to pay for a Schelling point is interesting – does anyone have a good example that measures the limits of this effect?

    • Douglas Knight says:

      That’s a great example.

      It reminds me of the concept of “Paris Metro Price Discrimination,” where there are two classes of cars, identical except in price. People pay simply for less crowding, which exists solely because of the different prices. But that’s a simpler example than paying for a Schelling point.

  3. Anonymous says:

    American Psychological Association demolishes the gender-difference myth:

    http://www.apa.org/research/action/difference.aspx

      • Douglas Knight says:

        I don’t think that link is useful or relevant. Hyde asks the right first questions and gives the right answers to them. People claiming to debunk her are just wrong. She fails to ask any further questions and encourages the publication of articles like this one that claim to debunk other beliefs. It is that jump that is nonsense, but the main problem with the jump is that it is vague, not that it is wrong. There are no contrary pairs of studies. There are only people who want to address different questions (and people too clueless to even ask questions).

    • Douglas Knight says:

      That is a really awful essay. Try the original paper.

      Scott posted another study in a link dump and there’s some discussion there, but it’s buried in all the discussion of the other links.

      Yes, most personality traits have small sex differences, maybe d=0.3. But what do you mean by sex differences or sex similarities? Do you care about particular personality traits? What the second paper says is that by using all personality traits, it is easy to distinguish sex. In that sense, sex differences exist. Which is necessary but not sufficient for them to exist in places that we care about. The ultimate question is: for a particular purpose, do the relevant personality traits generally point in the same direction or different directions?

      The second paper also points to several sub-traits of the big five that have much larger differences than the big five. This is rather suspicious. If sensitivity (tender-mindedness) has d=2.3 and warmth has d=0.9, why doesn’t PCA produce a first factor of sex?

      • Anthony says:

        why doesn’t PCA produce a first factor of sex?

        How do you know it doesn’t?

        That’s a somewhat serious question – assuming that the creators of the OCEAN (and HEXACO) models did use some sort of PCA to create or reinforce their models, did they in fact discover that the first principal component was sex, then go on to describe the second through sixth (or seventh, for HEXACO) principal components for their model?

        I don’t have anywhere nearly enough background in psychometrics to know where to begin looking up that question. (Other than to check Wikipedia, of course.)
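
        One crude way to poke at the question without real data is a toy simulation (a sketch in Python with made-up numbers, not any actual personality dataset): simulate trait scores using roughly the d values quoted above, run PCA, and see how strongly each component correlates with sex. In this toy setup the traits are independent within each sex, so the sex shift is the only source of correlation between traits and the first component does track sex; real Big Five items correlate for plenty of other reasons, which is exactly why the question isn’t trivial.

        ```python
        # Toy sketch, simulated data only: do the principal components of
        # trait scores line up with sex when a few traits have large d values?
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_per_group, n_traits = 500, 15

        # Most traits get a small difference (d ~ 0.3); the first two get the
        # larger sub-trait-sized values mentioned above (purely illustrative).
        d = np.full(n_traits, 0.3)
        d[:2] = [2.3, 0.9]  # e.g. tender-mindedness, warmth

        group_a = rng.normal(loc=+d / 2, scale=1.0, size=(n_per_group, n_traits))
        group_b = rng.normal(loc=-d / 2, scale=1.0, size=(n_per_group, n_traits))
        X = np.vstack([group_a, group_b])
        sex = np.array([1] * n_per_group + [0] * n_per_group)

        # PCA on standardized scores; print each component's correlation with sex.
        scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))
        for i in range(scores.shape[1]):
            print(f"PC{i + 1}: r with sex = {np.corrcoef(scores[:, i], sex)[0, 1]:+.2f}")
        ```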

  4. Mike Owens says:

    This article on sex differences has been influential in some circles. I was wondering if you could check it out.

    “Sex Redefined
    The idea of two sexes is simplistic. Biologists now think there is a wider spectrum than that.” http://www.nature.com/news/sex-redefined-1.16943

  5. Sophie says:

    Does anyone remember that post Scott wrote about how much of political argument was just noise and angry yelling? I think he took meaningless connector words out of things people had written and left what became obviously nonsensical non-arguments. If it exists, could someone maybe give me the link?

  6. Landstander says:

    I hope this doesn’t count as race since that’s not my point, I only mean to refer to politicization aspects and not the substance. Sincere apologies if it does. Anyway:
    http://www.washingtonpost.com/blogs/post-partisan/wp/2015/03/16/lesson-learned-from-the-shooting-of-michael-brown/
    http://www.nationalreview.com/corner/415349/ferguson-report-and-right-jason-lee-steorts

    Two articles from either “side” on Ferguson, both encouraging their tribes to look a bit past the noise and to focus on discussion of more substantive issues.

    Between this and a whole lot of anti-purity-test thinkpieces in the past week or two, I feel like there have been some encouraging signs of anti-tribalism lately. Maybe we’re moving in a more positive direction? Dare I dream?

  7. onyomi says:

    A small study shows that practicing more is not as important as practicing correctly, even if that means slowing down, breaking things up, etc:

    http://www.creativitypost.com/psychology/8_things_top_practicers_do_differently

    Any thoughts on this? My initial reaction was just “well, so people who were good at things the first time were good at things the second time and giving the bad people more practice time just doesn’t help as much as we’d hope,” but if they’re right about which strategies might work for just about everyone it would have pretty big implications for teaching. Namely, it seems like it’s more important to get new material right the first time, even if that means slowing down and spending more time on each bit of new material.

    I’m not entirely sure how this tallies with my own experience, because, on the one hand, it’s definitely true that bad habits are harder to break than good habits to instill. On the other, learning and forgetting and learning and forgetting seems to be an inevitable part of all learning processes for me, as does “jumping in and picking up necessary basics as I go” to overcome initial hurdles of boredom.

    • Anthony says:

      I think there’s some level of how “right” you have to get something for it to matter. Having learned to fence, and learned several dance styles, it’s possible to practice certain moves getting one part right while being somewhat sloppy on another aspect of those moves, then getting the other aspect right later. (Footwork versus body movement in many sorts of dancing, for example.) However, you can’t be doing the other part really *wrong*, or you will be setting yourself back.

    • zz says:

      Having studied cello for quite some time, nothing here really surprised me. Other thoughts:

      3. Practice was thoughtful, as evidenced by silent pauses while looking at the music, singing/humming, making notes on the page, or expressing verbal “ah-ha”s.

      This sort of “reflection” is a practice strategy recommended in Make It Stick, a book by a pair of cognitive scientists who’d just completed a decade of research on effective learning (recommended).

      6. The precise location and source of each error was identified accurately, rehearsed, and corrected.
      7. Tempo of individual performance trials was varied systematically; logically understandable changes in tempo occurred between trials (e.g. slowed things down to get tricky sections correct).
      8. Target passages were repeated until the error was corrected and the passage was stabilized, as evidenced by the error’s absence in subsequent trials.

      The article claims that the top three pianists used all of these strategies while the lesser pianists barely used them. I find this both surprising, because every student-musician I’ve known who was serious about music uses them, and not surprising, because if you’re not using them, no wonder you’re not at the top.

      Then again, I live in an unusually good musical community. My friend is the son of a professional conductor, has played Yo-Yo Ma’s cello several times, and has studied under the best teachers in the world at the country’s best music programs. I have a fraction of his talent, growing-up-in-a-musical-household, and practice regimen, and I’ve still been able to work with members of the Philadelphia Orchestra.

      Notice how close Texas (where the study in question was conducted) is to any of the Big Five orchestras. It’s extremely plausible that the study subjects—advanced undergraduate and graduate piano and piano pedagogy majors—are genuinely unaware of the best practice strategies that are completely obvious to someone relatively talentless like me because they don’t come from a culture that supports it*.

      There’s probably a pre-selection effect here: if you’re a music student and know how to practice effectively, you probably aren’t going to school in Texas, and if you’re a good music teacher and keep up with what works the best and are good at changing your mind in response to new evidence and maybe even read some cog psy literature, you probably aren’t teaching in Texas.

      —-

      *If you doubt that different regional cultures can have such drastic effects, just look briefly at how entirely Korea dominates The Rest of the World in Starcraft 2. I can’t dig up a really great primary reference, but Koreans are so much better than “foreigners” (as they’re called) that there are essentially two leagues: one for Koreans and one for everyone else, so the non-Koreans aren’t simply beaten by Koreans. It’s a bit like men’s and women’s leagues in physical sports: the best non-Koreans can maybe compete with second-string Koreans, but it’s truly remarkable/unheard of for a foreigner to beat an A-team Korean. For instance, Hydra, a Korean progamer somewhere between A- and B-team, was recently defeated by a foreigner, after going something like 60-0 against foreigners.

      …There’s a point here, and it’s that Koreans beat the snot out of non-Koreans and that the best explanation is Korean culture: they treat the best Starcraft players like Americans treat the best baseballers (in all honesty, they inspire more awe—when was the last time a baseballer was painted on an airplane?), and they’ve figured out how to practice Starcraft most effectively. Foreigners don’t get the same coaching quality, high-level practice partners, or practice regimen, and thus get the snot beaten out of them.

      So, yes: if a game like Starcraft, which is inherently internet-centric, can have the best players in different regions doing different things because of cultural differences, then we shouldn’t be surprised to see something similar in classical music, which doesn’t mesh well with the internet.

    • FullMeta_Rationalist says:

      Re: the study — meh. For one thing, the study doesn’t seem to have had a control group. They simply sat 17 college musicians down and drew correlations.

      Re: the conclusions — this absolutely agrees with my personal experience. Like a coach of mine used to say,

      Practice doesn’t make perfect;
      practice makes permanent.

      The underlying idea was that it’s really easy for us athletes to rationalize “Yeah, my practice runs are sloppy, but I’ll have perfect form on race-day.” This is lazy and false. If we practice with bad habits, then we’ll race with bad habits.

      The goal is for us to train such that good form comes second-nature. This way, when we’re chugging along on an empty fuel-tank, we’ll be able to maintain good form without thinking about it. If it’s not second-nature, we won’t have enough will-power to maintain good form on race-day, because we’ll be too busy fighting through the fatigue.

      As a hobbyist musician, I can tell you that slowing down difficult passages is considered best practice. Isolate one passage; slow down (enough to play without mistakes); repeat the passage; then gradually speed up repetitions as the piece becomes more comfortable. Like zz said, pretty standard.

      Repeating decelerated passages builds what’s called muscle memory. If you play incorrectly, your muscles will memorize the wrong movements. Better to play the right notes slowly than to play the wrong notes quickly.

      Also, I remember one high-school band teacher who differentiated himself from the average teacher by (among other ways) making us practice a single difficult passage for an entire class period. I will be the first to vouch that it worked miracles. The typical band teacher will use class-time to simply run through the sheet music repeatedly from top to bottom and say handwavy things like “um, maybe we could be more dynamic” (this is a generalization).

      (He was my second favorite teacher because he proved himself immensely classy and competent. I had to delete several paragraphs of merely-tangential fanboying. In lieu, I will note that our music program started winning lots of awards under his stewardship.)

      On the other, learning and forgetting and learning and forgetting seems to be an inevitable part of all learning processes for me, as does “jumping in and picking up necessary basics as I go” to overcome initial hurdles of boredom.

      Well, yeah. If you’re just learning music for fun, go ahead and play your favorite songs without laying down the fundamentals. I know lots of self-taught friends who’ve played for years without ever learning what a scale is. If you’re playing music as a hobby, it’s perfectly acceptable to just sight-read everything and make small mistakes in front of an audience. That’s generally what I do nowadays. But if you want to improve your musicianship and be at the top of your game, then you gotta practice the boring stuff.

      Do you think Kobe Bryant spends more time practicing alley-oops, or boring lay-ups? The lay-ups, of course. For music, the analog is practicing scales. Practicing scales is super-super boring. Hobbyists generally don’t practice scales. But for those who want to shred like Van Halen, starting each practice session with scales will pay off in spades in the long run.

      And like I said before: good habits are important. Consider weight-lifting. Lifting with good form is a must. If your torso lunges while practicing curls, then you’ll finish your sets more quickly – but your biceps won’t gain as much. Similarly, if I practice lazily (with bad posture, or without a strong diaphragm, or without a tight embouchure, or with poor articulation, or without stagger breathing, or without intonation, or without a strong attack, or monotonously, etc.) then I’m not going to improve nearly as quickly.

    • ppa says:

      I agree with the other commenters that this isn’t surprising at all, even though the study isn’t really powerful enough to justify its conclusions without a musician’s prior. That seems to happen a lot with this kind of research. These experiments don’t really have the scope to powerfully test hypotheses in realistic settings (that’s never easy, and doubly so here), but they still seem to point toward conclusions well known to expert folk wisdom or to more general learning research (deliberate practice, interleaved practice, spaced repetition, testing effect, metacognition, scaffolding). Your best bet is still having a good teacher who can communicate these things; barring that, find some good expert case studies (I like The Practice of Practising) and translate the most robust research on learning in general. (Even then it’s hard for most people to actually put this stuff into practice — see link in name.)

  8. Princess Stargirl says:

    http://www.patheos.com/blogs/hallq/2015/03/harry-potter-methods-rationality-review/

    I kind of like HPMOR but this review (by HallQ) seems to point to real problems. Also the links in it to other articles (by HallQ) are good.

    • Susebron says:

      I personally disliked the conclusion as well. It was too gimmicky. Not because Voldemort was stupid – he wasn’t as stupid as it might seem – but because the way Harry solved the final problem was just… boring, somehow. It didn’t have enough action to make up for the gimmickiness, and it didn’t have anything to do with the larger narrative.

    • damn… the entire series is 661,637 words

      that easily makes it one of the longest contiguous novels ever http://en.wikipedia.org/wiki/List_of_longest_novels

    • houseboatonstyx says:

      From the review:

      Cbvagvat bhg gur vzcynhfvovyvgvrf bs cerivbhf vgrengvbaf bs lbhe senapuvfr, ohg fgvyy gelvat gb tvir lbhe nhqvrapr gur traer-svpgvba rkcrevrapr gurl jnag, whfg zrnaf lbh’yy raq hc snyyvat onpx ba irefvbaf bs gubfr vzcynhfvovyvgvrf va gur raq [….]

      Pratchett might have bit the bullet and raised this meta to a meta meta, probably turning a whole level inside-out. Ref _Witches Abroad_, misc, and especially what tvtropes calls Traer Fniil in _Guards! Guards!_

  9. FullMeta_Rationalist says:

    Happy Pi Day.

    03/14/15

    • Peter says:

      Vi Hart has something to say about that: https://www.youtube.com/watch?v=5iUh_CSjaSw

      • FullMeta_Rationalist says:

        I’m familiar with the Tau Manifesto. This just gives us another reason to bake more pies in June. 😀

    • Franz_Panzer says:

      Not to be insulting, but I have come to think about this day as Laugh-At-Americans-Because-They-Can’t-Even-Put-Dates-In-A-Sensible-Order-Day.

      Not as catchy, I’ll admit.

      • Peter says:

        That said, even DD/MM/YY or DD/MM/YYYY is still somewhat middle-endian – for a non-middle-endian date format you want YYYY-MM-DD, but I don’t tend to go for that – except in filenames…

        • Irrelevant says:

          I use YYYY-MM-DD almost exclusively, in hope that exposure will prompt better filenaming.

          • Airgap says:

            ISO! ISO IS THE STANDARD!

          • FullMeta_Rationalist says:

            For the record, I do use the YYYY-MM-DD format whenever I can, both because of the ISO standard and because the digits run in decreasing order of significance. Not that I’m keeping track, but I’m the only American I know of who does this (in the hopes that others will catch on to the format’s sheer correctness). But in the meantime …

        • Pete says:

          I’m currently going through the Canadian immigration process, where you’ll be happy to learn all dates must be entered in the form YYYY-MM-DD.

        • Anthony says:

          DD/MM/YYYY is not middle-endian. DD/MM/YY isn’t, either, it’s just suffering from lossy compression.

          • Peter says:

            The trouble with DD/MM/YYYY is… if I label the most significant digit as 8 and the least as 1, then 21/43/8765, with the ends (1 and 8) in the middle. OK, with byte orders, you don’t worry about ordering within the byte (is it even meaningful to talk about ordering within a byte in the same way as ordering between bytes?), so treat DD, MM and YYYY as being analogous to bytes, then you can’t call it middle-endian. This is why I said “somewhat”.

          • Anthony says:

            Ok. I hadn’t been thinking of that, but you’re right.

      • Chevalier Mal Fet says:

        Speaking as an American, I always put my dates in the infinitely more sensible DD/MM/YYYY format.

        I have a vague hope that eventually all of my countrymen will spontaneously start following me.

        • Radford Neal says:

          And in the meantime, you and the MM/DD/YYYY people are of course quite OK with people reading the date you wrote as 01/02/2015 and having no clue what it means. I mean, actually communicating information is less important than making a statement about how stubborn you are, right? Wouldn’t want to be like those wimps who go for the international standard of YYYY-MM-DD, which nobody misunderstands…

          OK, sorry for being a bit overly-sarcastic (especially since I’m not really sure that Chevalier Mal Fet was serious). But really, why do people use MM/DD/YYYY or DD/MM/YYYY? Do they not realize that substantial numbers of people use the opposite of these two formats, and that they therefore aren’t actually communicating?

          • Deiseach says:

            Do they not realize that substantial numbers of people use the opposite of these two formats, and that they therefore aren’t actually communicating?

            Americans don’t really believe in the existence of people outside the U.S.A. (e.g. all the English-language versions of Windows default to American English, so you have the happy experience of downloading your Irish-based purchase of the Office Suite for your Irish-based usage, and going through everything from Word onwards switching from U.S. spelling etc. to what you actually use – if I knew who thought setting Word by default to font Calibri, size 11, 6 pt spacing, multiple line spacing was a good idea and what every customer wanted, I’d strangle them with their own intestines), and the rest of us are either using a system that everyone else we communicate with understands, or we have to, like it or lump it, translate from American; it does help when reading a date given as 3/17/15 to go “There aren’t 17 months in the year, so this must be the American for St Patrick’s Day”. This does not, however, help when you barbarians call it Patty’s Day 🙂

          • speedwell says:

            YOU barbarians? YOU? Hey, here in this village outside Sligo, I’m the one who calls it “St. Patrick’s Day”, and it’s my husband (a Tyrone man) and the locals who wish me a “happy St. Paddy’s Day”.

            But I do agree with you on the stupid fake ineffective internationalism of Windows. I had to buy a new laptop from a UK company a month ago and kick Windows in the crotch repeatedly until it realized I was expecting Irish English, not UK English, and not whatever that thing is I grew up speaking in the US.

            Also, “16 Mar 2015” is my standard since I became a documentation specialist for a multinational corporation. It doesn’t help the non-English speakers much but since I write in English anyway, it presumably gets translated along with the text.

        • Airgap says:

          As an American, when the value of the day and month are such that there would be ambiguity as to whether a written date was day-first or month-first, I decide which to use by flipping a coin. Otherwise, I use ISO format.

        • Yehoshua K says:

          Personally, I just write the name of the month for clarity’s sake.

          • jaimeastorga2000 says:

            I do the same. A date like “1/Apr/2015” may seem clunky, but at least there is no possibility of a misunderstanding.

            But that’s only for documents that will be used by others, like checks or government forms. My personal files use the YYYY-MM-DD convention, which allows for easy alphanumeric sorting.
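
            A throwaway illustration of that sorting property (Python, made-up dates): plain lexicographic sorting of YYYY-MM-DD strings is also chronological order, which is not true of either slash format.

            ```python
            # Made-up dates; sorting the strings as plain text, no date parsing.
            dates_iso = ["2015-03-17", "2014-12-31", "2015-01-02"]
            dates_us  = ["03/17/2015", "12/31/2014", "01/02/2015"]

            print(sorted(dates_iso))  # ['2014-12-31', '2015-01-02', '2015-03-17'] -> chronological
            print(sorted(dates_us))   # ['01/02/2015', '03/17/2015', '12/31/2014'] -> 2014 sorts last
            ```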

          • Anthony says:

            The Russian (Soviet?) convention seemed to be DDmmYYYY with the month in roman numerals. So yesterday would have been 14III2015. My source on that may not be accurate, but it seems a somewhat sensible system, since only August is longer than using 3-letter abbreviations. (And you don’t need to translate, either. What’s 17ene2015?)

          • Emile says:

            I also systematically do the same, in case I’m read by an American that doesn’t go out much.

        • Harald K says:

          It’s maybe more sensible, but if you say we’ll meet at 1/1/2016 10:00, now you’re listing them in the order (from largest time unit to smallest) 3/2/1/4/5. How sensible is that?

          That is why 2015-01-01T10:00 (with military time, of course) is the standard. There’s very little chance of people misunderstanding you if you put years first and minutes last.

      • Cerebral Paul Z. says:

        Your own Pi Day celebration on 3 Dodecember will surely leave ours in the shade anyway.

      • Airgap says:

        Look, if you want to celebrate Pi Day on July 22, we’re not going to lead an international coalition to liberate you or anything. Well, probably not.

  10. DrBeat says:

    Why would anyone read Harry Potter and the Methods of Rationality when they could read Harry Potter and the Natural 20 instead?

  11. Airgap says:

    An idea I’ve been kicking around, which is probably stated better elsewhere: “Do your own thing” doesn’t scale.

    When my dad was young, the evil patriarchy reigned, and women were more or less supposed to be mothers and housewives. They could maybe do other things, but it wasn’t entirely uncontroversial, and in any case, being housewives and mothers was clearly the main thing they should do. A woman who categorically didn’t want to be a mother/housewife was obviously broken somehow.

    Dad and others were attracted to the feminism of the time, whereby they would change society so that if women wanted to be housewives and mothers, they could, and if they wanted to be other things, they could. Women obviously still had to have the babies, but if they wanted to have careers and the men wanted to raise the babies that was fine too. Basically, whatever works for you. Also, it was pretty hip back then, unlike now where every goddamned college kid and his dog are feminists.

    Flash forward to today: Women are more or less supposed to have careers, like men. They can maybe have children, but it’s not entirely uncontroversial, and in any case, having careers is clearly the main thing they should do. A woman who aspires to be a mother/housewife is either a moron, or laboring under the weight of the false consciousness whose last vestiges Dad didn’t get around to shaking off before he got busy raising his own family and going back to school and stuff like that.

    Looking back, Dad isn’t exactly turning his back on feminism, but he’ll admit this wasn’t the idea.

    Since I don’t want to write an essay of examples, I’ll just skip to the end: A few highly-independent individuals (like my dad) can use “Do what works for you” as a guiding ideal. For everyone else, we basically get to choose a fuzzy ideal, and have society copy it. We can have patriarchy for everyone, or feminist gynocracy for everyone. There may be a third option, but it won’t look any more universally unobjectionable than the first two.

    It’s not so much that my dad can beat up your dad as that if you want to move society, fuzzy ideals are how you do it. You can try to move it with abstract principles, just like you can try to teach a pig to sing. So your options are going to be fairly limited.

    • Anonymous Coward says:

      Two objections:

      How does the fact that men and women are expected to have similar life goals imply that we live in a gynocracy?

      Why do you think marrying someone who makes enough money that you don’t have to work should be considered a respectable life goal? It certainly doesn’t sound like one to me.

      • Airgap says:

        “Gynocracy” is just a term of abuse to go along with “Patriarchy.” If you prefer a different term of abuse, that’s okay with me.

        Your other objection is a straw man. You’ll have to at least plate him if you want me to take you seriously.

      • Anthony says:

        Why do you think marrying someone who makes enough money that you don’t have to work should be considered a respectable life goal? It certainly doesn’t sound like one to me.

        Ah – a sufferer from feminist false consciousness.

        Let’s rephrase: Why shouldn’t women aspire to marrying a man who is successful enough to allow them to spend all the energy necessary to raise their own children, rather than having to hire people with less attachment to their children to do that work for them? And incidentally, have children who hopefully have inherited some of the characteristics which made that man successful?

      • Irrelevant says:

        The example isn’t supposed to be the point here, but since I have yet to find a way to approach my opinion on the larger topic that doesn’t require I coredump my worldview, might as well address the example.

        How does the fact that men and women are expected to have similar life goals imply that we live in a gynocracy?

        The phrasing of the dichotomy was deliberately over-polarized, à la characterizing the abortion debate as baby-murderers vs. woman-haters. He’s implying there is no happy medium that doesn’t make someone in the space of colourable mainstream views declare you evil for having encouraged a social order that stomps all over their desires. My interpretation is this.

        People do not intrinsically know what they want. They must either rely on received knowledge of what “people like them” want, or they must absorb substantial opportunity costs of one form or another in order to experiment with different life plans until they find one that fits better. “Do your own thing” is commonly held up as an ideal and in the ideal person’s best interest, but it does not scale because it is only the very well-equipped (cognitively, financially, socially, whatever) who are able to eat the opportunity costs involved in determining what they want from a broad range of possibilities and come out ahead by doing so. For everyone else, the advice is bad, and executing their society’s best guess for people like them (or selection from a narrow range of best guesses) is in their best interest instead.

        Now historically, this was rarely a problem, because society’s best guess was nearly always right. Survival bottlenecks dominated the equation, maintaining the supplies of food, clothing, shelter, and bodies required rounds-to-all of humanity’s available labor, and virtually nobody could take on the costs of exploring non-default possibilities. In this context, strong gender roles, professional heredity, multi-generational clan marriage contracts, and similar constructs that feel like rank violations of personal liberty from a modern perspective were instead the legitimate results of a necessarily risk-averse optimization strategy: Women shall ensure disaster-resistant replacement-rate fertility and perform other compatible tasks. Men shall follow their father’s trade and marry their mother’s niece. Our search depth is terrible but by committing to this paradigm we’ve freed up enough resources that we don’t starve too much and we can slowly search for improvements. Technological change alters these constraints, with social change following as the dominant strategy is renegotiated.

        And through that system we slowly get to the modern world, where all those survival constraints are gone, and our search depth is incredible, and we devote the resources to pick out who’s inclined to be surgeons out of the field of people inclined to be doctors that we picked out of the field of people inclined to be technical that we picked out of the field of people inclined to grasp abstractions well… but we’ve still got some problems. Three of them, specifically.

        Firstly, remember that awful, dumb, search depth-1 survivalist algorithm we started with? Well the bad news is we used it for so long that we reshaped ourselves to use it better. Inability to identify the dominant strategy is existentially distressing, and when we see other people who are failing to comply with the dominant strategy we want to gang up and shout at them, and when we meet a gang of people like us shouting at us to follow the dominant strategy we’re inclined to agree.

        Secondly, the perceived dominant strategy is laggy and therefore historically contingent.

        Thirdly, when I said we’d gotten rid of all our initial constraints, I was lying. We still need disaster-resistant replacement-rate fertility.

        Enter feminism, which has taken up the unenviable responsibility of trying to renegotiate what we consider the dominant strategy for women while subject to these constraints. What we want is a new default division of labor that lets people find what’ll make them happy and productive without expending extraordinary effort or wallowing in self-doubt ever after, avoids demographic crises, and doesn’t heavily hinge on the historical accident that we solved the problem of farming being really hard before we solved the problem of clothing requiring continuous intensive maintenance.

        What we tend to get in practice is tactical abuses of Problem 1 to declare my personal preference the new default and enforce that by ganging up on anyone who doesn’t agree.

        • Cauê says:

          I basically like this, but:

          >”(…) when we see other people who are failing to comply with the dominant strategy we want to gang up and shout at them”

          We do? Why? This doesn’t look realistic to me. Looks like it’s lacking moving parts, a couple of gears missing between “see people failing to comply with dominant strategy” and “want to shout at them”.

          • Irrelevant says:

            Wait, are you arguing that we don’t enjoy exerting peer pressure, or are you asking me to explain why/how we enjoy exerting peer pressure, or did you just not recognize that as a description of peer pressure?

          • Svejk says:

            Well, it has been argued that humans have a well-developed Cheater Detection module as part of our survival algorithm (quite useful in negotiating reciprocity and repeated Prisoner’s Dilemma scenarios), and that module might identify certain non-compliers as defectors or parasites (sometimes rightly so!)

          • Cauê says:

            Irrelevant, I was looking for some response like Svejk’s.

            If our “cheating-detection” mechanisms, for some reason, activate in response to “people failing to comply with dominant strategy”, that’s one missing gear found.

            But then, why *would* that activate our cheating detection mechanisms?

          • Irrelevant says:

            Ah, in that case, yeah, Svejk has the right idea: We have an evolved tendency to find judging people intrinsically rewarding, which took hold because it functions as a hardware hack that rigs a bunch of complex coordination problems towards reaching the right answer. This allows immensely useful, society-enabling things like altruistic punishment to function by default, rather than requiring everyone to be inducted into some sort of explicit bargain.

            As to your second question, it works on optimization failures rather than just defections/sins because the mechanism involved isn’t very specific, and seemingly triggers on a sense of “you’re doing something wrong” that’s more like “you’re violating my category-informed predictive model of you.”

          • Cauê says:

            >” it works on optimization failures”

            Oh, now I see it. I was looking at it as punishment for defection, and it just wasn’t clicking (incidentally, “cheating detection” looks wrong now).

            Nice. I see the gears connecting self-optimizing through empathy/mirror neurons into other-optimizing; getting annoyed when you see people behaving sub-optimally would then be analogous to flinching when you see someone get hurt.

            And/or: whatever it is that makes us imitate and learn from others might “try to process” behavior X but reject it as stupid, generating dissonance and annoyance.

            But I don’t know what I’m talking about, and will update my reading list.

          • Irrelevant says:

            (incidentally, “cheating detection” looks wrong now)

            Yeah, I’m not sure on that part either. We’re looking at the system that generates contractualism-like social models in hardware without requiring you actually convene the contractualist congress. Reciprocity and contractualism are obviously related, and when I’m blackboxing I want to assume as few internal mechanisms as possible, so my initial guess would be that “cheater detection” uses the same system.

            But on the other hand, altruistic punishment is as far as we can tell human-specific, so we’re talking about a very new innovation, while other animals can still pull off simpler forms of reciprocity. And the emotional signals on observing defection and on observing atypical behavior feel somewhat different, in a way that’s probably not just one being an attenuation of the other. So my best guess is that “You’re being unfair!” and “You’ve gone insane!” do use different systems. (Sanity check: if they’re different types of bad, an alignment between them should feel even worse. And… yep, treason gets you an extra-special circle of hell.) I acknowledge, however, that I’ve got a lot of social programming telling me that individualism is a good thing, so it’s possible that I’m wrong and if I lived in circumstances where there were no mixed messages on the matter, I would feel moral signals from conformity failures in the same way I do cheating. So I’m unsure.

        • Airgap says:

          A+

          What we tend to get in practice is tactical abuses of Problem 1 to declare my personal preference the new default and enforce that by ganging up on anyone who doesn’t agree.

          There’s certainly a fair amount of that, but it’s not clear to me that these people (I’ll call them “Feminist Kants”) are the reason why we can’t have nice feminism.

          My intuition here is that the way you communicate the renegotiation to society is by saying “Here is Sally. She is a woman who has a career. This is good. You should be like Sally.” If you try to make it complicated like “Sally is good, but so is Betty who is a housewife, because Sally & Betty’s respective natures fit with those social roles” society can’t hear you over the clamor.

          As much as I’d like to blame it all on horrible Feminist Kants, I don’t think the problem is their fault. Or maybe it is in the sense that they made an impact which was the only impact which could be made. But reasonable people could have believed that all of the views put forward would be considered by society, and the best overall one would be accepted.

          • Irrelevant says:

            You’re of course right that it’s more complicated than that and especially that it’s harder to assign blame. The whole line of reasoning is based on taking an ennobling high-agency approximation of humans that offers a sort of ceiling function to people’s behavior but is not in fact correct. Morality and blame are a particularly weak area of correspondence because the model assumes path invariance, while morality is all about path. If the king declares a law you dislike, responding by giving a speech that moves him to recant, by murdering him and installing a new king, or by starving to death are all considered “negotiation” moves that, to the extent that they generate the same effect on how people act in the next generation, are equivalent actions.

            So when I spoke of tactical abuse, I was doing it at an abstraction level that groups together both “Actor A comes up with a plan that makes them happy, realizes it would do terrible things if generalized, and advocates universal implementation of their plan” and “Actor B comes up with a plan that makes them happy, pre-wonders for a moment if it would do terrible things if generalized, recoils from the line of thought because it gave stressful feedback, and advocates universal implementation of their plan.” That, when they say they think their plan is good for everyone, A is lying and B is mistaken doesn’t make a difference, because they’re sending the same incorrect signals and externalizing the same costs onto everyone else.

            My intuition here is that the way you communicate the renegotiation to society is by saying “Here is Sally. She is a woman who has a career. This is good. You should be like Sally.” If you try to make it complicated like “Sally is good, but so is Betty who is a housewife, because Sally & Betty’s respective natures fit with those social roles” society can’t hear you over the clamor.

            But reasonable people could have believed that all of the views put forward would be considered by society, and the best overall one would be accepted.

            I agree with how the communication tends to function but not why it seems to be failing. What reasonable people seem to have believed starting off was that the ideals could diversify without persistent conflict. That is, that deciding whether you were a person like Sally the Businesswoman and should imitate Sally or a person like Betty the Housewife and should imitate Betty would be relatively simple. And they had good evidence for this, because on the male side visions of success aren’t in bitter conflict. We have a lot of competing ideals but people can mostly figure out which group has the people they’re more like and be OK there. There’s status competition between groups, to be sure, but the soldiers don’t feel a need to convert all the bankers into soldiers, and the bankers don’t feel a need to convert all the soldiers into bankers, and no man’s publishing articles anguishing over the fact that they failed to become soldier-bankers and whether this means they’ve let down their gender. And for some reason this hasn’t turned out to be the case on the women’s side yet.

  12. Stella says:

    I am three chapters into Harry Potter and the Methods of Rationality and so far it is one of the best things to ever happen to me, fiction-wise.

    • Airgap says:

      One can only imagine how delighted Stella will be when she discovers science fiction novels. Or non-science fiction novels.

      Scott, do you suppose you could do something to discourage the degree of hero-worship we’re seeing from your commentariat? It’s kind of worrying. These are smart kids you’ve got here, and they’re not broadening their horizons. I can try trolling harder, but I’m not sure it’s working.

      • jaimeastorga2000 says:

        non-science fiction novels

        You mean fantasy novels, right?

        • Airgap says:

          I mean kumquats, the concept of doubt, or a blurry polaroid of someone you loved passionately who died too soon.

      • Scott Alexander says:

        Remind me again what’s wrong with liking a really good book?

        (I doubt it’s hero-worship; if Stella’s only 3 chapters in right now it means she hasn’t been following along and probably doesn’t even know who Eliezer is)

        • Airgap says:

          Remind me again what’s wrong with liking a really good book?

          You think HPMoR is a really good book? Not like, amusing, a bit educational, or a better way to spend your time than petty theft and drug abuse, but really good? I guess I have to revise my opinion of one of you…

          Maybe I’m reacting too strongly, but I’m seeing a pattern of people getting, in my vainglorious opinion, too attached to certain members of the rationalist community as individuals, as opposed to somebody who has produced good thoughts and will probably do so again.

          There was that medical student kid who said, forget going around to medical student forums and asking lots of different people for their advice, and attempting to somehow synthesize this information into an informed decision. I’m just going to ask Scott, and his views will determine my decision. He’s like a rationalist medical-resident God!

            Maybe that’s not totally accurate. Maybe what level of devout following you get is natural given the effort you put into this blog and the quality of it. Maybe you have too much responsibility as it is to worry about whether people are taking your opinions too seriously. I’m not totally sure. I just get a weird feeling occasionally. I’ll take two aspirin and call you tomorrow.

          • Zorgon says:

            Dude, do you even availability heuristic?

            Of course you’re seeing a lot of “Woo Yay Eliezer” and “Woo Yay LW” etc on here. You’re here.

          • Pete says:

            I don’t really care about Eliezer but all I can say is that I’m reading three things currently, HPMOR (as of yesterday), Better Angels of our Nature, and an apparent classic of English language literature, Ulysses, and I’m finding myself wanting to read HPMOR more than the others.

            I’m not going to argue it’s great literature, but then neither is the source material. It is, however, amusing and enjoyable and makes me want to know what happens next, and that surely meets at least some definitions of “really good.”

            Taste is entirely subjective, remember. I’ve read LOTR twice and found it incredibly tedious both times.

          • MicaiahC says:

            I have a friend who considered Eliezer a hack, was not much of a fan of Harry Potter beforehand and who has very little tolerance of things he deems low quality.

            He finished the entire story in a week.

            Now, I’m not actually much of a fan of the story (the prose could use a lot of work, and there’s a complicated plot but not a theme), but I find it hard to believe that you can’t give it credit as at least a page-turner, one that works on people who aren’t completely turned off by EY’s stupendous status blindness.

          • BD Sixsmith says:

            I think it’s worth remembering that lots of people in Less Wrong spheres have a STEM background and minimal interest in literature. To open a novel and find critical thinking + wizards might be something of a revelation.

          • Anonymous says:

            Something like 1/4 of Less Wrong survey respondents were referred to LW by HPMOR, not the other way around. So a lot of people like HPMOR on its merits (whatever they are) rather than liking it because they are “too attached to certain members of the rationalist community as individuals.”

          • houseboatonstyx says:

            To open a novel and find critical thinking + wizards might be something of a revelation.

            For that, put Y’s Harry into iirc _The Flying Sorcerers_, instead of assuming a Hogwarts faculty with no common sense.

          • Nick T says:

            I agree with a lot of this (while still having a high opinion of the rationalist community), but it’s silly to conclude that anyone who thinks HPMOR is “really good” is hero-worshipping. (Even if only someone with bad taste would think that, bad taste is different from hero worship.)

          • Airgap says:

            The prize for adding value to the discussion goes to BD Sixsmith.

            Anyone who wants to beat this dead horse further should keep in mind that the words complained of were “One of the best things to ever happen to me, fiction-wise.” “Really good book” was Scott’s more measured description.

          • Deiseach says:

            I think it’s worth remembering that lots of people in Less Wrong spheres have a STEM background and minimal interest in literature.

            Minimal interest in literature.

            This sentence as a whole makes me sad and that part makes me cry 🙁

            It’s like saying someone has eyes but would never bother even looking at Botticelli’s Pallas and the Centaur because pfft, centaurs aren’t real!

            C.P. Snow’s The Two Cultures is alive and kicking still, and even worse because STEM fields are more than ever considered to be the Only True Way of knowing anything or helping make life better, while the humanities are the equivalent of putting gilt on gingerbread.

          • Scott Alexander says:

            “You think HPMoR is a really good book? Not like, amusing, a bit educational, or a better way to spend your time than petty theft and drug abuse, but really good? I guess I have to revise my opinion of one of you…”

            I don’t know how to answer this.

            I’m aware that The Great Gatsby is supposed to be an amazing brilliant literary classic. I’ve read it at least twice, and although I get the themes it’s trying to talk about and I feel like I understand it on the same level as the people writing the “Understanding The Great Gatsby” guides, it doesn’t do anything for me. It doesn’t excite me. It doesn’t move me. It doesn’t teach me anything about life. Give it to me alongside an agreed-to-be-mediocre story about 1920s rich people doing 1920s rich people things, and I won’t be able to tell much interesting difference.

            On the other hand, HPMOR did excite me, to the point where I would religiously read each update the day it came out. And it did move me, to the point where I cried at the points where I was supposed to cry and felt uplifted by the points where I was supposed to feel uplifted. And it did teach me things (or at least it would have if I hadn’t read Eliezer’s previous presentations of the same ideas). It’s certainly not unique in this – Asimov, Pratchett, Carey, etc are other authors who can do this to me – but it was part of that group.

            Does that mean HPMOR is better than The Great Gatsby? I know I’d be lynched if I said it was. So all I can say is that I see a lot of evidence that there’s something called “great literature” such that people who can appreciate it get more out of great literature than I get out of anything, and it’s an extremely difficult technical challenge to produce great literature that should be celebrated when someone does it right – but that I lack the knack for it.

            But if I had to remove either Gatsby or HPMOR from my own life, it wouldn’t even be a hard choice.

            This isn’t because I’m uncultured – I wouldn’t be surprised if I’ve read more great classics than the people who are going to show up to mock me here. And I’m not totally dead-set against the literary establishment – I have very strong appreciation for classic poetry, such that I can admit that eg Tennyson has done something I wouldn’t expect one in a million people to be able to replicate, even including Eliezer. It’s just that the great works of prose don’t do it for me.

            Also, I strongly reject all of this “Less Wrongers, as STEM nerds, don’t know anything about culture or literature” stuff as rank stereotypes. I am consistently impressed with how much “STEM nerds” know about history, literature, and culture. The SSC meetup in San Jose, the heart of Silicon Valley, devolved into competing to see who could recite more Kipling poems, and debating the causes of the fall of the Roman Empire.

          • BD Sixsmith says:

            I am consistently impressed with how much “STEM nerds” know about history, literature, and culture.

            Would like to be clear that I said “lots of people” and not “all” and did not say “nerds”.

            I don’t think people necessarily lose out by being uninterested in literature any more than I lose out by being uninterested in maths. There are always going to be sciency types and arty types for the simple reason that the hours in a day are finite and so are the interests in a life, and it takes a special kind of person to learn quantum theory and read the canon. (The kind of people one might meet at SSC meet-ups in San Jose.) What societies need is for the interests of different people to complement each other so that they can form a collective wisdom.

          • Airgap says:

            @Scott:

            You answered correctly. If you wanted to say “HPMoR is a really good book, like Scoop or My Man Jeeves” I’d say you could make a case for that (although there’s no way it’s better than Scoop).

            Also, I strongly reject all of this “Less Wrongers, as STEM nerds, don’t know anything about culture or literature” stuff as rank stereotypes. I am consistently impressed with how much “STEM nerds” know about history, literature, and culture.

            Then you know cool STEM nerds, and I’m jealous, but part of me is also incredulous. I’ve met cultured STEM nerds too, but not many. I’m confident the stereotype is accurate. But then, most of them are.

          • Stella says:

            1. I have, in fact, read *lots* and *lots* of books, in a variety of genres though fantasy and science fiction have been my favorites.

            2. When I said that HPMOR was one of the best things to ever happen to me fiction-wise, I was exaggerating for comedic effect. I was, after all, only three chapters in. I am enjoying it, though.

            3. I haven’t finished it, so I can’t judge whether it’s a “great book” or not. I’m not sure I believe in the concept of “great books” per se–I may be a book relativist. That being said, I think it has clear prose, strong character work, and page-turning plotting. The humor fits my humor, and if it’s insaaaaaaaaanely preachy at least I mostly agree with its sermons. I also appreciate how it relies on its readers having a detailed knowledge of the original books fueled by childhood nostalgia (the first HP books came out when I was 7, making me the perfect age for them) so that you can appreciate the hints and allusions and ways the originals have been slightly twisted. Similar things have been done, of course, mostly to fairy tales and mythology, but the HP books are huge books full of details, giving more opportunities for twisting. I’m finding that an extremely pleasant experience.

            4. I do not know who Eliezer is. He’s the guy who wrote it? Whatever.

          • Stella says:

            Scott: That is literally exactly how I feel about The Great Gatsby. The only thing that moved me about it was the poetry of the language, though even that I find exhausting after a while. I, personally, like my poetry short as a punch. (Similarly, I like Faulkner’s *sentences* and his *paragraphs* but not his *novels.*)

            Great Literature improves the world, and has truth and beauty in it. However, I have always detected a strong dose of tribal signaling (a.k.a. snobbery) in those who not only love Great Literature but scorn those who aren’t as inclined to it. All groups do this, of course. I’m a chemical engineer, and in college we would laugh about how much easier the civil engineering course was and also about how much more socially adept we were than the electrical engineers.

            The Great Literature snobs have always annoyed me particularly for all the usual reasons one group of snobs annoys you more than another.

          • Airgap says:

            The Great Literature snobs have always annoyed me particularly for all the usual reasons one group of snobs annoys you more than another.

            Metasnobs are the worst.

          • James says:

            Don’t get me started on metametasnobs, Airgap.

        • Jiro says:

          What I read of HPMOR screamed out to me “this is wishful thinking where you can be right but not have any social skills, and actually get away with it”.

          (And Harry is only even right by authorial contrivance. If his ideas worked, someone would have already been implementing them; if not, there’s probably some reason why they don’t work which involves knowledge he’s unaware of.)

          • Samuel Skinner says:

            It turns out Harry is wrong in a lot of the things he thinks. “Harry overestimates his ability to do things” is a really important theme in the story.

          • jaimeastorga2000 says:

            (And Harry is only even right by authorial contrivance. If his ideas worked, someone would have already been implementing them; if not, there’s probably some reason why they don’t work which involves knowledge he’s unaware of.)

            You should try reading Eliezer Yudkowsky’s Abridged Guide to Intelligent Characters.

  13. Matthew says:

    A possible motivational mindhack I have accidentally stumbled on:

    The door to my office at work used to have an extremely difficult lock that required a bunch of finesse to get open. A couple of years ago, they replaced the lock with a new one that is much easier to work. Yet even now, I still get a substantial sense of satisfaction from getting the door open, as though I’ve pulled off something requiring a significant degree of skill (a not totally inaccurate description of getting the old lock open).

    So if you’re faced with a necessary routine task you’re not particularly motivated to perform, it may be possible to get a long-term increase in the dopamine reward your brain releases by temporarily adding complications that make the task harder.

  14. Pete says:

    So, I’ve just started reading HPMOR and I’m enjoying it so far, finding it laugh out loud funny and so on, but does Harry get any less irritating? I just want to give him a well deserved slap around the head at this point (I’ve read 12 chapters).

    • Tenobrus says:

      I never found him particularly irritating, but his behavior does change pretty significantly as the story morphs into more of a… well, story, instead of just straight deconstruction. Also, see this: https://www.facebook.com/yudkowsky/posts/10152305725324228

      • houseboatonstyx says:

        Thanks for the link. That’s the chapter I bailed at, but not so much for the status reason that others gave. I thought the author was making McG (and Hermione in a previous chapter) very stupidly stupid; trashing good characters. If he wanted to show HP being smarter, he should be smarter than their best state, not smarter than their cartoon flat versions.

        I sort of tolerated it with Hermione, who was another student, and it was sort of reasonable to use another student as a Watson. But making the experienced teacher just plain dumb? As for respect, it seemed that H was deliberately being mean to McG, who might reasonably have felt bad when he did it.

        IIRC, in an afterword to that chapter, EY said dropping anvils on characters came along with first-draft inspiration. If he ever rewrites it to clean out that sort of thing, I hope someone will let me know.

      • Pete says:

        Thanks for that. Since I wrote that comment I have read another couple of chapters and he’s grown on me slightly.

        As I said, I’m enjoying it (although I actively disliked chapter 6, and I wasn’t a huge fan of the Sorting Hat chapter either), but I don’t think I could have read over 100 chapters with me wanting to strangle Harry to within an inch of his life before Obliviating him so I could do it all again next week without any consequences. I’d happily just give him a couple of well-timed pies in the face at this point.

    • 27chaos says:

      No, he only gets worse. However, if you can manage to ignore him, it can be entertaining.

  15. Airgap says:

    @scott: if you blocked my last comment for language choices, I’m happy to clean it up a bit.

  16. walpolo says:

    Can I complain about something about social justice for a bit? I work in a field of academia where there are a lot of social justice types, and it’s very easy to get “called out” for saying very ordinary things on the professional blogs. For example, I’ve seen people told they were being ableist for saying an idea is “crazy” or for using the phrase “blind review” to describe anonymous refereeing.

    This gets to the thing I find strangest about call-out culture: People within this culture often seem to think that violations of even very controversial and arguable ethical principles ought to be called out. These are principles accepted by only a tiny segment even of progressive academics.

    That says nothing whatsoever about the truth or falsehood of these principles, of course. But it seems to me that it does have import for whether it makes sense to call people out. My vegan friends don’t publicly call me to account whenever I drink milk in their presence. I think they’re making the right decision by not doing so, and I thank them for it.

    (Of course I’m always happy to have a substantive discussion about whether some controversial ethical thesis is right. What I find odious is being on the receiving end of social opprobrium for holding the nearly-universally-accepted opposite view.)

    • Airgap says:

      People within this culture often seem to think that violations of even very controversial and arguable ethical principles ought to be called out.

      Well, sure. That’s where the money is.

      The way they see it, there’s no real need to call out views which are well beyond the pale of the right side of the overton window.

      A: “Slavery is too good for subhuman nigger scum. They should be executed to a man, cremated as a body, and their ashes shot into space, to ensure future scientists are unable to clone them from traces of their DNA.”

      B: “Whoa, dude. Not cool.”

      Person A is already effectively excluded from the conversation, or at least the one carried on by people with any power. Also, a combination of social and cognitive stratification as well as their own discrimination as individuals ensures that SJWs will never meet anyone like Person A.

      (Note you can replace “SJW” with “Stormfront Poster” and that sentence is still accurate.)

      By contrast, calling people out over controversial points advances the SJ cause.

      That says nothing whatsoever about the truth or falsehood of these principles, of course.

      Why would they? A true-blue activist will see this as like having a philosophical discussion with an enemy soldier over whether you’re allowed to shoot back at him. Having such a discussion admits that there’s a discussion to have. If there isn’t, his position is stronger.

      My vegan friends don’t publicly call me to account whenever I drink milk in their presence.

      You’ve probably selected your friends carefully for qualities like basic human decency. SJW scum comes in “vegan” too.

      What I find odious is being on the receiving end of social opprobrium for holding the nearly-universally-accepted opposite view.

      If Social Justice was your boyfriend, I’d tell you to quit complaining and dump him because he’s an abuser and he definitely won’t stop if you keep indulging him. I don’t mean you need to change your views (that can wait), but is it impossible to escape your present toxic SJ environment? Are there doors, for example?

      • Airgap says:

        You mention “Progressive Academics” though, which makes it sound like you’ve chosen to place yourself in a position where SJW abuse is normal and tolerated, possibly without having considered this fact, and you probably cannot leave without abandoning a significant investment. I don’t really know what your options are, but I suggest you consider them. Maybe you can find a department with fewer assholes. It’s worth a shot. And if none exist, that’s worth knowing.

        • walpolo says:

          It’s very far from being a problem that would ever make me consider changing my field (which I love). It’s more of an annoyance that keeps me away from certain issues on certain blogs. Doesn’t happen much in person.

          • Airgap says:

            I meant “Department” in the sense of “University” rather than “Field of study.” Like, maybe the faculty of Walpolo’s Chosen Field at Columbia is saner than the one at NYU. You don’t go nuclear until you’ve ruled out conventional warfare.

    • Airgap says:

      [deleted]

  17. Anthony says:

    Scott, and anyone else interested in psychiatry and deinstitutionalization, I recommend Clayton Cramer’s My Brother Ron: A Personal and Social History of the Deinstitutionalization of the Mentally Ill.

    Cramer is one of the people who discovered that Michael Bellesiles was making shit up, and documented how and where Bellesiles was wrong. He later wrote a book about it, which is a cross between a history of guns in the US and a prosecution brief against Bellesiles; My Brother Ron is as meticulously footnoted as the previous book. There are some fascinating bits of history in the book, and it does a great job of explaining how we got to the point of using urban alleyways as our primary housing for the severely mentally ill.

  18. Stephen Frug says:

    Scott, I don’t know if you take post topic requests/suggestions. On the off chance you do, here is one (or perhaps two, if you want to split it up).

    You often write about Less Wrong, and sometimes also about Eliezer Yudkowsky, who I understand to be the main person behind/author of material on LW, as well as a personal friend of yours. When you write about it, you often write as if everyone who reads Slate Star Codex reads/is involved with LW. Personally, I’d never heard of it until I began reading your blog a few months ago, and still haven’t done more than poke around the site a bit, and google a bit.

    So what I’d love to see is Scott’s Guide to the Less Wrong World, and/or Scott’s Personal Story of His Involvement with Less Wrong & Eliezer Yudkowsky, or both combined into one. Basically, I’d love to hear what you’d have to say about the site and/or the person behind it to people (I’m guessing I’m not the only one who reads SSC?) who don’t know much if anything about it/him.

    This was prompted, by the way, by following the link to the Vice article in the main post. It implied a certain cultishness to LW, which I gathered from googling is one of the main lines of its detractors. So I was a bit surprised (only a bit) that you linked to it. If you wanted to address this (apparently) common perception — and certainly a perception that a quick google on Eliezer Yudkowsky and Less Wrong leaves, rightly or wrongly (I presume you think wrongly) — I’d love to hear about that in particular.

    Thanks!

    • Pseudonymous Platypus says:

      I’d also be interested to hear about how Scott got involved with Less Wrong. I’ve tried (and I’m still trying, slowly) to become acquainted with the site, but I have to admit that I’m finding it difficult. The Sequences do not make for particularly easy reading, IMO.

      • Scott Alexander says:

        The Sequences were much easier reading when they were published as a blog post a day. Someone reading the SSC archives would probably have a hard time of it too.

        • Pseudonymous Platypus says:

          I only started reading SSC less than a year ago, though, and I don’t have the same problem with it. The “best of” page is great for getting up to speed on the most important posts, and there’s a clearly ordered archive where I can go back and read posts in chronological order.

          I think some of the problems I’m having reading The Sequences have to do with the format I’m trying to read them in (one of the e-book compilations). Organization/ordering is a big problem with the particular compilation I downloaded, and it also hasn’t really been cleaned up to work well as an e-book; there are still links all over the place that point to the original posts, which don’t work well on my Kindle. I hear there’s another e-book compilation being published soon, so I’m hopeful that it will solve those problems.

          The other problem I have, though, is that some (several?) of The Sequences are written as parables, and… well, I’m not opposed to parables per se, but I feel like they’re not the best format for a treatise on rationality.

        • Airgap says:

          Also, you didn’t have Eli’s posts interspersed with a bunch of “Rationalist recipes for cupcakes” grade posts from the hoi polloi. Sure, Robin did post occasionally, but it wasn’t nearly as bad.

        • onyomi says:

          Your writing is clearer and less idiosyncratic than Eliezer’s.

        • Princess Stargirl says:

          Having read both the SSC archives (and much of your old blog) as well as most of the Sequences, I disagree. Your writing seemed very clear. The Sequences required a lot of pain and effort to understand. In many cases I had to read an Eliezer post multiple times (separated by weeks or months) to even really get what the main point was.

          I could just be unintelligent, but Luke M. also claims that Eliezer’s writing is impenetrable to him. And Luke is not dumb.

          I would be very surprised if an intelligent person had a lot of trouble understanding Scott’s writing. But tons of intelligent people report great difficulty in understanding Eliezer’s posts.

          • Pseudonymous Platypus says:

            I guess I sort of already said it above, but I will third this, and I consider myself at least reasonably intelligent – although perhaps only average for the SSC crowd. 🙂 I’ve only read a handful of the Sequences and there have already been several where I felt like I just wasn’t getting the point. I shall continue trying, though.

        • FullMeta_Rationalist says:

          FWIW, the Ebborian story during the Quantum Physics Sequence really threw me. Otherwise, I felt both you and Eliezer were pretty clear.

          Also. When I stumbled across LW, I didn’t really follow any of the sequences at first. I just kinda followed Eliezer’s links wherever they would take me. Like I would do with TV Tropes, or Wikipedia.

        • aguycalledjohn says:

          I think also that when the community was more active, and people like you and lukeprog were regularly posting good articles with links back to the Sequences, that made it easier to get into.

      • Cauê says:

        I’m a bit surprised by this thread.

        I thought the sequences were easy enough to read, and quite pleasant. Exceptions: the QM sequence (but that’s on QM, not Eliezer, and he made it easier than I would have expected), and the metaethics sequence (I think EY gets weird when speaking of ethics, but then again who doesn’t).

        I’m not sure what it’s being compared to, but I can’t think of more readable sources that deal with similar subjects (Scott is wonderful, but his preferred subjects are easier to communicate, I think).

        • Anonymous says:

          Also, I think Scott’s style produces the illusion of agreement more than Eliezer’s. And maybe the illusion of comprehension. This might just be that Scott writes longer essays, while Eliezer’s short essays cause one to stop and assess more often.

    • Scott Alexander says:

      The short version is that economist Robin Hanson and self-taught AI safety researcher Eliezer Yudkowsky started blogging about cognitive science and philosophy on the site Overcoming Bias in 2007ish. A lot of people liked their stuff, and so Eliezer spun off an online community sort of like a web forum called Less Wrong.

      I started reading Overcoming Bias in 2008. When Less Wrong was formed I did a lot of the blogging there that I’m now doing here. When I got this site, a lot of people followed me over, so a lot of (most) SSC readers are familiar with LW.

      People on LW often believe in unusual things like transhumanism and effective altruism, and they sometimes use a lot of idiosyncratic jargon (though I think we’re getting better at that). This has created a really strong in-group vibe, and so a lot of people formed real-life friendships and relationships within the LW community or even moved to the Bay Area where the community is strongest. This has led some people to joke that LW is a cult. It admittedly doesn’t help that many members are polyamorous, that some people like experimenting with “rituals” and “ceremonies”, and that Eliezer is constitutionally incapable of doing anything without coming across as hilariously over-the-top arrogant, and at some point instead of fighting it he just turned it into his style so that now it’s kind of hard to tell when he’s joking or not.

      Aside from reading his writing and hugely respecting his ideas, I haven’t had too much personal interaction with Eliezer, although when I lived in the Bay Area in 2012 we sometimes went to the same parties. I just find him consistently insightful and interesting in a way very few people are.

      Probably the best introduction to all of this you could get is to read the new ebook version of the original blog posts that started Less Wrong.

      • Airgap says:

        Part of the “Cult” allegation stems from the incident where most of the staff of SIAI committed suicide via phenobarbital-assisted asphyxiation as part of a misguided attempt at “Mind Uploading.” Eliezer was not indicted in connection with the incident, but he was forced to sell the trademark to Ray Kurzweil in order to fund the creation of MIRI.

      • Noah Siegel says:

        He gives you a shout-out in the Preface for being responsible for making LW a nicer culture.

        • Scott Alexander says:

          Thank you for telling me this. I might not have found it otherwise, and it made my whole week.

      • Stephen Frug says:

        Thanks for the reply! If you felt like talking more about your own personal experience/take (i.e. the sort of stuff that *isn’t* in the sequences), I’d love to read more.

      • Brett says:

        How is “Eliezer is constitutionally incapable of doing anything without coming across as hilariously over-the-top arrogant, and at some point instead of fighting it he just turned it into his style so that now it’s kind of hard to tell when he’s joking or not”

        distinguishable by anyone that isn’t Eliezer from

        “Eliezer used to take steps to pretend he wasn’t hilariously over-the-top arrogant, and eventually just gave up trying.”?

        Because I have to say from everything I’ve read (I’ve never met the guy), it sounds a lot like the second one.

    • jaimeastorga2000 says:

      May I recommend you read The Sequences? I would rate Scott Alexander as second only to Eliezer Yudkowsky as far as non-fiction authors go.

      • zz says:

        Just today, a proper ebook of the sequences has been published, and can be yours for as little as $0.00!

        http://lesswrong.com/lw/lvb/rationality_from_ai_to_zombies/

        • Anonymous says:

          Maybe that’s a proper ebook and Jaime’s second link isn’t a proper ebook, but at least it’s a proper server. It will take me at least 10 minutes to download from your link and determine what you mean by “proper.” That’s a lot more than $0.00 of my time and attention.

      • Chevalier Mal Fet says:

        You should read Terry Pratchett’s book of essays, A Slip of the Keyboard. Pratchett is just as good a non-fiction writer as he is fiction.

        …er, was. ;_;

        <_< Buuuut I'm kind of hugely devoted to him and still in mourning, so I may be biased.

      • Airgap says:

        How many non-fiction authors have you read? I have a lot of respect for both Scott and Eli, but come on.

      • > I would rate Scott Alexander as second only to Eliezer Yudkowsky as far as non-fiction authors go.

        I agree with those words, but not their order.

        • jaimeastorga2000 says:

          But, Ancient, what does “Scott Yudkowsky would only go as far as I to rate Eliezer Alexander non-fiction authors as second” even mean?

      • Ilya Shpitser says:

        Sorry, _all_ non-fiction authors? How much non-fiction have you read?

        re: “cultishness.”

        A certain inability to look past one’s nose (like the parent post) is one problem in the community that I see a fair bit.

        Eliezer’s ego/acting like a guru is another problem. I am sure some of it is a kind of inside joke … but can you really do that? If I act like an asshole online, for example, can I just say “actually it’s just an inside joke between me and my friends.” No, I think it just means I act like an asshole online.

        I find myself criticizing the rationalist community a fair bit, which is kind of a shame, because I find much value in many rationalist ideas. I just find the social dynamic extremely off-putting, and I think folks whose cultey sense is tingling are completely right.

  19. Pseudonymous Platypus says:

    I’ve posted here a few times about my anxiety problems, and after many ineffective stops on the med-go-round, combined with semi-effective psychotherapy, I’m still looking for a better solution. Today I saw an ad for this device. I immediately assumed it must be a Scientology-level scam: I’ve never heard of it before; no doctor or psychiatrist has recommended it to me; and it looks like a fucking stud finder.

    But then I started reading and saw that there are apparently multiple published studies supporting its efficacy, and that it is “FDA cleared.” It also requires a prescription to purchase. I haven’t read any of the studies myself yet, and given my limited knowledge of medicine and biology, I don’t know that I’d be able to spot any methodological errors or even blatant factual falsehoods.

    I’m still very skeptical about this device, but anxiety really sucks, so if there’s even some real evidence behind it, I might be willing to try it… even with the hefty price tag. Can anyone here with more expertise (*cough* Scott *cough*) comment on whether or not this thing might be legit?

    • Scott Alexander says:

      I’d never heard of this. A brief search turns up this article in which the FDA rejects it because “The Agency reported finding mixed results and problems in study size, design, and methodology. The reviewers concluded, that the FDA believes the available valid scientific evidence does not demonstrate that CES will provide a reasonable assurance of effectiveness for the indication of insomnia, depression, and anxiety.” Wikipedia sounds similarly unimpressed. And the anecdotal reviews I am seeing online are a lot of “it did nothing for me”.

      This doesn’t seem like a complete scam – in theory it’s the sort of thing that might work and it does have some intelligent people behind it. But if you’re after highly theoretical things with only a smidgeon of evidence, I bet you could find much less expensive ones.

      • Pseudonymous Platypus says:

        Thanks Scott! I have to admit that wasn’t the answer I was hoping for, but as the litany goes, “if the box does not contain a diamond…”

        Words can’t express how much I appreciate you spending your own time to look at these sorts of things for me. Truly. I doubt I’ll ever be able to repay the favor, but if there’s anything I can do for you, let me know.

    • J says:

      Several people including Scott have suggested inositol as surprisingly good for panic (but not if you’re bipolar). I tried some today with lunch and found almost immediate relief. Fwiw my anxiety all seems to be centered in my gut and the inositol made it go from panicky to kind of a numb fuzzy feeling that lasted the rest of the day. (The fact that my symptoms can be simultaneously severe and vague drives. me. bonkers.) I’ve used it a few times at around 0.5-1g/day.

      • Pseudonymous Platypus says:

        Thanks for the suggestion! I will talk to my psychiatrist about it next time I see her.

  20. Emile says:

    (Something I’ve brought up before)

    For effective altruists and consequentialists and utilitarians: how does the “impartial care for all humans” part of those philosophies square with our natural tendency to care more for our friends and family?

    For me, this is why I don’t consider myself a consequentialist / utilitarian, unless I take a very loose interpretation whereby I’m a “utilitarian for my utility function”, and my utility function values people “close to me” more strongly, with decreasing circles of loyalty.

    • Cauê says:

      This is entirely compatible with consequentialism, although not with some formulations of utilitarianism.

    • Whatever happened to Anonymous says:

      > “utilitarian for my utility function”
      That is certainly a way to align your morality and your intuitions.

      You can also just claim that you’re a utilitarian and also a bad person.

    • I think a world without friends and family would be a sad one. And importantly, I think pretty much everyone in the world would agree with that. So any ethical theory that tells us to do away with friends and family is to my mind a flawed one. If we were to somehow find the “correct” theory of ethics (whatever that means) and implement it, I suspect it would make some people sad at the expense of others (in fact it’d pretty much have to). But it shouldn’t make *everyone* sad – that would be absurd. In what sense is it even an ethical theory then? Ethics exists for humans, not the other way around. That’s why I (tentatively, anyway; always tentatively) identify as a consequentialist contractualist.

    • blacktrance says:

      There’s no incompatibility with consequentialism or with some formulations of effective altruism (though it is incompatible with utilitarianism). I don’t care about all humans impartially, and I care about those close to me more than I care about strangers. But when I help someone, I want to spend my resources as effectively as I can.

  21. Cauê says:

    So they turned the Sequences into an ebook: http://lesswrong.com/lw/lvb/rationality_from_ai_to_zombies/

    I may be unreasonably excited about this.

    Now… how do I get people to *read* it?

    • Chevalier Mal Fet says:

      …actually, this is really good news to me.

      Last week I downloaded the crude ebook made of Yudkowsky’s blog posts from 2006 to 2010, since the main reason I haven’t read most of the sequences is lack of time/convenience. But the million-word project was mighty daunting and I have made little progress given the size of my reading list.

      I’ll read this one instead.

    • FullMeta_Rationalist says:

      I’ve been waiting to put this on a coffee table one day. Then hopefully one of my friends will casually pick it up while I’m in another room, leaf through it, and ask to borrow it. I suppose I should get around to reading HPMoR too. I decided to abstain until after it was completed. And I’ve never even picked up Terry Pratchett. So much to read … yet so little time. 🙁

      +1

    • aguycalledjohn says:

      Does anyone know how substantial the edits and changes were?

  22. Ian James says:

    I’ve been meaning to post this in an open thread for a while!

    On an open thread a couple months ago, MealSquares heard my complaint about losing $90 on their “atrocious” beta product. Being unreasonably good sports, they sent me some of their latest batch for free. The verdict: it’s pretty decent. The texture is less rubbery and the bitter, pungent taste seems to have vanished. I would call it edible–not quite as edible as unflavored Soylent, but edible nonetheless.

    TL;DR, in my book, MealSquares is a worthy player in the “nutritionally complete foods” space! They may yet prove to be part of the solution to this problem we call “food.”

    P.S. MealSquares people, if you’re looking to make your product taste sweeter without increasing the glycemic index, you should look into what Soylent is doing with isomaltulose in the latest version of their product. Admittedly, it has sharply divided people’s opinions in the Soylent Discourse, but I for one think it’s delicious. And check out the blood glucose graph on page 3 of this PDF.

    • Anonymous says:

      >not quite as edible as unflavored Soylent

      Is that a fair comparison to make? I can down some very nasty liquids but hold solids to a much higher standard.

    • speedwell says:

      I’ve already sent the MealSquares people information on xylitol and erythritol, which are the most readily and cost-effectively obtained, best tolerated by the digestive system (erythritol more than xylitol), and lowest in glycemic impact (erythritol again more than xylitol). The only issue with erythritol is it does not like to dissolve and will crystallize out if you chill something that is especially high in it (I had a very crunchy lemon tart after an overnight refrigerator stay, lol).

      The best profile of availability, future availability, taste, cooking characteristics, and glycemic impact is actually tagatose. Tagatose, which I use heavily in my own kitchen (I’m a mild diabetic controlled by Metformin), has been shown to help improve glycemic control in diabetics. It’s not terribly easy for individuals to obtain though and I had to buy an entire bulk bag when I lived in the US. It lasts a long time though if you don’t oversweeten things or make sweetened drinks.

  23. ejlflop says:

    I’ve found an interesting post that claims to refute the notion of a general factor of intelligence (“g”), and casts doubt on the heritability of g/IQ/similar; written by an esteemed statistician. It involves slightly more statistics than I’m familiar with, so I’d like the opinions of others on it. It seems this article did the rounds in 2007, along with a slightly *ahem* “simplified” version written as an angry Q & A session.

    • Douglas Knight says:

      You may be interested in some google searches, like SSC Shalizi or LW Shalizi.

      Also, Shalizi does not attack heritability in the links you provided. Seriously, go back and read them again. You should be worried about this misreading. But he does attack heritability here.

      • ejlflop says:

        That’s very interesting & useful, thanks. In particular, I found this article, and the following quote from it struck me as particularly enlightening:

        One can compare this to a phenotype like height, which is simply a linear combination of the lengths of a number of different bones, yet at the same time unmistakably represents a unidimensional phenotype on which individuals differ, and which can, among other things, also be a target for natural selection.

        So I think my temporary departure from accepting the validity of “g” has ended, and I have concluded: g accounts for somewhere between 0.3 and 0.8 of the variance in IQ tests, with most of the rest being noise; and it’s about 0.5-0.8 heritable. However, I suppose you could interpret Shalizi as attacking the way the social sciences use statistics in general, which is an interesting (though separate) question.
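
        (Side note: for anyone wondering what a claim like “g accounts for 0.3-0.8 of the variance” cashes out to computationally, here is a minimal, purely illustrative sketch in Python. It is my own toy example, not Shalizi’s analysis or anything from his posts; the number of tests, the sample size, and the 0.7 loading are made-up, and the first principal component is used as a crude stand-in for a proper factor analysis.)

```python
# Toy simulation (illustrative only, not Shalizi's method): scores on eight
# tests that all load on a single latent factor, followed by a check of how
# much of the total variance the first principal component captures.
# The 0.7 loading and the sample size are made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests, loading = 2000, 8, 0.7

g = rng.normal(size=n_people)                          # latent "general factor"
noise = rng.normal(size=(n_people, n_tests))
scores = loading * g[:, None] + np.sqrt(1 - loading**2) * noise

corr = np.corrcoef(scores, rowvar=False)               # 8x8 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]               # eigenvalues, largest first
print("first component's share of variance:", eigvals[0] / eigvals.sum())
# With these made-up parameters the share comes out around 0.5, i.e. inside
# the 0.3-0.8 range quoted above; changing the loading moves it around.
```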

        • Douglas Knight says:

          Why do you even care about g as opposed to IQ tests? (And what do you mean by “validity”?) If you’re a psychometrician designing new tests, then you have to care about the theory. But most people have to live with the tests that exist. All they can do is look at the literature and see that an IQ test correlates with an outcome of interest. Even the psychologists writing those papers must choose from a battery of existing tests. (The main thing Shalizi says is that those psychologists are doing applied research, not basic research.)

          The only way users of the psychological literature use the concept of g is when they pretend that all IQ tests are the same.

    • Airgap says:

      It’s been posted on SSC before, but here’s the counterargument.

      Didn’t see someone already got to it.

  24. MrJoshBear says:

    I’m not up/ready to take the GWWC pledge yet, but I’ve shifted my charitable giving from an organization sponsored by my workplace to the GiveWell-approved GiveDirectly, and increased the size of my regular donations. I’m hoping to up my contributions to the recommended tithe as I’m able.

    Thanks to Scott, Ozy and TheUnitOfCaring for inspiring me! Your writing about Effective Altruism does make a difference!

    • J says:

      That’s awesome! Thanks for making the world so much better! Scott also recently inspired me to donate to Against Malaria.

  25. Captainbooshi says:

    This is a really cool video on YouTube that I think ties right into your earlier post on why sometimes the worst possible examples of something are the ones that explode into national consciousness:
    This Video Will Make You Angry

    I thought it did a great job of concisely explaining why internet fights get so awful, without getting into any specifics that will drag it down into one of those fights.

  26. I’m tired of Civ V and my Civ IV discs are inaccessible at the moment. Someone recommend me a new 4X game.

    • Zorgon says:

      Gimme another 9 months or so 😉

      More seriously, and it’s not quite a 4X but in the same ballpark, I’ve been greatly enjoying Xenonauts recently. It’s a pretty straight X-COM clone with more up-to-date visuals and a somewhat spruced up GUI, but notably, it’s properly “XCOM Hard” (unlike the Firaxis game, which has many other qualities but isn’t nearly so difficult).

      • stillnotking says:

        The Firaxis XCOM on Iron Man + Impossible settings is quite difficult, and less vulnerable to “cheap” strategies (e.g. mind control abuse) than the original game, so I’d say it’s harder. It took me longer to beat the new game on the highest difficulty than it took me to beat the original. I still haven’t beaten it with all the “bad” Second Wave settings enabled.

    • Slow Learner says:

      Anything Paradox, but especially Crusader Kings 2 or Europa Universalis IV. Depth and historical complexity, or you can install a mod and play, e.g., Game of Thrones.

      • Chevalier Mal Fet says:

        Seconding this.

        I’m in the middle of an EUIV Ming China playthrough, because it’s a really easy and fun way to immerse oneself in the complexities of early modern foreign policy dilemmas (for example, why the English and French cannot co-exist, Russia’s desire for defensible frontiers, Germany’s two-front dilemma). Kind of on a Chinese history kick atm.

        If you really like exploration, well, there’s not much of that in EUIV, unless you install the ahistorical Conquest of Paradise expansion, which creates a randomized new world.

        Sins of a Solar Empire is a good space sim that sometimes gets labelled as a 4X. <_<

        • Slow Learner says:

          Ooh, I’m going to have to get Conquest of Paradise. I’d like to be able to play without my geographical knowledge, just to see what difference that makes to e.g. strategies around exploration and colonisation.
          I must say, playing EUIV as a native American nation and trying to Westernise is extremely fun – if you do really well you can be a substantial state surviving right through to 1800, though it’s pretty tough.

      • John Schilling says:

        Crusader Kings 2 needs a better AI, but that’s true of pretty much any 4X game. And I would have preferred a better economics model. But it has consumed more of my time than any other 4X game I have ever played, probably more than any other computer game period. Eventually had to uninstall it, and am reluctant to tempt myself with any of the “sequels”.

        So, take that as a strong recommendation coupled with a warning. CK2 plus a bit of imagination is essentially an immersive “Game of Thrones” experience.

        • Susebron says:

          CK2 + mods can be literally Game of Thrones, as well. It’s got a really impressive modding community, with mods ranging from minor gameplay improvements to historical overhauls to alt-history scenarios to entire fantasy worlds.

    • Susebron says:

      Paradox Interactive has some good ones. In chronological order, their main titles are Crusader Kings II (769-1453), Europa Universalis IV (1444-1831), Victoria II (1836-1936), and Hearts of Iron III (1936-1948). Crusader Kings is more about characters and intrigue (think Game of Thrones), Europa Universalis is more about exploration and nations, Victoria is more about colonialism and politics, and Hearts of Iron is mostly about war. The easiest to get into for newcomers depends on the person, but it’s usually either EU4 or CK2. CK2 and EU4 both have a lot of modding support, as well. If you want me to go more in-depth about any of these then I can do that.

      Edit: Ninja’d by Slow Learner.

      • Chevalier Mal Fet says:

        Also seconding this one.

        I recommend CKII, I’ve found it a bit easier and more intuitive than EUIV.

        • Susebron says:

          I found it easier than EU4 as well, but some people find it harder. You have to pay a lot of attention to traits, vassals, etc. In my first playthrough, I started as William the Conqueror in the second 1066 start, no tutorials or anything. I barely managed to survive the first two generations, with pretty generous savescumming. After that, though, I got lucky and managed to do very well for myself.

        • Zorgon says:

          I’ve been known to say that CKII should be renamed “Phenotypes Against Humanity”.

    • jaimeastorga2000 says:

      Have you played Alpha Centauri yet? The controls are a little primitive by the standards of Civ IV or even Civ III, but the science-fiction atmosphere is downright amazing.

    • hylleddin says:

      Pandora was made as an independent spiritual successor to Alpha Centauri before Beyond Earth came out. I’ve enjoyed it a fair amount. Gameplay is a lot like the civ games.

    • blacktrance says:

      If you’re willing to spend money on a game, you could buy Civ IV on Steam. Alternatively, I recommend Civ III (though you’ve probably already played it).

    • Anonymous says:

      Gal Civ II!!!!

      • I actually have played GalCiv II, and did not particularly care for it. In particular, I found the rock-paper-scissors aspect of ship construction to be frustrating.

        • Anonymous says:

          *nods* That was my least favorite aspect, though war was tedious enough I often avoided it for cultural or tech victory. Hopefully Gal Civ III will avoid that pitfall!

    • Lambert says:

      I’m trying out Aurora, the Dwarf Fortress of 4X games.
      I hear the gameplay is very deep once one gets past the fact it has the ‘graphical charm of an income tax assistant software from the late nineties’ and that the learning curve looks kind of like this:
      https://i.ytimg.com/vi/Moz11eR8rEY/maxresdefault.jpg

    • Dinaroozie says:

      I found Endless Legend to be pretty fun. I haven’t played it all that much because an obscure technical issue stopped me from playing it with the person I primarily planned on playing it with, so I can’t claim it is astonishingly balanced in the long term, but I will say that the games I’ve played so far felt like I was making meaningful decisions much more often than in Civ 5.

      I’ll also add that it runs like arse on integrated graphics (not totally unplayable by any means, but the game isn’t highly optimised for low end systems, at least not in my experience), so factor that in accordingly.

    • Ilya Shpitser says:

      Offworld trading company (from the designer of Civ 4).

      “German board game: the RTS.”

  27. Troy says:

    Last month Scott asked whether falling testosterone led to falling crime. The data he was citing about falling testosterone claimed that it has fallen since 1987. A discussion ensued on whether men’s voices had risen in that time period, as the claim that testosterone has fallen would suggest. I did a bit of research on this question, first asking my choir director for his impression and then doing some Googling. My choir director said that he does not think men’s voices have changed appreciably in the 40-some years he has been directing.

    Googling on this question revealed this paper, Voice Change in Human Biological Development. The author’s timeframe is much longer than Scott’s, looking at the last half-millennium or so. But he makes several claims of interest in that paper which may be relevant to the question of how testosterone has varied in that longer period:

    (1) The median age of voice change during puberty for men has decreased by 4 years over the last 3 centuries.

    (2) Medieval music, e.g., Gregorian chants, was written for male voices that we would recognize as baritones or tenors today. Around the 15th century a gradual shift began in which bass parts became more common.

    (2*) There is a parallel change of more parts being written for female voices, especially sopranos.

    (3) Prior musicologists attributed (2) and (2*) above to cultural demand, but a more plausible explanation is vocal supply — i.e., men’s voices have gotten deeper over time and women’s have gotten higher. One explanation of this is nutritional changes in the West. (Bass and soprano parts were added around the same time that consumption of meat and butter increased.) More recent nutritional changes and health improvements may similarly explain (1).

    (4) Anecdotal reports from voice teachers suggest that since the 1960s there are an increasing number of deeper male and female voices. (p. 251)

    (5) Studies of testosterone levels in male singers find higher levels in basses and baritones than tenors. (p. 252)

    • Wulfrickson says:

      I attended a lecture on Bach’s church music a while back. The lecturer mentioned that Bach’s church choir in Leipzig had boys as old as 18 years with unchanged voices, which allowed him to write music with ferociously complicated soprano parts that younger children would have difficulty mastering. He attributed this to better modern nutrition and added that some choirboys today follow special diets in a (mostly vain) effort to delay voice changes.

      Two other anecdotal scraps: my old voice teacher mentioned to me once that basses tended to be tall and thin, and tenors tended to be short and stocky (I am very much of the former type), though I have no idea how right he actually is; and men on /r/NoFap often mention deeper voices as a consequence of giving up masturbation (which temporarily increases testosterone levels).

      • Troy says:

        Fascinating! (There’s a subreddit for everything, isn’t there?) Oddly enough, the subject of masturbation comes up in the linked paper too:

        Toward the end of the eighteenth century, when many boys experienced an earlier mutation of their voices than their elders considered normal, anxious educators found a deep voice in a teenager disturbing. A German educator was convinced that early voice change was a sure sign of masturbation: “In cities, where choirs exist, one should especially watch out for boys who sing soprano parts. If their treble voice darkens before they are seventeen, even though they observe the dietetic rules of a soprano singer, it is obvious what is the matter with them.”

    • Another factor that could account for change in voice quality across time: the rise and fall of tobacco smoking during the past hundred years.

  28. zz says:

    How to deal with performance anxiety?

    (I’m asking in the context of musical performance, but if anyone has any particularly good methods of dealing with sexual performance anxiety that generalize, they would be well-received.)

    —-

    (Optional) Fuller problem statement: I’m 100% nonreligious, but there’s a nearby Catholic church I have a relationship with that sometimes hires me to play cello for special services. They grossly overpay me ($50–$75 for 3–4 minutes of Bach or Apocalyptica) and are always ridiculously nice to me. These three factors (being very much in the outgroup, nevertheless being overpaid, nevertheless being treated like a member of the ingroup) contribute to a good deal of performance anxiety and, as it happens, it’s basically impossible to execute ultrafine motor control when your body is flooding itself with adrenaline in a fight-or-flight response, and so I have weak tone, wobbly vibrato, and misallocated concentration, all meaning I play poorly.

    I want to play well! Partly because I’m being paid too much and want to feel like I’m not completely fleecing the Catholics, partly because they’re really nice to me and I want to feel like I’m not fleecing people who are consistently nice to me, and mostly because, even though I’m not remotely good enough to play cello professionally, I do things as professionally as possible and that includes playing as well as I can, which I’m yet to do. Right now, this means dealing with performance anxiety—I’m at the point where practicing more won’t really help because I’m limited by the performance anxiety.

    Also, the church I play in has better acoustics than anywhere else I have a reasonable chance of playing at, and I want to work that as much as possible before I move out to California and can’t play there anymore.

    • Deiseach says:

      Okay, I don’t know if this will help, but:

      (a) You are not being grossly overpaid. Maybe you only play three minutes within a service, but you have to get there, and I’m sure transporting a cello is a hassle; you have to set up; you don’t bolt out the door once you’ve played your three minutes so you probably have to sit through (e.g.) the rest of the Mass; believe me, I’d rather listen to three minutes of wobbly cello than what passes for modern hymns and I’m sure I’m not the only one out there – all in all, as the Scripture says “The labourer is worthy of his hire”, so you are not fleecing my co-religionists.
      (b) We believe in alms-giving as a worthy deed. Take our money, please! You are doing us a service by taking it! 🙂
      (c) If they’re regularly asking you to play there, you’re entitled to be there even if you’re not attending as a member of the congregation. The church is there for you to go into, so you aren’t intruding or anything else. Maybe sometime go there when it’s quiet and you’re not there playing for a service, sit in the pews, get used to the place as a place you’re familiar with?

      Good luck!

    • onyomi says:

      I have found the following mindset helps me with all kinds of “performance anxiety,” be it how well I do at a presentation, whether or not I have insomnia on a particular night, etc.

      Basically assume that the outcome is 100% determined by the preparation and that when the time comes for the event there will be no room for you to do any better or worse than your preparation would predict.

      For example, when having insomnia in the past, I would try all kinds of mental gymnastics, counting sheep, etc. as methods to actively “try” to go to sleep. I would beat myself up thinking “if only I’d stop worrying about it right now I could go to sleep,” and so on. Then one day I realized that whether or not I slept well on any given night was pretty much entirely determined by what I did during the day: how much exercise I got, how much sun, how much caffeine and when, etc. Once I started thinking in this way I was able to trace specific incidences of insomnia to specific behaviors, like “maybe I shouldn’t have eaten that big piece of flourless chocolate cake two hours before bed,” etc. and therefore start doing the things that I knew would result in a good night’s sleep.

      Added bonus: when I do have insomnia I don’t suffer more than necessary through a bunch of mental gymnastics, other than, possibly, trying to figure out what I may have done during the day to cause it. I just accept that it may take me a long time to go to sleep and act accordingly, whether that means catching up on reading, lying in bed and meditating, etc.

      I think the same attitude applies to preparing for a piano recital or pretty much anything else. Just assume that your level of performance is predetermined by your preparation (also assume that you will not perform as well as you did in practice, because nervousness on the day of performance is an inevitable aspect of the performance itself; hence a need to overprepare).

      As with many things that touch on free will, one runs into a paradox, which is that by going into the performance acting as if you have no control of the outcome you may actually be able to improve the outcome, but that’s not my concern here. My concern is just the mindset I’ve found more constructive.

    • Airgap says:

      Take your cello to bed with you, so you can use it as a marital aid in case of severe performance anxiety. The knowledge that you have a backup plan will boost your confidence.

    • Scott Alexander says:

      If this is a big problem for you and nothing that you’ve tried thus far has helped at all, ask your doctor about beta-blockers.

      • zz says:

        I actually have 100g of phenibut, but was looking for a non-drug solution. Also, I’m sufficiently borderline hypotensive that my physician won’t write prescriptions for drugs that have a secondary effect of lowering blood pressure.

        §9 was very helpful, however. Thanks much!

  29. Zorgon says:

    Can I just take the opportunity to register my utter astonishment at the very concept of people holding wrap parties – actual, IRL wrap parties – for a fanfiction work?

    I intend no negativity by this, by the way. I’m a big HPMoR fan. It’s just… deeply confusing for me. Maybe I’m just getting old.

  30. Bric n' Brac says:

    This is a slightly out-of-left-field problem, but the collection of knowledge floating around this comments section is eclectic enough to give me hope that someone can help out.

    I’ve had, for years, chronic sinus/nasal congestion; I’ve tried neti pots/saline rinses, xylitol sprays, fluticasone and other various prescription sprays, a humidifier, various decongestants, was checked for nasal polyps, went through a phase where I was obsessive about allergens, etc. None of this has produced much lasting relief (although there are days when the congestion is naturally a bit better). I’m interested in hearing if anyone else has ever successfully resolved a similar issue. A relative of mine who is a ‘naturopath’ recommended I empty a capsule of probiotics into a saline rinse, and although there’s at least one study showing that the mix of nasal bacteria affects congestion, I’d like to consider injecting my sinus cavity with bacteria to be a last resort.

    • Anonymous says:

      I’d be curious as to the *cause* of the chronic congestion. (Or, maybe it’s more of a swelling/inflammation type of issue?)

    • onyomi says:

      Don’t spray bacteria up your nose, add parasites to your intestine:

      http://www.foodsmatter.com/asthma_respiratory_conditions/rhinitis/articles/nasal_allergies_nuked_by_worms.html

      Seriously, it may sound like I’m going from bad to worse, but I have heard of people who died of brain inflammation due to snorting an amoeba up their nose with unclean neti water.

    • Teresa says:

      I know you said you investigated allergens–have you investigated food sensitivities? I have chronic sinusitis that is made much worse by gluten. I know other people who have had similar experiences with dairy products. I promise I’m not trying to mindlessly push a fad diet, but it might be worth trying an elimination diet and seeing if there is a particular food that makes your congestion worse.

  31. Troy says:

    (In best TV episode trailer voice)
    Last time on Arguments Against Consequentialism:

    Let “expected consequentialism” – henceforth, just consequentialism – be the view that you ought to maximize the expected value of the world. (This is a formalization of the idea that you ought to bring about the best outcomes that you can. It focuses on expected value because we are not omniscient. An “actual consequences” view which says that you ought to do what will in fact maximize value is, I take it, useless as a decision procedure.) In the last Open Thread, I presented a version of the infinitarian challenge to consequentialism. As Sniffnoy pointed out, that challenge really targets aggregative consequentialism in particular, which says that the total value of the world is some kind of aggregative function (sum, average, etc.) of how much local value there is at each of the “value-bearing parts” of the world. So, if aggregative consequentialism is true, then what I ought to do is maximize expected value, where this is understood as some kind of aggregative function of local value.

    The basic argument went like this (restructured somewhat for clarity):

    (1) If aggregative consequentialism is true, then for any act A, there is a non-zero probability that the world will be either infinitely valuable or infinitely disvaluable given that you A.
    (2) If there is a non-zero probability that the world is infinitely (dis)valuable given that you A, then the expected value of A-ing is either infinite, undefined, or negatively infinite.
    (3) So, if aggregative consequentialism is true, then the expected value of any action is either infinite, undefined, or negatively infinite. [from (1), (2)]
    (4) If for all acts A, the expected value of A-ing is either infinite, undefined, or negatively infinite, then a consequentialist decision theory cannot guide our actions.
    (5) If aggregative consequentialism is true, it cannot guide our actions. [from (3), (4)]
    (6) If aggregative consequentialism were true, it could guide our actions.
    (7) Aggregative consequentialism is false. [from (5), (6)]

    To aid readability and structure discussion for anyone who wants to continue discussing this argument, I’m going to break down discussion of the premises into two posts which I’ll post as replies to this one: the first on (1), (2), and (6), and the second on (4).

    • Troy says:

      (6) is motivated by Ought implies Can kinds of considerations. I don’t think it was challenged in the subsequent discussion.

      (1) is motivated by the following kinds of considerations: There is a non-zero probability that the universe contains an infinite number of people. There is a non-zero probability that the universe will continue indefinitely into the future, with more people coming into existence. There is a non-zero probability that people will continue to exist in an afterlife in which they will receive positive and/or negative utility. These all remain true even if we take the probability to be conditional on some act available to you.

      (2) follows from the mathematics of expected value. The expected value of an act A is the sum of the values of the possible states of the world given that you A, multiplied by their probabilities given that you A. That is, EV(A) = P(S1|A)V(S1) + P(S2|A)V(S2) + … . Suppose that one of the possible states of the world, say S1, has infinite value, and that P(S1|A) > 0. Then P(S1|A)V(S1) is infinite. If all the other terms in our sum are either finite or positively infinite, then our sum is infinite. If some are negatively infinite, our sum is undefined. So, it follows from “there is a non-zero probability that the world is infinitely (dis)valuable” that the expected value of A-ing is either infinite, negatively infinite, or undefined.
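
      (A tiny numerical sanity check of that point, my own illustration rather than anything from the original thread, with IEEE floats standing in for the extended reals; the probabilities and values are arbitrary made-up numbers.)

```python
# Illustration of premise (2): EV(A) = sum over states of P(S|A) * V(S).
# One state with infinite value and non-zero probability makes the whole sum
# infinite; adding a negatively infinite state makes the sum undefined (nan).
# Probabilities and values here are arbitrary made-up numbers.
import math

def expected_value(probs, values):
    return sum(p * v for p, v in zip(probs, values))

inf = math.inf
print(expected_value([0.999, 0.001], [10.0, inf]))           # inf
print(expected_value([0.5, 0.25, 0.25], [10.0, inf, -inf]))  # nan (undefined)
```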

      (2), then, seems fairly secure. Some people did object that we could restrict our evaluative considerations to states of affairs within a certain time window, or build time preference into our utility function, or something like this. But I take these moves to be either abandonments of consequentialism or at serious odds with classical motivations for consequentialism. Moreover, there are possible states of affairs that these hacks won’t solve, e.g., the possibility of infinite value at a time or a sufficiently quickly increasing series of values at times – that is, if you place half as much value on successive states of affairs at times {t1, t2, t3, etc.}, then we can make the values at each of those times {1, 2, 4, etc.}, so that the discounted total still diverges.
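
      As a quick check on the growth-beats-discount point (numbers invented for illustration): with a per-period discount factor of 1/2 and values 1, 2, 4, … , every discounted term equals 1, so the discounted total grows without bound however far out you run it.

# Discount factor of 1/2 per period against values doubling each period: each term is (1/2)**t * 2**t = 1,
# so the discounted total just counts the periods and diverges as the horizon grows.
def discounted_total(periods, discount=0.5, growth=2.0):
    return sum((discount ** t) * (growth ** t) for t in range(periods))

print(discounted_total(10), discounted_total(100), discounted_total(1000))  # 10.0 100.0 1000.0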

      Given the above, the smart money seems to be on denying (4), which is the only remaining premise.

    • Troy says:

      People advanced several lines of attack on (4). Here were some of the main ones:

      (i) Some infinitely valuable worlds are more valuable than others, and so a consequentialist decision theory will guide us to choose those worlds.
      (ii) We can ignore infinitely (dis)valuable states of the world and choose actions that have the greatest expected value given that the world is finite.
      (iii) We can somehow “set aside” infinitely (dis)valuable parts of the world and look only at the change in value brought about by our actions, which will presumably be finite.
      (iv) There is some fancy mathematical fix to this problem, e.g., involving hyperreals.

      I objected to (i) that even granted that we could order infinitely valuable worlds (which is a non-trivial task), this only lets us maximize value under perfect knowledge. For example, suppose that we have three possible outcomes, O1, O2, and O3. All have infinite value, but O1 > O2 > O3. We also have two actions, A1 and A2. Now suppose that

      P(O1|A1) = .5,
      P(O2|A1) = 0,
      P(O3|A1) = .5,

      and

      P(O1|A2) = 0,
      P(O2|A2) = 1,
      P(O3|A2) = 0.

      A consequentialist decision theory can’t tell us which action to perform merely on the basis of the fact that O1 > O2 > O3. We need to know how much better O1 is than O2, and O2 than O3, in order to calculate the expected values of A1 and A2.
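
      To make that concrete with finite stand-in numbers (my own, purely illustrative): two value assignments that both respect O1 > O2 > O3 can disagree about which act has the higher expected value, so the ordering alone cannot settle the choice.

# Probabilities from the example above; two value assignments, both ordered O1 > O2 > O3.
def ev(probs, values):
    return sum(p * v for p, v in zip(probs, values))

p_a1 = [0.5, 0.0, 0.5]  # P(O1|A1), P(O2|A1), P(O3|A1)
p_a2 = [0.0, 1.0, 0.0]  # P(O1|A2), P(O2|A2), P(O3|A2)

close_to_top = [100, 99, 1]    # O2 nearly as good as O1: EV(A1) = 50.5 < EV(A2) = 99.0, so choose A2
close_to_bottom = [100, 2, 1]  # O2 barely better than O3: EV(A1) = 50.5 > EV(A2) = 2.0, so choose A1

for values in (close_to_top, close_to_bottom):
    print(ev(p_a1, values), ev(p_a2, values))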

      (ii) and (iii) are more promising lines of attack. (ii) is a non-ideal response to the extent that you think it likely that the world is in fact infinite; e.g., if your actions are only ethically important if there aren’t an infinite number of people in the world, this is small solace if you think the probability that there are an infinite number of people in the world is very high. And (iii) requires some way of measuring the change brought about by our actions, a non-trivial task.

      My main response to (ii) and (iii), however, is to grant for the sake of argument that they solve the original problem, but maintain that there is a closely related problem that they do not solve, which is the possibility of our actions changing the amount of value in the world by an infinite amount. If there are any actions that have the potential to bring about infinite value, then (iii) implies that you ought to do those actions. But this might be implausible – e.g., it seems that you ought not spend all your time trying to bring about some super unlikely chain reaction that will continue increasing utility forever. More seriously, it’s arguable that all actions have some potential for infinite payoff. For instance, consider hypotheses of the form,

      H1 = God will bring about infinite value if I do A1,
      H2 = God will bring about infinite value if I do A2,
      etc.

      for all available actions A1-An. If these all have non-zero probabilities, then the expected change in value for any of your actions is either infinite, undefined, or infinitely negative.

      Perhaps one of the fancy mathematical fixes alluded to in (iv) can solve this problem as well as the original problem. I think this is the most promising route to take in response to the argument.

      • Mark says:

        My main response to (ii) and (iii), however, is to grant for the sake of argument that they solve the original problem, but maintain that there is a closely related problem that they do not solve, which is the possibility of our actions changing the amount of value in the world by an infinite amount. If there are any actions that have the potential to bring about infinite value, then (iii) implies that you ought to do those actions. But this might be implausible – e.g., it seems that you ought not spend all your time trying to bring about some super unlikely chain reaction that will continue increasing utility forever.

        People who support (ii), i.e. me, will deny that anything has the potential to bring about infinite value. Note that this is different from bringing about some infinite thing which is also very valuable. The mathematics could work out like this: a year of bliss is worth 1 utility point, two years is worth 1.5, four years worth 1.75, and so forth; and an infinitely long life of bliss is worth 2 points. If this is how your utility function is set up, then there’s no sense in asking whether there’s some chance that it’s “wrong.”
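
        One utility function that matches these numbers (my own reconstruction, not necessarily the one Mark has in mind) is u(years) = 2 - 1/years, which is bounded above by 2:

# Bounded utility of aggregate bliss-years: u(1) = 1, u(2) = 1.5, u(4) = 1.75,
# and u(years) approaches but never reaches 2, so even a guaranteed infinitely
# long life of bliss is worth only the bound, 2.
def bliss_utility(years):
    return 2.0 - 1.0 / years

for y in (1, 2, 4, 1_000_000):
    print(y, bliss_utility(y))  # 1.0, 1.5, 1.75, 1.999999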

        • Irrelevant says:

          Your math suggests we figure out how to Bliss everyone to death in order to generate QALY impulse functions.

          • Irrelevant says:

            I’m saying you’ve proposed Logan’s Run. If you give diminishing utility returns to lifespan, then the utility-optimizing strategy involves killing people and replacing them with younger, higher-weighted lives.

          • Mark says:

            I’m not suggesting diminishing returns to lifespan per se, but diminishing returns to aggregate bliss-years (or QALYs or whatever).

        • Troy says:

          I would classify this response under (iv), fancy mathematical solutions: you’re proposing that we have a bounded utility function. However, (ii) and (iv) may overlap here.

          I think a bounded utility function is a promising option to pursue. Here’s one thing that’s not clear to me from your specific proposal, though: what do we do with multiple people? If we just add up the bliss from each person, then we’d get an infinite amount of expected bliss given an infinite amount of expected people.

          • Mark says:

            As I mentioned in a response above, the diminishing returns would apply to aggregate bliss-years. There’s no difference in this scheme between an infinite number of people who live one year and one person who lives an infinite number of years. Nor would an action X giving person A an extra bliss-year be of any additional value given X already bestows upon someone else infinite bliss-years. That’s counterintuitive, but I don’t yet see how it would engender any more realistic implausibilities.

          • Troy says:

            I see — I think that makes sense.

            I do not see any immediate way to amend the infinitarian challenge to account for your solution. Congratulations! I will have to think more about the extent to which this picture of utility is independently plausible/well-motivated.

          • FrogOfWar says:

            I’m not seeing how Mark’s proposal solves the problem.

            If I understand it correctly, the proposal involves making aggregate utility a bounded function of years of individual utility that rises asymptotically toward, say, 2 as the number of such good years increases to infinity.

            This would mean that once you already have infinitely many such years, any further change in individual utility would have no effect on the aggregate. So our actions would still be ethically indifferent in an infinite universe. (We haven’t said how negative utilities would work, but it doesn’t seem like they’d help.)

            Have I misunderstood something?

            edit: Oh, this response assumes option (ii). So the expected utilities still make a difference, even though we have problems if the world is in fact infinite. I guess the question is then how much of a problem that is.

            As indicated last week, I don’t think the expected utility formulation of the problem is that much stronger than simply arguing from the implausibility of ethical significance as a matter of fact being contingent on finiteness. But I should think more about this.

          • Troy says:

            This would mean that once you already have infinitely many such years, any further change in individual utility would have no effect on the aggregate. So our actions would still be ethically indifferent in an infinite universe. (We haven’t said how negative utilities would work, but it doesn’t seem like they’d help.)

            Yes, this seems right — if the universe is infinitely old and has had conscious beings forever, or there are an infinite number of people in alternate universes, or anything like that, then there will already be maximal utility (and/or disutility). So the proposal only works if we’re combining it with the proposal to act as if the world is finite, and take utilities conditional on that supposition.

            I’m inclined to agree with you that this is a bad consequence: it doesn’t seem like my actions should suddenly cease to matter if there turn out to be an infinite number of alternate universes.

          • Mark says:

            It’s a bad consequence, but it’s not too surprising that intuitive ethical principles break down when dealing with infinities, just like intuitive arithmetic principles break down when dealing with infinities (viz. Hilbert’s hotel). But if I were really serious about defending this position, I’d try to combine the bounded utility defense with one of the other defenses: that one’s expected utility calculations should only look at utility differences one’s actions cause. This immediately dispenses with the problem of maximum utility in the past or in alternate universes one has no access to.

      • Peter says:

        Fancy fixes; maybe you could take an idea from analysis and do something with limits. Let’s think about infinite duration. Suppose I consider a time window of x minutes. I can calculate the expected utility for A1 (call it E(A1)), and E(A2), and E(A1)/E(A2) – if this ratio is above 1 (and both are positive, shouldn’t be _too_ hard to fix to allow negatives) then there’s a prima facie case for preferring A1. Then let x increase. If E(A1)/E(A2) tends to some above-1 number as x tends to infinity, then we can prefer A1.
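
        A toy version of this window-then-limit idea (numbers invented for illustration): suppose A1 pays a one-off cost of 100 now but yields 3 utils per period forever, while A2 yields 2 utils per period forever. Both totals diverge, but the windowed ratio E(A1)/E(A2) settles toward 1.5 > 1, so the rule would prefer A1.

# Windowed totals over x periods; the ratio tends to 1.5 as the window grows.
def total_a1(x):
    return 3 * x - 100  # one-off cost of 100, then 3 utils per period

def total_a2(x):
    return 2 * x        # 2 utils per period

for x in (10, 100, 10_000, 1_000_000):
    print(x, total_a1(x) / total_a2(x))  # -3.5, 1.0, 1.495, 1.49995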

        Instead of a sharp cutoff, we could imagine applying an exponential discount to the future, and then letting our discount rate tend to … whatever it is that gives no discounting. We could have a social distance discount or a physical distance discount or spacetime interval discount or whatever to cope with infinite spatial extent, and again gradually remove that discount.

        Of course, there’s no treatment of infinitely concentrated value, some infinity at a specific point in spacetime. It’s less clear what that corresponds to, though.

        Status: highly speculative, something I just thought of. Don’t hold me to it! OTOH I had been thinking before about utilitarianism as being a limiting case of social-temporal discounting…

        • Troy says:

          I’ll have to think more about this proposal. Offhand here’s an objection: some possible successive amounts of value will not have a limit. For instance, consider the Erratic God Hypothesis: given that you perform A1, God will give you the following successive utilities in the afterlife:

          No Limit: {1, -2, 3, -4, 5, -6, …}

          This sequence has no limit.

          If No Limit has equal probabilities on A1 and A2, then the limit of their ratios will be 1. So to address your proposal, we need a case where No Limit is more probable given A1 than A2. Perhaps this is so if we let A1 be pleading with God to bring about No Limit for you. Or perhaps we can change God to a very powerful machine that you program to bring about No Limit.
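
          A quick look at why that sequence defeats a window-then-limit rule (my own illustration): its running totals swing ever more widely instead of settling down, so there is no limit to take.

# Running totals of the Erratic God payoffs {1, -2, 3, -4, 5, -6, ...} oscillate without converging.
def running_total(n):
    return sum(k if k % 2 == 1 else -k for k in range(1, n + 1))

print([running_total(n) for n in range(1, 11)])  # [1, -1, 2, -2, 3, -3, 4, -4, 5, -5]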

          • Peter says:

            Well, Very Powerful Machines have the heat death of the universe to contend with. I *think* if you use my variant with exponential discounting, then you can avoid the no-limit problem, but that’s not so relevant, as you could make the things in your sequence grow exponentially too.

            Possibly you could also demonstrate that some alternative is strictly preferable to the No Limit sequence, and you should take that. Of course, a sufficiently motivated god would find ways around that.

            Other question: could it ever make sense to talk about an infinitely delayed Big Reward?

          • Troy says:

            Other question: could it ever make sense to talk about an infinitely delayed Big Reward?

            This is basically Pollock’s EverBetter Wine case: you have a bottle of wine that gets better with age, without limit. Supposing that you are immortal, when should you drink the wine? If you drink at any time t, you would have enjoyed it more if you had waited until t+1.

      • Gbdub says:

        I don’t think ordering the value of worlds is all that hard. Simplest counter argument to yours: do you actually believe that local value is currently maximized? E.g. there is no physically possible configuration in which your life utility would be higher? If you actually believe in infinity, then no, you are not the maximum utility version of you. Besides, we can clearly see examples of negative utility around us.

        This allows for utilitarianism, even if you can only predict local effects:

        Gbdub1) The best possible universe includes the best possible version of me and my locality (basically, as far out as I can predict).

        Gbdub2) Outside where I can predict, by your own definition I have no inkling of whether my actions are infinitely positive, infinitely negative, or neutral (if I have an inkling, then it falls within my predictable sphere).

        Gbdub3) Doing nothing is still an action that has the potential for infinite effect outside my predictable sphere (there is no way to fully isolate myself from the universe), so I can’t use “do nothing” as an out.

        Gbdub4) Therefore, I should do the things most likely to increase net utility within my observable/predictable locality, because no best universe contains the case where I don’t maximize my immediate local utility.

        You’ll note that this therefore collapses to local utility maximization, because any universe that doesn’t include maximum local utility has lower value than one that does, even if both values are infinite.

        • Troy says:

          I’m not sure I follow all your reasoning, but two of your premises seem potentially problematic to me:

          Gbdub1) The best possible universe includes the best possible version of me and my locality (basically, as far out as I can predict).

          Why think this? Why not think that maximal local value will lead to its depletion elsewhere — i.e., that value is a scarce resource in some way?

          This is perhaps more plausible for temporal than spatial distance. Many religions have held that there will be greater future rewards for people who make sacrifices now, for instance.

          Gbdub2) Outside where I can predict, by your own definition I have no inkling of whether my actions are infinitely positive, infinitely negative, or neutral (if I have an inkling, then it falls within my predictable sphere).

          Why think that there’s a cut-off beyond which you can’t predict at all? Psychologically I grant that there will be, but epistemically it seems that your evidence is going to, at best, dwindle out to infinity in what consequences it predicts, not predict consequences for 1000 years and then stop.

          • Gbdub says:

            1) If utility is a scarce resource, then you’re not really arguing about infinity anymore, are you? Nothing can be infinitely good or bad, because all you’re doing is moving around a finite resource!

            If you’re really talking about infinity, then there’s some configuration that maximizes both local and extra local utility. So the best subset of possible futures must include maximum local utility – do things that produce that!

            2) Your entire premise rests on the possibility that my actions might have unpredictable infinitely positive or negative effects. If I can predict everywhere, then the problem goes away and consequentialism is super easy: just do the things that cause infinite goodness and don’t do things that cause infinite badness.

            Honestly I feel like you’re shifting the goalposts enough that I’m not quite sure what you’re arguing for anymore. You’re taking your original premise and loading it with a bunch of caveats to avoid counter arguments. Sure, it may be possible to construct a possible infinity that breaks consequentialism. But there are also infinite possible infinities that don’t, and you’ve done nothing to demonstrate that your particular infinity is more likely!

          • Troy says:

            1) It can’t be scarce in the sense of finite over all of spacetime. But it could be scarce in the sense that the universe with the most total utility is one in which adding any more utility to region R1 would take utility away from region R2. (Maybe ‘scarce’ is a funny word for this, but I take it the idea is clear enough either way.) Indeed, on crude hedonistic versions of utilitarianism, it seems that utility is clearly scarce in this sense. If there’s only enough food this year for you or me to eat, and we’ll each be happier iff we eat it, then there’s no way to maximize both our utilities in the short term (i.e., this year). If we maximize utility for me we don’t maximize it for you. This is compatible, of course, with our being able in the long-term to produce more food and perhaps make everyone happier next year.

            Here the offsets are foreseeable. But why should we expect things to be different when we move from short term to longer term, and the offsets become beyond our ken?

            2) My suggestion is that you can predict everywhere in the sense that you can assign probabilities to all possible future outcomes. You still don’t know for certain which outcomes will come about. So you don’t know which things will cause infinite goodness, and so you have no way to choose between, say, actions A1-A3 in this post.

    • houseboatonstyx says:

      I hope you will also put this whole series up at your own blog, for easier reference.

    • With regard to (2), the question is not the utility of choice A; it is the value of choice A relative to choice B. I would argue that for almost all “sane” utility functions there is no physically realizable choice which has infinite value relative to another choice.

      Unless an agent or term has an infinite value per unit time, the second law of thermodynamics would limit the possible impact of any choice that could be made.

      Even the choice between tiling the universe with optimal happiness and tiling the universe with optimal unhappiness would have a finite difference, since both of these would decay into the same thing at some point.

      • Troy says:

        How confident are you that the laws of nature are such that our choices could never make an infinite difference? Credence 1?

        Perhaps our best physics are wrong. Perhaps there’s an afterlife in which our choices will continue to have implications. So long as there’s some epistemic possibility that our choices could make an infinite difference, you have to take that into account in your expected utility calculations.

  32. Troy says:

    Installment II of ? in Arguments Against Consequentialism: Counterexamples

    For a recap of Part I of this series, see the post below. Today I’m going to continue with a second argument against consequentialism.

    Consequentialist moral theories hold that we ought to maximize the good. As is well-known, consequentialist theories are subject to numerous apparent counterexamples. In general, such a counterexample takes the following form:

    (1) In case C, we ought not perform action A.
    (2) But according to consequentialism, we ought to perform action A.
    (3) Therefore, consequentialism is false.

    Apparent counterexamples are not necessarily fatal to a philosophical theory. It is open to the consequentialist, for instance, to deny either (1) or (2) of the above argument for a given case. One common strategy for denying (2) is to argue that contrary to first appearances, performing A would not have the best consequences – we just have to think more carefully about what all of the likely consequences of A and ~A would be. Alternatively, the consequentialist could deny (1). Then he owes us some error theory for why we are wrongly disposed to judge that (1).

    In the case of consequentialism, however, the counterexamples that have been proposed are so numerous and varied that it is implausible that these strategies will always succeed. Consider the following five counterexamples:

    (i) A doctor can save five people’s lives by killing one of her patients and harvesting his organs. No one but her will ever know. Nevertheless, she ought not do so.
    (ii) A chemist can take a job manufacturing nuclear weapons which will be used for evil ends. If he does so, he will be able to provide for his wife and family. If he does not do so, someone else will take the job anyway. Nevertheless, he ought not do so.
    (iii) An anthropologist is studying a tribe that does not want to be photographed. A member of that tribe agrees to assist him in his work on the condition that the anthropologist never photograph him; the anthropologist promises that he will not. One day, the anthropologist’s assistant is asleep. A photograph of him would significantly contribute to anthropological knowledge, and the assistant need never know. Nevertheless, the anthropologist ought not photograph his assistant.
    (iv) There is (we’re stipulating) good empirical evidence that the death penalty is a highly effective deterrent: each execution can be expected to save 18 lives. You are aware of this evidence, and so reasonably believe that were you to kill your wife you would be executed and the net gain to society would be large. Nevertheless, you ought not do so.
    (v) There is a utility monster who gets orders of magnitude more pleasure, happiness, or whatever you think is good from each unit of a resource than anyone else. So, sacrificing everything for this monster would maximize the good overall. Nevertheless, we ought not do so.

    These counterexamples are not just variations on a theme. They describe very different scenarios and very different actions. As such, it is unlikely that the same explanation for why one of them fails will work for the other ones. For instance, non-utilitarian forms of consequentialism (depending on how they are developed) might be able to plausibly deny that in (v), sacrificing to the Utility Monster maximizes the good. But it is unlikely that their grounds for doing so will similarly show that the actions specified in (i)-(iv) do not involve maximizing the good.

    Similarly, an error theory about our intuitions in the Organ Harvesting case is unlikely to also be applicable to the very different Nuclear Weapons or Utility Monster cases. Very general error theories, like “performing similar actions in similar circumstances would not maximize goodness,” could perhaps be applied to all, but that an explanation this general and vague correctly applies to one case gives us little reason to think it correctly applies to another. Assuming it’s true that usually we don’t maximize goodness by harvesting organs and that this explains our intuitions in case (i), this doesn’t make it particularly more likely that usually we don’t maximize goodness by giving more to people with more capacities for pleasure, happiness, etc., and that this explains our intuitions in case (v).

    So, inasmuch as the error theories needed to defuse these examples are probabilistically independent, the probability that they are all true is roughly equal to the product of the probabilities that any individual one is true. If, for instance, the probability that any individual error theory is true is .8, then the probability that all 5 are true is only around .33.
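
    As a sanity check on that arithmetic (a sketch of my own, assuming full independence and a common probability of .8 for each error theory): the joint probability falls off geometrically with the number of cases that all have to be explained away.

# Probability that n independent error theories, each with probability 0.8, are all true.
for n in (1, 3, 5, 10):
    print(n, round(0.8 ** n, 3))  # 0.8, 0.512, 0.328, 0.107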

    It may well be that particular counterexamples to consequentialism fail. I find some consequentialist error theories more plausible than others. But inasmuch as there is no general explanation consequentialists can offer that accounts for all the kinds of counterexamples that have been proposed, it is unlikely that all counterexamples fail.

    • Slow Learner says:

      All of the examples you give break down, except for the Utility Monster.
      i) If a doctor can actually save five lives by killing someone to harvest their organs, they ought to do so. Of course in reality that doesn’t work, but in the Least Convenient Possible World that’s exactly what she should do.
      ii) The chemist ought to take the job if the replacement candidate would be as competent as him in every way and there is no other way to provide for his family (can’t his wife get a job?). If other candidates are less competent then he will impede the Evil Nuclear Weapons Project by declining the post, at which point we’re balancing the utility of his family’s comfort against the small contribution to the evil of the ENWP he is making over the next candidate.
      iii) If nobody will find out about the photograph being taken, it will not harm anyone, etc – he damn well ought to take the photo. The rule “don’t take the photo even when he’s asleep” survives because that is really covering for the risk that the anthropologist is incorrect (e.g. there is another tribe member in the bushes watching, his assistant isn’t really asleep) and the harm that would result.
      iv) If we live in the Least Convenient Possible World, where killing someone is a benefit because your execution will stave off more murders, thus gaining more lives and QALYs than are lost by you killing someone and being executed yourself…then yes you ought to kill someone. Don’t be an idiot and kill your wife though, find a gang leader or serial rapist or something, for the greater benefit.
      All of your “Nevertheless we ought not to do so” presume inconsistent consequentialism. I defeat your counterexamples by being consistent – I might have intuitions against some of the actions described, but if my confidence in the utility gain, combined with the scale of the utility gain, stack up I will over-ride my intuition.

      • Troy says:

        All of your “Nevertheless we ought not to do so” presume inconsistent consequentialism.

        You may replace those quotes with “it seems we ought not do so,” if you like; then the question is how to explain why it seems to us that we ought not perform the actions described.

        I defeat your counterexamples by being consistent – I might have intuitions against some of the actions described, but if my confidence in the utility gain, combined with the scale of the utility gain, stack up I will over-ride my intuition.

        So long as you treat your intuitions about cases as having some evidential weight, however, the combination of intuitions about such apparently disparate cases ought to lead you to become less confident in your consequentialism.

        My argument isn’t that consequentialists are deductively inconsistent — you can bite as many bullets as you want. My argument is that there’s a lot of intuitional data that they can’t easily explain away. Compare: we get a bunch of experimental results apparently inconsistent with our best theory of physics. One or two inconsistent results we can plausibly blame on bad experimental set-up or the falsehood of some auxiliary assumption used to derive the predictions. Five inconsistent results done in very different settings with very different equipment is harder to explain away.

        • Slow Learner says:

          Why would I expect my intuitions to be morally correct or useful in edge cases?
          I have evolved intuition based on what has helped my ancestors to survive and thrive in tribal groups.
          That leads to, for example, a marked in-group bias in my intuitions which is neither justified nor moral.

          Pointing out cases where my intuition conflicts with consequentialism is not a problem because I don’t expect my intuition to give correct moral guidance.
          What I’m not certain of is why you think conflict-with-intuition to be a viable argument against a moral theory?

          • Irrelevant says:

            a marked in-group bias in my intuitions which is neither justified nor moral.

            That’s a singularly terrible example. Helping people whose needs and desires you actually have concrete information about rather than merely assumptions and abstractions is both justified and moral.

          • Slow Learner says:

            Nesting broke.
            @Irrelevant – you make a fair point that helping people I know better is more likely to be effective, but the fact that my intuition rounds “knowing better” to things like accent and presumed nationality doesn’t exactly line up with what you’re describing. I have a visceral bias towards British people, and broad national stereotypes don’t exactly count as “concrete information”.

          • Irrelevant says:

            True, but that implies you broke the algorithm by making your in-group too large.

          • Troy says:

            Pointing out cases where my intuition conflicts with consequentialism is not a problem because I don’t expect my intuition to give correct moral guidance.
            What I’m not certain of is why you think conflict-with-intuition to be a viable argument against a moral theory?

            It seems to me that you will have to appeal to intuition at some point: for instance, your judgment that in-group bias is “neither justified nor moral” is presumably based on an intuition to that effect. If you really thought that moral intuition was in general useless, you shouldn’t give any weight to the intuition that we have no moral reason to prefer members of our in-group to others.

            I have evolved intuition based on what has helped my ancestors to survive and thrive in tribal groups.

            Does this hypothesis actually predict the responses most of us have to (i)-(v)? It’s not obvious to me that it does.

            It seems to me that using this hypothesis to explain our moral intuitions about the above cases faces the same problem as a lot of evolutionary psychology: it’s indefinite enough in its predictions that it can be made to “explain” anything. If we had the opposite reaction to cases (ii)-(iv), say, it seems to me that you could just as easily have argued that those responses “helped our ancestors survive and thrive in tribal groups.”

          • Slow Learner says:

            Maybe so – I still have intuitions that are ill-aligned with morality, and I’m not sure why that’s unexpected.

          • Slow Learner says:

            @Troy What? It is not my intuition that tells me every person has value, that’s my reason telling me that.
            My intuition merrily tells me that I am more valuable than anyone else, my immediate social circle are next most valuable, and so on, because my intuition has an in-group bias.
            Unless you think my intuition can contradict my intuition (in which case it’s even more unreliable than I am asserting), what you’re saying here makes no sense.

          • Troy says:

            Slow Learner: I suspect that we are having a verbal dispute. Would you agree to the following (read in the first person)?

            “When I consider the proposition ‘every person has [equal?] value,’ I have an inclination to believe that proposition — it seems true to me.”

            “When I consider the proposition ‘the chemist ought not take the job,’ I have an inclination to believe that proposition — it seems true to me.” [If you do not have the requisite intuition in case (ii), substitute another case where you do.]

            Note that inclinations to believe or seemings-true (in the sense I intend) are defeasible and can be overridden. I can be inclined to believe that for any property there’s a set of things that have that property — that proposition seems true upon consideration — but not believe it because I can see that it leads to a contradiction. Similarly, it could seem true to me that every person has equal value but I could not believe it because I have theoretical reasons to be an anti-realist about value.

            Unless you think my intuition can contradict my intuition (in which case it’s even more unreliable than I am asserting)

            I think that’s almost certainly possible. The set-theory example given above is a case in point. Similarly, different outcomes of empirical tests can contradict each other. That doesn’t mean empirical tests are worthless, just that they’re fallible.

          • Slow Learner says:

            Your definition of intuition seems odd and unfamiliar to me, and thus I’m guessing it isn’t standard usage.
            Avoiding the word, I still don’t see your point – you think that certain principles will conflict with consequentialism, fine. They probably do. Kinda the whole thing about consequentialism is that you break the rules if following them would be worse – so I fail to see the problem?
            I don’t expect my aesthetics on which principles I want to be true to be valid. I don’t expect my inclinations about what seems to be true to necessarily be valid either, and it still all comes back to what Infernal whatsits said earlier in this subthread (on phone, can’t find his username right now)

          • Troy says:

            Your definition of intuition seems odd and unfamiliar to me, and thus I’m guessing it isn’t standard usage.

            The term is used somewhat differently in, e.g., psychology and philosophy. My usage is closer to the latter. For example, this SEP article on intuition eventually settles on “[A7] S has the intuition that p if and only if it intellectually seems to S that p.”

            I don’t expect my aesthetics on which principles I want to be true to be valid.

            I don’t think seemings-true and what we want to be true are coextensive, or that one is a subset of the other. That said, sometimes wanting P to be true could cause P to seem true. That would be a possible error theory for our intuitions about (i)-(v). But it’s not a very good error theory, unless it tells us why we would want the moral facts to be as we intuitively take them to be in those cases.

            I don’t expect my inclinations about what seems to be true to necessarily be valid either

            Surely you do sometimes. I take it that “2 + 2 = 4” seems true to you, and that you think that your seemings-true about such simple mathematical claims are reliable.

          • Belobog says:

            @Slow Learner: “It is not my intuition that tells me every person has value, that’s my reason telling me that.” Could you sketch how this works, or point me towards an article that does? It seems to me that the idea that every or any person has value is just one more moral intuition. Then, if I’m doubting most of my moral intuitions, I’m tempted to go one step further and just be a moral nihilist.

          • Peter says:

            Seemings-true about mathematical things: it seems true to large numbers of people that if a bat and ball together cost £1.10 and the bat costs £1 more than the ball, then the ball costs 10p. It’s an intuitive answer that leaps straight into the mind when the question is asked. It is easy enough to verify that the intuition is wrong, too. Indeed when I encountered the puzzle my first immediate thought was 10p, the second was that I needed to work it out properly.
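
            For anyone who wants the verification spelled out (a quick check of my own, working in pence): a 10p ball makes the pair cost £1.20, and the answer that actually satisfies both conditions is 5p.

# The bat costs 100p more than the ball, and the pair should total 110p.
for ball in (10, 5):
    bat = ball + 100
    print(ball, bat, bat + ball)  # 10p ball -> 120p total (wrong); 5p ball -> 110p total (right)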

        • InferentialDistance says:

          So long as you treat your intuitions about cases as having some evidential weight, however, the combination of intuitions about such apparently disparate cases ought to lead you to become less confident in your consequentialism.

          Human moral intuitions are heuristics programmed by the Blind Idiot God. The whole point of trying to formalize an internally consistent ethical model is so that we can use it to override our intuitions, because sometimes they’re just plain wrong.

          Contradicting human intuition isn’t a weakness of consequentialism, it’s a weakness of human intuition.

          • Slow Learner says:

            This.

          • onyomi says:

            Why do you take it as axiomatic that that is the nature of human moral intuition? What if human moral intuition is a rational faculty capable of (imperfectly) grasping objective facts?

          • InferentialDistance says:

            Why do you take it as axiomatic that that is the nature of human moral intuition? What if human moral intuition is a rational faculty capable of (imperfectly) grasping objective facts?

            I don’t. Your alternate explanation is less predictive of reality (i.e. the frequency with which moral intuition contradicts itself). Additionally, it requires more unproven assumptions (i.e. some objective morality that human intuition is imperfectly grasping).

            Even were that the case, I would still be correct because human intuition would be imperfectly grasping objective morality, and we want a coherent system that perfectly models said morality so we can correctly overrule our intuition when it goes wrong.

          • onyomi says:

            How could consequentialism be more predictive of ethical reality than intuitionism, given that the measure of an ethical system is how well it produces results which accord with our moral sense?

          • InferentialDistance says:

            How could consequentialism be more predictive of ethical reality than intuitionism, given that the measure of an ethical system is how well it produces results which accord with our moral sense?

            ethical reality

            Try “plain reality”. The assertion that moral intuition is plugging into some sort of objective morality is less likely because of Occam’s Razor. You still need the Blind Idiot God to explain reality, and he also explains moral intuition; adding objective morality into the mix is an unnecessary assumption (it adds zero explanatory power). Similarly, because the Blind Idiot God is both blind and an idiot, it immediately follows that the heuristics will contradict each other frequently; you have to assume that human intuition is very imperfect at plugging into objective morality to generate that many contradictions. And if human intuition is that bad at mapping to objective morality, it makes a pretty shitty tool for figuring out what said objective morality is.

          • onyomi says:

            So you are basically saying that morality is reducible to evolution?

            I would say that evolution is a very poor predictor of what we will find moral and immoral. If we were operating behind a veil of ignorance as to what people in the real world typically find to be moral or immoral, evolution would lead us to predict that we would find such behaviors as lying to get people to have sex with us to be moral, since they increase our reproductive success.

            You might say “well humans are social creatures and lying may reduce the level of trust needed to survive and reproduce…” but that is ex post facto. Evolution has very poor predictive power with respect to what people actually find moral and immoral. Genghis Khan could hardly have been more successful in terms of evolution, but I don’t know anyone who thinks he was a moral paragon. Contrast that to various celibate saints.

            Intuitionism is the more economical stance: we evolved to be able to perceive good and bad because good and bad are actual properties of actions. Evolution creating a strong sense about properties of action which don’t exist and the sensing of which may not aid reproductive fitness is a less parsimonious explanation.

          • Cauê says:

            How would that work?

            Suppose we grant that good and bad are properties of actions. How would evolution reflect this?

            If action X is inherently good, and individuals who behave as if it’s good have a reproductive advantage, then there’s evolutionary pressure to identify X as good.

            But if X is inherently bad, and individuals who behave as if it’s good have a reproductive advantage, then there’s *also* evolutionary pressure to identify X as good.

            You can remove “X is inherently good/bad” entirely from the picture, and the result is exactly the same. Inherent goodness/badness is an unnecessary complication.

          • InferentialDistance says:

            So you are basically saying that morality is reducible to evolution?

            No, I’m saying that human intuition is reducible to evolution, and that if you want to make a case for why human intuition trumps a coherent logical system, you need sufficient evidence to overcome the tendency for human intuition to just plain be wrong.

            Human moral intuition is often self contradictory (framing effects, for example). I want a non-contradictory system that I can use to resolve contradictions in my intuition. Rejecting any system that contradicts my intuition entirely misses the point.

            You’re typical minding pretty hard with your examples, too. Genghis Khan had different moral intuitions from celibate saints. That you agree with the saints and disagree with the Khan is not persuasive that the Khan was wrong, just that your intuitions differ.

            And, given evolution programming human intuition, the history of violence and war is obvious. Might makes right is not an uncommon moral intuition, historically (or even today).

          • Peter says:

            I don’t think it’s necessary to rely on evolution as the sole source of intuitions. There’s also culture, and individual learning – our capacities for both are of course evolved.

          • onyomi says:

            “Might makes right is not an uncommon moral intuition, historically (or even today).”

            Actually, I think it is an uncommon moral intuition. The problem is that many people don’t care about doing what is right. That’s why there is widespread agreement in most societies, and even across societies about who is and isn’t moral. There are edge cases, to be sure, but that is usually a result of political alliances and other biases. Virtually no one anywhere thinks Ted Bundy was a good guy, and whenever people try to indict the character of a widely admired figure they do so by trying to dig up bad things they may have done or said, not by trying to convince people of different standards of goodness.

            Can you give me an example of widespread and blatantly contradictory human moral intuitions? In most cases I find it’s a disagreement about facts, not what is right. Take abortion, for example: almost no one thinks it’s okay to kill innocent, helpless people, but people do have widely divergent ideas about when a fetus becomes a full-fledged person.

            Re. framing effects, it just seems to be a matter of confusing people as to the reality of a situation. The fact that one can fool the moral sense doesn’t mean it’s useless any more than the blue and black dress renders vision useless. (Yes, vision is somewhat unreliable, but unlike with vision, we can’t pull out a ruler or similar tool to measure how ethical something is in advance).

            Also, how is saying “human intuition is reducible to evolution” different from saying “morality is reducible to evolution,” other than being even broader, given that ethical intuitions are a subset of human intuitions?

            “Rejecting any system that contradicts my intuition entirely misses the point.” Here I think we are using the word “intuition” to mean different things. You seem to be using it to mean, “what I want to do.” I am using the term “ethical intuition” to mean “that which, it seems to me, based on the knowledge I have, is the right thing to do.”

            It is an extremely common experience for someone to want to do something which his ethical intuition at the same time tells him is wrong. It is also common for one’s ethical intuition about an action to change in the light of new facts. My intuition about the rightness of abortion might change, for example, were I to learn that fetuses can’t feel pain. Intuition is not, therefore, redundant as a guideline for behavior as you seem to imply.

          • InferentialDistance says:

            “Rejecting any system that contradicts my intuition entirely misses the point.” Here I think we are using the word “intuition” to mean different things. You seem to be using it to mean, “what I want to do.” I am using the term “ethical intuition” to mean “that which, it seems to me, based on the knowledge I have, is the right thing to do.”

            Then consequentialism is intuition.

          • onyomi says:

            Only if you define “the right” as “that which produces the best outcome,” which is an arbitrary heuristic that sometimes maps onto our moral sense and sometimes doesn’t.

            For example, I can think of things that would increase average happiness, such as killing all sad people, which are nonetheless wrong, and of things that would produce bad overall outcomes, such as voting not to convict an innocent man when you know the decision will result in deadly riots, which are nonetheless right. Therefore, I don’t think right and wrong are fully reducible to utilitarian and/or consequentialist considerations.

          • InferentialDistance says:

            Only if you define “the right” as “that which produces the best outcome,” which is an arbitrary heuristic that sometimes maps onto our moral sense and sometimes doesn’t.

            Speak for yourself. Maps onto my moral sense just fine.

            For example, I can think of things that would increase average happiness, such as killing all sad people, which are nonetheless wrong, and of things that would produce bad overall outcomes, such as voting not to convict an innocent man when you know the decision will result in deadly riots, which are nonetheless right. Therefore, I don’t think right and wrong are fully reducible to utilitarian and/or consequentialist considerations.

            Bad examples. The former creates many new sad people, and also is a laughably simplistic model (dead people are a consequence that has to be accounted for too). The latter ignores the consequences of unjustly punishing people, which in the long term is less cooperation with the justice system and more violence.

            You’re tearing down straw men. There are strains of consequentialism that encourage adherents to act on heuristic principles, because adopting the heuristic principles leads to better outcomes than attempting to perform on-the-spot calculations of outcome. But the heuristics themselves are determined by calculations of consequence, not some fuzzy “moral sense”.

          • onyomi says:

            Does your moral sense tell you it’s okay to kill the patient in the organ-harvesting example? Is the sadness of the survivors or the bad precedent your only ethical qualm about killing all sad people?

            These objections are no more “straw men” or laughable oversimplifications than utilitarian objections to rights-based arguments, such as “is it wrong to steal a penny to save a million lives?” Respecting people’s rights is usually the right thing to do, but we can’t reduce morality to just that. Doing that which produces the happiest outcome is usually the right thing to do as well, but neither can we reduce morality to just that either.

          • onyomi says:

            “The latter ignores the consequences of unjustly punishing people, which in the long term is less cooperation with the justice system and more violence.”

            This is bending over backwards to save an oversimplification.

          • InferentialDistance says:

            How do you determine which consequences are “good”?

            Same way everyone else does: bias.

            Does your moral sense tell you it’s okay to kill the patient in the organ-harvesting example?

            In the least convenient possible world, yes. In this one, no.

            Is the sadness of the survivors your only ethical qualm about killing all sad people?

            No, because I value people beyond their mere happiness/sadness (doubly so if we’re talking about temporary emotion).

            This is bending over backwards to save an oversimplification.

            You don’t get to simplify with consequentialism. We’re concerned with getting better consequences in the real world. If your premises don’t hold in the real world, you haven’t actually defeated consequentialism, merely imagined a reality where your moral sense is poorly tuned.

          • onyomi says:

            I’m saying you’re oversimplifying because consequentialism, like rights-based arguments about ethics, does not adequately capture the way morality does actually function in the real world.

            You want to eliminate the intuition aspect of it as if that were somehow clearer or more objective (ironic, considering you also seem to argue against moral realism), but you’re basing all this on your intuition that making people happy is good. What if I told you that I think suffering is good for people, so for me, good consequences means maximizing suffering? On purely consequentialist grounds there can be no reason for telling me I’m wrong. Yet I am wrong, because redefining the good as “maximum suffering” doesn’t make it so, because “good” is an actual property of actions, not a vague feeling.

            Given that reliance on intuition is inescapable, and arguably no more prone to error than reliance on the senses (I can find you a lot more people who think the dress is white and gold than who think murdering the innocent for fun is okay), and given that purely utilitarian (or rights-based, or virtue-based, or evolution-based) explanations of ethics are insufficient, what, then, is to be gained by throwing out intuition?

          • Cauê says:

            Does your intuition say that killing people is wrong, or that people dying is bad?

            Maybe it says both things, and that’s why we don’t like trolley problems.

            But if one decides to ground morality on intuition, and intuition says that people dying is bad, and more people dying is worse, doesn’t this lead to consequentialist calculations?

            If, further, one’s intuition says that more people dying is worse, but they also know, for example, that their intuition has a problem with scope insensitivity and would react to a thousand deaths as it would to ten million, then wouldn’t it be coherent to override intuition when its answers would lead to worse consequences, as defined by the criteria initially set by those same intuitions?

          • onyomi says:

            “But if one decides to ground morality on intuition, and intuition says that people dying is bad, and more people dying is worse, doesn’t this lead to consequentialist calculations?”

            I’m not saying that one cannot consider consequences in making an ethical determination, merely that ethics is not purely reducible to consequential considerations alone.

            Though this may not have been clear, I also do not mean to limit intuition to a vague sort of feeling that is immune to rational thought. As in the example of abortion above: if new science comes out showing fetuses don’t feel pain, that might change my intuition about abortion. It’s not so much that the underlying intuition about the rightness or wrongness of killing changes, but that my understanding of the situation changes.

            Similarly, if my intuition tells me that innocent people dying is bad, my rational ability to evaluate will tell me that lots of innocent people dying is worse. If the sense doesn’t scale well, then that is something one can take into account.

            Having my wallet stolen from me, for example, would cause me more emotional distress than learning that North Korea just executed 10,000 political criminals, but it’s also clear to me that the latter is much worse, ethically speaking.

            The key here is not to confuse intuition with an emotion: having the intuition “charity is good” doesn’t mean the same thing as “charity: 🙂 ” and having the intuition “murder is wrong” doesn’t mean the same thing as “murder: 🙁 “

          • Irrelevant says:

            Genghis Khan could hardly have been more successful in terms of evolution, but I don’t know anyone who thinks he was a moral paragon.

            Seriously? OK, hi. You know one now. And then there are the people who literally venerate him.

          • onyomi says:

            You think raping and pillaging is virtuous behavior? Not, you admire the bravery necessary to do it, but you think that raping and pillaging is ethical?

          • Troy says:

            Inferential Distance: The assertion that moral intuition is plugging into some sort of objective morality is less likely because of Occam’s Razor. You still need the Blind Idiot God to explain reality, and he also explains moral intuition; adding objective morality into the mix is an unnecessary assumption (it adds zero explanatory power).

            If you deny “objective morality,” in what sense is consequentialism a better moral system than, say, one which respects intuitions (i)-(v)?

            Caue: How would that work?

            Suppose we grant that good and bad are properties of actions. How would evolution reflect this?

            You needn’t think that moral intuition was selected specifically for tracking moral facts. It might be a byproduct of other faculties that were selected for. Compare: evolution presumably didn’t select for ability to do Calculus. But ability to do Calculus came along with other abilities that were selected for.

          • Cauê says:

            Ok, but if good and bad were actually intrinsic properties of actions, how would evolution *get it right*, instead of coming up with intuitions about good and bad that are a product of evolutionary pressures and may or may not reflect the *true* goodness or badness of an action? Where’s the entanglement?

          • onyomi says:

            As Troy mentions, I think the ability to determine good and bad is probably largely a side effect of having a rational mind, which evolution probably selected for because it made us good at using tools and navigating social situations.

            I have read arguments, for example, that the ability to see stars or appreciate music (though maybe beneficial for sailing or psychic comfort) were probably not specifically selected for; rather, seeing things not on earth is a side effect of having the obviously useful skill of seeing things on earth, and enjoying complex sound patterns is a byproduct of having the useful skills of hearing and pattern recognition.

            And, in fact, I think we do see evolutionary pressures to get ethical questions wrong, though it’s telling that they seem to come from a different place (that is, it is conceivable to say, “my gut is uncomfortable with gay people getting married, but my rational evaluation of the ethics of the situation says it isn’t a problem”).

            In fact, bias is not the source of ethical intuition, but the biggest obstacle to its accurate functioning. Evolution, for example, wants us to side with our own tribe even when they’re wrong, but our rational ability to evaluate the ethics of a situation tells us that “my tribe is always right” is not a good ethical justification. In such a case, getting the question right largely hinges on seeing the evolution-shaped bias clouding the rational ethical evaluation faculty.

          • blacktrance says:

            If you deny “objective morality,” in what sense is consequentialism a better moral system than, say, one which respects intuitions (i)-(v)?

            Because consequentialism is (or can be) derived from agents’ assignments of value to things/outcomes/states of the world, and maximizing the fulfillment of those values conflicts with some of those intuitions.

          • Troy says:

            Ok, but if good and bad were actually intrinsic properties of actions, how would evolution *get it right*, instead of coming up with intuitions about good and bad that are a product of evolutionary pressures and may or may not reflect the *true* goodness or badness of an action? Where’s the entanglement?

            I’m in substantial sympathy with what onyomi said. I think that what specific story you tell here will largely depend on your meta-ethics, e.g., your analysis of normative concepts like ‘ought’ and ‘good,’ and your metaphysics about what kind of thing goodness is. These are difficult questions, but I don’t think the difficulty is in any way unique to ethics. For instance, how do we perceive mathematical facts? Why couldn’t evolution have landed us with a bunch of false mathematical intuitions?

            (Of course, in the case of both morality and math, we have some false intuitions. But it is difficult for me, at least, to imagine what it would be like to have uniformly false mathematical intuitions.)

          • @blacktrance

            That’s an explanation of the difference, not why the one is better than the other.

          • Troy says:

            Because consequentialism is (or can be) derived from agents’ assignments of value to things/outcomes/states of the world,

            I don’t think consequentialism can be so “derived.” But if it could, then it would be “objectively true,” it seems to me.

          • InferentialDistance says:

            If you deny “objective morality,” in what sense is consequentialism a better moral system than, say, one which respects intuitions (i)-(v)?

            Logical coherence. And, for properly tuned consequentialism, more total respect for human moral intuition across all scenarios than one that respects said intuitions at the cost of others.

          • Cauê says:

            “Why couldn’t evolution have landed us with a bunch of false mathematical intuitions?”

            The entanglement there is quite clear, I think. Mathematical intuitions that didn’t align with reality could easily be damaging (“there were two tigers there, one died, so there are no tigers left”; “we found five apples, one for each of the six members of the tribe, but I didn’t get an apple, so somebody stole mine”).

            And even then there’s no law saying evolution would get it right – it should accord with reality only in the manner and to the extent that reproductive advantages take it. So we’re left with a lot of unintuitive math, and our naïve physics only approximates reality enough to be useful in most everyday situations.

            For evolution to get right “good and bad as intrinsic properties of actions”? I don’t know what that would take.

          • Troy says:

            Cauê: The entanglement there is quite clear, I think. Mathematical intuitions that didn’t align with reality could easily be damaging (“there were two tigers there, one died, so there are no tigers left”; “we found five apples, one for each of the six members of the tribe, but I didn’t get an apple, so somebody stole mine”).

            Fair enough — simple mathematical ability, at least, has clear evolutionary advantages. But it still seems to me that there are other examples that we’re all committed to the reliability of that don’t have such advantages. For example, you are presumably committed to your being reliable in judging whether there is such a thing as inherent goodness: you think you’ve got a good argument that there isn’t. But why think evolution would make you a reliable guide to such esoteric questions as that?

            InferentialDistance: Logical coherence. And, for properly tuned consequentialism, more total respect for human moral intuition across all scenarios than one that respects said intuitions at the cost of others.

            (1) What do you mean, logical coherence? Depending on your flavor of anti-realism/subjectivism, I think the notion will either make no sense when applied to morals or will tell against your theory. For example, if the belief “X is morally wrong” attributes to X some property, moral wrongness, and moral wrongness does not exist, then belief in this flavor of anti-realism will be logically inconsistent with the belief that it’s wrong not to maximize the good. On the other hand, if moral beliefs are in some way emotive or non-propositional (whatever that would mean), then I don’t see how they can be logically coherent or incoherent at all — logic doesn’t apply to such domains.

            (2) Assuming that we can make sense of these things in this domain, why should I care about them? Incoherent beliefs about objective matters of fact are bad because they keep me from getting at the truth. What’s wrong with incoherent beliefs about non-factual matters?

          • InferentialDistance says:

            What do you mean, logical coherence?

            That the moral framework does not simultaneously state that performing action X is morally wrong, and not performing action X is morally wrong. Harvesting organs, for example, places “don’t allow people to die” against “don’t kill people”; either the doctor has to allow a number of patients to die that they could have saved, or has to kill someone to save the patients. Either way, a moral intuition is disrespected; there is a contradiction in prescribed action; the moral framework is incoherent; you cannot save all the moral intuitions.

            Assuming that we can make sense of these things in this domain, why should I care about them?

            Because it’s important that your decision making algorithm not tell you to do mutually exclusive actions. Purely for pragmatic reasons.

          • Troy says:

            That the moral framework does not simultaneously state that performing action X is morally wrong, and not performing action X is morally wrong.

            Thanks; that helps. Two points:

            (1) Our belief that actions X and ~X cannot both be morally wrong is itself presumably based on intuition (e.g., this claim isn’t something amenable to empirical investigation). And this claim has in fact been denied by some people: e.g., Michael Walzer thinks that in cases of “supreme emergency” in wartime sometimes both violating the rules of war and respecting them are immoral — “damned if you do and damned if you don’t.”

            I suspect your response to this would be to say that you’re not making some kind of disputable metaphysical claim in saying that X and ~X cannot both be wrong. Rather, we have reason to accept this as a constraint on moral wrongness for, as you say, “pragmatic reasons.” Perhaps. I am tempted, however, to keep pushing on this and say that this presupposes the existence of certain pragmatic norms/reasons, the content of which could only be known via intuition. (For instance, why think that we have reason not to pursue contradictory goals?) More generally, evolutionary debunking arguments aimed at moral normativity seem to me to be just as effective (if they are effective at all, of which I am skeptical) at debunking pragmatic normativity (and epistemic normativity, for that matter, or claims about what we ought/have reason to believe).

            (2) There are other moral systems that are logically coherent in this sense. Why favor consequentialism over those? You suggest that consequentialism can account for our overall moral intuitions better than these systems. But here I have two sub-worries:

            (a) As before, if there are no facts in this domain, why care about respecting our moral intuitions? If I think my intuitions are (fallibly) reliable guides to a domain, then I should take them into account for evidential reasons. But here there are no facts, so there is no evidence.

            (b) Inasmuch as it does make sense to take intuitions into account, I can just run my original argument in this thread. That is, even if we agree there are no moral facts to capture, if we’re trying to reach some kind of “reflective equilibrium” with our intuitions, then inasmuch as our intuitions conflict with consequentialism about very different actions in very different cases (e.g., cases i-v), it will be difficult to “properly tune” consequentialism in such a way to respect all of these.

          • InferentialDistance says:

            Our belief that actions X and ~X cannot both be morally wrong is itself presumably based on intuition (e.g., this claim isn’t something amenable to empirical investigation).

            Inference from observation of reality, in which nothing has simultaneously traits X and ~X. Invariably, assertions to the contrary reveal a form of equivocation where ~X is not, in fact, the logical negation of X. The prior for logical coherence in the universe is strong enough that, to whatever degree one believes morality exists, an assertion that it does not need to be logically coherent needs extraordinary evidence. I have never seen such evidence.

            And this claim has in fact been denied by some people

            Illogical people are not a persuasive argument.

            Michael Walzer thinks that in cases of “supreme emergency” in wartime sometimes both violating the rules of war and respecting them are immoral — “damned if you do and damned if you don’t.”

            This is almost assuredly a case of X(a) and ~X(b) for a != b (respect the rules of war in situation a, violate them in situation b). That is not a contradiction. Furthermore, that looks a lot like consequentialism…

            There are other moral systems that are logically coherent in this sense. Why favor consequentialism over those?

            The existence of other logically coherent systems is not a defeat of consequentialism. My argument is not that consequentialism is the best moral framework, merely that the arguments against it have failed to distinguish it as worse than the proposed alternatives. The “problem” of disrespecting moral intuition is a flaw of all moral frameworks; you need to demonstrate that consequentialism is quantitatively worse in that regard. You have not done so. Probably because it is absurdly difficult to do, but being absurdly difficult to do is not licence to assume the result.

            As before, if there are no facts in this domain, why care about respecting our moral intuitions?

            That is a fully general counterargument against caring. It doesn’t defeat consequentialism, it defeats morality. If you don’t care about consequentialism for this reason, then you shouldn’t care about moral intuitions either. Consequentialism is no worse off than any other moral framework from this objection.

            Inasmuch as it does make sense to take intuitions into account, I can just run my original argument in this thread. That is, even if we agree there are no moral facts to capture, if we’re trying to reach some kind of “reflective equilibrium” with our intuitions, then inasmuch as our intuitions conflict with consequentialism about very different actions in very different cases (e.g., cases i-v), it will be difficult to “properly tune” consequentialism in such a way to respect all of these.

            It is very difficult to properly tune any moral framework to respect all human moral intuitions. That’s why ethics is hard.

          • Troy says:

            Inference from observation of reality, in which nothing has simultaneously traits X and ~X.

            This isn’t the case you described. The case you described is one in which two incompatible objects both have the same property (wrongness). The claim in question is that X is wrong and ~X is wrong, where X and ~X describe actions. This is different from the claim that X is wrong and X is not wrong. The latter is a logical contradiction; the former is not. To see this, replace wrong with “leads to at least one death.” Two incompatible actions can both lead to at least one death, as in trolley cases.

            This is almost assuredly a case of X(a) and ~X(b) for a != b (respect the rules of war in situation a, violate them in situation b).

            This is not what Walzer thinks — he really thinks there are moral dilemmas in one and the same situation. He’s not the only one, either — see this article for discussion.

            For the record, I agree with you that Walzer is wrong. But yes, he is claiming (explicitly) that there are cases where we ought to perform both X and ~X; and no, in so doing he is not guilty of violating the law of non-contradiction.

            Consequentialism is no worse off than any other moral framework from this objection [that we shouldn’t care about respecting our intuitions about a non-factual domain].

            I agree. I’m not an anti-realist; and I think if you are, all normative ethical systems are equally badly off.

            My argument is not that consequentialism is the best moral framework, merely that the arguments against it have failed to distinguish it as worse than the proposed alternatives.

            I apologize; I presumed that you were endorsing consequentialism. If you’re merely arguing that my arguments have failed to defeat it, then I agree that my question is not in order.

            The “problem” of disrespecting moral intuition is a flaw of all moral frameworks; you need to demonstrate that consequentialism is quantitatively worse in that regard. You have not done so.

            I agree with you that I have not done so. My argument is more limited: simply that the existence of several varied intuitions at odds with consequentialism is some reason to reject it. It’s defeasible reason, and to determine what overall reason we have to (not) accept consequentialism we’d have to look at other important intuitions as well as non-intuitional considerations. I see my argument in this thread as merely one part of a much larger project.

      • Deiseach says:

        If a doctor can actually save five lives by killing someone to harvest their organs, they ought to do so. Of course in reality that doesn’t work, but in the Least Convenient Possible World that’s exactly what she should do

        On the other hand, if every time someone breaks their ankle and calls in to the Emergency Department of their local hospital, they risk being murdered for their parts, then (a) people will not go near hospitals until they are on the point of death and so drive up the necessity for expensive and invasive treatment (b) people will be motivated to be unhealthy and unfit so they will not be attractive candidates for organ-murders, thus burdening society with the cost of ill health (see the alarm about the ‘obesity epidemic’) (c) doctors will be regarded as the same level of professional as hitmen (d) if your patients come in with broken ankles and leave in coffins, someone is going to notice eventually and there may be some consequences.

        • Slow Learner says:

          Well yeah, there’s more to it than I told to Troy. I didn’t mention it because none of it undermines consequentialism, and so I don’t see it as significant in this context, not because it isn’t true.

        • Jiro says:

          If harvesting organs has the negative consequences you describe, then it’s not the least convenient possible world.

          • Deiseach says:

            If the least convenient possible world is one in which all objections are already met and disposed of, what are we arguing about? Basically, is it moral or immoral to commit murder?

            If it’s not immoral to kill, why are we worried about keeping these ten people alive? Answer: we’re not (as can be seen by the dismissal of the ‘problems with immunosuppressants’ answer), we’re interested in consistency of philosophy. If it really was a simple “Can we save ten lives at the price of one?” query about saving ten or killing one, then “Not if you risk killing your patients due to the shock of transplant rejection” is perfectly valid as an answer (because killing your patient due to transplant rejection is an undesirable consequence).

            So then let’s peel this onion down to the core: what are we asking here?

            Ten people needing transplants versus one person who can provide the ten organs needed is only a way of phrasing “if you can increase utility or happiness or whatever your ultimate value is by killing, should you kill?” We don’t care about the hypothetical ten lives, it’s the principle of “is there any action so repugnant that even if it brings forth good consequences, it should not be done? And what is your basis for your answer?”

            Which I imagine is the core of the story “The Ones Who Walk Away from Omelas”. An entire city is kept in peace and plenty by the suffering of one – yes or no? And being that it’s the least convenient possible world, no, you can’t just open the cellar door and let the child go 🙂

          • Cauê says:

            “If the least convenient possible world is one in which all objections are already met and disposed of, what are we arguing about?”

            About whatever the thought experiment was supposed to be…

            The least convenient possible world is the one where we help with the actual point of the question instead of looking for escapes that allow us to avoid uncomfortable moral dilemmas.

          • Deiseach says:

            But if the point of the thought experiment is “consequence of breaking the no-kill rule: save ten lives”, then it is pertinent as a matter of fact, not morality, to consider “will this actually save ten lives, or five lives, or only one?”

            You don’t get to wiggle out of it by saying “Oh, you are only trying to avoid an uncomfortable moral dilemma by raising this objection!”

            I maintain: the point of the thought experiment is not “what should you do to save a life?”, it is “do you think killing is ever justified, or do you think there is an absolute prohibition on killing?”

            Saying “yes, Dr Jones should kill the backpacker to save the ten lives” and leaving it there is not good enough. If you’re going to base your rule of action on consequences, then you have to consider all the consequences, not the consequences in a perfect world where nobody will find out Jones is a murderer, every patient recovers perfectly, the backpacker is in perfect health and conveniently turns up just in time to save all ten, there are no friends, family or work colleagues to raise the question of his disappearance (and that also means Jones has solved the problem of the perfect murder), etc.

            Why not imagine a perfect world where Jones discovers how to culture artificial organs instead? One is as likely as the other, if we’re running thought experiments! And then, we could say “is killing ever justified by consequence, e.g. if Dr Jones can commit the perfect untraceable murder that no-one will ever know about by killing the backpacker with the winning ticket for the multi-million lottery, which Jones then takes and claims the prize?”

            The principle is still the same – is there an absolute moral bar on killing or do positive consequences outweigh that? – but dressing one problem up as “save ten lives at the cost of one” versus “killing to become rich” is just as much “escapes that allow us to avoid uncomfortable moral dilemmas” for consequentialism as anything for other points of view.

          • Troy says:

            Deiseach: I did try to pick semi-realistic counterexamples, because it’s easier to respond to an outlandish case with “well, your assumptions are absurd, so it’s not surprising our intuitions would go wrong in such a weird case.” The Utility Monster case is admittedly outlandish, but (ii) and (iii) — the Nuclear Weapons and Anthropologist cases — are not unrealistic at all (the latter is taken from an actual anthropologist’s field notes). (iv) is counterfactual inasmuch as we don’t actually have such strong evidence that the death penalty is an effective deterrent, but I don’t think it’s too hard to imagine a world in which we do.

            In the Organ Harvesting case, I think we can fill out the details so that the No One Will Ever Know assumption is realistic. For instance, it’s wartime and no one currently has the resources or inclination to check up on what this doctor is doing on this one occasion.

        • Troy says:

          On the other other hand, perhaps this will incline people to self-medicate more and so the health care system will be less taxed. 🙂

        • Mat Hatter says:

          I think these are rationalizations mostly.

          There’s all kinds of stuff we could do to solve the organ shortage before doctors become a public liability. We could make commercial live organ donations legal, or we could allow suicidal people to get euthanized in a hospital and donate all their organs.

          And if there’s still a shortage, then the occasional “harvest murder” would still be in most patients’ interests, because they’d be more likely to get an organ when they need one!

          • Deiseach says:

            they’d be more likely to get an organ when they need one!

            Depending on what the percentage of harvest murders is and what your need is: if you need a new kidney but have a perfectly good liver, it’s easier for you to live on your one remaining functioning kidney than it is for someone to live without a liver, so you might be killed as one of the harvest murders for your good liver, plus it saves on having to give you a kidney which can go to someone else (e.g. someone with both kidneys failing).

            In that world, being the most unhealthy would be the safest choice, since you’re not going to be knocked off for your healthy young vital parts and you go to the top of the queue for the healthy young vital parts of others 🙂

          • Mat Hatter says:

            Yes, if it happens too often there may be moral organ hazard. 😀

            But since this is a foreseeable consequence, the consequentialists should plug it into their world models and find the sweet spot where the right number of people is harvested.

            You could also measure the population’s stress hormone levels and kill off the unhappiest people more often, this would create a moral anti-hazard because then people would try to be as unstressed as possible to prevent their being harvested (“race to the spa” instead of “race to the bottom”) 😀

          • houseboatonstyx says:

            You could also measure the population’s stress hormone levels and kill off the unhappiest people more often [….]

            Simpler to let The Market take care of it. Offer legal euthanasia by heroin overdose (or donor’s drug of choice) plus a fortune to donor’s chosen beneficiaries, etc. The money comes from the organ receivers: the highest bid for each organ gets it. A potential donor watches the total reward for a donor with their profile build up and up, like a US state lottery, till it reaches an amount zie will accept.

          • Mat Hatter says:

            houseboatonstyx, sounds good. :3

        • houseboatonstyx says:

          I think you’ve missed the part about “No one will ever know.”

          • Deiseach says:

            My eye no-one will ever know. “Hmmm – funny, that: everyone who was treated by Dr Jones ends up dead, even if they only went in with a cough. On the other hand, Jones has made a mint from their ‘Transplant-While-U-Wait’ clinic, motto “Whatever you need, we got it on ice. No wait, no limits”. What a coincidence, eh?”

          • John Schilling says:

            Isn’t it usually a strong indicator that you’re about to do something very, very wrong, that you feel you have to hide it even from good people?

          • Cauê says:

            Well, it’s a strong indicator that you expect people to react in ways you don’t want. It being wrong (or rather, their thinking it’s wrong) is only one in a large list of possible causes for that.

          • houseboatonstyx says:

            everyone who was treated by Dr Jones ends up dead

            The question was about one incident, which she will not get caught at. Doing it often enough to get caught would be a different question.

            I find this quite credible.

        • Airgap says:

          In the year 2087, the human race is at war with a race of beings known as the Alexandroids. Created as a result of an experiment in transhumanism and psychiatry gone awry, the Alexandroids have a fearsome biological weapon which, when it enters a human, causes exactly one random organ to fail.

          In the aftermath of these attacks, it is common for individuals to be sacrificed, and their organs used to keep attack victims alive. People often volunteer or accept if selected, but if no suitable voluntary candidate is available, the human military government will select someone to be sacrificed. This is widely accepted as necessary and correct.

          Some soldiers are having a philosophical discussion between Alexandroid raids:

          A: “So imagine a world in which the inhabitants are so jaded by security and material wealth that, imagining a world even less convenient than the real world, they would object to a doctor trying to distribute a sacrifice’s organs in an optimal way, and ironically claim that they did so out of morality.”

          B: “What? Dude, that’s ridiculous. No way any human could be that evil.”

          A: “I don’t think it’s evil so much as a lack of exposure to genuine hardship, so they can’t imagine what being in a really bad situation would be like.”

          B: “Bullshit. Anybody who says something like that seriously is an Alexandroid in disguise. Shoot first before they get any bugs launched.”

          • Deiseach says:

            The question was about one incident, which she will not get caught at. Doing it often enough to get caught would be a different question.

            But if the principle has been established that Jones should kill the one to save ten, then what about the next opportunity? Deciding “No, I’ll let these eight people die because otherwise I’ll get caught” is putting one life ahead of eight, and is inconsistent philosophy! Jones is obligated by the rules of philosophical thought experiments to consistently continue untraceably killing backpackers to save multiple organ-transplant groups!

            Besides, the risk of getting caught is only a quibble to avoid facing uncomfortable moral dilemmas, is it not? Once having pulled the lever, pushed the fat man off the footbridge, or killed the backpacker, we must continue to do this forever and ever, else we will be fuzzy-minded and imprecise in what we claim to believe!

  33. Susebron says:

    The Vice article was kind of terrible. It started out okay, then started talking about how LW would make you superhuman, and then moved on to the comparisons to Scientology. They also seemed oddly unaware that HPMOR was explicitly ideological.

    • Zorgon says:

      There are many reasons to dislike Scientology, but I think inspiring a culture-wide inability to develop an ideology around human cognition without suffering pattern-matching to Xenubbard Level 6 Thetan is probably one of the strongest.

  34. Forlorn Hopes says:

    David Auerbach wrote a very interesting piece about the psychology behind online leftism.

    http://theamericanreader.com/jenesuispasliberal-entering-the-quagmire-of-online-leftism/

    • Peter says:

      I was reading the article, thinking “very interesting, I can’t wait to see what it concludes”, and then it suddenly stopped.

      • Zorgon says:

        Auerbach suffers from the failure mode common to ideologues of assuming that everyone reading already agrees with his thesis, and that as a result he doesn’t actually need to present an argument; he simply needs to present a long series of assertions and evidentiary or commentary supporting positions and wait for everyone listening or reading to smile and nod sagely in agreement.

  35. briancpotter says:

    Is there any merit to caring about animal welfare from an acausal trade standpoint? Animals are so much less capable as agents that it’s rarely worth the coordination cost to try to cooperate with them, but perhaps doing so makes it that much more likely that agents much more powerful than humans will do the same.

    • Irrelevant says:

      The lack of special reverence we hold for organisms that have symbiotic relationships of some form suggests no.

    • Jiro says:

      This reasoning is basically “how we treat beings inferior to us may mean that we are better treated when we are inferior to some other being”. Taken literally, it means that we should treat *all* inferior beings in a certain way, including beings that are more inferior than the beings that animal rights activists are concerned with, such as insects, plants, nematodes, and computer programs. Of course, we don’t do that–rather, we draw a line, and pretty much everyone draws the line in a way that excludes nematodes. But drawing a line, even at nematodes, is inconsistent with your reasoning, since if we draw a line excluding nematodes, these agents could draw a line excluding humans.

    • RCF says:

      How would other agents benefit from us treating animals well?

      • briancpotter says:

        I don’t have the strongest grasp of acausal trade, but the intuition was that a commitment to cooperate with much less powerful agents – even if cooperation is a net loss – is beneficial if you expect to run into much more powerful agents that can’t gain anything by cooperating with you. As long as any given agent thinks it possible to run into an agent more powerful than itself, it would (by this line of reasoning) benefit by running a similar algorithm. A sort of daisy-chain of prisoner’s dilemmas, if you will.

        I don’t know whether it works out like that. I suspect it doesn’t – one obvious problem is that animals don’t seem to be running an algorithm like this, which sort of ruins the “cooperate with other cooperators” idea behind it. But I hoped someone with a better understanding of acausal trade and/or decision theory and/or game theory might shed some light.
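
        A minimal Python sketch of that intuition, purely my own toy model rather than anything from the discussion above; every function name and parameter in it is an illustrative assumption:

          # Toy expected-value model of the "cooperate downward" policy: pay a small
          # cost to cooperate with weaker agents, on the chance that a stronger agent
          # conditions its treatment of you on whether you run that same policy.
          def policy_value(cost_down, p_meet_stronger, p_mirrors, benefit_if_spared):
              """Expected value of adopting the policy versus defecting on weaker agents."""
              return p_meet_stronger * p_mirrors * benefit_if_spared - cost_down

          # With a 10% chance of meeting a stronger agent, a 50% chance it mirrors the
          # policy, a payoff of 100 for being treated well, and a cost of 1 to cooperate
          # downward, the policy comes out ahead (prints 4.0).
          print(policy_value(cost_down=1.0, p_meet_stronger=0.1,
                             p_mirrors=0.5, benefit_if_spared=100.0))

        On this toy model the whole question is whether stronger agents actually condition on your policy at all, which is exactly the doubtful step mentioned above.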

  36. Dale says:

    A while ago I wrote a short comment advocating allowing people to sell equity stakes in their future earnings. I wrote a longer comment defending the idea, but it became very much longer. Much too long for a comment! Basically I argue

    Equity means your repayments are proportional to your income, rather than fixed like with debt
    Equity would be lower risk for borrowers – it makes more sense for investors to hold it in a diversified portfolio
    Equity would give students information about the future prospects of different majors and colleges based on the % asked, which would allow them to make more informed decisions
    Equity would act in some ways like mildly progressive taxation
    Equity would give you de facto ‘mentors’, who could be trusted to give you good advice on one domain (how to get a high paying job)

    I also calculate some examples, discuss the similarities to taxation, and mention applications to parenthood and divorce.

    The post is here

    https://effectivereaction.wordpress.com/2015/03/09/human-capital-contracts/

    and there is some additional discussion on LW here

    http://lesswrong.com/r/discussion/lw/luj/human_capital_contracts

    Here are the two sections that made up the core of the comment before I felt the need to add a multitude of references and explanations and supporting arguments.

    ————

    Education

    Funding higher education is perhaps the best application for Human Capital Contracts.

    Firstly, this is an extremely risky investment. There are countless stories of people who took out huge student loans to fund an arts degree and then have their lives dominated by the struggle to repay. Alternatively, if people could discharge education debts through bankruptcy, the risk to the lender would be too great, as the borrowers typically lack collateral, so loans would be available only at prohibitively high interest rates, if at all. Selling equity shares would avoid this problem; people who did badly after school would only have to repay a minimal amount, but lenders could afford to offer relatively generous terms because the average would be pulled up by the occasional very successful student.

    The other appeal is the information such a market would provide students. It is fair to say that many students don’t really understand the long-term consequences of their choices. The information available on the future paths opened up by different majors is poor quality – at best, it tells you how well people who studied that major years ago have done, but the labor market has probably changed substantially over time. What students really want is forecasts of future returns to different colleges and majors, but this is very difficult! And many people are not even aware of the backwards-looking data. The situation isn’t improved by professors, who generally lack experience outside academia, and sometimes simply lie! I remember being told by a philosophy professor that philosophers were highly in demand due to the “transferable thinking skills” – despite the total lack of evidence for such an effect. Human Capital Contracts would largely solve this problem.

    TIPS markets provide a forecast of future inflation. Population-linked bonds would provide similar forecasts of future population growth. Similarly, Human Capital Contracts could provide forecasts of the future returns to future degrees. Lenders would expect higher returns to some colleges and majors (Stanford Computer Science vs No-Name Communications Studies), and so would be willing to accept lower income shares for people who chose those majors. As such, being offered financing for a small percentage would indicate that the market expected this to be a profitable degree. Being offered financing only for a large percentage would be a sign that the degree would not be very profitable. Some people would still want to do it for love rather than money, but many would not – saving them from spending four years and a lot of money on a decision they’d subsequently regret.

    What could make clearer the difference in expected outcomes than being offered the choice between Engineering for 1% and Fine Art for 3%?

    Certainly I think I would have benefited from having this information available. Most people probably know that Computer Science pays better than English Literature, but that’s probably not a pair many people are choosing from. I was choosing between Physics, Math, Economics, and History for my major. I knew that History would probably pay less, but didn’t have a strong view on the relative earnings of the others. I probably would have guessed that math beat physics, for example, but in retrospect I think physics probably actually beats math.

    Astute readers might object here that I am conflating the benefits of the type of financing (debt vs equity) with the mechanism for pricing the financing (free market or price fixed). If there was a free market in debt financing, lenders could charge different interest rates, and these would provide information to the students. This is true, except that 1) the interest rate would only tell you about the risk you’d end up super-poor, rather than providing information about the full distribution of outcomes, and 2) as student loans cannot be discharged through bankruptcy, there’s not really much reason for lenders to differentiate between candidates. If student loans could be discharged through bankruptcy, the interest rates charged would be informative but also probably very high. Perhaps this would be a good thing!
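
    As a minimal sketch of the risk-sharing point, here is a toy Python comparison of a fixed-payment loan against an income share. The principal, interest rate, term, and share percentage below are made-up illustrative numbers, not figures from the post:

        # Compare a fixed annual loan payment with an income-share payment across
        # a few hypothetical income outcomes. All parameters are illustrative.
        def debt_payment(principal, annual_rate, years):
            """Fixed annual payment on an amortizing loan (standard annuity formula)."""
            return principal * annual_rate / (1 - (1 + annual_rate) ** -years)

        def equity_payment(income, share):
            """Annual payment under an income-share contract: a flat fraction of income."""
            return income * share

        principal, rate, years, share = 30_000, 0.06, 10, 0.03
        for income in (20_000, 50_000, 150_000):
            print(f"income ${income:,}: debt ${debt_payment(principal, rate, years):,.0f}/yr"
                  f" vs equity ${equity_payment(income, share):,.0f}/yr")

    The fixed payment is identical in every row while the income-share payment scales with the outcome, which is the risk-transfer argument in miniature.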

    Mentors – Incentive Alignment

    The modern world is very complicated, and we can’t expect people to understand all of it. Which is fine, except when it comes to understanding contracts, or credit cards, or multi-level-marketing schemes. At times the complexity of the modern world allows people to be taken advantage of, even in transactions which would be perfectly legitimate had the participants been better informed.

    Equity investments have the potential to help a lot here. All of a sudden I have a third party who is genuinely concerned with maximizing my income. I could ask them for advice about looking for a job. Perhaps they could negotiate a raise for me. Indeed, they might even line up new jobs for me! Obviously their incentives are not totally aligned with mine. Except insofar as happy workers are more productive, they might not put much weight on how pleasant the job is. But true incentive alignment is rare in general; even your parents’ or your spouse’s incentives aren’t perfectly aligned, and the government’s certainly aren’t. Even better, it’s very clear exactly how and to what degree my investor’s incentives are aligned with mine: I don’t need to try and work out their angle. I can trust them on monetary affairs, and ignore their advice (if they offered any) with regard to hobbies or friendships or whatever else.

    Indeed, you could imagine schools that funded themselves entirely through equity investments in their students, and advertised this as a strength: their incentives are well aligned with their students. They would teach only the most useful skills, as efficiently as possible, and actively support your future career progression. This is basically the model App Academy uses:

    App Academy is as low-risk as we can make it.

    App Academy does not charge any tuition. Instead, you pay us a placement fee only if you find a job as a developer after the program. In that case, the fee is 18% of your first year salary, payable over the first 6 months after you start working.

    Compare this to current universities, which actively push minority students out of STEM majors to maintain graduation rates.

    • Blogospheroid says:

      Have you read “The Unincorporated Man” by Dani Kollin? The concept is similar but kinda dialled up to 11 in that novel.

    • chaosbunt says:

      The main problem with implementing a market for human equity capital is, imo, information distribution.
      An investor will want to know if he can expect you to be a profitable investment. Markets do not magically generate data; investors will have people calculating your chances of future income the way they currently calculate the prospects of businesses. Your major and your school’s reputation are not the only interesting things here. Since people are very different, there are costs of obtaining data on possible investments: reliable IQ testing on every possible investment would probably be compulsory, that’s the first thing that comes to my mind.
      More importantly: socio-economic background is the second big determinant. If you are from the ghetto: sorry, our statistics say we can give you only so much. But your white male suburb equivalent gets double. That is to say, existing biases would be perpetuated.
      And as I already mentioned, markets do not magically make perfect distribution happen. Things like investment bubbles or information-related market failures like adverse selection seem like likely problems to me here.
      And if a market like this were to emerge, Google would maybe be something close to a monopoly, given their vast amounts of data and their expertise in handling it.

    • Airgap says:

      Do you work for App Academy? Or did you just use their copy?

      In any case, App Academy probably does the same thing. The only way their business makes sense is if they choose people likely to be hired. And they only accept 5% of the people who apply.

    • Julie K says:

      That system was used in the children’s novel Ballet Shoes (1935); the academy trained the main characters for free in exchange for 10% of their earnings for 5 years. I wonder how many readers have gotten inflated ideas from that book of how easy it is to have a career in acting or dancing?

      • Deiseach says:

        Isn’t that acting like an agent? I’ll promote you, get you jobs, and in turn take a cut of the earnings? An established ballet school with a good reputation probably could get its star pupils placements in jobs where theatre companies, etc. come asking “We need new members for the corps, we need a new young supporting dancer, with chances of becoming a lead in time” and so forth – something like the milk round in universities.

        I can see where, with a school that has contacts and networking being able to push your career and get you into auditions, this would be more support for one of those graduates than someone trying to do it on their own without the same backing – “pull”, in other words – so it might be less of a risk for the school to make the offer “You are very talented, in our professional opinion you can have a successful career in this field, so in exchange for free tuition you turn over this proportion of your earnings for that many years” to certain pupils (not all of them).

  37. Mat Hatter says:

    There has been some discussion on Less Wrong recently about brain preservation and (Singularity) scenarios worse than death. People focused on the relative probabilities of scenarios worse than death vs. better than death and their badness.

    But I wonder if there’s another factor, which is size.

    In some scenarios, there is person copying technology and you get copied a lot, and in others, you just exist linearly. And there might be a correlation between “you get copied a lot” and things like “your rights matter”, “your wellbeing matters” and/or “what you say you want matters”.

    Imagine for example a scenario where each human alive (or brain preserved) at the time of the Singularity gets an endowment of a fraction of the cosmic commons. Or where humans can invest some money and get Singularity-size growth rates out of it. Then individual people may end up with a couple star systems dedicated just for them. And they can use the resources to make copies of themselves.

    A conjecture may be that these scenarios are less probable than most others, but their personal importance is bigger because trillions of copies of the person exist for a total of quadrillions of life-years. And it may be relatively unlikely that a system would spend these resources on copying a person if their wellbeing, statements of will and/or rights didn’t matter to the system.

    That is, “scenarios with many copies of you” may be mostly disjoint from “scenarios worse than death and you can’t die”.
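
    As a back-of-the-envelope illustration of that size effect (my own toy numbers, not anything from the Less Wrong discussion), a much less probable scenario can still dominate a probability-weighted sum of life-years if it contains enough copies:

        # Probability-weighted life-years for two made-up scenarios.
        scenarios = {
            "single copy, bad outcome":  (0.10, 50),     # (probability, life-years at stake)
            "many copies, good outcome": (0.001, 1e15),
        }
        for name, (p, life_years) in scenarios.items():
            print(f"{name}: {p * life_years:,.0f} probability-weighted life-years")

    Whether that weighting is the right way to aggregate across copies is of course its own contested question.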

  38. Carinthium says:

    Requesting some help with something. There was a certain philosophical question I was considering, and wanted a rationalist perspective on it.

    What are good criteria for saying that a hypothetical person X believes a hypothetical Act Y morally wrong, or that hypothetical Person X feels obliged to do or not to do it? In particular, I am considering the possibility of cases where a person believes a hypothetical Act Y morally wrong without an explicit belief in such.

    This is important because, if the hypothetical Act Y is not morally wrong, then whatever the person says, they are acting contrary to their own Coherent Extrapolated Volition, and therefore their reasons for not wanting to do Y can be called invalid.

    • chaosmage says:

      I don’t understand your question. Could you give two examples that differ at the object level?

      • Carinthium says:

        I’ll try to clarify as best I can with a variety of examples.

        Respondents, please keep in mind my purpose for getting this data. The relationship of the concept of Belief to the concept of Coherent Extrapolated Volition is the reason I haven’t tried to solve this by subdividing the term ‘belief’.

        Also, any of these are probably more complex than yes-no answers. I’m simply doing my best to figure them out.

        Example A: Jim claims that fraud is morally wrong, but does it constantly and keeps giving justifications of one sort or another. Does he Believe Fraud is morally wrong?

        Example B: Jill has been persuaded by a philosopher that fraud is not morally wrong, but whenever she tries to commit fraud she feels queasy about it and takes a lot of pushing to actually go through with it. Does she Believe Fraud is morally wrong?

        Example C: Jack claims that he doesn’t see anything morally wrong with fraud, and the philosophical opinions he espouses practically require this. He claims that he simply doesn’t like fraud.

        However, Jack has never committed fraud even when he could benefit considerably, and does his best to suppress it in others whenever he sees it. Does Jack Believe Fraud is morally wrong?

        • Deiseach says:

          Example A: Jim sounds like the classic hypocrite; he may or may not believe in the immorality of what he is doing, but he at least believes that in some sense he is failing to live up to standards and rationalises/excuses what he is doing by re-defining it as “Fraud is A, what I am doing is B, so what I am doing is Not-Fraud”.

          Example B: Jill seems (to me) to indeed have some moral qualms about fraud; you distinguish between her and Example C: Jack in that he says he simply doesn’t like fraud. So her unease, and her needing to be coerced or strongly persuaded into committing fraud, suggests that she does assign a moral value to it as a crime of some kind.

          Example C: Jack is a bit more complicated. Does Jack believe it is morally wrong, or simply wrong because it reduces trust which is necessary for the functioning of society, or means people do not get their due rewards which is bad capitalism, or the like? I could see Jack acting out of dislike, and trying to persuade people not to commit fraud, not because he thinks it really is morally wrong but on the same level as trying to persuade people not to play heavy metal loudly in public.

          • Carinthium says:

            Got it. Thanks. I’m going to go on to explain where I would go from those conclusions, and I want your thoughts on them.

            A lot of people IRL, whether they reference morality or not, appear to implicitly believe certain actions to be morally wrong. I would argue that on an instinctive level they have a delusion of Moral Wrong in the ‘objective’ sense.

            This means that most people’s CEV is against the concept of them acting morally, since it is rooted in a delusion.

        • Troy says:

          I think the cases are underdetermined. In each case you’ve got conflicting evidence about the agent’s beliefs: e.g., their self-reports vs. their actions/emotions. Which of these is more powerful evidence is contextual. For instance, in the first case Jim might be lying about his moral beliefs, or he might be weak-willed. Which one is more likely depends on my background knowledge about Jim.

          • Carinthium says:

            Could you please clarify for me the criteria you would use in a case like this?

            As I said, I suspect in a lot of cases a person’s CEV contradicts their desires because they have the false premise, whether on a conscious or subconscious level, that their actions are morally wrong. Therefore, the rational thing is to ignore their moral desires altogether.

            In my view, this proposition is in need of scrutiny which is why I started this.

          • Troy says:

            Could you please clarify for me the criteria you would use in a case like this?

            I don’t think there are any infallible external criteria that let you determine what someone (else) believes. I think this is true for any beliefs, including moral beliefs. But I reject any kind of materialist or functionalist analysis of belief, so on my view this isn’t something that there ought to be some surefire method of determining.

            I think that the kinds of considerations at play in your scenarios — actions, belief reports, emotions, intentions, etc. — can all be evidence that someone has particular (moral) beliefs. But I don’t have a general answer for which of these is the strongest evidence; that depends on your knowledge of the person in question (e.g., are they prone to lie? are they weak-willed? etc.).

          • Carinthium says:

            Next question. Do you have any data on a proper definition of what it is to believe, or at the very least anything useful in defining what it is to truly believe in the moral wrongness of an action committed or contemplated?

          • Troy says:

            To answer that question we’d need to answer two things:

            – What is a belief?
            – What is the meaning of the normative concepts we employ when we believe, e.g., “I ought not X” or “It is wrong to X”?

            For the former, I’ll say that it’s a propositional attitude that represents its object as true. For the latter, well, tough question, but I’m inclined to give a quasi-Aristotelian analysis of ‘ought’ that is relative to the kind in question and its intrinsic telos.

            I doubt either of these will be particularly amenable to the rest of your philosophy.

          • Deiseach says:

            (Crap, I wish there were a better way of replying to comments when the sub-thread gets long).

            so I’ll come out and say it- go ahead and rape and murder, whatever floats your boat. It overrides other people’s preferences, but you have no rational reason to care

            Okay, that’s making it plain, and thanks – I didn’t want to go there e.g. “is rape wrong?” because of fears of invoking something analogous to Godwin’s Law (so you’ve done the equivalent of comparing the other person’s position to the Nazis, game over as far as reasonable discussion is concerned).

            A rational reason to come to a mutual agreement: “I won’t do this to you and you won’t do this to me; if either of us breaks our agreement, the penalties may range from (depending on level of culture and society) my family calling a blood-feud to you getting arrested”. Self-interest: it may not be rational of me to care about what happens to a stranger, but I am (to someone else out there somewhere) a stranger also, and so someone else may attempt to rob, rape or murder me.

            I do not wish to be robbed, raped or murdered, as these things decrease my happiness and have a negative effect on utility (real life example: I had my purse stolen by a pickpocket in a shop some years back, and it was very, very inconvenient and bothersome, and I lost an amount of money I could not easily spare). If I indulge my preferences re: robbery and arson without limit, so will other people. For my own protection, it may be rational to agree to a limit on expressing and following my preferences where others, too, are also limited; such limits to be overseen and enforced by mechanisms like family blood-feuds, legally hired assassins, or the courts.

          • Carinthium says:

            Regardless of what we do, society will remain basically ‘moral’. Morality is not in fact founded on a social contract but on blind human instincts; plus, statistically, Slate Star Codex is a metaphorical drop in the bucket.

            Given that there are in fact plenty of people offending against popular moral codes already, the fact is practically demonstrated. They are punished when caught, yes, but nothing about amoralism says not to factor in fear of punishment.

          • Contractarianism isn’t an explanation of what largely motivates moral behaviour, which is indeed instinctive; it is an explanation of why moral behaviour isn’t irrational, why it can be rational for people to play along with it where it exists, and why it would be necessary for rational people to reinvent it where it doesn’t exist.

          • Carinthium says:

            I concede this much- in a hypothetical world where almost nobody acted morally OR where moral instincts did not exist, contractarianism would probably make sense (there are possible exceptions).

            The reason it doesn’t is that society is so stable that even if all of Slate Star Codex became psychopaths tomorrow, the threat to stability would be insignificant.

            Therefore, the only problem is a selfish tradeoff between fear of punishment, potential gains, and probability of success. Which means amoralism, not contractarianism.

        • Cauê says:

          I think this is a confused question, and the answer won’t take the shape of “X really does/doesn’t believe Y to be morally wrong”.

          For instance, B and C could come from System 1 vs. System 2 conflicts, as in utilitarian calculations that go against instinct – like, say, a trolley problem in B and violating purity/sacredness norms in C.

          There are many possible explanations for A, but I don’t think it’s very useful to look for what he *really* believes.

          • Carinthium says:

            The problem with that approach is that I desperately need a proper concept of a ‘preference’ in order to ground a theory of decision making. Having refuted the idea of objective Right and Wrong, without preference I’ve got nothing.

            Coherent Extrapolated Volition, or at least some philosophically 100% coherent concept of a Preference, is thus necessary.

            If you have better ways to deal with the problem, please tell me.

          • Deiseach says:

            a person’s CEV contradicts their desires because they have the false premise, whether on a conscious or subconscious level, that their actions are morally wrong

            I think there are a couple of things that need to be teased out here. One is that you’re saying you don’t think there is actual objective Right and Wrong, so morality is based on “What I want versus what We as Society agree is permissible”.

            I’m not going to touch that, since it’s a very big topic.

            On the other hand, I don’t necessarily agree that a person’s volition can go against their desires based on moral premises. There is nothing moral or immoral about me eating that third slice of chocolate cheesecake. I certainly desire to do so. Yet my will or conscious decision may be not to do so, not because of morals, but because (a) I’m diabetic, I need to control my blood glucose levels (b) I will need to exercise off the extra calories and I haven’t the time to do the extra exercise (c) this is bad dietary choices and unhealthy (d) that would be greedy of me since the rest of us have not had any cheesecake yet, it would be polite and thoughtful to leave some for the others (e) no, I’ll keep it for tomorrow and have it then, etc.

            Over-riding your desires need not be a question of morality. Ignoring ‘moral’ intuitions could be irrational, not rational.

          • houseboatonstyx says:

            (d) that would be greedy of me since the rest of us have not had any cheesecake yet, it would be polite and thoughtful to leave some for the others

            All those could be seen as moral, especially (d), even without the Virtue application.

          • Carinthium says:

            When did I ever say I believed in a morality based on what I want vs. what society believes is permissible? I don’t believe in any sort of morality, including the one you seem to be outlining.

            Sometimes I agree that overriding desires has nothing to do with morality. But given a moral belief is outright false, surely a person’s CEV is to ignore it?

          • Deiseach says:

            I really don’t understand the distinction you are making. What I’m getting here is you seem to be saying if I think I should not hit that person there over the head with a rock and steal their wallet because it’s immoral, there is no such thing as morality because it’s all false so I should ignore that ‘moral’ impulse and do it?

            But if I base my decision not to hit that person there over the head with a rock and steal their wallet because there’s a police officer watching and I’ll be arrested and go to prison, then it’s okay for me to over-ride that impulse?

            I’m trying to understand if you think any use of ‘don’t do this because it’s a bad idea’ is acceptable, because from what you’ve been saying, I’m getting the impression (and I’m probably wrong here) that you think a person’s Coherent Extrapolated Volition should never over-ride or contradict their desires and impulses, so they should always do what they desire to do based on what they prefer, and I don’t think you can be saying that, so I don’t – as I said – understand what you’re trying to get at. You seem to have a very strong objection to even the term ‘morality’, and I’m not sure if this is because you think it means Good and Evil, or God and The Devil, every time it’s used rather than sometimes being used as a label for “actions that are praised or blamed by social mores”. In that case, okay, I should not have said a morality based on the tension between ‘what can I get away with doing before it triggers unpleasant consequences for me, either from social disapproval up to imprisonment or execution, private vengeance or retribution from those I have injured, or harm to my physical/mental well-being by over-indulgence?”, I should have said “a system of governing one’s behaviour that someone follows consistently when judging whether to act or not act on a preference, wish or desire”.

            tl;dr: You think there’s no such thing as morality for whatever reason, and you want suggestions for ‘what should I base my deeds on’ instead? Given that (presumably) you’re not saying ‘sure, go out and rape, murder, rob and commit arson if that’s what floats your boat’, you are asking us “I don’t want you to say ‘rape is wrong’ because I don’t believe there is any such thing as morality; I want you to tell me ‘don’t commit rape because – ‘ where the ‘because’ is not based on morals”?

          • Carinthium says:

            As to your first example- Assuming you have no other reasons not to hit them over the head and steal their wallet, yes you are being irrational.

            On the other hand, your desire not to go to prison is not based on a moral impulse, and therefore it isn’t grounded in a delusion.

            My philosophy is grounded in the Amoralist Challenge: why should a person do anything? The only known way around it is to respond ‘You actually do want to do it’ or similar and use a person’s wants as a foundation for ‘ethics’.

            Sometimes a Coherent Extrapolated Volition ‘should’ override an action, however, because a person is failing to take into account how another want, not built on a delusion, would be adversely affected by their actions.

            You refer to “a system of governing one’s behaviour that someone follows consistently when judging whether to act or not act on a preference, wish or desire”. The problem is that without very careful checks, this falls back into ordinary moralism. One of my priorities in constructing my philosophy is to build an anti-moral identity into it as a safeguard against that.

            I never said rape and murder are wrong. They are in fact okay from a rational perspective. On this board I am very careful not to let my emotions get in the way of my reasoning, so I’ll come out and say it: go ahead and rape and murder, whatever floats your boat. It overrides other people’s preferences, but you have no rational reason to care.

        • blacktrance says:

          Example A: It seems that either Jim is sending a false signal that he’s a moral person by claiming to believe a common moral claim, or that he genuinely misunderstands what morality entails. In the latter case, he’s not talking about morality but about “morality”, as if he were an anthropologist describing the norms of a foreign tribe – when he talks about fraud being morally wrong, he only means that it’s morally wrong according to other people’s moral intuitions, some ethical theory, or a societal consensus, but not that he believes that he shouldn’t do it.

          Example B: Jill’s beliefs and moral intuitions are in conflict – she may believe that it’s morally okay but alieve that it’s wrong.

          Example C: Jack sounds like he could be the inverse of Jim. Where Jim believes that fraud is wrong according to “morality”, but really not wrong, Jack could believe that fraud is okay according to “morality”, but really wrong. Alternatively, he may really believe that fraud isn’t wrong but has a distaste for it (unless you’re excluding that by saying that he refrained from fraud when he could benefit considerably?), similarly to how I can say that microwaving cheese isn’t morally wrong but I still dislike it.

          • Carinthium says:

            Requesting advice, then.

            Lately I’m considering the proposition that most people have ‘wants’ based in a delusion of moral rightness. Because of their basis, these wants contradict the person’s own C.E.V, and are irrational.

            This would be a powerful amoralist argument if true, but I’m only considering it. I’d like your thoughts on how true it is or isn’t.

          • If the person doesn’t act on them, where is the problem?

            If the person does act on them, then who is to say those wants aren’t genuine?

          • Carinthium says:

            That’s not helpful, because what I’m looking for is a decision process that can actually help a person make a rational decision in a given circumstance.

          • Counterarguments generally aren’t helpful… in the sense of bolstering the original argument.

          • Carinthium says:

            Your counterargument seems to be a pragmatic one itself though. I can’t see any way to interpret it which isn’t pragmatic.

          • OK, it’s pragmatic? Why does that matter? Because the pragmatic can’t possibly be ethical? But the contractualist argument is precisely that there is a pragmatic argument for ethics, given certain interpretations of ethics, and pragmatism.

            You were looking for an argument to the effect that if someone acts morally, they are being irrational, because the preferences they’re acting on are not their true preferences, the ones they would have if they considered carefully enough, or something. But contractarians actually do have the converse argument: that if you consider carefully enough, you would want to get into arrangements where you refrain from mugging people, and in return, they refrain from mugging you.

          • Carinthium says:

            Most humans act out of blind moral instincts, not social contracts. Combined with the fact Slate Star Codex is statistically a drop in the ocean, I don’t see how the contractarian argument can possibly work.

            The clincher, of course, is that for a genuine social contract to work you need a critical mass of actual followers of contract morality. Otherwise your behaviour won’t affect that of others, so a contract is pointless.

          • It isn’t a clinching objection to contracts, in any context, that people need to agree with them.

          • Carinthium says:

            In practice, how many contractualists actually meet with other contractualists? And how many form social agreements on contractualist terms?

            I don’t think you’re quite this stupid, but since you probably won’t respond anyway I’ll point out for any readers that my point about blind instincts refutes the argument that ordinary people make contracts along contractualist lines anyway.

          • Irrelevant says:

            In practice, how many contractualists actually meet with other contractualists?

            Irrelevant. Contractualism is an effective model of what happens even though the actual contracts are primarily internally negotiated and imaginary.

    • Irrelevant says:

      What you appear to be attempting to characterize is the distinction between a concept’s terminal emotional context and a concept’s verbal reasoning links. The two systems influence each other but do not correspond especially well, and neither is more real than the other, and the non-verbal system is prone to forking concepts in such a manner that what “the same idea” is adjacent to can change based on how you got there. Just a few of the reasons that Coherent Extrapolated Volition is wishful thinking.

      • Carinthium says:

        Please clarify. What concept of preference would you use to replace the concept of Coherent Extrapolated Volition?

        I’m kind of in a bind here because my philosophy depends on having some concept of preference in it, and it pretty much can’t stand up unless I have a concept I can (metaphorically) point to and say it’s what a preference is. Coherent Extrapolated Volition is my current one.

        Are there any better ones you know of?

        • FullMeta_Rationalist says:

          Maybe by “CEV” you mean revealed preferences. tl;dr actions speak louder than words.

          • Carinthium says:

            That’s not helpful for my purposes, as one of the things I need a concept of “preference” for is as part of a useful decision procedure. Revealed preferences don’t fit into a decision procedure.

        • Irrelevant says:

          You probably won’t like my answer then, because my conception of the issue is that we don’t have preferences, we have emotional test barrages. Preference is not a coherent entity. We have multiple systems generating preference-like signals, but those signals are different in nature, and the systems are internally inconsistent, don’t reconcile, and don’t prioritize.

          Getting more specific, we seem to have three test methods: empirical testing, association testing, and narrative testing. Empirical testing obviously means doing the thing to find out if it’s enjoyable, and returns experiences. Association testing throws an idea into the black box of non-verbal reasoning and lets the mind take its random walk through adjacent ideas until it finds something interesting, and gives back what I referred to as terminal emotional contexts. Narrative testing uses verbal reasoning to construct a scenario and gives back a determination of whether its ending is happy or sad.

          I assume none of those is satisfactory to you.

          • Carinthium says:

            I suppose I should explain ‘from the ground up’ why I need this.

            Any substitute for ethics must answer the question ‘What should I do?’. Otherwise it is useless.

            There is no such thing as moral right and wrong, so I can’t use that and have a correct philosophy. So I use preferences instead and ground “ethics” in the premise that a person ‘should’ attempt what they want, modified by CEV.

            Normally I’d have no problem with deconstructing the concept of a preference, but how am I supposed to have something to take the place of ethics without it?

          • If the preferences you are taking into account are other people’s preferences, you are going to end up with something identical to ethics FAPP (for all practical purposes). If moral right is respect for others’ preferences, it’s more complicated than ditching right and keeping preference.

          • Carinthium says:

            Actually, my ethical system has no respect whatsoever for other people’s preferences. As you said, that would just be ethics.

            Originally, I had an amoralist theory of ‘Just do what you want’, but realised that was too simplistic so I’m trying to turn it into a proper theory.

          • Carinthium says:

            I do have a rather tentative “hack” to replace C.E.V I’ve thought of. Requesting scrutiny.

            My hack would be the following: Use the concept of personal self-interest, but entirely remove the moral dimension from it, as morality existing in the first place is illusory.

            What I hope is that this self-interest can be identified with a person’s wants whilst at the same time avoiding the problems above. I can’t use Coherent Extrapolated Volition, however, as I run into the criticisms already used against it.

            The best I can really hope for is that this approximates what I’m really trying to capture with my concept, but maybe it at least does that.

            The problem so far is that using this may lead to other moralised ideas of self-interest, such as a person’s ‘self interest’ being to be nicer, smarter etc. for reasons similar to virtue ethics, regardless of personal wants.

            Thoughts?

          • > as morality existing in the first place is illusory

            How can you be so sure of the case against morality when you are so unsure of the case for amoralism?

          • Carinthium says:

            Have you considered the possibility I thought about it and changed my mind?

  39. li says:

    I have heard that the cryonics companies refuse to preserve people who suicide. What could a terminal patient, who wishes to die before the illness consumes his or her brain, do? Would dying of exposure while climbing a very cold place work?

    • jaimeastorga2000 says:

      Kim Suozzi an heroed herself by refusing food and drink in a deliberate bid to preserve as much of her brain as possible for cryopreservation. Alcor didn’t have a problem with it.

      • I’m kinda mixed about this. It’s one thing raising money for a medical procedure that has a demonstrable track record, such as a bone marrow transplantation, but cryonics… just seems like buying scratchers.

        If it does work, she is going to be owing people a lot of interest

        • Clockwork Marx says:

          I guess you could also look at it like the “Make A Wish” charities, more of a means of fulfilling a last request than funding a medical procedure.

        • syncytial says:

          What’s the evidence that makes you think cryonics is just buying scratchers?

          Seems to me like cryonics by definition can’t have a proven track record. If it did, it’d be called reversible suspended animation. In the absence of this, we must rely upon indirect evidence.

      • jaimeastorga2000 says:

        Huh, it appears I fucked up the link. The article I meant to link to was “The Cryopreservation of Kim Suozzi” (linking to LessWrong because direct links get eaten by the spam filter).

  40. Kyle Strand says:

    Scott:

    Did you invent the idea of Schelling Fences? I assumed this was a known game-theoretic concept, but my googling has so far only revealed that Schelling Points are a well-known phenomenon. I’m trying to write a post on Math.SE about the concept, and realizing that if you’re the first person to come up with the idea, I may (bizarrely) be the first to apply it to a standard game-theoretical type game, at least in a public google-able forum. (Note: as long as that post is, it’s woefully incomplete, and what’s there is in need of improvement.)

    (By “standard,” of course, I mean “not involving murder-Gandhi.” But really what I mean is that you were making a (primarily) sociopolitical point while I am trying to make more of a purely game theoretical point.)

    (One extra interesting thought: if, instead of the payout-per-decision-threshold growing exponentially, the probability-of-losing-per-decision-threshold shrank exponentially, would a Schelling Fence still be helpful?)

    • Khoth says:

      He didn’t call them that, but I first learned the concept of Schelling fences from…. Schelling.

    • Scott Alexander says:

      I made up the term, but I got the concept from David Friedman, who might or might not believe they’re a straightforward application of Schelling.

    • RCF says:

      I came across the term “red line” that describes pretty much the same concept, but it’s rather hard to google for that meaning, given the more common meaning of the phrase “red line”.

      “Letting c_t be the cumulative total after t rolls, and f(n) be the nth ‘losing’ number, a choice occurs whenever

      d(t) := f(n) − c_t ≤ 6”

      Shouldn’t that be == rather than :=?

      I’m not clear on what the point of your article is, but it seems to be that you see a paradox in the optimum strategy being one such that the probability of getting nothing goes to one. You seem to be calculating with a linear utility function. In reality, most people would not prefer a 1% chance at a trillion dollars over a certainty of 9 billion. Now, if you assume an unbounded utility function, then you can adjust the losing numbers so that one has a positive expected value at each decision point, so that doesn’t alter the situation much, but you should acknowledge the issue.

      This is rather similar to the St. Petersburg paradox.
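
      To make the utility point concrete, here is a minimal Python sketch (my own illustration, not from the Math.SE post); the baseline wealth w0 is an assumed parameter, included only so that log-utility stays defined when the gamble pays nothing:

          # Linear vs. concave (log) utility on the gamble described above.
          # w0 is an assumed baseline wealth, not a figure from the post.
          import math

          p, jackpot, sure_thing = 0.01, 1e12, 9e9
          w0 = 1e5  # assumed baseline wealth

          # Linear utility: compare expected dollar amounts.
          ev_gamble = p * jackpot        # 10 billion
          ev_sure = sure_thing           # 9 billion
          print(ev_gamble > ev_sure)     # True: linear utility prefers the gamble

          # Log utility of final wealth: compare expected utilities.
          u_gamble = p * math.log(w0 + jackpot) + (1 - p) * math.log(w0)
          u_sure = math.log(w0 + sure_thing)
          print(u_gamble > u_sure)       # False: the sure 9 billion wins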

  41. Anonymous says:

    Are there studies on questions like to what extent musical ability is innate versus acquired, or whether there is some innate aptitude for learning musical ability, or to what extent musical ability is genetic?

    • Rebecca Friedman says:

      I don’t know about studies, but have fairly strong anecdotal evidence that A) there is a major genetic component to musical ability (my grandfather was utterly non-musical, and married a musical wife; my father is fairly non-musical, and married a musical wife; I am fairly close to being as untalented as you can be and still become a useful member of a choir [did it, but it took me ten years of being a not-very-useful to not-at-all-useful one]; this pattern holds true for the rest of my family, ie, the siblings of my mother and grandmother were musical, my paternal aunt isn’t especially but her grandchildren are [reversion to mean by way of more musical spouses], my brother has approximately my level of talent, etc. – all of which comes out looking like a fairly strong genetic effect to me) and B) musical ability can be acquired given a fairly low level of innate talent (not tone deaf), but it seems to go a lot faster the more innate talent you have.

    • zz says:

      IME, after having played cello for 12 years (and meeting hundreds of musicians at varying levels):

      Musical ability is multifactorial. One important factor is “how good are your ears”, which is somewhat trainable… but most of my musical friends were born with better ears than I have after 12 years of study.

      Happily for my borderline-tone-deaf self, there’s other factors! If you play an instrument, dexterity is an important input. (Alas, I’m a clumsy fuck*, and will probably never be able to play something like this solo.)

      Music is also extremely g-loaded: there are many, many things that demand your attention, and if you are not very smart, you will not be able to read music while making your left and right hands do related, but entirely different, things, while listening to yourself, while hearing how you fit into the ensemble’s sound, while watching the conductor, etc. This is confounded by “the best musicians tend to be good because they’re vying for a spot in an Ivy League school, and were already smart enough to have a reasonable chance of getting in,” but… in my high school orchestra, of the “inner circle” of the 8 best string players, 6 were taking BC calculus (and of the two others, one was in AB and the other qualified for CTY). At the music camps I attended, of my immediate social group (consisting of “anyone who I can get to play Apocalyptica with me”), 4/4 are now at an Ivy. As you go to better schools, the classically trained scene gets stronger (no coincidence String Theory cropped up at Columbia). I could go on, but the correlation is waaay too high for me to not look over there. (This is also the best explanation of how I managed to get into the orchestras I did, because I’m not talented enough otherwise and I definitely wasn’t practicing more than anyone else there.)

    • Deiseach says:

      Most of my father’s side of the family are musical, including several cousins who make a living as professional musicians (one is even a bona fide art rock/experimental pop star!) My father could play the accordion, my youngest brother was a cornet player in a local brass and reed band, my younger brother was picked to sing in the chorus in school plays, and my younger sister can play guitar and keyboards, is currently learning the ukulele, and her sons are doing very well in their music exams.

      I, on the other hand, can’t carry a tune in a bucket and have failed abysmally to learn any instrument. Those genes obviously never made it as far as me 🙂

    • Held In Escrow says:

      The thing with music is that different instruments require different skills. I’m reasonably tone deaf; I can mimic a sound if I have some time to try and sync with it, but I can’t tell if a note is higher or lower until you approach an octave apart, and trying to figure out what a note is based on hearing it is a fool’s errand.

      But my instrument of choice, which I got really good at, was the trumpet. An instrument that provides instant feedback from the mouthpiece to how you’re playing in addition to the sound. Just through learning the mouthfeel of different notes I could tell what I was getting before my ears processed it. So playing trumpet was, in many ways, a tactile experience for me, and that worked.

      Can’t sing worth a damn and regret never learning piano, but that’s what worked for me.

    • Alexander Stanislaw says:

      Are you asking out of curiosity, or because you want to know what goals to set for yourself or someone you know as a musician?

  42. maxikov says:

    The link is to be announced, but we’re gonna be broadcasting the HPMoR Wrap Party from Berkeley, with Eliezer reading, the Bayesian Choir singing, and all sorts of stuff.

  43. ari says:

    From the Vice article, after reader cameos had been mentioned:

    Why are regular, hard-working humans spending their time on a piece of fan fiction that doesn’t even include a subplot where Hogwarts falls in love with a gigantic squid?

    I read this sentence out of order. My disappointment when I read the beginning of the sentence and found out that this was something that in fact wasn’t in the story was palpable.

    e: Also, this bit amuses me:

    The most common thinking errors humanity makes is something called a systematic thinking error, or cognitive bias. Like the sunk cost fallacy.

    This can be explained by remembering the last time you went out to a club because, “I’ve already drunk six beers so might as well.”

    Way to confuse the sunk cost effect with the what-the-hell effect there. It’s apparently been discussed on LW.

    • Toggle says:

      My disappointment when I read the beginning of the sentence and found out that this was something that in fact wasn’t in the story was palpable.

      In fact, it was a reference to the very NSFW but definitely real First Encounter

  44. Andy says:

    I’ve been thinking about science fictional technologies, and wondering which ones the SSC commentariat think have the potential to change the way people live, and institutional structures, in a big way.
    (And if someone can name what science fiction author I’ve read waaaaay too much of from this list, take a crack.)
    My list:
    1) Magical lie detector that works with over 90% accuracy
    Even if it can’t predict future behavior, it could revolutionize the criminal justice system.

    2) Weapon that knocks someone unconscious without risking death
    Policing (and counterinsurgency operations) would get a lot more doable if operators didn’t have to risk making martyrs everywhere they go.

    3) AI
    Probably a duh, but I don’t know how to define a friendly, smarter-than-human AI.

    4) Uterine replicator
    Both as risk reduction for the child-creation process and as a way to create odder societies like all-male societies and transhuman genetic engineering.

    • Empathy pill, so that sociopaths get a chance to connect with others and sample what everyone is always carrying on about. Though I can think of ways other pills of this category could go horribly wrong.

      • Highly Effective People says:

        I’ve been leafing through a pretty interesting review on psychopathy lately (Skeem 2011), and between that and some stuff I had seen previously on the ability to recognize facial expressions, an interesting picture emerges.

        Evidently psychopaths can empathize when directly instructed to do so and in fact show activity in the same brain regions as non-psychopaths. And they are also capable of ‘reading’ facial expressions for the most part (apparently they have more trouble with differentiating fear/disgust) unlike people with autism spectrum disorders. It’s just not something that’s active all the time.

        The idea of voluntary v involuntary empathy has very interesting implications if you think about it. Presumably psychopaths who were convinced to keep empathy “on” more of the time would lose some of the phenotype related to the callousness factor. And perhaps subclinical psychopaths could be trained to turn their empathy “off” situationally which could be valuable in the military or law enforcement.

        • Shenpen says:

          There is a different problem here. For some reason people today believe that empathy equals compassion, and that people who are not compassionate must lack it and must be socio- or psychopaths.

          Actually empathy is mind reading, not compassion. There is no rule whatsoever that if you know how other people feel you give a shit.

          This goes both ways. People who have no empathy may want to be good people, just not necessarily know how to. People who have a lot of empathy may not give a shit about other people’s feelings, which they know perfectly well, or may even abuse that knowledge against them, as manipulators or torturers.

          I don’t even understand why people think empathy is niceness. Perhaps they desperately want to think everybody wants to be nice, just that some don’t know how.

          • Zorgon says:

            “Empathy” became a leftist shibboleth for “tolerance” some 10 years ago. I noticed it starting to crop up on places like Daily Kos around the end of the Bush presidency.

            I’m not sure if that’s the “Empathy==Niceness” phenomenon you’re talking about, though. The term got abused towards “happy smiley bouncy feelings” around the beginning of the 90s in some subcultures (particularly grunge) so there’s a few avenues.

          • Held In Escrow says:

            I think a large split in today’s leftism comes from the sympathy/empathy divide. Callout culture comes from a place steeped in sympathy (look at the poor oppressed people) but completely and utterly lacking in empathy (shame is the best tool ever!). Socialist leftism runs on empathy (people are just trying to get by and be a little better off than they were before in a broken system), extended to everyone involved, but isn’t exactly sympathetic to any single plight.

            That said, I would agree that empathy and tolerance definitely have crossover value, insomuch as the brainbug equivalent of smallpox in “tolerant of everything but intolerance” is really just another form of intolerance. Which is fine; just admit you’re not actually tolerant and explain your reasoning, don’t try and dress it up in high language.

            Firstly, it would really suck if discussions about empathy were driven primarily by politically motivated reasoning.

            Secondly, I definitely agree that empathy can be used and misused in many ways. So let me clarify the way I was using it and what I think is the most important sense in which it is used (compatible with current psychology afaik) – empathy is where your brain makes you feel the emotions that you believe you see in others. So when you see someone trip and break an ankle, you wince and kinda feel the pain yourself a little. When you see someone who is truly happy, you can’t help but experience a little of their joy vicariously. We can easily make a fairly common sense evolutionary argument why this is useful – it helps people act more effectively as groups. We cannot do harm to others so easily because basically it’s quite unpleasant. And helping others feels good. Groups with this trait, if they perceive emotional needs accurately, are going to be pretty good at efficiently allocating emotional help and its physical correlates. I can’t prove it, but I strongly suspect that without this mechanism coordination is not good enough for a complex society to develop.

            There’s two distinct parts to this. First is effectively recognising emotions in others. The second is reproducing them effectively. Empathy is not just mind-reading, it’s mind reproduction. The “first-person” simulation, the experience, of the emotions of another person, not just the intellectual knowledge that they’re present.

            Turn either the recognition or the reproduction off and bad things can potentially happen. This is commonly the case during wars and ideological conflicts. In these cases, the perception of the other group is changed, to minimise or block recognition of their emotional states. In psychology this is a big part of what they call dehumanisation. There’s also the issue of gaming – sensible people have to remain wary their empathy isn’t being exploited.

            Alternatively, there seems to be a small niche for free-riders in the empathy system, created by leaving recognition on, but switching reproduction off. Yes they can recognise emotion effectively. They might even feign socially appropriate reactions at a sophisticated level, either for convenience or because they still have other moral intuitions that remain averse to predatory behaviour that we might stereotype them with. But though they see pain, or joy, or fear, and recognise it, they do not reproduce it. And so they don’t empathise.

            Like autism and problems associated with the recognition mechanism, the reproduction mechanism is not binary, but is present to varying magnitudes. And different places on the spectrum are suited to different things. People without any empathy at all make awful leaders, even where their motives are not totally self-serving, because they fail to connect with the emotional needs of those whom their decisions affect. But if the empathy is overpowering, people’s ability to make difficult and painful decisions, as all leaders must, is compromised. Even in a “compassionate” role like a social worker, having a level head is very important when in emotional situations. Leaders, even more so, have to have strong empathy, but they also have to have the ability and will to keep it under a certain degree of control. I’ll leave it to others to speculate on the kind of relationship these two extremes might have to political ideology 🙂 But what I personally take out of this is that it’s in all our interests, wherever we sit on the spectrum and our politics, to try to promote leaders who sit somewhere in the middle of the empathy spectrum.

          • Highly Effective People says:

            @Citizensearth,

            Read my comment at the top of the tree.

            Psychopaths can in fact reproduce the emotions: this has been confirmed by fMRI. They just don’t do so unless they are specifically prompted to empathize.

            Hence why the question is interesting; if they have the choice whether or not to empathize that is very different from simple inability.

          • @Highly Effective People

            I did read your comment, and thought it was good. I do want to say I couldn’t find that particular fMRI evidence in your link, but I think you are still correct about the study. I believe you may be referring to this study?

            So I take your point, but I’m not sure it’s a negation of the mainstream thesis of sociopathy and empathy. If the emotion is a response to the instruction, that is, deliberately trying to feel the feeling that you intellectually know to be appropriate, is that an authentic substitute for the empathetic response? Is that empathy?

            Perhaps more interesting is the question of whether it applies outside of the lab – will it work spontaneously without concentration in everyday situations where there are other things going on? It would be great for everyone involved if it did! However, I don’t personally think this study has shown that.

            A further reason I think this is the case is that socio/psychopathy is associated, by the vast majority of experts in this field, with systematically failed interpersonal and intimate relationships. So wouldn’t a sociopath switch on the empathy for this if it was possible, considering the obvious payoff? It must be harder for them than that.

            Perhaps something else would be needed to remove the barriers and allow it happen more naturally. Either way I’m all for developing that research further, as everyone wins.

          • Highly Effective People says:

            Personally I’m inclined to say it’s the same feeling if only to avoid getting into a debate about qualia.

            As for whether deliberate empathy can substitute for automatic empathy, I agree that it would be surprising if it could match it 100%. My guess (rank speculation, really) would be that it could reduce the phenotype in the coldness / meanness domain but wouldn’t help with disinhibition or boldness: sustaining relationships or avoiding antisocial behavior would still be hard, although possibly not as much as before.

            As for why they don’t already try, there are a few good reasons.

            The first is simple typical mind fallacy: psychopaths are notorious for claiming to have various emotions but failing utterly to describe the sensation of them. That sounds to me more like accounts of colorblind people describing colors than deliberate lying. It’s possible many didn’t realize normal people feel empathy all the time without prompting.

            Another is just that they are resistant to negative reinforcement in general, so suffering a string of losses doesn’t necessarily imply that a psychopath is going to be looking for new strategies.

          • I think both those reasons for not doing it already are correct, and my own wild guess is that they will remain barriers even where there is a conscious knowledge of their harm. So I think some more effective measure than “instruction” will be needed in the real world. Naturally speculation ought to give way to empirical findings – I’ll be interested to see where this stuff goes in the future. Thanks for the discussion.

          • aguycalledjohn says:

            > Actually empathy is mind reading, not compassion. There is no rule whatsoever that if you know how other people feel you give a shit.

            “Empathy” and “sympathy” are formally used to distinguish between the two, but in common usage they’ve pretty much merged, annoyingly.

    • injygo says:

      Nobody knows how to define a friendly, smarter-than-human AI. If we knew that, half the problems would be solved.

    • Rauwyn says:

      Well, based on the last one the author is obviously Ybvf ZpZnfgre Ohwbyq.

      I’d say AI has the biggest potential even if it’s only slightly smarter than human, because it runs on software. The lie detector could cause pretty big changes but if it really works that well I’d expect it to be used well outside the justice system. I’m actually pretty pro-honesty in general so I should think more about why this bothers me – maybe just that in the hands of an evil government it could be a powerful tool of repression.

      As for the stunner, if it worked how you suggest that would be great. I thought tasers are kind of supposed to be the real-life stunner, except more painful – is the problem that they’re not effective enough?

      • Deiseach says:

        If the lie detector only has a 90% accuracy rate, you can bet every lawyer will argue their client is one of the 10% that gets false positives.

        Also, unless you’re going to use it on the police and prosecution (“Of course we’re telling the truth when we say your accomplice has dobbed you in/of course we’ll reduce your sentence to three years instead of putting you away for life if you co-operate”), then the same rule as always holds true for the accused: say nothing, and keep on saying it.

        And I wonder how the use of lie-detectors versus the right not to self-incriminate would work out? Anyway, I don’t think lie detectors are that great, and have very little faith in them as these ‘can always tell with complete accuracy when you’re lying’ devices. I think their value has been inflated for psychological effect – to convince the susceptible that unless they make a confession, the magic machine will find out anyway that they’re lying. See Chesterton’s The Mistake of the Machine 🙂

        Re: weapons that don’t kill – that’s what rubber and plastic bullets and tasers are supposed to do. Except non-lethal rounds can be lethal.

        • Mary says:

          Remember, we can also use the device to have the police and prosecutors tell the truth.

          Perhaps we calibrate it by the obvious step of having the witness lie.

          • Andy says:

            I’ll have to remember that for my current project, which has lie detectors in non-prominent but important plot roles.

          • Deiseach says:

            There’s also the problem of the inaccurate but genuinely confused witness who believes they’re telling the truth (“Yes, that’s definitely the guy I saw running away from the scene!”) so the lie detector goes “ping! honest and truthful witness!” and then it turns out that they need glasses, it was twelve midnight in a total lunar eclipse, and the guy was dressed up as Darth Vader 🙂

          • Deiseach says:

            Remember, we can also use the device to have the police and prosecutors tell the truth

            I don’t think you’d get a court in the land to do that, but I’d love to see it:

            So, Inspector Knacker, when you said the witness made an unprompted identification from the lineup, what you really meant was you picked six red-haired midget Scottish sailors and your suspect, then asked the witness “Which of these looks most like the six foot blond Alpine goatherder you claim robbed you?”

      • Paul Torek says:

        I see the lie detector as *causing* bad government, by way of loyalty tests.

      • Tarrou says:

        Tasers and stun guns are very short range (stun guns are basically a touch weapon, tasers can shoot a few dozen feet). They don’t work reliably through heavy clothing. When your max range is measured in feet and a sweatshirt counts as body armor, the weapon system is not a substitute for a real one.

    • Harald K says:

      Just how magical is #1? Because I know some liars, but I also know some people who are good at actually convincing themselves of the sort of things other people might lie to you about.

      Either way, if it’s the same 10% that can fool the lie detector every time, I expect it wouldn’t change very much. Especially not if it also has a false positive rate.
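
      The false-positive worry is really a base-rate point. Here is a minimal Bayes-rule sketch (my own illustration; the 90% sensitivity and specificity, and both base rates, are assumptions, since “90% accuracy” isn’t broken down above):

          # Posterior probability of lying, given that the detector flags you.
          # Assumes 90% sensitivity and 90% specificity (an assumption; the
          # comments above don't break "90% accuracy" down this way).
          def p_lying_given_flag(base_rate, sensitivity=0.9, specificity=0.9):
              true_pos = base_rate * sensitivity
              false_pos = (1 - base_rate) * (1 - specificity)
              return true_pos / (true_pos + false_pos)

          print(p_lying_given_flag(0.5))   # ~0.90: if half of those tested are lying
          print(p_lying_given_flag(0.05))  # ~0.32: if 1 in 20 lies, most flags are false positives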

    • Andrew G. says:

      Okita studied the positioning, cocking his head and narrowing his eyes. “Right.” Bracing Ethan’s arching body against the railing with his knees, he raised doubled fists for a powerful blow.

      The catwalk shook, a rattling jar. The panting figure raising the stunner in both hands did not pause to cry warning, but simply fired. She seemed to have dropped out of the sky. The shock of the stunner nimbus scarcely made any difference in Ethan’s inventory of discomfort. But Okita was caught square on, and followed the momentum of his aimed blow over the railing. His legs, picking up speed, tilted up and slid past Ethan’s nose, like a ship sinking bow-first.

      “Aw, shit,” yelled Commander Quinn, and bounded forward. The stunner clattered across the catwalk and spun over the side to whistle through the air and burst to sizzling shards far below. Her clutching swipe was just too late to connect with Okita’s trouser leg. Blood winked from her torn fingernail. Okita followed the stunner, headfirst.

      Ethan slithered bonelessly down to crouch on the mesh. Her boots, at his eye level, arched to tiptoe as she peered down over the side. “Gee, I feel really bad about that,” she remarked, licking her bleeding finger. “I’ve never killed a man by accident before. Unprofessional.”

      “You again,” Ethan croaked.

      She gave him a cat’s grin. “What a coincidence.”

      The body splayed on the deck below stopped twitching.

      Ethan of Athos

    • Wrong Species says:

      Mind uploading. I assume that it and AI will go together but I could be wrong.

    • John Schilling says:

      The stun gun is likely going to make counterinsurgency warfare a lot less doable, along with every other sort of warfare. War is a contest of will, not of tactical efficiency. If you are using “stun guns”, you can’t deliver suppressive fire, your enemy will never surrender, and he will basically fight with the courage and intensity of a Norse berserker but the coolness and skill of a deer hunter or target-range marksman. Your soldiers, on the other hand, will be unable to kill the enemy, and the ability to exact lethal revenge upon those who have killed their comrades is part of the formula that keeps soldiers on the battlefield when things turn ugly, so expect your army to turn and flee as soon as their tactical superiority becomes less than overwhelming.

      On the law enforcement front, have you considered the utility of stun guns to various sorts of criminals? Rapists, for example.

      I’m with Aral Vorkosigan on this one. Never bring a stunner to a gunfight, or even a knife fight. If all you brought is a stunner, that’s a good sign that you really shouldn’t be fighting at all.

      • Held In Escrow says:

        I’m not seeing this, especially if you have a Kill setting on your stunner. A stunner takes someone out of the fight; it’s just as effective as killing someone in a single engagement. The problem is that they can evacuate the stunned person, but that’s an additional burden on the other side.

        Stun them all and sort them out later would be incredible for counterinsurgency tactics because of this; you never have to worry about killing innocents or checking your fire but for allies being downrange.

        As for the worry about being used by criminals, I definitely think that’s an issue.

        • John Schilling says:

          Stun them all and sort them out later would be incredible for counterinsurgency tactics because of this

          Yes, but “stun a few, then watch as your entire army flees in panic from the hordes of unstoppable killing machines”, works rather less well.

          Against the army using stun guns, every other army becomes a force of smart, fast zombies with real guns. Good luck with that.

          • Held In Escrow says:

            Why are your enemies just getting up after you shoot them? The issue with fighting insurgencies isn’t winning individual engagements; one of the big problems is that you have restrictive rules of engagement and they don’t. Stunners remove that handicap.

            The fact that you can now instantly unleash all your firepower the moment an insurgent reveals themselves gives a huge advantage to the occupying force. If our stunners have kill settings, well, you can always go all out if you don’t think you’ll win (or just also use normal guns!). The sheer ability to go in and grab insurgents without worrying about killing them is also extraordinarily powerful.

          • Gbdub says:

            In the tactical scenario, I don’t see how a stunner doesn’t allow suppressing fire / forces you to flee. Getting stunned for any length of time still takes you out of the fight, and imposes a burden on your comrades to evac you (Anecdotally, a lot of battlefield weapons are designed to wound rather than kill for precisely this reason – blow a guy to bits and you just piss off his buddies. Wound him, and his buddies will do risky things and take themselves out of the fight in order to save him).

            Truly “harmless” stunners have the added advantage of allowing you to drop a “nuclear stun bomb” and stun a whole area at once without being terribly concerned about collateral damage. Would mitigate the “human shield” strategy a lot of insurgents rely on.

          • Deiseach says:

            Why do you think I’m going to put forces in the field? Fleet of drones equipped with stunners; fly ’em in, zap the opposition, then send in mop-up to hold the position.

            The way things seem to be going, we’ll probably have drones versus drones and very few flesh-and-blood soldiers on the frontlines; I foresee stunners being used in civilian/peacekeeping roles, e.g. police and ‘police actions’.

          • John Schilling says:

            In the tactical scenario, I don’t see how a stunner doesn’t allow suppressing fire / forces you to flee. Getting stunned for any length of time still takes you out of the fight

            Do you even understand what suppressing fire is? Suppressing fire is all the bullets that don’t hit the enemy – not because you have decided to be merciful and fire over their heads, but because you can’t hit them. Because people who are being fired upon by lethal weapons are really quite good at not getting hit. They duck, they hide, they run and dodge and weave, they pop smoke, they keep their distance, and above all else they shoot at you in a manner that tends to throw off your aim. And yet it is still useful, decisively effective even, to shoot at them with bullets that aren’t going to hit anyone. You haven’t convinced me on the effectiveness of firing off stun rays that aren’t going to hit anyone.

            Escrow: Yes, you can now unleash all your firepower against them. And all that firepower will miss. Because ninety-nine times out of a hundred, that’s what firepower does, for reasons noted above. During the American Civil War, the Confederate Army could average about one DamnYankee(tm) casualty per hundred bullets fired. No American army before or since has ever done so well. With modern armies, the figure is more like one casualty per hundred thousand bullets fired.

            You all are using the first-order approximation where war is an exercise in striking down the enemy with ordnance, so that he is materially incapable of continuing to fight. The actual first-order approximation is that war is an exercise in scaring the holy living hell out of the enemy, so that he is psychologically incapable of continuing to fight. Bullets are scary. Even bullets which miss are scary, and the ones which actually hit have an effect well beyond the material incapacitation of one man.

            Stun rays, even if they hit, aren’t scary. And if you’re planning to win a war without having scary on your side, I’m going to be betting on the other side.

          • Andy says:

            Why are your enemies just getting up after you shoot them? The issue with fighting insurgencies isn’t winning individual engagements; one of the big problems is that you have restrictive rules of engagement and they don’t. Stunners remove that handicap.

            Yes this.
            John, you may be confusing counterinsurgency with regular warfare, in a way that is very obvious, very logical, and very, very stupid. I don’t blame you, because it’s one of the mistakes *everybody* makes in counterinsurgency.
            The goal in counterinsurgency warfare is not to kill or scare the enemy; it’s to set up a stable, predictable system of norms and rules and rewards and punishments so that people can live their lives. It’s a lot closer to police work than regular army-on-army combat.
            And why are you assuming stun guns are the only weapons on the battlefield? Nothing in the stunner concept precludes a counterinsurgency force from having a SWAT-ish force with heavy weaponry on standby out of sight. Stunners would be the armament of the “beat cop” soldiers patrolling among the population, ready to call on the heavy weaponry in the unlikely situation that an insurgency decides to make a stand-up fight. Especially in urban counterinsurgency, which tends to be highly disaggregated combat – lots and lots of little firefights, with combatants all mixed up among civilians – using lethal force in every encounter is a very, very good recipe to make lots of martyrs. Just ask anyone who’s experienced the “Baghdad Death Blossom.”
            Stunners don’t mean the enemy, in a disaggregated, confusing mess of an urban firefight, are going to get up. It means the counterinsurgent forces, with proper training and leadership, are going to be capturing a LOT more targets than they kill. Combined with a working lie detector or some truth serum, the innocent can be quickly separated from the guilty. Very unlike the modern counterinsurgency thing where you just kill everyone, label them all combatants even if they’re babes in arms, and then wonder why all their relatives are mad at you.

            Stun rays, even if they hit, aren’t scary. And if you’re planning to win a war without having scary on your side, I’m going to be betting on the other side.

            Counterinsurgency, when done correctly and without genocide, is not about fear alone – it’s about building a better peace, one day at a time.

          • John Schilling says:

            1. I’m pretty sure I understand counterinsurgency warfare much, much better than you do.

            2. Counterinsurgency warfare is still warfare, involving battles fought with guns and bullets. You might want to put a bit more thought into how those bullets serve the cause of “setting up a stable, predictable system of norms and rules and rewards and punishments”

            3. You might also want to consider what you could realistically hope to have accomplished by calling me “very, very stupid”.

            4. I’m done with you, and with this discussion. Feel free to declare victory and go home; I understand that’s a fairly common strategy in your version of “counterinsurgency warfare”.

          • Airgap says:

            you may be confusing counterinsurgency with regular warfare, in a way that is very obvious, very logical, and very, very stupid. I don’t blame you, because it’s one of the mistakes *everybody* makes in counterinsurgency.

            A common mistake I’ve seen a lot and, to my shame, detected in myself from time to time, is having a simplistic and uninformed understanding of a subject, and then having the tendency to ascribe that to your interlocutor when challenged. It seems odd, but it happens too often to dismiss as a fluke.

            Are you doing this? You don’t have to say so, or even stop arguing your position, but double check for your own good.

            The goal in counterinsurgency warfare is not to kill or scare the enemy; it’s to set up a stable, predictable system of norms and rules and rewards and punishments so that people can live their lives. It’s a lot closer to police work than regular army-on-army combat.

            The negative-spin version is “The goal in counterinsurgency warfare is to set up an occupational military government without bothering to win the war first.”

            What I should perhaps explain here is that it’s not that John or I don’t understand your theory of counterinsurgency warfare. Rather, we disagree with it.

            Counterinsurgency, when done correctly and without genocide, is not about fear alone – it’s about building a better peace, one day at a time.

            APPLAUSE

            Insulting people who disagree with you and flagrant use of applause lights are both quite common, but they don’t really make sense together.

          • Anonymous says:

            They only don’t make sense together if you think that insults are intended for convincing their target.

          • Airgap says:

            Correct, I was leveraging the rhetorical fiction of a discussion in order to insult him, but also make a point. Well spotted!

          • Deiseach says:

            (W)ar is an exercise in scaring the holy living hell out of the enemy, so that he is psychologically incapable of continuing to fight

            Which is why the army puts such effort into psychologically breaking down and rebuilding recruits to mitigate this effect; otherwise, every time your soldiers came under fire for the first time, the majority of them would break and run, which they don’t do.

            Stun rays may not have the same visceral effect as “this will hurt me horribly, maim me, kill me” but the point is we continue to have people in armies who remain soldiers even after having been shot at and seeing their comrades shot, precisely because there are always work-arounds for this effect.

          • Gbdub says:

            I am fully aware of what suppressing fire is. Stunner suppressing fire is still scary, unless you think “beams shooting over my head that would leave me immediately incapacitated and probably captured by my enemy” isn’t scary.

            Besides, if that ends up being an issue, there are plenty of nonlethal ways to make loud scary noises and bright lights. Also they’ve got that microwave ray that causes pain but no immediate damage.

        • Held In Escrow says:

          The only thing that will guarantee government power more than nuclear stun bombs are Terminators
          .
          .
          .
          armed with nuclear stun bombs.

      • Slow Learner says:

        The stunner is going to make counterinsurgency warfare more practical…as long as you have enough troops on the ground to execute it.
        For example, you can stun the whole crowd and sort out the insurgents afterwards. You can be indiscriminate with your stunner use in a way that you simply can’t be with lethal weapons.
        Basically it’s a tool for turning riots and other disturbances into a whole lot of busy-work. Yeah, you’ll still need guns available to deal with open combat, but the stunner can turn anything short of combat into a nice calm conversation in a detention centre.
        The combination of increased policing effectiveness and decreased martyr creation promises that you can be a lot more gung-ho about who you arrest, who you detain, and who you confront.

        To give an example, in Northern Ireland there were plenty of known paramilitaries that the British authorities would have loved to have arrested and questioned. Sometimes there was enough evidence to actually charge them with crimes. Even when there wasn’t they could have been taken in on suspicion. There was enough military force available to take any and all of those people into custody; however doing so risked serious confrontations that would probably lead to civilian deaths, which would have enough of a PR impact to offset any gain from questioning them.
        If a snatch squad could go out with stunners and rifles though, that reasoning disappears, making it possible for the authorities to erode the militant leadership faster than they could in reality.

        • Andy says:

          On the other hand, eroding militant leadership doesn’t help if there are lots and lots of young toughs ready to become new leaders. The most you can do is create a disruption – the only long-term strategies for defeating an insurgency are either A) genocide or B) reducing and solving grievances among the population. It will still take diplomacy, tact, and cultural literacy to understand and resolve those grievances without creating a whole raft of new ones.

          • Slow Learner says:

            Yes but – if you can decapitate the militants, the new leadership are likely to burn themselves out on ill-advised strikes and tactical blunders that the more experienced leaders would have avoided.
            Once the hot period is over, you now have a period of peace where the militants are rebuilding their organisations and weapons stashes – and having weathered a series of attacks and severely damaged your enemies, your side is better situated politically to start making concessions to undermine the grievances of the broader population from a position of strength.

      • Luke Somers says:

        You seem to be oddly assuming that the person with a stun gun does not also have a regular gun.

    • grort says:

      How about cheaper, easier contraceptives? A quick web search tells me that 40% of American births are from unintended pregnancies. I think we could solve a lot of societal problems if we could fix that. I want everyone (of all genders) to get a contraceptive shot, for free, every year starting at puberty, unless they specifically tell their doctor they want to be a parent.

      • Troy says:

        Data like this tend to rely on problematic operational definitions of “unintended pregnancies.” For example, looking at that link, the article gives me no indication how they would count a birth to a couple that was not trying to have children, but was not actively trying to avoid having children either, and was willing to leave it up to God/fate/luck. Given that it’s Guttmacher, I strongly suspect anyone in that category gets lumped into their “Mistimed” or “Unwanted” category, which is a bit misleading.

        • You took the words right out of my mouth. In general, this approach seems to conflate “unintended” with “unwanted”, which is ridiculous. Fully half of my siblings and I were unintended, but none of us were unwanted.

        • houseboatonstyx says:

          More to grort’s point, we might take whatever number of births are unwanted, and add to it whatever number of abortions are requested, for a total of unwanted pregnancies.

        • grort says:

          You’re right that it’s unclear what “unintended” means above. I checked Wikipedia and found:
          * 49% of pregnancies are “unintended”
          * roughly half of “unintended” pregnancies result in births
          * 69% of “unintended” births are reported as “mistimed” rather than “unwanted”.

          Doing the math, I get that roughly 10% of our population is coming from “unwanted” births.
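
          (Spelling out one version of that arithmetic, on the assumption that the originally cited 40% is the share of births that are unintended, and that an “unintended” birth counts as “unwanted” exactly when it isn’t reported as merely “mistimed”:
          share of births that are unintended ≈ 40%
          share of unintended births that are “unwanted” ≈ 100% − 69% = 31%
          share of all births that are “unwanted” ≈ 0.40 × 0.31 ≈ 12%, in the same ballpark as the roughly-10% figure above.)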

          (It — it actually kind of hurts to type the words “unwanted” and “births” next to each other. I wonder if that’s why so many statistics are expressed as “unintended”?)

          That’s a significant change compared to the 40% originally cited. But I still feel like improvements in contraception technology could significantly improve our society.

        • Luke Somers says:

          Exactly. My daughter was ‘unintended’ to the tune of ‘4 months later would have been really nice, but we can deal with this’.

      • Anonymous says:

        All this talk of birth control, including on previous threads, compels me to tell the story of my grandmother’s pregnancy, while subject to what I believe is the most effective form of contraceptive in the history of the world. Stop and think for a moment and consider: what is the most effective contraceptive? What would shock you if it ever once failed? What my grandmother had was ulfgrerpgbzl, in rot13.

        • speedwell says:

          I once met a young woman who had had a tubal ligation and was then pregnant. To make matters even worse, this was not her first, but her second, pregnancy after the sterilization. This was corroborated on the spot by the counselor we were both seeing at the time.

          • Anonymous says:

            Tubal ligation and vasectomy are designed to be the minimal surgery to achieve the goal, but that means it’s easy for the damage to heal, reversing the procedure. They both have measurable rates of failure. Of course, once you know the procedure has failed/reversed, you should consider the person fertile.

          • Matthew says:

            Wait a second…

            It didn’t occur to anyone, after the first pregnancy, that they might need to redo the ligation?

          • speedwell says:

            Matthew, she was poor and lived in the American South. She had been using other contraception after the first pregnancy, but it also failed. She was seeing the counselor to ask about giving the second baby up for adoption. Where we lived, it wasn’t an option for an uninsured jobless single woman to have care for surgical complications that weren’t imminently life-threatening.

      • Deiseach says:

        “Unintended” also means “using contraception which failed”. Implanted contraception may work much more efficiently (I’d like to see some data on that) but there’s always going to be some margin of error, from people having bad reactions to it, to plain old “whoops, forgot to check the calendar for my next appointment for a renewal”.

        • Airgap says:

          Modern contraceptive implants are more reliable than both partners having their tubes tied. Like, an order of magnitude more reliable.

        • Anonymous says:

          Renewal is a problem for 3-month implants, but not for IUDs.

          • Deiseach says:

            IUDs are not without their problems either, and I have to admit, given that grort was going to contracept all genders, I’d love to see the male version of one of these – “now don’t worry, we just need to insert this right up the urethra until we get it nice and snug into the scrotum”.

          • Airgap says:

            The male version, RISUG, is much better and less invasive. It’s not fully tested yet though.

      • Airgap says:

        We could also require the shots as a condition of getting welfare. Probably the most practical way to implement negative eugenics.

        • Deiseach says:

          Why welfare alone? There are a lot of middle to upper-middle class scum out there as well. Society and humanity would be better off if their genes were not perpetuated.

          Don’t be classist, Airgap!

          • InferentialDistance says:

            Because getting poor people to have fewer kids has inertial benefits for decreasing wealth inequality in future generations.

          • houseboatonstyx says:

            Let them have tax breaks.

          • Airgap says:

            There are more poor scum, absolutely and proportionately. If you’re trying to reduce the rate of scum without totally remaking society…

            @InferentialDistance: +1. This has been one of my pet ideas for a while. Suppose a person with a net worth of $1bn can have an additional child, put it through fancy prep school &c, and after tax breaks it’s free. So when he dies, he splits up his estate 10 ways instead of 2. Also, people with $1bn net worth probably have reasonably good genes. Sure, it’s regressive as hell, but if you’re trying to make the whole thing better rather than worrying that only the good guys benefit, it looks like a decent idea.

          • nydwracu says:

            “Let them have tax breaks” — and breed thrift out of the population? And what’s this too-many-kids problem anyway? Isn’t it the other way around? Let them have tax breaks for reproducing!

    • jaimeastorga2000 says:

      1) Magical lie detector that works with over 90% accuracy
      Even if it can’t predict future behavior, it could revolutionize the criminal justice system.

      Not much. The 90% accuracy clause means any lawyer worth his salt will get it thrown out, which generalizes to other situations; you can bet every politician will insist he’s in the 10% of cases where the detector gets it wrong. In any case, humans are amazingly good at believing their own bullshit.

      Unless the lie detector works by direct reference to the truth and is not based on brain states. If it does, then you have a 90% accurate Oracle AI and I’d expect the world to end in a matter of months.

      2) Weapon that knocks someone unconscious without risking death
      Policing (and counterinsurgency operations) would get a lot more doable if operators didn’t have to risk making martyrs everywhere they go.

      Risk compensation will probably wipe out any increase in safety the stunners bring.

      3) AI
      Probably a duh, but I don’t know how to define a friendly, smarter-than-human AI.

      Duh indeed.

      4) Uterine replicator
      Both as risk reduction for the child-creation process and as a way to create odder societies like all-male societies and transhuman genetic engineering.

      Interesting. Haven’t thought much about it, but my guess is that this would be huge.

      • RCF says:

        “Not much. The 90% accuracy clause means any lawyer worth his salt will get it thrown out”

        How accurate are fingerprints? Eyewitness identification? Dental records? Blood type comparisons?

      • Airgap says:

        Also, politicians don’t usually tell regular lies. They prefer Gricean lies, since that’s such a hard concept to explain to newspaper readers.

    • Agronomous says:

      a way to create odder societies like all-male societies and transhuman genetic engineering

      I see no way an all-male society could possibly go wrong.

      • Samuel Skinner says:

        Well, presumably they will all be gay; otherwise, yeah, things would get crazy pretty quickly.

        • Chevalier Mal Fet says:

          Surely if we’re capable of making artificial wombs we’d be capable of making fembots?

        • Andy says:

          I recommend Lois McMaster Bujold’s novel Ethan of Athos, which starts in an all-male society having a population problem because the ovaries-in-cans they brought along are running to the end of their lifespans. I remember her saying she had a choice of three models for an all-male society from our own world – armies, prisons, and monasteries. Athos is modeled on the third, and actually sounds quite pleasant, if a bit rural for my taste.
          And yeah, Bujold’s been writing on these topics for longer than I’ve been alive, and I got the Miles stories read to me at bedtime, so this might be a little bit central to my thinking.

        • Deiseach says:

          I take it none of you have read Cordwainer Smith’s The Crime and the Glory of Commander Suzdal? Which, even reading it in the late 70s as an ignorant rural conservative Irish Catholic mid-teenager, I thought was not necessarily how an all-male gay society would turn out.

        • Well, maybe. I suspect that a lot of people would fit into their society, tolerating vaguely acceptable sex because they don’t know that something they would like a lot better is possible.

    • cassander says:

      >1) Magical lie detector that works with over 90% accuracy
      Even if it can’t predict future behavior, it could revolutionize the criminal justice system.

      I find it adorable that you think that this would have a bigger effect on society through its use in criminal trials than through people using it on their romantic partners.

      • Deiseach says:

        Yes, we’re not thinking big enough. Job applicants, the employed (why waste time on drug tests when you can just wave the Magic Lie Detector over them and see if it goes “beep” when they say “No, I’m not taking anything stronger than aspirin”?), landlords versus tenants (sorry, Scott!), romantic partners, ‘who drank the last of the milk?’, teachers and students – we’ve come to accept CCTV and (nearly) constant monitoring as normal, so why not have the lie detector brandished at us when we walk into shops (are you a shoplifter? are you here to buy or only ‘looking around’, in which case get the hell out, timewaster!)?

    • Dale says:

      > Magical lie detector that works with over 90% accuracy

      Supplemental Job Interview Question: “Do you think we should hire you?”

  45. Cauê says:

    A couple of Open Threads back, I asked religious commenters for their thoughts on the interaction of religion and Sequences-style epistemology, and got many very good responses. So, in the name of curiosity and of trying to better understand the stronger kind of religion, I’ll try something similar again.

    I have noticed that, when I do find people with more sophisticated views on religion, those views turn out to be incompatible with each other more often than not – frequently on pretty fundamental stuff.

    And I understand why sophisticated theists get frustrated when atheists attack what they see as weak arguments and unsophisticated caricatures of religion, instead of engaging with the better class of theology. But still, after decades living in the largest Catholic country on the planet, one thing I will say is that very few people know anything like sophisticated theology, and most people’s beliefs are closer to those supposed caricatures than to Aquinas.

    Also, of course, there are the billions of Muslims, Hindus and Buddhists, of many sophisticated and unsophisticated varieties.

    So, I’ve always been curious about this, and I’ll thank anyone who feels like either responding or pointing me somewhere useful. Is this variety of views among coreligionists seen as a problem? When smart religious people look at all those *other* people with strongly held but presumably incorrect religious beliefs, how do they conclude their own beliefs are better grounded, and not caused by the same factors? On what grounds do sophisticated Christians dismiss Islam, Hinduism and etc.? And if they aren’t given an equal chance, is there a reason for this? Do any of these questions not even make sense?

    Thanks in advance, and, once more, I promise not to argue about any responses.

    • BD Sixsmith says:

      I’m agnostic but this argument has never moved me: …most people’s beliefs are closer to those supposed caricatures than to Aquinas.

      True, but I’m willing to bet that most people’s understanding of evolution by natural selection is not much more sophisticated than “humans came from monkeys”, and that most people’s understanding of the Big Bang is not much more substantive than “there was a big bang”. Fact claims should not be judged by popular interpretations.

      …how do they conclude their own beliefs are better grounded, and not caused by the same factors?

      I think the variety of religious beliefs should give intelligent believers cause to question their views (which is not, of course, the same as dismissing them). One point worth making, though, is that the philosophical underpinnings of theism are not entirely different from faith to faith. Thomists would admit to owing a debt to the pagan Aristotle and the Muslim Avicenna.

      • Shenpen says:

        This is a tad bit more complicated. If you want to debate religion _intellectually_ you debate with Aquinas, but if you see the views of millions of people as a _social problem_ you attack that head-on. While their views are less accurate, the social effect of those views is more important.

        It’s very similar to people complaining that Soviet Communism was not real Communism. Why care? They were a superpower. Real Communism is *books*. Why should books have more importance than superpowers?

        Real ideas, that affect history, are often mangled beyond recognition but that is no reason to not engage with them: they have way more power and influence than pure ideas.

        • Furrfu says:

          Some of us think it’s important to investigate whether our beliefs are true, as well as what would happen if other people believed them.

          In the case of Communism, for example, we care whether or not it’s true that the internal contradictions of capitalism will inevitably destroy it by immiseration of the proletariat, not just whether people believe they will. If that “pure idea” had turned out to be predictive of the US’s economic performance during the 1960s and 1970s, that would have been a lot more important than any mere superpower; Khrushchev really would have attended the funeral of the US, and of capitalism in general, as he so famously declaimed he would. If it turns out to be predictive of the US’s (and/or EU’s and Japan’s) economic performance during the 2010s and 2020s, it will again turn out to be a lot more important than any mere superpower.

      • I think this is the right answer. (Edit: By which I mean I think it is what religious people would most probably say. I, too, am not religious.)

        One thing that can be added to it is that the theology / spiritual writing of at least some Christian faiths explicitly allows people to be saved with erroneous or incomplete beliefs. (Many spiritual writings also slight advanced theological study.)

        I once read a story (from a Catholic) about an uneducated Catholic, who, on being asked what it was he believed, responded by saying “I believe what the Church believes,” which the teller of the story took to be an excellent response.

      • Nicholas says:

        I suppose what I would rebut is that when someone talks about (naive)Christianity(naive), what they are typically actually pointing at, in terms of LessWrongian Constrained Anticipation, is “whatever most of the people in my area who self-identify as Christians believe.” Because my pool of non-representative Christian self-identifiers includes people who don’t believe in the existence of Jehovah or Jesus, dust-ring cosmologists, and strict materialist determinists, I might as well expect a claim about “Christianity” to describe what Zoroastrians believe as to describe that divergent and not-internally-consistent group.

      • houseboatonstyx says:

        I’m willing to bet that most people’s understanding of evolution by natural selection is not much more sophisticated than “humans came from monkeys”

        So? Allowing that the famous Common Ancestor was him/herself what most of us would call a ‘monkey’ if we met him/her, don’t most people have the gist of it right? Plus, just to make everything clear, that monkeys and everything else all came from the same Primordial Globule.

    • Irrelevant says:

      I’ll take a stab, though as I’m no longer religious myself people are free to argue if they think I’m misrepresenting something.

      Is this variety of views among coreligionists seen as a problem?

      “Problem” is underdefined in this context. Would the world be improved if every Christian agreed on the truth of e.g. annihilationism? Certainly. Should some amount of effort be dedicated to correcting their false beliefs where possible? Sure. But convergence failure is only a fatal flaw in progressive worldviews. There is no reason to suggest humanity would have a better understanding of salvation or the divine in 2015 than it did in 1015, and the reverse can very well be true.

      When smart religious people look at all those *other* people with strongly held but presumably incorrect religious beliefs, how do they conclude their own beliefs are better grounded, and not caused by the same factors?

      This is a purely general argument against considering yourself more correct than average, so I’ll give the purely general response: Believing each of your beliefs is correct individually is not the same as believing all of your beliefs are correct in concert. In fact, since I am not omniscient, I can be approximately certain I do hold some incorrect beliefs. But I don’t know which ones the incorrect beliefs are, because if I knew I would change my mind on those ones.
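
      To put a toy number on that (purely illustrative): if I hold a hundred roughly independent beliefs and am 95% confident in each one individually, the probability that all of them are correct is about 0.95^100 ≈ 0.6%. Being confident in each belief taken on its own is perfectly consistent with being nearly certain that at least one of them is wrong.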

      On what grounds do sophisticated Christians dismiss Islam, Hinduism and etc.?

      Christianity and Islam accept a lot of shared premises, and argue between and within them. I’m glad you brought up Hinduism though, because I just realized I’ve never thought about it. Dismissal of polytheism is so typical of religious sophistication even outside of Christianity, and conversion to Hinduism in the west so slight, that I do not even know what the argument there looks like. I know an ancient argument that if the fundamental kinds of matter are many then so might be the gods, and there’s a somewhat common, very abstract, Christian-deistic conclusion that since nothing about the nature of existence outside the universe is revealed to us it’s possible God is only locally omnipotent and others exist, but polytheism really appears to be a dead question. Anyone got another view on that?

      And if they aren’t given an equal chance, is there a reason for this?

      The notion of giving an equal chance to every religion is like giving an equal chance to every language or career: not even possible in theory, arguably not even conceivable or well-defined. You can merely be more or less limited in the size of the sliver of options you considered.

      This is normally discussed in the context of if, when, and to what extent there’s an obligation on non-believers and pagans to convert to Christianity, but the answer applies symmetrically: you have a moral obligation to accept religious truths that you have the cultural fluency to understand.

      • Forge the Sky says:

        Good comment overall. I’ll just focus in on one thing I might be able to contribute a bit to. I’ve studied Hinduism to a degree, though I’m no expert.

        Hinduism is less a single religion and more a mash-up or conglomeration of a variety of indigenous folk practices, abstract philosophies, obsolete scientific systems, psychological technologies, and Aryan paganism all bound up in a great 4000-year-old ferment. So it’s hard to say anything about Hinduism in general, really. There are strains of ‘Hinduism’ that are atheistic, when it comes down to it. But the most prolific and well-known forms of Hinduism generally follow a pantheistic pattern – seeing a vast undifferentiated ‘reality’ underpinning everything. Sometimes that reality is unmanifest – there are great ages of ‘nothingness’ or ‘all one-ness’ interspersed with periods where forms and images make themselves manifest from this reality – becoming worlds, heavens, hells, creatures, gods, and innumerable other things. This reality goes through vast cycles of time, some more orderly, some less, until all is finally folded back into undifferentiated reality once more.

        Different groups of hindus will often worship a certain deity as being a manifestation of this ‘all-that-is.’ Sometimes they see this as just an image to follow in pursuit of ultimate truth, and sometimes it approaches monotheism, where Vishnu or Shiva (for example) are seen as being that reality itself.

        Less ‘sophisticated’ practitioners are functionally polytheistic, often. But greater thinking about religion has not really, to my knowledge, led to a surviving ‘question of’ polytheism, or even (as with Nordic religion) dualism as a serious answer to the question of existence. It’s monotheism, atheism, pantheism, or panentheism all the way down.

      • Kiya says:

        I am no kind of theologist, philosopher, cultural historian, or even theist, and I don’t know some of the words you guys are using. But polytheism has a strong intuitive appeal over monotheism for me in a way I will try to explain.

        *Polytheism allows greater precision of worship. Would you rather invoke and praise a god who kills children, or a god who heals them? If you conceive of them as separate individuals, you can decide.

        *Polytheism denies omnipotence. If everything that ever happens in the universe is the intent of a single intelligent agent, then that agent’s goals are incomprehensible. Incomprehensible probably does not mean good, as good is a small point in the possibility space. If gods affect limited spheres in limited ways, then they can be entities that think like humans and are possible to reason with.

        *Polytheism can evolve. Human cultures change over time. If there are many gods, then they can rise and fall in prominence along with their spheres of influence; new gods can be born, old gods can die.

        *Polytheism makes for more interesting mythology with characters and conflict and dialogue and so forth.

        • Forge the Sky says:

          I understand that polytheism may make for a more appealing prospect for you if you’re looking at what is the most aesthetically pleasing, interesting, evocative, and so on. It might be better at giving you the sorts of things you’d like to get from religion.

          But people who are inquisitive about religion and religious beliefs tend to want to believe things that are true. And generally polytheism is thought to be untenable as a general ‘theory of why;’ it doesn’t really explain what the origin of all existence is, what its nature is.

          C.S. Lewis, the famous Christian apologist, said that if given the choice he actually found the Norse pagan way of looking at the world to be more personally appealing to him than Christianity. But he believed in Christianity instead because he thought that it was true. People who are serious about religion (or not-religion) are serious about knowing what the truth is.

          • Kiya says:

            Thanks for the clarification. A deep interest in the question of why the universe exists (as opposed to more empirically-attackable questions like how and whether) sounds like a good reason to be serious about religion. I hadn’t previously known of any good reasons, and was confused.

          • houseboatonstyx says:

            @ Forge the Sky

            You are starting at the deductive end (as Lewis did in _Miracles_ vs his Materialist).

            Starting at the inductive end (what saves the appearances we encounter in real life) some of us can see at most something rather like the Greek gods, or a Far East set (Kali, the Pure Land Buddha, etc). This world looks more like something run by a committee!

          • Jaskologist says:

            I dispute that the world looks like it is run by committee. Science seems to show the opposite, or at the very least, is based entirely around assuming the opposite. We are ever-questing to find the grand unified equation which under-girds everything, and we are pretty sure that the ultimate rules of the universe are the same no matter where or when you are. We even go so far as to assume that the answer will ultimately be simple. This is a highly monotheistic view, and there are more than a few who have proposed that it was only due to monotheism that we came up with science in the first place.

            (The monotheistic religions actually do have a place for the types of creatures that polytheists consider “gods.” They’re variously labeled angels, demons, or djinns, and considered interesting but not ultimately important.)

          • Troy says:

            I dispute that the world looks like it is run by committee. Science seems to show the opposite, or at the very least, is based entirely around assuming the opposite.

            Robin Collins has done some extremely interesting recent work on the apparent fine-tuning of the universe, not just for life, but for discoverability and the possibility of science.

          • houseboatonstyx says:

            @ Jaskologist

            I dispute that the world looks like it is run by committee.

            Not at the level of nuts and bolts and molecules, but at the level the Greeks saw theirs operating. Raising a storm to wreck a particular ship. Turning the tide of a battle. Taking over someone’s actions or feelings, possessing the person to do some deed. Consequentialist actually. And Acts of God of course.

            (The monotheistic religions actually do have a place for the types of creatures that polytheists consider “gods.” They’re variously labeled angels, demons, or djinns, and considered interesting but not ultimately important.)

            Depends on whether you’re looking down on them in scorn, or looking up to them for help.

    • speedwell says:

      The only relevant thing I’ve ever said is probably this, and I don’t know how relevant it is to what you are asking, but it might be useful anyhow. http://www.patheos.com/blogs/hallq/2014/12/rokos-basilisk-lesswrong/#comment-1729338182

    • Jaskologist says:

      Is this variety of views among coreligionists seen as a problem? When smart religious people look at all those *other* people with strongly held but presumably incorrect religious beliefs, how do they conclude their own beliefs are better grounded, and not caused by the same factors?

      I think the key here is you are thinking of atheism as some kind of default view, and then all the religious explanations as competing against each other. I, however, see atheism, Hinduism, Buddhism, Shiite Islam, Sunni Islam, Catholicism, Anabaptism, etc. as being all in the same possibility space, all competing against each other. Therefore, the existence of competing explanations is a problem just as much for, say, atheism, as it is for any of the others, which makes it more or less a wash.

      (There are more sophisticated versions wherein this says you should weight the plausibility of each by number of adherents, and while I do place a little weight there, it’s not much. You can also further weight based on agreements between the different systems; nearly everybody thinks that the supernatural exists, most agree on there being a heaven and hell, and by the time you’re arguing supralapsarianism vs infralapsarianism, you’re not really disagreeing on enough to call the whole rest of the project into question.)

      • Shenpen says:

        Uh, no. Remember, theism and religion are not the same thing: philosophical theism is metaphysical, but actual religions are based on claims that are empirical, physical, such as the resurrection. There is a huge difference there. Atheists don’t have any unprovable empirical, physical claims.

        • Gbdub says:

          Given that the principal claims of atheism are effectively the negation of the claims of religion, how are they any more provable? If A is unprovable, so is Not A. Actually, atheism is probably more unprovable – proof of Jesus’ resurrection would be fairly straightforward (given observation tools not available at the time, unfortunately). But conclusively proving every tenet of every religion wrong would be pretty hard.

          Also, atheism does make some physical claims, e.g. “the Earth and life on it developed from purely physical processes with no conscious influence from an outside sentient power”. Certainly explanations and evidence for that could be assembled. It could be disproven, if some being shows up and conclusively claims responsibility for Earth’s creation.

          Note that I don’t think this means atheism is less likely to be true – just that “provable” isn’t really the right metric.

          • Troy says:

            If A is unprovable, so is Not A.

            I agree with you about the object-level point, but this principle doesn’t seem right.

            “All tables have four legs” is not provable: I can never be sure I’ve examined all the tables.

            “Not all tables have four legs” is provable: I produce a table with three legs.

          • InferentialDistance says:

            Given that the principle claims of atheism are effectively the negation of the claims of religion, how are they any more provable?

            Because absence of evidence is evidence of absence; logic, like other fields of human knowledge, has advanced since the ancient Greeks.
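
            (Spelled out in probability terms: if evidence E is more likely given hypothesis H than given not-H, i.e. P(E|H) > P(E|~H), then observing E raises the probability of H; and since P(H) is a weighted average of P(H|E) and P(H|~E), failing to observe E must lower it. The absence of expected evidence therefore counts against the hypothesis, though only weakly when the evidence was unlikely to turn up even if H were true.)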

          • Gbdub says:

            @Troy – yes, that’s a good point, and one I noticed as well, but couldn’t quite articulate succinctly so I hoped no one would call me on it 😉 I still think that asserting A and asserting Not A are still related, in that in both cases you’re making a claim that could theoretically be refuted.

            @InferentialDistance – absence of evidence may be evidence of absence, but not proof. Certainly we haven’t looked everywhere (or even a small fraction of everywhere) for gods or godlike beings. And we don’t quite have fully formed “theories of everything” to provide alternate explanations for all the claims of religion. So while the available scientifically acceptable evidence slants strongly to atheism (at least in the literal existence of gods sense), I don’t think you can label atheism fully proven either.

            In fact I always thought the issue with theism was not that it was unprovable, but that it’s unrefutable (e.g. God is everywhere, we just can’t see him, so you can’t “prove” his nonexistence).

            And remember that the original argument I’m responding to is that atheism is fundamentally different than theism because atheism doesn’t make any unprovable physical claims. Clearly it does make claims, and I think atheism’s claims are, insofar as they address the same questions, no better than equally “provable” to theism’s claims, because in fact they are mostly the same claims negated.

          • InferentialDistance says:

            but not proof

            You can’t prove anything in an uncertain universe, brah. Probabilities are in (0,1), not [0,1].

          • Gbdub says:

            Fine, you caught me. I’m treating “prove” as synonymous with “probability pretty close to 1”. Explain how that affects my argument?

            Absence = evidence doesn’t mean we’ve yet acquired sufficient absence.

          • InferentialDistance says:

            Because we don’t care about everywhere, we care about the locus that we can observe. A deity that spends all their time faffing about outside our light cone is irrelevant. And postulates with irrelevant deities are less correct by Occam’s Razor.

        • Jaskologist says:

          I know atheists often like to try to argue for atheism as the default position, but I never find it convincing/interesting, and I doubt most others do either, given actual religious adherence numbers.

          It’s semi-moot in context anyway, given that most smart Christians believe that the Resurrection is about as proven as you can expect a historical fact to be.

          • Troy says:

            Right, I’m perfectly happy to grant that the intrinsic probability of atheism (understood just as the claim that God does not exist) is higher than the intrinsic probability of theism, and so is “default” in that sense. But we have a lot of evidence for theism, and for the historical claims of Christianity in particular.

          • Samuel Skinner says:

            ” I doubt most others do either, given actual religious adherence numbers.”

            That doesn’t follow. The default language is no language (see feral children), but most people on the planet speak a language.

            The religious beliefs of early humans are the closest to default we can get and as far as I know they cluster around animism.

          • RCF says:

            “It’s semi-moot in context anyway, given that most smart Christians believe that the Resurrection is about as proven as you can expect a historical fact to be.”

            Are you seriously claiming that thinking that the resurrection is as proven as you can expect a historical fact to be is a smart thing to believe?

          • XerxesPraelor says:

            Why do you think it isn’t? Rhetorical questions don’t contribute much to the discussion.

          • RCF says:

            @XerxesPraelor

            I assume that was directed at me. It’s a good idea to note who your post is directed at.

            It wasn’t an entirely rhetorical question; there was a possibility of my misunderstanding. It’s not a smart thing to believe because it’s simply not true, and going around making ridiculous claims like that just tells people not to take you seriously as a grown-up.

      • Irrelevant says:

        you should weight the plausibility of each by number of adherents

        I’ve always found it more plausible to weight by age, and am eternally confused by how few people in search of a new religion convert to Tengriism.

      • Peter says:

        I think there’s an issue here to do with revealed religion vs. non-revealed positions. You could think of a cluster of positions – atheist, Enlightenment deism, various nonspecific theisms, some of the wishier-washier bits of Anglicanism etc. – that don’t claim any miraculous evidence for their position. At any rate, no localised miracles happening at specific times and places that most of us only get to know about second-hand. Contrast with that a second cluster which does claim some miraculous proof – for example Christians who cite the Resurrection as their proof. If you’re in that cluster, you have the potentially tricky question of why _your_ proof goes through, and no-one else’s does.

        • Yehoshua K says:

          I (as a religious Jew) would say that (as far as I know) Judaism is the only religion that claims to be based upon a revelation witnessed by hundreds of thousands of people. In fact the Jewish Bible (Deut. 4:32-34) makes the falsifiable claim that the Jewish revelation was and would remain unique: “Has any people ever heard the voice of God speaking from the midst of the fire as you have heard, and survived? Or has any god ever miraculously come to take for himself a nation from amidst a nation, with challenges, with signs, with wonders… as the Lord your God did for you in Egypt before your eyes?”

          (Edit- this was written by Julie K; Yehoshua is my husband.)

          • Clockwork Marx says:

            Yet there are no corroborating accounts of any such “challenges, signs, and wonders” outside of the oral tradition of the Hebrews.

            Even if Egypt somehow managed to erase the event from history (possibly the most successful case of mass censorship ever), one would think that Egypt’s trading partners and foes would make note of the massive loss of natural resources, citizens, soldiers, and the death of the Pharaoh. Instead there isn’t the slightest fluctuation in Egypt’s relationship with its neighbors in the timeframe in which Exodus is placed.

          • Yehoshua K says:

            Clockwork Marx,

            This is Yehoshua responding, not my wife Julie.

            So far as I know, aside from the Bible, no organized histories come down to us from that period in that part of the world. Maybe Chinese histories of the period survive, but not Egyptian ones, or Assyrian, or Babylonian. None at all until the rise of the Greeks, several hundred years later.

            All that we have, aside from the Bible, are scattered and shattered archeological finds, which give us at best a hazy and vague knowledge of the history of the times.

            So demanding clear discussion of the events of a dramatic year in Egyptian history at that time sounds to me sort of like demanding that someone who claims to have seen a dog in the concrete city show you the dog’s paw prints.

          • Clockwork Marx says:

            If we have no knowledge of past civilizations outside of their (often mutually exclusive) collections of stories, I’d rather take the position that their actual history is unknown (and possibly unknowable) than arbitrarily take one mythos at face value while disregarding the foundation legends of, say, China or Rome.

          • Clockwork Marx says:

            Of course there is nothing stopping you from believing that legendary figures like the Yellow Emperor, Romulus and Remus, and Moses all existed as they are described. Once you try to reconcile the Emperor being descended from Amaterasu and the Pharaoh being descended from Horus, however…

          • Jaskologist says:

            The position that Egypt’s ancient history is unknown/unknowable is quite defensible, but it is not the position you originally took, which was that it is known, and contradicts Exodus.

          • Yehoshua K says:

            It is certainly possible that Remus and Romulus lived; somebody founded Rome, after all. But the legends about them (being suckled by a she-wolf, as I recall) were of the first type; unverifiable even by people who might have known them. Their claims to semi-divine status, even if they made them, were claims that their contemporaries could never have verified.

            Moses’ claims to be sent by G-d, as described in Exodus, by contrast, were of a type which could and would be verified by his contemporaries. That is, if you and I had known him, we would have been in a position to know that he was either telling the truth or lying about the Ten Plagues, the Splitting of the Sea, and the other events related.

            If it is your position that these stories were made up later, empirically demonstrating that such lies can be introduced into a nation’s founding stories, then I think you need to explain why nobody else has made up stories of this caliber. Unless, of course, you know of some other group that has a comparable founding story?

            For me, the bottom line is this. Judaism’s founding story, and the subsequent history of the Jewish people, are, to the best of my knowledge, unique in a very special way, as I discussed in detail in my first post. That uniqueness seems to point to supernatural intervention.

            I would regard my argument concerning the founding story as defeated if you could show me some other group with a founding story of the same type–particularly as the Bible makes a falsifiable claim that no such group will ever exist. Can you show me such a group? Do they exist, or have they ever existed, anywhere in known human history?

            I would likewise regard my argument concerning the history of the Jews as defeated if you could show me some other group with a history of the same type. Can you?

      • RCF says:

        If a bunch of people believe in some particle, but they can’t agree on whether it’s a boson or a fermion, massive or massless, charged or uncharged, or on any property, and they can’t produce any evidence of it, then obviously the sensible position to take is that it doesn’t exist, at least until more evidence comes in. I don’t see why people insist on pretending that God is somehow subject to different logic. Anyone who denies that atheism is clearly distinguished from the other hypotheses either doesn’t understand the concept of a “distinguished hypothesis”, or has something wrong with their critical thinking skills.

    • Chevalier Mal Fet says:

      Never really been a problem for me.

      I was an atheist who wound up converting back into Christianity, albeit a different form than the one I was raised as.

      For all the vast variety of beliefs out there, there’s a lot of agreement on some basic stuff. For example, the Hindu concept of Brahma is loosely analogous to the Christian God. I don’t mean to minimize the differences, of course, but the basic structure of the universe is the same in most theist worldviews.

      I also don’t really hold with the argument, “Well, there’s a lot of disagreement here, so I’d better default to atheism,” since, like another commentator said, atheism for me is just one of many competing possibilities (I could see someone throwing up their hands and embracing agnosticism, but not having SOME conclusions about the supernatural is an impossible way for me to live my life).

      Neither, too, am I concerned that many intelligent people disagree with me. I’m quite used to that – my political views hover somewhere between conservatism and libertarianism, so both sides hate me equally for my heresies (and anyone left of center is right out, obviously). So, too, with religion – Catholicism isn’t exactly in vogue these days (except when people gleefully misunderstand the new pope), but going against the crowd has never been a problem for me. Essentially I feel that if everyone else had the same experiences I had, they would come to the same conclusions I have.

      • Shenpen says:

        Remember, theism and religion are not the same thing: philosophical theism is metaphysical, but actual religions are based on claims that are empirical, physical, such as the resurrection. There is a huge difference there. Atheists don’t have any unprovable empirical, physical claims.

        • Clockwork Marx says:

          Plenty of atheists do, but atheism itself doesn’t require them to hold any specific ones.

      • houseboatonstyx says:

        (I could see someone throwing up their hands and embracing agnosticism, but not having SOME conclusions about the supernatural is an impossible way for me to live my life).

        I think Bayesianism has something to say about conclusions.

        • Chevalier Mal Fet says:

          In that case interpret “conclusions” as simply a “basis of action.”

          Consider: Let us say that someone is as pure an agnostic as you can get – he attempts to make absolutely no statements about the divine/supernatural/things spiritual as best he can.

          Yet, he will probably still go about his life in some way – buying and selling, eating certain forms of meat (eating meat at all), engaging in romantic and platonic relationships, working. At least at some level, he must accept that all the things that he does are either acceptable to whatever Deity may or may not exist, or that there is no deity to command him not to do such things.

          If you live your life making no determinations about the divine, some part of you must surely have determined that this way of living is acceptable to the universe.

          That’s what I was getting at with conclusions, I acknowledge that it is perhaps the wrong word.

          EDIT: In other (perhaps clearer) words, I view it as making a decision about the divine. “Not making a decision” is in most cases not really possible, since “not acting” is itself a decision. Does that make sense?

          • houseboatonstyx says:

            In that case interpret “conclusions” as simply a “basis of action.”

            Perhaps, “a temporary, shifting basis”? Not that mine actually shifts very much, at a divine-needing level.

            Consider: Let us say that someone is as pure an agnostic as you can get – he attempts to make absolutely no statements about the divine/supernatural/things spiritual as best he can.

            Except hypotheticals?

            Yet, he will probably still go about his life in some way – buying and selling, eating certain forms of meat (eating meat at all), engaging in romantic and platonic relationships, working. At least at some level, he must accept that all the things that he does are either acceptable to whatever Deity may or may not exist, or that there is no deity to command him not to do such things.

            To command him? That’s fiat deontological, I think. Lots of people see other bases for object-level morality. (I like C. S. Lewis’s: a certain set of precepts is self-evident.)

            If you live your life making no determinations about the divine, some part of you must surely have determined that this way of living is acceptable to the universe.

            Most of the major religions pretty much agree on most of the activities you listed. Christianity says we’re doing some things unacceptable, or unacceptably, all the time.

            That’s what I was getting at with conclusions, I acknowledge that it is perhaps the wrong word.
            EDIT: In other (perhaps clearer) words, I view it as making a decision about the divine. “Not making a decision” is in most cases not really possible, since “not acting” is itself a decision. Does that make sense?

            It makes sense in many areas. But deciding to follow the precepts that are common to all/most religions, kind of takes care of that, doesn’t it? (Of course you may need ‘divine’ help to succeed, but I think there are answers to that.)

    • Troy says:

      So, I’ve always been curious about this, and I’ll thank anyone who feels like either responding or pointing me somewhere useful. Is this variety of views among coreligionists seen as a problem? When smart religious people look at all those *other* people with strongly held but presumably incorrect religious beliefs, how do they conclude their own beliefs are better grounded, and not caused by the same factors? On what grounds do sophisticated Christians dismiss Islam, Hinduism and etc.? And if they aren’t given an equal chance, is there a reason for this?

      As others have noted, the epistemic problem of disagreement is quite general, applying not just to religion but to many of our other beliefs. Nevertheless, it’s a good question.

      I approach the beliefs of others as just one more kind of data in the world. Whether the devout beliefs of Muslims should move me depends on the degree to which those beliefs are more likely given that Islam is true than given that Islam is not true. This is a contextual matter, but as a general rule I don’t think that many people’s holding particular theological beliefs (or political beliefs, for that matter) is very strong evidence. That is, it’s unsurprising that people believe as they do whether or not their beliefs are true; people are irrational and prone to all sorts of cognitive biases, influenced by things like ingroup cohesion, wish fulfillment, confirmation bias, and so on.

      In a way, this relocates the problem: if others’ beliefs don’t move me much because I don’t expect people’s beliefs to in general be correct, how can I be so confident in my own beliefs? I don’t have much to say there except to appeal to the first-order evidence on which my beliefs are based. I’ve tried to evaluate things fairly and avoid biases in my own thinking. If I’ve misjudged the import of some of the evidence in some way then I’ll try to be open to being corrected. But until someone can point out to me where I’m doing that, I’ll stick with what the evidence seems to me to best support.

      • RCF says:

        I don’t know what your beliefs are, but you appear to be a Christian, and there is absolutely no common* evidence for Christianity that does not come down to “people are more likely to believe in Christianity if it’s true than if it’s not, people believe in Christianity, therefore Christianity is true”.

        *One could have an internal, personal experience that leads one to believe that Christianity is true. However, such evidence would be applicable only to them, and would not be evidence for people in general (except insofar as one believes the “people are more likely to believe in Christianity if it’s true” premise).

        • Troy says:

          there is absolutely no common* evidence for Christianity that does not come down to “people are more likely to believe in Christianity if it’s true than if it’s not, people believe in Christianity, therefore Christianity is true”.

          Naturally, as an evidentialist and a Christian, I disagree. 🙂

        • XerxesPraelor says:

          For example, there’s the evidence of lots of people testifying to seeing the resurrected Jesus in a culture where that would not arise spontaneously. There’s also the evidence that the Romans did not take out Jesus’s corpse to stop the early Christians, showing that they didn’t have it.

          • Clockwork Marx says:

            Was the idea of a dead person appearing to the living really that novel of an idea at the time (not a rhetorical question, I’m honestly curious)? I always thought that it was a relatively common trope throughout human existence.

          • Chevalier Mal Fet says:

            It’s happened before, but usually not so close to the death of the individual involved.

            For example, there’s the Egyptian Osiris myth, but that involves gods – there weren’t hundreds of people running around claiming to have seen the actual, living Osiris as they experienced him before his death.

            Similarly, there was another fellow around the time of Christ, a magician by the name of Apollonius to whom people ascribed miracles. Upon his death he is supposed to have been raised into heaven, but our only source for this is more than a century after his death.

            Christ, on the other hand, was testified to have resurrected within living memory of most of the people concerned. He was put to death sometime between 29 and 33 AD, and by the mid ’40s you have reliably dated letters of Paul claiming he saw the guy alive again, and had met many more people besides who also had. By the ’60s and ’70s you have at least two full-length biographies – Mark and Q, followed by Matthew and Luke using elements of both – attesting to multiple miracles and the resurrection, again, still within living memory of most of the people closest to events.

            So, the concept of someone rising from the dead wasn’t exactly an unheard-of or unthought-of thing, but making the claim about an actual human being, when it SHOULD have been easily refuted by hauling a dead Jesus out of his tomb, is unusual.

          • Samuel Skinner says:

            “For example, there’s the evidence of lots of people testifying to seeing the resurrected Jesus in a culture where that would not arise spontaneously. There’s also the evidence that the Romans did not take out Jesus’s corpse to stop the early Christians, showing that they didn’t have it.”

            The Muslim explanation seems to be a bit stronger, namely that the Romans botched the execution.

          • RCF says:

            “For example, there’s the evidence of lots of people testifying to seeing the resurrected Jesus in a culture where that would not arise spontaneously. ”

            Well, first of all, that would fall under the category of “you are assuming that people would be more likely to testify to that if it were true, than if it weren’t”. Second, the claim that people testified to it is itself a claim that needs support.

            “There’s also the evidence that the Romans did not take out Jesus’s corpse to stop the early Christians, showing that they didn’t have it.”

            Again, that relies on a huge number of facts not in evidence.

          • Jiro says:

            It’s easy to make any religious claim seem “unique” by choosing the reference class.

            I’m pretty sure that if Christianity had claimed that Jesus’s resurrection was *not* witnessed by a lot of people, Christians could pick another facet of the exact same story and still call Christianity “unique”.

          • Deiseach says:

            Clockwork Marx, there’s the bits in the Gospels where the disciples see Jesus and go “Ahhh, it’s a ghost!” (paraphrasing here, as you may tell) and the writers put in bits to show no, real body, e.g. Luke 24:36-43

            36 While they were still talking about this, Jesus himself stood among them and said to them, “Peace be with you.”

            37 They were startled and frightened, thinking they saw a ghost. 38 He said to them, “Why are you troubled, and why do doubts rise in your minds? 39 Look at my hands and my feet. It is I myself! Touch me and see; a ghost does not have flesh and bones, as you see I have.”

            40 When he had said this, he showed them his hands and feet. 41 And while they still did not believe it because of joy and amazement, he asked them, “Do you have anything here to eat?” 42 They gave him a piece of broiled fish, 43 and he took it and ate it in their presence.

            Yeats’ play, The Resurrection, has three different characters giving three opinions on what happened: one, that Christ was a great teacher but only a mortal who was really put to death and really died, another that Christ was indeed a god or demi-god and it was all only in seeming, and the third that something new has happened that never has happened before.

          • Clockwork Marx says:

            Jesus’ bodily resurrection was a claim of his disciples. There is (to the best of my knowledge) no information about the specifics of the claims made by the nameless random people who claimed to have encountered him after his death.

          • RCF says:

            As far as I know, the disciples never claimed that Jesus was resurrected. All we have is some books with unclear authorship claiming that the disciples saw Jesus after he died, without any explanation as to how the authors came across that information.

          • Troy says:

            Well, first of all, that would fall under the category of “you are assuming that people would be more likely to testify to that if it were true, than if it weren’t”.

            Let the background K be that Jesus was killed, R be the proposition that he was resurrected, and tR be the proposition that a particular disciple testified that he was resurrected. P(tR|R&K) is presumably nearly 1 — how many people would not tell others if their master was killed and raised back to life? Are you really suggesting that P(tR|~R&K) is also nearly 1? That the disciples would be nearly certain to lie about Jesus being resurrected, when doing so literally cost them their lives?
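
            (To make the structure explicit, in the same notation: by Bayes’ theorem the testimony multiplies the odds on R by the likelihood ratio, so P(R|tR&K) / P(~R|tR&K) = [P(tR|R&K) / P(tR|~R&K)] × [P(R|K) / P(~R|K)]. The dispute is then really over how small P(tR|~R&K) is, and over the prior odds P(R|K) / P(~R|K).)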

            Second, the claim that people testified to it is itself a claim that needs support.

            As far as I know, the disciples never claimed that Jesus was resurrected. All we have is some books with unclear authorship claiming that the disciples saw Jesus after he died, without any explanation as to how the authors came across that information.

            The authorship of the relevant New Testament books is not unclear. The first verse of 1 Corinthians identifies its author as Paul, and 1 Cor 15 gives a list of the people to whom Jesus appeared. The Pauline epistles in general, as well as the Book of Acts, give us a general picture of Paul’s life which makes it abundantly clear how Paul could have “come across that information.”

            The Gospels are not explicitly signed in the way that 1 Corinthians is, but this was completely normal for books at that time. An unsigned book is not necessarily anonymous: its authorship can be common knowledge. Plato’s dialogues, for example, are not explicitly signed, but for most of them their authorship is not in serious doubt, and all ancient sources agreed that they were written by Plato. In the case of the Gospels, it was the unanimous record of literally every historical source who takes a stand on the matter up until the 4th century that Matthew and John were both written by disciples of Jesus of those names, Mark was written by a disciple of Peter, and Luke and Acts were written by the companion of Paul by that name mentioned in Acts. There is no rival tradition of authorship for any of these books, and not even skeptics whose writings survive expressed doubts about the authorship of these books until Faustus in the 4th century.

            Matthew and John’s source for their information is obvious; they were disciples of Jesus. Mark, if we are to believe the tradition, was a disciple of Peter, so his source is clear. Luke was a companion of Paul, and he himself tells us that he spoke to eyewitnesses:

            “Inasmuch as many have undertaken to compile a narrative of the things that have been accomplished among us, just as those who from the beginning were eyewitnesses and ministers of the word have delivered them to us, it seemed good to me also, having followed all things closely for some time past, to write an orderly account for you, most excellent Theophilus, that you may have certainty concerning the things you have been taught.” (Luke 1:1-4, ESV, emphases mine)

    • Cauê says:

      (addressing all the above)

      Hm, I think I could explain my questions better.

      “Is this a problem” was meant to be open – maybe it’s a practical problem, or raises theological concerns, or is taken as problematic evidence… I don’t know what importance this has (if any), and in what way.

      “Caused by the same factors” – What I meant was something like “we know there are factors that can cause people to have strongly held but incorrect religious beliefs, as shown by the variety of religions; is this something that is taken in consideration (maybe explained)?”

      And I probably do take atheism as the default, but I ask because I’ve already seen many people explain why they reject atheism, but not many that have explained their rejections of other religions.
      (eta: also, I know atheism’s answers, but not the religious ones)

      • Troy says:

        There are certainly practical and theological problems in the neighborhood here. The practical problems don’t seem to me to be fundamentally different from those involved in living in a pluralistic society. Nor do their solutions: treat others as free and equal beings with the right to believe what seems true to them. Attempt to persuade them by rational argument to the extent you think the correctness of your own views is important.

        Theologically, there are various problems you might raise. On some Christian theologies all these people are going to be eternally damned, which produces an apparent problem for God’s compassion. I say reject those theologies; we can still be Christian (and, e.g., affirm everything in the Nicene Creed) without them.

      • Jaskologist says:

        I think the trick is that most people don’t line up all the options and then reject them until one is left standing; instead they accept the best option they have before them. And then maybe they continue to evaluate new options as they make themselves known.

        So most Christians accept Christianity because they find it compelling for whatever reason. They probably have reasons they’ve rejected atheism because that is the alternative they have encountered the most.

        Unless they like studying different religions/philosophies, they probably haven’t had much cause to think one way or the other about Hinduism, Buddhism, etc.

      • Irrelevant says:

        A couple further responses then:

        we know there are factors that can cause people to have strongly held but incorrect religious beliefs, as shown by the variety of religions; is this something that is taken in consideration (maybe explained)?

        The premise set of most (hedging, pretty sure it’s all) major religions includes Everyone Is Wrong. So we start there, and then we observe people being wrong constantly, and there’s not a problem.

        I probably do take atheism as the default.

        I think this view is amnesiac. Personal, philosophical, and anthropological evidence suggest that believing in souls is a natural result of the mind-body dichotomy we all experience. This doesn’t mean it’s correct –my stance is that it’s a measurement error– but to suggest materialism as the default epistemological state requires you to not only live in a bubble that excludes virtually everyone to ever live, but almost certainly to forget your own previous intuitions on the matter.

        • Samuel Skinner says:

          Atheism doesn’t mean materialism. You can believe in souls and still doubt the existence of God; in fact it is easier, because in a universe where magic exists, someone performing a miracle needn’t be a God, just an accomplished sorcerer.

          Of course then you run into the question of which category of magical beings counts as gods.

          • Forge the Sky says:

            “Atheism doesn’t mean materialism.” Fine, though I think a bit of a stretch. Without the existential any sort of thing becomes just more ‘matter,’ however well or poorly understood.

            “…in fact it is easier because in a universe where magic exist, someone claiming a miracle isn’t a God, but an accomplished sorcerer.” I’m not sure this is valid. Belief in God and a belief in magic are hardly mutually exclusive, if anything they are correlated.

          • Samuel Skinner says:

            “Fine, though I think a bit of a stretch. Without the existential any sort of thing becomes just more ‘matter,’ however well or poorly understood.”

            Why? What do you think of the new age stuff like crystal power and the like? You can totally believe in things that go beyond what matter does and not believe in God.

            “I’m not sure this is valid. Belief in God and a belief in magic are hardly mutually exclusive, if anything they are correlated.”

            Sure, but there is no reason someone who claims divine power isn’t just a sorcerer. In fact Christianity generally rejects the existence of magic on the grounds that only God can violate natural laws.

          • On the contrary, Christianity usually asserts that magic is damnable (and sometimes not even that), not that it is impossible. The notion that magic doesn’t exist is a fringe position before modern times. Furthermore, there are Christian white magic traditions which survive to this day, such as many streams of curanderismo.

          • Samuel Skinner says:

            “On the contrary, Christianity usually asserts that magic is damnable (and sometimes not even that), not that it is impossible. ”

            The Christian position is that only God can suspend natural laws. Satan may or may not also be able to, but that puts a bit of a damper on the ability of people to use magic. You can receive power from one or the other, but I don’t believe Christianity believes in independent sources of power.

            “The notion that magic doesn’t exist is a fringe position before modern times.”

            It is sort of essential for Christian belief. The resurrection doesn’t work as a miracle if anyone can pull it off.

            “Furthermore, there are Christian white magic traditions which survive to this day, such as many streams of curanderismo.”

            Faith healing isn’t magic. It is supposed to be through God’s power, not the practitioner’s, that the changes are possible. I don’t consider it magic any more than prayer or relics are magic.

          • Irrelevant says:

            Atheism doesn’t mean materialism. You can believe in souls and still doubt the existence of God. … What do you think of the new age stuff like crystal power and the like? You can totally believe in things that go beyond what matter does and not believe in God.

            Given appropriate refinement of terms, sure, but one of the requests in this thread is that we not focus overly much on atypical cases. Its adherents can align themselves with whichever coalition they want, but for the purposes of this discussion, I don’t accept impersonifiable spiritualism as a salient form of atheism. (And I don’t think chalking out the line between thaumaturgy and magic is relevant here either, for similar reasons.)

          • aerdeap says:

            Atheism means materialism to pretty much everyone who isn’t a nonmaterialist atheist, which is unfortunate, since it sounds like a fascinating viewpoint.

          • Samuel Skinner says:

            “Given appropriate refinement of terms, sure, but one of the requests in this thread is that we not focus overly much on atypical cases.”

            Someone claimed that atheism isn’t the default because atheists are materialists. Pointing out a single example is enough to show that chain of logic is wrong- it doesn’t matter if it isn’t typical.

            “I don’t accept impersonifiable spiritualism as a salient form of atheism.”

            Why not? They certainly don’t believe in God and the requirement to be an atheist is to not believe in God. You might not like it, but the people who claim God isn’t real and the stories are all about aliens are as much atheists as you are.

          • houseboatonstyx says:

            Jainism and Theravada Buddhism are often described in reference books as “atheistic religions”, meaning they don’t see any one big creator God at the top. But they seem to have plenty of the in-between stuff*, and past and future Buddhas and Tirthankaras to reverence.

            * Eg reincarnation, psychic healing, astral travel, chakras, auras, mantras, beads, chanting, asanas, (temporary) heavens, etc

      • I don’t believe that some people having simplistic (I prefer this to ‘incorrect’) beliefs is necessarily a problem, in the right context. To take an example from upthread, most people have an extremely simplistic understanding of evolution, but this doesn’t tell us very much about the value of evolution as a theory which informs educated scientists. In a similar vein, I don’t care if laity have a simplistic understanding of the faith, so long as pastors and theologians have a more sophisticated understanding. (However, there are branches of Christianity which have deliberately expelled intellectual theology and demand that their pastors be as ignorant as their laity. They deserve what they get.)

        More important, however, is to recognize that the end goal is ethics, not epistemics. “If I can fathom all mysteries and all knowledge, and if I have a faith that can move mountains, but do not have love, I am nothing.” So any pastor has to consider what the abilities and needs of his congregation are, and gauge his presentation to that level. Jake has an IQ of 87 and works laying gravel on a road construction crew, and when he learns about Genesis, what he needs to understand is that God made the world and made him. He does not need to understand how the creation myth of Genesis 1 is situated within Ancient Near East mythopoetics and connect that to St. Augustine’s work on the relationship between faith and reason. Trying to give him that kind of understanding is likely to confuse him, and actually hamper the social, ethical, and spiritual benefits that he gets from his more simplistic understanding.

        • Jiro says:

          To take an example from upthread, most people have an extremely simplistic understanding of evolution, but this doesn’t tell us very much about the value of evolution as a theory which informs educated scientists.

          Scientists are able to describe what it means to do science in a way that makes it utterly clear that scientists are a lot closer to doing real science than the guy in the street with a simplistic understanding of evolution. Religious believers are unable to do likewise.

          Jake has an IQ of 87 and works laying gravel on a road construction crew, and when he learns about Genesis, what he needs to understand is that God made the world and made him. He does not need to understand how the creation myth of Genesis 1 is situated within Ancient Near East mythopoetics…

          He doesn’t need to understand those details, but the things that he does need to understand are implied by those details, so whether those details are true or false matters, even to him.

          • Which things implied by those details do you think are important for Jake?

          • Samuel Skinner says:

            Off the top of my head, “God made you”, “natural instincts come from God” and “certain natural instincts should not be followed” form the relevant train of thought that has to be scrutinized.

        • Forge the Sky says:

          I think this line of reasoning is valid in some faiths, particularly Christianity, but is not universal. In Christianity you have a single life to get your shit together, as well as a perfectly all-knowing and compassionate judge figuring out what to do with you at the end of it all. So things like your innate ability and circumstances are presumed to matter.

          In Buddhism, for instance, all that matters is how well you actually accomplish and comprehend the truth – regardless of ability or circumstance. The Absolute doesn’t care how hard you tried to achieve Nirvana. Fairness or right judgement isn’t part of the vocabulary – although there is some compensation in the fact that, failing this round, you get another go at it until you do manage to find Nirvana (not that it’s any comfort to you in this life, with these memories and this personality).

      • Yehoshua K says:

        I guess my answers are pretty clear by now. In Jewish scholarship, we regard the evidence for Judaism as pretty overwhelming, and generally ask ourselves why other people are not convinced.

        The most common explanation that I have come across in Jewish scholarship as to why most people have wrong beliefs (from our perspective, of course) is that most people believe what makes them feel good, what allows them to live as they wish to, not what is logically true.

        • Chevalier Mal Fet says:

          Purely out of curiosity (not contention), how do Jewish scholars explain Jesus’ apparent fulfillment of prophecies about Messiah?

        • Airgap says:

          Presumably, Judaism makes Jews feel good and allows them to live as they wish too. This is basically the reason Luke Ford gave for converting to Judaism, not that I’m particularly inclined to credit his Torah chops:

          One thing that I loved about Prager was that unlike all of the serious Christians I knew, he didn’t regard the sins I wanted to commit as immoral, only as unholy. The other Jews I met held similar views.

    • iarwain says:

      Could you link to that previous discussion? I must have missed it.

    • J. Quinton says:

      I’m not religious, but I’m replying with a general point about defending things intellectually.

      Quite literally *anything* can be defended intellectually. Just because it is defended intellectually doesn’t mean that it is correct. For any belief that you find completely and utterly wrong or immoral, there’s probably a very intelligent person out there who can defend it. A mistaken political position, worldview, or whatever isn’t mistaken because every premise is wrong. It might be mistaken because of only one premise; and that one premise might be buried under thousands of years of sophistication meant to keep it hidden.

      Furthermore, I would dare say that for the majority of people who believe in religion (or most worldviews), hardly any of them were convinced by the actual intellectually rigorous version of said worldview. There are very few atheists that I know personally who were convinced by sophisticated arguments for atheism. Most of them are just cheerleading for the opposing team.

      • Samuel Skinner says:

        “There are very few atheists that I know personally who were convinced by sophisticated arguments for atheism.”

        Atheism doesn’t have sophisticated arguments. Its entire premise is that all religious belief needs to meet a certain standard of evidence and that it has failed to do so.

        • Vulture says:

          If you’re successfully responding to someone else’s sophisticated arguments, then in all probability you are yourself making sophisticated arguments.

          • Samuel Skinner says:

            Not particularly. Most responses are “that is logically flawed”: pointing out that the line of argument wouldn’t be considered acceptable in any other case.

            It is a bit like the difference between building a bridge and pointing out that the bridge is lacking support struts.

          • Joel says:

            That is logically flawed.

    • Yehoshua K says:

      My wife pointed out this question to me, and I’m going to answer from the perspective of an Orthodox Jew. I will answer from the perspective of a specific Orthodox Jew–myself. First, though, I’d like to note that I find it interesting that lots of people addressed the assumptions underlying the question, but very few (only one that I noticed) attempted to actually answer the question.

      First of all, what does not move me to belief in Judaism? My emotional experience of G-d during prayer. Sometimes I have it and sometimes I don’t. Sometimes I am emotionally moved by works of fiction. I assume that followers of other faiths at least sometimes emotionally experience the deity or deities, as they understand them. I will take for granted that it is not the case that G-d exists when I emotionally experience Him and does not when I don’t, that fictional characters and stories remain fictional even when they move me emotionally, and that Judaism, Christianity, Islam, Hinduism, etc. are not all simultaneously both true and false based on the emotional swings of billions of human beings all around the world. I’m sure that you are with me so far.

      Now, in surveying religions, I note that all religions (so far as I know) give some Origin Story. By this, I do not mean an account of how the world came to exist, but an account of how that particular religion came to exist in the world.

      Mormonism, for example, famously tells us that Joseph Smith received an angelic revelation concerning the location of golden tablets in a form of ancient Egyptian, which he located, translated, and then lost. Islam tells us that Mohammed likewise experienced an angelic revelation. Standard Christianity tells us that Jesus performed a number of miracles, culminating in his Resurrection, and thereby gained credibility for his claims of personal divinity. Buddhism tells us that Siddhartha Gautama meditated until he achieved Enlightenment.

      While each claim is unique in its specific details, they are similar in the crucial point that each requires us to trust the claims of their founders. Even if I had met Smith, Jesus, Mohammed, or Gautama, I could never independently verify their alleged revelations or status. Even in the case of Christianity, which claims that Jesus performed numerous public miracles, someone who witnessed those miracles (granting, for the sake of the discussion, that they all occurred) would still be right to question Jesus’ claims of divinity on the basis of Deut. 13:2-6, which speaks of false prophets that succeed in performing signs and wonders.

      We might refer to this form of uniqueness, where only the details differ while the general characteristics are identical, as “specific uniqueness.” The histories of the various countries of Europe are also specifically unique; they all experienced the same Roman Empire, and then the post-Roman Dark Ages, had feudal periods and monarchist periods and the Renaissance, while having different specific wars, kings, and so forth.

      Judaism, in contrast with the above faiths, has a generally unique Origin Account. According to the account of the Books of Exodus through Deuteronomy, the Jewish people experienced a national period of slavery in Egypt, then saw Moses perform the Ten Plagues in public fashion, in most cases after announcing them in advance, then communally experienced the Splitting of the Sea, communally ate manna for forty years, a food which had several miraculous properties, and experienced communal prophetic revelation at the time of the giving of the Ten Commandments. This story describes Moses, not as someone who received a personal revelation which he then convinced others of, but as someone who guided his entire nation into and through a period of sustained public revelation.

      This leads me to ask myself–if Judaism’s claims are untrue, if these events did not occur, then someone successfully convinced the Jewish people to accept extraordinarily dramatic events into their national mythology. If so, then it is empirically possible to get away with lies of this sort. If so, why do no other religions make such claims? Why is Judaism’s claim generally unique?

      The point is strengthened by the fact that in Deut. 4:32-35 the Bible predicts that no other religion will make such a claim. That is, not only is Judaism generally unique in the way that I discussed above, it has made a falsifiable claim that it will permanently remain generally unique. If Judaism is false, if the claims of Exodus through Deuteronomy are false, then at some point over the intervening millennia, the falsification should have shown up.

      While it is true that Christianity and Islam do not dispute the truth of the events of Exodus through Deuteronomy, they do not make any claims of their own in any way comparable. Simply put, if G-d wanted to replace Judaism with Christianity or Islam, why did He not publicly announce Himself as He did the first time?

      Judaism is also generally unique in a second way–in its history. I do not know of any other nation than the Jews to have been repeatedly conquered by foreign empires and removed from its land, repeatedly maintained its cultural identity in those lands, and repeatedly returned to re-establish mass settlement of, and rule over, its land.

      If it is true that Jewish history is ruled by the same factors that rule, say, British history, or Zulu history, or Chinese history, then I would expect Jewish history to be specifically unique, not generally unique. The fact that it is generally unique indicates that it is not ruled by those same factors.

      Third, in examining contemporary Israeli military history, I do not think that the outcomes of the various Israeli-Arab wars were predictable on the basis of material factors, particularly the War for Independence, the Yom Kippur War, and the various periods when Israel has endured enemy missile fire since the early 1990s. This comment is already long, and this is not the place to go into my views on Zionism (suffice to say that nothing so simple as “I am a Zionist” or “I am not a Zionist” could possibly be adequate). However, I think that this military history, all of which is within living memory, and some of which I have personally lived through, does not look like something that I would expect to see in a world devoid of the supernatural.

      • houseboatonstyx says:

        As a Christian-gone-Agnostic Lewisian, I’ll take a shot here — at a parallel Christian (possibly straw) target.

        Several of Yehoshua’s points might fall under Selective Demands for Rigor. Lewis failed on that (or left out some steps) in complaining that Hinduism let philosophers and savages live and let live, instead of making both of them sit side by side sharing a hymnbook, then both hear a somewhat philosophical sermon, before participating in a claimed bloody cannibalistic rite. He presented that as a deal-breaker. But who ever said that sort of Procrustean homogenization was a requirement for a true/legitimate/etc religion in the first place?

        • Yehoshua K says:

          Houseboat,

          I would be very interested in hearing you elucidate your points. Would you please explain where my logical errors are, as you see them?

          Thanks.

        • Yehoshua K says:

          Houseboat,

          I don’t really understand the weakness that you see in my arguments. I claimed that Judaism’s Origin Account, and the later history of the Jews, indicates supernatural intervention in a way that the Origin Accounts of other faiths, and the histories of their followers, do not.

          Why do you refer to that as Procrustean homogenization? I am very interested to understand where I am going wrong, if I am going wrong.

          Thanks.

          Edit–When I wrote this, I thought that my earlier comment had failed to upload. Sorry for the repetition.

          • houseboatonstyx says:

            @ Yehoshua

            ‘Procrustean homogenization’ was nothing to do with you or Judaism. It was my snarky term for something my dear Lewis admired in Christianity; and he rejected Hinduism for lacking it. He also rejected older religions for not claiming (in surviving materials) that Balder or Osiris were historical figures in a certain place and time. Thus he was setting those features (homogenization of followers, and historicity of deity) as features required by any true religion. Several of the items on your list seemed (forgive me) as … unimportant as evidence of truth … as those Lewis required (for whatever his reasons were).

            Scott’s terms, such as “Isolated Demand for Rigor,” are so useful that some of us get tempted to stretch their meaning to cover more and more things — which I may have done in my post. Jiro’s statement was probably better: “It’s easy to make any religious claim seem ‘unique’ by choosing the reference class”. And whatever that unique feature is, it’s easy for the adherents to find reasons why it should be important evidence for supernatural intervention.

        • Yehoshua K says:

          Houseboat,

          Ok, I understand that.

          Here’s my thing. My whole argument sums up to this. According to Judaism, when G-d established our faith, He did so in a way that made it unmistakably clear to anyone there that He was involved. No other religion, so far as I know, makes such a claim.

          If such a claim were easy to make, everyone would make it; since nobody else makes it, it must be hard to make.

          That means that we must ask ourselves–how did Judaism make it? It seems to me that the answer must be either that Judaism could make it because in Judaism’s case it is true, or else that the founders of Judaism had some unimaginably unique circumstance that allowed them to make a claim that nobody else could make, even with their example.

          Of the two, the former seems more probable to me than the latter; at the very least, it seems to me to make Judaism more probable than competitor theisms. I made a structurally similar argument from Jewish history generally, and recent Israeli military history particularly.

          I also noted that, in contrast to other faiths (again, so far as I know), Judaism makes a falsifiable prediction about future human religious history. That prediction has not been falsified, despite the passage of millennia.

          • houseboatonstyx says:

            @ Yehoshua

            Thanks for the summary. I’ll stay at summary level also, partly because such logic is of more general interest, and partly…. Well, if I won the argument, that might shake your faith, which might make you very sad. Whereas winning the argument would be only a mild pleasure for me. So your big sadness plus my small pleasure would sum to a net negative utility.

            In an earlier comment you said:
            Simply put, if G-d wanted to replace Judaism with Christianity or Islam, why did He not publicly announce Himself as He did the first time?

            I might simply say this is an argument against Islam or Christianity, not against Hinduism or other religions. Or I might argue that anything on the level of “why would any Supreme Being” is way above our pay grade. But the point that interests me is, you seem to be referring to the same G-d with the same motives and personality — that you already believe in. Which might prematurely assume the answer (once called ‘begging the question’). Whereas a quite different deity could choose a quite different way of founding Zis religion.

            Elsewhere you said:
            [A]ny human being who comes along and claims to be divine is, by definition, trying to replace Judaism’s understanding of the divine with a totally different one. — So, this totally different Supreme Being might well choose a totally different method of announcement. (The claimed claim ‘to be G-d’ brings in at least two different complications.)

            There’s another point which (forgive me) I can’t resist challenging. You have said:
            While each claim is unique in its specific details, they are similar in the crucial point that each requires us to trust the claims of their founders. and This story describes Moses, not as someone who received a personal revelation which he then convinced others of, but as someone who guided his entire nation into and through a period of sustained public revelation.

            Trusting someone else’s claim (of one kind or another), is needed in (almost) any religion. The writers of the Gospels tell us that Jesus said whatever, of the Koran, Mohammed; of various Sutras, Buddha; etc – if we trust those writers. The writer of Exodus tells us that Moses said something different. But in any case it is the writer of the document whom we must trust first.

            Another point. You’ve said in a different connection: Even in the case of Christianity, which claims that Jesus performed numerous public miracles, someone who witnessed those miracles (granting, for the sake of the discussion, that they all occurred) would still be right to question Jesus’ claims of divinity [….] Yes, there’s the observable fact – but the interpretation of that fact is something different. “Here’s a piece of wood. I tell you it came from the True Cross. So it is evidence that xyz is true.”

            Re falsifiability, a Bayesian(?) thought:
            “Well, it’s been a century and no vampire has shown up. So probably we don’t need to pack garlic in carry-on luggage this trip.”

          • Yehoshua K says:

            @Houseboat,

            Sorry that I didn’t get around to answering you yesterday.

            Please don’t worry about shaking my faith. What good is it to me if it isn’t true, after all? My faith demands of me high prices, potentially including a rather painful death, should I ever fall into the hands of ISIS or anyone like them.

            You note that my complaint against Christianity and Islam is not an argument against non-Abrahamic faiths. True. I already presented my argument against them earlier.

            You suggest that predicting the actions of a Supreme Being(s) is above our pay grade. Essentially, you’re denying that human reason is a tool able to distinguish true actions of an SB from actions falsely attributed to the same. Now, the position that human reason is an unreliable tool is a respectable philosophical position, but it’s not an assumption I make. We’ll just have to agree to disagree about this one.

            My argument, again, hinges on my assertion, detailed above, that Judaism’s Origin Account and subsequent history is of a type that lends its claims of supernatural intervention great credibility. I have already gone through this at length, and do not wish to waste my time or yours going through it again, unless you have specific questions to ask.

            However, supposing some other supernatural entity indeed founded, for example, Hinduism, I cannot know that, because that entity did it in such a way as to be indistinguishable from a human-invented myth.

            My discussion of the difference between the Christian and Jewish understandings of the divine was meant specifically to explain why Judaism would regard JC as a false prophet or a cult leader regardless of how many miracles he might have performed. As Troy noted, JC claimed to be the G-d of Judaism; I was explaining why any such claim would be untenable. That part of my argument had nothing to do with non-Abrahamic faiths.

            You note that we must trust the writer of the document–in my case, the writer(s) of the Pentateuch–first. You seem to be missing my point, here, or rejecting it without telling me why.

            The point that I have tried to make is that the claims of Exodus etc. are not the kinds of lies that could be entered into a national history. My proof of that is that nobody else in human history (so far as I know) has ever attempted, or succeeded in, getting another nation to believe something similar.

            This is not simply an arbitrary “I believe the authors of Exodus but not the authors of Mark or the Koran.” If you indeed regard this as arbitrary, if you are of the view that my distinction is not significant, I would ask you kindly to explain why.

            In response to my explanation of why a Jewish witness to the career of JC would still reject his theological claims, even if that career progressed as described in the NT, you say “Yes, there’s the observable fact – but the interpretation of that fact is something different. ‘Here’s a piece of wood. I tell you it came from the True Cross. So it is evidence that xyz is true.'”

            I don’t actually understand your point here at all. Would you please clarify?

            Again, I’ll need you to explain more clearly why you are rejecting my claim that the falsifiable assertion made by Deut. lends it credibility. It seems to me that your closing statement actually supports me, if anything, which seems unlikely to be your intention.

            Supposing that we’re arguing over whether or not vampires exist, and one day goes by without seeing a vampire, that is weak (very weak) evidence that vampires do not exist. If a hundred years go by without vampiric sightings, that is significantly stronger evidence. If millennia go by, that is stronger evidence yet.

            Likewise, Deut. predicts that nobody will ever make a claim comparable to that made by Judaism. Millennia have passed, numerous religions have developed, two of which are strongly influenced by Judaism. Nevertheless, the event which Deut. tells us to expect never to happen has indeed never happened. Why does this not lend Deut. credibility?
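
            As a rough, purely illustrative sketch of how this kind of evidence accumulates (the prior odds and the per-period chance of a counterexample turning up are my own invented numbers, not anything argued above):

            # Each period that passes with no counterexample multiplies the odds in
            # favour of "no counterexample will ever appear". Numbers are invented.
            def odds_no_counterexample(periods, prior_odds, p_sighting_per_period):
                # Likelihood ratio per silent period:
                # P(no sighting | nothing to sight) / P(no sighting | something to sight)
                lr = 1.0 / (1.0 - p_sighting_per_period)
                return prior_odds * lr ** periods

            print(odds_no_counterexample(1, 1.0, 0.1))     # one quiet period: ~1.1, very weak
            print(odds_no_counterexample(100, 1.0, 0.1))   # a quiet century: ~4e4
            print(odds_no_counterexample(3000, 1.0, 0.1))  # quiet millennia: astronomically large

            How much weight this carries still depends entirely on the prior odds and on how likely a counterexample would be to surface in any given period, which is where the actual disagreement between the commenters lies.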

          • houseboatonstyx says:

            (A short remark to test editing.)

            This is not simply an arbitrary “I believe the authors of Exodus but not the authors of Mark or the Koran.”

            Legends grow in the oral telling. Mark and the Koran came further into the age of written records, so the Jewish legends had more time to grow, before being fixed in a written form.

          • houseboatonstyx says:

            The point that I have tried to make is that the claims of Exodus etc. are not the kinds of lies that could be entered into a national history.

            What do you mean by “a national history”? Would the Mahabharata qualify?

            I beg to take issue with your terms ‘lies’ and ‘entered into’. Legends are not lies, they are sincere beliefs — unwitting works of art around a core of fact or meaning.

            Aiui, a compiler looks through much material — which includes legends — and culls out whatever seems dubious. So the writer/s of the Penta/// would not have to “enter [something] into”, they could just refrain from removing a legend or legends that were already part of popular belief. (Or the legendary or exaggerated bits of a true event. For example, JC may have miraculously fed 50 or 500 people, which grew to 5,000 in the oral or informal letter stage, before Mark fixed it in an official written form.)

      • Troy says:

        Yehoshua: Thanks for adding your contribution. I’m not sure I understand your argument against Christianity. You say,

        While it is true that Christianity and Islam do not dispute the truth of the events of Exodus through Deuteronomy, they do not make any claims of their own in any way comparable. Simply put, if G-d wanted to replace Judaism with Christianity or Islam, why did He not publicly announce Himself as He did the first time?

        Christians hold that G-d did publicly announce himself: he became incarnate in Jesus, who taught “I am the way, the truth, and the life,” etc., and verified Jesus’s claims by raising him from the dead.

        You seem to more or less note this elsewhere, and say:

        While each claim is unique in its specific details, they are similar in the crucial point that each requires us to trust the claims of their founders. Even if I had met Smith, Jesus, Mohammed, or Gautama, I could never independently verify their alleged revelations or status. Even in the case of Christianity, which claims that Jesus performed numerous public miracles, someone who witnessed those miracles (granting, for the sake of the discussion, that they all occurred) would still be right to question Jesus’ claims of divinity on the basis of Deut. 13:2-6, which speaks of false prophets that succeed in performing signs and wonders.

        Christians, naturally, will maintain that Jesus and his followers were not asking Jews to follow other gods, but the same G-d. Jesus claimed, after all, to be the Messiah foretold in the Hebrew Bible.

        Is it possible that Jesus was a false prophet? Sure, it’s possible. But the question is how credible this is, given the nature and extent of the miracles he performed, the nature of his message, and so on. That the New Testament reports are more or less accurate in their descriptions of Jesus, to the point where he even came back to life after being crucified, but that Jesus is nevertheless a false prophet seems to me a tough pill to swallow.

        • Yehoshua K says:

          Welcome to the conversation, Troy–happy to “meet” you.

          I do not regard the story told in the NT as a public announcement because it does not share any of the following four characteristics with the story told in Exodus.

          The NT’s story did not occur in the presence of huge numbers of people. We’re talking about what, a few dozen people at a time?

          Exodus states that the events related there occurred in the presence of approximately 600,000 men between the ages of 20-60 plus their wives, children, the elderly–several million people, certainly.

          Second, the NT’s story relates a career that happened in the presence of a tiny percentage of the nation. Exodus’ story, by contrast, relates events occurring in the presence of the entire body of the nation, 100%.

          Third, the NT’s story relates events that occurred over a relatively short span of time. Correct me if I’m mistaken, but Jesus’ preaching career was only a few years, right?

          Exodus’ story, by contrast, relates events that occurred over slightly more than 40 years, including two miracles (the manna and the protective and guiding Glory Clouds) that were maintained for decades on end.

          Fourth, the NT says that Jesus claimed to be G-d, but not in a way that made it inconceivable that he was simply a human cult leader. I could also make the claim of being divine, which you would either believe or not.

          Exodus, by contrast, describes public revelation in a way that would leave no doubt to an onlooker that something non-human was occurring. I refer you to Exodus 19:9, “The Lord said to Moses, Behold, I come to you in the thickness of the cloud, so that the people will hear as I speak to you, and they will also believe in you forever,” and Ex. 19:16-19, which describes G-d verbally answering Moses from the midst of flame, earthquake, cloud, shofar blast, and lightning storm, and Exodus 20:15, which mentions them seeing sounds.

          As to your second point, it is true that Jesus claimed to be the same G-d, according to the NT. But the traditional Jewish understanding of G-d is that He is non-corporeal and absolutely One.

          Allow me to explain what I mean. Judaism is not just monotheistic in the sense of “there is one deity, not two or more.” It is radically monotheistic, in that it denies that the deity can be subdivided. You and I are composed of minds and bodies, each of which has various parts (memory, imagination, lungs, and intestines, for example), which in turn can be divided and sub-divided pretty much endlessly.

          Judaism understands G-d to be totally non-divisible. This means that any apparent separate divine traits or emotional states are to be understood not as actual separate “parts” of G-d, but as flaws in our human perception of Him. Thus, His “memory” is identical to His “strength” is identical to His “justice” is identical to His “mercy,” and so forth.

          Given the above, you can well understand that any human being who comes along and claims to be divine is, by definition, trying to replace Judaism’s understanding of the divine with a totally different one.

          Also, allow me to be clear that I do not in fact think that he actually performed anything like the miracles ascribed to him in the NT. If he had, I think we would have found some references to him in the Talmud and other Jewish works from that period which do survive in their entirety to this day.

          My point was that even if he had–even if I had seen them with my own eyes–I still would not accept his theological claims.

          In general, before Judaism can accept someone as a prophet, he first must be someone of outstanding Torah scholarship and moral character, someone that the community of believers knows to be worthy of Divine communication. Then, he must make positive predictions of the future (such-and-such will happen), which must occur in every particular.

          If his prediction fails to materialize, even in the smallest detail, he is a false prophet.

          Even if it does materialize, if his claims in any way contradict the teachings of Moses, or claim to settle a matter of Jewish law via prophecy (which is to be determined only by human scholarship, never by prophetic revelation), then he is a false prophet.

          I think it is clear how I must regard revolutionizing Judaism’s understanding of the Divine, demanding that we go from a belief in a non-corporeal One to a sometimes-corporeal Trinity.

          If you wish to continue this conversation, it might perhaps be appropriate for us to switch to private email–what do you think? My address is yehoshua.kahan.personal@gmail.com

  46. Hey Scott, apparently you advocate ‘one omnipotent, omnipresent World Government‘, according to a comment in a recent reddit post on your Moloch article. Would someone who knows Scott’s archipelago better than I do like to respond to that?

    edit – to his/her credit, this person appears to be interested in discussing the issue.

  47. Avantika says:

    There are wrap parties where I live!

    Thanks for alerting me to this, Scott.

  48. PDV says:

    It has come to my attention that a) binding_of_caller is a fairly popular Ruby gem for debugging and b) BindingOfCaller is not a currently-existing Weird Sun Twitter.

    Anyone want to get on that?

  49. Linked List says:

    So how did Terry Pratchett impact your lives?

    • Izaak Weiss says:

      At the behest of one of my middle school friends, I took home Jingo and Thief of Time, the only two books our middle school had of his in their library. I read them both extremely quickly and began to devour every other book of his I could find.

      Pratchett isn’t just my favorite sense of humor; he basically helped me build my sense of humor.

    • Tenobrus says:

      I read The Wee Free Men and Hat Full of Sky at a pretty young age, and looking back they have been seriously important to my current identity/situation. Along with several other books I read around that time they helped implant the idea that there even were “different modes of thinking” and that rationality/thinking about things could be just as interesting and important as magic. Without those ideas I don’t think I would have ever gotten into HPMOR (and thus LessWrong, and thus SSC).

      I love all of his books though. Every one has some kind of interesting concept, some fascinating twist on the real world, amazing characters, and of course a goddamn wonderful sense of humor. I’ve been meaning to reread all of Discworld for a long time now. I think this is enough of a catalyst for me to start.

      I cried for a good three minutes after hearing about his death. That is not something that happens often.

      • Nornagest says:

        The Tiffany Aching books are probably better rationality 101 than anything I’ve seen elsewhere, including on LW. Also fun to read. They are very, very, very good.

        Unfortunately I read them well after I should have. When and if I reproduce, they’re going to be some of the first serious reading material I give to my grubs.

    • injygo says:

      His writing didn’t change my life or anything. It was just very enjoyable and fun to think about. Good Omens introduced me to Neil Gaiman, whose Sandman comic did change my life.

      Much of his thought is beautiful and true, but somehow not salient enough to make it into my personal mythology.

    • Lachlan Cannon says:

      I started reading his books when I was ten, two to three a year, based on new ones coming out, or library availability on back copies changing. It’s amazing how deeply two to three books a year for twenty years ingrains itself. I’ll forever be grateful for the strong humanist bent to his writing, something no other fantasy author I had read before that time had displayed, and too few since then have.

    • Slow Learner says:

      He was funny, and wise, and true.
      He loved people including their failings and foibles.
      And he had a whole lot of rage for the self-satisfied, the rich, and the cruel.

      I learned a lot from him. He will be sorely missed. I am going to hand out copies of his books to anyone I think might enjoy them.

    • I doubt I was the only thirty-something who had to sneak into the toilets at work and have a little cry when I heard the news. He was like a kindly, wise uncle who’d tell you outlandish stories, and as you got older you’d realise you’d learned something profound from them.

    • Chevalier Mal Fet says:

      After reading “A Slip of the Keyboard” last month, I realized I was running out of time and started writing him a fan letter. My plan was to mail it off next week, when spring break gave me a break from teaching so I could edit it to a nice shine.

      I finished it yesterday. ._.

      Here it is, for those interested:
      “Dear Mr. Pratchett –

      In the fall of 2004, a friend of mine pushed a book into my hand and breathlessly whispered to me, “Read this.” When I looked down, I found that I was holding a curious hardback tome, laminated, as all library books were, with a cover showing a fist thrusting out of a pile of letters. I read the words on the cover. Going Postal, by Terry Pratchett. Some silly book about a post office? What could be more boring? Still, my friend wasn’t usually wrong about my literary tastes.

      A few hours later, instead of paying attention in history class, I cracked open the book. “The flotillas of the dead sailed around the world on underwater rivers…”

      I devoured the book in six hours.

      From that rainy October day, I was a lost soul. I scoured the library for more books by this Terry Pratchett. I got a part time job, money to feed my obsession. Every week I had a spare bit of coin it was off to the local bookstore, where I methodically hunted down and seized every last Pratchett book they had, to be dragged kicking and screaming back to my lair.
      I bought the books not just for myself, no. This was too big for just one person. I became a missionary, spreading the gospel of Pratchett wherever I could. My books were resources, meant to hook new victims, luring them in to the glorious new world I had found. Countless other friends, at some point or another, have now had a Pratchett book shoved into their hands with a breathless, “Read this.”
      It was like nothing I had ever read before.

      Discworld is special.

      Plenty of authors do humor. Plenty of authors can do fantasy. The intrigue, the mystery, all that is not difficult to find. But I have never encountered another author who can put the same amount of heart into his books. The only books I have ever cried at were Discworld novels. The only novels that always left me with the warm glow of contentment radiating through me after the end were Discworld (I can still quote from memory the last line of Witches Abroad: “But they went the long way, and saw the elephant.”).

      Discworld found me an angry, disaffected young man, right at that age where he knows everyone else in the world is wrong, where everything is awful, and he’s oh-so-superior for being the one to notice that. Discworld shook me out of that state.

      Discworld taught me to see the good in people. “It takes all sorts to make a world.” Even Ankh-Morpork’s worst, most selfish, and vile characters had some nugget of goodness inside of them. The lazy, incompetent Fred Colon is still a wonderful human being. Who can fail to be charmed by CMOT Dibbler? And are there any who would not vote for self-professed tyrant Vetinari?

      That, to me, was the charm of Discworld: the sheer, irrepressible love of humanity and the world that constantly shines through, no matter how dark the subject matter. Joy, love, and tolerance, not hatred, or cruelty, were truly the best ways to live (and of course, what better teacher of these things than Death Himself?).

      Discworld made me who I am today.

      I learned that you have to believe the little lies as practice for the big ones.
      I learned that some things are important.
      I learned that sometimes, in the hands of the right person, glass becomes diamond.

      Discworld is usually relegated to the fantasy and science fiction shelves. My English teachers turned up their noses at it. It was not “serious literature.” But I felt differently. I felt that it was the most serious literature I had ever read.

      This winter of 2015, I eagerly purchased your book of essays, A Slip of the Keyboard. Afterwards, I found myself once again seriously rethinking things. I was deeply distressed to learn of how your disease has progressed, and realized that my dream of getting my copy of Going Postal signed (still my favorite, after all this time) might never be realized. I might never be able to tell you how my own life, personally, is the richer because of your work. Never be able to tell you that I am a better human being because I read Discworld.

      I have never written a letter to an author before. But I may have no other chance, so here I am, fumbling around for words to address the greatest wordsmith I have ever known.

      Ultimately, though, there are no real words to convey what your work has meant to me, so I will close simply.

      Thank you, I enjoyed the books very much. “

      • Slow Learner says:

        You bastard, you’ve made me all teary all over again. That was beautiful.

      • InferentialDistance says:

        Going Postal was also my first Discworld novel. Though mine had Moist von Lipwig in gold on the cover.

        I can’t say that the series has deeply altered my life, but it has reassured me that the cynical view isn’t wrong and cynics can still be good people (thank you, Vetinari). It made my life happier, more comfortable, funnier, and more friendly, and I’m sad to see Pratchett go. Those stories had heart in a way that a lot of stuff doesn’t, and I wish I knew how he did it.

        I learned that you have to believe the little lies as practice for the big ones.

        “THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE, ONE MOLECULE OF MERCY.”

        Best scene in Hogfather, which is my second favorite Discworld novel (Night Watch being my favorite).

        Thank you for sharing.

    • speedwell says:

      I discovered to my shock that I hadn’t read anything whatsoever by him (I have been reading science fiction for decades! how in hell did I manage that?), so I immediately downloaded five of his books for Kindle. A young friend told me that she derived great comfort from reading the Discworld books when things were particularly crazy and difficult in her life.

    • Shenpen says:

      Good laughs. I don’t understand why a parody writer is so idolized. Caveat: I have only read The Colour of Magic, Guards! Guards! and Sourcery, or whatever it was called.

      • Slow Learner says:

        The early ones are more parodic, the later ones are much deeper as he develops his voice, his characters and his skill as a writer.
        Try something like Feet of Clay, or Hogfather.

        • Zorgon says:

          Yussss. Feet of Clay in particular is a humanist masterpiece.

          I strongly enjoyed the cultural commentary of Interesting Times too, although it too often falls into fanservice for the early adopters.

          More or less anything with Vimes or Granny Weatherwax in it is likely to be among his best.

          • Slow Learner says:

            Feet of Clay is the first one I read, so I guess I always knew there was more depth to his writing, albeit I didn’t get even half of it that first time through.

      • Peter says:

        The caveat explains a few things. Like many authors, Pratchett’s work varied as time went on. A lot of the early works were very much parodies – certainly The Colour of Magic and Sourcery. Guards! Guards! was one of those books that launched a set of characters and situations that Pratchett kept wanting to come back to, and it sort of stopped being parody and more an interesting setting to write stories in.

        That said, the variation by era isn’t quite the whole story. Mort, for example, is very early, but I think there’s a lot more depth to Mort than to The Colour of Magic or Sourcery.

        Also, you get lots of people, including myself, saying the earlier works were the better ones.

      • Chevalier Mal Fet says:

        Color of Magic and Sourcery, in particular, are not his best works.

        I would also add Reaper Man and Soul Music to the list of really good ones. Anything with Death as the main character is excellent.

        Small Gods if you’re interested in the subject of religion at all.

        The Truth, Thief of Time, the Last Continent, Going Postal – each of these taught me something significant. I recommend you give him another shot.

        • Susebron says:

          Reaper Man is probably one of my favorites. Here’s a relevant quote:

          “No one is finally dead until the ripples they cause in the world die away—until the clock he wound up winds down, until the wine she made has finished its ferment, until the crop they planted is harvested. The span of someone’s life, they say, is only the core of their actual existence.”

      • Devilbunny says:

        Read Hogfather. If that doesn’t hit you in the feels, Pratchett isn’t for you.

      • Deiseach says:

        The early ones are very spotty until he gets his voice. People tend to have favourite series; either the Witches, or the Watch, or the Wizards. I haven’t read the latest ones, but for things like “Lords and Ladies” he got the Good People spot on.

        And the “vampires in brocade waistcoats” never fails to make me laugh.

        • houseboatonstyx says:

          I’ve heard L&L praised as symbolic/allegoric of this and that, but the praisers never mentioned politics. I guess you had to be British.

          • Deiseach says:

            the praisers never mentioned politics

            If it’s me you’re replying to, I mean the Good People as in the fairies (don’t you know you never refer to them by name, only by euphemism)?

            🙂

          • houseboatonstyx says:

            ETA: Oh damn, I may be conflating _Lords and Ladies_ with the one about the vampires as landed gentry with the maypole and all that? Not enough edit time to fix it.

            Pre-Marx landed gentry vs tenants.

            Okay, Marx was talking about factory owners vs factory workers. And I like the US version, “filling their pockets with the sweat of honest working men”. But anyway.

            Chapter Ten: The Working Day
            marxists.org/…/marx/works/…/ch1…
            Marxists Internet Archive
            Karl Marx. Capital Volume One … Compulsory Laws for the Extension of the Working-Day from the Middle of the 14th to the End of the 17th … Within the 24 hours of the natural day a man can expend only a definite quantity of his vital force. …… It quenches only in a slight degree the vampire thirst for the living blood of labour.

            [PDF]The Political Economy of the Dead: Marx’s Vampires – Wake …
            gretl.ecn.wfu.edu/~cottrell/OPE/…/att…/01-PoliticalEconOfTheDead.pdf
            by M Neocleous – ‎2003 – ‎Cited by 30 – ‎Related articles
            reminder of the extent to which the theme of blood and horror runs through …. 14 Karl Marx, ‘Inaugural Address of the International Working Men’s Association’.

            BTW, Google informs that we can buy:

            Marx Blood on eBay – Find Marx Blood for less
            Ad http://www.ebay.com/
            eBay – It’s where you go to save.

          • Deiseach says:

            Yup, “Carpe Jugulum” is the vampire one 🙂

      • Bugmaster says:

        I would suggest Monstrous Regiment and Maurice and his Educated Rodents. Both are standalone books that, while obviously very funny, will grab you right by your Coherent Extrapolated Volition and yank, hard.

    • Murphy says:

      I learned to read reading his books.

      He played a huge part in shaping most of my early views on morality and life.

      I’ve read all his books, most of them many times.

      I only ever met him once at the Irish DW convention and I remember really wanting to tell him this when I went up to get a book signed but I just kinda froze up.

      When I heard the news I felt like a close family member had died, which also feels wrong, because it feels like usurping something rightfully belonging to people who were actually close to him.

  50. jaimeastorga2000 says:

    Scott, I don’t think Ozy is doing her open threads anymore. Where should we go to talk about gender and race?

  51. Princess Stargirl says:

    http://fredrikdeboer.com/2015/03/10/critique-drift/

    This article gives an alternate explanation of some of the phenomena that “motte and bailey” is applied to.

    • Irrelevant says:

      It’s a good description of the process, but I wouldn’t call it an explanation. The explanation is that talking points are invoked, not understood. Rounds-to-nobody actually reads their own side’s primary sources or strives for a critical understanding of their own side’s views, with the result that 90% of speakers don’t know what the hell (to pick arbitrarily) “cultural appropriation” actually means, they simply have an approximate understanding of when it earns applause to say it.

      • Anonymous says:

        For a while I thought of mathematics as something where you don’t quite understand so much as just know how to put the right words in the right order, and more recently I’ve been extending this to everything. Help I’m going crazy and becoming an unwilling radical skeptic!!

    • stillnotking says:

      That piece perfectly encapsulates the difference between liberalism and leftism. Leftists — the reasonable ones like Freddie — adopt a crusader mentality with limits. Liberalism is a crusader mentality of limits; the limits themselves are the goal; the liberal effort is to restrain the common failings of humanity, rather than to eradicate them (or ignore them, as the less reasonable leftists do). Liberalism is fundamentally pessimistic, for lack of a better word; not a vision of a utopian future, but of a future that is just a little less bad.

      There’s a reason the Bill of Rights outlines all the things Congress can’t do.

      • > Liberalism is fundamentally pessimistic,

        In the sense that pessimist is an optimist’s name for a realist? But liberals are still entitled to regard conservatives as the true pessimists.

        Liberals don’t want to limit state power because they think limitation is all that is achievable, they want to limit it to give individuals scope to improve their lives. That’s how liberal individualism connects up with liberal progressivism. And how liberal individualism suffers from conservative individualism.

        • Yehoshua K says:

          Your take is that a liberal is someone who wants to limit government power in order to give people a chance to improve their lives? Ok, by that definition, I’m a liberal. That is to say, I recognize that a world without government, an anarchic world, is a horror; I also recognize that a world in which government is able to dominate my life will be a horror, albeit one of several varieties of horror. Therefore, I favor a small and limited government, one that has the legal and practical power to suppress most violent and deceptive crime, but has neither the legal nor practical power to ruin my life. I recognize that a perfect balance cannot be struck, but very much want to live in a society that recognizes that neither extreme is desirable and tries to strike the best balance possible.

          • houseboatonstyx says:

            I also recognize that a world in which government[*] is able to dominate my life will be a horror

            *Or corporations, or churches, or the military, or police, or the rich.

            Of all those, I think it is the rich (in or out of corporations) who are most likely to snowball their power, and honest democratic government the most likely to keep them and the others in check, effecting a sustainable balance among them all.

          • Yehoshua K says:

            I don’t see a Reply button in houseboatonstyx’s comment, so I’m responding here.

            I agree to an extent; I don’t want any human being in a position to dominate my life and ruin it, whether with good or bad intentions.

            I do not think that a wealthy man who lacks any political power to take away my freedom and property poses anything like so severe a threat as a politician (elected or appointed) who does have that power.

          • houseboatonstyx says:

            The rich man can buy the politician, or defeat him in the next election. Still, it is not as easy to control how someone votes (in an honest democracy) as to control where he lives, what he eats, what forums he can post on, etc.

          • Yehoshua K says:

            Yes, the wealthy man poses a potential threat. He might be able to control the politician–but he faces the competition of other wealthy men to do it. He might be able to defeat him in the next election–but the incumbency is a powerful advantage, and wealthy men are defeated often enough. I notice that Donald Trump did not become the Republican nominee when he last declared his candidacy, much less President.

            The politician poses an actual threat to me; he can pass laws or regulations that stifle my freedom, that seize my property, or that even actually imprison me.

            I fear the threat that is far more than the threat that may be.

          • Cauê says:

            Once the politician has the monopoly of violence, the rich man can only threaten me through the politician, who can do it directly.

          • Yehoshua K says:

            Caue, yes, that’s my point. I think that all people, of whatever vocation, have the same human nature; politicians are no more likely to be wise and knowledgeable and honest and decent than corporate CEOs. But they do have a lot more direct power to ruin my life.

          • houseboatonstyx says:

            The politician is the only one who can be to some extent controlled by the voters. Without him, it would all be corporations with Pinkertons, and sending in bulldozers. Mall security officers would have guns and tasers, and the accused no rights.

          • Yehoshua K says:

            Houseboat,

            Of course that is correct. That’s pretty much what government is for: to protect people from violence and fraud. Of course if no formal government existed, we would soon find ourselves in a state of anarchy; I said as much in my first comment on this topic.

            That does not mean that government does not itself pose a danger of great abuses and a near certainty of minor abuses.

            Again, as I said, neither extreme is acceptable to me; I believe that a society, to be worth living in, must make an effort to maintain a balance, however imperfectly, between the horrors of anarchy and the horrors of tyranny.

            Let me put it this way. Domination is bad. I don’t want to be dominated. Not by corporations, not by churches, not by politicians, not by anyone. If any of them should have the power to dominate my life, I expect them to ruin my life. They are all equally bad, and in exactly the same way. What’s more, if I should somehow get to a position of dominating society, I would do the same thing to everyone else.

            It’s about being a limited human being, it’s about humans not being fit to wield that kind of power over each other. We’re not smart enough, not knowledgeable enough, not wise enough, not decent enough, and certainly not everything-at-once-enough. We’re humans, not gods.

          • Cauê says:

            If the corporations have Pinkertons, the politician hasn’t quite achieved monopoly of violence, and what I said doesn’t apply.

          • Yehoshua K says:

            I’m afraid that I am not familiar with the term Pinkerton. Would you please explain more fully?

          • Highly Effective People says:

            The Pinkerton Detective Agency is a private security company which became fairly infamous on the left due to their use as strikebusters in the late 19th and early 20th centuries. Blackwater before Blackwater if you will.

            Though that example drives home the absurdity of the comparison. The smallest state government has its own National Guard troops in addition to thousands of State Police, and even a local municipal police department will typically have armored vehicles and fully automatic weapons. Bill Gates has, what, maybe a hundred guys on his security detail?

            As Stalin may-or-may-not have said “How many divisions has he got?” The rich today have coercive power nearly exclusively through their ability to buy government power: weakening the government forces them to buy their own goons while the average joe would become if anything better armed and organized. I’d give the Michigan Militia at least even odds against Pinkerton.

          • Yehoshua K says:

            Caue, Highly_Effective, thank you. Yes, I agree with that completely.

  52. Scott Alexander says:

    Legal advice request:

    I am a tenant in Michigan renting a house for two years from a landlord company. I was in California for two weeks. I left my heater on at 52 degrees. During that time it got very very cold outside and the pipes burst and flooded my house. Now I need to get it fixed and it looks like it will be very expensive.

    My landlord says he is not going to pay. His argument is that first of all, I was the one who went on vacation and left the heat on at that level and not a higher one. Second of all, my lease has a clause saying I need to buy liability insurance and present it to the landlord, and I was an idiot and forgot to do this.

    A contractor who I brought in to help with the repairs says this is totally wrong. He says Michigan law says that landlords pay for repairs to property, no ifs, ands, or buts. Going on vacation with the heat set above freezing is perfectly reasonable and not negligent. Even if the lease clause saying I had to buy liability insurance was legal, which it might not be, pipes bursting is not a liability-related issue and would not have been covered even if I had it. He says I would win any court case, and I should use this fact to force my landlord to pay or else actually bring it to court.

    I should probably consult a real lawyer, but that will cost money, so before I do I would like the opinion of anyone who’s faced a similar problem on who bears responsibility here.

    Also, I want to get my house unflooded as soon as possible. If I pay for a company to do it, and I win in court, does my landlord pay me back? Or have I given up my right to press the case by paying myself?

    EDIT: Thanks for the advice, I’ve got a free consultation with a lawyer scheduled Monday

    • Anonymous says:

      You really ought to be able to get a lawyer to answer this question without committing to payment. This is a very simple marketing expense for the lawyer.

    • Protagoras says:

      I’d say definitely talk to a lawyer. I’m not one, and I don’t know Michigan, but I know that it’s common practice everywhere for landlords to put all sorts of stuff in their leases that isn’t actually enforceable, either because they don’t realize that, or because they hope the tenants won’t realize it.

    • Evan Þ says:

      As for unflooding your house, the standard solution I’ve heard for maintenance problems that your landlord won’t fix is to hire a company, save the receipt, and get your landlord to reimburse you afterwards (through small claims court if necessary).

      • dirtyHippy says:

        I’m not sure about MI, but here in Minnesota rather than trying to get reimbursed (good luck with that!) you can simply deduct those costs from rent paid.

    • Will says:

      I had a similar problem in Lansing. In Lansing I was able to talk to a lawyer for half an hour for free, after which he drafted a letter to the landlord for $150. The landlord then relented.

      Probably worth talking to a housing lawyer. The initial consultation is usually free anyway.

    • Luke Muehlhauser says:

      Oh man that sucks.

      Initial consultation is free, so talk to a lawyer — or better yet, three different ones — ASAP.

      Whenever I’m seeking legal counsel on an issue I haven’t dealt with before, I get at least three different initial consultations before doing anything. There is actually a lot of variance between lawyers in my experience.

      • houseboatonstyx says:

        I’ve found lawyers who advertise “Free consultation” of one sort or another to be very helpful, generous with their time, and non-hypey. The time may end abruptly when zie reaches zis destination — they’re usually on their cell phones.

    • A. Rex says:

      I defer to the other comments for legal advice. I wanted to chime in as a data point, as I have been in much the same situation.

      I lived in an apartment in Cambridge, Massachusetts, where the pipes burst while I was gone for winter break because the heat was turned off. I believe that in a _moral_ sense, setting the heat low (say below 55/60F or even off as in my apartment) during a cold winter is negligent on the part of the occupant/renter. (This should not be confused with any legal judgment nor a suggestion that you should pay.)

      In any case, the damages came out to something like $1000, and the landlord did not want to pay, under the argument of negligence. But really, no one wanted to pay the bill. The situation never made it to a lawyer or court, but instead was a prolonged game of chicken. In the end, the landlord backed down and paid for everything.

      Good luck.

    • Airgap says:

      Or have I given up my right to press the case by paying myself?

      Probably not. Basically, it usually works like this (at least in California): If it was the landlord’s responsibility to fix something, and he won’t, you pay to fix it yourself and take it out of the rent. If it costs 3 months rent to fix it up, you just don’t pay rent for 3 months.

      There are a lot of rules about notifying him, giving him the chance to find a guy who can do it cheaper instead of whoever you found, and that kind of thing. But that doesn’t necessarily sink you. If he could have found a guy who would do it for 90% of what you spent, you might have to pay back that 10%, but maybe not. After all, you had to take care of that shit now, and he was dragging his heels even though it was his responsibility to do something ASAP.
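      As a minimal sketch of that repair-and-deduct arithmetic (in Python, with made-up dollar figures; whether and how this remedy applies at all depends on state law and the lease):

      # Illustrative only: invented numbers, not legal advice.
      def deduct_plan(repair_cost, monthly_rent, landlord_quote=None):
          """Months of rent to withhold, plus what you might owe back if the
          landlord could have had the work done more cheaply."""
          months_withheld = repair_cost / monthly_rent
          possible_repayment = 0.0
          if landlord_quote is not None and landlord_quote < repair_cost:
              possible_repayment = repair_cost - landlord_quote
          return months_withheld, possible_repayment

      months, owed = deduct_plan(repair_cost=3000, monthly_rent=1000, landlord_quote=2700)
      print(f"withhold about {months:.1f} months of rent; possibly repay ${owed:.0f}")
      # -> withhold about 3.0 months of rent; possibly repay $300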

      Basically, talk to an actual lawyer and do some research, but you’re unlikely to be in trouble here.

      • Edward Scizorhands says:

        If it was the landlord’s responsibility to fix something, and he won’t, you pay to fix it yourself and take it out of the rent. If it costs 3 months rent to fix it up, you just don’t pay rent for 3 months.

        This might turn out to be the Michigan rule, but be very careful. It might be the rule that you need to keep paying rent until this matter is decided.

        • Airgap says:

          You might have to put the rent in escrow, but probably not. Basically, if you didn’t pay rent and the landlord sued for possession, you’d say “Oh hai judge, plz stay execution b/c i haz a case here, srsly,” and you’d probably get a stay. Even if you lost the subsequent trial, you’d just be ordered to pay what you owe plus some extra charges, else possession reverts to the landlord. If you then paid, you’d still have right of possession. You can always block an eviction for nonpayment by giving the landlord money. Also, if you had a reasonable-looking case, told the landlord, but he went ahead with the eviction and sued for possession, this starts to look like what’s called “retaliatory eviction.” If he thinks you’re wrong and should pay, he ought to sue seeking the back rent and an order to the effect that the damage was your fault. If he tries to evict you for a good-faith exercise of your rights as a tenant, bad stuff could happen.

    • Jordan D. says:

      I am not a lawyer and cannot give you legal advice. You should totally be able to find a free consultation with people who live in Michigan.

      My initial confusion here is this: what on earth does liability insurance have to do with anything? Liability insurance policies pay for defenses and/or settlements/judgments (up to a certain limit) if you’re sued. The pipe bursting sounds more like a homeowners/renters/umbrella coverage issue than liability insurance.

      I wasn’t able to locate a Michigan appellate case which was on-point. I see some articles approvingly eying New Hampshire Ins. v. Labombard’s holding that a tenant isn’t liable for fire damage without an express clause in the lease, but I note that the courts haven’t applied that same reasoning to all cases of rental damage.

      Without seeing a Michigan case, I’m not sure how clear-cut your landlord’s negligence argument is. If there’s no assignment in your lease then it’s probably on the landlord; leases in my area usually require tenants to keep the heat around 60 degrees when they’re away during the winter for exactly this reason. But, you know, see an actual lawyer.

      Your last paragraph I really have little knowledge about, but generally speaking your affirmative duties are probably to inform the landlord, request that they make the repairs, and log it. If they refuse to make repairs you can usually get a reasonably low-cost repair service to do it and deduct from your rent (your duty to pay rent is tied to the landlord’s duty of maintenance, if I recall right). Your big worry here is that it opens all sorts of fun new lines of attack in court, including whether you gave the landlord enough time, whether you picked a sufficiently low-cost plumber, whether the person you brought in caused damage themselves, etc., etc.

      Summary: Get that consultation, but don’t feel that your position is untenable. The liability insurance claim seems ridiculous on its face, and unless there’s clear precedent it doesn’t look to me like you were sufficiently negligent to trigger liability without an express clause in the lease. Finally, you can probably get it repaired yourself, but you condemn yourself to having to be SURE that you’ve jumped through the hoops correctly. Most importantly, you cannot rely on anything I say at all ever.

      • Deiseach says:

        Liability insurance should be for replacing any furniture and fittings destroyed or rendered useless by the flooding (or fire, etc.). That’s why we require our tenants to take out liability insurance; if there’s a fire in the house, we’ll repair and make good, but if your curtains were destroyed or your suite of sitting room furniture is too smoke-damaged and has to be thrown out, claim on your insurance and buy replacements.

        Scott did take reasonable precautions by turning on the heating, and maybe he can say that being inexperienced in the vagaries of Michigan weather, he never expected it to get colder than that. It does sound like the landlord doesn’t want to pay to fix the damage.

        I hate to say it, but go see a lawyer.

        Oooh, and there’s a tenant/landlord booklet issued by the State of Michigan online – including a sample lease agreement:
        (o) PIPE-FREEZE PREVENTION: If Tenant plans to be away from the premises for any length of time, the heat must be left on during the cold season and the windows closed to avoid broken pipes and water damage.

        Well, Scott certainly did that. There’s advice about what to do in a dispute about repairs, advice about withholding rent and going to court, sample letters to landlord, and contact details for mediation services.

        Even a sample letter about withholding rent and putting it into an escrow account if the landlord refuses to carry out repairs after having been notified!

      • Deiseach says:

        Okay, and here’s an excerpt from a site advising landlords to put it in the lease that tenants take out renters’ insurance/liability insurance:

        A landlord is not financially responsible for a tenant’s possessions or living expenses when there is a fire, a break-in, property damage or other catastrophe. Generally, landlords have insurance aimed at protecting the building in which the tenant lives, not the tenant. Many tenants assume the landlord is responsible for the protection and care of their personal belongings. Educating your tenants and ensuring they have insurance might save you time, money and a headache down the road.

        So yes, seems like this clause has nothing to do with repairs to the property itself. Sounds like the landlord is trying to bluff you into paying for the repairs yourself – you didn’t get the insurance, you have to pay for your damaged floor coverings, furniture, and so forth, you can’t ask the landlord to replace those – but he or the leasing agency are still liable for fixing the damn burst pipes (and any other structural damage).

    • speedwell says:

      Send your landlord a certified, return-receipt-requested letter detailing what you told us, including the date and time of the damage, the date and time you originally reported it, the date and time of the landlord’s response, what the landlord responded, the fact that your heat was on (above 50 is reasonable diligence in any apartment I’ve rented; perhaps an adjacent apartment was unheated?), and any other salient facts. Try to run it by a lawyer before you do so. In some places, you must be able to prove you contacted the landlord in writing and gave him sufficient time to arrange and complete repairs before the landlord can be held legally liable for them, or before you can escalate. Flooding is usually considered an emergency. If you called from your cell phone and can show proof you called the landlord on the day you discovered the damage, that will also help.

      • Phillip Bullard says:

        This. Creating a contemporaneous record is one of the most important things you can do to help yourself, whether or not you get a lawyer involved.

        You might also consider seeing if there are any legal clinics in your area that offer free assistance. These might be organized by various groups (e.g., law school or community service association), and may be of a general nature or subject matter specific (i.e., you may find specific legal resources available for tenants).

        Good luck.

      • RCF says:

        And of course, keep a copy of the letter for your records. It might be good to also email a copy, if you have the landlord’s email address. The advantage of email is that return-receipt-requested mail shows that the recipient got the letter but does nothing to show what was in the letter; an email preserves the content as well.

    • efnrer says:

      Try asking in reddit.com/r/legaladvice. These kinds of questions are answered there all the time.

    • caryatis says:

      Talk to a lawyer. Next time, get renter’s insurance, it’s cheap. I don’t think your legal claim is at all hurt by getting the house unflooded now–just document whatever you spend on it.

    • FJ says:

      I actually am a lawyer, but not in Michigan. So for all intents and purposes, I am not a lawyer. That said, I have seen similar issues arising with friends, and I have consulted with them about how to handle the situation.

      I echo what others have said about seeking a free consultation. Also remember that any lawyer is required to discuss his billing arrangements with you, and give you a written statement of them, before you owe him a red cent. Any decent lawyer is going to tell you that this can probably be settled with a letter, or maaaybe giving you some advice on how to represent yourself in small claims court. Drafting a letter and having a conversation with you are not very time-consuming activities, so even paying for legal advice is unlikely to cost you more than a small fraction of the cost of the repairs themselves. YMMV, etc.

      As for the substance of your inquiry: again, I can’t speak to Michigan law. And I wouldn’t exactly take a contractor’s word for it, especially one who claims the landlord is responsible for all repairs whatsoever (so if you deliberately committed arson, you could then sue to have your landlord rebuild the place?). But if you lived in my state (PA), I think you could make a pretty good case that setting the thermostat to 20 degrees Fahrenheit above the freezing point is not negligence. (Aside: it sounds like it got so cold out that your heating system was not able to maintain an internal temp above freezing. Would setting the thermostat to 80 degrees have magically made your radiator more powerful?) I also would be shocked if you lost your claim because you paid for the repairs yourself and later sought reimbursement — I can’t think of any example in any area of law I’ve ever heard of where repairing an unlivable condition somehow estopped you from later pursuing a legal claim. The only thing I can imagine is stuff like mootness, but this case wouldn’t be moot: you’d want to be reimbursed with money, and demands for money are never moot. (Mootness is stuff like, “I have served my criminal sentence and been released, but I just realized I should have been released even earlier.” Without a time machine, there’s nothing we can do for that guy.)

      Sorry for the prolixity. It proves I really am a lawyer!

      • RCF says:

        “(Aside: it sounds like it got so cold out that your heating system was not able to maintain an internal temp above freezing. Would setting the thermostat to 80 degrees have magically made your radiator more powerful?)”

        Maybe. Suppose the heater and the thermostat are in the interior of the house, and the pipes are at the periphery. Outside the house, the temperature is, say, 0 degrees. Next to the thermostat, the temperature is 55. So there’s going to be a temperature gradient throughout the house from 0 degrees to 55. Halfway along that gradient, the temperature is going to be about 28, which is cold enough for the pipes to freeze. If the thermostat were instead set to 80, then halfway along the gradient it would be 40.
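        To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch in Python. It assumes a purely linear temperature gradient between the outside wall and the thermostat, which is a deliberate oversimplification (real houses have insulation and uneven heat flow); the numbers are illustrative only, not a model of the actual house.

        # Minimal sketch: linear-gradient assumption only, illustrative numbers.
        def temp_at(fraction_inward, outdoor_f, thermostat_f):
            """Temperature at a point a given fraction of the way in from the
            outside wall, under the linear-gradient assumption."""
            return outdoor_f + fraction_inward * (thermostat_f - outdoor_f)

        for setpoint in (55, 80):
            midpoint = temp_at(0.5, outdoor_f=0, thermostat_f=setpoint)
            status = "below" if midpoint <= 32 else "above"
            print(f"thermostat at {setpoint}F -> midpoint {midpoint:.1f}F ({status} freezing)")

        # Prints:
        # thermostat at 55F -> midpoint 27.5F (below freezing)
        # thermostat at 80F -> midpoint 40.0F (above freezing)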

        Not being familiar with how things work in cold environments, I’m unclear on what’s keeping the pipes outside the house from freezing, regardless of what the thermostat is set to.

    • I can’t represent you, but I am a lawyer, and here are some general points.

      (1) Compared to other states, Michigan law is very favorable toward residential tenants. (2) Liability insurance is irrelevant. (3) You lose no rights by going ahead and paying for the repair. (4) If you consult a lawyer, seek out someone familiar with your local district court, where landlord-tenant disputes are adjudicated.

      • Anthony says:

        Renters insurance is generally a pretty good idea, because even if the landlord’s negligence caused the pipes to freeze, he may not be liable for damage to your property.

        Liability insurance would cover you for damage to the property, and renters insurance would not, if your negligence was the primary cause of the damage.

        • Perhaps I was overly terse. I meant that liability insurance is not relevant to paying for the cost of structural repairs to the property caused by pipes freezing, without any tenant negligence. Obviously various forms of insurance have other good points.

    • RCF says:

      I’m pretty sure that by fixing the damage, you aren’t giving up your right to seek reimbursement, but it’s likely that if you don’t fix the damage, you give up your right to seek reimbursement for further damage. For instance, if it would take $1000 to fix it now, but you wait a month and by then it takes $1500, you’ll likely only be able to get $1000 back and be out the additional $500.

    • Prof. Overforce says:

      I am not a lawyer, however my father-in-law is a landlord in Michigan, and here is his response when I put your questions to him:

      “This is a good legal question that (in my opinion) could go either way. How does the lease read in terms of damages? That is the most important part.

      Not being a lawyer………my response…..the pipes should not have frozen or burst at 52 degrees. Sounds like the landlord/owner did not have proper insulation for the plumbing. Act of God is what my thoughts are. Tenant did not intentionally do damage.

      Renters insurance is usually purchased for fire and theft of tenants property. Pipes freezing should be covered by landlords home owners insurance. I would hope tenant (if necessary) takes it to small claims. Stay out of the district courts. Time both parties pay for lawyers pipes could be repaired.

      Try to cut a deal with the landlord. Tenant removes the water and the landlord repairs the pipes. The house is not livable because it has no water…?? Right? Basic rental law in MI states house must have heat, hot and cold running water. Beyond that it can get sticky.

      In my non-lawyer opinion tenant would win. Hopefully tenant has pictures with dates on them showing the condition of house when they moved in. Tenant should document everything. Dates of every detail from beginning to the time of the flooding. Verbiage in court is worthless.
      Now, if the tenant has young children and does not have hot and cold running water, judges CAN get very tough on Landlords. Especially if tenant has to live somewhere else while repairs are being made.
      If tenant loses and has to pay and is employed, tell him to pay up. Do not let the landlord go after tenants wages. This really screws up the tenants future in getting a good or better job. I have done this numerous times.

      After the fact information. Judges in Jackson MI state that all leases should have a clause that the landlord has the right to inspect their property every 30 days.

      As a landlord I would take into account…..is this a good tenant? If so, and I want to keep them I would try to negotiate a win/win deal.”

      • Agreed, except that small claims court is a subset of district court.

        Large or corporate landlords often refuse to be sued in small claims, which moves the case to regular district court.