Open Thread 5: My Best Friend’s Threadding

I’m off to California for the weekend to attend Alicorn and Mike’s wedding, so don’t expect much SSC for a few days. Here’s an Open Thread to keep you occupied till then.

1. Comments of the “month” are Robby on idealism in Continental philosophy and Anatoly on giraffe sex.

2. Frequent SSC commenter Ialdabaoth has unfairly fallen on some hard times, and also-frequent-SSC-commenter Elissa is asking us to help him out:

I’m posting here on behalf of Brent Dill, known here and elsewhere as ialdabaoth. If you read the comments at SSC, you’ll recognize him as a contributor of rare honesty and insight. If you read Less Wrong, you may have enjoyed some of his posts. If you’d had the chance to talk with him as much as I have, you’d know he’s an awesome guy: clever, resourceful, incisive and deeply moral. Many of you see him as admirable, most as relatable, some as a friend, and more, I hope, as a member of our community.

He could use some help.

Until last Thursday he was gainfully employed as a web developer for a community college in Idaho. Recently, he voluntarily mentioned to his boss that he was concerned that seasonal affective disorder was harming his job performance; that boss mentioned it to his own boss, who suggested in all good faith that Brent should talk to HR to see if they might help through their Employee Assistance Program. In Brent’s words: “Instead, HR asked me a lot of pointed questions about when my performance could turn around and whether I wanted to work there, demanded that I come up with all the solutions (after I admitted that I was already out of brainpower and feeling intimidated), and then directed me to turn in my keys and go home, and that HR would call me on Monday to tell me the status of my employment.” Now, at the end of the day Tuesday, they still haven’t let him know what’s happening, but it doesn’t look good.

I think we can agree that this is some of the worst horseshit.

On the other hand, he’s been wanting to get out of Idaho and into a city with an active rationalist community for a while, so in a sense this is an opportunity.
Ways to help: Brent needs, in order of priority: a job, a place to stay, and funds to cover living and moving expenses – details below. Signal boosts and messages of support are also helpful and appreciated.
Ways NOT to help: Patronizing advice/other-optimizing (useful information is of course welcome), variations on ‘cool story bro’ (the facts here have been corroborated to my satisfaction with hard-to-fake evidence), disrespect in general.

1. Job: Leads and connections would help more than anything else. He’s looking to end up, again, in a good-sized city with an active rationalist community. Candidates include the Bay Area, New York, Boston, Columbus, San Diego, maybe DC or Ann Arbor. He has an excessively complete resume here, but, in short: C#/.NET and SQL developer, also computer game development experience, tabletop board/card game design experience, graphic art and user interface experience, and some team leadership / management experience.

2. Crash space: If you are in one of the above cities, do you have/know of a place for a guy and his cat? How much will it cost, and when will it be available? Probably he’ll ultimately want a roommate situation, but if you’re willing to put him up for a short time that’s also useful information.

3. Funds: Brent is not now in immediate danger of going hungry or homeless, but a couple of months will exhaust his savings, and (although it is hard to know in the current state of things) he has been told that the circumstances constitute “cause” sufficient to keep him from drawing unemployment. Moving will almost certainly cost more than he has on hand. There is a possible future in which he runs out of money stranded in Idaho, which would be not good.

If you feel moved to help, he has set up a GoFundMe account here. (The goal amount is set at his calculated maximum expenses, but any amount at all would help and be greatly appreciated; he would have preferred not to set a funding goal at all.) Though Brent has pledged to eventually donate double the amount he raises to Effective Altruist causes, we wouldn’t like you to confuse contributing here with charitable giving. Rather, you might want to give in order to show your appreciation for his writing, or to express your solidarity in the struggles and stigma around mental illness, or as a gesture of friendship and community, or just to purchase fuzzies. Also, you can make him do stuff on YouTube, you know, if you want.
Thank you so much for your time and kindness.

I don’t yet have a principled policy on when I will and won’t signal-boost requests for help but I hope it doesn’t reach the point where I have to form one.

3. SOMEONE (who wishes to remain anonymous) MADE FANART OF ME!!!


PS: NO RACE OR GENDER IN THE OPEN THREAD THAT NEVER HELPS


443 Responses to Open Thread 5: My Best Friend’s Threadding

  1. Evan Gaensbauer says:

    1) What’s the estimated regular readership of Slate Star Codex? Does anyone know?

    2) Do the ‘top posts’ overlap much with the posts that get the most traffic? I understand why certain posts are considered ‘top posts’ by Scott himself: if one of his articles is the most shared on social media, gets the most positive feedback from regular readers and from friends Scott trusts, and inspires people, I can see why it would be counted among the best. However, I suspect some of the posts Scott has written touching on social justice, or neoreaction, or related social issues, get the most traffic.

    3) When I was visiting Berkeley this past summer, I was sharing my excitement over Slate Star Codex, and half-seriously ventured the idea that I would like this site to have a donation button, and that if it did, I would donate. Concerns raised by others were that if his readership paid him to blog all the time, medicine would be deprived of one of the few statistically literate doctors America sorely needs. Additionally, it was mentioned that changing the incentive structure for how Scott blogs might negatively affect the quality of his posts.

    Still, I learned something from asking this question. In conversation with Ben Kuhn and Katja Grace at the CFAR alumni reunion this last summer, they were musing over how Scott might be one of the best non-fiction writers currently working, including among published authors. Some essays and analyses written by Scott on this blog have really changed how I think, for the better. All of this makes me think that in addition to the intellectual stimulation and entertainment value this blog provides, Scott also provides great educational value for what I’m assuming are his thousands of readers.

    I wish that whatever think tanks or publications are seeking someone who knows how to solve a problem are able to find Scott, if the answers he can provide are the fit they’re looking for. I doubt it’s feasible given his current hectic work-life balance, but it seems as if there’s more good Scott could do with targeted, formalized writing. I’ve got no clue if there’s any low-hanging fruit here, though.

  2. mjgeddes says:

    I notice lukeprog posted a link to an excellent review article on consciousness:
    http://lukemuehlhauser.com/wp-content/uploads/Reggia-The-rise-of-machine-consciousness-Studying-consciousness-with-computational-models.pdf

    Of course I totally cracked consciousness long ago (yes, I really am that good). In fact folks, I’ve succeeded in making total mince-meat of virtually all the central problems of philosophy and cog-sci in the course of developing my Mathematico-Cognition Reality Theory (MCRT) over the years.

    The review article names 5 general categories of theories of consciousness:

    1. a global workspace,
    2. information integration,
    3. an internal self-model,
    4. higher-level representations,
    5. attention mechanisms.

    So which general theory of consciousness is correct according to my MCRT?

    *drum roll please*…… the answer is … 4. higher-level representations!

    Consciousness is a system of symbolic representation, the purpose of which is to enable communication between the society of agents that makes up the mind. But higher-order representations of what? The short answer is narratives of goals; consciousness is a system which represents goals in the form of narratives.

    Other theories of consciousness, while having some merit, all have serious flaws. Let me briefly explain what I believe to be the flaws in the other theories:

    (1) Global workspace: there are elements of this, but it’s not the central feature of consciousness – global workspace is more a description of certain activities of the brain correlated with consciousness than an explanation of consciousness per se.

    (2) Information integration: this theory is an interesting attempt at defining consciousness in terms of information theory; not a bad try, but plain flat wrong. Scott Aaronson did a good rebuttal of this on his blog.

    (3) Internal self-model: internal self-models *do* contribute to consciousness, but this is not the central defining feature of consciousness – for instance, it’s possible to have awareness of the external world without any self-awareness.

    (5) Attention mechanisms: not really; see (1). This is simply a correlate of consciousness, not the explanation for it.

  3. Stephen says:

    http://www.science20.com/news_articles/vancomycin_modified_to_vanquish_antibioticresistant_bacteria-145162

    This seems like a big deal to my untrained eyes, but it was only published in the Journal of the American Chemical Society. I would have thought it would be more of a Nature-level paper. Is it not as exciting as it sounds? Anyone who knows more about antibiotics care to comment?

  4. Kevin Nall says:

    Scott: Thank god you left us an Open Thread….I am not sure I could be “occupied till then.” As such, my mind is dull. Therefore, I’m taking advantage of the Open Thread. I suspect that diatribe somehow stimulates the amygdala…..yes…you heard me right…not the prefrontal cortex (PFC), or is it “Private First Class”…not sure…ignorance is bliss…..it is the amygdala!…I am fearful happy…..hahahaha

    KGN

  5. For Brent, the company Stack Exchange (owner of Stack Overflow) seems to have some openings that fit his skill set – they write C#/.NET websites. From what I’ve read, they have a decent working environment. They have offices in New York, and also allow working remotely.

  6. 27chaos says:

    Does anyone have opinions on the paradigm of Universal Darwinism, which applies theories of replicator dynamics to just about anything you can think of? I learned about it a few months ago and was astounded by its power. The only reason I distrust it at all is that the outside view tells me to be wary of universal paradigms, currently.

    Here’s a link to an article featuring Universal Darwinism applied to physics. Once I saw this idea, the potential applications everywhere else seemed to jump out at me all of a sudden; it was an incredible experience for me. I’m very interested to hear the thoughts of others on this.

    http://on-memetics.blogspot.com/2014/06/tim-tyler-darwinian-physics.html

    I am very chatty today. Just took my Vyvanse for the first time in a week, which is probably the cause. I hope no one is bothered by this, please let me know if you are.

    • The Anonymouse says:

      Re: very chatty (and your previous question about serial posting):

      Do it! SSC is full of interesting minds. As long as the serial postings are not content-free (and yours definitely are not), each one of them adds value. There are a million interesting topics that I’d love to read about, if only they would occur to me. Your posts, and the replies they draw, open up windows to new topics.

    • oneforward says:

      Re: universal Darwinism

      Looking for similarities between different systems is useful and can help you come up with new ideas, but you need to follow through and see if the analogy works on a technical level. Tyler’s examples don’t.

      For example,

      Re-radiation of photons after hitting dust particles is one of the most common events in the universe and it follows the classical Darwinian algorithm – producing a family tree of photons with the older photons with higher energies being ancestral to their more numerous, but less energetic offspring.

      This lacks key features of evolution, like offspring being similar to their parents.

      Tyler is correct that evolution uses some mathematical tools (trees and optimization) that are used in physics. That does not mean Darwinism explains physics.

      • 27chaos says:

        I don’t think it lacks that similarity. It is a large tree, presumably, and the change is gradual. Perhaps I’m misunderstanding you? I’m not too familiar with physics, honestly, which might be the problem.

        • oneforward says:

          Here’s a more careful criticism.

          The photons’ energy is their only relevant attribute in this context. The total energy of the system should be conserved, so the only way more photons can be produced is if one photon is absorbed and then two or more lower-energy photons are emitted. If a photon has n “descendants”, the average energy of one descendant is 1/n of the parent’s energy.

          You can make a loose analogy to reproductive fitness since higher energy photons will have on average more descendants. Here, though, having more descendants requires that those descendants be less like the parent.
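
          To make the bookkeeping concrete, here is a toy sketch (purely illustrative; the even split and the numbers are assumptions, and the only physical constraint being modelled is that the descendants’ energies sum to the parent’s):

              def split_photon(energy, n):
                  # Even split for simplicity; any split whose parts sum to `energy`
                  # makes the same point, since total energy is conserved.
                  return [energy / n] * n

              parent = 10.0                       # arbitrary energy units
              children = split_photon(parent, 4)
              print(sum(children) == parent)      # True: energy is conserved
              print(children)                     # [2.5, 2.5, 2.5, 2.5], each far below the parent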

  7. 27chaos says:

    What are the effects of raising taxes on the rich? This seems like the most important economic question ever, potentially, and yet I have little idea what the true answer is, even though I’m an Economics major. There are lots of interesting hypotheses, but practically all of them sound reasonable to me even when they conflict.

    • Emily says:

      Is it really that important a question compared to something like “how can we make more countries prosperous?” This is just another “what happens when you raise the price of x” question. The answer is – as usual – people buy somewhat less of it. There’s a whole massive literature about how much less, which you can access if you google-scholar elasticity income tax marginal or effect of income taxation rates elasticity. You won’t find a lot of economists who think that we’re currently at a place where raising taxes on the rich would cause fewer taxes to be collected (that is, that we’re on the wrong side of the Laffer Curve). You also won’t find a lot who think that there wouldn’t be some negative effect on reported income, through both tax avoidance behavior and labor supply.

    • Alan Crowe says:

      Psychological talk about whether the rich will respond by earning more, to maintain after-tax income, or earning less, due to reduced incentives, is a cohort question. Tax revenue is a longitudinal question.

      That makes very little difference next year, and not much more the year after. To understand the difference between cohort and longitudinal, imagine that “the rich” are basically 50-something men. We can tell a cohort story about a fifty-year-old facing a higher tax bill and changing his behaviour. Next year he pays more/less tax. But next year he is 51 years old. Does tax revenue go up or down? That is a longitudinal question: do this year’s 50-year-olds pay as much next year as this year’s 51-year-olds pay now?

      Well, that’s as clear as mud. Looking at a single year fails to make the distinction between cohort and longitudinal clear. Let’s try again with 30 years. Suppose we live in quiet times of very slow change. Tax rates for the rich go up and stay up, and time goes by, and we get to see how it pans out.

      Do rich people work more or less hard? Following the cohort we notice that they stop working altogether because they grow old and die. The cohort story stops being interesting because it becomes a story about mortality not tax.

      The longitudinal story compares the rich 50 year old tax payers of 2014 with the rich 50 year old tax payers of 2044. There is an implicit cohort story. Look at today’s 20 year olds and follow them as some of them pursue business careers with varying degrees of success, eventually becoming the rich tax payers of 2044.

      A new question has emerged. Does a new generation of rich tax payers come through? Do today’s 20 somethings look ahead by looking at today’s 50 year olds? If they do, do they say “I want to be rich after tax, therefore I will aim at becoming super-rich before tax.”? Or do they say “Not worth it :-(”

      That kind of intergenerational question strikes me as far beyond the reach of contemporary social science 🙁

    • Anonymous says:

      There are lots of interesting hypotheses, but practically all of them sound reasonable to me even when they conflict.

      No, not at all. You have to look at reality, at what are called “natural experiments,” and they limit which hypotheses are viable. For instance, anything that says the sky would fall with extremely high marginal tax rates at the top end gets obliterated by the history of the 1950s. That eliminates a huge chunk of politically relevant ideas, and almost all of the tax policy proposals of the right, which rely on the notion that even slightly raising marginal tax rates on the top end would lead Atlas to forsake his burden.

      • Alex Godofsky says:

        Taxes are a lot more complicated than “what is the headline marginal tax rate”. Among other things, the actual taxable base matters. Your comparison isn’t valid unless you propose returning to the 1950s tax code in all respects, including the various available exemptions.

      • 27chaos says:

        Please explain the 1950’s in more detail, or provide a link that does? I’d love to see it.

        Isn’t the 1950s just one data point, also? Or do you think that economics is closer to physics than psychology, so lots of data points are unnecessary and we can extrapolate validly? If so, why is that your perspective?

        If you do think the 1950s is just one insignificant data point, can you show other evidence from other times or countries?

        One thing that’s been bothering me lately, although it sounds silly, is that in most areas of thought I find the right and left to be stupid roughly the same amount of the time. But in economics, mainstream left-wing opinions seem much more reasonable to me than mainstream right-wing ones. It would be odd for left-wing economics to be perfect while general leftist ideology is terrible, so I’m worried I’ve fallen into dogma or that the Cathedral has blinded me or something.

        It’s possibly the other way around, however. My parents are tea party conservatives, and many people around me are too (Midwest). I’ve broken from many of their beliefs, decided to retain others, yet this week is the first time I seriously questioned whether or not taxing the rich would hurt the economy’s productivity or increase unemployment. Never thought in detail about it, vaguely assumed it would hurt the economy somehow. The idea implies that there has been a bunch of util lost for no good reason, so I might be flinching from it for that reason too.

        If my OP sounds dumb or overenthusiastic, assume that it’s because I’m struggling with bias one way or another, so any hint of real insight seems artificially amazing to me.

        • Douglas Knight says:

          The top marginal tax rate in the 50s was 90% in America. In the 60s and 70s it was 70%. The opening song on the best Beatles album is about the top marginal tax rate being 95% in Britain in the 60s (“there’s one for you, nineteen for me”).

          Ronald Reagan was enthusiastic about the Laffer curve because he had personally experienced it and had cut back on his work (as a motivational speaker, I think) because he was subject to the 90% tax rate. The Beatles, on the other hand, were not receiving direct income, but were owners of intellectual property, and were able to structure their decisions to manipulate their effective tax rates.

          As Alex says, very few people actually paid the top marginal tax rate like Reagan. The tax rate was dramatically lowered by Kennedy and Reagan, but the amount of revenue collected did not change much either time. This demonstrates that people were not actually paying the headline rates. But whether the exemptions were infra-marginal or affected the effective marginal tax rate requires knowing details. (For example, the oil depletion allowance made 15% of oil income untaxed. This did reduce the marginal tax rate.)

          • houseboatonstyx says:

            I’m not sure whether anyone has yet questioned the assumption that the rich slacking off from work would harm the country.

            At some point, many successful people switch away from the productive work they were doing and spend their time increasing or keeping their money through non-productive activity such as unfair competition, gaming various systems, lobbying officials, etc. If they slacked off on that sort of work and spent their time loafing, the country might be better off.

          • James says:

            Obligatory “I prefer Sgt. Pepper’s” nitpick.

    • Alex Godofsky says:

      The question is ill-posed because there are a lot of different ways to raise taxes on “the rich” and some of them are catastrophically awful while others are not that big a deal (in relative terms).

      Wealth tax? Apocalypse. (Only sort of exaggerating.)

      Tax on investment income / cap gains? Very, very bad.

      Tax on consumption? Not that big of a problem, especially if used to reduce the above two.

      • 27chaos says:

        Would you explain why those various forms would have those respective results? (It might be obvious, I worry, but although I’m an Econ major, I am a micro kind of guy; macro scares me.)

        • Anonymous says:

          I think he’s making a micro argument: tax it, and you’ll get less of it.

          • 27chaos says:

            I’m a micro guy in the sense that policy related stuff is not my focus. I’m in economics because I like the idea of complex systems determined by tiny individual decisions aggregated. I look at cellular automata much much more than dollars and cents.

        • Alex Godofsky says:

          The short version is that deadweight loss from taxation increases with the square of the tax rate in conventional models. (See the second figure in this article for a geometric demonstration of why.)

          This means that as a starting point you want to make your tax rates as even as possible, because revenue is ~proportional to the (weighted) sum of the rates while the deadweight loss is ~proportional to the sum of squares. The solution to that optimization problem lies in making the rates equal.
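
          As a toy numerical check of that claim (assuming, beyond what’s stated above, that each tax base has a size w, that revenue is the w-weighted sum of the rates, and that deadweight loss is the w-weighted sum of squared rates; scipy is used only as a generic constrained optimizer, and the numbers are made up):

              from scipy.optimize import minimize

              w = [1.0, 2.0, 3.0]        # hypothetical sizes of three tax bases
              target_revenue = 1.2       # revenue we need to raise (arbitrary units)

              def deadweight_loss(t):
                  # weighted sum of squared rates, one term per tax base
                  return sum(wi * ti ** 2 for wi, ti in zip(w, t))

              revenue_constraint = {
                  "type": "eq",
                  "fun": lambda t: sum(wi * ti for wi, ti in zip(w, t)) - target_revenue,
              }

              result = minimize(deadweight_loss, x0=[0.1, 0.1, 0.1], constraints=[revenue_constraint])
              print(result.x.round(3))   # [0.2 0.2 0.2]: the loss-minimizing rates come out equal

          The closed-form version of the same argument: minimizing the weighted sum of squared rates subject to a fixed weighted sum of rates gives, via the Lagrangian, the same rate for every base.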

          Combine this with the observation that taxes on wealth or capital income, when combined with taxes on wage income or on consumption (e.g. sales tax or VAT) are “double taxation”. This doesn’t just mean that the money you get in, say, dividends from stock you’ve purchased has been already taxed (after all, every dollar gets taxed over and over as it flows through the economy). Rather, from the perspective of the individual taxpayer, whose behavioral response to tax rates is what causes deadweight loss in the first place, money from wages spent on immediate consumption is taxed at one rate, while money from wages spent on deferred consumption by way of saving and investment is taxed at a higher rate.

          As a demonstration, consider a simple example in which a person earns $10,000 in one year. In a world without taxes, he might have the following two options:

          1. purchase $10,000 of goods this year
          2. save the money in a bond that pays 5% interest per year for one year, and purchase $10,500 of goods next year.

          Now add in a 20% tax rate on wage income only. He now can purchase $8,000 of goods this year, or $8,400 of goods next year. Both choices are taxed at a rate of 20%, which we calculate by dividing the post-tax amount of consumption by the pre-tax amount of consumption (which is 80% in both cases, subtract from 100% to get 20%).

          Now add in a 10% tax rate on investment income. The choices are now $8,000 of goods this year, or $8,360 of goods next year. The first choice is still only facing a 20% effective tax rate, but the second faces a 20.38% effective tax rate.

          OK, so an extra tax of 0.38% may not seem like a big deal. But imagine now that rather than saving for one year he is saving for 30 years for retirement. In a world of no taxes at all, again, this would allow him to buy $43,219 of goods in 30 years. In a world of just the 20% wage income tax, this would allow him to buy $34,576 of goods in 30 years. In a world with the 20% wage income tax AND the 10% investment income tax he would only be able to buy $29,963 of goods in 30 years – an effective tax rate of 30.67%.
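
          A minimal Python sketch reproducing the arithmetic above (the function and variable names are illustrative; the rates, return, and horizons are the ones in the example):

              def effective_tax_rate(earnings, wage_tax, invest_tax, annual_return, years):
                  # Consumption if there were no taxes at all: the full earnings
                  # compound at the full rate of return.
                  no_tax = earnings * (1 + annual_return) ** years
                  # Consumption with taxes: wage tax is paid up front, and only the
                  # after-tax return compounds each year.
                  after_tax_return = annual_return * (1 - invest_tax)
                  with_tax = earnings * (1 - wage_tax) * (1 + after_tax_return) ** years
                  return 1 - with_tax / no_tax

              print(effective_tax_rate(10_000, 0.20, 0.00, 0.05, 1))    # ~0.20   (wage tax only)
              print(effective_tax_rate(10_000, 0.20, 0.10, 0.05, 1))    # ~0.2038
              print(effective_tax_rate(10_000, 0.20, 0.10, 0.05, 30))   # ~0.3067

          The compounding of the 10% investment tax over 30 years, not the 10% figure itself, is what drives the effective rate from 20% up past 30%.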

          In reality, the taxes on investment income are actually much higher than 10%, which makes the compounding even worse. The problems this causes with retirement savings have been partially solved for many Americans by exempting IRAs, 401(k)s, pension plans, and a number of other retirement savings vehicles from investment income taxation. But for anyone who wants to save money for something other than retirement, or who doesn’t have a pension plan and wants to save more than the caps on 401(k)/IRA contributions, there are very few options.

          The options that do exist create other problems. For example, I try to save efficiently by investing my non-tax-sheltered money in the Vanguard Tax-Managed Capital Appreciation Fund. One of the strategies followed by this fund is to invest in companies that don’t actually pay out dividends, so most of the “income” to this fund is actually unrealized capital gains. Because you can defer realization of the gains you can reduce the amount of compounding tax rates you face. But this introduces some big distortions in the way corporations are able to manage their finances, distortions that don’t serve any useful purpose in themselves.

          The results above follow directly from standard microeconomic theory without embellishment. There are certainly additional features you could add to the model that would imply a non-zero tax rate on investment income is optimal, but I am generally skeptical of these. Personally I feel like many of them are invented post-hoc rationalizations, but I’m not unbiased either so feel free to discount that.

    • ADifferentAnonymous says:

      There’s a very simple story told by three big graphs, which I’d be interested to see someone refute.

      First, we have the tax rates over time.

      Next is income inequality (EDIT: on a nasty non-zero indexed plot, see below for a better one to the same effect).

      Then there’s our old friend US gdp per capita over time.

      It sure looks like high top tax rates promote equality without harming productivity, doesn’t it?

      (For some epistemic metadata, these results were not what I expected when I first researched them, and caused me to update my beliefs on economic policy substantially.)

      • Nornagest says:

        There is a special place in Hell reserved for people who publish percentile graphs that aren’t zero-indexed.

      • Alex Godofsky says:

        There’s a very simple story told by three big graphs, which I’d be interested to see someone refute.

        Easy. Income Data is a Poor Measure of Inequality.

        • ADifferentAnonymous says:

          It’s not enough to show that income data overstates inequality; you’d have to argue that it started overstating it more and more around 1980.

            But subsection 401(k) entered law in 1978 and people learned how to use it in 1980, so to the extent that 401(k) plans cause overstated inequality, that effect has probably been going up since then. Not sure if any of the other effects have reasons to have started increasing since that time.

          Does this also explain away the “real wages have been stagnant since the ’70s” thing?

          • Alex Godofsky says:

            It’s not “there’s this effect, it makes inequality appear bigger, therefore inequality is smaller”. It’s “there are many effects, they make inequality move in various directions, and they are HUGE relative to your current estimate”. Your data is garbage.

            I mean, look at the size of the longitudinal critique for the average case alone! That’s huge!

            Then look at how much worse it gets when you look at specific examples of people who go to college, or surgeons. You want an example of an effect that’s probably gotten “worse” over the past 40 years? Well how about college attendance going up?

            Did you miss the part about how almost the entire middle-class housing stock is completely missing? Hmm, I wonder what was happening to housing in the years before 2007, the end of your chart…

            Seriously, we are talking about $40tn of assets omitted from this measure. The entire US stock market has a market cap of ~$20tn.

            Aside: this isn’t even the only critique of your original argument. For example, regarding GDP growth – basically every country in the world that wasn’t ruled by literal Communists (and many that were) grew very quickly very soon after 1945, irrespective of policy. For another, the same critique I’ve made in a branch above of the measure of “tax rate” – your cited headline top marginal tax rates are not actually reflective of the tax environments of the time because of how much income was treated specially / exempted from taxation.

          • ADifferentAnonymous says:

            Thanks for elaborating.

            The idea that more income was tax-shielded in the past is excitingly counter to conventional wisdom–do you have a citation?

            Growth irrespective of policy supports the case for redistribution, since reduced growth is usually considered the major downside.

            I’m still not convinced the measure is useless–surely actual inequality is at least a major contributor? My thought is to take the five bullet points at the top of that link (not counting the first one, which isn’t a specific effect), and add ‘true inequality’ as a sixth. The net of these six effects has been increasing rapidly since Reagan and was pretty flat before, so the question is, which of these effects have been increasing? Six effects is hardly an unmanageable number.

            We would naively expect genuine inequality to go up from reduced/less progressive taxes, just as we would expect productivity to go up. It’s not obvious what effect they would have on the others, but it’s interesting to consider. For instance, noticing that 401(k)s came into existence around this time tells us that might be part of it. College enrollment, though increasing, doesn’t show the 1980 kink. The longitudinal effect is interesting–if companies had a way of smoothing income, paying the young more than their worth and the old less to avoid high taxes, then removing the incentive for this practice would raise measured inequality.

            The observation that there are lots of other effects influencing measured inequality certainly weakens the case above, but to dismiss it I think you have to go a bit deeper.

          • Douglas Knight says:

            Jaskologist’s observation that federal tax receipts are a constant percentage of GDP is a very simple demonstration that Kennedy and Reagan did not cut taxes when they cut nominal marginal rates from 90% to 70% to 30%. Thus some tax increase must make up for the headline tax decrease, and that was the removal of exemptions. I gave the more specific example of the oil depletion allowance, which disappeared with Reagan.

            Another example is that before Reagan perks were generally not taxed. It was a lot easier for the company to give a car to the employee back then, whereas now cars are usually purchased with post-tax money. Pre-Reagan a lot of rich consumption was not taxed. If you care about consumption inequality, nominal income misses this fact. If you care about accumulation of cash, maybe income statements are what you want.

          • Alex Godofsky says:

            The idea that more income was tax-shielded in the past is excitingly counter to conventional wisdom–do you have a citation?

            Actually, “more income was tax-shielded in the past” is the conventional wisdom to my knowledge. Remember that the standard story of the ’86 tax reform is that it cut rates and broadened the base.

            Growth irrespective of policy supports the case for redistribution, since reduced growth is usually considered the major downside.

            It is weak support, if any. The entire world experienced a major policy improvement called “the end of World War 2”, a policy so much better than the prior regime that by comparison “are you a capitalist democracy or a totalitarian Communist state” was a minor detail.

            There’s also a degree of confusion between levels and growth rates. Even if long-term growth rates are completely exogenous to policy, policy could affect levels. (e.g. regime A would always have 10% higher GDP than regime B.)

            I’m still not convinced the measure is useless–surely actual inequality is at least a major contributor? …

            We would naively expect genuine inequality to go up from reduced/less progressive taxes, just as we would expect productivity to go up. …

            The observation that there are lots of other effects influencing measured inequality certainly weakens the case above, but to dismiss it I think you have to go a bit deeper.

            It may be possible, with extreme care and with access to additional data, to tease out the “real inequality” component of “measured inequality in annual IRS income data”. Maybe. I actually suspect it isn’t, and that you’d have to do some sort of true longitudinal study. But even if it is possible, until you actually manage to do so you aren’t justified in drawing any conclusions from a graph of income inequality juxtaposed with a graph of annual GDP growth.

            (Not to mention that measured real GDP growth has some huge problems of its own. For example, official estimates of inflation probably overstate it by at least 1%, and we aren’t sure how this bias has changed over time.)

          • ADifferentAnonymous says:

            Is there any data on exempted income historically? That’d resolve a lot.

            The non-effect of the Kennedy tax cuts is a very strong case that tax rates are a red herring. I’m updating on that.

          • Alex Godofsky says:

            The whole problem with exempted income is that it isn’t reported (which is a fundamental point of my original link: IRS data serves a particular purpose, tax collection, and is ill-suited to other purposes). And for stuff like capital gains, you have to decide how to actually allocate those gains across years.

            You shouldn’t update that “tax rates are a red herring”, but rather that headline rates may be a red herring when you don’t put them into context with the actual structure of the tax code.

            What you should do is try to estimate the actual MTR faced by high earners, which will probably be very different from the headline rate.

            (As to how to do that – it’s hard! But it shouldn’t be surprising that you can’t just solve economics in an afternoon with Excel.)

      • Jaskologist says:

        Not a refutation, but a data point I’ve always been intrigued by: Tax receipts stay right around 19.5% of GDP regardless of top marginal tax rates (scroll down for zero-indexed graph).

        Now the refutation: “top tax rate” is not a constant enough thing to be meaningfully talked about over time. Way, way, way more people are paying the $250k and $100k rates now than were in 1917. Deduction rules have changed. Methods of structuring income to avoid taxes have changed (it used to be possible to give employees a lot more “non-monetary” compensation in the form of company cars, jets, etc, without impacting their taxes). You’re not comparing like to like, even though the number has the same label.

        As for the charts, they could show just as well that, at least since the 40s, income inequality increases GDP, or that higher GDP means more inequality (which would make a lot of sense), or even that lowering tax rates does help GDP, since that’s what we’ve been doing for a while, and GDP keeps going up. Mostly, I don’t see any firm conclusions that can be drawn from eyeballing them.

        • ADifferentAnonymous says:

          Thanks for the well-thought-out reply!

          It really looks like nothing has a long-term effect on GDP. At least, every policy tried in the 20th century seems to have been equally good for growth. But the main criticism of economic liberalism is that it impedes growth, so this suggests a strong update to the fiscal left. One way to avoid this conclusion would be to argue that we didn’t really try any economic liberalism in the 20th century–whatever measures were apparently implemented in that direction were all show. The point of the other graphs is that dramatic changes to one of the obvious metrics of policy are followed shortly by dramatic changes in the obvious measure of inequality.

          As for the fact that almost no one paid the super-high tax rate, I’d point out that a lot more people started earning those levels of income shortly after the tax rates came down. That is, super-high top taxes discouraged super-high incomes, which you would totally expect to harm economic growth, except there’s no sign that it did. (If you have good data on nonmonetary compensation, that’d be great.)

          An alternate explanation is that once economic conditions made it worthwhile for executives to make such high incomes, that caused the removal of the high top taxes. I like this argument because I can’t tell whether it’s libertarian or Marxist.

          • Douglas Knight says:

            a lot more people started earning those levels of income shortly after the tax rates came down

            What is the basis for this statement? Your graph of income inequality?

            That graph shows no effect of the Kennedy tax cut on income inequality. The top marginal tax rate was lowered from 90% to 70%, tripling marginal take-home pay, and there was no effect on the graph.

            The Reagan tax cut, a doubling of take-home pay, was coincident with an increase in high earners. But your graph suggests that the rise started in 1978. The steepest part is in 1987, possibly a response to Reagan’s second tax cut, but that was a smaller cut than the first.

            How does a high tax rate discourage high incomes? The increase in inequality starting just before Reagan was concentrated in the finance industry. It’s mainly that the industry sucked more money out of the economy and not so much that inequality in the industry went up. Maybe that’s the result of people responding to larger take-home pay by being more innovative at extracting money from the rest of the economy.

          • ADifferentAnonymous says:

            Maybe that’s the result of people responding to larger take-home pay by being more innovative at extracting money from the rest of the economy.

            That’s basically the thesis, with the GDP data coming in to show that the extraction is not productive (this is the big surprise). Apart from the finance sector, executive compensation get cited as the other major driver of inequality. I find the latter more plausible as an avenue of unproductive extraction, but lots of people think the former is as well.

          • Douglas Knight says:

            You just used “extraction” to mean two rather different things, and that should make you nervous. You suggest that CEOs respond to marginal tax rates by getting better at negotiating salary, while finance people respond to marginal tax rates by getting better at doing finance, although maybe only the zero-sum aspects of finance. Why do marginal tax rates create an incentive to be competent at finance, but not competent at being a CEO?

            You could say that they are both examples of negotiation. But why doesn’t anyone else become better at negotiation? In particular, if CEOs are so good at negotiation, why do they give so much money to the finance industry?

  8. Carinthium says:

    Apologies if I’ve asked this on a previous Open Thread and forgot, but what do people here understand the state of affairs to be regarding the effect of having children on happiness? I’m in a real-world argument on the topic where one side claims that it is probable from the evidence that either happiness increases from becoming a parent overall, or that even if it doesn’t, a scientifically informed parenting style can make it so, and the other denies this and claims that in practical terms parenting is always going to be a net loss to happiness.

    Which is right?

    • pneumatik says:

      My read of the current research is that if you have people keep happiness journals where they record what they’re doing and how happy they are throughout the day, having kids is a wash with regards to overall happiness level. However if you instead ask people to think about how satisfied or happy they are with past events then having kids produces a noticeable increase in happiness.

      Speaking from experience I find this to be true, and if anything one might expect my children to be more of a net happiness decrease compared to the median. Like The Last Psychiatrist has suggested, to benefit emotionally from being a parent you need to be prepared to change your lifestyle. If your kids are nothing but work you have to take care of so you can get back to doing what you really enjoy then they’re unlikely to actually make you happier.

      • The Anonymouse says:

        Question for Carinthium: Are either of the parties to your argument a parent? I strongly suspect that – at least on the assumption that the question is predicated upon the deeper question of ‘should I have kids?’ – it matters far less what a study says happens on average, and far more whether it is a good choice for you.

        My experience, as a parent of two who had his kids quite young, matches what pneumatik says. On a day-to-day basis, eh, some days I’m delighted, some days I wish I had more time to work.* But in the general, life-satisfaction sense, I am far happier than if I had not had kids. This is borne out by a desire to have more kids, which I suppose is the best proof of all.

        *Whenever I see people arguing against having kids, it always seems to come down to “but kids would disrupt my preferred consumption patterns! I want to spend all my money on ME!” No, I’m not always particularly charitable on this topic.

        • 27chaos says:

          Do they portray their arguments differently than that, which is why you call yourself uncharitable? If so, how do they portray their arguments?

          Do you dislike that they want to spend their money on themselves? Why?

        • Berna says:

          When I was of an age to have children, I didn’t want to have them because I was afraid they’d inherit my personality; because I thought it would be very hard to raise children and I’d probably mess it up; because there are already too many people on the earth; and because the world isn’t exactly a fun place anyway, why would I want to bring an innocent baby into that?

          • houseboatonstyx says:

            I shared the last three reasons. Also, I value net happiness, i.e. happy moments minus sad moments. Adding to the population of this world as it is will produce less net happiness than helping the children and others who are already in it.

      • This matches with my own experience. Raising children involves numerous day-to-day annoyances and frustrations which don’t much contribute to happiness, or possibly reduce it slightly. But raising children also grants a tremendous feeling of accomplishment, satisfaction, and gratitude. If you’re optimizing for being free of annoyances and responsibilities, don’t have kids. If you’re optimizing for being happy with your life, then do.

        • Carinthium says:

          I may not have much empirical knowledge on this question, but I do know enough to question valuing Life Satisfaction over Happiness measured in terms of pleasure.

          Basically speaking, Life Satisfaction only comes into play in moments of reflection, which are a very tiny proportion of our time. In addition, those reflections are artifacts of our memory of events, which is known to be deeply flawed (memories of raising children can be happy overall even when most of the experiences were frustrating, for example). Is that therefore not a delusion, and therefore contrary to our own Coherent Extrapolated Volition?

          Yes, one of the sides was a parent. I was a spectator, but after seeing the empirical evidence I’m leaning towards not having kids. I’ll admit it: I’m pretty much the selfish stereotype you describe, but I don’t see anything wrong with that. Technically speaking I value Life Satisfaction, but nowhere near as much as Happiness.

          • pneumatik says:

            It really depends on what you’re optimizing for. Optimizing for moment to moment happiness probably means not having kids (on average, of course. If a person wants kids then having kids would be optimizing [assuming we accept that people can really know what will make them happy]). But if you are instead optimizing for how good you feel whenever you consciously think about how good you feel about your life then research suggests having kids.

            I think the ideas here are related to wireheading, in that the concern with wireheading is that, though future-you would be happy all the time, present-you thinks future-you would have an empty life.

          • The Anonymouse says:

            My experience does not match the idea of life satisfaction only assisting in rare, reflective moments. I am a generally happy person, and my kids (as frustrating as they often are) are a big part of that.

            I tend to think of it more as a “cheetos vs. pull-ups” question. Yeah, at any given single moment, eating cheetos will make me happier (for that moment!) than doing pull-ups. I enjoy one minute of eating cheetos more than I enjoy one minute of doing pull-ups. But for every other minute of that day, I enjoy having done pull-ups more than I enjoy having eaten cheetos, and I have never once looked back at earlier times and thought “man, I really missed the boat on that cheetos thing, I should’ve bought another bag!”, whereas I often think that about pull-ups. Moreover, the ancillary effects of doing pull-ups on my life also make me happier than the ancillary effects of eating cheetos, in that doing one positively affects the happiness returns of everything else I do, while the other negatively affects them.

            tl;dr: Consumption feels empty, investment feels awesome. At least, for this commenter.

    • Matthew says:

      This question has some additional wrinkles. Notably, in contemporary Western society, grandchildren are basically the fun part of having children without the hard part of having children. So even if children don’t markedly increase your happiness in middle age, they may indirectly increase it in old age. (Adult children almost certainly also increase happiness directly by leaving you less socially isolated in your old age.)

      Other considerations include your confidence level that your marriage is going to be lasting. I’m a single parent, and while I get satisfaction from my children, the happiness balance is obviously a lot worse than it would be if I was sharing responsibility for raising them.

      • 27chaos says:

        Adopt-a-grandchild proposals. Could they work as a form of charity? Orphanages run the day-to-day; grandparents pay donations, buy their grandchildren gifts, help them make social connections, write letters of recommendation, etc. (social capital is important). This would be more popular than regular adoption, and perhaps serve as a bridge towards it? It means people without children can still experience some of the fun parts of (grand)children, so I’d imagine there’s demand for this. It means old people don’t have to die alone.

        Making a close relationship between the child and “grandparent” would potentially be hard, but it’s something to think about in more detail. It seems like a quite solvable problem. Brainstorm!

        Is this something that should actually happen? Obviously, effective altruism and all that means not a lot of effort should go towards this. But ignoring such concerns for the moment?

        Edit: they’re real! A lot like I described. http://www.tbnweekly.com/content_articles/053106_pco-05.txt

        We should try to popularize them more, though. Pretty cool stuff, seems like a low cost win-win. Getting orphanages on board would be different than what current programs do, other changes also might be possible. Potential for relatively easy improvement, hopefully.

        Edit2: Electric Boogaloo: Orphanages don’t really exist in the US anymore, apparently; they are rare. Kinda weird that I didn’t know that. Is this something everyone else knew already, or are others surprised at this news?

  9. AlphaCeph says:

    “Recently, he voluntarily mentioned to his boss that he was concerned that seasonal affective disorder was harming his job performance”

    This reminds me that I have long had a plan in the back of my mind to write a book to help highly rational, systematizing-type people cope with and understand the messy real world, which is often built to benefit people who are more “neurotypical” and streetwise, but less rational and intelligent.

    I like to think that rational/systematizing people lack what I call a “social co-processor”, like a computer without a graphics or physics co-processor, but have a higher general intelligence. This social co-processor acts kind of like a very good prior probability distribution for the specific domain of dealing with other people, and often allows dumb but streetwise folk to outperform the high-functioning systematizing people. I am reluctant to use the phrase “Asperger’s syndrome” as people might misinterpret that to mean just the extreme cases.

    Over time, systematizing-type people learn to emulate the social co-processor “in software”, i.e. they spend years of their life making mistakes just to get to what neurotypical people were born with. I can certainly see this in my life. And it frustrates me! There is definitely a lot to be gained by publishing these insights so that systematizing-type people can learn from others’ mistakes.

    One aspect of the social co-processor is how to handle office politics.

    Another important aspect is not to trust people by default, to understand that when someone is about to defect against you they are going to pretend that they will cooperate, and to understand how this human desire to defect interacts with the laws and rules our society has made to try to curtail it, as well as the selfish incentives that truly motivate most people.

    For example, there are laws about when you can fire someone. And within a company, there are written and unwritten rules about how managers should respond when an employee makes an admission that they are not doing their job properly. And managers face a very serious incentive to make sure that if something does go wrong, the blame doesn’t stop on their desk, especially if a third party could PROVE that they “knew about the problem”.

    It’s kind of hard to work out from first principles how all these factors will interact. But if you have the social co-processor, then the unfortunate incident that has befallen Ialdabaoth is probably not much of a surprise.

    There are other domains where knowing the secrets of the social co-processor would be highly beneficial for people like us. Romantic and/or sexual relationships. Gossip and social jockeying within social groups. Getting people to like you à la “How to Win Friends and Influence People” (which is a book written for neurotypicals). Signalling games that humans play.

    • zz says:

      Why don’t these people have social co-processors?

      My current model is that, for various reasons*, ‘rationals’ grew up outside of social circles containing ‘typicals’; their own circles, when they consisted of more than one person, tended to have abnormal rules. Thus, they don’t lack social co-processors so much as they have 10 years less practice upon graduating college, and thus a correspondingly weaker (or, more accurately, differently optimized) social co-processor. This never gets corrected because social interaction with normals is difficult, painful, and typically fruitless.

      (I pause to thank you for the processor analogy. Very good.)

      This does, however, still leave us open to being able to develop a social co-processor by practice. And since (a) we’re optimizing for buffing a processor and (b) we have all these nifty tools from our intelligence/rationality/systematizing, it should take far less than 10 years. Having someone hand you the theory of signalling is a far more efficient way to internalize it in your System 1, and since you’ve deliberately built your system, you have a greater degree of access to your social co-processor’s “source code” (microarchitecture?), meaning you can better optimize it.

      Also, I just summarized my model of light-side pickup.

      Anyway, I’d greatly appreciate such a book.

      *Including: typicals are boring; reward from teachers/parents/peers for trading off social time in favor of academics; and “Ahh! I don’t immediately find this trivially easy, therefore I must suck at it/it’s stupid/I’d rather be doing [math/D&D/solving Rubik’s cubes/reading LW] anyway.”

      • AlphaCeph says:

        Interesting points, zz.

        As to why rationalists don’t have the social co-processor: I think the social co-processor is intimately tied in to the parts of the brain which make you irrational; in particular I think having a strong social co-processor makes people more likely to adopt beliefs for social reasons and then backwards rationalize them afterwards. Obviously this is speculative.

        Examples abound of social groups that do this; many have been covered on this blog. It would be interesting to see whether the Cognitive Reflection Test correlates negatively with measures of social skill. I suspect it does.

        • Multiheaded says:

          I think the social co-processor is intimately tied in to the parts of the brain which make you irrational; in particular I think having a strong social co-processor makes people more likely to adopt beliefs for social reasons and then backwards rationalize them afterwards.

          I’ve seen this sentiment a lot in the LW-sphere, and never failed to chuckle at it. The irony is so thick you could cut it with a knife. I mean, re-read what you just wrote.

          • AlphaCeph says:

            I don’t see why this is ironic. Perhaps you could spell it out.

          • Thomas Eliot says:

            It very much sounds like a belief adopted for social reasons (to make Rationalists seem strictly superior to Normals) and then backwards rationalized afterwards. I strongly suspect that is in fact what happened.

          • Quixote says:

            +1 so much

          • suntzuanime says:

            To ensure rationality, it’s important to only believe bad things about yourself.

          • Nornagest says:

            @suntzuanime — Nah. If you believe you have weak social skills (I don’t much like the coprocessor analogy, for reasons I might go into more detail on later) and that you’re more rational than average, that’s much less objectionable. You might even be right.

            But if you believe you’re more rational than average because you have weak social skills, that’s liable to lead you into some serious trouble down the road. For example, you might look at someone else who seems poorly socialized and take that as evidence that they have the same awesome rationality ninja skills that you think you do. Guess what happens then?

            There’s an XKCD I could link here, but I expect we’ve all seen it.

          • AlphaCeph says:

            @Thomas Eliot “It very much sounds like a belief adopted for social reasons (to make Rationalists seem strictly superior to Normals)”

            There is a serious risk that we need to guard against here: people on a rationalist forum will be tempted to say things that make the in-group sound good and the out-group sound bad and stupid.

            However, having said that, it does seem to me that especially streetwise, socially adept people tend to be low on the “map-territory correspondence” (AKA epistemic, AKA believing-what-is-actually-true) type of rationality. And the obvious explanation is that this isn’t some magic coincidence, but that there is some systematic connection.

            The idea of a fixed amount of brain resources being split between the “social co-processor” and the “general intelligence” module is a decent if very crude hypothesis that explains what we see.

            Another possible explanation is that so-called rationalists are actually defective humans – we are missing a piece of mental machinery which allows normal humans to self-deceive and compartmentalize. The benefits of self-deceit are well documented by Robin Hanson.

            Anyway I’m open to other explanations.

          • Anonymous says:

            It very much sounds like a belief adopted for social reasons (to make Rationalists seem strictly superior to Normals)

            Not to me. There’s no way being correct and socially inept is “strictly superior” to being incorrect and socially adept. In any case I would like someone to challenge the proposition on some other basis than “lol omg the ironyyyy”

          • 27chaos says:

            Here is a different argument that complements the above one.

            People with weak social skills are more likely to spend time thinking about ideas critically. They will be motivated to criticize opinions of the groups they are excluded from out of jealousy, and they will be isolated from others with only their thoughts to keep themselves company. This sharpens their rationality (in a lopsided way) because they get better at finding the flaws in (others’) ideas. Lopsided rationality has a better than average chance of becoming full rationality, though it also has a better than average chance of becoming ideological.

            Between this mechanism and the other, people with good social skills are motivated to avoid criticizing the weak points of popular group beliefs, while people with poor social skills are motivated to do the opposite. With two forces pushing the groups in opposite directions, a gulf appears between them. This doesn’t imply innate superiority or superhuman skills; both groups are serving the same irrational biases.

        • Nornagest says:

          I think the social co-processor is intimately tied in to the parts of the brain which make you irrational; in particular I think having a strong social co-processor makes people more likely to adopt beliefs for social reasons and then backwards rationalize them afterwards.

          It would be hard to overstate my skepticism for theories stating that one’s evident faults are actually superpowers.

          (They’re surprisingly common, though.)

          • This +2e6. It’s fine to accept your weaknesses or decide that you just don’t care about them, much like I’ve decided that I just don’t care about sports or whatever. But don’t try to rationalize (heh) your weaknesses into something else.

          • AlphaCeph says:

            I think you are overplaying the degree to which I or anyone else here thinks that it is super duper amazingly good to be epistemically rational and therefore who cares about being good in social situations.

            Personally I consider rationality + social ineptitude to be much more of a curse than a blessing.

            Also, I feel that a very reasonable hypothesis is being written off here because it *could* be motivated by self-aggrandisement rather than by a genuine desire to explain what we see.

          • Bugmaster says:

            Being unable to handle social situations is inefficient; therefore, anyone who considers himself rational would be compelled to eliminate, or at least reduce, this inefficiency. Taking pride in being bad at some task is not rational; it’s the opposite.

          • 27chaos says:

            My favorite quote is

            “You’re afraid of making mistakes. Don’t be. Mistakes can be profited from. Man, when I was younger I shoved my ignorance in people’s faces. They beat me with sticks. By the time I was forty my blunt instrument had been honed to a fine cutting point for me. If you hide your ignorance, no one will hit you and you’ll never learn.”

            Ray Bradbury

            Emotions are rationality neutral. Being proud of being bad at something could be good in many circumstances. It can help motivate further practice, for one thing. If embarrassment is the alternative, pride is almost always the better choice, because embarrassment is painful and paralyzing.

            I agree that most people who take pride in poor social skills aren’t being rational. But I don’t think that there is a causal link from irrationality to pride, I think the pride is a defense mechanism that they’d be entitled to even if their rationality was perfect. Defense mechanisms are fun to criticize, but they are also often important for human psychological well being, so criticizing the wrong ones can sometimes be evil.

            In general, I find immense value in the concept of turning weaknesses into strengths, turning suffering into motivational fuel. Finding value in anything is actually a great tool in the hands of a rationalist.

            Some people go too far and claim that strong is weak and weak is strong. But those aren’t the implementations of this idea I’m defending; I find such claims sickening, actually. I’m instead defending those who would say “Yes, I am weak, but verily even in my weakness a shadow of strength remains. And this shadow I will nourish, I will feed on scraps of weakness so in time my strength will flourish, and clutching victory within my own jaws of defeat I’ll taste success at last, sweetened by the aftertaste of past failures, and my shadow will grow whole and my strength grow full with past weakness proven null, overcome at last.”

          • 27chaos says:

            Edit only lasts an hour, and I was still tweaking the wording on that bit of poetry. Ignore the two “at lasts” please, but criticize anything else you want to, so long as your criticism is constructive.

          • Nornagest says:

            Personally I consider rationality + social ineptitude to be much more of a curse than a blessing.

            I believe you believe that, but I don’t believe that mitigates the appeal much. “I’m so smart that I’m isolated from the world, woe is me” is the cliche humblebrag in geek circles, and this is basically a variation of it.

        • Hainish says:

          “As to why rationalists don’t have the social co-processor: I think the social co-processor is intimately tied in to the parts of the brain which make you irrational”

          Personally, I attribute it to hormone/drug exposure in-utero. (FWIW, I also don’t self-identify as rationalist.)

          • 27chaos says:

            Oh god, I’ve started to self-identify as rationalist. What have I done???!!!

          • Hainish says:

            TBH, I’ve always wondered who here considers themselves a rationalist and who doesn’t. (Can’t always tell just from casual observation. Disagreements are nearly always along some other axis.)

          • anon1 says:

            I don’t self-identify as a rationalist, because in my experience the rationalist culture doesn’t value pleasure (see also: Soylent) and demands justification for everything you do in terms of some explicit utility function. It seems like you can’t be a cultural rationalist and do things just because you like to do them. No, you have to dig up whatever summary of your utility function you gave ages ago (“I want to learn new things and have novel experiences!” or some such) and then shoehorn your actual desires into that somehow. (“The real reason I want to eat this delicious burger is that this will make it easier to concentrate on my studies!”)

          • Matthew says:

            On soylent, see this thread

            I generally enjoy eating and am not personally interested in soylent, but Viliam_Bur’s analogy is at least a reasonable alternative interpretation of what’s going on.

            I do feel a considerable gap from people like Lukeprog, who apparently only consumes non-fiction media. But I don’t think any of these things are actually considered central to identifying as a rationalist.

          • anon1 says:

            It’s not that these things are necessary to being a rationalist, it’s that these people are the really salient rationalists, and the type I’ve encountered in real life, and I don’t want to associate myself with them.

            Hmm. I wonder why this reasoning makes me want to not call myself a rationalist, but makes me *more* insistent on claiming to be a feminist to prevent angry tumblrites from having the term to themselves. Maybe because the feminists I know in real life are different from the internet stereotype but this doesn’t hold for rationalists.

          • Emile says:

            I identify as a lesswronger if not as a rationalist (I don’t think applying the label “rationalist” to myself will lead to better thinking), and as part of the rationalist movement (so it’s mostly a semantic quibble).

            But demanding “justification for everything you do in terms of some explicit utility function” seems silly, and I cringe when people do it; I find it annoying that, along with Crocker’s rules, some people seem to consider it something “rationalists” ought to do.

            So anon1, rest assured that “you have to have an explicit utility function” is not mainstream on LW.

          • anon1 says:

            I do comment on LW. I even got my previous job through it, so it’s worked out pretty well for me. I just get kind of twitchy around people who make LW/rationalism a big part of their identity.

          • Adele_L says:

            I do feel a considerable gap from people like Lukeprog, who apparently only consumes non-fiction media.

            Um, where are you getting this from? Luke definitely enjoys and consumes fiction.

          • I currently identify as a rationalist, with the caveat that I use the term to refer to a particular worldview, not to mean “one who thinks and acts rationally”. While I think the rationalist worldview is the right one to have (otherwise I wouldn’t identify as one), I am skeptical of the idea that it has the unique ability to systematically capture instrumental success (what you might call “rationalist superpowers”). Accordingly, I’m not super into the self-help side of things (I found Less Wrong long after the Sequences ended and it had become mostly about self-help, which is part of why I have never posted there) and dislike the term “aspiring rationalist” (although that might just be because it reminds me of Objectivists insisting on calling themselves “students of Objectivism”).

            I was previously reluctant to identify as part of this movement, such as it is. This dissolved once I realized that there’s much less uniformity of beliefs here than is commonly recognized from outside, and sometimes from inside (I think much of this is attributable to Eliezer’s writing style). I disagree with a fair number of the “consensus positions”, but have realized that lots of other people do too. I think the only generally controversial issue where there’s very little dissent is atheism (which does bother me a little even though I’m an atheist myself).

          • Nornagest says:

            TBH, I’ve always wondered who here considers themselves a rationalist and who doesn’t.

            I try not to identify as a rationalist, especially if that would lead to me doing or avoiding things because that’s what good rationalists do. But I’m aware that most people who know about the category would classify me as one.

        • schall und rauch says:

          (First time poster here, be gentle and don’t feed after midnight)

          I could imagine that rationalists have a strong system 2, whereas many of the social functions are located in system 1. I don’t have any statistics to prove it, other than the “antisocial intelligent nerd/maths geek” stereotype.
          A lot of the social functions are direct, quick, intuitive and don’t rely on complex analysis. This is where system 1 shines. Rational theory weighs the pros and cons of an outcome against the probabilities — that kind of evaluation is classic system 2 work.

          Speaking for myself, I have a pretty strong system 2, but a really lousy system 1. While most people concur that I am intelligent, I am sure it’s not the kind of intelligence that is referred to when anybody says women like intelligent men (or alternatively: intelligence is sexy).

          Further evidence for my little theory is an interesting fact from the OkCupid data-analysis guru: there is an optimal amount of time to spend writing a message in order to receive a reply. If you spend more time on a message (perhaps to craft it in more detail, or to re-read it), the chances of a reply diminish.

          • 27chaos says:

            Is the time spent crafting the message so significant that it amounts to a delay in sending the message? Because I’d imagine prompt replies are more likely to generate interest than long-delayed ones. Thus, a confound.

      • a person says:

        I don’t think this is it. Looking at the people I know in real life, I have absolutely no idea who I would classify as a rational and who I would call a typical in your terminology; this doesn’t seem like a “natural way to carve thingspace” (did I get that right?). I feel like this is a disguised version of the absurd “there exist two types of people, jocks and nerds” meme, but in my experience people who like Star Wars aren’t any more rational on average than people who like basketball.

        In my experience the only substantial difference between nerds and non-nerds is that non-nerds are much more likely to decrement someone’s status for minor social incongruencies and faux pas, which I guess you could call “irrational”, maybe.

        My model of why some intelligent people have lower social skills is as follows: humans are constantly thinking about stuff, on the unconscious, subconscious, and conscious levels. Evidence for this is that many people have to practice meditation for years in order to be able to not think about stuff for a few minutes at a time. Intelligent people often seek out interesting, thought-provoking things to think about and let their brain chew on. Normal people, on the other hand, just think about the mundane details of their life and therefore spend a lot of brainpower mulling over social situations.

        There are a million rules to being socially competent, so you need to invest time thinking about it in order to develop competency. Normal people naturally invest their time in social skills, but some intelligent people’s brains are occupied with other things throughout childhood, so they usually don’t start thinking about social skills until they get to be fifteen or so and look around and say “Wait, I’m a total loser, I need to do something about this.” So these people will always be a little behind and therefore less socially skilled than their normal peers.

        This also explains why autistic people are so bad socially: their brains tend to completely obsess over a particular thing and therefore they almost never think about social interaction.

        I want to make clear however that I go to a college where the average SAT score is ~2100 and there are a lot of highly intelligent socially competent people here, it’s not like every smart person is a huge dweeb.

        • Hainish says:

          “Normal people on the other hand just think about the mundane details of their life and therefore spend a lot of brainpower mulling over social situations.”

          Do they, though? How do you know?

          (My impression is more that they don’t have to think about it very much at all.)

          • a person says:

            I don’t think normal people expend serious deliberate analytical effort thinking about social interactions, but your brain needs to think about something, so it kind of just naturally happens. As I go about my day, my brain naturally returns to and mulls over a variety of topics without any effort from me. I sometimes call this “ambient thought”. I believe that most people do this, because literature on meditation tells me that most people need to spend a lot of effort training themselves to not think about anything. My ambient thoughts consist of a mix of philosophical questions and also mundane topics. Since most people are not interested in philosophical questions, I imagine that their ambient thoughts consist of the mundane. And given that most people are fascinated by other people (think of how much people enjoy gossip, or the quote “small minds discuss people, average minds discuss events, great minds discuss ideas”), I imagine that most people’s ambient thoughts have a lot to do with social interactions. Nerds, however, often are thinking of some other thing, like math, computers, Star Trek, Minecraft, TV Tropes, what have you.

            Does that make sense?

          • Hainish says:

            It does. Thank you for elaborating.

          • 27chaos says:

            I don’t think we’re necessarily inclined to think about social interaction per se.

            As a rough list: people naturally think about whatever is right in front of them, whatever bad thing happened most recently, whatever terrible horrible thing happened most recently, whatever important thing they expect to happen soon, whatever they are worried about, and whatever trivia passes them by.

            Social interaction is the most common form of many of those. So it’s coincidence, not that our brains seek out social information to reason about. Such a process isn’t necessary, since social information is almost always close at hand. Causality probably plays some role as well (we do seem to seek out social information), but coincidence alone still explains much.

        • Anonymous says:

          I used bad terminology. (Also, I hit post before I realized I hadn’t filled in the name field. Still zz.)

          So, we have two dimensions: the x dimension measures the power of the social processor; the y dimension measures the power of “everything else.” We can vaguely measure them by asking questions like “would you go tell HR about your SAD?” or “what were your SAT scores?”, respectively.

          I recall reading somewhere (perhaps a Paul Graham essay?) that there’s a positive correlation between these two; as someone’s SAT scores increase, you expect their social grace to increase. We notice individuals in the high-SAT, likely-to-talk-to-HR quadrant because it’s kinda strange to come across someone who’s really smart but whom social interaction tends to escape. If their boss asked them to, do you think the average MIT grad or the average community college grad is more likely to turn themselves in to HR?

          I do, however, think there are patterns that produce individuals with anomalously high SAT-HR differentials, which we pretty much agree on: for various reasons, some people invest extremely heavily in one dimension at the expense of the other. And places like here and LW seem to have unusually high concentrations of such people, because we naturally filter out anyone without high intelligence, and anyone of high intelligence with normal social skills tends to have better/more enjoyable/pleasurable things to do than spend inordinate amounts of time reading LW/writing long comments here.
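
          To make that filtering story concrete, here is a toy simulation (every number is invented purely for illustration; it is a sketch of the selection effect, not a claim about real data). A trait can be positively correlated with intelligence in the population at large and still look scarce in a venue that both screens for intelligence and is mostly skipped by the socially adept.

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Hypothetical population: "smarts" and "social skill" share a common factor,
# so they are mildly positively correlated overall.
population = []
for _ in range(100_000):
    g = random.gauss(0, 1)
    smarts = 0.6 * g + 0.8 * random.gauss(0, 1)
    social = 0.6 * g + 0.8 * random.gauss(0, 1)
    population.append((smarts, social))

# Hypothetical forum: joining requires fairly high smarts, and the more
# socially skilled someone is, the less likely they are to spend time here.
forum = [(s, soc) for (s, soc) in population
         if s > 1.0 and random.random() < max(0.05, 1 - 0.4 * (soc + 2))]

pop_corr = correlation(*zip(*population))
forum_corr = correlation(*zip(*forum))
print(f"population correlation: {pop_corr:.2f}")
print(f"forum correlation:      {forum_corr:.2f}")
print(f"forum size:             {len(forum)}")
```

          With settings like these, the forum ends up full of high-smarts members whose social skill runs lower than the population-wide correlation would suggest, which is the kind of concentration described above, without either trait having to cause the other.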

          • a person says:

            anyone of high intelligence with normal social skills tends to have better/more enjoyable/pleasurable things to do than spend inordinate amounts of time reading LW/writing long comments here.

            I just felt like saying that I pretty much have constant opportunities for social interaction (I live in a fraternity house), but sometimes I like to be alone and when I like to be alone I like to go on this site and LW.

            Other than that your post makes sense and I agree with you. I think what happened is I thought by “rationals” you meant “all rational people, who are socially awkward”, when in fact you meant “some of the people posting on this blog”.

    • Anonymous says:

      I physically flinched at Ialdabaoth’s description of telling the HR person that he couldn’t do his job. Y’know, anticipating the awfulness that was to come.

      I’d still read the hell out of that book, though. We can’t always have hindsight.

      • Hainish says:

        I did, too, but only because I’d previously learned to never trust HR.

        (I’d definitely read the hell out of that book, too.)

      • Error says:

        Me too. There’s a reason Catbert is an Evil Director of Human Resources. 🙁

        “Don’t talk to HR” is the corporate equivalent of the lawyerly wisdom: “Don’t talk to the cops.”

        • I think this is Yet Another Memetic Prevalence Debate (wow, that concept is just unbelievably useful).

          The lawyerly wisdom “Don’t talk to the cops” exists because so many people assume that talking to the cops is necessarily in their interest, and end up being wrong. It’s a meme designed to counter a different meme that had become so overexposed that it was having harmful effects. It doesn’t mean that, in every situation, talking to the cops is worse than any of the alternatives.

          The same applies to HR.

    • 27chaos says:

      As someone with Asperger’s, I think my knowledge of social norms has grown in exactly the way you describe. If it’s valid at the extremes, seems plausible it could be influencing moderates in similar ways.

      For anyone trying to learn social skills, the game theory and the heuristics-and-biases approaches have both been very helpful to me, though personal trial and error is the best tool of all. Schelling’s Micromotives and Macrobehavior is a great book on advanced game theory that hints at surprising social applications. Kahneman and Tversky’s original paper is worth reading for learning heuristics and biases (Thinking, Fast and Slow is unnecessarily long). Naive evolutionary psychology draws from both fields and is also quite useful for generating ideas, though bad for discarding false ones, so its use requires caution.

      Anyone have a reading recommendation for ideas on the utility and emergence of social norms? That’s one notable omission from this list of suggested reading.

      Speaking of social norms, is it a faux pas to post several comments in a row here? I have not seen anyone else do it which makes me nervous. If a majority of people say yes to this question, Scott says yes, or I receive a good argument that it is a harmful practice, I will stop doing it in the future.

      • Posting several comments in rapid succession in different subthreads is very common here, especially in open threads and links threads (I do it and I’ve seen multiple others do it as well). I don’t think anyone minds; your comments don’t overwhelm readers this way since they’re scattered throughout the thread, and the recent comments sidebar has fast enough turnover that it doesn’t matter if you take up all the slots for a short time.

        Posting multiple comments one after the other in the same thread doesn’t seem to be common, since there usually isn’t a good reason to do it, but if you did have a good reason I don’t think anyone would care that much.

    • 27chaos says:

      I do not think that distrusting people by default is actually a good idea. Naively trusting everyone is worse and so it is at least an improvement over that, but personally I experience more failures due to paranoia than due to my faith in others.

      • AlphaCeph says:

        Paranoia is when you begin to think that everyone is – inexplicably – planning their lives around actively trying to hurt you, even at their own expense.

        Default distrust of strangers is believing that the average stranger will defect against you in a one-shot prisoner’s dilemma situation.

        • 27chaos says:

          There have been studies showing that most people cooperate in a one-shot prisoner’s dilemma, IIRC. This illustrates my point.

          My word choice might not have been optimal, “paranoia” is too strong.

          • Army1987 says:

            I’d guess people are more likely to Defect in certain countries than in others.

          • AlphaCeph says:

            I think it was discovered that cooperation only holds true for small amounts of money; for larger amounts, people defect. I can’t find references.

            EDIT: found something: http://en.m.wikipedia.org/wiki/Centipede_game

          • 27chaos says:

            Such results imply we can trust strangers if they have no incentive to betray us, and probably even if they have a slight incentive to betray us. And, this is before we’ve factored in further effects like reputation and iterated play and social norms, or human psychology and empathy (some of these probably played a role in the original experiment, like social norms and empathy, but they will presumably be stronger in a real situation than in a game competition for money and science).

            There are some studies with dropped wallets that are better than the centipede game for these purposes, I think. IIRC about two-thirds of deliberately “lost” wallets containing money are returned, though it varies a lot by region.
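
            As a back-of-the-envelope illustration of that reasoning (every payoff below is invented; only the base rate echoes the wallet figure), default trust is worth extending whenever the expected gain at the observed cooperation rate outweighs the expected loss:

```python
# All numbers are invented for illustration; only the base rate echoes the
# wallet-return figure mentioned above.
p_cooperate = 2 / 3      # assumed chance a stranger behaves honestly
gain_if_honored = 10     # benefit when extending trust pays off (arbitrary units)
loss_if_betrayed = 15    # cost when it does not

ev_of_trusting = p_cooperate * gain_if_honored - (1 - p_cooperate) * loss_if_betrayed
print(round(ev_of_trusting, 2))  # 1.67 > 0, so trusting wins under these numbers
```

            Make the potential loss large enough relative to the gain and the same two-thirds base rate no longer justifies trusting, which is presumably why the incentive to betray matters so much.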

            My mental model of Scott has just been birthed this moment, and he says that this may be a case of joint over and underdiagnosis. We underestimate the number of scam artists, and also underestimate the altruism of normal people. This seems plausible to me.

            (Are scam artists and normal people actually mostly separate groups, or is it more that nice people sometimes have bad days on which they are terribly deceptive?)

            Bravery debate wise, I think there are more people who are isolated from others and distrustful than people who are naive and too trusting. Do you disagree with that?

      • AlphaCeph says:

        Also, people who get paranoid tend to do so because they don’t have a good grasp of the realistic constraints on others’ behavior. An ex-girlfriend of mine was paranoid that her boss at work wanted to kill her; in reality it is exceedingly unlikely that a boss would do this, for numerous reasons – first and foremost that in our society committing a murder is an extremely risky and costly act that “well-to-do” people almost never follow through with.

    • This comment kind of bothers me. Not the idea that systematizing-type people are worse than average at dealing with social situations and this is often to their detriment—that part’s pretty hard to argue with. (I’m such a person and my social skills are worse than average. I think I do okay, but then I read stuff like this and wonder if that might just be Dunning–Kruger at work.)

      Rather, I don’t like the idea that if a person with a mental illness that’s affecting their life discloses it at work, then of course it must be because they lack the social acumen to realize that this is going to have negative consequences for them, and if they were more socially competent they would have realized that it was a bad idea and wouldn’t have done it.

      This smacks of hindsight bias. Highly general theories about workplace dynamics and labor relations can be useful, but they’re not as useful as knowing the specific factors that affect that particular workplace and the individual people in it, and you can’t know those factors without being there. Just as important are factors within the person’s own mind that affect what the alternatives are to making a disclosure at work, and what the internal and external consequences of those alternatives will be. You can’t know those without actually being that person.

      The idea that “this is why we need to teach systematizers how not to be socially inept” seems unjustified.

      • houseboatonstyx says:

        I’ve been thinking that, but taking it even further. Thinking HR is there to help employees is naive, not socially unskillful. A very socially skilled, or at least appealing, person may have always received help and cooperation from strangers because of zis charm.

    • veronica d says:

      I’ve kinda gone through this subthread a couple times and I have some comments.

      First, I wish you luck. Social skills are important and people who lack in this area sometimes miss out on some very nice parts of life. I want people to thrive, including you.

      Second, people who have good social skills are often pretty good at detecting condescension and contempt. So if your program has, from its inception, the idea that others are dumb and irrational — well, you might be baking in failure.

      Third, actually liking people is a big part of this, by which I mean a genuine delight in them as people, along with an ability to express this. You can develop this latter skill, and I hope you do. However, if you lack the former, the genuine delight, then (unless you are some ultra-cunning sociopath type) you will likely hit “uncanny valley” type responses fast.

      Myself, I tell people quite openly that I am neuro-atypical, thus they can expect me not to recognize their faces — which bothers me a lot and OMG I wish I did not have this condition. Likewise, they can expect me to be weird and kinda scatter-brained. It’s just how I’m wired. I get distracted and think about math a lot.

      But they can also expect me to really actually like them, and want to know what they think and the really cool ways they see the world. Although I’m super shy and awkward, I can usually get this across.

      Also I’m a good dancer and like to smile. That helps.

      I have no idea if I’m a rationalist. I think the epistemology sequence over on LW is pretty much totally correct.

      • AlphaCeph says:

        Thanks for your comments!

        “Second, people who have good social skills are often pretty good at detecting condescension and contempt. So if your program has, from its inception, the idea that others are dumb and irrational then you’re baking in failure”

        There’s some bad epistemology here. Whether or not other people would be offended by a particular claim doesn’t affect whether or not you should believe it.

        Anyway, I do think that there is a negative correlation between the kind of epistemic rationality we talk about here and social/streetsmart skill. I wouldn’t consider that condescending, though; there are many tradeoffs in life.

        There’s a tradeoff between being able to reach high shelves and not hitting your head on high ceilings! Yes, it’s called height. Pointing this out is not condescending or contemptuous to tall people. Or to short people.

        I also like what you say about “genuinely liking people”; I think there’s a grain of truth to it. Plenty of people who are good with social/streetsmart skills don’t like most people, but I think they do enjoy interacting with them, if only through the pleasure of getting the better of them.

        • veronica d says:

          The thing about people detecting condescension, I’m responding to the notions in this thread that rationalists can reason their way through this and be really good at it using their mega awesome brain tools. Which, look, this strikes me as hubris. Life is not so simple, and the “typicals” are often very smart and sophisticated people. They can pick up attitudes and stances.

          The attitudes in this thread are social suicide. What I am suggesting is to examine those attitudes themselves.

          Regarding genuinely liking people, no doubt the “charming sociopath” type exists. But how common are they? How likely is it that people on this thread could pull that off?

          Which, of course, I don’t know. But that is not how I’d place my bets.

          Again, attitudes and stances. Compare:

          “I wish I could pull off being a charming sociopath”

          vs.

          “I really like people, and I want them to know I like them, and I want to know when they like me, cuz liking and being liked is AWESOME!”

          Which do you want to be good at? Which do you want people to sense from you?

          • Nornagest says:

            Regarding genuinely liking people, no doubt the “charming sociopath” type exists. But how common are they? How likely is it that people on this thread could pull that off?

            My money’s on about 2 to 5% — the prevalence of sociopathy, adjusted upward some to account for subclinical cases and good actors. I’m almost sure that being a charming sociopath requires being a sociopath.

            That said, there are ways to be cool that fall between that and a genuine delight in everyone you meet. On the third hand, though, the rationalist community’s folk idea of social success seems to focus more on the former, and that’s probably a bad thing. I think the reason for it is mythology: MoR‘s Professor Quirrell is fun to read, but he’s not a very good role model, and not just because he’s evil. A lot of people don’t seem to get that.

          • veronica d says:

            @Nornagest — Yeah, I actually know nothing about sociopaths, so I shouldn’t say much.

            On the other points, (I hope obviously) I’m playing up the “genuine affection” thing not because it is all that matters. Clearly there is more to social skills besides hugs, including dealing with adversarial situations. That said, I think the need for real affection gets underplayed in conversations such as this.

            What I mean is, reading the first couple of posts in the subthread, the posters seemed to paint social skills as strictly adversarial (frex, dealing with shitty HR). And yes, we need those. Not everyone is a friend. But I think that is the weaker half of real social engagement, and that even getting good at adversarial social skills will miss two things: 1) the genuine rewards of warmth and 2) loyal allies who will go to the mat.

  10. Leo says:

    A regular commenter here (who isn’t me) has expressed a desire to submit a guest post about ADHD (something analogous to your post about SSRIs). Would you accept such?

    • 27chaos says:

      I think Scott would best be served by looking over the paper and then making a decision on whether or not to post it. Scott also might like to know more about your friend, such as whether their intelligence and rationality are trustworthy. These would help him make a decision; I hope he doesn’t make one, even a refusal, without finding such information first.

      I would like to see it, FWIW.

      If you are good with pharmaceuticals or neuroscience, or someone else reading this is, can you help me understand this paper and its implications, paragraph by paragraph? (I understand the abstract.) It’s one of the most technically written papers on ADHD I’ve ever come across. I can understand it, kind of, but it is slow going.

      http://adhdnet.com/wp/wp-content/uploads/2013/01/Lesch-2013-Dances-with-black-widow.pdf

      • Hainish says:

        It’s written by non-native English speakers, which is contributing to the lack of clarity.

      • Nita says:

        Re: the paper by Lesch et al.

        What made you interested in this paper in particular? It seems focused on the potential low-level, nuts-and-bolts mechanisms of ADHD — in other words, important for scientific progress but far from actionable advice. Also, the title is weird 😛

        I think non-specialists might get the most value out of review/overview articles — the science is more settled, there’s less compressed jargon and more thoughtful explanation.

        • 27chaos says:

          I’m familiar with everything about ADHD not found in the more technical papers, afaik. I don’t aspire to be a specialist, but I still like understanding what’s going on in my head. I went for several years without understanding anything at all about ADHD, and was shocked when I learned its symptoms in more detail because they were clearly controlling my behavior. Since then I’ve tried to keep on top of the issue better, and I’ve enjoyed doing so. Helps me feel I have more control over it.

          • Nita says:

            I don’t aspire to be a specialist, but I still like understanding what’s going on in my head.

            I see. Well, you’re not alone. I’m worried that mapping out the whole thing might take scientists a couple of decades, and even then the complete picture could be too complex to comprehend. But I understand how the concrete details can be comforting.

    • Scott Alexander says:

      I would at least be willing to see it.

  11. 27chaos says:

    Assuming that random noise is our most valuable resource, what are the best sources of random noise? Are some sources better for some tasks but worse for others?

    Relevant qualities to consider: biased vs unbiased randomness (which is more desirable, and in what situations?), availability (random number generators can’t be carried inside your head), flexibility/applicability (a lateral thinking Po often makes no sense as advice), maybe some others? Input?

    I like rhymes and words from foreign languages and humor/irony as sources of random noise. Anyone have other favorites?
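
    As a sketch of one more source (the wordlist and the hashing trick below are just an invented example, not an established technique): deterministically hash whatever you’re stuck on into a prompt word, de Bono “Po” style, so the same problem can be nudged from several directions by bumping a counter.

```python
import hashlib

# Hypothetical prompt list; any wordlist works, and a bigger one is less biased.
PROMPTS = ["bicycle", "volcano", "umbrella", "saxophone", "glacier",
           "library", "compass", "lantern", "origami", "tidepool"]

def prompt_for(problem: str, step: int = 0) -> str:
    """Map a problem statement (plus a counter) to a pseudo-random prompt word."""
    digest = hashlib.sha256(f"{problem}|{step}".encode()).hexdigest()
    return PROMPTS[int(digest, 16) % len(PROMPTS)]

print(prompt_for("how do I restructure this essay?"))     # same input, same word
print(prompt_for("how do I restructure this essay?", 1))  # bump the counter for a fresh one
```

    It’s deterministic and only as varied as the wordlist, and it obviously can’t be run inside your head, so it scores well on repeatability but poorly on the availability criterion above.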

    • MugaSofer says:

      I recently had a design teacher advocate finding a bunch of pictures vaguely related to something you’re designing, picking one, looking at a bunch of pictures vaguely related to that, through three or four iterations. Y’know, as a way to generate ideas on demand.

      Seems slow, but worth a look.

  12. 27chaos says:

    Just noticed the minotaur shadow in the fanart. I guess that’s Moloch? Awesome!

    • 27chaos says:

      Now I’m curious what Scott would do if trapped in the hallways of House of Leaves. We’ve already got Fan Art, so it’s only a matter of time until awkward real life inspired fanfiction begins. And then comes the porn.

  13. Bastomet says:

    Does pedophilia count as race or gender? I’ve been reading the NAMBLA site and I hate to say it, but they have some good points that are making me question whether pedophilia is inherently immoral.
    http://nambla.org/data.html

    • Anonymous says:

      It seems fairly clear that it’s possible for a child to engage in sexual activity in a way that doesn’t harm the child. I mean, sexual activity can hurt kids, but it’s not 100% guaranteed to. And children can and do develop sexual attractions to adults. So there’s no law of nature saying that a non-harmful sexual relationship between an adult and a child couldn’t exist.

      The problem is that it’s too hard to tell from the outside whether a situation like that is harming the child, and it’s not a risk we can take. So we put in place a universal norm against it. There might be a better norm that in theory could be imposed, but I don’t know what that might look like.

      And of course if a person happens to be sexually attracted to children then that doesn’t diminish that person’s inherent moral value. For the aforementioned reason they still shouldn’t have sex with kids, of course, but the point still stands. This actually opens up an inconsistency in the moral preferences of the average member of Western society. I believe that said average member would say that people shouldn’t be morally judged on characteristics they can’t control, and that people can’t control whether or not they are pedophiles, but that people should be morally judged on whether or not they are pedophiles.

      The current stigmatization of pedophilia is probably bad. Not even for pedophiles’ sake; no matter how bad something is, if it’s so severely stigmatized that you can’t even talk about it, it’s going to be dealt with suboptimally.

      • Cauê says:

        “The problem is that it’s too hard to tell from the outside whether a situation like that is harming the child, and it’s not a risk we can take. So we put in place a universal norm against it. There might be a better norm that in theory could be imposed, but I don’t know what that might look like.”

        This makes sense. A little too much sense, actually. I really don’t think this is the actual thought process of people promoting and enforcing the universal norm.

        Using Jonathan Haidt’s terminology, my bet is that pedophilia activates people’s “purity/degradation” moral foundation, rather than the harm foundation.

    • Scott Alexander says:

      This is my personal blog, and anything discussed in the comments is going to get linked to me. I would prefer that you not discuss that broad category of things here. I’m not saying you’re wrong, or that there shouldn’t be a place to discuss it, I just feel like “ability to discuss controversial topics without getting in trouble” is a limited resource and right now it’s kind of being burnt in the public square to general merriment.

      • Anonymous says:

        I apologize; I suppose I’m the one who did most of the burning here. I should have waited for you to say something before posting. I have no objection if you prefer to delete my comment.

      • MugaSofer says:

        Perhaps on LessWrong?

      • Bastomet says:

        Oh sorry. =(

      • Ialdabaoth says:

        “What’s going on in this threa – OH LAWD.”

        • houseboatonstyx says:

          Ialdabaoth

          “What’s going on in this threa – OH LAWD.”

          Socially skillful comment sighted.

          • Anon says:

            Don’t do this. You’re just, what, calling someone out for displaying a social skill? Don’t try to play it off as offering feedback; this isn’t what feedback looks like. It’s a subtle and insidious form of bullying, but it’s still bullying. Don’t do it.

          • houseboatonstyx says:

            It’s a compliment, encouragement. In context of much talk about him lacking social skills, this is saying “Well, you’re demonstrating some skill right now, good for you.”

            Actually it’s three skills: recognizing the inappropriateness of this subthread; pointing it out tactfully; and good comedy timing with the all caps.

          • Anonymous says:

            Different Anon, but I interpreted the parent comment as sarcasm, so it’s probably not as useful as you think.

          • veronica d says:

            Right. I also read that as snide and dismissive.

            So I guess we can file this under “socially unskilled comment that tries, but fails, to compliment socially skilled comment.”

            Social stuff is hard.

  14. Anonymous says:

    There’s currently an AMA on r/philosophy about the basic income guarantee that’s attracting a pretty large amount of attention. The OP seems to be running it pretty poorly, though, giving very few in-depth answers and making a few rude comments to posters. I know a number of people here support basic income, so if you do you might want to check this out and/or comment.

    http://www.reddit.com/r/philosophy/comments/2gv4vl/basic_income_ama_series_we_are_jason_burke_murphy/

  15. AR+ says:

    4chan’s /pol/ looks to be going for the record in Highest Body-Count Internet Meme by forming a death cult around an anime personification of ebola and spreading it to African Internet communities, in a deliberate effort to sow mistrust of Westerners and medicine among superstitious Africans and thus defeat efforts to head off a pandemic.

    The Internet, ladies and gentlemen. If you’ve ever read a fantasy novel and wondered why somebody would willingly worship an evil deity who doesn’t even seem to have much in the way of a benefits package, now you know. For the lulz. (Link content note: blood, bones, anime)

    • think happy memes says:

      In addition to stirring up a global death cult, 4chan denizens are also possibly being driven off the site by moderation developments in the fallout from Gamergate.

      I’m sure it will all turn out to be just fine.

    • Erik says:

      Clearly the God-Emperor of Mankind needs to send the Inquisition after these Chaos Cultists.

    • MugaSofer says:

      http://tvtropes.org/pmwiki/pmwiki.php/Main/FortheEvulz

      Also, holy cow, that worked? Even a little? Damn, we need to reverse-engineer this and start using it for good.

      • endoself says:

        I think it’s more likely that it didn’t work, but some Africans would have blamed Westerners for ebola anyway. Notice that the linked article gives evidence for both things happening, but not for one causing the other.

        • Anonymous says:

          If it had worked, what would it look like? What kind of evidence could there be? The article gives direct responses to the meme.

        • James says:

          Yeah, I can’t help but feel that the article attempts, not-very-convincingly, to thread together a cohesive narrative out of a not-very-cohesive set of things which happened on the internet. Also: treating 4chan as a unified hivemind.

          • AR+ says:

            I’m not aware of any evidence that it’s made a difference. I was just impressed by the audacity of the (ongoing) attempt.

      • Erik says:

        There are a lot more ways for things to go wrong than right. Reverse-engineering anything of use from this, I think, would involve figuring out what part is “right” and extracting that, and I’m not seeing much that’s right about Ebola-chan.

    • Army1987 says:

      Now I understand why Eliezer Yudkowsky said he was afraid that if too much info about AI leaked out then people on 4chan would run simulations of people and torture them just for the hell of it.

  16. Carinthium says:

    Since I think a lot of people here (particularly blacktrance) will have interesting things to say on it, I’m raising the topic of radical philosophical skepticism. How do you refute it? Can you refute it? Is there some way to beat the arguments for it?

    Also noting that arguments based on probabilities don’t work, as there are implicit axioms behind any system of probability which the skeptic can challenge.
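
    For concreteness, one standard formalization of probability (Kolmogorov’s) rests on axioms like the following, any of which a skeptic can simply decline to grant, along with the background assumption that a well-defined space of events exists at all:

```latex
% Kolmogorov's axioms for a probability measure P on a sample space \Omega
\begin{align*}
  &P(A) \ge 0 \quad \text{for every event } A, \\
  &P(\Omega) = 1, \\
  &P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
    \quad \text{for pairwise disjoint } A_1, A_2, \dots
\end{align*}
```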

    • Bugmaster says:

      I would like to refute it, but I don’t know what it is 🙁

      • EDIT: I think I might have totally missed the joke here. Leaving my original response up anyway, just in case.

        Radical skepticism is the belief that we can’t actually know anything. For instance, I may think I’m sitting at home typing this comment, but for all I know I’m actually a brain in a vat in some mad scientist’s laboratory, experiencing simulated sensory inputs which make it appear as though I’m sitting at home typing this comment. I don’t have any evidence that this isn’t the case, because anything that seemed to be such evidence might just be part of the simulated sensory inputs.

        • Carinthium says:

          Radical skepticism can go further than that. It is possible to be skeptical about induction from past to future (after all, what evidence do we have for induction that is not itself inductive?), about the reliability of memory (it’s an assumption that memory represents events which actually happened), or even about the capacity to reason (see the Evil Demon argument).

          • Of course; my example wasn’t intended to be comprehensive. I probably should have elaborated more though.

          • Illuminati Initiate says:

            I’m not really sure how to put this into words or really even entirely into thoughts, but I think I’m starting to get this vague idea that circular arguments are sometimes OK, and induction is an example of such things.

            I also have another vague idea floating around about justification for acting as if you know the universe isn’t playing tricks on you. Essentially, I choose to act as if the universe is not a lie because if you have two equally likely possibilities, where in one you can affect the outcome and in the other you can’t, then you should act as if you are in the world where you can affect the outcome. The problem then becomes how to avoid being mugged by Pascal.
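
            Spelling out that dominance idea a little (this is just one possible formalization, with an assumed constant payoff c in the branch where nothing you do matters):

```latex
% Two equally likely worlds: R (real, actions matter) and L (lie, payoff is a constant c)
\begin{align*}
  EU(\text{act})   &= \tfrac{1}{2}\, u(\text{act} \mid R) + \tfrac{1}{2}\, c, \\
  EU(\text{don't}) &= \tfrac{1}{2}\, u(\text{don't} \mid R) + \tfrac{1}{2}\, c,
\end{align*}
```

            so the comparison reduces to the real-world terms alone, and you can decide as if you were simply in R.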

          • Jaskologist says:

            Why try to avoid being mugged by Pascal at all? That’s just changing your reasoning until it comes into line with a previously-decided conclusion.

          • Carinthium says:

            Illuminati- That works if you can establish a concept of probability to begin with. There are skeptics about probability.

          • Illuminati Initiate says:

            @Jaskologist: sorry if you do know what I meant already, but in case you didn’t, “mugged by Pascal” referred to the general idea of Pascal’s mugging (“give me $50 and I’ll save 9*10^8 people outside the simulation you live in!”) rather than the original wager specifically, which actually wouldn’t work on me because my preference to not worship such an evil deity is stronger than my preference to avoid eternal damnation. If you knew that’s what I meant then you have a point, but down that path insanity lies… I’ve been having an ongoing crisis of unfaith.

            @Carinthium It seems to me that if you deny the concept of probability all that means is that you treat all probabilities as equal.

            I could escape Pascal if I could justify putting some level of trust in my memory, because then I could use induction against his dark whispers. But I’m having difficulty coming up with a way to distinguish evidence for a universe that my memories track from evidence for a universe with fake memories. And a fake-memory universe could be an influence-able universe, so my influence argument doesn’t work here. (And you don’t know how to influence it, which is how Pascal gets in.)

            You could get out of this with moral rather than logical axioms (which you don’t need to justify as they are arational (not irrational) preferences), by saying that the right thing to do is what would be the right thing assuming your memory is not a total lie. I suppose this is what I’ll go with for now but it seems unsatisfactory.

          • Jaskologist says:

            I was indeed thinking of Pascal’s Wager, not Pascal’s Mugging, though I guess the objection works in either case.

            I do note that in answering the former, you’ve taken your own moral intuition as a fixed certainty, which seems even more questionable to me than believing your memory.

          • Illuminati Initiate says:

            Morality is to me obviously a subjective rather than objective thing, which ironically means I can take my moral intuitions as a fixed certainty.

            Another argument against Pascal’s Wager that might also work against muggings in general is that opposite possibilities (you go to Hell for worshiping God, in this case) are just as likely, but I didn’t bring this up because if you can’t justify memories (again, what is at dispute here) then that argument also applies to everything and becomes a Pascal’s Mugging in its own right.

        • Bugmaster says:

          Ok, I gotta be honest, I wasn’t totally sure if that’s what it was, so I phrased my question in a way that would make me look good no matter what. Er, note to self: in the future, do not reveal evil plan when asked.

          Anyway, the problem with radical skepticism is precisely that it is completely irrefutable. It’s a universally applicable counter-argument: no matter what your opponent says, you could always reply, “ah, but I deny your underlying assumptions”. This may be great fun at parties, but it doesn’t get you anywhere; that is, you cannot arrive at any new ideas (or, arguably, any ideas at all) by following radical skepticism.

          Thus, while no one can refute radical skepticism by definition, positing it is equivalent to saying, “I refuse to think about anything at all”. At which point, you’re not doing philosophy anymore, and in fact it would be cheaper to replace you with a rock that has “I deny your assumptions” chiseled on it. Sure, there’d be nothing cosmically wrong with doing that, but it also effectively removes you from any discussion about anything… so… why are we still talking about this ?

          • Carinthium says:

            The first problem is that with radical skepticism in place, rationalists are no better than religious believers, as long as said believers have an internally coherent system. There are numerous possible systems, all of which could use radical skepticism to place themselves on an equal footing with rationality. Right now, they are right: we rationalists are no better than Christians at rationality.

            This leads into something else I’ve been thinking about. I call it the ‘paradigm problem’, based on Kuhn’s concept of paradigms that are incommensurable. Different philosophers have different standards of evidence, none of which stand up to skeptical scrutiny and which are thus all equally false. How is one to demonstrate one as better than another?

            Possible examples:
            -There is the Eliezer-esque assumption that empirical evidence is valid and probabilities are valid. It should be very clear to someone in the broad Rationalist community, so there is no need to go further.

            -My own favourite approach was once to take it on strict Faith. I have faith that my memories have a correlation to reality, my senses are valid, induction is valid, my memories are valid, etc. But this has problems of its own: I am being arbitrary in what I do and do not have faith in, since I did not have faith in the existence of, say, unicorns even though there is just as much rational evidence for them.

            Finally, there is the Intuitionist approach. I must emphasise this approach is by far the most problematic as it denies the possibility of Rationality as we understand it and replaces it with mere coherence between intuitions.

            The Intuitionist’s starting point is to equate rational statements with intuitions. If I claim that 1+1=2 has some sort of superior rational truth, the Intuitionist would respond that the idea that 1+1=2 is some sort of self-evident truth is an intuition, and why should I consider it a superior intuition to one such as the truth of morality which is equally strong?

            An Intuitionist of this sort can decide to ignore an intuition (such as common sense), but only if it conflicts with other intuitions. Thus, they are Coherentist by nature.

            Without refuting the Skeptic somehow, refuting the Intuitionist is impossible. Yes, they have one critical assumption I don’t agree with, but if I accept so many assumptions in rejecting skepticism that I can’t rationally justify, how can I consider myself superior?

          • Matthew says:

            Gonna have to disagree here. On the whole “what matters is the ability to make beliefs pay rent,” most religions have a “repeatedly being wrong that the end is nigh” problem. Is there anything similar in rationality? (I grant you that if/when the intelligence FOOM doesn’t happen, some rationalists will have egg on their face. But FOOM is controversial; it’s not definitive of the rationalist community.)

          • Carinthium says:

            See my note on internally coherent systems. There are many Christians who have taken the Biblical evidence to be very skeptical about end-of-world predictions, and believe it will be entirely unpredictable when the end of the world does happen and therefore not worth predicting.

            Besides, the criterion of making beliefs pay rent presupposes beliefs about the senses giving accurate information.

            The intuitionist can accept that he has a problem regarding sense data which contradicts his theory. But see the Duhem-Quine thesis- it is always possible to save a theory by the use of an auxiliary hypothesis more compatible with intuitions.

          • blacktrance says:

            Beliefs should pay rent in terms of anticipated experiences, which come through our senses. In that respect, it doesn’t matter as much where our senses get the information, as long as our beliefs anticipate it correctly. The techniques of epistemic rationality are better than the techniques of religious belief at giving us proper beliefs that correctly anticipate our experiences.

          • Carinthium says:

            blacktrance, that’s not all there is to skepticism. Your attitude assumes the reliability of induction, the reliability of memories, and the reliability of one’s own mind (see Evil Demon Argument).

            In addition, if we do not in fact know that other people actually exist from a rational perspective, that has extreme implications.

            I take it you haven’t studied this area much?

          • blacktrance says:

            Epistemology isn’t my area of expertise, but I’m encouraged by the fact that a significant majority of philosophers are non-skeptical realists, so I think there’s some good reason for rejecting skepticism. Unfortunately, most of the arguments against it get mathematical.

          • Carinthium says:

            I’m curious what mathematical arguments you refer to. Admittedly I haven’t heard of them.

            As for most philosophers, pretty much every anti-skeptical reading I’ve found has a crap argument, so my current hypothesis (from a non-skeptical perspective) is that philosophers have an instinctive aversion to skepticism which leads them to seek out rationalisations.

            Some of my evidence:
            -One response used by one of my university teachers was an ad hominem argument.
            -Another one was a Coherentist.
            -Another one argued based on intuitions (admittedly not horrible in hindsight, but…)
            -An online source tried to use the linguistic definition of “rational”, missing the point.
            -Another online source tried to argue that skepticism assumed the possibility of separating the body from the mind, missing the point entirely.

            Eliezer’s system of probabilities, which I presume you’re not referring to, fails because probability itself has too many axioms.

          • What’s the right number of axioms to have?

          • Paul Torek says:

            Bugmaster has it mostly right, except that instead of “I deny your assumptions,” the rock should read
            De do do do
            De da da da

            …because the very idea of offering arguments and evidence already commits one to the basics of deductive and inductive logic. In other words, the best answer to the talkative skeptic is a transcendental tu quoque. (When is tu quoque not a fallacy? When it’s transcendental.)

            The silent skeptic, on the other hand, requires no answer. But we can always offer treatment.

            Edit: Dang, 27chaos beat me to it.

          • Jaskologist says:

            On the whole “what matters is the ability to make beliefs pay rent,” most religions have a “repeatedly being wrong that the end is nigh” problem. Is there anything similar in rationality?

            How far back are you considering “Rationality” as a movement to exist? Can I count Malthus (modern-day rationalists still seem infatuated with him)?

            As I think over the question, what major practical accomplishments can Rationalism actually boast of? “Politics is the mind-killer” seems to me to be admitting that when it comes down to real-life, practical concerns, Rationalists fail as badly as everyone else.

          • RCF says:

            @Carinthium

            “The first problem is that with radical skepticism in place, rationalists are no better than religious believers as long as said believers have an internally coherent system.”

            Well, if you’re being radically skeptical, not even incoherence is a counterargument, because how can we trust that our minds are correctly perceiving incoherence?

            “There are numerous possible systems, all of which could use radical skepticism to place themselves on equality with rationality. Right now, they are right- we rationalists are no better than Christians at rationality.”

            If you’re asserting the second sentence follows from the first, I don’t see how. A drunk guy with a spoon is no better than well-trained soldier with an automatic rifle against a nuke, but that doesn’t mean that the soldier is no better at fighting.

            Radical skepticism makes rhetoric impossible, so the very fact that a Christian is debating with you means that they reject radical skepticism.

          • Carinthium says:

            Robby Bensinger- Only those which are somehow self-evident.

            Paul Torek- How does that make any sense?

            Firstly, a person can clearly reject deduction or induction and offer an argument from the other anyway.

            Second, some skeptics use the argument that our very deductive system ultimately refutes itself (because people reject circularity, for example). This gets around the presumption problem.

            Third, why on earth is a transcendental tu quoque better than an ordinary one?

            RCF- True, but most religious people consider coherence part of their system of what is valued.

            On the drunk guy with a spoon- An argument from analogy can clarify what a person is saying, but is not an argument in its own right. The problem is baseless assumption- if our base assumptions aren’t true, it doesn’t matter how logically we extrapolate. We’re still wrong.

            Let me give a clarifying analogy- contrast two hypothetical Christians. One is completely illogical in their theology, the other is completely logical. They are both equally wrong.

            On the Christian- The argument would go something like this. Granted not every Christian would do this, but it could be done in principle and that’s the problem.

            Rationalist: Your beliefs have no evidence for them.
            Christian: Your beliefs don’t either, as they fail the radical skeptical test.
            Rationalist: You reject radical skepticism!
            Christian: Actually, what I reject is the idea that rationality should apply to everything. Instead, I arbitrarily apply it to some things and not others.
            Rationalist: That’s… arbitrary!
            Christian: You’re arbitrary in making assumptions you have no right to in order to reject radical skepticism.

            In terms of establishing either as rational, the argument doesn’t work. But in showing they are both equally irrational, the hypothetical Christian is right.

          • Suppose I come up with a self-evident axiom, and the skeptic replies ‘I doubt that axiom’ or ‘I see no reason to accept that the axiom is self-evident’. What happens then? Has the skeptic been defeated? If so, how can we tell? I want a relatively detailed account of what would happen, in part so I can better understand what ‘self-evidence’ consists in. E.g.:

            1. If ‘self-evident’ means ‘true as a matter of logic’, the obvious response is, ‘Which logic?’ and ‘How do I know you’re using the right logic?’ There are inference systems that don’t even permit modus ponens. The skeptic can even raise the possibility that the ‘correct’ logic is the one that permits no inferences at all, the one where nothing (not even the original statement) follows from anything else. If a circular response is OK here, then surely one could also give reasonable circular arguments for less modest doctrines than e.g. ‘p follows from p’ or ‘there are no true contradictions’.

            2. If ‘self-evident’ means ‘impossible for a human being to doubt’, the skeptic can reply, ‘I find myself unable to doubt that, but that’s just a fact about my psychological makeup. My compulsion to accept a claim isn’t an actual justification for the claim; so though I am compelled, I choose to nonetheless behave henceforth as though the claim is open to doubt, because I believe a more rational, well-designed mind would not suffer from this compulsion.’

            3. If ‘self-evident’ means ‘when someone gives a skeptical argument against it, their head explodes’, then you win in the sense of killing your opponent. It’s not so clear this should count as an epistemic victory, however; an all-conquering ‘argument’ like that sounds more like a basilisk than like a category of evidence.

          • Carinthium says:

            2 and 3 are of course absurd. The answer is both 1 and that the logic in question must itself be validly justifiable by similar standards.

            The possibility that the true logic allows no probabilistic or deductive inferences at all is one which must be refuted somehow. I’m not quite sure how, but I’ve explained why it’s necessary.

          • If you’re assuming it’s an error to accept anything without a new supporting premise, you’re thereby assuming that the only correct way to reason is from an infinite chain of logics-justified-by-previous-logics.

            Maybe that is indeed the best system; I just want to note that the criterion of ‘correct reasoning’ that assumes you need infinite chains of justification is less epistemically modest than the one that allows unjustified assumptions. The skeptic and the anti-skeptic who assume it’s ‘bad’ to have externally unjustified axioms are both assuming without warrant a criterion of epistemic ‘badness’ that we should properly be skeptical of. In particular, we should be skeptical of such a criterion if it gives equally bad grades to ‘a calculator that outputs 2+2=5’ and ‘a calculator that outputs 2+2=4 but can’t prove it’.

            One alternative to an infinite regress would be to affirm all and only the theorems that are derivable in every logic. Or we could affirm all and only the theorems that meet a formal criterion — e.g., consistency. But, again, why should we accept either of these rules-for-acceptability? And it’s very clear that the logic that proves no theorems means that there are no universally derivable theorems; and it’s also clear that the logic proves no inconsistencies. If your skepticism undermines all your reasons for rejecting the logic that proves no theorems, that’s a reduction to absurdity of your skepticism.

          • Paul Torek says:

            @Carinthium

            A person who accepts induction will quickly become convinced of the utility of deduction: it just keeps on working, over and over. A person who accepts deduction can question induction, but the questioning only has the appearance of seriousness. Induction doesn’t prove its conclusions! – the skeptic objects. Well no one ever said it did; it only makes them probable. See Troy’s reply to this thread on probability.

            A skeptical argument that convicts our deductive logic of self-refutation would be a great argument. If it worked. But for first-order logic, we have a soundness and completeness theorem. So no such result is in the offing there, at least.

            Tu quoque is ordinarily a fallacy because the shared belief or action has not been shown to be implied by reason. If I criticize you for overeating, and you point out that I eat just as much, the fallaciousness of your reply depends on the openness of eating-this-much to rational criticism. But the inference patterns of rational thought, taken as a whole, are not open to rational criticism (though individually each pattern is open to criticism via coherence with the others – and a lot of good work has been done in that area).

    • Jadagul says:

      You can’t refute radical skepticism. Not that you can really argue for it, either. You can’t actually make coherent arguments until you agree on epistemic standards, so you can’t have logical or reasoned or even coherent arguments about what epistemic standards to use.

      As Bugmaster says, though, if you’re really radically skeptical then you also can’t really participate in conversation with the rest of us. I have a set of epistemic standards and if you refuse to play by those rules I don’t have to deal with you.

      But yeah, this is the point of one of my favorite jokes, which Scott quoted in his post on Quantum Computing Since Democritus: “Of course I don’t believe in the principle of induction. It’s always worked before!” There are lots of possible self-consistent axiom systems that are totally incompatible with each other.

      • Carinthium says:

        Jadagul, doesn’t your position lead logically to subjectivism at best? Different paradigms have different base assumptions, and it is impossible for one to be rationally superior to another, as all of them are equally without foundation?

        And in that case, what makes us rationalists any better than the religious or any other set of paradigms which are self-consistent?

        This problem has a practical component in that actual Intuitionists exist in Philosophy Departments. If we have no rational answer to their attacks because our paradigms are incommensurable, we have no right to see our system as any better than theirs.

        EDIT: In practice, of course, most arguments for radical skepticism tend to work by demonstrating that a certain assumption in laypeople’s standards exists and can’t be justified any further- thus, since people tend to oppose standards that can’t be justified, presenting a problem.

        • Jadagul says:

          Basically, yeah. You can’t justify your epistemology, because your epistemology determines what counts as “justification.” And if you’re talking to someone who simply doesn’t accept any of your premises, then there’s legitimately no basis for arguing with them and you can’t get anywhere.

          On the other hand, if someone genuinely doesn’t accept the principle of induction, you can’t really talk to them, because they’ll think that you mean something different every time you use the same word. Most actual people have enough of a similar frame of reference to each other that we can plausibly debate.

          (This relates to Scott’s recent post on isolated demands for rigor; a radical skeptic is totally consistent but also can’t really interact with the world in any even vaguely coherent way. But if you engage in radical skepticism if and only if someone has established a proposition you don’t like under normal epistemological assumptions, then you’re cheating and also no longer consistent).

          As to why we’re better–well, “better” is also one of those things that depends on value judgments and thus on premises that other people may or may not accept. A rationalist epistemology is superior to most other epistemologies because it comes to correct answers more often. You could ask why I think they’re correct, and I’d say that by the standards of judgment encoded in my epistemology, they’re correct. Or you could ask why being correct is “good,” and I’d say that the standards encoded in my value system say that being correct is good.

          Richard Rorty has a story about the Catholic Church’s refutation of Galilean mechanics. They studied Galileo’s work, and said, “Okay, this is really interesting. You’ve shown that everything looks exactly like it would _if_ the earth revolved around the sun. This might be useful for sailors who need calculational shortcuts while doing navigation. But it’s important to remember that even if the world is literally indistinguishable by any observation from one in which the earth revolves around the sun, it’s still _true_ that the sun revolves around the earth, which we know because the Bible says so.”

          And there are two observations to make about this argument. The first is that it can’t actually be refuted. The assertion is “the world looks exactly like it would if your story were true, but we prefer another story with completely indistinguishable outputs.” That’s a statement of preference, not, say, a differing prediction.

          Second: well, if you really want to you can construct Newtonian mechanics in a reference frame where the Earth is still and the sun revolves erratically around it. The math is much harder, but it still works out correctly. So…the Church wasn’t exactly wrong, either.

          • Carinthium says:

            The reason I started to worry about this in the first place was dealing with intuitionist Coherentists- one in particular who claimed that logic is just another type of intuition and that moral realism must be correct because intuitions say so.

            I don’t have a problem with trying to be correct just because- I want to be correct, and I don’t see how one can criticise a base desire of that sort.

            The problem with the idea that correctness is impossible is that it means that the religious people who claim that Science and Rationality are just another faith are essentially RIGHT. Maybe it doesn’t have rituals, but that doesn’t make it any less baseless.

            As for your arguments about radical skepticism, I am trying to determine whether radical skepticism is true. It is totally irrelevant whether I am psychologically capable of acting on it.

          • Jadagul says:

            I’m claiming that radical skepticism isn’t true or false. Epistemological claims do not have truth values because they are claims about what “true” means. So radical skepticism isn’t true or false, it’s just an attitude you may or may not choose to adopt. And I mostly don’t, because it doesn’t get me anywhere.

            And in some sense, rationality is just another set of axioms, which a religious person might describe as a “faith”. But you and I are both arguing from within a (roughly) rationalist epistemology. In which the rationalist epistemology is justified, and a religious one is not. So there’s your justification.

          • Carinthium says:

            The only truth that matters is truth as defined by the correspondence theory. Many other conceptions of truth are coherent, but if one lacks correspondence truth the rest are pointless.

            I don’t adopt a rationalist epistemology Just Because- I adopt it because I believe (or believed, anyway) it would get me to Correspondence Truth. Right now? I’m having a crisis of faith.

          • Auroch says:

            Correspondence truth is achieved when the map perfectly matches the territory. It’s true that radical skepticism would deny that it’s possible to know whether your map matches the territory. However, it only denies that because it denies that the territory exists!

            Essentially, yes, radical skepticism implies that rationalism won’t bring you to your goal, but it implies that everything else won’t bring you to your goal either, so you should continue to behave as though radical skepticism is false. (This is similar to the nonexistence of free will and the probability that you are a Boltzmann Brain.)

            Also, from the sound of it your friend the intuitionist Coherentist’s beliefs don’t sound terribly coherent.

          • Carinthium says:

            Technically speaking, radical skepticism denies we can know the territory exists.

            Even assuming the world does exist, I’m still irrational for believing in it, because I can’t refute the skeptic and therefore lack evidence. Pragmatism is irrelevant.

            What do you mean about lack of coherence in that person?

    • 27chaos says:

      You can change the standards for evaluating an idea from “true” to “useful”. Other than that, there’s no way to beat it. I’m uncertain whether or not it would be possible to construct an argument that truth consists only of the “useful” (in a certain broad sense); if that were the case, then the argument might be fully defeatable without requiring a forced, arbitrary change in evaluative standards.

      • Carinthium says:

        On changing the standard from ‘true’ to ‘useful’:
        Calling the assumptions of reliable induction, reliable senses, and reliable memory useful begs the question.

        If induction isn’t true, then just because they have been useful doesn’t mean they will be in the future.

        If the memory isn’t real, we don’t know that the three assumptions have ever been useful.

        If the senses aren’t giving true information it’s a bit better, but still only useful for self-gratification since without the senses we can’t know of the existence of any other minds.

        On usefulness conception of truth:
        How would that even work? Unless truth in the ordinary sense (correspondence-truth) is demonstrated somehow to be incoherent, I don’t see how what you are doing could even be possible.

        I can’t rule out the possibility entirely, but I’m skeptical (in the ordinary sense) of such an idea.

        • 27chaos says:

          The idea has its problems certainly, but it’s the closest thing I’ve found to a potential answer to the problem of skepticism.

          I agree that converting someone from the standard of truth to the standard of usefulness is impossible unless the notion of truth not founded on usefulness can be shown to be incoherent. But I don’t think such an endeavor is as hopeless as you do. In fact, I think that skepticism actually gets us halfway to such a conclusion by itself.

          The common idea that skepticism is self refuting also has relevance here, I think.

          Your complaint that usefulness depends on truth is insightful. But if truth also depends on usefulness, then what we have is a valid tautology, like 1+1=2. Such tautologies are both true and useful :), and can teach us things we didn’t know before if applied correctly, like how mathematical identities can get us to the Pythagorean theorem.

          If this comment is vague, it’s because these issues are complicated and I haven’t thought them through in great detail. Nonetheless despite the vagueness of my intuitions, I think there is value in them.

          (There is a wonderful comment in one of LessWrong’s quote threads about the importance of bad proofs in mathematics. Learning that idea has freed my thinking greatly. This comment is submitted in that spirit.)

          I am not certain or highly confident that this task is possible. But I’d place the odds of success at around 40%, while your estimate seems closer to 4%.

          • Jadagul says:

            I’m going to recommend Richard Rorty’s work again (and I’m going to keep doing it as long as this topic keeps coming up.) He does basically argue for giving up on the idea that a “true” or “correct” system of epistemology or values can be derived from the world, and instead adopting a _useful_ system of beliefs. But of course you can only make judgments about what beliefs will be useful if you already have some values and some epistemological beliefs, so this all winds up circling back on itself. But the point is that that problem is inescapable.

          • Carinthium says:

            I can easily see how you could say we have no truth from the propositions discussed, but how would we say that truth depends on usefulness? Requesting clarification on how this might be done, even if your thoughts are vague.

            For what it’s worth, my concern is primarily with whether rationality based on truth is or is not possible so Rorty is only partially relevant. I’ll still consider him.

          • Jadagul says:

            Sorry, that was a little vague. You suggested replacing “truth” as a standard with “useful.” But it’s even less clear a priori what constitutes a “useful” belief than what constitutes a “true” one.

            My claim is that there’s no way to justify an epistemology without reference to a standard of “justification,” which is an epistemology and thus has to be justified…. It’s intrinsically circular. So you can never conclusively found your epistemology or your values from first principles, since the first principles are exactly what’s in question.

            And all you can do is say that your current beliefs are the best beliefs you can come up with, while acknowledging that they are better or worse _with regard to your current standards for what is “good” or “bad”_. I know what sorts of things I value, and what sort of justifications I tend to accept. And the fact that this is a contingent fact of the way my brain is structured–and that it’s hypothetically possible to design brains that would not accept those justifications–doesn’t change the fact that _my_ brain accepts those justifications. And you can’t actually do any better than that.

            (Eliezer writes about those issues here, and the essay gets very tangled and confused at some point, but I think that’s because there’s no good way to actually write about such things).

            You say your concern is about whether “rationality based on truth is possible,” but not only am I not really sure what that means, I think that it’s not clear that it can mean anything.

          • Carinthium says:

            For something to be rational based on truth would mean either that it is self-evident, deducible from self-evident principles, or the fact it is probably true is deducible from self-evident principles.

            Without that, we face the Isolation Objection- our beliefs have no basis in reality, and are hence irrational.

            If this is the case, how are we supposed to consider ourselves any better than people of faith? Doesn’t it make rationality just another form of religion?

          • Jadagul says:

            Why does it matter if you’re “better” than someone else? That’s a silly thing to worry about. Pick your premises and own them.

          • 27chaos says:

            Knowledge relies on and consists of reference classes and generalizations and patterns. Reference classes and patterns are determined by what phenomena you’re interested in. So concepts of truth implicitly assume an underlying value system or decisionmaking process. This is why I think it might be possible to persuade some types of skeptics: since the concept of truth assumes a value system, you can perhaps work from that concept to convince them, if they’re willing to work with you.

          • Carinthium says:

            I’m a “believer” in the Correspondence Theory of Truth. I don’t consider other theories incoherent, but I consider Correspondence Truth to be my goal. If my theory isn’t better than other, contradictory theories, it means terrible things for my attempts to reach Correspondence Truth.

            Also- the definition of Correspondence Truth, purely as Definition, does not fall into 27chaos’s problems. It is purely a value judgement regarding goals, and therefore doesn’t fall into the right/wrong category.

    • Troy says:

      If I recall correctly, you were the poster who asked about moral skepticism some time back. I don’t think you were satisfied by my responses to that question, and I don’t think you’ll be satisfied by my responses to this one either, but I’ll still have a go. 🙂

      In general, the first thing I would say is that we need to be clear on what “refuting skepticism” means. If refuting skepticism means convincing a skeptic that he’s wrong, then there’s no guarantee that we can succeed at that psychological task. If it means demonstrating from agreed-upon truths that he’s wrong, then there’s again no guarantee of doing that, because the skeptic can always just deny one of your premises. If, however, refuting skepticism means starting with a priori or otherwise self-evident premises and then showing skepticism false (or probably false), that’s a hard task, but not one that seems to me to be in principle insoluble.

      For my part, I would argue, first, for the existence of logical probability in the sense of Keynes, Jeffreys, Cox, Jaynes, etc., as a relation of evidential support holding between any evidential proposition E and any hypothesis of interest H. Our evidence at a time consists of everything that we can immediately or directly know, e.g., facts about our own sensory experiences. If we adopt this framework, the skeptical question is whether our ordinary beliefs about the world are probable with respect to our evidence.

      Answering this question involves multiple tasks. One task is spelling out some kind of criteria of logical probability, criteria that go beyond the probability axioms alone (which allow for any number of probability assignments, including more “skeptical” ones). Here I think the idea of Solomonoff Induction and measuring the Kolmogorov Complexity of different hypotheses is along the right track (this is very much oversimplified, but sufficient for the purposes of sketching my view), with ordinary judgments of simplicity and rules of thumb like Occam’s Razor being heuristic approximations of this.

      At this point, induction of the kind that we think is reasonable falls out of our system. External world skepticism, though, remains a separate problem. What to say depends on the version of external world skepticism under consideration, but my basic move in all such cases would be to say that the skeptical hypotheses always require as much descriptive detail as the non-skeptical hypothesis (because they, e.g., need to predict all of the same sensory experiences) but then add more detail, in the form of the evil demon/computer program/whatever.
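
      A minimal toy sketch of this simplicity-weighting idea (an illustration only, not Troy’s actual formalism: the hypothesis names and bit counts below are made up, and a hand-chosen description length stands in for Kolmogorov complexity, which is uncomputable):

      def normalize(weights):
          total = sum(weights.values())
          return {h: w / total for h, w in weights.items()}

      def simplicity_prior(description_bits):
          # Weight each hypothesis by 2^(-L), where L is the bit-length of its chosen description.
          return normalize({h: 2.0 ** -bits for h, bits in description_bits.items()})

      def update(prior, likelihoods):
          # Ordinary Bayes update: posterior proportional to prior * likelihood.
          return normalize({h: prior[h] * likelihoods[h] for h in prior})

      # Two hypothetical hypotheses that predict exactly the same sensory evidence.
      # The skeptical one needs everything the ordinary one needs plus an extra demon
      # mechanism, so its description is longer and its prior weight exponentially smaller.
      bits = {"external world": 100, "demon simulating an external world": 130}
      likelihood = {"external world": 1.0, "demon simulating an external world": 1.0}

      print(update(simplicity_prior(bits), likelihood))
      # The ordinary hypothesis keeps essentially all the posterior mass (a 2^30 : 1 ratio).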

      Lots more to say, but I suspect that’s sufficient to give you plenty to object to. 🙂

      • 27chaos says:

        I agree. If a skeptic is making arguments, then they have premises you can appeal to, so converting them is possible. If the skeptic is just contradicting everything you say, then you can’t. But I don’t think such automatic gainsaying of everything asserted can really be considered a belief in the first place. And I am doubtful such pure skepticism is even possible for conscious beings to maintain; I think hidden premises are probably an inevitability, maybe even in the case of the automatic-gainsaying skeptic.

        • Troy says:

          Yes, skeptics making arguments I can engage with, because I can try to show their position to be contradictory, in Socratic style. (This was my strategy in the moral skepticism discussion.)

          Some parts of Sextus suggest that Pyrrho tried to practice “pure skepticism,” but it is indeed quite doubtful that he was successful in this in his ordinary life.

          • Carinthium says:

            I’m going to take some time. I have some standard objections to the concept of probability, but those are based on Bayesian probability so they might not apply. Maybe you’re right, for all I can tell.

            Before I look up probability, however, I’m going to point out one thing I do know- 27chaos’s ad hominem argument is irrelevant. The question is what is rational, not what humans can believe.

          • 27chaos says:

            The question is “what is rational to believe?”. I think that question is related to “what do you already believe?”. So I don’t think it’s an irrelevant ad hominem that no actual humans can maintain pure skepticism.

            Beyond humans, the word “belief” implies the possibility of changing one’s mind. If your decision procedure is completely static, you aren’t smart enough to be considered conscious. Radical skepticism involves this kind of rigidity, so I think radical skepticism is impossible for anything, human or not, to believe in. Anything capable of belief has already committed itself to a mind that is not purely skeptical.

            This doesn’t solve the problem; the problem is strictly unsolvable. If I came across a tape recorder repeatedly saying “nothing can be known,” I couldn’t change its mind through argument.

            But this reasoning does mean that we don’t have to worry about the problem, which is as good as it’s gonna get. The problem is like asking about square circles, it’s more or less contradictory, and finding a satisfactory resolution to the strict skepticism problem would be more worrisome than failing.

            I think Nozick talks somewhere about philosophical coercion. Seems relevant.

          • Troy says:

            If you’re interested in a reading list, the first three chapters of Keynes’s Treatise on Probability lay out the epistemological framework I’m working with here well. (Keynes further holds that most probabilities don’t have precise numerical values, but I think this is an internecine dispute not ultimately important to the broader questions you’re asking.) Tim McGrew is a more contemporary source defending this kind of framework; he addresses both external world and inductive skepticism in his books. The chapters of Richard Swinburne’s Epistemic Justification on logical probability are also well worth reading, although his brand of foundationalism is a little more liberal than the above two authors’.

            The other authors I mentioned in my original post all have mathematically sophisticated works defending and explicating the logical approach to probability (although they’re not usually concerned with addressing skepticism per se). The math in this area gets intimidating quickly, though, and these authors are not easy reads.

          • Carinthium says:

            In that case, I request a ‘recess’ of sorts. I have limited time, so apologies if it seems a bit insulting, but I think the Treatise on Probability will be enough for me to come up with a counter-argument.

            I’ll then post a response on the next Open Thread and we can continue this.

            ——————————————
            Although I suppose I might be wrong in theory (a radical skeptic who is certain of radical skepticism has an obvious problem), I’m going to critique 27chaos anyway.

            What is relevant is whether radical skepticism is true, regardless of whether it is impossible for any human to believe in it. The same goes for your other claims, which have no relevance to the correspondence-truth of skepticism.

            BTW, what does Nozick say about philosophical skepticism?

          • 27chaos says:

            If the skeptic asserts every argument as false regardless of its content, then I don’t think the skeptic is truly evaluating the argument at all, or can be said to believe it. A tape recording on repeat has equal intellectual prowess.

            Interpreting the world as nothing but raw physics and movement is computationally impossible, so we use patterns and generalizations when thinking. In determining what counts as noise and what counts as signal, we are making an implicit value judgment. Thus it seems plausible that usefulness is somehow necessarily philosophically prior to truthfulness. So when dealing with radical skeptics who are not tape recorders, it should be possible to persuade them.

          • 27chaos says:

            Nozick talks about philosophers wanting to make arguments that are so powerful their opponents are forced to accept them lest their heads explode.

            It’s in the introduction of Philosophical Explanations. It can be read in the preview here: Philosophical Explanations

            And here is a decent link discussing Nozick and philosophical coercion: http://bleedingheartlibertarians.com/2011/06/nozick-on-philosophical-explorations-there-is-room-for-words-other-than-last-words/

          • Troy says:

            No worries at all — the readings I suggested are rather extensive, and you’re under no obligation to read all of them or even any. But yes, I’d be happy to continue the discussion later if you want to look at the Keynes first.

    • Anonymous says:

      So a human can’t meaningfully be a philosophical skeptic in this full sense. At most, hir linguistic modules can make confused noises about justification while the rest of hir brain gets on with embodying Occamian/probabilistic principles, business as usual.
      It seems to me that this degree of inevitability simply is a form of justification, and does not need to appeal to other ways of justifying.

      • Carinthium says:

        Why should inevitability be a justification? At most, it’s a pragmatic justification, and not everyone accepts pragmatism as a criterion of truth (besides, if you do, that implies a lot of other things for consistency’s sake).

        There is no incompatibility between the propositions “X is true” and “It is impossible for A to believe X”. This is also the case when it is impossible for A to believe X fully.

  17. AR+ says:

    Radical weird policy I thought up for a science fiction/fantasy world that is just recently/might soon be technologically possible: a state-supported religious order of nun-like* celibate women who bear and raise children from the frozen semen of military personnel (whether otherwise fathers or not) who are killed in combat or perform distinguished acts of bravery, to serve as a partial counter to the dysgenic effects of war. Egg-egg conceptions as well, if you want to be feminist about it, when that becomes possible.

    Bonus: If 1 virgin birth was the greatest blessing ever bestowed on humanity or at least a miracle, as some in multiple faiths assert, then what might the benefits be for a war-like nation that practiced this? It would have by far the highest proportion of virgin births in the world!

    *Edit: I actually have no idea what real nuns are like.

    • kaninchen says:

      Is the celibacy strictly necessary? Providing they’re careful with the contraceptives, I see no reason why they couldn’t have a little fun on the side.

      Also, I presume that the intention of this is to breed kids specifically for war. Let’s leave the ethics of this aside, and also ignore questions of how relevant soldiers will actually be for future wars. I would observe that you would want to choose the fittest women in the society to be the – for want of a less-likely-to-offend-people term – breeders. Does that imply conscription from the population at large, or splitting the breeders’ children off into a male stream (sent to fight other states and their armies) and a female stream (who become the next generation of breeders)? The first would be unpopular, especially in a nation which has been exposed to feminism, while the second seems unsustainable without eventual inbreeding.

      • AR+ says:

        No, the intention was not to breed those children specifically for war, nor to raise population per se, but only to make “willing and able to go to war for the nation” less of a fitness hit. Would work w/ or w/o conscription of the usual type.

        I have the feeling that it would be a good idea to prohibit the female children of such an order from joining the order themselves, but I’m not sure why.

        • The Anonymouse says:

          I suspect the “willing and able to go to war for the nation” aspect is less genetic than cultural. It is my understanding (also informed by my own service in the infantry) that the combat arms components are disproportionately rural and/or southern. I don’t think poverty is the link; there is plenty of urban poverty that doesn’t show up. I do think supposedly antiquated masculine values are the link, and such values are still encouraged and appreciated in southern and rural communities.

        • Anonymous says:

          I don’t think it’s a fitness hit now. Those who are willing and able and go to war and come back are considered more attractive mates, no?

        • RCF says:

          I’m not sure that “breed children for war” and “decrease the extent to which children are being bred for not-war” are as distinct concepts as you seem to think they are.

  18. endoself says:

    Do you maintain a list of the comments of the month anywhere? They’re very good comments and it would be nice to be able to see them all in one place.

  19. Jake says:

    Does anyone have a good source on the history of social-justicey terminology (“privilege”, “racism” in the power+privilege sense, et cetera)? I know, for example, that “social justice” itself started out as a movement in 19th-century Catholicism, but how it went from there to its current meaning I have no idea. I’d be interested in both conventional intellectual history as well as less formal discussion of internet phenomena.

    • BenSix says:

      The term “social justice” was key to arguments between Hayek and Rawls. Interestingly, it also had a brief stint as the name of Father Charles Coughlin’s magazine.

    • J. Quinton says:

      Not sure about the history of the word “racism” in the power+privilege sense, but I remember being in middle school (a mere 20 years after MLK’s assassination) and being taught that racism was the act of treating one race better than others. It was kind of odd to hear it being used in the power+privilege sense by people who are both younger than me and less… colored than me.

  20. So this article from Al Jazeera America, calling for the abolition of copayments, has been making the rounds on my Facebook feed. I’d be curious to know what Scott, or anyone else knowledgeable in this area, thought of it.

    • Paul Torek says:

      If you want “anyone knowledgeable in this area,” skip my comment. But the article errs here:

      the promotion of co-pays by employers and insurers fails the rationality test. Why? Because co-pays discourage people, especially those with meager incomes, from seeing doctors and obtaining medications. That reduces immediate spending on doctor visits and drugs but not total costs over the longer term. Instead, when people who are squeezed financially do not pick up their medications, thus avoiding the co-pay, they later will need more intensive and costly care

      The insurance company doesn’t necessarily face that later, intensive, costly care. It might be a different private insurance company, or Medicare. Even if it were the same insurance company, everyone dies of something eventually, so I doubt that the total costs are higher when preventive care is missed. Hasn’t that already been discussed on lesswrong? And/or here?

      TLDR: Cui bono? Insurance co.

    • taelor says:

      Economist in training here. I had a longer post written out responding to that article, but the window I was writing it in accidentally got closed, and I don’t feel like typing it out again. Long story short: health economists disagree about the extent to which the problem that co-pays are meant to address is really a problem, but I think the article does a very poor job of explaining what the potential problem even is. The idea is that without co-pays, people will overconsume health services, which will drive up their premiums, which leads to an awkward situation where people are forced to pay indirectly for things that they would not consider to be worth the costs if they had to pay directly. Of note, the main study cited in support of this actually being a significant problem is Robin Hanson’s favorite study: the RAND Health Insurance Experiment.

      • ADifferentAnonymous says:

        The tl;dr of the RAND experiment (Hanson’s coverage here) is that they gave people free blank check health insurance; they responded by receiving much more health care than control but were no healthier as a result.

  21. Matthew says:

    I’ve also donated. Additionally, I will draw a comic on a topic of your choice for anyone else who donates $100 (this is on top of Brent’s creative reward offers).

    Given that Brent has apparently contributed greatly to Kerbal Space Program, I think someone should ask Randall Munroe, who seems to be a fan, to mention this campaign on his blag. Brent would probably be a millionaire by Monday.

  22. Corey says:

    This is a recent job posting from the company I work for:

    https://groups.google.com/forum/#!topic/ogre-list/gmz0TEZa1WE

    It’s in Ottawa, Ontario, Canada, home to several alumni of the CFAR workshops, including the singular swimmer963 among other luminaries.

  23. Bugmaster says:

    We are currently looking for a C# developer here at our small biotech/software company in SoCal. I’ve forwarded Brent’s resume to the people who handle all the money 🙂

    Forgot to mention — I don’t have any crash space, but if we really like a candidate, we usually fly them out to SoCal and put them up in a decent hotel… Sorry, that’s the best we’ve got.

    • veronica d says:

      Similar, but Boston, and more C++/Java shop. Anyway, I hope it was okay to forward his resume to our HR people, cuz that’s what I did.

      • Bugmaster says:

        Er. Yeah. In retrospect, I should’ve asked first before doing that, I think 🙁

        • Elissa says:

          This is fine, awesome and appreciated.

        • veronica d says:

          I figured the worst plausible outcome is an annoying call from a recruiter and the best plausible outcome is he gets a cool job faster than if I had asked permission. So I went ahead and sent it.

  24. lmm says:

    Dammit I missed my chance to ask my race-related question :(. Now I’ll have to find a reactionary somewhere else, but last time I read one of their articles it made me feel ill.

    • Emily says:

      Post an e-mail address and perhaps a reactionary will get back to you.

    • Introspectively, things with political implications used to trigger an ill feeling in me when I read something that looked like it might force me to relinquish one of my most treasured beliefs.

      For example, when I was a regular libertarian, I very regularly used to feel such a sick feeling when people laid out compelling data suggesting some free market solution was sub-optimal, or some libertarian philosophical argument was invalid. Similarly, when I was a left-wing social justice libertarian, anti-feminist datapoints or evidence that suggested important race differences used to make me feel sick. Now I very rarely get the experience, because I’ve stopped caring about basically all political issues and I don’t feel personally or emotionally invested in any of the conclusions I’ve reached.

      • Less Wrong talks a lot about this phenomenon. Often in the context of deconverting to atheism from a religious upbringing, which is a common experience among many there. Having not been raised in an explicitly theistic religion myself, I occasionally wonder if I missed out on an important rationality skill (even though I rarely think in such terms).

        On the other hand, I was raised to be pro-social-justice, my social circle is overwhelmingly pro-social-justice, and some things in that sphere give me similar negative System 1 reactions that I’ll admit sometimes pattern-match to a Rationalist Crisis of Faith. On a number of occasions there was a little voice in my head saying, “You know that True Reason says the social justice movement is wrong, and regardless of what justifications you may be able to come up with for it, the actual causal reason why you identify with it is nothing more than conformity. If you remain pro-social-justice then you lack the courage to defy an irrational society and have failed as a truth seeker.”

        I’m pretty sure I never had these thoughts prior to reading the How to Actually Change Your Mind sequence, so I guess I know where to pin the blame. A realization I’ve only recently had is that there’s a different voice countering that one: “That other voice is trying to seduce you to the Dark Side. You know perfectly well what that looks like and what the results are. Surely you’re not going to make such an obvious idiot mistake as voluntarily choosing to go over to the Dark Side?”

        The problem with that voice is that it’s capable of speaking up on behalf of any belief system, even if that belief system is completely bogus. So you have to ask yourself, “What exactly makes the Light Side better than the Dark Side, other than conformity and ingroup-vs.-outgroup considerations?” In the case of social justice, it’s a huge amount of anecdotal evidence—so huge that even if what I’ve seen isn’t representative of general human experience, it’s still big enough to justify social justice on consequentialist grounds (which is also why I reject the idea of just not caring). I don’t enjoy dealing with social justice, but sometimes ethics tells us to do things we don’t enjoy doing.

        This has gotten the first voice to shut up for the moment. My next project is to figure out a correct rationalist approach to social justice, so that the voice never has anything to complain about again—and so that I can maybe bring some other rationally-inclined people on board as well.

  25. Anonymous says:

    Can anyone explain the plant made of scissors on top of the bookshelf? Afraid this is a reference I’m not getting.

    • Scott Alexander says:

      I didn’t get it either and had to ask the artist to explain. They told me it was one of many Codex Seraphinianus references – see here

  26. ADifferentAnonymous says:

    Wikipedia: “Project Pigeon was American behaviorist B.F. Skinner’s attempt to develop a pigeon-guided missile.”

    Liberal Crime Squad: “A satirical open-source text-based modern strategic political RPG with bad pickup lines and bad graphics. You play as the founder of a left-wing terrorist cell devoted to saving the United States from its descent into Conservative Insanity through writing to newspapers, volunteering, playing protest music, seduction, flag burning, graffiti, hacking, kidnapping, torture, murder, brainwashing, and other common and uncommon activist activities.” I highly recommend it.

    • Multiheaded says:

      Hell yeah, I like LCS.

      Hint: hacking is the easiest source of passive income and can be levelled from nothing just by practicing it. Also, go on stealing/lockpicking sprees in Industrial apartments first, then University ones, then Downtown ones once you can handle the locks without the noise of a crowbar.

  27. Tom Hunt says:

    A vague speculation which has been bugging me for a while:

    We know that obesity is negatively correlated with IQ (see e.g. here). This really just makes it like most things, in that higher IQ is correlated with better outcomes most everywhere. But I’m curious if the causal mechanism here might be more direct. There’s some speculation that the recent steep increases in obesity have to do with diets changing their ratio of carbohydrates to protein/fat, with more carbohydrates notionally causing greater risk of obesity, and I’ve seen quite a few recommendations for low-carb diets as being instrumental in losing weight.

    For quite some time, whenever hearing about low-carb diets I kind of vaguely thought “Are these people insane?” This is because my experience is that, if I don’t have sufficient carbohydrates, I become wildly hungry and begin frantically tearing apart the environs looking for bread. This happens regardless of how much protein/fat I’ve been eating, and my immediate family have similar experiences.

    So, the speculation: the brain is, we know, an enormous metabolic load. Is this load higher in higher-IQ people? If so, then maybe people with high IQ are simply metabolically more suited to higher-carb diets, which have apparently become more common over the past few decades. This would mean something like: energy burned in the muscles wants to be recouped with protein/fat, while energy burned in the brain wants to be recouped with carbohydrates. I have no idea how plausible this is or isn’t biologically, having no relevant qualifications or study. But it seemed kind of interesting.

    • Nornagest says:

      As a data point, I don’t have the response to low carbs that you describe — I do get hungry, but a salad and a pound of meat or fish will satiate me just fine. I’m also a pretty skinny guy, and don’t watch my diet much aside from staying away from preserved meats, soda, and candy.

      It’s plausible to me that smarter brains have higher metabolic demands, and I remember reading somewhere that brains run entirely on glucose — which is one, but isn’t the only, breakdown product of dietary carbohydrates. I momentarily suspected that the higher obesity rates in the US might be due to HFCS displacing sucrose in many applications, thus diverting carbohydrate resources from the brain [which can’t metabolize fructose] to the body [which can], but it turns out that the saccharide balance of HFCS is only a few percent off that of table sugar once it’s gone through sucrase metabolism. Too bad. It was an elegant theory.

      • aguycalledjohn says:

        Tossing in another anecdote/data point: I find carbs like bread/rice/pasta don’t satiate me nearly as much as proteins, and have pretty much cut them out of my diet. But I do crave chocolate.

        Even more anecdotally: I feel more awake/mentally functional after I’ve eaten protein, if I only eat carby things for breakfast I tend to feel less so. But this is massively confounded with a bunch of things, and indeed the correlation may be opposite since protein normally requires waking up and making something.

        *Does the inferring from single example dance*

    • Anonymous says:

      My gut instinct (as a neuro grad student) is I doubt it would make enough of a difference to matter.

      However, it is true that the brain, unlike the rest of the body, specifically needs sugar to survive (and not just calories in-general). That’s why the body has to go into ketosis if you cut out carbs completely – to feed the brain.

      The paleo-people claim that carb cravings are somehow a result of insulin resistance, which is a result of a high carb diet.

      (I do fine on a low carb diet, WAIS score 143)

    • nydwracu says:

      Anecdotally, I want carbs for thinking-work.

  28. Princess_Stargirl says:

    Robin Hanson has taught me that bragging about donating is the most moral thing to do. So I will mention I gave $25.

    Also idk why Ialdabaoth talks about being so low in SMPV. He looks reasonably cute in the picture!

    Also I laughed out loud at the paperclip maximizer.

    • Nornagest says:

      I’m kind of tempted to donate a hundred pounds of paperclips to MIRI now.

      • g says:

        I think it would be amusing to send them (anonymously) an envelope containing just one paperclip. And then, a week later, two. A week later, four. A week later, eight.

        Of course, after a little while it would get awfully inconvenient and/or expensive. But one might hope that before that point, someone at MIRI would get just slightly freaked out.

        • Matthew says:

          apropos of nothing

          A friend of mine once took some stamps and a permanent marker and mailed a banana to his girlfriend. The USPS clamped down on that sort of thing after 9/11, though.

        • Andy says:

          I enthusiastically second this. Would help if someone who had friends in the MIRI office could record reactions without giving the game away.

          • Multiheaded says:

            Uh, Will Newsome used to work with them. And Eliezer moderated SL4, and is likely involved in some secret SL4-like treehouse for high-level people only. They must all be very well-used to high-brow trolling and weird shit.

    • Elissa says:

      Thanks! and thanks very much to everyone else who has donated etc.

    • papermachine says:

      Not that it helps anything, but he lands pretty squarely in my desired physical type.

    • Adam Casey says:

      Clearly the most moral thing is praising others for bragging about donating:

      Well done, my good chap. Your bragging was praiseworthy. You have many desirable traits and ought to have high status.

    • AspiringRationalist says:

      Donated $25 to who/what? From context you appear to be referring to MIRI, though it’s rather ambiguous.

    • Ialdabaoth says:

      Also idk why Ialdabaoth talks about being so low in SMPV. He looks reasonably cute in the picture!

      This week’s events are causing massive amounts of self-reassessment.

      Probability mass is shifting rapidly away from “I am weird and creepy” and towards “I belong in rationalist communities, and as far away from gibbering ape-men as possible”.

    • Berna says:

      If bragging is allowed, then let me add I donated $24. I don’t have words to describe what I think of those HR types.

    • Error says:

      I did the same. I like ialdabaoth’s stuff and I’ve been in a similar rough spot before. Ialdabaoth, if any of the jobs you intend to apply for are at Enpxfcnpr (rot13’d), I may be able to get you in contact with the corresponding hiring manager, instead of leaving it to the whims of the HR resume filters.

      (also, this explains why I suddenly got a linkedin request from a name I didn’t recognize — I mostly remember people’s handles rather than real names, and had no idea who Brent Dill was. Ialdabaoth, just so you’re aware, I don’t actually use that account. I only have it because there was someone I needed to reach and couldn’t find another contact for. My ignoring the link request is a rejection of LI, not you.)

  29. a person says:

    Does anyone know where I can find good fiction writing advice online? I feel like there is a lot of it out there but whenever I try to google for it I just get useless corporate clickbait stuff.

    • Scott Alexander says:

      Some good discussion of a closely related question in the comments here

    • jaimeastorga2000 says:

      Mr. Yudkowsky has a lot of good advice. Unfortunately, it is scattered over many articles and comments. Fortunately, I have made a pastebin of relevant links.

    • Fazathra says:

      I recommend http://absolutewrite.com. The forums are of variable quality, with lots of good advice buried among the crud, but the critique groups are really helpful. Beyond that, if you are writing fantasy, I would recommend Limyaael’s rants, archived here: http://www.forresterlabs.com/limyaael/titlelistall

    • Andy says:

      I also recommend Forward Motion ( http://www.fmwriters.com/zoomfm/ ), a combination of message board and chatroom. I’m mostly on the chatroom side, which is very friendly (but I will not give my name there, to avoid dragging SSC conversations over). Much of the community is centered on self-published work, but there are a fair number of traditionally-published authors there.

    • Susebron says:

      Writing Excuses is apparently good.

    • aguycalledjohn says:

      Relatedly can anyone recommend good advice for writing non-fiction?

      • Wulfrickson says:

        I recommend William Zinsser’s book On Writing Well in the highest possible terms. He not only gives good advice, but illustrates it by quoting passages from nonfiction writers and analyzing what makes them work or not. The one small criticism I have of Zinsser is his endorsement of Strunk and White’s The Elements of Style, about which I share Geoff Pullum’s opinion, but most everything else in his book is so good that I’m willing to overlook that.

        Zinsser is writing for authors of nonfiction, but some of his advice, especially on small-scale matters such as prose style, would generalize to fiction as well. I’ve never tried to write fiction seriously, but I can recommend How Not to Write a Novel for comedic value at the very least.

  30. social justice warlock says:

    HA HA YOU DIDN’T SAY NO RACE OR GENDER IN THE OPEN THREAD! LET THE GATES OF CHAOS BE UNLEASHED!

    I have a request that our resident Hitler lovers could help me with. I’m currently poring through your intellectual genealogy (this is really great, thanks) and trying to get a better grasp on things. Do you know of anybody moderately influential who (1) wrote before 1848, (2) was influenced by de Maistre and Bonald, but (3) was not basically religious in their orientation, other than Comte? Alternatively, could you point me to any Great Reactionary Minds outside the Anglosphere whose main cause was the defense of slavery? (I have some leads via Google etc. but it seemed stupid not to go straight to the source.)

    Thanks!

  31. jaimeastorga2000 says:

    Dr. Alexander, do you have any advice for dealing with internet addiction? How about hoarding?

  32. MK says:

    Scott, how do you manage to be a resident doctor, read so much stuff on the Internet that your more-frequent-than-monthly links posts are full of interesting and novel stuff, and still produce well-written, researched, long, high-quality posts so frequently?

    • Roman Davis says:

      I want to know, too!

      • Anonymous says:

        I’ve always just assumed that he doesn’t sleep.

        • Nestor says:

          Dosed to the gills on mod**in*l

          • Brian says:

            Is the common consensus that this is safe and effective? i.e., Should I start taking modafinil?

            Oh, what little I would not do to write as much and as well as Scott.

          • Anonymous says:

            The consensus is that it is safe, at least in the short term. No one has been taking modafinil for more than 15 years, so it is hard to assess the long term. For what purposes it is effective is probably best judged by self-experimentation.

          • Princess_Stargirl says:

            Imo, Brian, you should try it. Its reputation online is extremely strong.

            When I ordered some online I shared 10 pills (each) with two friends of mine. Out of the 3 of us, myself and one friend both now take modafinil daily. The life improvement for both of us has been dramatic, though admittedly we both had a history of oversleeping and being tired. I am down from constantly being tired and feeling I need 10 hours of sleep (and still being tired) to 7ish hours per night.

            My friend is so paranoid supply will shut down that he has purchased a 3-year supply (I am too afraid of getting charged with dealing to do this). The stuff is important enough to my life that if the FDA shut down supply I would be willing to start taking bi-annual road trips to Mexico. Though I would actually take a week-long vacation while I was there, as Mexican food is so great!

            I have absolutely no idea on the safety. But I feel like I can finally function.

          • Anonymous says:

            Stargirl, how long have you been taking it? Most people I know who took it daily now take it less often, some two days a week, some sporadically. I think that the main concern is tolerance, but loss of creativity is another.

          • Princess_Stargirl says:

            About a year.

            I have not noticed any loss of creativity whatsoever. Is there evidence loss of creativity is a common side effect?

            If I started seeing too much tolerance I would just up the dosage to maybe 1.3 or 1.5 pills a day. I am sure some amount of tolerance is common, but modafinil is commonly prescribed as an anti-narcolepsy drug. Nothing in the medical literature seems to suggest tolerance is a critical problem.

          • Anonymous says:

            Narcoleptics develop tolerance more slowly, to drugs in general.

  33. jaimeastorga2000 says:

    I find it hilarious that in “Wordsx3,” Scott declared that motte-and-bailey terminology was “a metaphor that only historians of medieval warfare could love” and tried to introduce the term “strategic equivocation” instead, yet here we are, months after the fact, and everyone referencing the technique still consistently sticks to the castle metaphor. This is presumably because it makes it easier to explain that X is the motte and Y is the bailey.

    • Vivificient says:

      Aside from that advantage, Motte and Bailey is a distinct and memorable term, whereas “strategic equivocation” sounds vague and easy to mix up with other fallacies.

      Also, I never fully understood the principle of a motte and bailey before reading that post; I just knew it was an early type of castle. So in a way, the argument metaphor has helped educate me about medieval history instead of history helping me understand the argument metaphor. I don’t think this is a bad thing per se.

    • Anonymous says:

      I hate it. Yes, you can say the motte is X, but I cannot remember which is the motte and which is the bailey. People love it because it is colorful in-group jargon. Colorful could be a good thing, but because the image is obscure, it doesn’t help new people, nor does it help people learn which is which.

      I greatly prefer strategic equivocation.
      Or people could come up with something better if they tried.
      How about good old “bait and switch”?
      It has the disadvantage that people may be tempted to say the bait is X, but it isn’t clear which is the bait.

      • Illuminati Initiate says:

        The problem with strategic equivocation is that it implies people are necessarily doing it on purpose, which isn’t always the case.

        Edit: There is another problem with motte and bailey which is that the motte and bailey are sometimes actually different people but look the same to many on the outside.

        • Scott’s original post was sort of vague about whether the motte-and-bailey doctrine was generally something people did on purpose. This is part of why I was never 100% on board with the concept; it didn’t seem entirely charitable.

          • Abby says:

            A motte-and-bailey doctrine is not a fallacy or an action. If just one person does what Scott talks about in that post, that’s equivocation, or maybe bait and switch if they are particularly dishonest.

            What makes it a motte and bailey doctrine is when a concept inherently contains such equivocation, so that you can’t talk about it without taking part in the strategic equivocation.

            If you consistently use the bailey meaning others will step in front of you and say “no he means motte”. If you use the motte meaning then nobody will pay attention to you unless they misinterpret you to be talking about the bailey.

            And if you take pains to always explain the difference between the motte and the bailey and clarify which one is meant by each participant in any conversation? Well, first that’s a lot of pain. And the very work of clarifying it is a strong attack on the doctrine, which will not be appreciated by the people who have been using it.

          • The accusation of ‘motte and bailey’ / ‘strategic equivocation’ is usually directed at groups with diverse interests and beliefs, to express frustration at the fact that high-level arguers (and high-level arguments) are giving cover to low-level arguers/arguments that superficially resemble the high-level ones.

            But that means there’s nothing especially ‘strategic’ about a community’s alternation between motte and bailey versions of an argument. The motte arguers are generally upset with the bailey arguers for using terrible reasoning and inoculating people against the motte arguments. The bailey arguers are more likely to have an incentive to sow confusion, but since they’re also usually less self-aware and intelligent than the motte arguers, they’re a lot less likely to understand the situation and be able to deliberately manipulate it.

            You predict “the very work of clarifying it is a strong attack on the doctrine, which will not be appreciated by the people who have been using it”, which is a good example of a falsified prediction of the ‘people are doing this motte-and-bailey thing deliberately’ model; in reality the sophisticated arguers will generally love anything you do to dissociate them from the unsophisticated arguers, while the unsophisticated ones are at least as likely to change their minds in response to distinction-drawing (or simply not understand on any level what you’re saying) as to deny the distinction.

            Sophisticated and unsophisticated transhumanism is as good an example of this as anything. If MIRI’s published statements are the motte, you can say that kneejerk technophobia is the bailey; or you can say that Kurzweilian optimism is the bailey; or you can pick pretty much any other view that is strongly associated with general AI and claim that the group as a whole is ‘strategically equivocating’ between the sophisticated claim and the unsophisticated one. And, indeed, some individuals may benefit from the confusion; but many others within the group do not benefit, and there is little to no deliberate effort to produce the confusion. Which is why almost no one in the community is actively hostile to conceptual distinctions between schools of thought, and many prize such distinctions, contrary to the naive motte-and-bailey prediction.

          • veronica d says:

            +1.

            For example, I think “privilege” is a very important concept — although I don’t want to get into specifically why, cuz race and gender. But there are obviously bad ways and good ways to talk about it. Making it somehow off-limits is also a failure mode.

          • houseboatonstyx says:

            Otoh, quite often I see a motte statement from X replied to by Y with something like “So you support [exaggerated bailey].” This usually derails X into defending zimself by attacking the [exaggerated bailey], which at worst gives Y more ammunition and even at best makes the conversation useless.

            This sort of Y reply seems too frequent to all be legitimate misunderstandings by Ys.

    • Cauê says:

      I’ve had to think of a translation when trying to explain the concept to my friends – our technical equivalents of “motte” and “bailey” are even less common than in English.

      Turns out “castle” and “field” worked just fine.

    • Eli says:

      The human mind loves to visualize, and thus prefers terms conducive to visualizations.

  34. Protagoras says:

    Since this is the open thread, I had a thought recently. The whole Adrian Peterson thing has produced some discussion of the correlations between race and corporal punishment of children, and it seems from the polling data that in America, blacks are most inclined to employ it, and asians least likely to do so (though it was a majority in all groups, sadly). And suffering corporal punishment as a child seems to be correlated with poorer outcomes later in life, including IQ. I wonder how much of the observed differences between racial groups is explained by this. And how much of a benefit it would be to eliminate this pattern of behavior, since it seems to be so harmful and seems to be practiced so widely.

    • Roman Davis says:

      I’ve always been of the opinion that it goes the other way around: someone who makes less / hasn’t absorbed upper-class values is more likely to beat their kids, and also more likely to have a lower IQ and a higher chance of going to jail. Both of the latter are really heritable.

    • Scott Alexander says:

      Goshdarnit, I forgot the usual “no race or gender in the open thread” disclaimer, and within seventeen minutes…

      Okay, you’re grandfathered in this time, and it’s a really interesting and good question, but I’m editing the disclaimer back in.

      • Protagoras says:

        I completely forgot that that was usually a rule.

      • 27chaos says:

        Can we get a SUPER RACE AND GENDER thread every six months or so, please? Race and gender are important, and some other posts on this site talk about them, so excluding them from free discussion entirely seems like a bad idea.

        You could put a disclaimer at the top of the thread saying that most people’s opinions are ones you disagree with. You could also elect some temporary moderators for the event, if you think the trolling would be very high. But I think such an event would be fun and beneficial.

        • Nornagest says:

          I really don’t think we need more race and gender here.

        • coffeespoons says:

          I would really like a gender open thread, because I like talking to the more reasonable anti-feminists here (I am myself a feminist). I don’t really want a race open thread here, mostly because I find the topic stressful personally.

          • Protagoras says:

            I regret starting a race topic, the discussion has been kind of unpleasant. On talking to reasonable anti-feminists, I’ve found there to be some at feministcritics.org.

          • coffeespoons says:

            I want SSC style conversations with anti-feminists though 🙂

          • Multiheaded says:

            High standards; free speech; diverse public forum – unless you are on SSC, choose two. 🙁

          • 27chaos says:

            What if it were a race and culture and gender topic, so we could discuss nongenetic ideas like social and cultural integration, Weber’s Protestant Ethic, Muslim people emigrating to Europe, too? Is it the idea some groups are intrinsically or culturally better than others that you’d find stressful to engage with, or genes stuff and human nature, or the trolls who would appear?

          • coffeespoons says:

            @27chaos – I find talking about what I find stressful about race discussions stressful! I was just expressing a personal preference though – obviously I don’t get to decide what Scott talks about on here. Currently, I just tend to avoid race conversations on here.

        • Emile says:

          Count me as also voting against a race open thread (not that we commenters should get a vote – Scott’s Whim Is Law).

          • Multiheaded says:

            You know who else supports dictatorial and unaccountable control of internet communities? Leftists, that’s who! 😉

        • Anonymous` says:

          Wow, you even picked a nick with “chaos” in it. That’s an impressively brash level of transparency.

        • Multiheaded says:

          I’m also for gender and against race.

          • Deiseach says:

            I don’t know, Multiheaded; I’d be quite willing to impart the superiority of the Irish to all you poor benighted Americans, which Scott can then back up with experiences from his time in The People’s Republic of Cork, the True Capital of Ireland and Apex of Human Civilisation 🙂

    • Tom Hunt says:

      Corporal punishment as a child seems to me to be a supremely unlikely causal agent in later life outcomes. I think I’ll take the same tack as Roman Davis here; by coincidence, at the moment, corporal punishment is considered low-class, and so disproportionately many of the people who experienced it have low-class parents. That’s all the causal link you need.

      • Protagoras says:

        I seem to recall Scott mentioning around these parts some of the research on the long term harm that can result from bullying. That seems similar enough that I wonder why you find this a “supremely unlikely causal agent.”

        • 27chaos says:

          In a vein similar to the Holocaust study’s explanation, maybe only the worst human beings are bullied as children, which is the cause of this result.

          Victim blaming is bad and I don’t think that is literally true as stated, but I do think that there is some sort of selection effect involved related to social skills and life competency, though it wouldn’t explain everything.

          • Auroch says:

            From personal experience, my social skills were average before I started being bullied, and stunted afterwards. I’m fairly confident that was a causal relationship.

            Generalizing from one example, I know, but even if there is a social skills/ life competency relationship to start out with (which is fairly likely; deficits for either of those make a kid an easy target), bullying is going to isolate the kid at the time social skills are naturally developing the fastest.

          • nydwracu says:

            From personal experience, my social skills were average before I started being bullied, and stunted afterwards. I’m fairly confident that was a causal relationship.

            I suspect that there’s a causal relationship between evidence of status from others and performance of status, in addition to the obvious causal relationship between performance of status and others’ assessments of status.

            I also suspect that this only holds to an extent, since evidence of status from others is a lot more likely to pull downward than upward, and performance of high status requires skills that have to be learned over time and a mindset that has to be learned over time, and low status creates a mindset that has to be unlearned. (Anxiety disorders? If you’re low-status, most other people are threats, so you learn to minimize interaction, at least outside environments where you’re sure no one else is a threat — and those are rare. If you escape into a place with few threats, your previous learning persists.)

      • Deiseach says:

        It also depends on what you consider “corporal punishment”. I am sceptical of the “ban smacking/criminalise smacking” campaigns that usually pop up every so often in Ireland because let’s face it: the kind of person who is really worried that one smack across the legs will irreversibly damage little Johnny is probably going to stop smacking anyway whether or not it’s legal, while the kind of person who hits their child hard enough to bruise, bleed or throws them into a wall and breaks their bones is not going to care one way or the other if it’s legal.

        And again, can we please compare “like with like” in these studies? When they match up two families of upper-middle class white professional parents on the same income etc. where in one family little Johnny was smacked and in the other little Tommy wasn’t, and in later life little Johnny is suffering diminished I.Q. etc. by comparison to little Tommy, then I’ll take this kind of thing more seriously.

        Poor and lower-class families have a shit-ton (that’s the technical term we use in social housing provision) of problems to deal with, such that singling out “if you don’t smack your kids, they’ll all end up going to Harvard” is not really that convincing to me on its own.

        • Anonymous` says:

          “Skeptical of the campaigns” seems a bit ambiguous; skepticism about the causal link here makes plenty of sense, but even if no such causal link exists, HITTING PEOPLE IS FREAKING BAD; parents DON’T OWN CHILDREN, etc.

          • HITTING PEOPLE IS FREAKING BAD

            As a pure absolute, this is simply wrong. There are lots of situations where violence is allowed or even required for the purposes of maintaining order and discipline; the police don’t carry guns and clubs just for fun. It’s telling that we still allow agents of the state to exercise violence for the purpose of discipline, but we want to take this right away from parents, especially since the level of violence employed by parents is so much lower than that employed by police and soldiers.

            parents DON’T OWN CHILDREN

            Then who does own them? The law doesn’t allow them to own themselves, for the very good reason that almost all children are incapable of managing themselves responsibly. The actual answer from people who make this argument usually turns out to be that the state owns the children, as the state is the one who dictates to the parents what they are empowered to do. Thus we see yet another case where a “humanitarian” cause has the actual effect of empowering the state at the expense of the parents.

            I’ve barely ever heard an argument against corporal punishment which doesn’t lean heavily on the Worst Argument In The World. We conflate “spanking” with “abuse” or “hitting”, and then proceed to argue against the practice by conjuring the image of children with broken arms and serious injuries, completely ignoring the fact that 99% of all spankings are a swift smack on the butt that leaves no injury more serious than a few seconds of pain.

          • Deiseach says:

            I’m not going to go the road of “We were all beaten black-and-blue in school and it did us no harm!” because that’s not what we’re arguing about.

            Re: the causal link, as I said, if you boil it down to “poor people hitting their children wrecks their children’s chances of going on to win the Nobel Prize” and take nothing else about being poor into consideration, then I don’t think it’s anything more than victim-blaming (the reason your child is stuck with no job is because you slapped him when he was eight for biting his sister, you horrible cruel sadist, so it’s all your fault and society has nothing to do with it).

            Re: the campaigns, they do heavily rely on “If you smack your child, this is directly the equivalent in every way as if you took an iron bar and broke their arm, you horrible cruel sadist!” which doesn’t work because most people were smacked by their parents and even if they respond by stating they don’t intend to smack their kids if they have any, they know that (in general) their parents loved them, meant it as discipline not cruelty, and were not horrible sadists. So the campaigns fall flat on their face.

            So suppose we do get a law that imposes a blanket ban: everyone, parents and childminders and people in the street alike, not so much as raise a hand or a loud voice to a child, or else we’ll take your kids away and put them in care and put you in prison.

            In some cases, what goes on is not ‘reasonable chastisement’, it’s abuse, and yes, great, a tool to protect children. In the vast majority of cases, the people who worry about ‘if I smack Johnny on the legs, is this physical abuse?’ would have given up anyway without legal penalty.

            And the kind of abusive person who does real physical damage is not going to stop and think, in the middle of a full-blown rage, “Oops, if I hit Johnny NOW I might go to prison, not like the last time”.

            From what I’m seeing, abuse consists mostly not of slapping kids, but of neglect: from one parent who I strongly suspect hasn’t even taught her child how to speak (I can’t prove it, but from what I’ve seen of her interactions with her kids over the ten or so years I’ve had dealings with her in local government work, I get the very strong feeling), to parents who don’t bother about what hour of the night the kid is out on the street, have they done their homework, are they fed, to the mother who is a heroin addict shooting up in front of her four-year-old.

            None of these physically beat their child. A self-congratulatory campaign on banning smacking is not going to improve these children’s lives. Now, I’m not against campaigning and trying to use social pressure to change attitudes, but I am leery of groups I’ve never previously heard about (APPROACH) trying to have laws enacted for bans, and not alone bans, but punitive measures for infringing the ban.

          • Anonymous says:

            The actual answer from people who make this argument usually turns out to be that the state owns the children,

            The actual answer is that like everyone else, they own themselves. Children do not, of course, completely own themselves, but ownership need not be binary; they own enough of themselves that you are violating their rights by hitting them.

            If you actually believe that parents own their children, then you have no principled reason to object to even severe beatings, and it’s no longer the Worst Argument in the World because it is correct to judge your principles by what they imply even if they are not always used that way.

          • Mary says:

            “The actual answer is that like everyone else, they own themselves. Children do not, of course, completely own themselves, but ownership need not be binary; they own enough of themselves that you are violating their rights by hitting them.”

            And why is the line drawn there and not at restraining their freedom of motion?

          • “Ownership” is a bad metaphor for the relationship between parents and children. My initial response used it because you had done so, but to say what I actually think I need to switch to a different metaphor.

            Parents and children have an unequal, hierarchical relationship which derives from their fundamental differences in ability, maturity, and intelligence. As such, parents have special responsibilities to their children, but also have powers over their children in ways that don’t apply to relationships between adults. In particular, parents are not just allowed but required to discipline their children. There is such a thing as abuse, which occurs when a parent fails to fulfill their responsibilities to the child. This can be neglect, as Deiseach mentioned above, or it can be in the form of violence which exceeds that which is necessary for the maintenance of discipline.

            But arguments which attempt to argue against spanking on universal moralistic grounds are anti-social and harmful, because they implicitly deny the particular hierarchical relationship between parents and children. If I spank a random person on the street, it’s assault, but arguing that “HITTING PEOPLE IS WRONG LOL” is asinine because it ignores that my child is not a random person on the street, and my relationship to my child gives me both special responsibilities and special powers over them. In the exact same manner, I can send my child to time-out, but if I do that to another adult it’s kidnapping.

          • Jaskologist says:

            Parents and children have an unequal, hierarchical relationship which derives from their fundamental differences in ability, maturity, and intelligence.

            This is one of the nrx critiques of “liberalism” that I find most compelling: the willingness to acknowledge that hierarchy is a real thing. Liberals really do seem to want to deny this altogether (with Marxists attacking it as an inherent evil). But nature loves Infinite Diversity in Infinite Combinations, which means that inequalities are inevitable where unlike meets unlike, and hierarchy is inevitable. Any theory that tries to wish this away is doomed to failure.

          • Illuminati Initiate says:

            This comment is directed mostly at Mai Le Dreapta

            Generally speaking* physical violence is only necessary to stop physical violence or withholding of vital resources. Unless a child is physically assaulting someone, I don’t really see why hitting them would ever be necessary.

            *as a consequentialist there are always exceptions

          • Generally speaking* physical violence is only necessary to stop physical violence or withholding of vital resources.

            Is this a pragmatic or a moral argument? What if spanking is not strictly necessary, but is the most efficient disciplinary method at hand? Or, more realistically, what if it’s the optimum (or at least an optimum) for balancing the tradeoffs of parental patience, parental time, memorability, the child’s personality, and deterrent effect?

            (The only anti-spanking argument which is at all convincing to me is the one made upthread to the effect that reward-only discipline is more effective than reward-and-punishment discipline, but most anti-spanking advocates argue on fallacious moral grounds, instead.)

          • Illuminati Initiate says:

            @Mai

            It is both? It seems that generally responding with violence to situations other than violence and withholding of vital resources leads to less net utility than alternate methods would have.

          • houseboatonstyx says:

            In some incidents, there’s a middle way. If a child puts a clothespin on a kitten’s tail, putting a clothespin on the child’s ear to demonstrate what effect that has, seems a reasonable use of force. Same with other kinds of bullying, grabbing toys, etc.

          • Nornagest says:

            It is both? It seems that generally responding with violence to situations other than violence and withholding of vital resources leads to less net utility than alternate methods would have.

            I don’t know. I’m hardly a neoreactionary, but one of the neoreactionary talking points I find more compelling is that e.g. Singapore’s use of corporal punishment might be a saner deterrent than what we’ve got in the West. It’s got the right incentive structure: it happens all at once, so it’s nicely salient; it’s obviously aversive but in a direct and fairly circumscribed way; it doesn’t create a sprawling prison industry motivated to expand itself; it doesn’t create a prison culture; and, not to be underestimated, it’s cheap. Sure, aggression tends to spur retaliation, but that doesn’t require any spooky moral miasma around violence to explain; it’s basic game theory.

            (On the other hand, Sweden’s system, in which anything even vaguely inhumane gets the enthusiastic reception of RuPaul at a Southern Baptist convention, seems to be working out fairly well for them. So maybe this is less a corporal punishment thing and more that the US justice system is just systemically prone to being sadistically cretinous when it isn’t being cretinously sadistic.)

            The analogy to parenting should be fairly obvious.

          • Emile says:

            I agree with most of what Mai wrote here, but not with Jaskologist’s “inequalities are inevitable where unlike meets unlike, and hierarchy is inevitable” – I don’t think there’s anything inevitable in hierarchy (unless you take a really broad definition that includes pretty much any difference in power).

            Illuminati: so if a parent believes that spanking his child is the best way he has of solving a particular problem with his child (say, not throwing the cat out of the window, which he doesn’t know how to solve with “rewards” because he’s an average joe and not a PhD in frickin’ horse training), do you disagree with him on empirical grounds (i.e. a strategy consisting solely of “not spanking the child” will work better to solve that problem), or on moral ones (even if spanking works, he shouldn’t do it because it’s Evil)?

          • Multiheaded says:

            Emile: morality is really just TDT, so I feel rather justified in saying “both” to that last question.

            Nornagest: that Singapore actually substitutes corporal punishment for imprisonment is a total myth! (In reality, caning as the sole punishment is apparently only used on schoolboys.)

            No advanced nation actually practices the kind of thing that the people enthusiastic about corporal punishment talk about.

            My guess is that Foucault (gasp!) was basically right as to why this is the case: the Enlightenment state had to shift away from (spectacular, gratuitous) “punishment” to (internalized, all-pervasive) “discipline” to get more order and loyalty out of the population with less effort. So basically the reactionary model of punishment went extinct through being outcompeted on its own terms.

            Mai:

            Or, more realistically, what if it’s the optimum (or at least an optimum) for balancing the tradeoffs of parental patience, parental time, memorability, the child’s personality, and deterrent effect?

            All else aside, we really want to encourage a culture of patience and self-sacrifice for parents. Plus, as has already been said, the “irrational”/”sentimental” aversion to the brutalizing effect of punishment upon the punisher has a very important rational core: it reinforces the punisher. There are likely few things worse to reinforce than physical violence against people in one’s care!

            To all reactionaries: what if your beloved biodeterminism did happen to be the most important factor in parenting, and 90% of disciplining or spoiling one’s children (not counting outliers) really made vanishingly little difference in the long run? Wouldn’t it then make sense for all parents to be as liberal as possible all the time, being utterly unable to fix much anything about their children?

          • Nornagest says:

            @Multi — So much for my hopes for a natural experiment. I’ll need to look into this in more detail, but I still think it’s an interesting possibility.

          • Multiheaded says:

            Nornagest: before you theorize in isolation, at least read the summary of Foucault’s ideas on punishment. He is deeply original and spares neither the ancien regime nor the post-Enlightenment system.

          • Illuminati Initiate says:

            Putting clothespins on a cat and throwing cats out windows IS physical violence.

            But actually I may have been a bit too blanket in my statements. But spanking children for things like general “disobedience” is still probably bad. Unless they are causing severe enough harm to others (“others” here includes cats), physical punishment is probably not effective enough to justify it.

          • houseboatonstyx says:

            Unless they are causing severe enough harm to others (“others” here includes cats), physical punishment is probably not effective enough to justify it.

            Moving the clothespin from the cat to the child’s ear could be seen not as punishment but as giving information about the effect of being pinched by a clothespin.

            The information value “This is what your action feels like to the victim” would be lost if pain-by-force were the punishment for other kinds of misbehavior also.

          • I’m not responding further to this thread because I’m reading the SparkNotes on Foucault, and I’ll probably fork the thread after that.

            (First reax: like a lot of deconstructionist stuff, it combines really excellent analysis with really bizarre normative conclusions.)

    • Lesser Bull says:

      Consider that you may have the causal arrow backwards.

      • Protagoras says:

        I considered that, of course, but it seemed likely that there was causal influence in the direction from corporal punishment to worse outcomes, both because there has actually been a lot of study of this (so I expect that researchers have considered at least some confounders) and because there’s a general pattern across lots of areas of research showing that doing nasty things to people makes them worse off.

      • DavidS says:

        I have read no relevant studies, but I will point out there are at least two plausible causal patterns besides corporal punishment leads to worse child behavior/intelligence. The first one, which I imagine is what Lesser Bull is getting at, is that worse behaved/less intelligent children may get punished more often and more harshly. (Either because their behavior is worse, or because they get caught.) If the cause of their poor behavior or low intelligence remains into adulthood, then you have a correlation between childhood punishment and poor adult outcomes.

        The other one is the one that drives me nuts whenever I try to research any question about child raising methods. I strongly suspect that paying attention to the professional consensus about good child rearing methods is correlated with being a good parent. Not necessarily because the experts are right, but because conscientious parents who think through their strategy tend to be good parents. I suspect that a study on people raised in the 1950’s would show that parents who warn their children about homosexuality have healthier children, and a study on people raised in the 1900’s would show that people who baptize their children have healthier children.

    • Viliam Búr says:

      I am reading a book “Don’t Shoot the Dog” about animal training (with a lot of thoughts about how it would apply to humans), and the author claims that training using rewards only (when done correctly) is more effective than training using rewards and punishments.

      The author says that the effects of punishments are usually unpredictable (as I understand it, if you punish someone for doing X in situation Y, the result may be avoiding doing X, or avoiding the situation Y, or avoiding you). Also punishments are often wrongly timed; for example, if you punish your child for getting bad grades, the period of greatest fear (when the child comes home from school with the bad grades) happens to be the period when nothing can be done anymore; so all that negative emotion is wasted.

      I haven’t finished the book yet, but I keep thinking about what a human society would look like if we used all the methods we have already experimentally proved with animals. I mean, I wouldn’t mind someone training me to do something using the rewards-only method, especially if it means that I don’t have to do anything other than what feels good at the moment.

      • taelor says:

        I am reading a book “Don’t Shoot the Dog” about animal training (with a lot of thoughts about how it would apply to humans), and the author claims that training using rewards only (when done correctly) is more effective than training using rewards and punishments.

        This is one of the main points of Skinnerian Behaviorism.

      • Deiseach says:

        especially if it means that I don’t have to do anything other than what feels good at the moment

        Yeah, but the trouble is, we need to be trained to do stuff that isn’t what feels good at the moment, from “get off the computer/stop texting your friends and study” to “give up chocolate and start walking an extra two miles a day”.

        Reward-only training may work well, but what do you do when there’s a clash between “I am doing something that rewards me now and you want me to stop, do something I don’t like and get a reward later“?

      • fubarobfusco says:

        Also punishments are often wrongly timed; for example, if you punish your child for getting bad grades, the period of greatest fear (when the child comes home from school with the bad grades) happens to be the period when nothing can be done anymore; so all that negative emotion is wasted.

        And even well-timed punishments have a serious problem:

        Punishment often immediately terminates the undesired behavior, if only by distraction: if you hit your kid for scribbling on the wall, she will stop scribbling right now. (She may do something else you don’t like, such as crying or running away; but she’ll stop scribbling.)

        But punishment doesn’t reduce the future incidence of that behavior very much. (She’ll scribble on the wall again when you’re not looking, because scribbling on the wall is fun.)

        And that means that punishment is more positively reinforcing to the punisher (who has the satisfaction of seeing the undesired behavior stop; of being obeyed!) than it is negatively reinforcing to the punished.

        IOW, punishers are mostly wireheading themselves at the expense of the people they punish.

        • Multiheaded says:

          I’ve seen this point made before (on LW, iirc); this is fascinating. And a simple, reductionist alternative to some Freudo-Marxist explanations of hierarchy and sadism that were in vogue like half a century ago.

    • caryatis says:

      It’s a good point, but parenting practices differ globally between different cultures and classes, so you can’t just consider spanking. For instance, not talking to very young children much, or talking to them only to tell them what to do, is associated with worse academic outcomes. I’ve also read that lower-class people have less of a curatorial mindset towards children, giving them more independence and devoting less effort to intentionally shaping how their skills develop.

    • Deiseach says:

      I wonder how much of the observed differences between racial groups is explained by this

      None, unless you have some snazzy reason as to why, up to about thirty years ago, when white people were beating their children as enthusiastically as any other racial grouping, there were still “observable differences” and plenty of people who would happily explain the scientific reasons white people were just naturally smarter/nicer/more monogamous/less criminal/harder-working than black/brown/yellow/red people.

      • Protagoras says:

        You have data on attitudes toward punishment of children, broken down by race, from 30 years ago, and from earlier periods? That would certainly be interesting. I only have anecdata, that 30 years ago my own (white) parents, who weren’t particularly bizarre in most respects (they certainly weren’t hippies or anything) weren’t believers in spanking. Obviously the data you can apparently cite would be much more useful. Er, you do have actual data, right?

        • houseboatonstyx says:

          I can think of a place to look for indications, which is objective, quantifiable, dated, and easily sorted by race, sex, age, etc — on the assumption that popular movies and other entertainment usually portray the sympathetic characters as not doing things that most of their target audience find unacceptable. Of course that would be subject to false negatives.

    • Anonymous says:

      I’d like a source for asians not hitting their kids as much as whites. That surprises me.

  35. Vivificient says:

    Wow, that fanart is fantastic! Kudos to whoever made that. I spy a cute picture of a whale that hasn’t died and become a macabre metaphor yet.

    Although the large horned shadow is somewhat worrying. As is the box with paperclips seeping out the corner….

  36. hamiltonianurst says:

    Especially loved the Ozymandias and the not-very-well-boxed paperclip maximizer. Haven’t figured out what the foreground shadow is yet, though.

    • Anonymous says:

      Looks like a minotaur to me.
      Other things: a jackdaw and a sphinx of quartz, the Codex Seraphinianus, and a whale (possibly with cancer?)

      No sign of a squid though

    • Somebody needs to make an annotated version of that fanart.

      • Scott says:

        Here’s an annotated version I whipped up, though I’m sure I’m missing a lot.

        • GG says:

          I endorse it as 90% accurate.
          One pretty obvious thing marked incorrectly (possibly because I suck at drawing?) and some minor things missed. Believe it or not, the bust was supposed to be recognizable. (I’m guessing it isn’t.)

          • Scott says:

            Is the big black book not the necronomicon? I wasn’t sure on that one. Also not sure if the raven or the miniature buildings were references to something. And no sadly I don’t recognise the bust, though that probably has more to do with my not knowing who it is than the drawing itself.

            EDIT: It’s the slate star codex! Can’t believe it took me an hour to get that.

          • Susebron says:

            Someone elsewhere in the thread thought that the bird was a jackdaw, not a raven. It has a (presumably quartz) sphinx on the cage.

        • Emile says:

          I’d say it’s missing:

          * the brain in the vat

          * all the buildings probably have something to do with Micras/Conworlds (none of the maps look like parts of Micras I recognize…)

          * The jackdaw loves his big sphinx of quartz

          * For the bottom right map, probably something something archipelago

          I don’t know what the deal is with the whale, the polar bear and the skull.

          • GG says:

            I dunno if I should make the definitive list of interpretations, or whether it would ruin the enigma, so to say. The SSC comment section hive-mind did a good job identifying most of the stuff, proving that it is in fact a very formidable and clever hive-mind, not that I doubted it.

          • I think you should publish the definitive list. The enigma seems to have been mostly dissolved by the hive-mind, and it would be cool to know exactly how much of it we got.

          • GG says:

            Here’s a complete list of references in rot13. If you don’t like an idea of art having one, definitive interpretation, if you prefer it ambiguous, don’t read it.

            1. Prageny cbfgre: yrtf bs Bmlznaqvnf. Uvrebtylcuf fcryy bhg “ezff”, juvpu vf abg ubj napvrag Rtlcgvnaf jbhyq jevgr “Enzrfrf” ng nyy, ohg pybfr rabhtu.

            2. Evtugzbfg cbfgre: shaqnzragny qvfgvapgvba orgjrra n znc naq nabgure, fyvtugyl zber ernyvfgvp znc.

            3. N oveq va gur pntr: wnpxqnj ybirf zl gval fcuvak bs znlor dhnegm.

            4. Gval sybjre ba gur gnoyr: uggc://vyzhfrboyvmmneqhf.nygreivfgn.bet/Vzzntvav/obgnavpn/Frensvav.wct Abobql tbg vg, orpnhfr vg’f whfg gbb fznyy cyhf gur qenjvat pbhyq or orggre. N yvggyr infr unf n flzoby sebz gur fnzr fbhepr ba vg.

            5. N fxhyy: qbrfa’g zrna nalguvat FFP-fcrpvsvp, whfg jung lbh jbhyq abeznyyl rkcrpg n fxhyy gb ercerfrag.

            6. N zbqry bs n ohvyqvat ba gur gnoyr: ybbfryl onfrq ba guvf bar: uggc://envxbgu.arg/Fghss/Zvpebangvbaf/mnrqv1.cat

            7. N obk orybj gur gnoyr: n cncrepyvc znkvzvmre fnsryl ybpxrq va n obk jurer vg pna’g unez nalbar. V vzntvar nyy gur cncrepyvcf gung xrrc frrcvat bhg bsgra pbzr va unaql jvgu nyy gung cncre ynlvat nebhaq. N fgvpxre zrnaf “zrzrgvp unmneq”, V pna’g vzntvar jul. Jevgvat ba gur obk fnlf “guvf jnl hc”, ohg gur obk zhfg unir orra ghearq hcfvqr qbja, orpnhfr gur neebj vf cbvagvat qbjajneqf.

            8. Gb gur yrsg bs gur obk, cnegvnyyl bofpherq ol vgrz #9: n zbqry bs gur zbyrphyne fgehpgher bs fbzr betnavp pbzcbhaq. Cnegf bs vg ner uvqqra znxvat vg vzcbffvoyr gb vqragvsl, ohg vg jnf zrnag gb or zbqnsvavy.

            9. Pragre, n thl jnevat wrnaf: Fpbgg Nyrknaqre, gur cngeba fnvag ba Abg Orvat Njshy Ba Gur Vagrearg.

            10. Inevbhf zncf naq n tybor: qba’g ercerfrag nalguvat va cnegvphyne, ohg bar znc fubjf n yvgreny nepuvcryntb.

            11. N funqbj ba gur yrsg: gur fcrpgre bs jul “vg’f ab tb, jul guvatf ner jung gurl ner” nxn Zbybpu nxn engvbanyvfg’f fgrryzna bs Fngna. Gung, be n enaqbz pbj.

            12. Sybbe gvyrq jvgu cragntbaf: na uczbe ersrerapr. Vg vf cbffvoyr gb gvyr n cynar jvgu cragntbaf, rira rdhny cragntbaf, rira rdhny naq snveyl flzzrgevpny cragntbaf, whfg abg *erthyne* barf. N fvzcyr jnl gb qb vg vf gb gvyr lbhe cynar jvgu urkntbaf, erthyne be bgurejvfr, juvpu lbh xabj ubj gb qb. Abj, svther bhg ubj gb oernx qbja n urkntba vagb frireny cragntbaf naq lbh’er qbar. (Gung pna or qbar va ng yrnfg 3 jnlf gung V pna guvax bs, znlor zber.)

            13. N oynpx ynetr bowrpg ba gur yrsg: abg Arpebabzvpba, ab. Vg’f znqr bs fgbar naq unf fbzr jevgvatf naq n fgne ba vg. Vg’f n fyngr fgne pbqrk, ng yrnfg zl zragny vzntr bs vg. Vg jbhyq or zber erpbtavmnoyr vs V qvq n orggre wbo qenjvat vg, V fhccbfr. Guvf vf jung fyngr ybbxf yvxr, sbe gubfr jub qba’g funer zl ybir bs trbybtl: uggc://jjj.jvyyvnzffyngr.pbz/dhneel_fyngr.wct uggc://jjj.jrfgbarf.pbz/fyngr/408o/408o_jrfgbar_fyngr_zngrevny.wct uggcf://jjj.genqrovg.pbz/hfe/fgbpx-cubgbf/cho/9002/949318.wct

            14. Obbxf ner whfg trarevp naq qba’g ercerfrag nal npghny obbxf. Bar unf gjb yrggref sebz Pbqrk Frencuvavnahf ba vg, gubhtu.

            15. Oenva va n wne: V thrff vg pbhyq ercerfrag cflpubybtl, xvaq bs? Ernyyl, gur bayl ernfba V qerj vg, vf orpnhfr jura V jrag gb fpubby gurer hfrq gb or na npghny qvffrpgrq uhzna oenva va n wne va bar bs gur pynffebbzf. V hfrq gb fgner ng vg sbe n ybat gvzr, jbaqrevat jung xvaq bs yvsr vg yvirq naq ubj vg shapgvbarq.

            16. Inevbhf obggyrf gb gur evtug bs gur oenva: zrqvpvar naq qehtf, cerggl frys rkcynangbel. V’z cerggl fher gung’f abg ubj lbh ner fhccbfrq gb fgber gurz, gubhtu.

            17. Cbfg-vg abgrf: fbzrbar jnf gelvat gb qrpvcure gur jevgvat flfgrz bs gur pbqrk qverpgyl orybj.

            18. N fznyy junyr cbfgre: zbeovq zrgncubef. Jvyy vgr znantr gb bhgynfg gur ghzbef gelvat gb xvyy vg, be jvyy vg qvr naq orpbzr sbbq sbe gur perngherf bs gur qrrc? Bayl gvzr jvyy gryy.

            19. Ohvyqvatf: whfg ohvyqvatf bs zl qrfvta, abguvat fcrpvsvp.

            20. Pguhyuh vf cerggl boivbhf. V jvfu gurer jnf n jnl gb ercerfrag Ybirpensg jvgu fbzrguvat yrff onany, ohg abobql jbhyq erpbtavmr Nmngubgu be gur pvgl bs Pryrcunïf vs V qerj vg. Fcrnxvat bs Ybirpensg, Wnfba Gubzcfba’f jro-pbzvp nqncgngvba bs gur Qernz Plpyr fgbevrf vf ernyyl, ernyyl tbbq naq vs lbh yvxr Ybirpensg ng nyy, lbh fubhyq ernq vg: uggc://zbpxzna.pbz/2011/03/21/pryrcunvf-cntr-1/ Ur ernyyl trgf Ybirpensg ba n yriry orlbaq Pguhyuh naq gur yvxr. Vs lbh yvxrq zl qenjvat, lbh jvyy cebonoyl yvxr uvf jbex zber.

            21. Cvbarre cyndhr vf nyfb cerggl boivbhf. Pnhgvba: znl pbagnva Ibyqrzbeg.

            22. Gur ohfg: abobql tbg vg, vg jnf fhccbfrq gb or Ibygnver. Ntnva, vg jbhyq or zber erpbtavmnoyr vs V qvq n orggre wbo ba vg, ohg vg’f ernyyl gval. uggc://jjj.uvfgbevpnycbegenvgf.pbz/NegJbexVzntrf/338.WCT Jul Ibygnver? Ur unfa’g orra zragvbarq nebhaq urer irel bsgra, ohg V guvax ur svgf jryy. Lbh or gur whqtr bs gung.

            23. Rira V qba’g xabj jung’f gur qrny jvgu cbyne ornef. Vs lbh qba’g nfx Fpbgg, lbh’yy arire svaq bhg.

            24. N fpvffbef cynag: lrg nabgure, naq gur zbfg boivbhf Pbqrk Frencuvavnahf ersrerapr. uggc://qernzfnaqqvnyrpgf.jrroyl.pbz/hcybnqf/1/8/4/9/18494674/359591645.wct V’ir orra n sna bs gung obbx sbe n irel, irel ybat gvzr naq jnf ernyyl unccl nobhg na bccbeghavgl gb qb fbzr snaneg bs vg.

          • Scott says:

            Re: 13
            Vg’f qrsvavgryl erpbtavfnoyr nf n fyngr fgne pbqrk. V xarj V unq frra fbzrguvat yvxr vg orsber ba FFP, ohg vg gbbx zr sne gbb ybat gb ernyvmr gung jurer V unq frra vg jnf ba gur fvgr onaare.

            Good job on the artwork!

  37. Elissa says:

    Thanks Scott, you’re a boss. Update: as of Wednesday Brent is in fact fired, no better reason was given, and ADA-related wheels are turning, but I can’t talk much more about that yet. LW thread: http://lesswrong.com/r/discussion/lw/kzq/open_thread_september_1521_2014/bc5i

    Also, whoever made that fanart is awesome and should feel awesome.

    • Scott Alexander says:

      “Thanks Scott, you’re a boss.”

      I hope not, bosses seem terrible and discriminatory.

      • Paul Goodman says:

        At least in this case it seems like the bosses were trying to help but underestimated the perversity of HR.

        • Yeah, I’m baffled by this element of the story. Like, I thought the entire purpose of HR was to act as employee advocates and ensure compliance with various kinds of hiring regulations, etc? I’ve never heard of an HR department maliciously canning an employee for this sort of thing. Weird.

          • Anonymous says:

            HR can be an impartial third party resolving disputes between employees and bosses, which makes them look like employee advocates. But they aren’t – they work for the bosses higher up the line. Often the impartial decision is that the boss is in the wrong but too valuable to the company, so they make the problem disappear.

            Yes, their job is to know employment regulations, in particular, know how to cover their asses when they fire someone. It is common for them to do this for bosses, although usually they don’t contact the employee, but advise the boss, so this aspect of the job isn’t widely known.

            In this case, they may have thought that they received a coded message from the boss, but more likely they didn’t trust the boss’s judgement and thought it better to dispose of ADA employees as soon as possible, and that this seemed like a good opportunity. Which is not to say that they followed the letter of the law, just that they thought it a more defensible action now than later, and it was central in their minds. Lay reaction that this violates the spirit of ADA is correct, but completely missing the point.

          • Deiseach says:

            I don’t know about America, but (at least in Irish experience), Human Resources (and I remember when it used to be called “Personnel”, instead of being nakedly honest that your workers are resources, i.e. strip mine ’em and discard ’em) are NOT employee advocates.

            That’s what you have unions for and why you have unions. Even the HR department where I’m currently working (public service, hurrah!) are a lovely bunch of people but their ultimate duty is to the employer which means if you, the worker, need to be screwed over, these are the people who get lumbered with the job.

            The case in point sounds like one of those grey areas; the college may have the right to dismiss him because of illness likely to interfere with the performance of his work, but on the other hand if SAD can be considered a long-standing illness, he should have rights under equality legislation (I don’t know how that works in America, either).

            At least you would have imagined that he’d get a warning period, etc. but then again, U.S. companies coming to Ireland have been notorious for imposing conditions of employment about ‘no unions here’ (even though you are legally entitled to join a union), so I would not expect American employers to behave better in America itself.

          • Nornagest says:

            Unions for software-type jobs are practically unheard of in the US. Like, on the level of needing to burn two black candles and kill a chicken before you even speak the word “union” in most shops, lest you call the attention of dark powers. It’s usually not forbidden (at least, it wasn’t anywhere I’ve ever worked), it’s just not part of the culture. And to be honest, software skills are enough in demand that it usually doesn’t need to be.

            ADA situations are one place where they would be handy to have around, though.

          • Multiheaded says:

            Like, I thought the entire purpose of HR was to act as employee advocates and ensure compliance with various kinds of hiring regulations, etc?

            HR is by definition concerned with maximizing the sustainable rate of exploitation in the interests of the employer. You might be thinking of unions.

            Unions are supposed to assist the workers against the owners’ demands, because the workers are their power base. HR is supposed to assist the owners against the workers’ demands, because that’s what the owners are paying for. Obvious, no?

            Unions for software-type jobs are practically unheard of in the US. Like, on the level of needing to burn two black candles and kill a chicken before you even speak the word “union” in most shops, lest you call the attention of dark powers. It’s usually not forbidden (at least, it wasn’t anywhere I’ve ever worked), it’s just not part of the culture. And to be honest, software skills are enough in demand that it usually doesn’t need to be.

            This is one of the most disturbing features about the rise of the tech mindset to me. These people, even when legitimately well-intentioned, don’t seem to have the imagination to see outside their position as a “labour aristocracy”.

          • Deiseach says:

            Everybody needs a union. You think employers don’t have unions, even if they call them by different names? Over here we have IBEC (the Irish Business and Employers Confederation), which calls itself “the national umbrella organisation for business and employers in Ireland” but acts pretty much as a union/lobbying group.

            American companies came in here (because we were and are gagging for jobs) with the ‘soft tyranny’ of “But you won’t need a union, we’ll take care of good conditions!”

            Yeah, right. As my secretarial course tutor told us, “Businesses are not charities. Businesses have one purpose: to make money. All an employer cares about is can you make money for him, or will you cost him money?” – and she followed it up with an anecdote about an employer she was trying to place students with, who later fired one of those students because, outside of work, he’d seen her not wearing a raincoat in bad weather. His logic? “She’s not taking care of herself; if she gets sick, I’m down an employee and I make a loss. Better to get rid of her and get someone who won’t cost me extra in the long run”.

            Colleges/universities are no different: in the long run, you the employee/staff/worker are there to make money/not drive up costs. Of course HR are going to pull this stunt, because they’re gambling that the stress, effort and expense of fighting a discrimination/unlawful dismissal case will discourage the guy from going ahead; that he’ll just take the hit and they can go on their way with the same policy in place.

            That’s why everybody – not just the blue-collar workers or the guys sweeping the streets – needs a union to back them up. Nobody is too important to fire (even Steve Jobs got the heave, though they had to go back to him cap in hand in the end). CEOs get fired every day; founders of family firms get squeezed out by big institutional shareholders.

            A common-or-garden IT minion is not going to be considered any more special than the janitor or the women working in the canteen (indeed, that’s why cleaning staff are all ‘contract cleaners’; technically they’re self-employed so they have bugger-all rights if it comes to wages, time off, etc.)

            I am constantly amazed by the wide-eyed American belief in the goodness, rightness, justice and Divine Birth of capitalism, and how – as long as I am not working for one of the Evil Companies we all know are evil, and as long as I’m not one of the unskilled labour knuckledraggers on the shop floor – why, I won’t get screwed over by the Invisible Hand!

            I consider myself centrist-right by European standards, and I’m certainly highly socially conservative, but I don’t trust the boss class!

          • Multiheaded says:

            You think employers don’t have unions, even if they call them by different names?

            “The executive of the modern state is but a committee for managing the common affairs of the whole bourgeoisie.”

            A common-or-garden IT minion is not going to be considered any more special than the janitor or the women working in the canteen (indeed, that’s why cleaning staff are all ‘contract cleaners’; technically they’re self-employed so they have bugger-all rights if it comes to wages, time off, etc.)

            But Flexibility! A lean and mobile workforce for a new age! Get with the program! /sarcasm

          • Deiseach says:

            These people, even when legitimately well-intentioned, don’t seem to have the imagination to see outside their position as a “labour aristocracy”.

            And that position as “labour aristocracy” relies more on scarcity than anything else; with the push for everyone to go to college and be funnelled into white-collar jobs (since manufacturing and heavy industry are dead as far as job growth goes), there will be plenty of smart, qualified young drones coming up to take your job.

            Add in that now you are competing for employment against graduates in South and East Asia (and yes, I do mean “in” and not “from” – outsourcing, remember?), and the time of being top of the pecking order is coming to an end.

            The very, very cream of the crop will probably continue to thrive; the ones underneath that level badly need to wake up and realise that sure, twenty years ago programming was an elite job but this is not twenty years ago any more.

          • Multiheaded says:

            P.S.: apparently, Uber also sucks and is characteristic of the awful hyper-proletarianized near future.

            https://www.jacobinmag.com/2014/09/against-sharing/

            The very, very cream of the crop will probably continue to thrive; the ones underneath that level badly need to wake up and realise that sure, twenty years ago programming was an elite job but this is not twenty years ago any more.

            In the sub-subculture that has formed over the years and that the Tech Right endorses, some people seem to have graduated from temporarily embarrassed millionaires to temporarily embarrassed Neo-Victorian nobility.

            (I darkly wonder what Paul Graham would say about labour and labour struggles – other than noting the fallen demand for skilled labour from a decidedly pro-employer view.)

          • Ariel Ben-Yehuda says:

            It’s not like unions are going to help you against outsourcing.

          • Nornagest says:

            This is one of the most disturbing features about the rise of the tech mindset to me. These people, even when legitimately well-intentioned, don’t seem to have the imagination to see outside their position as a “labour aristocracy”.

            Wow, rude. I know this is a clear undertone of that whole syndicalism thing, but in the future do you think you could refrain from directly calling me an unimaginative dupe being strung along by the delusion that I’m going to found the next Google as soon as I get off my ass? I grew up in a blue-collar town, you know, and I’ve read my share of anarchist literature; I know how this sort of thing is supposed to work.

            Now, I want to preface this by saying that there’s a lot of things that can keep the strict libertarian view of employer-employee relations from working out in practice. Worker organization is by no means a bad thing in principle. But Moloch works on guild interests as much as he works on corporate ones, and that can lead to all manner of potential pitfalls. Frictional costs are, yes, one angle of this, and a particularly acute one in an industry that completely reinvents its best practices every three to eight years. Barriers to entry are another, although in tech that’s largely mediated through academia and that looks unstable enough that I have no idea how it’ll shake out in a decade or two.

            But I think the big one is that talent is scarce in tech, and none of the usual boogeymen — outsourcing, immigration, automation, more people going to college, and so forth — look like they’re poised to reverse this trend. I’ve been watching those factors for the last twelve years, and if anything it’s getting scarcer. (At least outside of some sub-industries like game development, because every other idiot CS grad wants to make games and either doesn’t realize they’re signing up for underpaid 14-hour days or doesn’t care.) That’s not to say that exploitative work conditions don’t happen — I’ve been the victim of them. But they came as symptoms of a sick company, not the self-interested behavior of a healthy one, and worker organization wouldn’t have saved it. The correct move would have been to leave. So enough with the schadenfreude.

            (The most widespread issue with employer relations in tech, as far as I can tell, is that employers are highly [and irrationally] averse to training. App Academy couldn’t exist if companies were willing to simply take the hit and spend two months teaching people how to do rudimentary web development. But this isn’t an issue that a traditional trade union would be motivated to fix — the incentives don’t line up that way.)

          • veronica d says:

            Right. The tech shortage seems to be (and to remain) very real, at least at the higher end.

            Myself, I keep hoping that this will lead the major companies to put real effort into plugging the “leaky pipeline” for women and minorities (lip-service does not count), under the general rubric that fighting over the tech-bros has become rather zero-sum. This might be a naive hope.

          • Re: “leaky pipeline”, at the companies I’ve worked for they hire qualified women and minorities as quickly as they can find them, with the result that women are 20-30% of the workforce and non-whites are often a majority. A certain kind of non-white, of course: mostly Indians and Chinese. And the companies haven’t hired that number of people out of political correctness; rather, they’ll hire absolutely anyone who meets their (stringent) qualifications. So I don’t buy the notion that racism and sexism are keeping the proportion of women and minorities down, since as far as I can tell the proportion of minorities working in tech is a fair representation of the applicant pool.

            (It’s theoretically possible, of course, that the sexist filter lies before the job application – in higher ed, say. But I doubt that, for a whole bunch of reasons which I won’t go into because NO RACE OR GENDER ON THE OPEN THREAD, THAT NEVER HELPS.)

          • Curious: What do you think companies could/should be doing to fix the pipeline problem? (I ask as someone who’s very interested in this problem myself.)

          • veronica d says:

            Oops. Forgot about the race/gender thing. Let’s nix this now.

            Sorry.

      • Ialdabaoth says:

        Hello Scott! Now that I’ve met you in person, I have a question.

        Your body language when we first talked seemed to indicate that you preferred to avoid eye contact while talking, except for emphasis. I’ve so far attempted to respect that; but since I didn’t explicitly ask, I’m uncertain if I was correct.

  38. Multiheaded says:

    I really cracked up at the Ozymandias!