OT93: Giant Threadwood

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. Jeremiah is running a free SSC podcast. It’s already got all the latest articles, and is gradually catching up on some of the older ones. Get it from Stitcher, LibSyn, or iTunes.

2. People who answered the Patreon-related questions on the survey overwhelmingly preferred that it be switched to per-month rather than per-post (I can present this in more depth later). I’ve changed it, but this has decreased my blog-related income by ~80% (everyone’s old per-post donation is now applied only once per month, instead of once for each of the five to ten posts per month). If you’re a Patreon supporter, I would really appreciate it if you went to the Patreon site and adjusted your donation accordingly.

I’ve previously downplayed this, saying that Patreon donations wouldn’t change my output, but this is no longer entirely true – I’m more able to choose how many hours I work now, and support levels might make me shift some marginal hours from work to blogging. Note that I have enough money and you definitely should not donate if you are at all financially strapped or funging against useful charities.

3. Comment of the week is Actinide Meta giving an update on the Paul Marik sepsis study – basically, the US version has gotten hopelessly bogged down in cost overruns, so Actinide wants to help run a cheaper faster version in South Africa. They’re seeking “one or more people with either clinical trials or critical care experience who are willing to take some time to read a proposal and ask questions”.

4. David Friedman is holding another South Bay SSC meetup on Saturday, January 27th, starting at 3 P.M. Location is the usual: 3806 Williams Rd, San Jose, CA.


Self-Serving Bias

Alex Tabarrok beat me to the essay on Oregon’s self-service gas laws that I wanted to write.

Oregon is one of two US states that ban self-service gas stations. Recently, it passed a law relaxing this restriction – self-service is now permissible in some rural counties during odd hours of the night. Outraged Oregonians took to social media to protest that self-service was unsafe, that it would destroy jobs, that breathing in gas fumes would kill people, and that gas pumping had to be performed by properly credentialed experts – seemingly unaware that most of the rest of the country and the world does it without a second thought.

…well, sort of. All the posts I’ve seen about it show the same three Facebook comments. So at least three Oregonians are outraged. I don’t know about the rest.

But whether it’s true or not, it sure makes a great metaphor. Tabarrok plays it for all it’s worth:

Most of the rest of the America–where people pump their own gas everyday without a second thought–is having a good laugh at Oregon’s expense. But I am not here to laugh because in every state but one where you can pump your own gas you can’t open a barbershop without a license. A license to cut hair! Ridiculous. I hope people in Alabama are laughing at the rest of America. Or how about a license to be a manicurist? Go ahead Connecticut, laugh at the other states while you get your nails done. Buy contact lens without a prescription? You have the right to smirk British Columbia!

All of the Oregonian complaints about non-professionals pumping gas–“only qualified people should perform this service”, “it’s dangerous” and “what about the jobs”–are familiar from every other state, only applied to different services.

Since reading Tabarrok’s post, I’ve been trying to think of more examples of this sort of thing, especially in medicine. There are way too many discrepancies in approved medications between countries to discuss every one of them, but did you know melatonin is banned in most of Europe? (Europeans: did you know melatonin is sold like candy in the United States?) Did you know most European countries have no such thing as “medical school”, but just have college students major in medicine, and then become doctors once they graduate from college? (Europeans: did you know Americans have to major in some random subject in college, and then go to a separate place called “medical school” for four years to even start learning medicine?) Did you know that in Puerto Rico, you can just walk into a pharmacy and get any non-scheduled drug you want without a doctor’s prescription? (source: my father; I have never heard anyone else talk about this, and nobody else even seems to think it is interesting enough to be worth noting).

And I want to mock the people who are doing this the “wrong” way – but can I really be sure? If each of these things decreased the death rate 1%, maybe it would be worth it. But since nobody notices 1% differences in death rates unless they do really good studies, it would just look like some state banning things for no reason, and everyone else laughing at them.

Actually, how sure are we that Oregon was wrong to ban self-service gas stations? How do disabled people pump their gas in most of the country? And is there some kind of negative effect from breathing in gas fumes? I have never looked into any of this.

Maybe the real lesson of Oregon is to demonstrate a sort of adjustment to prevailing conditions. There’s an old saying: “Everyone driving faster than you is a maniac; anyone driving slower than you is a moron”. In the same way, no matter what the current level of regulation is, removing any regulation will feel like inviting catastrophe, and adding any regulation will feel like choking on red tape.

Except it’s broader than regulation. Scientific American recently ran an article on how some far-off tribes barely talk to their children at all. New York Times recently claimed that “in the early 20th century, some doctors considered intellectual stimulation so detrimental to infants that they routinely advised young mothers to avoid it”. And our own age’s prevailing wisdom of “make sure your baby has listened to all Beethoven symphonies by age 3 months or she’ll never get into college” is based on equally flimsy evidence, yet somehow it still feels important to me. If I don’t make my kids listen to Beethoven, it will feel like some risky act of defiance; if I don’t take the early 20th century advice to avoid overstimulating them, it will feel more like I’m dismissing people who have been rightly tossed on the dungheap of history.

And then there’s the bit from the recent discussion of Madness and Civilization about how 18th century doctors thought hot drinks would destroy masculinity and ruin society. Nothing that’s happened since has really disproved this – indeed, a graph of hot drink consumption, decline of masculinity, and ruinedness of society would probably show a pretty high correlation – it’s just somehow gotten tossed in the bin marked “ridiculous” instead of the bin marked “things we have to worry about”.

So maybe the scary thing about Oregon is how strongly we rely on intuitions about absurdity. If something doesn’t immediately strike us as absurd, then we have to go through the same plodding motions of debate that we do with everything else – and over short time scales, debate is interminable and doesn’t work. Having a notion strike us as absurd short-circuits that and gets the job done – but the Oregon/everyone-else divide shows that intuitions about absurdity are artificial and don’t even survive state borders, let alone genuinely different cultures and value systems.

And maybe this is scarier than usual because I just read Should Schools Ban Kids From Having Best Friends? I assume this is horrendously exaggerated and taken out of context and all the usual things that we’ve learned to expect from news stories, but it got me thinking. Right now enough people are outraged at this idea that I assume it’ll be hard for it to spread too far – and even if it does spread, we can at least feel okay knowing that parents and mentors and other people in society will maintain a belief in friendship and correct kids if schools go wrong. But what if it catches on? What if, twenty years from now, the idea of banning kids from having best friends has stopped generating an intuition of absurdity? Then if we want kids to still be allowed to have best friends, we’re going to have to (God help us) debate it. Have you seen the way our society debates things?

And I know some people see this and say it proves rational debate is useless and we should stop worrying about it. But trusting whatever irrational forces determine what sounds absurd or not doesn’t sound so attractive either. I think about it, and I want to encourage people to be really, really good at rational debate, just in case something terrible loses its protective coating of absurdity, or something absolutely necessary gains it, and our ability to actually judge whether things are good or bad and convince other people of it is all that stands between us and disaster.

And, uh, maybe the people who say kids shouldn’t be allowed to have best friends are right. I admit they’ve thought about this a lot longer than I have. My problem isn’t that someone thinks this. It’s that so much – even the legitimacy of friendship itself – can now depend on our culture’s explicit rationality. And our culture’s explicit rationality is so bad. And that the only alternative to dragging everything before the court of explicit rationality is some version of Chesterton’s Fence, ie the very heuristic telling Oregonians to defend full-service gas stations to the death. There is no royal road.

Maybe this is a good time to get on our chronophones with Oregon (or more prosaically, use the Outside View). Figure out what cognitive strategies you would recommend to an Oregonian trying to evaluate self-service gas stations. Then try to use those same strategies yourself. And try to imagine the level of careful thinking and willingness to question the status quo it would take to make an Oregonian get the right answer here, and be skeptical of any conclusions you’ve arrived at with any less.


Fight Me, Psychologists: Birth Order Effects Exist And Are Very Strong

“Birth order” refers to whether a child is the oldest, second-oldest, youngest, etc. in their family. For a while, pop psychologists created a whole industry around telling people how their birth order affected their personality: oldest children are more conservative, youngest children are more creative, etc.

Then people got around to actually studying it and couldn’t find any of that. Wikipedia’s birth order article says:

Claims that birth order affects human psychology are prevalent in family literature, but studies find such effects to be vanishingly small….the largest multi-study research suggests zero or near-zero effects. Birth-order theory has the characteristics of a zombie theory, as despite disconfirmation, it continues to have a strong presence in pop psychology and popular culture.

I ought to be totally in favor of getting this debunked. After all, the replication crisis in psychology highlights the need to remain skeptical of poorly-supported theories. And some of the seminal work disproving birth order was done by Judith Rich Harris, an intellectual hero of mine who profoundly shaped my worldview with her book The Nurture Assumption.

So I regret to have to inform you that birth order effects are totally a real thing.

I first started thinking this at transhumanist meetups, when it would occasionally come up that everyone there was an oldest child. The pattern was noticeable enough that I included questions about birth order on the latest SSC survey. This blog deals with a lot of issues around transhumanism, futurology, rationality, et cetera, so I thought it would attract the same kind of people.

7,248 people gave me enough information to calculate their birth order, but I am very paranoid, because previous studies have gone wrong by failing to account for family size. That is, people of certain economic classes/religions/races/whatever tend to have larger families, and if you’re in a large family, you’re more likely to be a later-born child. To be absolutely sure I wasn’t making this mistake, I concentrated on within-family-size analyses. For example, there were 2965 respondents with exactly one sibling…

…and a full 2118 of those were the older of the two. That’s 71.4%. p ≤ 0.00000001.

The same effect occurs in sibships of other sizes. In families with three children (n = 1884), 56.8% of respondents are the oldest, compared to a predicted 33%. In families with four children (n = 765), 48.2% are the oldest, compared to a predicted 25%.
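For anyone who wants to check the arithmetic, these p-values can be approximated in a few lines (a normal approximation to the binomial; the counts are from the survey, the helper function and its name are mine):

```python
from math import erfc, sqrt

def firstborn_excess_pvalue(n, firstborns, k_children):
    """One-sided p-value for seeing `firstborns` oldest children among
    `n` respondents from families of `k_children`, under the null that
    every birth-order position is equally likely (normal approximation)."""
    p0 = 1 / k_children                    # expected share of firstborns
    se = sqrt(p0 * (1 - p0) / n)           # standard error of a proportion
    z = (firstborns / n - p0) / se         # standardized excess
    return 0.5 * erfc(z / sqrt(2))         # upper-tail normal probability

# Survey counts quoted above
print(firstborn_excess_pvalue(2965, 2118, 2))                 # sibships of two
print(firstborn_excess_pvalue(1884, round(0.568 * 1884), 3))  # three children
print(firstborn_excess_pvalue(765, round(0.482 * 765), 4))    # four children
```

All three come out many orders of magnitude below the p ≤ 0.00000001 quoted above.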

[Figure: Number of responses by birth order in sibships of different sizes. Graph by Emile, using the public data only, so exact numbers may be slightly different.]

This effect reaches the same scale as other effects people consider important. For example, the survey population drew heavily from STEM fields and was predictably very white; however, the birth order gap was larger in magnitude than the racial gap. It is bigger than gender gaps in some fields traditionally considered to have major gender gaps, like undergraduate economics. This can fairly be considered a large effect.

So what is going on here?

It’s unlikely that age alone is driving these results. In sibships of two, older siblings on average were only about one year older than younger siblings. That can’t explain why one group reads this blog so much more often than the other.

None of the traditional pop psychology claims about birth order seem to hold up. I didn’t find any effect on anything that could reasonably be considered conservatism or rebelliousness.

But there is at least one reputable study that did find a few personality differences. This is Rohrer et al (2015), which examined a battery of personality traits and found birth order effects only on IQ and Openness to Experience, both very small.

I was only partly able to replicate this work. Rohrer et al found that eldest siblings had an advantage of about 1.5 IQ points. My study found the same: 1.3 to 1.7 IQ points depending on family size – but this did not reach significance. My other measure of intelligence was the SAT, but SATs have been renormed and changed so many times over the past few decades that making apples-to-apples comparisons was really tough. I was able to get only a couple of weak and inconsistent effects: in sibships of two, eldest children had a slightly higher SAT1600 (1481 vs. 1458, p = 0.002) but not SAT2400; in sibships of 3+, eldest children had a slightly higher SAT2400 (2248 vs. 2214, p = 0.03), but not SAT1600. Overall this seems way too weak to say anything with certainty. Average SATs and IQs were already around the 99th percentile, so there may have been too much of a selection effect / ceiling effect to get good results.

The Openness results were clearer. Eldest children had significantly higher Openness (73rd %ile vs. 69th %ile, p = 0.001). Like Rohrer, I found no difference in any of the other Big Five traits.

Because I only had one blunt measure of Openness, I couldn’t do as detailed an analysis as Rohrer’s team. But they went on to subdivide Openness into two subcomponents, Intellect and Imagination, and found birth order only affected Intellect. They sort of blew Intellect off as just “self-estimated IQ”, but I don’t think this is right. Looking at it more broadly, it seems to be a measure of intellectual curiosity – for example, one of the questions they asked was, “I am someone who is eager for knowledge”. Educational Testing Service describes it as “liking complex problems”, and its opposite as “avoiding philosophical discussion”.

This seems promising. If older siblings were more likely to enjoy complex philosophical discussion, that would help explain why they are so much more likely to read a blog about science and current events. Unfortunately, the scale is completely wrong. Rohrer et al’s effects are tiny – going from a firstborn to a secondborn has an effect size of 0.1 SD on Intellect. In order to contain 71.4% firstborns, this blog would have to select for people above the 99.99999999th percentile in Intellect. There are only 0.8 people at that level in the world, so no existing group is that heavily selected.

I think the most likely explanation is that tests for Openness have limited validity, which makes the correlation look smaller than it really is. If being an eldest sibling increases true underlying Openness by a lot, but your score on psychometric tests for Openness only correlates modestly with true underlying Openness, that would look like being an eldest sibling only increasing test-measured-Openness a little bit.

(cf. Riemann and Kandler (2010), which finds that the heritability of Openness shoots way up if you do a better job assessing it)
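This attenuation argument is just classical measurement theory: if a test score is true Openness plus independent noise, a standardized group difference shrinks by the square root of the test’s reliability. A minimal sketch (the reliability value of 0.04 is purely illustrative, picked so that a 0.5 SD true effect shows up as the 0.1 SD Rohrer et al observed):

```python
from math import sqrt

def observed_effect(true_effect_sd, reliability):
    """Standardized mean difference seen through a noisy test.
    If observed = sqrt(rel) * true + sqrt(1 - rel) * noise (unit variances),
    the observed score still has variance 1, but the group gap shrinks
    to sqrt(rel) times the true gap."""
    return true_effect_sd * sqrt(reliability)

# Illustrative only: a big true effect plus a weakly valid measure
# reproduces a small observed effect.
print(observed_effect(0.5, 0.04))   # ≈ 0.1
```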

If we suppose that birth order has a moderate effect size on intellectual curiosity of 0.5 SD, that would imply that science blogs select for people in the top 3% or so of intellectual curiosity, a much more reasonable number. Positing higher (but still within the range of plausibility) effect sizes would decrease the necessary filtering even further.
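The back-of-the-envelope selection model can be written out explicitly. The assumptions here are all mine: intellectual curiosity is normally distributed, firstborns and laterborns are equally common in the population and sit half an effect size above and below the overall mean, and readership is a hard cutoff. The exact percentiles move around with these assumptions, but the contrast between a 0.1 SD and a 0.5 SD effect is stark either way:

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(X > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

def firstborn_share(cutoff, effect_sd):
    """Fraction of people above `cutoff` who are firstborn, with
    firstborns at +effect/2 and laterborns at -effect/2 (in SDs)."""
    first = upper_tail(cutoff - effect_sd / 2)
    later = upper_tail(cutoff + effect_sd / 2)
    return first / (first + later)

def cutoff_for_share(target, effect_sd):
    """Bisect for the cutoff where the firstborn share hits `target`
    (the share rises monotonically with the cutoff)."""
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if firstborn_share(mid, effect_sd) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for effect in (0.1, 0.5):
    z = cutoff_for_share(0.714, effect)
    # Fraction of the whole population above the cutoff
    selected = (upper_tail(z - effect / 2) + upper_tail(z + effect / 2)) / 2
    print(effect, selected)
```

With a 0.1 SD effect, the required tail is vanishingly small – no real community is filtered that hard. With 0.5 SD, it’s a single-digit percentage of the population, which a niche science blog could plausibly achieve.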

If this is right, it suggests Rohrer et al undersold their conclusion. Their bottom line was something like “birth order effects may exist for a few traits, but are too small to matter”. I agree they may only exist for a few traits, but they can be strong enough to skew ratios in some heavily-selected communities like this one.

When I asked around about this, a couple of people brought up further evidence. Liam Clegg pointed out that philosophy professor Michael Sandel asks his students to raise their hand if they’re the oldest in their family, and usually gets about 80% of the class. And Julia Rohrer herself was kind enough to add her voice and say that:

I’m not up to fight you because I think you might be onto something real here. Just to throw in my own anecdotal data: The topic of birth order effect comes up quite frequently when I chat with people in academic contexts, and more often than not (~80% of the time), the other person turns out to be firstborn. Of course, this could be biased by firstborns being more comfortable bringing up the topic given that they’re supposedly smarter, and it’s only anecdotes. Nonetheless, it sometimes makes me wonder whether we are missing something about the whole birth order story.

But why would eldest siblings have more intellectual curiosity? There are many good just-so stories, like parents having more time to read to them as children. But these demand strong effects of parenting on children’s later life outcomes, of exactly the sort that behavioral genetic studies consistently find not to exist. An alternate hypothesis could bring in weird immune stuff, like that thing where people with more older brothers are more likely to be gay because of maternal immunoreactivity to the Y chromosome (which my survey replicates, by the way). But this is a huge stretch and I don’t even know if people are sure this explains the homosexuality results, let alone the birth order ones.

If mainstream psychology becomes convinced this effect exists, I hope they’ll start doing the necessary next steps. This would involve seeing if biological siblings matter more or less than adopted siblings, whether there’s a difference between paternal and maternal half-siblings, how sibling age gaps work into this, and whether only children are more like oldests or youngests. Their reward would be finding some variable affecting children’s inherent intellectual curiosity – one that might offer opportunities for intervention.

If you want to double-check these results or analyze them further, you can download the data as .xlsx or .csv. Some people have complained of weird problems in the csv format and I recommend the xlsx if at all possible. I have removed the data of a few people who did not want their answers to be public, so you may not get exactly the same numbers I did, but they should be pretty close. If you think this could be turned into a paper and are interested in making it happen, please get in contact with me.

Book Review: Madness And Civilization

[Content warning: Severe mistreatment of the mentally ill. Throughout this post, I’ll be following Foucault in using the politically incorrect term “madness” rather than the more modern “mental illness”, because a big part of his point is worrying about the assumptions contained in the latter term.]

I.

I started reading Foucault’s Madness And Civilization with the expectation that it would be tedious and incomprehensible. You know, the stereotype that postmodernism / post-structuralism / Continentalism / etc. involves a lot of negation of the negation of the inversion of the Other within the Absolute within [and so on for 200 pages]. There was a little of that. But there was also a fascinating look at the history of mental illness, an entertainingly bombastic writing style, and a few ideas that I might have actually half-understood.

The book asked: how have we historically drawn the category boundaries around madness? If there is some great continent containing nations like Irrationality, Immorality, Illness, and Inspiration, from which of these countries have cartographers carved out a homeland for Madness? What wars have been fought over which provinces? What propaganda supports the current international order? And how accurate is it?

II.

Foucault starts with the Late Middle Ages / early Renaissance, when there was suddenly an explosion of interest in madness. The most famous works from this tradition are Ship of Fools and In Praise Of Folly (“fool” originally meant “insane person”) – not to mention pretty much everything by Hieronymus Bosch – and of course Foucault has intensely studied two hundred other examples I’ve never heard of. His theory is that this shares a source with the late medieval fascination with death (think all of those pictures of dancing skeletons). In both cases, the stable tidy medieval order is teetering towards collapse, and so the popular imagination is seized by images of the Outside invading the familiar world.

I feel bad juxtaposing so eminent a figure as Foucault with Lovecraft, but I found his description of Renaissance madness easiest to understand as basically Lovecraftian. We’ve somehow lucked into a bubble of comfortable stability within a vast and horrifying universe. We’ve become so complacent that we’ve forgotten about the bubble and are starting to poke around the edges. When bits of the Outside leak in, madness is the inevitable result. Lovecraft came of age during the First World War, as a European order that considered itself too enlightened to die went down in flames. The end of the Middle Ages must have been a similar period. The insane are those who have seen too much of the horrors that lurk beyond the veil – Yog-Sothoth, Protestantism, whatever.

And like Cthulhu, madness has an affinity for water:

One thing at least is certain: water and madness have long been linked in the dreams of European man. Already, disguised as a madman, Tristan had ordered boatmen to land him on the coast of Cornwall…And more than once in the course of time, the same theme reappears: among the mystics of the fifteenth century, it has become the motif of the soul as a skiff, abandoned on the infinite sea of desires, in the sterile field of cares and ignorance, among the mirages of knowledge, amid the unreason of the world — a craft at the mercy of the sea’s great madness, unless it throws out a solid anchor, faith, or raises its spiritual sails so that the breath of God may bring it to port. At the end of the sixteenth century, De Lancre sees in the sea the origin of the demoniacal leanings of an entire people: the hazardous labor of ships, dependence on the stars, hereditary secrets, estrangement from women—the very image of the great, turbulent plain itself makes man lose faith in God and all his attachment to his home; he is then in the hands of the Devil, in the sea of Satan’s ruses.

And so, Foucault tells us, in the fifteenth century there is a sudden emergence of a complex of artistic and philosophical themes linking madmen, the sea, and the terrible mysteries of the world. These culminate in the Ship Of Fools:

Renaissance men developed a delightful, yet horrible way of dealing with their mad denizens: they were put on a ship and entrusted to mariners because folly, water, and sea, as everyone then knew, had an affinity for each other. Thus, “Ships of Fools” crisscrossed the seas and canals of Europe with their comic and pathetic cargo of souls. Some of them found pleasure and even a cure in the changing surroundings, in the isolation of being cast off, while others withdrew further, became worse, or died alone and away from their families. The cities and villages which had thus rid themselves of their crazed and crazy, could now take pleasure in watching the exciting sideshow when a ship full of foreign lunatics would dock at their harbors.

This was such a great piece of historical trivia that I was shocked I’d never heard it before. Some quick research revealed the reason: it is completely, 100% false. Apparently Foucault looked at an allegorical painting by Hieronymus Bosch, decided it definitely existed in real life, and concocted the rest from his imagination.

Foucault apologists try to rescue this, say that he was just being poetic in some way. He wasn’t. Page 8 in my copy: “Of all these romantic or satiric vessels, the Narrenschiff [Ship Of Fools] is the only one that had a real existence — for they did exist, these boats that conveyed their insane cargo from town to town.” He really, really doubled down on this point. As far as I can tell, this is just as bad a failing of scholarship as it sounds – and surprising, since everything else about the book gives the impression of Foucault as an incredibly knowledgeable and wide-ranging scholar.

I couldn’t find any equally bad flaws in the rest of the book, and Foucault really does seem to know his stuff, so I’m tempted to treat this as a one-off error, albeit a completely inexplicable one. I’m including it anyway as a warning before getting into some other pretty weird stuff.

III.

Eventually the Renaissance became less of an impending threat and more of a fait accompli, and people’s worries died down a bit. Madness began to be treated more as ordinary immorality. This didn’t necessarily mean people freely chose to be mad – the classical age didn’t think in exactly the same “it’s your fault” vs. “it’s biological” terms we do – but it was considered due to a weakness of character in the same way as other failures.

In some cases, it was the result of an excess of passions, flightiness, or imagination: the most famous example is Don Quixote, who went crazy after reading too many fiction books. This was actually considered a very serious risk by practically all classical authorities, especially for women. Foucault quotes Edme-Pierre Beauchesne:

In the earliest epochs of French gallantry and manners, the less perfected minds of women were content with facts and events as marvelous as they were unbelievable; now they demand believable facts yet sentiments so marvelous that their own minds are disturbed and confounded by them; they then seek, in all that surrounds them, to realize the marvels by which they are enchanted; but everything seems to them without sentiment and without life, because they are trying to find what does not exist in nature.

And a newspaper of the time:

The existence of so many authors has produced a host of readers, and continued reading generates every nervous complaint; perhaps of all the causes that have harmed women’s health, the principal one has been the infinite multiplication of novels in the last hundred years … a girl who at ten reads instead of running will, at twenty, be a woman with the vapors and not a good nurse.

Novels weren’t the only danger, of course. There were other hazards to watch for, like waking up too late:

The moment at which our women rise in Paris is far removed from that which nature has indicated; the best hours of the day have slipped away; the purest air has disappeared; no one has benefited from it. The vapors, the harmful exhalations, attracted by the sun’s heat, are already rising in the atmosphere.

Also, freedom:

For a long time, certain forms of melancholia were considered specifically English; this was a fact in medicine and a constant in literature…Spurzheim made a synthesis of all these analyses in one of the last texts devoted to them. Madness, “more frequent in England than anywhere else,” is merely the penalty of the liberty that reigns there, and of the wealth universally enjoyed. Freedom of conscience entails more dangers than authority and despotism. “Religious sentiments exist without restriction; every individual is entitled to preach to anyone who will listen to him”, and by listening to such different opinions, “minds are disturbed in the search for truth.”

This is a very selective sampling of quotes from just one of Foucault’s many chapters, and some of them are separated by centuries from others, but the overall impression I got was that conformity/wholesomeness/clean living was salubrious, and deviations from it were likely to cause madness. Essentially, if you deviate from your humanity a little bit of the way – by failing to be a godly, sober-living, and industrious person – then that can compound on itself and make you lose practically all of your humanity. You will end up a feral madman, little different from a beast.

This naturally lumped madness in together with the other failures of industry and godliness: crime and poverty. During the seventeenth century, madmen, beggars, and criminals were all crammed together in workhouses. These were always sort of ambiguous between “maybe the structure and routine of work will help these poor souls find the right path” and “let’s keep these losers away from the rest of us”. The opening of the workhouses was sudden and dramatic: in Paris, it began Monday May 14, 1657, when “the archers began to hunt down beggars and herd them into the different buildings of the Hospital.” In England, it started around 1630, when the King recommended prosecuting:

…all those who live in idleness and will not work for reasonable wages, or who spend what they have in taverns…for these people live like savages without being married, nor buried, nor baptized; and it is this licentious liberty which causes so many to rejoice in vagabondage.

Foucault stresses that this wasn’t some plot on the part of authorities to enslave beggars and profit off their labor. The people in charge of the workhouses generally failed at assigning work that was profitable or productive, even in the weak sense of productive at lining their own pockets. They seemed genuinely driven by a belief in the curative power of Honest Work:

Measured by their functional value alone, the creation of the houses of confinement can be regarded as a failure. Their disappearance throughout Europe, at the beginning of the nineteenth century, as receiving centers for the indigent and prisons of poverty, was to sanction their ultimate failure: a transitory and ineffectual remedy, a social precaution clumsily formulated by a nascent industrialization. And yet, in this very failure, the classical period conducted an irreducible experiment. What appears to us today as a clumsy dialectic of production and prices then possessed its real meaning as a certain ethical consciousness of labor, in which the difficulties of the economic mechanisms lost their urgency in favor of an affirmation of value.

In this first phase of the industrial world, labor did not seem linked to the problems it was to provoke; it was regarded, on the contrary, as a general solution, an infallible panacea, a remedy to all forms of poverty. Labor and poverty were located in a simple opposition, in inverse proportion to each other. As for that power, its special characteristic, of abolishing poverty, labor – according to the classical interpretation — possessed it not so much by its productive capacity as by a certain force of moral enchantment. Labor’s effectiveness was acknowledged because it was based on an ethical transcendence. Since the Fall, man had accepted labor as a penance and for its power to work redemption. It was not a law of nature which forced man to work, but the effect of a curse. The earth was innocent of that sterility in which it would slumber if men remained idle: “The land had not sinned, and if it is accursed, it is by the labor of the fallen man who cultivates it; from it no fruit is won, particularly the most necessary fruit, save by force and continual labor.”

According to Johan Huizinga, there was a time, at the dawn of the Renaissance, when the supreme sin assumed the aspect of Avarice, Dante’s cièca cupidigia. The seventeenth-century texts, on the contrary, announced the infernal triumph of Sloth: it was sloth which led the round of the vices and swept them on. Let us not forget that according to the edict of its creation, the Hôpital Général must prevent “mendicancy and idleness as sources of all disorder.” Louis Bourdaloue echoes these condemnations of sloth, the wretched pride of fallen man; “What, then, is the disorder of an idle life? It is, replies Saint Ambrose, in its true meaning a second rebellion of the creature against God.” Labor in the houses of confinement thus assumed its ethical meaning: since sloth had become the absolute form of rebellion, the idle would be forced to work, in the endless leisure of a labor without utility or profit.

Foucault presents confinement alternately as workhouses mixing together madmen and poor people, and as ultra-secure special hospitals for the insane. I’m not sure what to make of this contradiction; maybe the less ill people were in one, and the more ill people in the other? Maybe he’s just interested in the general phenomenon of confinement? In any case, the places for the insane were pretty bad too:

In his Report on the Care of the Insane Desportes describes the cells of Bicetre as they were at the end of the eighteenth century: “The unfortunate whose entire furniture consisted of this straw pallet, lying with his head, feet, and body pressed against the wall, could not enjoy sleep without being soaked by the water that trickled from that mass of stone.” As for the cells of La Salpêtrière, what made “the place more miserable and often more fatal, was that in winter, when the waters of the Seine rose, those cells situated at the level of the sewers became not only more unhealthy, but worse still, a refuge for a swarm of huge rats, which during the night attacked the unfortunates confined there and bit them wherever they could reach them; madwomen have been found with feet, hands, and faces torn by bites which are often dangerous and from which several have died.”

It would be nice to think these kinds of things only survived because the public didn’t know about them, but that, uh, doesn’t seem to be quite what was happening:

As late as 1815, if a report presented in the House of Commons is to be believed, the hospital of Bethlehem exhibited lunatics for a penny, every Sunday. Now the annual revenue from these exhibitions amounted to almost four hundred pounds, which suggests the astonishingly high number of 96,000 visits a year.

All of these locks and chains and cages and exhibitions draw an obvious analogy of madmen and animals. Foucault doesn’t think this is a coincidence:

When practices reach this degree of violent intensity, it becomes clear that they are no longer inspired by the desire to punish nor by the duty to correct. The notion of a “résipiscence” is entirely foreign to this regime. But there was a certain image of animality that haunted the hospitals of the period. Madness borrowed its face from the mask of the beast. Those chained to the cell walls were no longer men whose minds had wandered, but beasts preyed upon by a natural frenzy: as if madness, at its extreme point, freed from that moral unreason in which its most attenuated forms are enclosed, managed to rejoin, by a paroxysm of strength, the immediate violence of animality. This model of animality prevailed in the asylums and gave them their cagelike aspect, their look of the menagerie…

What is most important is that it is conceived in terms of an animal freedom. The negative fact that “the madman is not treated like a human being” has a very positive content: this inhuman indifférence actually has an obsessional value: it is rooted in the old fears which since antiquity, and especially since the Middle Ages, have given the animal world its familiar strangeness, its menacing marvels, its entire weight of dumb anxiety. Yet this animal fear which accompanies, with all its imaginary landscape, the perception of madness, no longer has the same meaning it had two or three centuries earlier: animal metamorphosis is no longer the visible sign of infernal powers, nor the result of a diabolic alchemy of unreason. The animal in man no longer has any value as the sign of a Beyond; it has become his madness, without relation to anything but itself: his madness in the state of nature. The animality that rages in madness dispossesses man of what is specifically human in him; not in order to deliver him over to other powers, but simply to establish him at the zero degree of his own nature.

There is a lot I didn’t understand about this section, but the overall gist seems to be trying to lump the insane in together with other forms of badness and deviation from the moral norm – whether animals or criminals – and shutting them away where they could not be seen.

IV.

The period from the late eighteenth century on was one of reform, when the mentally ill were taken out of the prisons and workhouses and brought to nice benevolent asylums in the countryside where they could convalesce in peace under the supervision of expert doctors.

…or at least this is the prevailing narrative. Foucault is having none of it.

The houses of confinement weren’t just for criminals and madmen. They were also for the “undeserving poor” – homeless, beggars, unemployed. But the Industrial Revolution was changing the conception of poverty. Foucault places this in the context of modern economics, which introduced abstract ideas like “jobs” and “workers”. In this model, the poor were potential workers who just lacked jobs, not the weird exotic subspecies of humanity called “paupers”. The past paradigm had focused on healing their souls through the redemptive power of make-work; the new paradigm said that if they could be enlisted to work productive industrial jobs it would improve the Economy and everyone would be better off.

Out they went, and now, instead of holding an undifferentiated mass of undesirables, the hospitals more obviously contained just two disparate populations: criminals and madmen. But surely now is the point where people see how inhumane it is to stick the mentally ill together with criminals, right?

Sort of.

When the Prior of Senlis asked that madmen be separated from certain convicts, what were his arguments? “He is deserving of mercy, as well as two or three others who would be better off in some citadel, because of the company of six others who are mad, and who torment them night and day.” And the meaning of this sentence would be so clearly understood by the police that the internees in question would be set free. And the demands of the Brunswick overseer have the same meaning: the workshop is disturbed by the cries and the confusion of the insane; their frenzy is a perpetual danger, and it would be better to send them back to the cells, or to keep them in chains. And already, we can anticipate that from one century to the next, the same protests did not have, at bottom, the same value. Early in the nineteenth century, there was indignation that the mad were not treated any better than those condemned by common law or than State prisoners; throughout the eighteenth century, emphasis was placed on the fact that the prisoners deserved a better fate than one that lumped them with the insane […]

La Rochefoucauld-Liancourt bears witness to this in his report to the Committee on Mendicity: “One of the punishments inflicted upon the epileptics and upon the other patients of the wards, even upon the deserving poor, is to place them among the mad.” The scandal lies only in the fact that the madmen are the brutal truth of confinement, the passive instrument of all that is worst about it.

There was apparently general agreement that it was unfair to criminals to keep them confined together with madmen, so out went the criminals – with the madmen staying around in institutions that were starting to sort of resemble the idea of a modern psychiatric hospital.

The beginning of the nineteenth century did start to see fewer chains and rats, and more attempt to treat madmen as human beings. But Foucault is perversely annoyed by this, convinced that this was secretly a way of respecting the mentally ill even less. He notes that their newfound rights were conditional on good behavior and on acting sane, and so in a sense these new more compassionate hospitals gave them less freedom than the old ball-and-chain deal. In the older hospitals, you could do whatever you wanted. You’d be doing it on the wrong side of iron bars, mocked by people who hated you, but you could do it. In the new hospitals, you were forced to constantly perform and please your guards and nurses in order to maintain your privileges. Madmen went from being treated like criminals – who at least are still adult citizens – to being treated like children:

We must therefore re-evaluate the meanings assigned to Tuke’s work: liberation of the insane, abolition of constraint, constitution of a human milieu – these are only justifications. The real operations were different. In fact Tuke created an asylum where he substituted for the free terror of madness the stifling anguish of responsibility; fear no longer reigned on the other side of the prison gates, it now raged under the seals of conscience. Tuke now transferred the age-old terrors in which the insane had been trapped to the very heart of madness. The asylum no longer punished the madman’s guilt, it is true; but it did more, it organized that guilt; it organized it for the madman as a consciousness of himself, and as a non-reciprocal relation to the keeper; it organized it for the man of reason as an awareness of the Other, a therapeutic [and so on for two hundred pages].

A lynchpin of this system was doctors. Foucault says that at this time, doctors really didn’t make much pretense to being able to cure mental illness. Their main role was as a representative of polite society and healthy living. The doctor would go in, talk to some mad people about how really being virtuous and healthful was better than being degenerate and crazy, and this would help the process of drawing them back into the social order (and so out of the excessive wild liberty that was madness). The more high-status and authoritative the doctor, the better – and he has lots of examples of doctors supposedly curing madmen with a couple of stern words delivered in a suitably censorious tone.

He theorizes that after the restore-to-social-order idea of mental health became obsolete, doctors were stuck without a purpose. That is, it was known that it was important to have doctors treating the mentally ill, but unclear exactly what they were supposed to do. One response was to flounder around for a while on various scams and miracle cures. Another response was Freud’s: to accept that the doctor-patient relationship itself somehow had magical properties, and that the doctor being a silent authority figure sitting in judgment of you was actually an effective way to cure psychiatric disease.

V.

Everything above is a really superficial reading of Madness And Civilization and probably misses the whole point of the book.

This point is something that alternately seems postmodern or kabbalistic or – for lack of a better term – insane. It’s not just saying that This Historical Period treated the mad This Way, but That Historical Period treated them That Way. It’s trying to peek beneath the hood (or the veil?) to find the zeitgeist, the animating spirit of the European continent that led them to do things as they did and which transformed one schema into another. This is rarely anything sensible, like “the economy improved” or “there was a revolution”. More often it’s some kind of deep subconscious beliefs about the meaning of humanity or freedom or symbolism or something. If Europe was one guy, this book would be Foucault performing Freudian dream analysis on that guy.

For example, the Europeans didn’t put their madmen on Ships Of Fools just because it was a convenient way to get rid of them, but also because:

Water adds to this the dark mass of its own values; it carries off, but it does more: it purifies. Navigation delivers man to the uncertainty of fate; on water, each of us is in the hands of his own destiny; every embarkation is, potentially, the last. It is for the other world that the madman sets sail in his fools’ boat; it is from the other world that he comes when he disembarks. The madman’s voyage is at once a rigorous division and an absolute Passage. In one sense, it simply develops, across a half-real, half-imaginary geography, the madman’s liminal position on the horizon of medieval concern—a position symbolized and made real at the same time by the madman’s privilege of being confined within the city gates: his exclusion must enclose him; if he cannot and must not have another prison than the threshold itself, he is kept at the point of passage. He is put in the interior of the exterior, and inversely. A highly symbolic position, which will doubtless remain his until our own day, if we are willing to admit that what was formerly a visible fortress of order has now become the castle of our conscience.

Confined on the ship, from which there is no escape, the madman is delivered to the river with its thousand arms, the sea with its thousand roads, to that great uncertainty external to everything. He is a prisoner in the midst of what is the freest, the openest of routes: bound fast at the infinite crossroads. He is the Passenger par excellence; that is, the prisoner of the passage. And the land he will come to is unknown—as is, once he disembarks, the land from which he comes. He has his truth and his homeland only in that fruitless expanse between two countries that cannot belong to him. Is it this ritual and these values which are at the origin of the long imaginary relationship that can be traced through the whole of Western culture? Or is it, conversely, this relationship that, from time immemorial, has called into being and established the rite of embarkation?

Let’s appreciate a few things about this passage. First, it’s phenomenal writing. I apologize for thinking all Continental philosophy had to be badly-written; in retrospect Nietzsche should have cured me of this delusion.

But second, it’s totally bonkers. Like, forget the fact that there weren’t any real Ships Of Fools and Foucault is analyzing a literary motif. Forget that the literary motif actually comes from a metaphor by Plato which is about something else. Even if the rivers of Europe were choked with such Ships, this is just a phenomenally unproductive way to think about anything. This is the kind of thought process where we drill for oil because we are symbolically sexually penetrating Mother Earth (insert kabbalistic analysis of the word “fracking” here).

There is a lot along these lines, none of which I am really able to follow, especially because the book never gives a clear definition of its crucial term “unreason”. The closest I can come is a theory that the Renaissance (and to some degree the later classical period) thought of madness as potentially interesting and valuable. They didn’t like madmen, but they occasionally tried to have a “dialogue between madness and reason”, where they would try to understand where the mad were coming from and what they had to offer civilization. In later periods, this was lost, and the mad were just confined away from human sight – but there was still at least some dignity in it, because madness was allowed to exist on its own terms. Later, when bleak prison workhouses transitioned to humane medical asylums, even that dignity was lost, as sane people’s imperative changed to forcing the mad to conform to the sane world’s standards and deny their madness’ existence.

This is the thrust of the last chapter, and Foucault ties all of this together into a case that all of the reformers were just jerks, and they sought more humane treatment for the mentally ill out of a desire to judge and dominate them. This is fantastically contrarian. Foucault does not give an inch to the position that maybe there was something good and wholesome about the desire to rescue people from being crammed by the dozen in rat-infested cells with all of their limbs chained together. He doesn’t specifically say the rat-infested cells were better, but he sure hints at it pretty hard.

I always like contrarian takes. But I can’t make sense of what Foucault is trying to do here. And also, some of the same sites that debunk the Ship Of Fools thing say that actually the Renaissance was super-cruel to mad people, and Foucault’s picture of them as tolerant and understanding is composed entirely of cherry-picking and imagination.

The best I can do here is say that Foucault is too much of an Idealist where I am a Materialist. I measure humanitarian victories in prisoners freed and rat bites averted. He seems to measure them by how the dream sequences of Personified Europe are treating the dialogue between Madness and Reason. Probably there’s a perspective in which this makes sense, but this book didn’t manage to teach me to appreciate it.

VI.

Granted that I couldn’t appreciate the philosophy and remain doubtful of the scholarship, I still enjoyed this book. It was a weird tour of parts of history I wouldn’t otherwise have thought of. And it did accomplish the post-modernist goal of broadening my perspective enough to be more doubtful of my own society’s institutions.

The idea of novel-reading causing insanity seems ridiculous to us. But is it any more ridiculous than the idea of video games causing violence? Or stereotype threat causing poor test performance? The dustbin of scientific history is filled with weird claims that various social and cultural phenomena have powerful effects on the mind, from refrigerator-mother schizophrenia to low self-esteem causing crime. Surely this is one more warning to beware of voodoo psychology.

But that’s too easy. I also worry about the idea – constant in its essence in every time period, though changing in its particulars – that mental illness is the result of living your life in an unwholesome way and indulging in illicit pleasures. In the classical period, this included everything from waking up too late, to not working long enough hours, to getting too romantically infatuated, to, well…

Heat clears the way for liquids. It is precisely for this reason that all the hot drinks the seventeenth century used and abused risk becoming harmful: relaxation, general humidity, softness of the entire organism. And since these are the distinctive traits of the female body, as opposed to virile dryness and solidity, the abuse of hot drinks risks leading to a general feminization of the human race. [Thomas Sydenham warns:] “Most men are censured, not without reason, for having degenerated in contracting the softness, the habits, and the inclinations of women. Excessive use of humectants immediately accelerates the metamorphosis and makes the two sexes almost as alike in the physical as in the moral realm. Woe to the human race, if this prejudice extends its reign to the common people.”

And I can’t help noticing a resemblance between this and the modern insistence on diet and exercise. They’ve got the same kind of element of “if you don’t exert a lot of willpower to live your life in a diligent way, then you shouldn’t be surprised when you end up mentally ill.” Although the role of poor diet/exercise in physical illness is beyond questioning, its role in mental illness is more anecdotal and harder to pin down. Don’t get me wrong, there are lots of studies showing it works. But there’s also lots of anomalous data, like how exercise performed as part of your job doesn’t help. This has led some people to suggest that the physical effects of exercise are less important than the social role – the feeling of doing something to fight your depression and conform to a virtuous mode of life. Exercise works for that – but so might avoiding novels and staying away from hot drinks, if that was what your society wanted. I’m not saying this is definitely true. I’m just saying I give it higher credence now that the pattern of “people always want to use willpower-induced conformity to social order as a bulwark against mental illness” is more apparent.

On the other hand, it’s also important not to dismiss something we believe today just because people in the past believed the same thing. I was tempted to say something like I’m skeptical of modern-day behavioral activation because it sounds exactly like past-days “work will cure you because idleness is the mother of all sins” doctrine. But on closer examination, I’m using evidence wrong here. If people in the past believed something, that should be at least some positive evidence it was true – or at least not negative evidence. Of course, they’re going to phrase it in really awkward politically-incorrect ridiculous-seeming terms, because they’re the past. And they’ll probably figure out some way to make it imply a moral atrocity, because, again, past. But that doesn’t mean they’re wrong. Very possibly it’s a timeless truth that routine and purposeful activity help depression. The past phrased this as “idleness is the mother of sin so we should force everyone into workhouses”, and now we’re not as much about forced labor and tracing out sin’s family tree. But behavioral activation therapy still seems pretty powerful.

I think I am going to be suspicious when the implied message stays the same but the specifics keep changing – “stay away from exciting novels” vs. “stay away from violent video games”, or “avoid hot drinks” vs. “avoid sugary foods”. It seems potentially safer when the specifics stay the same, with only the wording and the proposed responses changing.

There was one more thing that worried me about the past, much broader than any of these specific issues: doctors were very sure their cures worked. I knew in principle that there were a lot of placebo cures and cherry-picking, but it’s another thing to have to read story after story of doctors trying ridiculous treatments – one of them had his patient eat soap to cleanse their circulation – and reporting that it definitely worked, every time, and patients who had been violently insane for years were restored to perfect health. I have worked in a lot of excellent psychiatric hospitals, and not one of them has worked anywhere near as well as people in the seventeenth century record their completely ridiculous mental health system of telling people not to read novels to have worked.

Either everyone in the past is a total liar (given this effect, probably true), Foucault himself is a total liar (given the Ship of Fools thing, probably true), or we need even more constant vigilance than we’ve been applying thus far (alas, probably also true).

SSC Survey Results 2018

Thanks to the 8,077 people (!) who took this year’s SSC survey.

I don’t have the energy to screenshot/copy/paste the graph for every single question the way I have in previous years, so let’s do it differently.

The survey page has been changed so that you can just press “okay” and “submit”, and it will bring you to the results page, where you can see all the results. I’m not sure you can take the whole survey anymore, but if you find a way to do so, please don’t. Just press “okay” and “submit” and you should be fine. Don’t worry, all identifying questions (including the identifier string and all long answers) have been hidden.

See the exact questions for the SSC survey.

See results from the SSC survey.

See results from the Mechanical Turk comparison survey.

(this might have a lot of lag if you try to do it at the same time as everyone else; if you tell your browser to stop scripts it might improve)

I plan to post longer analyses (including the ones in the pre-registered hypotheses) later on, hopefully dragging them out into a bunch of Least Publishable Units.

If you want to scoop me, or investigate the data yourself, you can download the answers of the 7298 people who agreed to have their responses shared publicly:

Main survey: .xlsx, .csv

Turk survey: .xlsx, .csv

2017 Predictions: Calibration Results

At the beginning of every year, I make predictions. At the end of every year, I score them. Here are 2014, 2015, and 2016.

And here are the predictions I made for 2017. Strikethrough’d are false. Intact are true. Italicized are getting thrown out because I can’t decide if they’re true or not.

WORLD EVENTS
1. US will not get involved in any new major war with death toll of > 100 US soldiers: 60%
2. North Korea’s government will survive the year without large civil war/revolt: 95%
3. No terrorist attack in the USA will kill > 100 people: 90%
4. …in any First World country: 80%
5. Assad will remain President of Syria: 80%
6. Israel will not get in a large-scale war (ie >100 Israeli deaths) with any Arab state: 90%
7. No major intifada in Israel this year (ie > 250 Israeli deaths, but not in Cast Lead style war): 80%
8. No interesting progress with Gaza or peace negotiations in general this year: 90%
9. No Cast Lead style bombing/invasion of Gaza this year: 90%
10. Situation in Israel looks more worse than better: 70%
11. Syria’s civil war will not end this year: 60%
12. ISIS will control less territory than it does right now: 90%
13. ISIS will not continue to exist as a state entity in Iraq/Syria: 50%
14. No major civil war in Middle Eastern country not currently experiencing a major civil war: 90%
15. Libya to remain a mess: 80%
16. Ukraine will neither break into all-out war nor get neatly resolved: 80%
17. No major revolt (greater than or equal to Tiananmen Square) against Chinese Communist Party: 95%
18. No major war in Asia (with >100 Chinese, Japanese, South Korean, and American deaths combined) over tiny stupid islands: 99%
19. No exchange of fire over tiny stupid islands: 90%
20. No announcement of genetically engineered human baby or credible plan for such: 90%
21. EMDrive is launched into space and testing is successfully begun: 70%
22. A significant number of skeptics will not become convinced EMDrive works: 80%
23. A significant number of believers will not become convinced EMDrive doesn’t work: 60%
24. No major earthquake (>100 deaths) in US: 99%
25. No major earthquake (>10000 deaths) in the world: 60%
26. Keith Ellison chosen as new DNC chair: 70%

EUROPE
27. No country currently in Euro or EU announces new plan to leave: 80%
28. France does not declare plan to leave EU: 95%
29. Germany does not declare plan to leave EU: 99%
30. No agreement reached on “two-speed EU”: 80%
31. The UK triggers Article 50: 90%
32. Marine Le Pen is not elected President of France: 60%
33. Angela Merkel is re-elected Chancellor of Germany: 60%
34. Theresa May remains PM of Britain: 80%
35. Fewer refugees admitted 2017 than 2016: 95%

ECONOMICS
36. Bitcoin will end the year higher than $1000: 60%
37. Oil will end the year higher than $50 a barrel: 60%
38. …but lower than $60 a barrel: 60%
39. Dow Jones will not fall > 10% this year: 50%
40. Shanghai index will not fall > 10% this year: 50%

TRUMP ADMINISTRATION
41. Donald Trump remains President at the end of 2017: 90%
42. No serious impeachment proceedings are active against Trump: 80%
43. Construction on Mexican border wall (beyond existing barriers) begins: 80%
44. Trump administration does not initiate extra prosecution of Hillary Clinton: 90%
45. US GDP growth lower than in 2016: 60%
46. US unemployment to be higher at end of year than beginning: 60%
47. US does not withdraw from large trade org like WTO or NAFTA: 90%
48. US does not publicly and explicitly disavow One China policy: 95%
49. No race riot killing > 5 people: 95%
50. US lifts at least half of existing sanctions on Russia: 70%
51. Donald Trump’s approval rating at the end of 2017 is lower than fifty percent: 80%
52. …lower than forty percent: 60%

COMMUNITIES
53. SSC will remain active: 95%
54. SSC will get fewer hits than in 2016: 60%
55. At least one SSC post > 100,000 hits: 70%
56. I will complete an LW/SSC survey: 80%
57. I will finish a long FAQ this year: 60%
58. Shireroth will remain active: 70%
59. No co-bloggers (with more than 5 posts) on SSC by the end of this year: 80%
60. Less Wrong renaissance attempt will seem less (rather than more) successful by end of this year: 90%
61. > 15,000 Twitter followers by end of this year: 80%
62. I won’t stop using Twitter, Tumblr, or Facebook: 90%
63. I will attend the Bay Area Solstice next year: 90%
64. …some other Solstice: 60%
65. …not the New York Solstice: 60%

WORK
66. I will take the job I am currently expecting to take: 90%
67. …at the time I am expecting to take it, without any delays: 80%
68. I will like the job and plan to continue doing it for a while: 70%
69. I will pass my Boards: 90%
70. I will be involved in at least one published/accepted-to-publish research paper by the end of 2017: 50%
71. I will present a research paper at the regional conference: 80%
72. I will attend the APA national meeting in San Diego: 90%
73. None of my outpatients to be hospitalized for psychiatric reasons during the first half of 2017: 50%
74. None of my outpatients to be involuntarily committed to psych hospital by me during the first half of 2017: 70%
75. None of my outpatients to attempt suicide during the first half of 2017: 90%
76. I will not have scored 95th percentile or above when I get this year’s PRITE scores back: 60%

PERSONAL
77. Amazon will not harass me to get the $40,000 they gave me back: 80%
78. …or at least will not be successful: 90%
79. I will drive cross-country in 2017: 70%
80. I will travel outside the US in 2017: 70%
81. …to Europe: 50%
82. I will not officially break up with any of my current girlfriends: 60%
83. K will spend at least three months total in Michigan this year: 70%
84. I will get at least one new girlfriend: 70%
85. I will not get engaged: 90%
86. I will visit the Bay in May 2017: 60%
87. I will have moved to the Bay Area: 99%
88. I won’t live in Godric’s Hollow for at least two weeks continuous: 70%
89. I won’t live in Volterra for at least two weeks continuous: 70%
90. I won’t live in the Bailey for at least two weeks continuous: 95%
91. I won’t live in some other rationalist group home for at least two weeks continuous: 90%
92. I will be living in a house (incl group house) and not apartment building at the end of 2017: 60%
93. I will still not have gotten my elective surgery: 90%
94. I will not have been hospitalized (excluding ER) for any other reason: 95%
95. I will make my savings target at the end of 2017: 60%
96. I will not be taking any nootropic (except ZMA) daily or near-daily during any 2-month period this year: 90%
97. I won’t publicly and drastically change highest-level political/religious/philosophical positions (eg become a Muslim or Republican): 90%
98. I will not get drunk this year: 80%
99. I get at least one article published on a major site like Huffington Post or Vox or New Statesman or something: 50%
100. I attend at least one wedding this year: 50%
101. Still driving my current car at the end of 2017: 90%
102. Car is not stuck in shop for repairs for >1 day during 2017: 60%
103. I will use Lyft at least once in 2017: 60%
104. I weigh > 185 pounds at the end of 2017: 60%
105. I weigh < 195 pounds at the end of 2017: 70%

Some justifications for my decisions: I rated the civil war in Syria as basically over, even though Wikipedia says otherwise, since I don’t think there are any remaining credible rebel forces, and ISIS is pretty dead. Trump’s approval rating is taken from this 538 aggregator and is currently estimated at 38.1%. I rated the border wall as not currently under construction, despite articles with titles like The Trump Administration Has Already Started Building The Border Wall, because those articles referred to a 30-foot prototype not likely to be included in the wall itself (have I mentioned the media is terrible?). I refused to judge the success of the Less Wrong renaissance attempt, because it seemed unsuccessful but was superseded by a separate, much more serious attempt that was successful, and I’m not sure how to rate that. I refused to judge whether or not I got a new partner because I am casually dating some people and not sure how to count them. I refused to judge whether I got 95th percentile+ on my PRITE because they stopped clearly reporting percentile scores.

This is the graph of my accuracy for this year:

Of 50% predictions, I got 5 right and 3 wrong, for a score of 62%
Of 60% predictions, I got 14 right and 8 wrong, for a score of 64%
Of 70% predictions, I got 8 right and 5 wrong, for a score of 62%
Of 80% predictions, I got 16 right and 2 wrong, for a score of 89%
Of 90% predictions, I got 24 right and 1 wrong, for a score of 96%
Of 95% predictions, I got 8 right and 1 wrong, for a score of 89%
Of 99% predictions, I got 4 right and 0 wrong, for a score of 100%
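If you want to check my bucket math, here’s a quick sketch (the right/wrong counts are just the ones listed above):

```python
# Recompute the observed accuracy in each confidence bucket
# from the right/wrong counts listed above.
buckets = {
    "50%": (5, 3),
    "60%": (14, 8),
    "70%": (8, 5),
    "80%": (16, 2),
    "90%": (24, 1),
    "95%": (8, 1),
    "99%": (4, 0),
}

for label, (right, wrong) in buckets.items():
    observed = 100 * right / (right + wrong)
    print(f"Of {label} predictions: {right} right, {wrong} wrong -> {observed:.0f}%")
```

Perfect calibration would mean each bucket’s observed percentage equals its stated confidence; the gap between the two is what the graph shows.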

Blue is hypothetical perfect calibration, red is my calibration. The red line crosses the blue one several times, indicating that I am neither globally overconfident nor globally underconfident.

Last year my main concern was that I was underconfident at 70%. I tried to fix that this year by becoming more willing to guess at that level, and ended up a bit overconfident. This year I’ll try somewhere in the middle and hopefully get it right.

There weren’t enough questions to detect patterns of mistakes, but there was a slight tendency for me to think things would go more smoothly than they did. I overestimated the success of my diet, my savings plan, my travel plans, my job start date, my long-FAQ-making ability, and my future housing search (this last one led to me spending a few weeks at a friend’s group house, failing on a 95% certainty prediction). I only made one error in the direction of personal affairs going better than expected (SSC got more hits than last year; maybe this isn’t a central example of “personal affairs going smoothly”). None of these really caused me any problems, suggesting that I have enough slack in my plans, but apparently I’m not yet able to extend that slack into good explicit predictions about these things.

My other major error was underestimating the state of the US economy, leading to a couple of correlated errors. I think I got Trump mostly right, although I may have overestimated his efficacy (I thought he would have started the border wall by now) and erred in thinking he would lift sanctions on Russia.

Otherwise this is consistent with generally good calibration plus random noise. Next year I’ll have played this game five years in a row, and I’ll average out all my answers for all five years and get a better estimate; for now I’ll just be pretty satisfied.

Predictions for 2018 coming soon.

Posted in Uncategorized | Tagged | 216 Comments

OT92: Ocean Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. This is your last chance to take the 2017-2018 Slate Star Codex reader survey. I will be closing it tomorrow.

2. New ad on the sidebar: Shearwater, a Boston tech startup that helps universities run mentorship programs, is looking for software engineers.

3. Happy new year!

Posted in Uncategorized | Tagged | 968 Comments

Adderall Risks: Much More Than You Wanted To Know

[Previously in series: Antidepressant Pharmacogenomics: Much More Than You Wanted To Know; SSRIs: Much More Than You Wanted To Know, etc. This is all preliminary and you should not take it as a reason to change successful medical care. None of this necessarily applies to your particular case and you should talk to your doctor if you have questions about that.]

I. Confessions Of A Gatekeeper

I didn’t realize how much of a psychiatrist’s time was spent gatekeeping Adderall.

The human brain wasn’t built for accounting or software engineering. A few lucky people can do these things ten hours a day, every day, with a smile. The rest of us start fidgeting and checking our cell phones somewhere around the thirty-minute mark. I work near the financial district of a big city, so every day a new Senior Regional Manipulator Of Tiny Numbers comes in and tells me that his brain must be broken because he can’t sit still and manipulate tiny numbers as much as he wants. How come this is so hard for him, when all of his colleagues can work so diligently?

(it’s because his colleagues are all on Adderall already – but telling him that will just make things worse)

He goes on to give me his story about how he’s at risk of getting fired from his Senior Regional Manipulator Of Tiny Numbers position, and at this rate he’s never going to get the promotion to Vice President Of Staring At Giant Spreadsheets, so do I think I can give him some Adderall to help him through?

Psychiatric guidelines are very clear on this point: only give Adderall to people who “genuinely” “have” “ADHD”.

But “ability to concentrate” is a normally distributed trait, like IQ. We draw a line at some point on the far left of the bell curve and tell the people on the far side that they’ve “got” “the disease” of “ADHD”. This isn’t just me saying this. It’s the neurostructural literature, the genetics literature, a bunch of other studies, and the Consensus Conference On ADHD. This doesn’t mean ADHD is “just laziness” or “isn’t biological” – of course it’s biological! Height is biological! But that doesn’t mean the world is divided into two natural categories of “healthy people” and “people who have Height Deficiency Syndrome”. Attention is the same way. Some people really do have poor concentration, they suffer a lot from it, and it’s not their fault. They just don’t form a discrete population.

Meanwhile, Adderall works for people whether they “have” “ADHD” or not. It may work better for people with ADHD – a lot of them report an almost “magical” effect – but it works at least a little for most people. There is a vast literature trying to disprove this. Its main strategy is to show Adderall doesn’t enhance cognition in healthy people. Fine. But mostly it doesn’t enhance cognition in people with ADHD either. People aren’t using Adderall to get smart, they’re using it to focus. From Prescription stimulants in individuals with and without attention deficit hyperactivity disorder:

It has never been established that the cognitive effects of stimulant drugs are central to their therapeutic utility. In fact, although ADHD medications are effective for the behavioral components of the disorder, little information exists concerning their effects on cognition…stimulant drugs do improve the ability (even without ADHD) to focus and pay attention.

I cannot tell you how much literature there is trying to convince you that Adderall will not help healthy people, nor how consistently college students disprove every word of it every finals season.

That makes “only give Adderall to people with ADHD” a moral judgment, not a medical one. Adderall doesn’t “cure” the “disease” of ADHD, at least not in the same way penicillin cures syphilis. Adderall will give everyone better concentration, and we’ve judged that it’s okay for people with terrible concentration to use it to overcome their handicap, but not okay for people with already-fine concentration to use it to become superhuman.

We could still have a principled definition of ADHD. It would be something like “People below the Nth percentile in ability to concentrate.” Instead, we use the DSM, which advises us to diagnose people with ADHD if they say they have at least five symptoms from a list. The list has things like “often has difficulty sustaining attention” and “often has difficulty organizing tasks”. How often? You know, often! And if you work as a Senior Regional Manipulator Of Tiny Numbers, you’re going to have attention problems a lot more “often” than the rest of us.

So the DSM criteria are kind of meaningless, but that’s fine, because people can just lie about them anyway.

There are whole websites for this: How To Convince Your Shrink You Have ADHD, How To Get Your Doctor To Prescribe You Adderall In Five Easy Steps, et cetera. But I can’t imagine most people need them. Just talk about all the times in your life that you had attention and concentration problems, and if your doctor asks you a more specific question (“Do you often lose things?”) you give the obvious right answer (“Wow, it’s like you’ve known me my whole life!”).

Aren’t psychiatrists creepy wizards who can see through your deceptions? There are people like that. They’re called forensic psychiatrists; they have special training in dealing with patients who might be lying to them, and they tend to get brought in for things like evaluating a murderer pleading the insanity defense. They have a toolbox of fascinating and frequently hilarious techniques to ascertain the truth, and they’re really good at their jobs.

But me? At best, I can have a vague suspicion you’re not telling the truth. And how many patients genuinely in need of treatment do I want to risk accidentally rejecting just so I can be sure of thwarting you? A lot of 100% honest psychiatric patients’ stories are pretty unbelievable, really, and I don’t want to have to treat every patient like a convicted murderer. Unless you give me some specific reason to doubt you, I start with the assumption that you’re telling the truth.

Think about how wasteful all of this is. We throw people in jail for using Adderall without a prescription. We expel them from colleges. We fight an expensive and bloody War on Drugs to prevent non-prescription-holders from getting Adderall. We create a system in which poor people need to stretch their limited resources to make it to a psychiatrist so they can be prescribed Adderall, in which people without health insurance can never get it at all, in which DEA agents occasionally bust down the doors of medical practices giving out Adderall illegally. All to preserve a sham in which psychiatrists ask their patients “Do you have ADD symptoms?” and the patients say “Oh, yeah, definitely,” and then the psychiatrists give them Adderall. It’s like adding twenty layers of super-reinforced concrete to a bunker with a wide-open front door.

(Also, if by some chance a psychiatrist doesn’t give a patient Adderall, that patient practically always goes to another psychiatrist, and that next psychiatrist does. Trust me, no matter how unsuitable a candidate you are, no matter how bad a liar you are, somewhere there is a psychiatrist who will give you Adderall. And by “somewhere”, I mean it will take you three tries, tops.)

Psychiatrists’ main response to this perverse and unwinnable system is to give people Adderall, but feel guilty about it. Somebody should do an anthropological study on this, but my preliminary observations:

Some people will lecture their patients on how Medication Can Never Address The Root Cause Of A Problem, and the patient will agree that Medication Can Never Address The Root Cause Of A Problem, and then the psychiatrist will give them Adderall and feel good about it.

Some people will discuss alternative options, like behavioral treatments, or non-stimulant medications, and the patient will come back in a month and say that the behavioral treatments didn’t work, and then the psychiatrist will give them Adderall and feel good about it.

Some people will give their patients a formal test where they have to answer questions like “I often have trouble concentrating – strongly disagree, disagree, neutral, agree, or strongly agree?” Then the patient will give whatever answers get them Adderall, the psychiatrist will add up all the answers and score the test and find that it means the patient needs Adderall, and then the psychiatrist will give the patient Adderall and feel good about it.

Some people will occasionally find some little issue with one patient’s story, deny them Adderall, and then ride out the moral high for weeks, feeling so virtuous that they can give the next few people Adderall and feel good about it.

Some people will demand multiple evaluation sessions, lots of laboratory tests, make a patient tell them their whole life story. And after learning that they had a bad relationship with their stepfather in 8th grade and still have sexual hangups over that time they ejaculated prematurely with Sally one time in freshman year, the psychiatrist will give the patient Adderall and feel good about it.

I have been guilty of all of these at one time or another. I still wrestle with these issues a lot. The latest step in my evolving position was reading Kelsey’s blog post about having ADHD and trying to get Adderall. Her doctor gave her a list of things she had to do before he would give her Adderall, and she – having ADHD – got distracted and never did any of them.

(by my calculations, that decreased Kelsey’s effectiveness by 20%, thus costing approximately 54 billion lives.)

So lately I’ve been trying to be smarter about all this. What about good old consequentialism? Most people will get some benefit from Adderall, but it’s a powerful drug with a lot of potential risks. Maybe I should figure out exactly how bad the risks are, and then I can figure out how bad people’s concentration problems would have to be for the risks to be outweighed by the benefits.

Trying to discover the risks of Adderall is a kind of ridiculous journey. It’s ridiculous because there are two equal and opposite agendas at work. The first agenda tries to scare college kids away from abusing Adderall as a study drug by emphasizing that it’s terrifying and will definitely kill you. The second agenda tries to encourage parents to get their kids treated for ADHD by insisting Adderall is completely safe and anyone saying otherwise is an irresponsible fearmonger. The difference between these two situations is supposed to be whether you have a doctor’s prescription. But what if you are the doctor, trying to decide who to prescribe it to? Then what? All they tell you in medical school is to give it to the people who actually have ADHD – which, I repeat, is kind of meaningless.

This post records my attempt to figure out something better. Apologies for the length.

II. Medical Risks

Most people on stimulants will have some minor side effects. Feeling jittery, feeling cold, feeling sick, leg cramps, arm cramps. Some will feel “like a robot” or otherwise psychologically uncomfortable. But these don’t discourage me from giving stimulants to people who need them. If someone needs the drugs, let them try them, see how many side effects they get, and decide for themselves whether it’s worth it.

I’m much more concerned about side effects that are permanent and dangerous. These people give us a list:

Sounds pretty bad. On the other hand, I’ve prescribed Adderall to lots of people and none of them have ever gotten any of these things, except mild hypertension. How common are these, really?

The best source for exact numbers is the guidelines by sinister-sounding European organization EUNETHYDIS. I’ll use US medical database UpToDate as a secondary source. Both lump together Adderall and Ritalin – something I’ll be doing too throughout most of this essay, except where it becomes important to distinguish them.

Seizures: EUNETHYDIS doesn’t believe this happens at normal doses. They write:

There are occasionally concerns that, as with other psychotropics, ADHD medications may lower the seizure threshold so as to cause seizures in previously seizure-free individuals. However, in prospective trials, retrospective cohort studies and post-marketing surveillance in ADHD patients without epilepsies, the incidence of seizures did not differ between ADHD pharmacotherapy and placebo [relative risk (RR) for current versus non-use: 0.8 for methylphenidate; 1.1 for atomoxetine]

UpToDate is so unimpressed by this that they don’t even mention it. If you ask them about seizure risk for ADHD medications, they start telling you about bupropion. Overall I wouldn’t give these medications to people with a known seizure disorder without a neurologist’s approval, but they seem pretty okay otherwise.

Hypertension: Broad agreement from both sources that stimulants cause hypertension. EUNETHYDIS says an increase of 1-4 mmHg systolic, UpToDate says 3-8 mmHg.

The main problem with hypertension is that it increases risk for things like heart attacks. I calculated an average 40-year-old’s risk of heart attack and got 1% over 10 years. Adding on an average Adderall-related increase in blood pressure, I got 1.1%.

What about in high-risk adults? I calculated risk for a 60-year-old smoker with high cholesterol and high blood pressure. He has a 30.5% base risk of heart attack. Then I added in a typical Adderall-related rise in blood pressure, and he ended up at 32.0%. So Adderall only increased risk by about 1.5/1000 per year, even in this worst-case scenario. Also, I never meet 60-year-old smokers asking for Adderall. Overall this seems not too interesting.
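For transparency, here’s the back-of-envelope conversion from those 10-year figures to per-year increments. The inputs are just the numbers quoted above, not output from a real risk calculator:

```python
# Convert 10-year heart-attack risk figures into rough per-year increments.
# Back-of-envelope arithmetic only, not a cardiovascular risk model.
def extra_risk_per_year(base_10yr, with_drug_10yr):
    """Naive linearization: spread the 10-year risk difference evenly."""
    return (with_drug_10yr - base_10yr) / 10

avg_40yo   = extra_risk_per_year(0.010, 0.011)  # ~0.0001/yr, i.e. ~1/10,000
worst_case = extra_risk_per_year(0.305, 0.320)  # ~0.0015/yr, i.e. ~1.5/1000

print(f"average 40-year-old: {avg_40yo:.4f} extra risk per year")
print(f"60-year-old smoker:  {worst_case:.4f} extra risk per year")
```

Spreading a 10-year risk evenly across years is a simplification, but it’s good enough for an order-of-magnitude comparison.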

I haven’t looked into other hypertension-related problems like kidney disease as much, but these seem like problems that give a lot of warning, leaving plenty of time to talk with your doctor about whether to stop stimulants.

Heart Attack and Stroke: My usual sources fail me here, but BioMed Central Cardiovascular Disorders comes to the rescue. They review three major studies on stroke and heart attack in stimulant patients.

Study #1 finds that stimulant users have 3x more risk of transient ischaemic attack (a small mini-stroke that does no lasting damage), but no increased risk of stroke.

Study #2 is the best and biggest study, and finds that stimulants actually reduce heart attack and stroke. They suspected “healthy-user bias”; that is, only healthy people would use such a supposedly-dangerous medication.

Study #3 is the most recent, and found no increased risk of heart attack or stroke.

UpToDate writes:

Patients receiving stimulant therapy visited the emergency department or clinician office more frequently than those who were not treated with medications because of cardiac symptoms (10.9 versus 9.1 events per 1000 patient-years, adjusted hazards ratio 1.2, 95% CI 1.04-1.38) [26]. The cardiac symptoms included syncope, tachycardia, or palpitations. However, the group that received stimulant therapy was more likely to receive other psychotropic medications (antidepressants and antipsychotic agents), be male, and be non-Hispanic. The incidence of fatal and serious cardiac abnormalities was low and not different between the two groups, and was similar to the rates seen in the general pediatric population.

The roughly 2/1000 extra ER visits per patient-year sound bad, but “palpitations” means “your heart feels like it’s beating in a weird way”, and Adderall clearly causes this, so my guess is this is mostly just people feeling this and freaking out. I have had patients call me after feeling this and freaking out, and we dealt with it, and they were fine. If I hadn’t been available, maybe they would have gone to the ER and turned themselves into a statistic.

There might be some bias in these studies, but overall there doesn’t seem to be much evidence this is worth worrying about unless your risk of heart attack or stroke is already really high.

Psychosis: I saw this a lot when I worked in inpatient. Somebody would take five times the recommended dose, or take more Adderall every time they felt tired until they hadn’t slept for a week, and then they would start hearing voices or feeling like something was crawling on their skin. After a day or two off Adderall, and a night or two getting a normal amount of sleep, they’d be fine. Take enough stimulants and you will become psychotic – but it’s rare on prescribed doses, and it usually resolves pretty quickly.

What dose can cause psychosis? Amphetamine-Induced Psychosis says:

Early studies demonstrated that amphetamines could trigger acute psychosis in healthy subjects. In these studies, amphetamine was given in consecutively higher doses until psychosis was precipitated, often after 100–300 mg of amphetamine. The symptoms subsided within 6 days.

Compare this to the standard daily dose of Adderall of about 10 – 60 mg.

Can psychosis ever happen at normal doses? EUNETHYDIS is skeptical. They write:

Data from population-based birth cohorts indicate that self-reported psychotic symptoms are common and may occur in up to 10% of 11-year-old children. In contrast, the prevalence of psychotic symptoms in children treated with ADHD drugs from RCTs is reported as only 0.19%. While this very low observed event rate in trials is likely to reflect a lack of systematic assessment and reporting, there is no compelling evidence to suggest that the observed event rate of psychotic symptoms in children treated with ADHD drugs exceeds the expected (background) rate in the general population. In the US FDA analysis, ADHD drug overdoses did not contribute significantly to reports of psychosis adverse events.

So basically, “kids are always kind of weird, studies say kids aren’t weird on Adderall, clearly they’re not paying attention, but it doesn’t look like things got any worse.”

UpToDate links these people, who say:

We analyzed data from 49 randomized, controlled clinical trials in the pediatric development programs for these products. A total of 11 psychosis/mania adverse events occurred during 743 person-years of double-blind treatment with these drugs, and no comparable adverse events occurred in a total of 420 person-years of placebo exposure in the same trials. The rate per 100 person-years in the pooled active drug group was 1.48. The analysis of spontaneous postmarketing reports yielded >800 reports of adverse events related to psychosis or mania. In approximately 90% of the cases, there was no reported history of a similar psychiatric condition. Hallucinations involving visual and/or tactile sensations of insects, snakes, or worms were common in cases in children.

I think their use of “psychotic events per person-year” is misleading. Their study includes 5717 people, which means that to accumulate 743 person-years each person must have been monitored for only a month and a half or so on average. But if you’re going to get psychotic on stimulants, usually it’s right after the stimulant is started. That means it might be better framed as “11/5717 patients had a psychotic event”, or even “one in every five hundred patients had a psychotic event”. Note that this matches the 0.19% number given by EUNETHYDIS. And the most common psychotic event was a feeling of snakes or insects on the skin which resolved after the drug was stopped, so we’re not talking “person is forever schizophrenic” here.
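Here’s the reframing spelled out, using only the study’s own numbers (11 events, 743 person-years, 5717 participants):

```python
# The same 11 psychosis/mania events, expressed three different ways.
events, person_years, patients = 11, 743, 5717

rate_per_100py = 100 * events / person_years        # the paper's framing
pct_of_patients = 100 * events / patients           # the per-patient framing
avg_followup_months = 12 * person_years / patients  # average monitoring time

print(f"{rate_per_100py:.2f} events per 100 person-years")
print(f"{pct_of_patients:.2f}% of patients had an event")
print(f"{avg_followup_months:.1f} months average follow-up per patient")
```

The per-person-years number looks alarming precisely because follow-up was so short; the per-patient number is the one that matches EUNETHYDIS’s 0.19%.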

Also, I feel like EUNETHYDIS makes a good point with the “kids are always weird” thing. Here’s one of the psychotic events mentioned in the paper:

A spontaneous report from the manufacturer of Strattera (atomoxetine) described a 7-year-old girl who received 18 mg daily of atomoxetine for the treatment of ADHD. Within hours of taking the first dose, the patient started talking nonstop and stated that she was happy. The next morning the child was still elated. Two hours after taking her second dose of atomoxetine, the patient started running very fast, stopped suddenly, and fell to the ground. The patient said she had “run into a wall” (there was no wall there). The reporting physician considered that the child was hallucinating. Atomoxetine was discontinued.

Have these people ever seen a child?

The methylphenidate prescribing information suggests a 0.1% risk of psychosis, which matches the other two studies pretty well.

Does stimulant psychosis always get better after the stimulant is discontinued? My strong impression is “yes”, but I am told that this study claims 5% to 15% of stimulant psychosis patients do not recover. I cannot find the full text to figure out exactly what they mean, and it looks like it was done on chronic meth addicts rather than prescription users.

So a few lines of evidence converge on an estimate that 0.1%-0.2% of children who use prescription stimulants become psychotic. I don’t know numbers for adults, but a few people who have read drafts of this article mention they have personally seen someone get psychotic on Adderall, which seems anecdotally to argue for a higher rate. I don’t know if those people were using it correctly or using anything else alongside it. Of people who get psychotic on Adderall, perhaps 5-15% stay psychotic after discontinuation (I predict this is about meth-heads and exaggerated).

Aggressive Behavior: This is just going to be the same as psychosis. Adderall isn’t going to magically turn gentle old grandmothers into killing machines. If you’re already a kind of violent guy, and you take a lot of Adderall, maybe it’ll push you over the edge.

Sudden Death: This is usually cardiovascular – something goes very wrong with your heart and it stops beating without warning. But UpToDate writes:

Reports of unexpected deaths of children receiving stimulant therapy have led to concerns that these medications increase the risk of cardiovascular (CV) adverse events, including sudden unexpected deaths (SUD) [1,2]. However, large cohort studies have not shown an increased risk of serious CV adverse events in children treated with stimulant therapy compared with the general pediatric population…

Among adult patients who are either current or new users of stimulant medications, there appears to be no increased risk of serious CV events. This was illustrated in a large retrospective cohort study of adults (age range 25 to 64 years) based on data from four large health plans that was done in parallel with the study performed in children discussed above [3,16]. Multivariant analysis demonstrated a lower risk of serious CV events (defined as myocardial infarction, stroke, and sudden cardiac death) in individuals who were current users of stimulant therapy versus nonusers (relative risk [RR] 0.83, 95% CI 0.72-0.96). In new users of ADHD medications compared with controls, the risk of serious CV events was even lower (RR 0.77, 95% CI 0.63-0.94). However, there may be a modest amount of healthy-user bias that favored the current users of stimulant therapy. To adjust for this potential bias, a multivariant analysis that compared current users with individuals who had used stimulant therapy more than one year ago (defined as remote use) found no difference in the risk of serious CV events (RR 1.03, 95% CI 0.86-1.24). The crude incidence of serious CV events in the overall cohort was 1.34 per 1000 person-years. These results showing no increased risk of serious CV events are consistent with previously discussed studies in pediatric patients.

And EUNETHYDIS:

when the number of patient-years of prescribed medication was incorporated into the evaluation, the frequency of reported sudden death per year of ADHD therapy with methylphenidate, atomoxetine or amfetamines among children was 0.2–0.5/100,000 patient-years [99]. The analysis of 10-year adverse-event reporting in Denmark resulted in no sudden deaths in children taking ADHD medications [5]. While it is recognised that adverse events are frequently under-reported in general, it is likely that sudden deaths in young individuals on relatively new medications may be better reported. Death rates per year of therapy, calculated using the adverse events reporting system (AERS) reports and prescription data, are equivalent for two ADHD drugs (dexamfetamine and methylphenidate): 0.6/100,000/year [37]. (The accuracy of these estimates is limited however, for instance because in moving from number of prescriptions to patient-year figures assumptions must be made about the length of each prescription). It seems likely, using these best available data, and assuming a 50% under-reporting rate, that the sudden death risk of children on ADHD medications is similar to that of children in general.

Despite this, I am always very wary of prescribing stimulants to anyone with any history of heart problems. I always make these people go see a cardiologist. The cardiologist always says yeah, sure, whatever, but it makes me feel a lot better.

In General: Probably the most informative passage I’ve seen on the medical risks of stimulants is this one from Misuse Of Study Drugs:

In 1990, there were about 271 emergency room reports involving methylphenidate, 1,727 in 1998, and 1,478 in 2001 [32]. The total number of emergency department visits resulting from use of all psychotherapeutic CNS stimulants was 4091 in 1998, 3644 in 1999, 3336, in 2000, 3146 in 2001 and 3275 in 2002 [33]. There are approximately 25 emergency room deaths per year among up to 3 million users of prescription stimulant drugs (including both those medically prescribed and not prescribed these drugs). Thus, the likelihood of dying from such drugs appears to be approximately 1 in 120,000.
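The quoted “1 in 120,000” is just the two numbers in that passage divided out:

```python
# Check the quoted death-rate arithmetic: 25 deaths among ~3 million users.
deaths_per_year = 25
users = 3_000_000

odds = users / deaths_per_year
print(f"1 in {odds:,.0f}")  # 1 in 120,000
```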

But isn’t 25 deaths per year still bad?

Here’s another passage from the same source:

Intravenous use of prescription stimulants is particularly dangerous. In particular, intravenous (IV) abuse of methylphenidate may result in talcosis. Talcosis is a reaction to talc, a filler and lubricant in methylphenidate and other oral medication. This inflammation reaction occurs in the lungs and related consequences include lower lobe panacinar emphysema.

People aren’t dying because their psychiatrist gave them Adderall 10 mg bid. They’re dying because they ground it up, injected it into their bloodstream, and had their lungs turn into talc. The people dying of stimulant use are doing things so horrifying you could not possibly imagine them even if you took ten times your prescribed dose of Adderall and used all of it to focus on writing a report on the most horrifying ways you could possibly use Adderall. Did you know that 13% of Massachusetts college students have ground up Ritalin and snorted it up their nose? Did you know the first case report of Ritalin abuse involved a patient who was taking 125 Ritalin pills daily? All of these people are out there, and still only 25 people die of stimulant-related causes per year!

My impression is that, in particularly at-risk people, stimulants may add +1/1000 to the risk of heart attacks per year, and +1/10,000 risk of long-term psychosis. Everything else in this category can be rounded down to zero.

III. Addiction

What about addiction risk?

The data on this are really poor because it’s hard to define addiction. If a prescription stimulant user uses their stimulants every day, and feels really good on them, and feels really upset if they can’t get them…well, that’s basically the expected outcome.

Wilens et al finds that over ten years, 10% of adolescents surveyed got high on their medication, and 22% sometimes used more than prescribed. Does that mean those 10% or 22% are “addicted”? Not really – some of them probably have a tough day one time, so they take two Adderall that day and no Adderall the day after. As for getting high – well, a lot of people get high on alcohol who aren’t alcoholics, and a lot of people get stoned who nobody would call addicted to marijuana.

A lot of studies in this area ask a somewhat different question: whether children put on stimulants are more likely to be addicted to drugs in general as adults. Most of them find these children are less likely, which is hypothesized to be an effect of successfully treating their ADHD.

And there’s a book on narcolepsy which apparently claims that anywhere from under 1% to 3% of people taking stimulants for that condition get addicted, but I can’t track down their methodology or really anything beyond one reference. And narcoleptics are a different population than ADHD patients and results might not generalize (though that number sounds kind of right).

I don’t think there are good data here, but my intuition and personal experience are that “addiction” of the sort you get with heroin or tobacco is very rare, at least when responsible people without a personal or family history of addictive behavior take stimulants as prescribed. Most people agree the risk is lower for extended-release stimulants (eg Adderall XR), and very low for Vyvanse.

IV. Tolerance

Tolerance is when you keep needing more and more of a drug to get an effect. In the worst cases, your baseline changes so that you need the drug to feel normal. The concern is that long-term use of Adderall will make your attention naturally worse, so that medicated-you is only as good at concentrating as unmedicated-you was before, and unmedicated-you is even less attentive.

We know tolerance occurs over the short-term, and we encourage patients to take a few days off Adderall every week or two to let their bodies reset. More concerning is whether it happens over the space of years, where people’s bodies adjust in a more permanent way.

The best study of this phenomenon was the Multimodal Treatment of ADHD (MTA) study, which randomized children to be treated with stimulants or “behavioral therapy” (eg learning coping skills, etc). Behavioral therapy for ADHD is not very good and I interpret it as a nice way of saying placebo.

For the first year, the kids getting stimulants did much better on all metrics than behavioral-therapy-only. For the second year, they did a little better. By the third year, they were the same. In the eighth year, which was as long as anyone kept checking, they were still the same.

This is pretty concerning. It sounds like over three years people’s bodies built up some tolerance to stimulants, after which they provided no further benefit. The only saving grace is that there’s no evidence of stimulants ever making people worse than normal (even in people who stopped the medications later).

People have critiqued this study on the grounds that although they started off giving the experimental group stimulants vs. the control group behavioral therapy, any patient could switch treatments at any time and many of them did. By year three when the groups equalized, only 66% of the medication group was on medication, and a full 43% of the therapy-only group was. So maybe this just drowned out any original effect?

The authors of the study are not convinced:

It is tempting to conclude that intensive medication management beyond 14-months could have resulted in continued differences between the randomly assigned treatment groups…In a previous multimodal treatment study where medication was carefully titrated and monitored for two years, treatment gains were maintained for the entire period. However, after 14 months the MTA became an uncontrolled naturalistic follow-up study and inferences about potential advantages that might have occurred with continued long-term study-provided treatment are speculation. Moreover, with one exception (math achievement), children still taking medication by 6 and 8 years fared no better than their non-medicated counterparts despite a 41% increase in the average total daily dose, failing to support continued medication treatment as salutary (at least, continued medication treatment as monitored by community practitioners)…Finally, a previous analysis of the MTA data through 3 years did not provide evidence that subject selection biases towards medication use in the follow-up period accounted for the observed lack of differential treatment effects.

Thus, although the MTA data provided strong support for the acute reduction of symptoms with intensive medication management, these long-term follow-up data fail to provide support for long-term advantage of medication treatment beyond two years for the majority of children—at least as medication is monitored in community settings.

As far as I can tell, pretty much everyone has ignored this, using the usual range of meaningless excuses like “Well, treatment must be individualized to the patient”.

This is very tempting, because for example I have a lot of patients who have been on stimulants for decades, are still very excited about them, and think they’re doing great. Every so often these patients go off their stimulants, are very unhappy, and insist on going back on them again. They say that pre-stimulant, they were scatterbrained and always losing things and missing appointments and failing to do work, and now, after ten years of stimulant treatment, they feel great.

We can imagine ways these people are wrong. Maybe the stimulants worked for the first three years, stopped working so gradually they didn’t notice, and now they only notice the difference between being on stimulants (baseline), and immediate post-stimulant withdrawal (very bad). But this would require a lot of people to be really wrong about their internal experience.

I asked a question on the Slate Star Codex survey about this. People on Adderall more than one month were asked to tell me whether they had no tolerance problems, some tolerance requiring dose escalation, or high tolerance that made the medications stop working entirely. The preliminary results:

Adderall for between one month and one year: (n = 124)
62 (50%) No tolerance, worked as well as ever
57 (46%) Some tolerance, or required dose escalation, but still worked well in general
5 (4%) High tolerance, stopped working

Adderall for one to five years: (n = 117)
33 (28%) No tolerance, worked as well as ever
78 (67%) Some tolerance, or required dose escalation, but still worked well in general
6 (5%) High tolerance, stopped working

Adderall for more than five years: (n = 59)
23 (39%) No tolerance, worked as well as ever
33 (56%) Some tolerance, or required dose escalation, but still worked well in general
3 (5%) High tolerance, stopped working
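
The percentages above can be recomputed from the raw counts (a quick sketch):

```python
# Recompute the survey percentages from the raw response counts.
# Order: [no tolerance, some tolerance, high tolerance].
groups = {
    "1 month - 1 year": [62, 57, 5],
    "1 - 5 years":      [33, 78, 6],
    "> 5 years":        [23, 33, 3],
}
for label, counts in groups.items():
    n = sum(counts)
    pcts = [round(100 * c / n) for c in counts]
    print(label, "n =", n, pcts)
```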

In all three categories, nearly everyone fell into either “no tolerance” or “some tolerance but still worked well”, with only about 5% saying the tolerance became a big problem. This matches my clinical experience. So either I’m right, or the problem where they get confused and forget their baseline is affecting my survey-takers.

There are occasional claims that magnesium or some other substance can help reverse Adderall tolerance. As far as I know these have never really been investigated.

So: there’s no good evidence that taking Adderall will actively make your ADHD worse in the long run. There is good evidence from clinical trials that benefits will decrease to zero over the space of a few years, apparently contradicted by the personal experiences of doctors and patients. Overall not sure what to do with this one.

V. Neurotoxicity

There’s some evidence that amphetamines can cause permanent cellular damage, but it’s not clear whether this happens in humans at typical therapeutic doses.

If you give rats very high doses of IV amphetamines, they accumulate so much dopamine in the cytoplasm of their neurons that it causes oxidative stress and destroys dopaminergic nerve terminals. This doesn’t happen to rats at doses matching human doses of Adderall. But it does happen at those doses to squirrel monkeys. At least this is the claim:

Adult baboons and squirrel monkeys were treated with a 3:1 mixture of D/L–amphetamine similar to the pharmaceutical Adderall for 4 weeks. Plasma concentrations of amphetamine (136±21 ng/ml) matched the levels reported in human ADHD patients after amphetamine treatment lasting 3 weeks (120–140 ng/ml) or 6 weeks in the highest dose (30 mg/day) condition (120 ng/ml). When the animals were killed 2 weeks after the 4-week amphetamine treatment period, both non-human primate species showed a 30–50% reduction in striatal dopamine, its major metabolite (dihydroxyphenylacetic acid (DOPAC)), its rate-limiting enzyme (tyrosine hydroxylase), its membrane transporter and its vesicular transporter. These consequences are similar, if not identical to the effects of neurotoxic doses in rodents.

I’m not really sure what they’re getting at here – surely they’re not saying just one month of Adderall permanently decreases striatal dopamine by 50%? But it sounds like something bad is happening, and since humans are more like monkeys than rats, maybe there’s cause for concern.

What would it look like if people got this kind of brain damage? One likely possibility is Parkinson’s disease, a condition caused by poor dopaminergic function in the brain. If you were going to tell a story about how Adderall could cause long-term neurotoxic damage, it would look like gradual decrease of brain dopaminergic function without obvious effects through most of the lifespan (since most people have dopaminergic function to spare). As the patient got older and started naturally losing brain function, Parkinson’s would appear. This happens to genetically and environmentally predisposed people anyway (which is why old people get Parkinson’s so often), but in this scenario amphetamine use would present an extra risk factor.

Several studies have shown that meth addicts do have higher rates of Parkinson’s disease. This one says people hospitalized for meth addiction are 60% more likely to get Parkinson’s than people hospitalized for other reasons. This one finds Parkinson’s rates three times higher in meth addicts compared to non-drug-users.

What about at therapeutic doses? This article claims there was a study that found people who used Benzedrine and Dexedrine (early forms of prescription amphetamine) in the 1960s have rates of Parkinson’s disease about 60% higher than non-users today, but I can’t find the study itself and I don’t know the methodology. Another study finds similar results. Since both ADHD and stimulant addiction are very hereditary, you could make an argument that people who already have problems with their dopamine system are more likely to get Parkinson’s later on. There’s a little bit of conflicting evidence for this. Also, ADHD patients might have three times the rate of dementia with Lewy bodies, a condition closely related to Parkinson’s. On the other hand, there doesn’t seem to be any genetic connection. Overall my guess is this is not what’s going on.

About 1-2% of people will get Parkinson’s if they live long enough. If Adderall increases that risk 60%, then presumably it could cause a 1% absolute increase in risk.
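
Translating that relative risk into absolute terms (a sketch using the 1–2% lifetime baseline above):

```python
# A 60% relative increase on a 1-2% lifetime baseline works out to
# roughly a 0.6-1.2 percentage point absolute increase.
for baseline in (0.01, 0.02):
    increased = baseline * 1.60
    print(f"baseline {baseline:.0%} -> {increased:.1%} "
          f"(+{increased - baseline:.1%} absolute)")
```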

Some people claim various substances (magnesium, minocycline, etc) will protect your brain from amphetamine neurotoxicity. None of these have been studied in anywhere near the depth they would need to be to make me feel comfortable with this.

The good news is that as far as anyone can tell, Ritalin doesn’t cause these problems, even if you give it to rats at super-high doses. It seems to be a difference in the mechanism of action. I’ve been talking about Adderall this whole post because it’s the most commonly-used stimulant and some studies have suggested it’s more effective for a few people, but this might be a strong argument in favor of starting with Ritalin and only switching to Adderall if Ritalin fails. [EDIT: Never mind, recent studies suggest Ritalin is just as likely to cause this problem.]

So overall there is plausible, but not incontrovertible, evidence linking Adderall to a somewhat increased risk of Parkinson’s disease in old age.

VI. Summary

My impression is that the risks of proper, medically supervised Adderall use are the following:

1. High risk of minor short-term side effects that might make you want to stop taking the medication, but which leave no long-term issues
2. Extremely low risk of serious medical side effects like stroke or heart attack, except maybe in a few very vulnerable populations
3. Maybe one percent risk, but not literally zero risk, of addiction if patients are well-targeted by their doctors and use the medication responsibly.
4. Perhaps one in five hundred risk, but not literally zero risk, of psychosis. Some anecdotal evidence suggests it is more common than this. Most of these cases will be mild and resolve quickly. Some people find that a very small number of cases of stimulant-induced psychosis may be permanent, though I still find this hard to believe.
5. Some evidence for tolerance after several years, though most patients will continue to believe it is helping them. No sign of supertolerance where it actually makes the condition worse.
6. Plausibly 60% increased relative risk (+~1% absolute risk) for Parkinson’s disease with long-term use.
7. Unknown unknowns.

Of these, I find the psychosis, tolerance, and Parkinson’s to be the most concerning. But I am pretty upset about the overall terrible state of this research. In particular, nobody except the MTA takes the possibility of tolerance seriously, and the MTA results really ought to have inspired a lot more soul-searching and hand-wringing than they actually did. The numbers on addiction and psychosis are inexcusably terrible given how easy they would be to collect. Getting good data on the Parkinson’s risk would be harder, but one so-far-unexplored possibility would be to compare past prescription Adderall history to past prescription Ritalin history in Parkinson’s patients to adjust for the potential ADHD confounder. I really think somebody should do this.

Despite all this, I compare these risks to the risks of eating one extra strip of bacon per day and decide that overall this is not enough for me to stop prescribing stimulants to patients who I think might benefit from them. These are about the standard level of side effects for a powerful medication and I think there’s a major role for these in ADHD treatment as long as patients are well-informed about the risks they’re taking.

PS: I don’t accept blog readers as patients, and I won’t prescribe you Adderall just because you liked this post.

A History Of The Silmarils In The Fifth Age

[Spoiler warning for The Silmarillion]

I.

The Silmarillion describes the fate of the three Silmarils. Earendil kept one, and traveled with it through the sky, where it became the planet Venus. Maedhros stole another, but regretted his deed and jumped into a fiery chasm. And Maglor took the last one, but threw it into the sea in despair.

Well, Venus is still around. But what happened to the latter two? Surely over all the intervening millennia, with so many people wanting a Silmaril, they haven’t just hung around in the earth and ocean?

After some research, I’ve developed a couple of promising leads for the location of the Silmarils in the Fifth Age.

II.

I previously sketched out the argument that Maglor’s Silmaril probably belongs to a Los Angeles crime lord.

The movie Pulp Fiction centers around a mysterious briefcase. We’re never told exactly what’s inside, but we get some clues:

1. It’s described as “so beautiful” and captivates anyone who looks at it
2. It shines with an inner light
3. When Jules and Vincent are trying to get it, all the shots aimed at them miss, implying they’re miraculously immune to bullets, implying that they’re on some kind of divine quest.
4. Marsellus Wallace really wants to get it, and keeps killing anyone else who has it

So far this is only suggestive, but there’s more. While searching for the briefcase, Jules (!) keeps quoting a verse:

The path of the righteous man is beset on all sides by the inequities of the selfish and the tyranny of evil men. Blessed is he who, in the name of charity and good will, shepherds the weak through the valley of the darkness, for he is truly his brother’s keeper and the finder of lost children. And I will strike down upon thee with great vengeance and furious anger those who would attempt to poison and destroy my brothers.

They describe this as Ezekiel 25:17, but it isn’t. In fact, it isn’t anywhere in the Bible, and it doesn’t match any Biblical story. This isn’t from the Old Testament at all. It’s a description of the life of Maglor in the Silmarillion!

During the First Age, Maglor ruled “Maglor’s Gap”, a valley which connected the lands of the Elves and the lands of Morgoth. Maglor held Maglor’s Gap for 450 years until Morgoth finally conquered the valley; Maglor led the retreat of his people, thus “shepherding the weak through the valley of darkness”.

He fled to the fortress of his brother, Maedhros, in Himling, where he helped defend Maedhros’ lands and people in battle – making him “his brother’s keeper”.

In the ensuing battles, he captured the young Elrond and Elros, who had been orphaned after their parents fled across the sea, and adopted them – making him “the finder of lost children”.

As for “striking down with great vengeance and furious anger those who would attempt to poison and destroy my brothers”, that’s about as Noldor as it gets.

What is going on here, and why do we keep finding these connections to Maglor?

Maglor is unique as possibly the only Noldo still remaining in the world. According to Wikipedia:

Maglor, along with Galadriel and Gil-galad, was the greatest surviving Noldo at the beginning of the Second Age. There is speculation that he remained even after the Third Age in Middle-earth, forbidden forever from returning to Valinor.

If he were still alive in our times, he would remain bound by his oath and be hunting the Silmaril. So: could Marsellus Wallace, the mysterious gang boss who wants the briefcase so badly, be Maglor himself? Given that the name “Maglor” is a Sindarinization of his birth name “Makalaure”, “Marsellus” doesn’t even sound like much of a pseudonym.

The main argument against this point is that Tolkien’s elves are usually depicted as fair-skinned and lithe, but Marsellus Wallace is shown in the movie as a big black guy. Does this disprove the theory?

It would, unless Marsellus were under some kind of magical glamor to hide his true appearance. And there’s actually some evidence for this.

There’s one character in Pulp Fiction who is clearly able to cast illusion-related magic: Mia Wallace. In the parking lot of the restaurant, she tells Vinnie “Don’t be a…”. Then she traces a square in the air with her finger, and the square appears in glittering light. Marsellus Wallace is married to someone who can cast visual illusions.

But why should we believe Marsellus’ appearance is itself such an illusion? Well, in the scene with Jules and Brett, Jules puts a gun to Brett’s head and asks him what Marsellus looks like. Brett says he looks like a tall bald black guy, which seems to satisfy Jules.

The hit men try to play this off as some kind of intimidation thing, but they’re just going to shoot Brett anyway – there’s no need to intimidate him. It would only make sense if they’re actually checking how Marsellus appears to Brett – ie whether a certain illusion he’s projecting is working. When they follow up with “Does he look like a bitch?“, this is their foul-mouthed way of asking whether he looks androgynous. When Brett confirms that he looks masculine, this seems to satisfy the hit men, who then go ahead and shoot him. Unclear why they’re expecting the illusion to fail in Brett’s case, but it seems like if it has they’ll need to interrogate him further and maybe track down anybody else who might have learned too much.

How is Mia Wallace able to cast these illusions?

I would guess that “Mia” is actually Maia, ie one of the Maiar who is sent from Valinor to guide Elves and Men with their good counsel and magic powers. There’s a previous example of a female Maia marrying an elflord to guide him: Melian and Thingol. Mia is following in this tradition, and just as Melian granted Thingol’s kingdom invulnerability to attack, so Mia grants Maglor/Marsellus the ability to look like a big muscular black guy.

We actually have further proof of this in the movie. Mia overdoses on heroin and goes unconscious. It looks like she goes a really long time without breathing. You get anoxic brain injury in like four or five minutes; Mia was out way longer than that. But once they give her adrenaline, she instantly and completely recuperates in a medically implausible way. Suffice it to say that she’s proven beyond a shadow of a doubt that she doesn’t have a human circulatory system, and given us at least strong evidence that she is literally immortal.

I would guess that Maglor survived, found his Silmaril, lost his Silmaril again, and that Pulp Fiction is an account of him getting it back. “Quentin Tarantino” is probably a made-up pen name for a group of elvish historians – the name “Quentin” obviously deriving from “Quendi”, the elvish word for elves. “Tarantino” is more obscure, but it may be a reference to Tar-Atanamir, the Numenorean king who refused to die when his time came – something which must carry a lot of metaphorical associations for any elves remaining on Earth.

If all of this is true, Maglor’s Silmaril probably remains with Maglor in his Los Angeles mansion.

III.

The fate of Maedhros’ Silmaril is less clear, but one promising possibility is linked with the fate of Utumno.

Utumno was the fortress of the dark god Melkor before the First Age. It was built in the far north of Middle-Earth, “upon the borders of the regions of everlasting cold”. Tolkien Gateway writes that “the frigid temperatures of the northern regions were thought to originate from the evil of [Melkor’s] realm”.

What was Utumno like? Like most of Tolkien’s villains, Melkor was at least partly a technologist; his realm was one of forges and smithies ceaselessly building weapons for his war against the gods. This page describes it as “a fortress for war, with many armories, forges, dungeons and breeding pits.” Some of the descriptions sound like it was emitting pollution, destroying the land around it: “The lands of the far north were all made desolate in those days; for there Utumno was delved exceeding deep, and its pits were filled with fires and with great hosts of the servants of Melkor.”

Who manned these factories? Enslaved elves. As per the book, “All those of the Quendi who came into the hands of Melkor, ere Utumno was broken, were put there in prison, and by slow arts of cruelty were corrupted and enslaved”.

Eventually the gods decided enough was enough and marched against Utumno with a mighty host led by Tulkas, God of War. He wrestled with Melkor, defeated him, and bound him with a mighty chain.

What happened to Utumno after this? The Silmarillion is vague, but in retrospect it’s super obvious. What happened to the magical factory at the North Pole run by elves? Everyone knows the answer to that one!

Presumably Tulkas and the other gods, after defeating Melkor, decided it was poetically appropriate to turn Utumno from a place of darkness to a wonderland of holiday cheer. The elves agreed to stay on to help, and they repurposed Melkor’s forges to create toys for children around the world.

“Santa Claus” supposedly derives from St. Nicholas, on the grounds that “Santa” means “saint” and “Claus” is short for “Nicholas”. But “Santa” means a female saint; a male saint is “San”. Santa is male, so a more reasonable derivation would be “San Tulkas”. Once a year, Tulkas goes forth and distributes the toys created by the elves of Utumno.

(remember, the Silmarillion describes Tulkas as a huge bearded man who “laughs ever, in sport or in war, and even in the face of Melkor he laughed in battles before the Elves were born”. And remember, of his wife Nessa, it says “Deer she loves, and they follow her train whenever she goes in the wild”. Having deer follow your family around everywhere sounds pretty annoying, but at least it gives you a ready-made supply of draft animals.)

Since we never see Santa’s workshop, it must be hidden from the world in the same manner as the Undying Lands. How does Tulkas cross back into the mortal world to deliver gifts?

The only successful example of such a journey we have from Tolkien is that of Earendil, who travels from Middle-Earth to the Undying Lands using a Silmaril worn on his brow. Later, even after the two worlds are separated entirely, he is able to use the same Silmaril to voyage through the sky in his flying boat. “The wise have said that it was by reason of the power of that holy jewel that they came in time to waters that no vessels save those of the Teleri had known”. So presumably any living being with a Silmaril upon their head can fly through the gulfs between the worlds safely.

Tulkas is a god and should have no trouble finding the only unclaimed Silmaril, the one Maedhros dropped into a chasm in the earth. His main issue would be preventing the surviving Noldor from learning what he has and invoking their vendetta. He would have to disguise it as something else, something so ridiculous that the stick-up-their-ass Noldor would never think to identify it with their holy jewels.

So…

Rudolph the Red-Nosed Reindeer
Had a very shiny nose
And if you ever saw it
You would even say it glows…

Posted in Uncategorized | Tagged , | 99 Comments

Preregistration Of Hypotheses For The SSC Survey

[This post is about the 2018 SSC Survey. If you’ve read at least one blog post here before, please take the survey if you haven’t already. Please don’t read on until you’ve taken it, since this could bias your results.]

I’m preregistering my hypotheses for the survey this year. So far I’ve glanced at Google’s bar graphs for each individual question but haven’t started exploring relationships yet, so I’m not cheating too badly. I’ll still look for things I haven’t preregistered, but I’ll admit they’re preliminary results only. This is the stuff I’ve been thinking about beforehand and will be taking more seriously:

1. I plan to replicate the general thrust of last year’s results reported in Can We Link Perception And Cognition on the sample of new people who didn’t take the survey last year. In particular, I’m expecting that weirder, more autistic, more liberal, more schizophrenic, and more transgender people will be more likely to display unusual patterns of perception (hollowness or ambiguity) in the Hollow Mask illusion. I expect this to become much more obvious since I’ve included three examples of the illusion this year including one that seems to give a wider diversity of results.

1a. I plan to replicate the results from last year that people who were better at noticing duplicate “the’s” are more likely to display unusual patterns of perception on the Hollow Mask illusion.

2. I plan to conceptually replicate Mitchell et al’s study showing that autistic people are less susceptible to the Shepherd Table Illusion.

3. I plan to conceptually replicate Caparos et al’s study showing that politically further-right people are more likely to use global processing on a Navon task (eg when there’s an H made of tiny Es, they see the H more than the Es).

4. I plan to investigate a general construct of “first sight and second thoughts” that involves people being better able to see what’s actually there, and less susceptible to illusions, priors, stereotypes, and assumptions. This will involve correlations between the two Duplicate Thes illusions, the Hollow Mask illusion, the Shepherd Table illusion, the Cookies illusion, the Parentheses palindrome, the Map riddle, the Surgeon riddle, the Switched Answers task, the Cognitive Reflection test, and the Wason task.

4a. If I can figure out how to get a common factor out of all of these, I plan to see if it’s the same thing I’m looking at in 1, and how it relates to the same groups.

4b. Whether this relates to a general willingness to believe strange or unpopular things. Check vs. AI risk concern and HBD support.

5. I plan to investigate a general construct of “ambiguity tolerance” that involves people being okay with a superposition of different conflicting ideas. This will involve correlations between ambiguous results on the Hollow Mask illusion, the Spinning Dancer illusion and the Squares-Circles illusion, and with answers to the questions from the Tolerance Of Ambiguity and Tolerance of Uncertainty scales.

5a. Whether perceptual ambiguity relates to cognitive ambiguity. I want to check whether people with high ambiguity tolerance on the optical illusions are more likely to say their political opponents have some good points, are less likely to say their political opponents are evil, and are less likely to say the existing political system is justifiable. Also if they’re more likely to enjoy puns.

5b. To what degree this is the same construct as (1), and is stronger among the same demographic groups.

5c. I also want to see if people with high ambiguity tolerance give less extreme answers on questions in general. I’ll probably use Ambition, Social Status, Romantic Life, and Morality for this, just because these seem like complicated questions there’s no obvious right answer to.

5d. I plan to confirm previous studies showing low ambiguity tolerance correlates with conservative philosophy; check vs. Political Spectrum 1-10. I predict that this will be stronger for populists than for “business conservatives”, so I expect the low ambiguity correlation will be weak for generic conservatives, stronger for Trump supporters, strongest for people who identify as alt-right.

6. I plan to investigate whether autistic people are more likely to give process-centered rather than person-centered answers to the two political categorization questions (categorizing Nazis, categorizing civil disobedience on gay marriage). That is, neurotypical people will be more likely to categorize based on which side wins, and autistic people will be more likely to categorize based on what procedures were followed (eg violence, civil disobedience).

6a. I also want to investigate how these correlate with political views. I may end up controlling for this as a confounder in (6) above.

6b. This is a totally wild out-of-left field idea, but I suppose I should check how these relate to the Navon figures since they’re both about categorization.

7. I plan to confirm or disprove, once and for all, whether our community has more older siblings. For lack of a fancier way to do this, I’ll take the set of all people who have exactly one sibling, and see what percent of them are older vs. younger. If it’s significantly above 50% older, I’m going to interpret this as a birth order effect. I’ll do the same with the set of people who have two siblings, three siblings, etc, and combine them all for a final determination. Half-siblings will be ignored. If you have any problems with this methodology, tell me now.
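
The combination step described above can be sketched like this (counts are made-up placeholders; the real numbers would come from the survey data, and I'm reading "combine them all" as summing observed vs. expected oldest-child counts across sibship sizes):

```python
# For each k, take respondents with exactly k siblings and count how many
# are the oldest. Under the null, the chance of being the oldest child in
# a (k+1)-child family is 1/(k+1).
# Counts below are hypothetical placeholders, not survey results.
data = {  # k siblings -> (respondents, number who are the oldest)
    1: (400, 260),
    2: (150, 70),
    3: (50, 20),
}

total = oldest = 0
expected = 0.0
for k, (n, n_oldest) in data.items():
    total += n
    oldest += n_oldest
    expected += n / (k + 1)  # expected oldest-child count under the null

print(oldest, "observed vs", expected, "expected of", total)
```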

7a. If I find we’re disproportionately older, try to use subgroups to figure out where the effect is stronger or weaker, to try to find exactly what’s going on. For example, are Less Wrongers more older-skewed than SSC readers in general?

7b. Birth order by autism, Openness, and IQ/SAT.

7c. One traditional birth-order claim is that younger children are more rebellious, so check birth order vs. people who think system needs to be fine-tuned or destroyed.

8. I plan to conceptually replicate studies showing that the more older brothers (but not younger brothers, or older or younger sisters) you have, the more likely you are to be gay.

8b. See if this predicts anything else: bisexuality, transgender, gender non-conformity, political leftism, autism, possibly ‘first sight and second thoughts’, possibly ‘ambiguity tolerance’.

9. I plan to see whether people with ADHD are more likely to prefer the buzzing city aesthetic to the quiet village aesthetic, more likely to rate themselves as more risk-taking, and more likely to describe themselves as ambitious.

10. I plan to investigate the hypothesis about sexual harassment mentioned here: that it’s higher in gender imbalanced industries only due to potential-perpetrator-to-victim ratio. I predict that in relatively gender imbalanced industries (in terms of survey categories, all three Computers fields, Finance, Physics, and Mathematics) compared to relatively gender-balanced industries (Health Care, Psychology, Art, Law, Biology), a higher percent of women will report being harassed at work, but the percent of men reporting harassing at work will remain the same.

10b. I predict that the more people identify with social justice, and the more positively they feel about feminism, the more likely they are to report both being harassed and harassing others, due to greater awareness and a lower threshold for reporting. I predict poor social skills and autism spectrum traits will predict a greater likelihood of saying one is a harasser, due to causing unintentional offense. I predict people who are harassed more at work will also be harassed more outside of work.
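The industry comparison in (10) boils down to testing whether two proportions differ. A minimal sketch with made-up counts (the real numbers would come from the survey’s industry categories):

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Invented counts, purely to illustrate the comparison: women reporting
# harassment in gender-imbalanced vs. gender-balanced industries.
z, p = two_proportion_z(120, 400, 80, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The same test, run on men’s rates of *reporting harassing*, would check the prediction that those stay flat across the two industry groups.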

11. A long time ago, I randomized people into groups and made them read articles on AI risk to see how it changed their minds. The effect mostly persisted after one month. Since those groups were randomized by birth date, and I asked respondents their birthdates, I plan to see if those effects continue to persist after a year.

These are mostly conceptual descriptions of what I’m going to do rather than algorithmic descriptions of exactly how I’m going to process the data. Part of that is that a lot of this involves statistical techniques at the limits of my abilities, and I’m going to have to see if I can actually do them. Most important, I would like to learn enough about factor analysis to actually check for a General Factor Of First Sight/Second Thoughts and a General Factor of Ambiguity Tolerance. If I have them, I’d like to use them to see if they correlate with the other things I’m wondering about. If I can’t make this work or persuade someone else to do it for me, I’ll just eyeball the correlations between individual questions, see which ones are highest, and maybe take an average of those questions or something.
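For what the factor-analysis step might look like, here is a sketch using scikit-learn’s FactorAnalysis on simulated data; the five items are invented stand-ins for the actual survey questions:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)  # the hypothetical general factor

# Five items that each load on the latent factor plus independent noise:
items = np.column_stack(
    [0.7 * latent + rng.normal(scale=0.5, size=n) for _ in range(5)]
)

fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(items)  # per-respondent factor scores

# The recovered scores should correlate strongly with the true factor
# (up to an arbitrary sign flip):
r = np.corrcoef(scores.ravel(), latent)[0, 1]
print(f"correlation with latent factor: {abs(r):.2f}")
```

The per-respondent scores could then be correlated against the other survey variables, exactly as the eyeball-and-average fallback would be.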

Mostly I won’t be doing anything fancy, or anything with too many forking paths through the data, but I plan to operationalize autism in two ways. First, a scale where professional diagnosis equals 3, self-diagnosis equals 2, a diagnosed family member equals 1, and no personal/family history equals 0. Second, the Autism Spectrum Quotient test I made people take at the bottom of the survey. I’m not at all confident these will correlate more than a weak amount, but I’ll try it and see. I might also try some kind of average of the two measures. Since there are a few things I expect to be correlated with autism – mathematical careers, bad response to clothing tags, poor social skills – I might check to see whether the first measure, the second measure, or the combination does a better job of predicting these, and stick with whichever one does. I’ll try not to base which measure I use on any of the variables I’m actually testing.
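The measure-comparison step could look something like this sketch, with simulated data standing in for the real survey columns:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
trait = rng.normal(size=n)  # the unobserved "true" trait

# Two noisy measurements of the trait, loosely mimicking the two measures:
history = np.clip(np.round(trait + rng.normal(size=n)), 0, 3)  # 0-3 scale
aq = 25 + 5 * trait + rng.normal(scale=4, size=n)              # AQ-like score

def standardize(x):
    return (x - x.mean()) / x.std()

combined = (standardize(history) + standardize(aq)) / 2

# An invented validation variable (e.g. poor social skills) that the
# true trait drives; whichever measure tracks it best wins.
validation = trait + rng.normal(size=n)

results = {}
for name, measure in [("history", history), ("AQ", aq), ("average", combined)]:
    results[name] = np.corrcoef(measure, validation)[0, 1]
    print(f"{name}: r = {results[name]:.2f}")
```

The last line of the plan corresponds to keeping the validation variables strictly separate from the hypothesis-testing variables, so the choice of measure can’t quietly favor the hypotheses.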