Slate Star Codex


Five More Years

Those yearly “predictions for next year” posts are starting to reach the limit of their usefulness. Not much changes from year to year, and most of what does change is hard to capture in objective probabilistic predictions.

So in honor of this blog’s five year anniversary, here are some predictions for the next five years. All predictions to be graded on 2/15/2023:

AI will be marked by various spectacular achievements, plus nobody being willing to say the spectacular achievements signify anything broader. AI will beat humans at progressively more complicated games, and we will hear how games are totally different from real life and this is just a cool parlor trick. If AI translation becomes flawless, we will hear how language is just a formal system that can be brute-forced without understanding. If AI can generate images and even stories to a prompt, everyone will agree this is totally different from real art or storytelling. Nothing that happens in the interval until 2023 will encourage anyone to change this way of thinking. There will not be a Truckpocalypse before 2023. Technological unemployment will continue to be a topic of academic debate that might show up if you crunch the numbers just right, but there will be no obvious sign that it is happening on a large scale. Everyone will tell me I am wrong about this, but I will be right, and they will just be interpreting other things (change in labor force composition, change in disability policies, effects of outsourcing, etc) as obvious visible signs of technological unemployment, the same as people do now. AI safety concerns will occupy about the same percent of the public imagination as today.

1. Average person can hail a self-driving car in at least one US city: 80%
2. …in at least five of ten largest US cities: 30%
3. At least 5% of US truck drivers have been replaced by self-driving trucks: 10%
4. Average person can buy a self-driving car for less than $100,000: 30%
5. AI beats a top human player at Starcraft: 70%
6. MIRI still exists in 2023: 80%
7. AI risk as a field subjectively feels more/same/less widely accepted than today: 50%/40%/10%

The European Union will not collapse. It will get some credibility from everyone hating its enemies – Brexit, the nationalist right, etc – and some more credibility by being halfway-competent at its economic mission. Nobody will secede from anywhere. The crisis of nationalism will briefly die down as the shock of Syrian refugees wears off, then reignite (possibly after 2023) with the focus on African migrants. At some point European Muslims may decide they don’t like African migrants much either, at which point there may be some very weird alliances.

1. UK leaves EU (or still on track to do so): 95%
2. No “far-right” party in power (executive or legislative) in any of France, Germany, UK, Italy, Netherlands, Sweden, at any time: 50%
3. No other country currently in EU votes to leave: 50%

Countries that may have an especially good half-decade: Israel, India, Nigeria, most of East Africa, Iran. Countries that may have an especially bad half-decade: Russia, Saudi Arabia, South Africa, UK. The Middle East will get worse before it gets better, especially Lebanon and the Arabian Peninsula (Syria might get better, though).

1. No overt major power war in the Middle East (Israel spending a couple weeks destroying stuff in Lebanon doesn’t count): 60%
2. Mohammed bin Salman still in power in Saudi Arabia in 2023: 60%
3. Sub-Saharan Africa averages GDP growth greater than 2.5% over 2018 – 2023: 60%
4. Vladimir Putin is still in charge of Russia: 70%
5. If there’s a war in the Middle East where US intervention is plausible, US decides to intervene (at least as much as it did in Syria): 70%

Religion will continue to retreat from US public life. As it becomes less important, mainstream society will treat it as less of an outgroup and more of a fargroup. Everyone will assume Christians have some sort of vague spiritual wisdom, much like Buddhists do. Everyone will agree evangelicals or anyone with a real religious opinion is just straight-out misinterpreting the Bible, the same way any Muslim who does something bad is misinterpreting the Koran. Christian mysticism will become more popular among intellectuals. Lots of people will talk about how real Christianity opposes capitalism. There may not literally be a black lesbian Pope, but everyone will agree that there should be, and people will become mildly surprised when you remind them that the Pope is white, male, and sexually inactive.

1. Church attendance rates lower in 2023 than 2018: 90%

The crisis of the Republican Party will turn out to have been overblown. Trump’s policies have been so standard-Republican that there will be no problem integrating him into the standard Republican pantheon, plus or minus some concerns about his personality which will disappear once he personally leaves the stage. Some competent demagogue (maybe Ted Cruz or Mike Pence) will use some phrase equivalent to “compassionate Trumpism”, everyone will agree it is a good idea, and in practice it will be exactly the same as what Republicans have been doing forever. The party might move slightly to the right on immigration, but this will be made easy by a fall in corporate demand for underpriced Mexican farm labor, and might be trivial if there’s a border wall and they can declare mission accomplished. If the post-Trump standard-bearer has the slightest amount of personal continence, he should end up with a more-or-less united party who view Trump as a flawed but ultimately positive figure, like how they view GW Bush. Also, I predict we see a lot more of Ted Cruz than people are expecting.

1. Trump wins 2020: 20%
2. Republicans win Presidency in 2020: 40%

On the other hand, everyone will have underestimated the extent of crisis in the Democratic Party. The worst-case scenario is Kamala Harris rising to the main contender against Bernie Sanders in the 2020 primary. Bernie attacks her and her followers as against true progressive values, bringing up her work defending overcrowded California prisons as a useful source of unpaid labor. Harris supporters attack Bernie as a sexist white man trying to keep a woman of color down (wait until the prison thing gets described as “slavery”). Everything that happened in 2016 between Clinton and Sanders looks like mild teasing between friends in comparison. If non-Sanderites rally around Booker or Warren instead, the result will be slightly less apocalyptic but still much worse than anyone expects. The only plausible way I can see for the Dems to avoid this is if Sanders dies or becomes too sick to run before 2020. This could tear apart the Democratic Party in the long-term, but in the short term it doesn’t even mean they won’t win the election – it will just mean a bunch of people who loathe each other temporarily hold their nose and vote against Trump.

1. Sanders wins 2020: 10%
2. Democrats win Presidency in 2020: 60%

It will become more and more apparent that there are three separate groups: progressives, conservatives, and neoliberals. How exactly they sort themselves into two parties is going to be interesting. The easiest continuation-of-current-trends option is neoliberals+progressives vs. conservatives, with neoliberals+progressives winning easily. But progressives are starting to wonder if neoliberals’ support is worth the watering-down of their program, and neoliberals are starting to wonder if progressives’ support is worth constantly feeding more power to people they increasingly consider crazy. The Republicans used some weird demonic magic to hold together conservatives and neoliberals for a long time; I suspect the Democrats will be less good at this. A weak and fractious Democratic coalition plus a rock-hard conservative Republican non-coalition might be stable under Median Voter Theorem considerations. For like ten years. Until there are enough minorities that the Democrats are just overwhelmingly powerful (no, minorities are not going to start identifying as white and voting Republican en masse). I have no idea what will happen then. Maybe the Democrats will go extra socialist, the neoliberals and market minorities will switch back to the Republicans, and we can finally have normal reasonable class warfare again instead of whatever weird ethno-cultural thing is happening now?

1. At least one US state has approved single-payer health-care by 2023: 70%
2. At least one US state has de facto decriminalized hallucinogens: 20%
3. At least one US state has seceded (de jure or de facto): 1%
4. At least 10 members of 2022 Congress from neither Dems nor GOP: 1%
5. US in at least one new major war (death toll of 1000+ US soldiers): 40%
6. Roe v. Wade substantially overturned: 1%
7. At least one major (Obamacare-level) federal health care reform bill passed: 20%
8. At least one major (Brady Act level) federal gun control bill passed: 20%
9. Marijuana legal on the federal level (states can still ban): 40%
10. Neoliberals will be mostly Democrat/evenly split/Republican in 2023: 60%/20%/20%
11. Political polarization will be worse/the same/better in 2023: 50%/30%/20%

The culture wars will continue to be marked by both sides scoring an unrelenting series of own-goals, with the victory going to whoever can make their supporters shut up first. The best case scenario for the Right is that Jordan Peterson’s ability to not instantly get ostracized and destroyed signals a new era of basically decent people being able to speak out against social justice; this launches a cascade of people doing so, and the vague group consisting of Jordan Peterson, Sam Harris, Steven Pinker, Jonathan Haidt, etc coalesces into a perfectly respectable force no more controversial than the gun lobby or the pro-life movement or something. With social justice no longer able to enforce its own sacredness values against blasphemy, it loses a lot of credibility and ends up no more powerful or religion-like than eg Christianity. The best case scenario for the Left is that the alt-right makes some more noise, the media is able to relentlessly keep everyone’s focus on the alt-right, the words ALT-RIGHT get seared into the public consciousness every single day on every single news website, and everyone is so afraid of being associated with the alt-right that they shut up about any disagreements with the consensus they might have. I predict both of these will happen, but the Right’s win-scenario will come together faster and they will score a minor victory.

1. At least one US politician, Congressman or above, explicitly identifies as alt-right (in more than just one off-the-cuff comment) and refuses to back down or qualify: 10%
2. …is overtly racist (says eg “America should be for white people” or “White people are superior” and means it, as a major plank of their platform), refuses to back down or qualify: 10%
3. Gay marriage support rate is higher on 1/1/2023 than 1/1/2018: 95%
4. Percent transgender is higher on 1/1/2023 than 1/1/2018: 95%
5. Social justice movement appears less powerful/important in 2023 than currently: 60%

First World economies will increasingly be marked by an Officialness Divide. Rich people, the government, and corporations will use formal, well-regulated, traditional institutions. Poor people (and to an increasing degree middle-class people) will use informal gig economies supported by Silicon Valley companies whose main skill is staying a step ahead of regulators. Think business travelers staying at the Hilton and riding taxis, vs. low-prospect twenty-somethings staying at Airbnbs and taking Ubers. As Obamacare collapses, health insurance will start turning into one of the formal, well-regulated, traditional institutions limited to college grads with good job prospects. What the unofficial version of health care will be remains to be seen. If past eras have been Stone Age, Bronze Age, Iron Age, Information Age, etc, the future may be the Ability-To-Circumvent-Regulations Age.

1. Percent of people in US without health insurance (outside those covered by free government programs) is higher in 2023 than 2018: 80%
2. Health care costs (as % of economy) continue to increase at least as much as before: 70%

Cryptocurrency will neither collapse nor take over everything. It will become integrated into the existing system and regulated to the point of uselessness. No matter how private and untraceable the next generation of cryptocurrencies are, people will buy and exchange them through big corporate websites that do everything they can to stay on the government’s good side. Multinationals will occasionally debate using crypto to transfer their profits from one place to another, then decide that would make people angry and decide not to. There may be rare crypto-related accounting tricks approximately of the same magnitude as the “headquarter your company in the Cayman Islands” trick. A few cryptocurrencies might achieve the same sort of role PayPal has today, only slightly cooler. Things like Ethereum prediction markets might actually work, again mostly by being too niche for the government to care very much. A few die-hards will use pure crypto to buy drugs over the black market, but not significantly more than do so today, and the government will mostly leave them alone as too boring to crush.

1. 1 Bitcoin costs above $1K: 80%
2. …above $10K: 50%
3. …above $100K: 5%
4. Bitcoin is still the highest market cap cryptocurrency: 40%
5. Someone figures out Satoshi’s true identity to my satisfaction: 30%
6. Browser-crypto-mining becomes a big deal and replaces ads on 10%+ of websites: 5%

Polygenic scores go public – not necessarily by 2023, but not long after. It becomes possible to look at your 23andMe results and get a weak estimate of your height, IQ, criminality, et cetera. Somebody checks their spouse’s score and finds that their desirable/undesirable traits are/aren’t genetic and will/won’t be passed down to their children; this is treated as a Social Crisis but nobody really knows what to do about it. People in China or Korea start actually doing this on a large scale. If there is intelligence enhancement, it looks like third-party services that screen your gametes for genetic diseases and just so happen to give you the full genome which can be fed to a polygenic scoring app before you decide which one to implant. The first people to do this aren’t necessarily the super-rich, so much as people who are able to put the pieces together and figure out that this is an option. If you think genetics discourse is bad now, wait until polygenic score predictors become consumerized. There will be everything from “the predictor said I would be tall but actually I am medium height, this proves genes aren’t real” to “Should we track children by genetic IQ predictions for some reason even though we have their actual IQ scores right here?” Also, the products will probably be normed on white (Asian?) test subjects and not work very well on people of other races; expect everyone to say unbelievably idiotic things about this for a while.

1. Widely accepted paper claims a polygenic score predicting over 25% of human intelligence: 70%
2. …50% or more: 20%
3. At least one person is known to have had a “designer baby” genetically edited for something other than preventing specific high-risk disease: 10%
4. At least a thousand people have had such babies, and it’s well known where people can go to do it: 5%
5. At least one cloned human baby that survives beyond one day after birth: 10%
6. Average person can check their polygenic IQ score for a reasonable fee (doesn’t have to be very good) in 2023: 80%
7. At least one directly glutamatergic antidepressant approved by FDA: 20%
8. At least one directly neurotrophic antidepressant approved by FDA: 20%
9. At least one genuinely novel antipsychotic approved by FDA: 30%
10. MDMA approved for therapeutic use by FDA: 50%
11. Psilocybin approved for general therapeutic use in at least one country: 30%
12. Gary Taubes’ insulin resistance theory of nutrition has significantly more scholarly acceptance than today: 10%
13. Paleo diet is generally considered and recommended by doctors as best weight-loss diet for average person: 30%

There will be two or three competing companies offering low-level space tourism by 2023. Prices will be in the $100,000 range for a few minutes in suborbit. The infrastructure for Mars and Moon landings will be starting to look promising, but nobody will have performed any manned landings between now and then. The most exciting edge of the possibility range is that five or six companies are competing to bring rich tourists to Bigelow space stations in orbit.

1. SpaceX has launched BFR to orbit: 50%
2. SpaceX has launched a man around the moon: 50%
3. SLS sends an Orion around the moon: 30%
4. Someone has landed a man on the moon: 1%
5. SpaceX has landed (not crashed) an object on Mars: 5%
6. At least one frequently-inhabited private space station in orbit: 30%

Global existential risks will hopefully not be a big part of the 2018-2023 period. If they are, it will be because somebody did something incredibly stupid or awful with infectious diseases. Even a small scare with this will provoke a massive response, which will be implemented in a panic and with all the finesse of post-9/11 America determining airport security. Along with the obvious ramifications, there will be weird consequences for censorship and the media, with some outlets discussing other kinds of biorisks and the government wanting them to stop giving people ideas. The world in which this becomes an issue before 2023 is not a very good world for very many reasons.

1. Bioengineering project kills at least five people: 20%
2. …at least five thousand people: 5%
3. Paris Agreement still in effect, most countries generally making good-faith effort to comply: 80%
4. US still nominally committed to Paris Agreement: 60%

And just for fun…

1. I actually remember and grade these predictions publicly sometime in the year 2023: 90%
2. Whatever the most important trend of the next five years is, I totally miss it: 80%
3. At least one prediction here is horrendously wrong at the “only a market for five computers” level: 95%

If you disagree, make your own predictions with probabilities. I’m tired of people offering to bet me on these and I’m not interested unless you provide me overwhelmingly good odds.

Current list of updates here.

Posted in Uncategorized | Tagged | 531 Comments

Even More Search Terms That Led People To This Blog

[Previously in series: Search Terms That Have Led People To This Blog and More Search Terms That Have Led People To This Blog. Content warning: profanity, rape, and other unfiltered access to the consciousness of the Internet]

Sometimes I look at what search terms lead people to SSC. Sometimes it’s the things you would think – “slate star codex”, “rationality”, the names of medications I’ve written about.

Other times it’s a little weirder:

why is my sister so pretty
I mentioned this query last post, probably based on this article, and the onslaught hasn’t stopped.

my sis is so pretty
Sometimes I can pretend it’s just people happy for their family members’ good qualities…

my sister is really pretty
…many people, very very happy…

my sister is so sexy what to do
…and other times, not so much.

sweet sister so pretty
You can stop any time now.

how we attract our sister for sex

sister aroused by my touch
ANNNNNY TIME NOW.

sister pic

the fate of a cruel snake re arrange ssc answer
Other times people really seem to think I have an article on something but I have no idea what they mean.

what is the hormone responsible for soliloquising?
Other times I just have no idea.

hivemind ape and young girl army experiment
Garrett Jones, what exactly are you doing over at GMU?

Glasgow coma scale
I got a lot of these from people looking for this.

glassco coma scale
Some of the spellings were very creative

glass go coma scale
glascov coma scale
glasco comma scale
glawcow coma scale
glasscoma scale
glascoma score
glassma coma scale
glascow comma scale
glosvow coma scale
glass cow coma scale
glass comma scale

slate satr codex
slate star xodex
slate static codex
slate star cosex
slate star cpdex
str slate codex
sstar slate codex
astark star codex
slate state kodaks

delay cool condom codex

slate star codex of hate

i canot talret anything
You’re probably looking for the Slate Star Codex Of Hate

rapist linked to prostate cancer

which star sign is most likely to be a rapist

scott alexander the gay guy’s biography

scott alexander is no gay

fuck scott alexander endless

criticize the statement “you can see atoms”
It’s really dumb

fnord soros fnord
…did someone just try to see the fnords by typing it into Google? That’s great.

considered armed and dangerous for cow pox

i am polynesian and so is my husband both with brown eyes however our son has blue eyes how

if hundreds of americans die tragically today, it’ll be on account of sarcastic cocksucker citizens

unreasonable autism cures
It makes me so happy when our medical system is set up to satisfy someone’s needs this perfectly

whale…. medical cartel hoax . borax
I don’t know what conspiracy this person thinks connects all these things, but they should probably put some fnords in there if they want results.

victory lotto forecast for tomorrow tuesday 11-07-17
Nice try.

using secret pyramid to hit 3 digit lottery

was rene descartes racist

what is going on here? how can just the opinion of 1068 people always determine the opinion of the entire population, no matter how large it is? is statistics broken? is 1068 just a magical number?
It’s because of [fnord] the medical cartel, whales, and borax [/fnord].

there are many things i want to say but do not know how to say. hope you will understand

I am [person’s real name removed] iwant to join illumenatic member want can i do or who can help me in order direct me
Find a member of the medical cartel and say the code word “whale”. They’ll do the rest.

i want to join illuminati brotherhood church in uganda, south africa and kenya post comments on usa blogger
Find a whale and say the code word “medical cartel”. What happens next is up to you.

is fentanyl being used for population reduction by the illuminati
No, that’s what the borax is for

jews hate alcoholics anonymous

can i get a s sample ogre biscuit factory dimension

what decisions might the police make base on crime
One would hope all of them.

why is aa mostly bullshit
Looks like we’ve got a Jew here.

list of human experiences
1. Birth
2. Eating
3. Sleeping
4. Being attracted to your sister
5. Misspelling “Glasgow Coma Scale”
6. Using secret pyramid to hit 3 digit lottery

how can we deal with cactus person?

now if we can prove the electoral college was seeded with purely partisan voters for trump, and illegal, then we can have a new vote
…and as these documents clearly prove, Putin sent the files to Trump through the medical cartel, hidden in envelopes marked “WHALE BORAX”.

islamists deport murderous racists
Well, that was a wild garden path of a four-word sentence

bigotry xxx yup to video

creators of remote neural monitoring are gay designing an ass weapon

massage with bombastic words of wishing good lucky to all people doing matriculant

how to start a zombie story

100 statements about albion’s seed: four british folkways in america that almost killed my hamster

give directly illuminati

how to summon abraham lincoln

you are my pleasant gustatory sensation

i hate polyamory
polyamory people are ugly
are all polyamorous people ugly
why are polyamorous people ugly
polyamory aspies
polyamory is sick
polyamorists tend to be narcissists

good looking polyamorous
pics of women who are polyamory

versailles ohio alien military genetics
A clone of Louis XIV spliced with whale DNA by aliens is being stored in a secret base disguised as a borax mine beneath Columbus by the medical cartel.

many people agree that there should be “some sort” of restraint considering abortion
I think regardless of our beliefs on this issue, we can agree those “quotation marks” are creepy.

what is cost disease by elephant

victim to organ harvesting yankee bob found murdered
If I were a beneficiary of black market organ harvesting, I would be pretty concerned about the possibility that my new organs came from someone named “Yankee Bob”.

explain d theories of truth n d best one dat suits d saying if u can’t beat dem, join dem using events in university as a case study

you r destiny, you r fate, is the gift that god bestowed me. you so different that i had been before. anh is something special that i must keep. i hope time will help me answer everything. time will help me keep you.
I think this person has too much of the hormone responsible for soliloquizing.

satanic company (that public likes)
Apple. Trust me on this one.


More Testimonials For SSC

[Content note: various slurs and insults]

Last post I thanked some of the people who have contributed to this blog. But once again, it’s time to honor some of the most important contributors: the many people who give valuable feedback on everything I write. Here’s a short sample of some of the…most interesting. I’m avoiding names and links to avoid pile-ons. Some slightly edited for readability.

“A cowardly autistic cuckolded deviant Jew who uses his IQ to rationalize away wisdom”

“He’s part of the self-declared ‘Rationalish Community’. Imagine the ridiculous level of self-regard implied by that. Picture cb2 with a graduate degree. Scott Alexander, if brevity is the soul of wit, you’re a witless soulsucking fuck.”

“Dude sounds like a crackpot. Blaming Republicans for everything and hailing liberals for… well, that part isn’t clear exactly. Thanks for helping me find something to add to my “never read” collection.”

“The men tweeting about how bad the women’s march is are also the guys who didn’t get invited to parties + blamed it on feminism…I know a few men who make this seem like actual fact. Like that guy from Slate Star Codex.”

“I don’t know what I was expecting from a jew quack but I suppose reasonable fits the description.”

“Ross Douthat somehow manages to recommend a person with a theology less plausible than Catholicism”

“I refuse to read Slate Star Codex anymore. It has become the epitome of IYI (Intellectual Yet Idiot) “pragmatic”(i.e. spineless) centrism.”

“He wants his readers to adopt a strategy of misogynist sabotage.”

“Slate Star Codex is THE definition of ‘autism spiral into infertility and death'”

“Scott Alexander is a LessWrong cultist. He has ALWAYS been a tremendous asshole who thinks he’s Mister Fucking Spock.”

“Slate Star Codex is to cognitive dissonance what Goddard was to rocketry.”

“I used to really enjoy Slate Star Codex before joining the dark side, now I find the blog almost insufferably autistic.”

“Laughing my ass off as Slate Star Codex’s “The Anti-Reactionary FAQ” figuratively burns in the fires of Berkeley.”

“I’ve started to be bothered by clothes tags a little bit lately. I blame SSC for putting this idea in my head”

“Slate Star Codex was always a shill, but this was craven even for him”

“He *literally* thinks that humans are horses”

“This is entirely reductive statement from me but I think that in an important sense SSC is just the Scott Alexander ego inflation program. Some of the best blogs can function this way. However a reasonable about the disingenuous use of his explicitly stated preferences for objectivity and the unstated outcomes of his blog can be made. Is Scott a scientist promoting a radical new social program or perhaps he is interested in the cult of personality and trapping of psychiatric performance?”

“Sexist asses: It’s not us, females just genetically hate liberty.”

“You asexual twerp”

“Fucking tech-libertarian cockroaches everywhere preaching total derp. Deeply disappointing.”

“I see you reduced to making excuses for a career criminal [Hillary Clinton] because you’re afraid of change. I expected more from you, Scott. I expected you to remember hope. I never expected the Dark would take you. Enjoy this. The thousands of comments, the last remarks from departing readers. It will never be the same.”

“Honestly, every time I read Scott, I am super conflicted. I have never found a writer whom I agree with so consistently while finding their personality, as expressed through their writing, so intolerable. I always feel like I want to shout, “You’re exactly correct! Now the shut the fuck up!” and pop him one in the mouth.”

“Go figure, Slate Star Codex blog readers are politically *and* literally, a bunch of phags. Look @ weightlifting data [on the survey]; found the problem.”

“Scott isn’t really dogmatic about anything besides niceness, honesty, puns, and growth mindset.”

“Scott Alexander over at the popular blog Slate Star Codex is an interesting case study in classical liberalism; nowhere else will you find someone who better exemplifies the phenomenon of skirting within microns of the event horizon of Getting It before screaming ‘Nooooo’ and zooming off in some other direction; nor will you find many who choose a crazier direction in which to flee.”

“Basically imagine a guy drinking Soylent and having a flamewar about how in the future they will too be able to unfreeze his head and you’ve got a basic idea of the ideology at play here.”

“So we come to answering the question I asked at the beginning: What is it that allows men like Scott Alexander, men of some intelligence and sensitivity, to get so close to understanding, and fail so miserably, over and over? We can find the answer here at the end of his piece: we see that he stumbled, baffled, like a giraffe with a head injury loose in Manhattan, through the entire book, then through an entire long review, without comprehending its basic point.”

“Slate Star Codex: if you’re a man who is involved in tech and not interested in any legitimate philosophical or sociological inquiry, we’ve got you covered”

“He is riddled with all kinds of spooks and leftist ideology and it shows in his commentariat. This also doesn’t bode well for psychiatry if someone as emotionally weak as Alexander is allowed to become a psychiatrist.”

“The Slate Star Codex guy is a living fable against the idea that u can solve problems with pure tedious reason instead of ever reading a book”

“it’s basically one of the hubs for autistic people really into Bayesianism, so like half the posters could either transition or become Nazis. or both idk”

“Hey man I took your survey it made me feel all weird and insecure about my gender identity thanks a lot!”

“Anyway, I don’t mean to pick on Alexander, whose heart is in the right place, but he is a walking, talking, male prophylactic. If I absolutely did not want any grandchildren (say, high risk of insanity in the bloodline) I’d have Alexander teach my sons the birds & bees. He is a weirdo autistic who has no understanding of normal women based on the few writings on sexual politics of his I’ve read, which are filled with the usual libertard lonely-boy pablum about the awfulness of “slut shaming” in our society, and how if we could just get rid of that and any sort of gender roles and treat everyone the same, we’d be living in a flippin’ sexual Nirvana where our genitals would be as happily interoperable as any pair of USB ports. (Alexander, IIRC, is in a relationship a technically female but maybe not womyn-aligned webcam star with whom he may or may not actually be bumping uglies)”

“To be fair to Alexander, the million leaked credit cards #’s from ASHLEY MADISON from men who really think there any normal women out there trolling for one-off sex on the Internet shows the cluelessness out there is pretty broad.”

“Since people are sharing around a Slate Star Codex article let’s have a reminder that he’s a moral cripple”

“$500 Reward. Seeking the testimony of victims of Scott Alexander, human rights abuser. I am also willing to pay for the stories of the victims of any other ‘prominent’ ‘internet famous’ psychiatrists/human rights abusers.”

“I hate to go ad hom, but i can’t think of anyone who would benefit more from TRT and getting laid on the reg.”

“After reading Scott’s article to a friend of mine, he decided to get “Border Reaver” tattooed on his neck”

“So basically he’s an athiest jewish kikeiatrist named Schlomo Schlomovich who mingles with the social elite while still being afraid of antifa? I couldn’t have strung together that many ridiculous stereotypes at once if I tried honestly. This is fucking hilarious.”

“Discovering Scott Aaronson is way into Slate Star Codex is like finding spiders in my favorite flavor of ice cream. Slate Star Codex is ‘Well, actually…’ personified, with a dusting of evil. But mostly it bugs me that it passes for good writing.”

“id like to fight the guy who runs slate star codex, he’s a smarmy faggot”

“Why do people I otherwise like keep insisting to me that slate star codex is good”

“[Slate Star Codex] split off from Less Wrong because even massive faggots sometimes have standards. His clique don’t exactly get along with Yudkowsky and will point out that he’s basically running a cult. Nonetheless, Scott’s still a huge fag and sucks Eliezer’s dick when it comes to rationalism and his fucking gay “Sequences”, which he and his commenters will tell you to read as if it’s the fucking gospel.”

“Slate Star Codex, an extremely verbose blog that I have complicated feelings about.”

“YouTube Skeptics, Slate Star Codex rationalists, Stefan Molyneux and Ayn Rand all ruined “rationality” and ‘logic’ for me. Must be a horseshoe theory conspiracy of sorts.”

“The disturbing thing is that they’re all aware of the criticisms people level at them for their autism, but no matter how many times they’re inundated by people telling them they’re being inhuman spergs, they’re just like ‘Hmm…am I out of touch? No…it’s the normal people who are wrong.'”

“*making racist laser gun noises* computer, engage Near Mode and navigate me to slate star codex please”

“‘Bigoted shits’ is basically the Slate Star Codex demographic”

“[Scott] wants the SJWs to take over. He wants you to dawdle around appealing to ‘reason’ until the Commies have indoctrinated enough of the youth to allow PC Culture to permeate all things.”

“It’s cool to watch the slate star codex guy inch closer and closer to actually knowing something while his comment sections get stupider and stupider”

“Is it just me, or is the guy who runs slate star codex kind of a wanker?”

“But this is… just incredible. I read this SSC article last night, and my jaw dropped. What was I missing? How could Scott Alexander be so fucking stupid? I spent all day with a slow burning anger in my belly. This pure nonsense, from the “Red Tribe vs. Blue Tribe” guy, in the same week as McConnell holding millions of children hostage so he doesn’t accidentally upset the avowed racists over in the House, not to mention Stormy Daniels, McCabe’s loyalty test, Trump trying to fire Mueller, and all the other usual shit that I already forgot all coming to light? And you choose now—January 24, 2018 and not November 9, 2016—to equate George Soros and the Koch brothers not once but twice in an overlong Tumblr note that amounts to saying, “huh I just realized maybe I’m missing something but I still think all politically active people are retarded”? Have you read the news once in the last year, or do you just get summaries from the same place as your political theories—random fucking commenters on your blog? Or was I mistaken this whole time thinking that Scott both lived in America and wanted the world to get better not worse? Because this post would make way more sense if his political climate was actually recess on the fucking playground of a quarantined elementary school for experimental Nazi test tube babies in a bubble on the dark side of the fucking moon.”

“nobody has ever read a slate star codex article to the end”

Posted in Uncategorized | Tagged | 328 Comments

We’ve Got Five Years, What A Surprise

Today is the fifth anniversary of Slate Star Codex. Overall I’m very happy with how this project is going so far, and I want to take this opportunity to thank everyone who’s made things work behind the scenes.

Trike Apps generously volunteered to host me free of charge. I give them the highest praise it is possible to give a hosting company – namely, that I completely forgot about their existence until right now because I’ve never had to worry about anything. Special thanks to Matt Fallshaw and Cat Truscott for their kindness and patience.

Bakkot has done various things behind the scenes to make the blog more useable – fixing WordPress bugs, helping with moderation tasks, and adding cool new features like the green highlights around new comments. A big part of the success of the comments section is thanks to his innovations; the remaining horribleness is mostly my fault. Rory O and Alice M have also helped with this.

Michael K and Mason H have done other behind-the-scenes work, especially improving the design. Remember when the front page looked like this? No? Thank Michael and Mason.

The subreddit moderation team – led by Bakkot again, but also _Vulture_, coderman9, PM_ME_YOUR_OBSIDIAN, tailcalled, werttrew, utilsucks, and cjet79 – keeps the subreddit in line, and deserves special gratitude for wading through all of your horrible offensive opinions in the process of redirecting them into the Culture Wars thread.

Deluks917 started the Discord, which is thriving. Thanks to him and the rest of the Discord moderation team – Celestia, notaraptor, and vivafringe.

Jeremiah started the podcast, which continues to update very regularly.

Mingyuan runs the general repository of meetups. Thank her if you’ve been able to find an SSC meetup in your area – and if you haven’t, give her a little while, since she’s working on a better and higher-tech version. Thanks also to everyone who organizes meetups. I’m most familiar with David Friedman’s very large and regular South Bay meetups, but I understand there are decent-sized groups all around the world and I’m grateful for everyone who puts work into it.

Cody, Oliver, and other people organized the Unsong wrap celebration so efficiently that I’m still not sure exactly what happened or who did what. I just showed up and everything seems to have worked. Apologies if I am forgetting people or exactly what they did.

Katja has organized the Open Thread system so that it posts on time and with the right decimals. She also gives general moral support and puts up with me. She also arranged the Unsong wrap after-party and has given me lots of interesting things to think about.

Ozy and Elizabeth have guest-blogged here very briefly. Thanks to them – and congratulations to Ozy on the recent birth of their first child.

Roland helped transform my post on antidepressants into a scientific paper that got accepted by a pharmacology journal. Ada Palmer’s thoughts on finally publishing a novel are pretty much how I feel about finally having published a journal article. Preliminary thanks also to everyone currently working with me on similar projects for other posts.

Scott Aaronson, Leah Libresco, and other more established bloggers – plus some Real Journalists like Conor Friedersdorf, Tom Chivers and Ross Douthat – linked to me and encouraged me when I was relatively new to this, and helped convince me that this “blogging” thing might work out.

Thanks to all our advertisers, but especially to Beeminder and MealSquares, who have stuck with me since the beginning and put up with everything from me never remembering to respond to their emails to me gratuitously and unfairly insulting their products. I actually think they’re both great companies and totally recommend that you Beemind the number of MealSquares you eat, or something.

Thanks to everyone who supports me on Patreon. Your money pays for things like the Mechanical Turk version of the SSC survey, my books and subscriptions, and me having more time to work on the blog.

Thanks to everyone I’ve engaged with. Again and again I’ve had the experience of reading something, criticizing it (sometimes savagely), and having the author be incredibly nice to me, walk me through where we disagree, and then continue being friendly and supportive afterwards. Some people in this category include Ezra Klein, Adam Grant, Nathan Robinson, Curtis Yarvin, Bryan Caplan, David Friedman (again), Dylan Matthews, Brendan Nyhan, Nick Land, Julia Rohrer, and Freddie de Boer. I continue to disagree with them strongly on a lot of things but can’t find even the tiniest bit of fault with their decency on a personal level.

Thanks to all the people who seem to genuinely dislike me and wish me ill, but who have been decent enough not to let it get to the point of doxxing me or threatening my personal safety, my career, or my relationship with my patients.

Thanks to everyone who comments and contributes to discussions. Some people whom I’ve heard praised again and again – John Schilling on international affairs, David Friedman on economic issues, Deiseach on religious and cultural issues, and of course Bean on battleships. But everybody is appreciated. Cunningham’s Law says that if you want a question answered online, you shouldn’t ask – you should post something wrong and let people correct you. But I have had good luck groping towards the best answer I can, then letting a bunch of experts show up and fill in the details.

Thanks to everyone who takes the (long, often poorly worded) surveys. Without you, interesting findings like this would not be possible – and trust me, there’s more where that came from once I have time to write it up. One of my goals is to find more ways to use this blog’s readership to advance psychological research, and your surprising willingness to waste time on any crazy questions I throw at you has been delightful.

Thanks to everyone who sent me emails, requests, and questions for not being a jerk when I didn’t answer for several weeks or, in many cases, ever. Without your tolerance for my rudeness, I would have much less time to write.

I am probably forgetting all sorts of people – if so, no offense meant. You are all great.

Posted in Uncategorized | Tagged | 90 Comments

OT95: Zoetropen Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. Happy Valentine’s Day! While you’re waiting for blockchain-based dating to materialize, remember that there’s already a rationalist dating site, Reciprocity. Go through your Facebook friends and check boxes for which ones you want to date or hang out with, and if they check you too the site will let you both know. It does require your Facebook friends also use the site, but if you’re socially exposed to the rationalist community many of them will. The Reciprocity team wants me to remind you that even if you’re already on there, this might be a good time to go back, update your selections, and see if anyone new has joined.

2. There will probably be an SSC meetup in Berkeley on March 3. I’ll post a clearer announcement later, but just wanted to give some advance warning.

Posted in Uncategorized | Tagged | 1,074 Comments

Guyenet On Motivation

Rereading The Hungry Brain, I notice my review missed one of my favorite parts: the description of the motivational system. It starts with studies of lampreys, horrible little primitive parasitic fish:

How does the lamprey decide what to do? Within the lamprey basal ganglia lies a key structure called the striatum, which is the portion of the basal ganglia that receives most of the incoming signals from other parts of the brain. The striatum receives “bids” from other brain regions, each of which represents a specific action. A little piece of the lamprey’s brain is whispering “mate” to the striatum, while another piece is shouting “flee the predator” and so on. It would be a very bad idea for these movements to occur simultaneously – because a lamprey can’t do all of them at the same time – so to prevent simultaneous activation of many different movements, all these regions are held in check by powerful inhibitory connections from the basal ganglia. This means that the basal ganglia keep all behaviors in “off” mode by default. Only once a specific action’s bid has been selected do the basal ganglia turn off this inhibitory control, allowing the behavior to occur. You can think of the basal ganglia as a bouncer that chooses which behavior gets access to the muscles and turns away the rest. This fulfills the first key property of a selector: it must be able to pick one option and allow it access to the muscles.

Many of these action bids originate from a region of the lamprey brain called the pallium…

Spoiler: the pallium is the region that evolved into the cerebral cortex in higher animals.

Each little region of the pallium is responsible for a particular behavior, such as tracking prey, suctioning onto a rock, or fleeing predators. These regions are thought to have two basic functions. The first is to execute the behavior in which it specializes, once it has received permission from the basal ganglia. For example, the “track prey” region activates downstream pathways that contract the lamprey’s muscles in a pattern that causes the animal to track its prey. The second basic function of these regions is to collect relevant information about the lamprey’s surroundings and internal state, which determines how strong a bid it will put in to the striatum. For example, if there’s a predator nearby, the “flee predator” region will put in a very strong bid to the striatum, while the “build a nest” bid will be weak…

Each little region of the pallium is attempting to execute its specific behavior and competing against all other regions that are incompatible with it. The strength of each bid represents how valuable that specific behavior appears to the organism at that particular moment, and the striatum’s job is simple: select the strongest bid. This fulfills the second key property of a selector – that it must be able to choose the best option for a given situation…

With all this in mind, it’s helpful to think of each individual region of the lamprey pallium as an option generator that’s responsible for a specific behavior. Each option generator is constantly competing with all other incompatible option generators for access to the muscles, and the option generator with the strongest bid at any particular moment wins the competition.
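Guyenet’s selector can be caricatured in a few lines of code: every option generator submits a bid, and the striatum disinhibits exactly one winner while keeping every other behavior in “off” mode. This is a toy sketch, not a neural model; the behaviors and bid values are invented for illustration.

```python
# Toy model of Guyenet's "option generator" scheme: each pallium region
# submits a bid, and the striatum releases only the strongest one from
# inhibition.  All option names and bid values are invented.

def striatum_select(bids):
    """Pick the single strongest bid; every other behavior stays inhibited."""
    winner = max(bids, key=bids.get)
    return {behavior: (behavior == winner) for behavior in bids}

# A lamprey with a predator nearby: "flee predator" outbids everything else.
bids = {"track prey": 0.4, "suction onto rock": 0.2,
        "build nest": 0.1, "flee predator": 0.9}
released = striatum_select(bids)
# Only "flee predator" gets access to the muscles; the rest remain inhibited.
```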

The next subsection, which I’m skipping, quotes some scientists saying that the human motivation system works similarly to the lamprey motivation system, except that the human cerebrum has many more (and much more flexible/learnable) options than the lamprey pallium. Humans have to “make up our minds about things a lamprey cannot fathom, like what to cook for dinner, how to pay off the mortgage, and whether or not to believe in God”. It starts getting interesting again when it talks about basal ganglia-related disorders:

To illustrate the crucial importance of the basal ganglia in decision-making processes, let’s consider what happens when they don’t work.

As it turns out, several disorders affect the basal ganglia. The most common is Parkinson’s disease, which results from the progressive loss of cells in a part of the basal ganglia called the substantia nigra. These cells send connections to the dorsal striatum, where they produce dopamine, a chemical messenger that plays a very important role in the function of the striatum. Dopamine is a fascinating and widely misunderstood molecule that we’ll discuss further in the next chapter, but for now, its most relevant function is to increase the likelihood of engaging in any behavior.

When dopamine levels in the striatum are increased – for example, by cocaine or amphetamine – mice (and humans) tend to move around a lot. High levels of dopamine essentially make the basal ganglia more sensitive to incoming bids, lowering the threshold for activating movements…Conversely, when dopamine levels are low, the basal ganglia become less sensitive to incoming bids and the threshold for activating movements is high. In this scenario, animals tend to stay put. The most extreme example of this is the dopamine-deficient mice created by Richard Palmiter, a neuroscience researcher at the University of Washington. These animals sit in their cages nearly motionless all day due to a complete absence of dopamine. “If you set a dopamine deficient mouse on a table,” explains Palmiter, “it will just sit there and look at you. It’s totally apathetic.” When Palmiter’s team chemically replaces the mice’s dopamine, they eat, drink, and run around like mad until the dopamine is gone.
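On this account, dopamine acts as a gain on the striatum’s sensitivity to incoming bids. A toy sketch, with all numbers invented, shows how the same bids can produce either total immobility or frantic activity depending on dopamine level:

```python
# Toy illustration of dopamine as a gain on the striatum's sensitivity:
# a bid only releases behavior if (bid * dopamine) clears a fixed threshold.
# All values here are invented for the example.

THRESHOLD = 0.5

def select_action(bids, dopamine):
    """Return the winning behavior, or None if no gated bid clears threshold."""
    behavior, bid = max(bids.items(), key=lambda kv: kv[1])
    return behavior if bid * dopamine > THRESHOLD else None

bids = {"eat": 0.6, "drink": 0.5, "run around": 0.4}

# Dopamine-deficient mouse: even the strongest bid never clears the threshold.
assert select_action(bids, dopamine=0.1) is None   # sits there, motionless
# Dopamine restored: the very same bids now drive behavior.
assert select_action(bids, dopamine=2.0) == "eat"
```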

The same can happen to humans with basal ganglia injuries:

Consider Jim, a former miner who was admitted to a psychiatric hospital at the age of fifty-seven with a cluster of unusual symptoms. As recorded in his case report, “during the preceding three years he had become increasingly withdrawn and unspontaneous. In the month before admission he had deteriorated to the point where he was doubly incontinent, answered only yes or no questions, and would sit or stand unmoving if not prompted. He only ate with prompting, and would sometimes continue putting spoon to mouth, sometimes for as long as two minutes after his plate was empty. Similarly, he would flush the toilet repeatedly until asked to stop.”

Jim was suffering from a rare disorder called abulia, which is Greek for “an absence of will”. Patients who suffer from abulia can respond to questions and perform specific tasks if prompted, but they have difficulty spontaneously initiating motivations, emotions, and thoughts. A severely abulic patient seated in a bare room by himself will remain immobile until someone enters the room. If asked what he was thinking or feeling, he’ll reply, “Nothing”…

Abulia is typically associated with damage to the basal ganglia and related circuits, and it often responds well to drugs that increase dopamine signaling. One of these is bromocriptine, the drug used to treat Jim…Researchers believe that the brain damage associated with abulia causes the basal ganglia to become insensitive to incoming bids, such that even the most appropriate feelings, thoughts, and motivations aren’t able to be expressed (or even to enter consciousness). Drugs that increase dopamine signaling make the striatum more sensitive to bids, allowing some abulic patients to recover the ability to feel, think, and move spontaneously.

All of this is standard neuroscience, but presented much better than the standard neuroscience books present it, so much so that it brings some important questions into sharper relief. Like: what does this have to do with willpower?

Guyenet describes high dopamine levels in the striatum as “increasing the likelihood of engaging in any behavior”. But that’s not really fair – outside a hospital, almost nobody just sits motionless in the middle of a room and does no behaviors. The relevant distinction isn’t between engaging in behavior vs. not doing so. It’s between low-effort behaviors like watching TV, and high-effort behaviors like writing a term paper. We know that this has to be related to the same dopamine system Guyenet’s talking about, because Adderall (which increases dopamine in the relevant areas) makes it much easier to do the high-effort behaviors. So a better description might be “high dopamine levels in the striatum increase the likelihood of engaging in high-willpower-requirement behaviors”.

But what makes a willpower requirement high? I’m always tempted to answer this with some sort of appeal to basic calorie expenditure, but taking a walk requires less willpower than writing a term paper even though the walk probably burns way more calories. My “watch TV” option generator, my “take a walk” option generator, and my “write a term paper” option generator are all putting in bids to my striatum – and for some reason, high dopamine levels privilege the “write a term paper” option and low dopamine levels privilege the others. Why?

I don’t know, and I think it’s the most interesting next question in the study of these kinds of systems.

But here’s a crazy idea (read: the first thing I thought of after thirty seconds). In the predictive processing model, dopamine represents confidence levels. Suppose there’s a high prior on taking a walk being a reasonable plan. Maybe this is for evo psych reasons (there was lots of walking in the ancestral environment), or for reinforcement related reasons (you enjoy walking, and your brain has learned to predict it will make you happy). And there’s a low prior on writing a term paper being a reasonable plan. Again, it’s not the sort of thing that happened much in the ancestral environment, and plausibly every previous time you’ve done it, you’ve hated it.

In this case, confidence in your new evidence (as opposed to your priors) is a pretty important variable. If your cortex makes its claims with high confidence (ie in a high-dopaminergic state), then its claim that it’s a good idea to write a term paper now may be so convincing that it’s able to overcome the high prior against this being true. If your cortex makes claims with low confidence, then it will tentatively suggest that maybe we should write a term paper now – but the striatum will remain unconvinced due to the inherent implausibility of the idea.

In this case, sitting in a dark room doing nothing is just an action plan with a very high prior; you need at least a tiny bit of confidence in your planning ability to shift to anything else.
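This prior-versus-confidence story can be made concrete with a toy precision-weighted scoring rule: each plan gets a prior, plus cortical evidence scaled by a confidence term standing in for dopamine. All the numbers here are invented; the point is only that scaling the evidence term flips which plan wins.

```python
# Toy precision-weighted plan selection: score = prior + confidence * evidence.
# "Confidence" stands in for the dopamine signal in the predictive processing
# story above.  All values are invented for illustration.

def plan_score(log_prior, log_evidence, confidence):
    return log_prior + confidence * log_evidence

def choose(plans, confidence):
    return max(plans, key=lambda p: plan_score(*plans[p], confidence))

plans = {
    # (log prior, log evidence from the cortex's current appraisal)
    "take a walk":      (2.0, 0.5),   # high prior, mild cortical support
    "write term paper": (-1.0, 3.0),  # low prior, strong cortical case
}

# Low confidence (low dopamine): the prior dominates and you take the walk.
assert choose(plans, confidence=0.2) == "take a walk"
# High confidence (high dopamine): the cortex's case overrides the prior.
assert choose(plans, confidence=2.0) == "write term paper"
```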

I mentioned in Toward A Predictive Theory Of Depression that I didn’t understand the motivational system well enough to be able to explain why systematic underconfidence in neural predictions would make people less motivated. I think the idea of evolutionarily-primitive and heavily-reinforced actions as a prior – which logical judgments from the cortex have to “override” in order to produce more willpower-intensive actions – fills in this gap and provides another line of evidence for the theory.

Posted in Uncategorized | Tagged | 125 Comments

Predictions For 2018

At the beginning of every year, I make predictions. At the end of every year, I score them. So here are a hundred more for 2018.

Some changes this year: I’ve eliminated a bunch of predictions about things that are very unlikely where I just plug in the same number each year, like “99% chance of no coup in the US”. I’ve tried to have almost everything this year be new and genuinely uncertain. I’ve also included some very personal predictions about friends and gossip that I’m keeping secret for now – I have them written down somewhere else and they’re for my own interest only.

My rule is that I have to make all of these without checking existing prediction markets – otherwise I wouldn’t be learning anything about my own abilities. I’m also not doing any research beyond what I already know, because otherwise this will take forever. I bet some of these are terribly misinformed, but that’s part of what I’m including in my calibration estimate. These were written a few days ago; a few already seem obsolete.

I’m keeping 50% predictions even though everyone keeps telling me they don’t matter. My only excuse is that I write everything down first and then decide what I think the likelihood is, and sometimes my best guess really is 50%.
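The scoring procedure itself – bucket predictions by stated probability, then check each bucket’s realized hit rate – can be sketched as follows, with invented predictions for illustration. (The complaint about 50% predictions is that “X: 50%” and “not-X: 50%” are the same forecast, so a 50% bucket can’t reveal directional miscalibration, though it still counts toward accuracy scores like the Brier score.)

```python
from collections import defaultdict

# Sketch of calibration scoring: group predictions by stated probability,
# then compare each bucket's stated confidence with its realized hit rate.
# The (probability, came_true) pairs below are invented for illustration.

def calibration(predictions):
    buckets = defaultdict(list)
    for prob, came_true in predictions:
        buckets[prob].append(came_true)
    return {prob: sum(hits) / len(hits) for prob, hits in sorted(buckets.items())}

preds = [(0.9, True), (0.9, True), (0.9, False), (0.9, True),
         (0.7, True), (0.7, False), (0.7, True),
         (0.5, True), (0.5, False)]
rates = calibration(preds)
# A well-calibrated forecaster's 90% bucket comes true about 90% of the time;
# here the invented 90% bucket only hit 3 out of 4.
```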

1. Donald Trump remains president at end of year: 95%
2. Democrats take control of the House in midterms: 80%
3. Democrats take control of the Senate in midterms: 50%
4. Mueller’s investigation gets cancelled (eg Trump fires him): 50%
5. Mueller does not indict Trump: 70%
6. PredictIt shows Bernie Sanders having highest chance to be Dem nominee at end of year: 60%
7. PredictIt shows Donald Trump having highest chance to be GOP nominee at end of year: 95%
9. Some sort of major immigration reform legislation gets passed: 70%
10. No major health-care reform legislation gets passed: 95%
11. No large-scale deportation of Dreamers: 90%
12. US government shuts down again sometime in 2018: 50%
13. Trump’s approval rating lower than 50% at end of year: 90%
14. …lower than 40%: 50%
15. GLAAD poll suggesting that LGBQ acceptance is down will mostly not be borne out by further research: 80%

16. Dow does not fall more than 10% from max at any point in 2018: 50%
17. Bitcoin is higher than $5,000 at end of year: 95%
18. Bitcoin is higher than $10,000 at end of year: 80%
19. Bitcoin is lower than $20,000 at end of year: 70%
20. Ethereum is lower than Bitcoin at end of year: 95%
21. Luna has a functioning product by end of year: 90%
22. Falcon Heavy first launch not successful: 70%
23. Falcon Heavy eventually launched successfully in 2018: 80%
24. SpaceX does not attempt its lunar tourism mission by end of year: 95%
25. Sci-Hub is still relatively easily accessible from within US at end of year (even typing in IP directly is relatively easy): 95%
26. Nothing particularly bad (beyond the level of a funny/weird news story) happens because of the ability to edit videos this year: 90%
27. A member of the general public can ride-share a self-driving car without a human backup driver in at least one US city by the end of the year: 80%

28. Reddit does not ban r/the_donald by the end of the year: 90%
29. None of his enemies manage to find a good way to shut up/discredit Jordan Peterson: 70%

30. SSC gets more hits in 2018 than in 2017: 80%
31. SSC gets mentioned in the New York Times (by someone other than Ross Douthat): 60%
32. At least one post this year gets at least 100,000 hits: 70%
33. A 2019 SSC Survey gets posted by the end of the year: 90%
34. No co-bloggers make 3 or more SSC posts this year: 80%
35. Patreon income less than double current amount at end of year: 90%
36. A scientific paper based on an SSC post is accepted for publication in real journal by end of year: 60%
37. I do an adversarial collaboration with somebody interesting by the end of the year: 50%
38. I successfully do some general project to encourage and post more adversarial collaborations by other people: 70%
39. New SSC meetups system/database thing gets launched successfully: 60%
40. LesserWrong remains active and successful (average at least one halfway-decent post per day) at the end of the year: 50%
41. LesserWrong is declared official and merged with 80%
42. I make fewer than five posts on LessWrong (posts copied over from SSC don’t count): 70%
43. CFAR buys a venue this year: 50%
44. AI Impacts has at least three employees working half-time or more sometime this year: 50%
45. Rationalists get at least one more group house on Ward Street: 50%
46. No improvement in the status of (either transfer to a new team or at least one new feature added): 70%

47. I fail at my New Year’s resolution to waste less time on the Internet throughout most of 2018: 80%
48. I fail at my other New Year’s resolution to try one biohacking project per month throughout 2018: 80%
49. I don’t attend the APA National Meeting: 80%
50. I don’t attend the New York Solstice: 80%
51. I travel outside the US in 2018: 90%
52. I get some sort of financial planning sorted out by end of year: 95%
53. I get at least one article published on a major site like Vox or New Statesman or something: 50%
54. I get a tax refund: 50%
55. I weigh more than 195 lb at year end: 60%
56. I complete the currently visible Duolingo course in Spanish: 90%
57. I don’t get around to editing Unsong (complete at least half the editing by my own estimate) this year: 95%
58. No new housemate for at least one month this year: 90%
59. I won’t [meditate at least one-third of days this year]: 90%
60. I won’t [do my exercise routine at least one third of days this year]: 80%
61. I still live in the same house at the end of 2018: 60%
62. I will not have bought a house by the end of 2018: 90%
63. Katja’s paper gets published: 90%
64. Some other paper of Katja’s gets published: 50%

SECRET: (mostly speculating on the personal lives of friends who read this blog; I don’t necessarily want them to know how successful I expect their financial and romantic endeavors to be)
65. [Secret prediction]: 80%
66. [Secret prediction]: 70%
67. [Secret prediction]: 70%
68. [Secret prediction]: 60%
69. [Secret prediction]: 70%
70. [Secret prediction]: 60%
71. [Secret prediction]: 50%
72. [Secret prediction]: 50%
73. [Secret prediction]: 50%
74. [Secret prediction]: 90%
75. [Secret prediction]: 90%
76. [Secret prediction]: 60%
77. [Secret prediction]: 70%
78. [Secret prediction]: 60%
79. [Secret prediction]: 50%
80. [Secret prediction]: 60%
81. [Secret prediction]: 80%
82. [Secret prediction]: 70%
83. [Secret prediction]: 50%
84. [Secret prediction]: 70%
85. [Secret prediction]: 70%
86. [Secret prediction]: 70%
87. [Secret prediction]: 60%
88. [Secret prediction]: 50%
89. [Secret prediction]: 50%
90. [Secret prediction]: 70%
91. [Secret prediction]: 90%
92. [Secret prediction]: 50%
93. [Secret prediction]: 90%
94. [Secret prediction]: 50%
95. [Secret prediction]: 60%
96. [Secret prediction]: 60%
97. [Secret prediction]: 60%
98. [Secret prediction]: 95%
99. [Secret prediction]: 70%
100. [Secret prediction]: 70%

Other properly formatted predictions for this year:
– Socratic Form Microscopy (2017 results, 2018 predictions)
– Anatoly Karlin (2017 results, 2018 predictions)
– Various people from the subreddit
– Very many people on Metaculus

[EDIT: List of predictions I’ve already been convinced are miscalibrated as of 3 AM 2/6/18:
– Bitcoin prices are already too high (they were higher when I wrote these predictions a few days ago).
– Stock market is more likely to have large fall (it was higher when I wrote these predictions a few days ago)
– Chance of Trump’s approval not breaking 50% probably closer to 95% than 90%.]

Posted in Uncategorized | Tagged | 301 Comments

Powerless Placebos

[All things that have been discussed here before, but some people wanted it all in a convenient place]

The most important study on the placebo effect is Hróbjartsson and Gøtzsche’s Is The Placebo Powerless?, updated three years later by a systematic review and seven years later with a Cochrane review. All three looked at studies comparing a real drug, a placebo drug, and no drug (by the third, over 200 such studies) – and, in general, found little benefit of the placebo drug over no drug at all. There were some possible minor placebo effects in a few isolated conditions – mostly pain – but overall H&G concluded that the placebo effect was clinically insignificant. Despite a few half-hearted tries, no one has been able to produce much evidence they’re wrong. This is kind of surprising, since everyone has been obsessing over placebos and saying they’re super-important for the past fifty years.

What happened? Probably placebo effects rode on the coattails of a more important issue, regression to the mean. That is, most sick people get better eventually. This is true both for diseases like colds that naturally go away, and for diseases like depression that come in episodes which remit for a few months or years until the next relapse. People go to the doctor during times of extreme crisis, when they’re most sick. So no matter what happens, most of them will probably get better pretty quickly.

In the very old days, nobody thought of this, so all their experiments were hopelessly confounded. Then people started adding placebo groups; this successfully controlled not just for the placebo effect but also for regression to the mean, and so people noticed their studies were much better. They called the whole thing “placebo effect” when in fact there was no way to tell without further study how much was real placebo effect and how much was just regression to the mean. If we believe H&G, it’s pretty much all just regression to the mean, and placebo was a big red herring.
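A small simulation makes the point: if symptom severity fluctuates around a personal baseline and patients enroll only when they are unusually sick, the untreated follow-up measurement will look like improvement. All parameters here are invented for illustration.

```python
import random

# Simulation of regression to the mean: symptom severity fluctuates around a
# personal baseline, patients enroll in a "trial" only when severity is high,
# and at follow-up they look improved despite receiving no treatment at all.
# All parameters are invented for illustration.

random.seed(0)

def severity(baseline):
    return baseline + random.gauss(0, 2)   # day-to-day fluctuation

improvements = []
for _ in range(10_000):
    baseline = random.gauss(10, 1)         # each person's typical severity
    today = severity(baseline)
    if today > 12:                         # only the currently-very-sick enroll
        followup = severity(baseline)      # untreated follow-up measurement
        improvements.append(today - followup)

mean_improvement = sum(improvements) / len(improvements)
# mean_improvement comes out well above zero even though nobody was treated
```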

The rare exceptions are pain and a few other minor conditions. From H&G #3:

We found an effect on pain, SMD -0.28 (95% CI -0.36 to -0.19); nausea, SMD -0.25 (-0.46 to -0.04); asthma, SMD -0.35 (-0.70 to -0.01); and phobia, SMD -0.63 (95% CI -1.17 to -0.08). The effect on pain was very variable, also among trials with low risk of bias. Four similarly-designed acupuncture trials conducted by an overlapping group of authors reported large effects (SMD -0.68 (-0.85 to -0.50)) whereas three other pain trials reported low or no effect (SMD -0.13 (-0.28 to 0.03)). The pooled effect on nausea was small, but consistent. The effects on phobia and asthma were very uncertain due to high risk of bias.

So the acupuncture trials seem to do pretty well. This probably isn’t because acupuncture works – some experiments have found sham acupuncture works equally well. It could be because acupuncture researchers have flexible research ethics. But Kamper & Williams speculate that acupuncture does well because it’s an optimized placebo. Normal placebos are just some boring little pill that researchers give because it’s the same shape as whatever they really want to give. Acupuncture – assuming that it doesn’t work – has been tailored over thousands of years to be as effective a pain-relieving placebo as possible. Maybe there’s some deep psychological reason why having needles in your skin intuitively feels like the sort of thing that should alleviate pain.

I want to add my own experience here, which is that occasionally I see extraordinary and obvious cases of the placebo effect. I once had a patient who was shaking from head to toe with anxiety tell me she felt completely better the moment she swallowed a pill, before there was any chance she could have absorbed the minutest fraction of it. You’re going to tell me “Oh, sure, but anxiety’s just in your head anyway” – but anxiety was one of the medical conditions that H&G included in their analysis. Plausibly they studied chronic anxiety, and pills are less good chronically than they are at aborting a specific anxiety attack the first time you take them. Or maybe her anxiety was somehow related to a phobia, one of the conditions H&G find some evidence in support of a placebo for. (Really? Phobia but not anxiety? Whatever.)

Surfing Uncertainty had the the best explanation of the placebo effect I’ve seen. Perceiving the world directly at every moment is too computationally intensive, so instead the brain guesses what the the world is like and uses perception to check and correct its guesses. In a high-bandwidth system like vision, guesses are corrected very quickly and you end up very accurate (except for weird things like ignoring when the word “the” is twice in a row, like it’s been several times in this paragraph already without you noticing). In a low-bandwidth system like pain perception, the original guess plays a pretty big role, with real perception only modulating it to a limited degree (consider phantom limb pain, where the brain guesses that an arm that isn’t there hurts, and nothing can convince it otherwise). Well, if you just saw a truck run over your foot, you have a pretty strong guess that you’re having foot pain. And if you just got a bunch of morphine, you have a pretty strong guess that your pain is better. The real sense-data can modulate it in a Bayesian way, but the sense-data is so noisy that it won’t be weighted highly enough to replace the guess completely.
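A toy version of this precision-weighted updating (the standard conjugate-Gaussian posterior mean; all numbers invented for illustration) shows why the same mechanism behaves so differently for vision and for pain:

```python
def posterior(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted combination of the brain's guess (prior) and the
    incoming sense-data (observation) -- the conjugate-Gaussian update."""
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

# High-bandwidth channel (vision): sense-data is precise, so it overwhelms
# the guess almost completely.
print(posterior(prior_mean=8, prior_precision=1, obs=2, obs_precision=100))

# Low-bandwidth channel (pain): sense-data is noisy, so the guess mostly wins.
# "I just took morphine" sets a strong prior that pain is low (prior_mean=2),
# and the noisy raw signal (obs=8) barely budges it.
print(posterior(prior_mean=2, prior_precision=10, obs=8, obs_precision=1))
```

In the first case the posterior lands near the observation; in the second it stays near the prior, which is the placebo story: a confident guess ("I just got morphine") dominates a noisy pain signal.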

If this is true, placebo should be strongest in subjective perceptions of conditions sent to the brain through low-bandwidth relays. That covers H&G’s pain and nausea. It doesn’t cover asthma and phobias quite as well, though I wonder if “asthma” is measured as subjective sensation of breathing difficulty.

What about depression? My gut would have told me depressed people respond very well to the placebo effect, but H&G say no.

I think that depressed mood may respond well to the placebo effect on a temporary basis – after all, mood seems noisy and low-bandwidth and hard to be sure of in the same way pain and nausea are. But most studies of depression use tests like the HAM-D, which measure the clinical syndrome of depression – things like sleep disturbance, appetite disturbance, and motor disturbance. These seem a lot less susceptible to subjective changes in the way the brain perceives things, so probably HAM-D based studies will show less placebo effect than just asking patients to subjectively assess their mood.

The Invention Of Moral Narrative

H/T Robin Hanson: Aeon’s The Good Guy / Bad Guy Myth. “Pop culture today is obsessed with the battle between good and evil. Traditional folktales never were. What changed?”

The article claims almost every modern epic – superhero movies, Star Wars, Harry Potter, Lord of the Rings, etc – shares a similar plot. There are some good guys. There are some bad guys. They fight. The good guys win. The end.

The good guys are usually scrappy amateurs; the bad guys usually well-organized professionals with typical fascist precision. The good guys usually demonstrate a respect for human life and the bonds of friendship; the bad guys betray their citizens and their underlings with equal abandon. They gain their good guy or bad guy status by either following the universal law, or breaking it.

This is not exactly a scintillatingly original observation, except that the article claims you'll almost never find an example of this before 1700. Take the Iliad. Neither the Greeks nor the Trojans are especially good or villainous. The Trojans lose some points for kidnapping a woman, but the Greeks lose some points for killing and enslaving an entire city. Neither side is scrappier or more professional than the other. Neither seems to treat civilians better or demonstrate more loyalty. Exciting things happen, but telling the story of how Good triumphed over Evil was definitely not on Homer's mind. Nor was it on the mind of the authors of Mahabharata, the Norse sagas, Jack and the Beanstalk, et cetera.

Where ancient works do have good-vs-evil overtones, it’s usually because we’re reading more modern adaptations. Robin Hood doesn’t rob from the rich to give to the poor until much later versions of the story; King Arthur’s knights don’t start out as especially good people and don’t really fight a unified team of evildoers; the virtuous-Arthur-vs-evil-Mordred theme doesn’t really dominate until Victorian retellings. Disney’s Hercules, which reimagines Hades from perfectly-reasonable-underworld-god to classic-cartoon-villain is a striking late-20th-century example (I forgot that it ended with Hercules punching Hades so hard that he falls into the River Styx and gets pulled under by his own damned souls, not the most Hellenic of conclusions).

The article concludes this is because of nationalism. Nation-states wanted their soldiers to imagine themselves as fighting on the side of good, against innately-evil cartoon-villain enemies. This was so compelling a vision that it shaped culture from then on:

Good guy/bad guy narratives might not possess any moral sophistication, but they do promote social stability, and they’re useful for getting people to sign up for armies and fight in wars with other nations. Their values feel like morality, and the association with folklore and mythology lends them a patina of legitimacy, but still, they don’t arise from a moral vision. They are rooted instead in a political vision, which is why they don’t help us deliberate, or think more deeply about the meanings of our actions. Like the original Grimm stories, they’re a political tool designed to bind nations together.

When I talked with Andrea Pitzer, the author of One Long Night: A Global History of Concentration Camps (2017), about the rise of the idea that people on opposite sides of conflicts have different moral qualities, she told me: ‘Three inventions collided to make concentration camps possible: barbed wire, automatic weapons, and the belief that whole categories of people should be locked up.’ When we read, watch and tell stories of good guys warring against bad guys, we are essentially persuading ourselves that our opponents would not be fighting us, indeed they would not be on the other team at all, if they had any loyalty or valued human life. In short, we are rehearsing the idea that moral qualities belong to categories of people rather than individuals. It is the Grimms’ and von Herder’s vision taken to its logical nationalist conclusion that implies that ‘categories of people should be locked up’.

Watching Wonder Woman at the end of the 2017 movie give a speech about preemptively forgiving ‘humanity’ for all the inevitable offences of the Second World War, I was reminded yet again that stories of good guys and bad guys actively make a virtue of letting the home team in a conflict get away with any expedient atrocity.

What are we to think of this?

A quick check of the article’s claims finds them kind of lacking. Robin Hood started stealing from the rich to give to the poor as early as the 1592 edition of his tale. And doesn’t the Bible contain lots of good vs. evil? The author sweeps this under the rug by saying that the Israelites don’t seem much more virtuous than the Canaanites, but one could argue that they’re just not more 2018-virtuous; maybe 1000 BC-virtue was worshipping God and smashing idols. What about Armageddon? Ragnarok? Zoroastrianism? The Mayan Hero Twins? The very existence of Crusades seems to point to “all the good people get together and fight all the bad people, in the name of Goodness” being a recognizable suggestion. [EDIT: @scholars_stage lists some more here].

Are there any differences between the way ancients and moderns looked at this? Maybe modern stories seem more likely to have two clear sides (eg made up of multiple different people) separated by moral character. Villains (as opposed to monsters, or beings that are evil by their very nature) seem more modern. So does the idea of heroes as necessarily scrappy, and villains as necessarily well-organized. And just eyeballing it, modern stories seem to use this plot a lot more, and to have less deviation from the formula.

But even if that’s true, the rise of nation-states seems like a uniquely bad explanation for the rise of these narratives. The past stories seem much more conducive to blind nationalism than our own. The amorality of the warriors in the Iliad manifested as total loyalty: Hector fought for Troy not because Troy was in the right, but because he was a Trojan. Achilles fought for Greece not because he believed in the Greek cause, but because that was his side and he was sticking to it. The whole point of the Mahabharata is the whole ‘theirs not to reason why, theirs but to do and die’ philosophy that makes for effective nationalist soldiering. In Jack and the Beanstalk, we root for Jack because he’s human and we are Team Human. Jack can steal and kill whatever and whoever he wants and we’ll excuse him. What more could a nationalist want?

In contrast, the whole point of modern good-vs-evil is that you should choose sides based on principle rather than loyalty. The article gets this exactly right in pointing out the literary motif of virtuous betrayal. We are expected to celebrate Darth Vader or Severus Snape virtuously betraying their dark overlords to help the good guys. In Avatar, the main character decides his entire species is wrong and joins weird aliens to try to kill them, and this is good. Compare to ancient myths, where Hector defecting to Greece because the abduction of Helen was morally wrong is just totally unthinkable. This is a super-anti-nationalist way of thinking.

I suppose nationalists could make the very dangerous bargain of telling their soldiers to always fight for the good guys, then get really good propaganda to make sure they look like the good guys. And maybe this would make them fight harder than if they were just doing the old fight-for-your-own-side thing? But honestly, Achilles seems to have been fighting really hard. Is this whole convoluted process really easier than just telling people from the start to fight for their own side and not betray it?

Also do we really want to claim that concentration camps worked because the Nazis believed you should take principled positions based on moral values, instead of unquestioningly supporting your in-group? Really?

If nationalism didn’t drive the (possibly) increasing prevalence of good-vs-evil stories, what did?

One theory: the broad democratization process marked by the shift from sword-based aristocratic armies to gun-based popular armies. Old stories celebrated warrior virtues – strength, loyalty, bravery. The new stories celebrate populist virtue – compassion, altruism, protecting Democracy. The new nation-states would have liked to maintain the warrior virtues, it just wasn’t an option for them in the face of having to suddenly win the loyalty of a bunch of people they hadn’t cared about before.

A second theory: this is just part of widening moral circles of concern. Pre-1700s, people were still at the point where slavery seemed like an okay idea. Maybe we didn’t have the whole Care/Harm foundation down all that well. Once we got that, through whatever process of moral progress we got it from, having heroes who shared it started seeming more compelling.

A third theory: properly-written good-vs-evil stories are just better, in a memetic sense, but it took a long time to get the formula right. Coca-Cola is better than yak’s milk, but you’ve got to invent it before you can enjoy it – and just having a vague cola-ish mix of spices in water doesn’t count. But once you invent it, it spreads everywhere, and people throw out whatever they were doing before.

I realize this is pretty unsophisticated-sounding, but I’m basing this off of my continuing confusion over the rise of Christianity. Christianity came out of nowhere and had spread to 10% – 20% of the Roman population by the time Constantine made it official. And then it spread to Germany, England, Ireland, Scandinavia, Eastern Europe, Armenia, and Russia, mostly peacefully. Missionaries would come to the tribe of Hrothvalg The Bloody, they would politely ask him to ditch the War God and the Death God and so on in favor of Jesus and meekness, and as often as not he would just say yes. This is pretty astonishing even if you use colonialism as an excuse to dismiss the Christianization of the Americas, half of Africa, and a good bit of East Asia.

I’ve looked around for anyone who has a decent explanation of this, and as far as I can tell Christianity was just really appealing. People worshipped Thor or Zeus or whoever because that was what people in their ethnic group did, plus Thor/Zeus would smite them if they didn’t. Faced with the idea of a God who was actually good, and could promise them eternity in Heaven, and who was against bad things, and never raped anybody and turned them into animals, everyone just agreed this was a better deal. I know this is a horrendously naive-sounding theory, but it’s the only one I’ve got.

And there seems to be a deep connection between Greek paganism and the narrative structure of the Iliad, and a deep connection between Christianity and the narrative structure of (eg) Harry Potter. Achilles fights for Greece because he’s Greek, and the pagan worships Zeus because he (the pagan) is Greek, and that’s all there is to it. But Harry Potter fights for Dumbledore and against Voldemort because the one is good and the other evil, and the Christian worships God and resists the Devil because the one is good and the other evil. Achilles and Hector wear their impressiveness on their sleeves, much like Zeus. Harry Potter is a seemingly ordinary and really quite weak guy who just happens to be fated to save everything through destiny, parentage, and the power of love/sacrifice, much like Jesus.

(this isn’t a joke – one could describe Luke Skywalker or Frodo Baggins the same way)

Maybe this good-vs-evil thing is just really attractive, and naturally replaces whatever was there before – but it’s just really hard to get exactly right. There was a 1500 year lag time between when people got the magic formula for religion (Zoroastrianism wasn’t good enough!) and when they got the magic formula for stories. Wasn’t the high-grade Colombian ultra-purified version of the good-vs-evil fantasy plot invented by Tolkien and CS Lewis sitting around in Oxford specifically trying to figure out how to translate Christianity into narrative form? Maybe this was more of an innovation than it seemed. Maybe they actually did the same thing that St. Paul or whoever did and created a totally new memetic species capable of overwhelming everything that came before.

If this is so, maybe the next question is whether there’s anything else waiting to be good-vs-evil-ified, what form that will take, and what will happen afterwards.


Highlights From The Comments On Conflict Vs. Mistake


Thanks to everyone who commented on the posts about conflict and mistake theory.

aciddc writes:

I’m a leftist (and I guess a Marxist in the same sense I guess I’m a Darwinist despite knowing evolutionary theory has passed him by) fan of this blog. I’ve thought about this “conflict theory vs. mistake theory” dichotomy a lot, though I’ve been thinking of it as what distinguishes “leftists” from “liberals.”

I went through the list of “conflict theory says X, mistake theory says Y” nodding my head and hoping that you and everybody else reading it had the same impression as me – that both theories are important and valuable frameworks through which to view the world. There are definitely common interests that everybody in America shares, and there are definitely some pretty significant conflicts of interests as well.

The reason that I do identify as a leftist and sometimes feel like an evangelist for conflict theory is that I get the impression that most people don’t even have conflict theory in their mental framework. Leftists all understand what the “we’re all in this together” liberal viewpoint is, while even incredibly smart and on it liberals like yourself can go for a long time without even thinking about the “politics is the clash of interest groups with conflicts of interest” leftist viewpoint.

I’m starting with this one, before I get to all the criticism and objections, to insist that there is some core worth saving here. To everyone who says this was obvious, I can only plead that you listen to all the people saying it wasn’t obvious to them at all. There are probably a lot of things wrong with how I described it, and the remaining comments will go over a lot of them, but I can only make the excuse that I’m groping towards something useful without any of the people who claim to already understand it perfectly helping me at all.

HeelBearCub writes:

First off, I think Scott’s post is a good post that seems to make good points and takes steps on the path to true knowledge. But, as I read it I kept thinking about Indian wise men describing the elephant. Or rather, I pictured two amateur carpenters arguing about how wood is joined together, one being a “nail” theorist and the other a “screw” theorist. Now obviously, you can build using nails or screws, but structures out in the real world are not built exclusively using one or the other. This seems to me to be a recurring failure of thinking on Scott’s part, honestly. The tendency to naturally think in binary terms (even while knowing this is incorrect).

The goal of those posts wasn’t to present a perfect understanding and tell people that was it, but to gesture at a concept that needed refining.

By analogy, should we ever talk about the left vs. right axis in politics? Isn’t that just a “dichotomy”? After all, almost nobody is 100% leftist on everything or 100% rightist on everything.

What about introverts and extraverts? There’s no 100% introvert or 100% extravert. Everyone is a combination of introverted in some situations and extraverted in others. That doesn’t mean we should never use these terms, or that psychologists are being too binary and dichotomizing. It means we establish the terms so that everyone knows what we’re talking about, and then discuss where everybody is on the axis in between them.

I tried to exaggerate conflict and mistake theory equally, making it obvious I was presenting caricature versions to be filled in later. I think I was at least equally unfair to mistake theorists – presenting them as believing there’s no such thing as selfishness, as thinking that surely tobacco companies just deny their products cause cancer because their CEOs have a genuinely different interpretation of health statistics. The point isn’t “mistake theory is good and normal and conflict theory is bad”. The point is that both look ridiculous at the extremes and everyone combines them in some (different) proportion in reality.


Tumblr user unknought writes:

Principal-agent problems, rent-seeking, and aligning incentives are things that socialists do talk about. Like, a lot. But even if they weren’t, it’s totally bizarre to represent these as mistake theory concepts. All three of these are concepts which are used to describe ways in which we don’t all want the same things, and how agents in positions of power whose goals don’t align with the common good can fuck things up for the rest of us. If conflict theory means anything, it means that.

On my hospital analogy for why mistake theorists like free speech:

So like, if you learned that your doctor’s recommendation was influenced by Pfizer owning the hospital and restricting doctors’ choices and suppressing information about their medication…wouldn’t that be an excellent reason not to be a mistake theorist in this case? Like, this is literally an example of where you can’t trust expert opinion because things are being controlled by a powerful entity whose interests do not align with the common good. This is about as clear-cut an example of conflict theory getting it right as can be imagined.

They quote another Tumblr user who I’m not sure I have permission to link to directly, responding to my analogy of the shill scientist with a PowerPoint:

In this scenario, even the strawman conflict theorist acknowledges that the people who believe the powerpoint are mistaken, and could be won over through reason and debate, thus undercutting the narrative scott is presenting in which strawmen ‘conflict theorists’ are generally uninterested in debate…the whole tendency to caricature opponents as anti-rational also plays in to his assumption that conflict theorists would be anti-intellectual, which ignores that in the context of a conflict, having intelligent people working on your side to win the conflict is obviously desirable…

even if i do think scott amounts to the hypothetical “Elite shill” with a powerpoint saying yellowstone will erupt, proving his claims wrong on a technical level is still my first and most important priority if i want him to not be a successful shill. simply accusing him of being a shill, and pointing out the $1,627,000 which MIRI (which scott is affiliated with) has received from Peter Thiel would be useless if i couldn’t also demonstrate to people that he was incorrect. the insinuation that viewing a particular disagreement as a conflict of interest precludes any possibility of engaging on a factual level is ludicrous. even if i think the “conflict theorist”/”mistake theorist” is just a cheap rhetorical trick to sell people on anti-democratic ideology, i still need to demonstrate that it’s not reflective of material reality.

(one correction: I am not affiliated with MIRI. I spent a few weeks doing a minor writing job for them five years ago; I have had no formal relationship since then besides donating money)

The three comments above all seem right to me, and seem like the best examples of how my intuitive concepts of “conflict theorist” and “mistake theorist” fail to be captured by the ways I described them.

How fatal is this objection? A lot of things aren’t accurately captured by the words we use to describe them. Consider again the analogy of the political spectrum. Someone might say “You say Republicans are conservative, but they don’t want to conserve the traditional public school system.” Or “You say libertarians value freedom, but what about freedom from powerful corporations?” Yeah, okay, you got me there, sometimes words are not perfect 100% handles for complex concepts. There’s got to be a balance between having simple words that mean exactly what they say, and accurately tracking the weird subtleties of how people really associate.

A lot of this discussion is conflating “conflict vs. mistake” with “protester vs. wonk” and “Marxist vs. neoliberal”. Sometimes they’re being conflated by me, where I say something is conflict theorist when I mean Marxist, or vice versa. Other times they’re being conflated by the commenters, who say “You say X is conflict theorist, but Marxists don’t actually believe X!” Well, maybe that’s because Marxists aren’t 100% stereotypical conflict theorist. The word “libertarian” can shed light on Ron Paul, even though he supports government intervention in some areas.

Obviously this idea isn’t useful if it totally fails to correspond to existing politics. But I think there are more subtle (read: dangerously susceptible to just being overfitting) ways of understanding this that make sense of these contradictions. For example, conflict theorists can certainly engage in intellectual debate if it helps them win. That doesn’t mean there’s not a difference between people who choose their side based on intellectual debate, and people who engage in intellectual debate if it helps their side. Also, it’s unclear whether a conflict theorist thinks intellectual debate is better than equally persuasive propaganda, whereas a good mistake theorist definitely should.

Likewise, any mistake theorist who didn’t acknowledge that there are lots of self-interested parties would be an extraordinary idiot. The mistake theorist calls these “special interests”, which I suppose is a pretty-loaded term suggesting they’re a few weird exceptions to the rule of supporting the general interest. The question can’t be whether special interests exist – obviously they do. It can’t even be whether they’ve seized control of the government – I think this is something of a consensus position right now. Maybe it’s something like “Is there anything other than special interests?”

In a book review a while ago, I talked about figure-ground inversions:

An example of what I mean, taken from politics: some people think of government as another name for the things we do together, like providing food to the hungry, or ensuring that old people have the health care they need. These people know that some politicians are corrupt, and sometimes the money actually goes to whoever’s best at demanding pork, and the regulations sometimes favor whichever giant corporation has the best lobbyists. But this is viewed as a weird disease of the body politic, something that can be abstracted away as noise in the system.

And then there are other people who think of government as a giant pork-distribution system, where obviously representatives and bureaucrats, incentivized in every way to support the forces that provide them with campaign funding and personal prestige, will take those incentives. Obviously they’ll use the government to crush their enemies. Sometimes this system also involves the hungry getting food and the elderly getting medical care, as an epiphenomenon of its pork-distribution role, but this isn’t particularly important and can be abstracted away as noise.

I think I can go back and forth between these two models when I need to, but it’s a weird switch of perspective, where the parts you view as noise in one model resolve into the essence of the other and vice versa.

I think any difference between mistake and conflict theorists on this issue has to be of the figure-ground inversion type. This doesn’t have to be about the number of people who are vs. aren’t special interests. It can be about the amount of each person’s soul devoted to their special interest as rich/poor/white/black/rural/urban, vs. the amount of each person’s soul devoted to loving truth and doing good. Framed this way, it sounds like we’re pretty much screwed, except that free rider problems might be working in our favor here.

A related difference: because mistake theorists think there’s some stable ground other than conflict, they picture themselves as potentially neutral referees (even better: Geneva Convention delegates) looking for ways to circumvent the conflict. Conflict theorists just want to win it.

Some examples of circumventing conflicts: free religion, free speech, federalism, reliance on scientific consensus. Free religion because instead of pushing for any one religion to win, you just create a system that defuses the threat of religious violence. Free speech for the same reason. Federalism because instead of saying that any one side wins, you just tell Side A to do it their way over there, and Side B to do it their way over there (even better: polycentric law). Reliance on scientific consensus, because instead of arguing over whether to have school vouchers or not, everyone just agrees to have unbiased scientists do a study and trust their findings.

I notice Jacobite describes itself as “the post-political magazine”. Politics is about having conflict. Mistake-theorists would love to become post-political, in the sense of circumventing all conflicts. Conflicts actually happening as conflicts is a failure, deadweight loss. This wouldn’t mean that nobody has different interests. It would mean that those different interests play out in some formalized way that doesn’t look conflict-y. Think of the Patchwork / charter city / seasteading dream, where there are lots of different polities and people vote with their feet for which one they prefer. Protests, bring-out-the-vote efforts, arguments – all have become obsolete. These ideas don’t deny the existence of conflict – they just represent a desire to avoid it rather than win it. Conflict theorists could theoretically want to avoid conflict, especially if they think they’d lose. But most of the ones I’ve met think that avoiding conflict is a better deal for the enemy than for them, and so would rather just have it and see what happens.

I think this is why I think of public choice theory and its relatives as basically more mistake-y. It’s not just that they sometimes say government doesn’t work, get seized on by libertarians, and so get a bad reputation among Marxists. It’s that they take this god’s eye perspective of trying to micromanage the rules of political conflicts instead of winning them.


Some other good comments:

Fluffy Buffalo comments:

What struck me in your post was that the examples you gave for conflict theories all came from the Marxist perspective. While (cultural) Marxists may be the most obvious, unabashed conflict theorists these days, the behavior of the American right wing looks like they have their fair share of conflict theorists, and Republican tax and health care policy often smells more of an undeclared class warfare than of careful consideration of the pros and cons.

It’s definitely true that there’s a lot of right wing conflict theory and that my post mostly ignored this. They continue:

This looks not like a fundamental question of what the world is really like, and more like a multi-player game theory problem, in particular a multi-player prisoner’s dilemma. It’s all fine and dandy – in fact, it’s probably the most constructive, helpful thing to do – to play “mistake theory” if everyone else is playing the same game, but if you have a sufficiently strong faction playing “conflict theory” (refusing to compromise, because everyone else is the devil), they have more success than they should. “Conflict theory” is like a bad Nash equilibrium, a self-fulfilling prophecy – if everyone behaves according to the diagnosis “It’s power-hungry, uncompromising people on the other side who cause the problem”, there will be no lack of power-hungry, uncompromising people on all sides, causing all sorts of problems.
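The commenter's "bad Nash equilibrium" point can be sketched as a standard prisoner's dilemma (the payoff numbers here are illustrative, not from the comment):

```python
# Payoffs (row player, column player) for each pair of stances.
# "mistake" ~ compromise and debate in good faith; "conflict" ~ refuse to compromise.
payoffs = {
    ("mistake", "mistake"): (3, 3),   # everyone compromises: best total outcome
    ("mistake", "conflict"): (1, 4),  # the compromiser gets exploited
    ("conflict", "mistake"): (4, 1),
    ("conflict", "conflict"): (2, 2), # gridlock: worse for both, but stable
}

def best_response(opponent_stance):
    """Row player's payoff-maximizing stance given the opponent's stance."""
    return max(["mistake", "conflict"],
               key=lambda stance: payoffs[(stance, opponent_stance)][0])

# Whatever the other side does, playing "conflict" pays more...
print(best_response("mistake"))   # conflict
print(best_response("conflict"))  # conflict
# ...so (conflict, conflict) is the unique Nash equilibrium, even though
# (mistake, mistake) would leave both sides better off.
```

With these payoffs, acting as if the other side is the devil is individually rational no matter what the other side actually does, which is exactly the self-fulfilling-prophecy structure the comment describes.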

David Friedman makes the really bizarre topsy-turvy argument that conflict theorists should be people who want minor tweaks to the existing system, and mistake theorists people who want to get rid of it entirely:

A lot of whether you believe in mistake theory or conflict theory depends on how many mistakes you think there are, which comes down to how badly you think existing systems work. If you believe that government decisions are very far from optimal, that FDA regulation has little positive effect on how good drugs are, a huge negative effect on what drugs are developed and how expensive they are, and believe it not only from observation (Peltzman’s old article, for instance) but as what you would expect from the incentive structure, then you see the main problem as persuading the 99.9% of people who are worse off as a result. If you think government decisions are close to optimal, on the other hand, then all that is left to change is how they weight payoffs to different people, which is a conflict issue.

I guess this reinforces the point from above: taking some of these terms literally, or trying to reason with them, gets very different results from looking at how they operate on political coalitions.

From Jan Samohýl:

I really enjoyed the post, the two worldviews are interesting. But as many people have noted, things are often not so clear-cut. I would like to point out some interesting connections.

I think of myself as more of a conflict theorist, probably being on the left. But in many cases, I look at the world as a mistake theorist, holding quite liberal views. To me, it’s somehow related to the idea “expect the worst (conflict), hope for the best (mistake)”. I am a big fan of direct democracy (which I see as a way to peacefully resolve innate conflict), and less of a fan of resolving politics through debate (which implicitly assumes the debaters are honest). To me, direct democracy is related to trust, as it is understood in computer security, for example. In computer security, trust is not given; it is earned and can be revoked at any time. You can only trust somebody who you don’t have to trust. In computer security, we assume the worst intentions (conflict), not the best ones (mistake). And so I believe direct democracy is needed as a baseline, because politicians are not to be trusted (the principal-agent problem).

This is another weird topsy-turvy thing. I was thinking in terms where mistake theorists are afraid of direct democracy because any random demagogue can sabotage it, and trust more in “post-political” systems like the ones mentioned in Part I above.

From Nootropic Cormorant:

A Marxist reader of your blog reporting in! I feel that this way of analyzing things is harmful (a charitable mistake) because ironically it conflict-theorizes the debate, so that suspected conflict theorists are automatically seen as beyond object-level discussion, leaving you with no option but to act like one yourself. I would question whether this distinction is even relevant to people and ideologies rather than to situations and debates. Many of the commentariat have identified themselves as mistake theorists, but I have to wonder whether this is because they abstain from the sin of conflict-reasoning or because it bothers them when the outgroup does it. This is some conflict-theorizing on my part. Also, the willingness to discuss policy issues, aka the reformism vs. revolutionism axis, should not be conflated with this, as one can go along it in a purely mistake-theorizing framework.

From Ben Wave:

I count myself as a Marxist, as someone who adores your blog, and as someone who primarily uses what you’ve deemed mistake theory as my lens on the world. It’s naive to expect that everyone else would or should have the same terminal values as I do, or that everyone should agree on the relative role of government in bringing those values about. So I don’t. I don’t share the view of some of my comrades who celebrate death and misfortune upon the rich. I do share their desire to take power from the powerful, and create a world in which the weak have more power. I desire this independently of a desire to increase living standards/lifetimes/happiness overall, and I desire this for three reasons. The first is that I see high levels of inequality as an existential threat to society. The second is that I would not consider the current distribution fair if I were to be incarnated into a human chosen at random. The third is that the more unequal society is, the less valid the ethical premise of capitalism (that it is right to reward those who fill market needs because they are equivalent to human needs and desires), which I feel is important seeing as it is the dominant method by which we decide what gets done by the ensemble of human endeavour. As regards the conflict view, yes, many comrades use that. I see it as largely unhelpful, but I’m sympathetic to their situation. Marginalised people are often harmed or killed by inequality (medical, prison and law-enforcement, unsafe conditions of work or home life), so conflict is the reality that some of them face. I find it more productive to try and cooperate with my opposites who like cooperation than to fight against my opposites who do not. Hopefully the coalition of those willing to negotiate can improve conditions globally. Gather with those who share your terminal values. Gain power through that unity, and use that power in negotiation. Stay open to new information to best pursue your goals. I guess that’s the dream.

I find this interesting except that I can’t figure out why marginalized people being harmed by inequality, bad medical care, bad working conditions, etc makes conflict “the reality” for them. Surely it means bad things are the reality, and we’re back to the original question of whether bad things are mostly due to conflict or mistake?

From bosun_of_industry:

You position these in opposition, or at least as orthogonal to each other, but I think that misses how often they are layered and/or modal. Moreover, the Marxist portion of this essay is a red herring carried forward from the Jacobite article and does it a disservice, because it imposes a uniformity on any given actor’s approach. Instead, a common split is people approaching policy issues as mistake theory when within-group and as conflict theory when out-group. There are distinct strategic reasons for this sort of behavior, especially within a winner-take-all political system. Similarly, there’s a strong history among politicians and activists alike, and for similar reasons, of making a public show of conflict theorizing while acting as a mistake theorist in private.

From Alkatyn:

To go up a meta level, maybe conflict theorists are mistake theorists who have applied the same methodology to “how do we get our ideas implemented” and decided based on the empirical evidence that emotional appeals and group action are more effective than policy analysis.

The effective altruists have a lot of discussion on what the most effective way to push their preferred policies is, and usually they settle on some combination of appealing to academics and founding think-tank-style organizations. I have noticed this working very well so far. This piece on how neoliberalism caught on is a really interesting example of doing this right and effectively. I suppose the conflict theory answer would be that this only works for policies that elites are already predisposed to like, or at least not to resist.

From John Schilling:

One thing seems to be getting lost in a number of places here, so I’ll just address it this once. Mistake Theory doesn’t require denying that Rich Plutocrats are genuinely trying to further enrich themselves at the expense of the poor, or that the Poor are trying to tear down the rich out of spite, er, righteously take back what is theirs. More broadly, it does not require denying that people will selfishly pursue their selfish interests to the detriment of others or of the whole of humanity, and does not simplify to “if we were all rationalist altruists the right answer would be…” Mistake Theory would hold that in almost all real conflicts, the best outcome for everyone is a negotiated solution, and that the relevant facts (including the balance of power between competing interest groups) make the range of plausibly negotiated agreement reasonably narrow. So failing to sit down and quietly negotiate that agreement, instead escalating to pointless conflict, is usually a Mistake and often an Easy Mistake. Figuring out what to do about people who persist in making that Easy Mistake is the sort of problem that often leads to Hard Mistakes, and sometimes to solutions that look like Easy Conflict.

Butlerian writes:

I think that the conflict/mistake distinction is being vastly overblown here. Scott is proposing some sort of fundamentally different personality type / rhetorical dichotomy between agents. I think these differences in behaviour can be explained in a much more proximate way, as simply being answers of “Yes” or “No” to the question: “Do you believe that shills are amongst us RIGHT NOW?” If I am in a discussion where my answer to that question is “No”, I will discuss in a mistake-theorist manner: believing that other participants are arguing in good faith, personally arguing in good faith, sticking to truth-seeking, and at least trying to be open to the possibility of changing my mind. If I am in a discussion where my answer to that question is “Yes”, I will discuss in a conflict-theorist manner: having no genuine interest in the merits of the suspect participants’ arguments, reading them only with an eye to finding errors / weak-man-able faultlines, and refusing to allow myself to change my mind even in the face of apparently convincing evidence, because I strongly suspect that the evidence is the contaminated product of Yudkowsky’s Clever Arguer. I would think that this is obvious. When Marxists / feminists / Nazis are in their safe-space forums, and are happy that everyone in the discussion is a like-minded Marxist / feminist / Nazi who genuinely shares their end goal of a well-functioning communist / non-cis-hetero-patriarchal / white ethnostate society, they will have amongst themselves truth-seeking, mistake-theoretical discussions. It is only when they go out into enemy territory and find themselves surrounded by perfidious capitalist / misogynist / Judeobolshevik agents that they switch to conflict-theoretical mode.

This reduction to “belief in shills” as the most important difference between the theories is interesting.

Aside from the people who wear it on their sleeves, like PR people and lobbyists, I find myself practically never believing anyone is a “shill” in the strong sense – i.e. they don’t believe the arguments they’re making, but make them anyway for some personal advantage. Even when people do very shill-ish stuff – like put out a biased paper for a think tank that promotes its agenda – I assume the think tank probably just recruited people who already believed in their agenda, and let their personal bias do the rest.

In fact, the idea of “shill” is really complicated here. People accuse me of being biased or selfish all the time if I make arguments that seem to benefit rich people, or Jews, or white people, or any other category I’m a member of. But by far the most important thing that affects my finances and social status right now is any threat to the guild-based nature of the medical profession. And I’ve consistently argued in favor of things like letting psychologists prescribe, even though this will cost me money and status way more directly than anything that happens regarding class or race. If people were interpreting shilling in its most literal sense, they would be surprised that I’m not writing 100% about how great medical guilds are, would eventually conclude I was insane or some vanishingly rare moral saint, and everything about race or class would be so irrelevant to this calculation that it would never come up.

The fact that nobody thinks this way has to be tied into some sort of answer to the class warfare free rider problem. Nobody is a first-level shill literally pushing their own self-interest, but weird class-consciousness/ideology style forces give people biases (mostly based on race or class) that they push at least somewhat honestly. I think this is at least sort of in accord with some forms of orthodox Marxism.

But this sounds a lot like…mistake theory. If people push their policies because they’re biased into thinking that’s the morally correct thing to do, then surely solving their biases and convincing them otherwise could change their policy preferences. Is there anyone who doesn’t believe this model? If so, what exactly are we talking about?

And from christhenottopher:

All this talk about how Marxists don’t frequent this blog and you go and make a post arguing for a Hegelian dialectic. Yes, yes of course this blog has always been on the side of Thesis. And certainly we must always beware the great enemy Antithesis. But let’s end the essay by arguing that what we really need is Synthesis! The professor in college who taught me what dialectical reasoning was warned the class that once we understood this, we’d see dialectics everywhere. And once again he was proven right.

This had BETTER NOT BE TRUE. If all of this time all this incomprehensible stuff about dialectics was just basic “start understanding a concept by giving binary examples of opposite sides, then correct it and make it more sophisticated later”, I am going to be SO ANGRY.