[Related: Tyler Cowen on rationalists, Noah Smith on rationalists, Will Wilkinson on rationalists, etc]
If I were an actor in an improv show, and my prompt was “annoying person who’s never read any economics, criticizing economists”, I think I could nail it. I’d say something like:
Economists think that they can figure out everything by sitting in their armchairs and coming up with ‘models’ based on ideas like ‘the only motivation is greed’ or ‘everyone behaves perfectly rationally’. But they didn’t predict the housing bubble, they didn’t predict the subprime mortgage crisis, and they didn’t predict Lehman Brothers. All they ever do is talk about how capitalism is perfect and government regulation never works, then act shocked when the real world doesn’t conform to their theories.
This criticism’s very clichédness should make it suspect. It would be very strange if there were a standard set of criticisms of economists, which practically everyone knew about and agreed with, and the only people who hadn’t gotten the message yet were economists themselves. If any moron on a street corner could correctly point out the errors being made by bigshot PhDs, why would the PhDs never consider changing?
A few of these are completely made up and based on radical misunderstandings of what economists are even trying to do. As for the rest, my impression is that economists not only know about these criticisms, but invented them. During the last few paradigm shifts in economics, the new guard levied these complaints against the old guard, mostly won, and their arguments percolated down into the culture as The Correct Arguments To Use Against Economics. Now the new guard is doing their own thing – behavioral economics, experimental economics, economics of effective government intervention. The new paradigm probably has a lot of problems too, but it’s a pretty good bet that random people you stop on the street aren’t going to know about them.
As a psychiatrist, I constantly get told that my field is about “blaming everything on your mother” or thinks “everything is serotonin deficiency”. The first accusation is about forty years out of date, the second one a misrepresentation of ideas that are themselves fifteen years out of date. Even worse is when people talk about how psychiatrists ‘electroshock people into submission’ – modern electroconvulsive therapy is safe, painless, and extremely effective, but very rarely performed precisely because of the (obsolete) stereotype that it’s barbaric and overused. The criticism is the exact opposite of reality, because reality is formed by everybody hearing the criticism all the time and over-reacting to it.
If I were an actor in an improv show, and my prompt was “annoying person who’s never read anything about rationality, criticizing rationalists”, it would go something like:
Nobody is perfectly rational, and so-called rationalists obviously don’t realize this. They think they can get the right answer to everything just by thinking about it, but in reality intelligent thought requires not just brute-force application of IQ but also domain expertise, hard-to-define-intuition, trial-and-error, and a humble openness to criticism and debate. That’s why you can’t just completely reject the existing academic system and become a self-taught autodidact like rationalists want to do. Remember, lots of Communist-style attempts to remake society along seemingly ‘rational’ lines have failed disastrously; you shouldn’t just throw out the work of everyone who has come before because they’re not rational enough for you. Heck, being “rational” is kind of like a religion, isn’t it: you’ve got ‘faith’ that rational thought always works, and trying to be rational is your ‘ritual’. Anyway, rationality isn’t everything – instead of pretending to be Spock, people should remain open to things like emotions, art, and relationships. Instead of just trying to be right all the time, people should want to help others and change the world.
Like the economics example, these combine basic mistakes with legitimate criticisms levied by rationalists themselves against previous rationalist paradigms or flaws in the movement. Like the electroconvulsive therapy example, they’re necessarily the opposite of reality because they take the things rationalists are most worried about and dub them “the things rationalists never consider”.
There have been past paradigms for which some of these criticisms are pretty fair. I think especially of the late-19th/early-20th century Progressive movement. Sidney and Beatrice Webb, Le Corbusier, George Bernard Shaw, Marx and the Soviets, the Behaviorists, and all the rest. Even the early days of our own movement on Overcoming Bias and Less Wrong had a lot of this.
But notice how many of those names are blue. Each of those links goes to book reviews, by me, of books studying those people and how they went wrong. So consider the possibility that the rationalist community has a plan somewhat more interesting than just “remain blissfully unaware of past failures and continue to repeat them again and again”.
Modern rationalists don’t think they’ve achieved perfect rationality; they keep trying to get people to call them “aspiring rationalists” only to be frustrated by the phrase being too long (my compromise proposal to shorten it to “aspies” was inexplicably rejected). They try to focus on doubting themselves instead of criticizing others. They don’t pooh-pooh academia and domain expertise – in the last survey, about 20% of people above age 30 had PhDs. They don’t reject criticism and self-correction; many have admonymous accounts and public lists of past mistakes. They don’t want to blithely destroy all existing institutions – this is the only community I know where interjecting with “Chesterton’s fence!” is a universally understood counterargument which shifts the burden of proof back on the proponent. They’re not a “religion” any more than everything else is. They have said approximately one zillion times that they don’t like Spock and think he’s a bad role model. They include painters, poets, dancers, photographers, and novelists. They…well…”they never have romantic relationships” seems like maybe the opposite of the criticism that somebody familiar with the community might apply. They are among the strongest proponents of the effective altruist movement, encourage each other to give various percents of their income to charity, and founded or lead various charitable organizations.
Look. I’m the last person who’s going to deny that the road we’re on is littered with the skulls of the people who tried to do this before us. But we’ve noticed the skulls. We’ve looked at the creepy skull pyramids and thought “huh, better try to do the opposite of what those guys did”. Just as the best doctors are humbled by the history of murderous blood-letting, the best leftists are humbled by the history of Soviet authoritarianism, and the best generals are humbled by the history of Vietnam and Iraq and Libya and all the others – in exactly this way, the rationalist movement hasn’t missed the concerns that everybody who thinks of the idea of a “rationalist movement” for five seconds has come up with. If you have this sort of concern, and you want to accuse us of it, please do a quick Google search to make sure that everybody hasn’t been condemning it and promising not to do it since the beginning.
We’re almost certainly still making horrendous mistakes that people thirty years from now will rightly criticize us for. But they’re new mistakes. They’re original and exciting mistakes which are not the same mistakes everybody who hears the word “rational” immediately knows to check for and try to avoid. Or at worst, they’re the sort of Hofstadter’s Law-esque mistakes that are impossible to avoid by knowing about and compensating for them.
And I hope that maybe having a community dedicated to carefully checking its own thought processes and trying to minimize error in every way possible will mean we make slightly fewer horrendous mistakes than people who don’t do that. I hope that constant vigilance has given us at least a tiny bit of a leg up, in the determining-what-is-true field, compared to people who think this is unnecessary and truth-seeking is a waste of time.
You set up a straw anti-rationalist, you actually admit that that is what you are doing, and then you knock it down.
I’m saying that this is the kind of criticism we actually get. There’s a big debate thing going on now on social media; the links on the top of the post should be good starts.
Your straw anti-rationalist does not summarise the views of Will Wilkinson (just to pick the quickest read). Why not address what he says directly?
As I understand him, the issue is not so much “you are doing the wrong things in your attempt to get the right answer to everything”, but “getting the right answer to everything is typically the wrong goal” (and he doesn’t mention “wanting to help others and change the world”).
Hey there friendly neighborhood commenter, it is I, here to use you to make a somewhat related point:
The problem that people seem to have with what Scott has done here is that it seems like a way to knock down some weak arguments, basically a strawman. But if people really make those weak arguments, then it’s not necessarily a strawman; it might be if he attributed them to the entire movement, as seems to be alleged here, but even then it would depend on how much of the movement seriously relies on these weak arguments. In other words, “strawman” isn’t the right word here.
This community would encourage Scott to “steelman” these arguments; maybe it would be a good idea to say that currently, Scott is “Tin-Manning” these arguments.
— well, here I am trying to put two different terms into the vernacular in the same comment thread. Life takes you to crazy places I guess. (Either way though, this isn’t much of a strawman, so take that to heart if anything.)
Scott is addressing his own contrived criticisms of rationalists, and not the actual, stronger, criticisms of rationalists. At best, there is some overlap. I feel like straw is an appropriate characterisation.
The most objectionable part of straw manning is the claim that a specific person or group has a certain (poorly thought out) belief, without any solid evidence that this is the case.
I think it has a lot more merit to claim that some people hold a belief and then to address the problems with that belief, where you ask people to check whether they hold those beliefs and consider your criticisms if that is the case.
Ashley: But he claims that he is addressing real criticism coming from real people; namely, tin men. So now the discussion moves to: are those tin men real? Are they numerous? Obviously you could prove that they aren’t real or that they are few in number. But that’s what you gotta do at this point.
I think there needs to be a distinction drawn between “stuff rationalists do” and “stuff most rationalists have discussed and are aware of”. We are flawed human beings and make mistakes, but we also know a lot about how we make mistakes.
The implicit point of criticism is to make known particular problems with the object of criticism that aren’t common knowledge, but a huuuuge share of criticisms discussed are not only unoriginal but should count as “rationalist common knowledge”. I think that’s Scott’s issue.
Now, we can’t really expect every critic to have spent time reading all the self-criticism on LW and elsewhere, but I think the least that could be done when you engage with us is provide us with examples of what you’re talking about.
I did address him directly. My response is on that Twitter thread. But in the same thread, other threads on his Twitter, and various other places, there are also a lot of commenters saying the sorts of things I’m talking about above. Just to pick the clearest example, see if you can find the literal picture of Spock.
I definitely claim partial credit!
https://twitter.com/davidmanheim/status/849957632735674374
Why is one only allowed to address the strongest arguments out there? If there’s weak /but common/ crap out there, must one ignore it?
Because steel-manning?
Isn’t steel-manning about taking a weak argument and addressing its strongest possible form? Not about letting weak arguments look as if they were unanswered.
Sometimes others will think the weak arguments don’t have answers; sometimes you’ll realize a weak argument wasn’t so easily dismissed as it seemed at first.
(Arguing only the meta-point here, right? I don’t know all the arguments Scott may have missed etc.)
Steel-manning is something you do, on your own, to improve your understanding of an idea. Trying to do it in a live argument with somebody else usually comes off frustrating at best and rude at worst, because the frame you think is strongest usually isn’t the frame they do.
@Nornagest that’s really well put
Scott’s response is good enough. He addresses how Rationalists are open to the possibility of being wrong.
Which is to say, the ones who are prominent aren’t good, and the ones who are good aren’t prominent.
If that’s the case, it’s only because people suck at making cogent criticisms. I suspect the real reason “rationalists” are called a religion is probably that they have been known to wear robes and chant ritual songs in an explicit bid to adopt religious-like practices.
I mean, maybe the critics don’t actually *know* this, and maybe they are indeed wrong based on the information they have. But once you do X, you ought to forfeit the right to complain that people are unjustifiably accusing you of doing X.
The critics don’t usually mention this. But if they did it would be a poor criticism, implying that because some rationalists occasionally do amusing rituals (like a religion) they are therefore epistemically bad (like a religion). In general, arguments that X-is-like-a-religion seem to usually be the worst argument in the world unless the person making the argument is really exceptionally careful.
Eh, wouldn’t it be a Gettier criticism? True, and for a particular reason, but if the reason is a bad one then addressing the criticism by explaining the background will do nothing.
Most people accusing rationalism of being a religion are not objecting to anything on the basis of it being epistemically bad. Most non-rationalists do not care about anything being epistemically bad, unless it causes clear real-world problems.
The “rationalism is a religion” objection might be rephrased “rationalism centers around a strong and cohesive subculture, therefore any weird claims it makes are probably based on quirky cultural customs rather than universal reason”.
I don’t know whether this is true, but I completely understand why people would use that rule of thumb, especially when rationalism-associated ideas make a lot of broad universal claims and counterintuitive moral demands.
I don’t know if you’re kidding about the robes and chants, but if you’re not I’d love to learn more about them.
It may be unfair to pull in the LessWrong days as evidence of rationalism as religion, but screw it, living under the same roof to develop stronger bonds in the shared belief, calling the community’s exalted text The Sequences, having an obsession with a vanishingly unlikely AI apocalypse, and (this is by far the pettiest objection) following a guy named Eliezer Yudkowsky, which is as cult-leader as names get, all suggest rationalism as a religion.
Some of the above is serious and some is not. If Scott wants to strike down a strawman, by God I’ll give him one. But if he’s mad about the perception of the rationalist community as a religion by drive-by commentators, the rationalists have made it incredibly easy to distort themselves.
Assumptions in this critique that I think are unfounded (or at least need to have their justifications spelled out):
– living under the same roof to develop stronger bonds in the shared belief (I don’t think any of the ~10 group houses I know of exist for this reason?)
– calling the community’s exalted text The Sequences (those words are yours, and ‘The Sequences’ is a completely neutral title based on the fact that there were several sequences of posts that had a chronological order and were grouped by theme or purpose)
– an obsession with a vanishingly unlikely AI apocalypse (citation needed—plenty of impartial, intelligent people have publicly weighed in on the side of “this is worth considering,” and put significant sound reasoning into the spotlight)
… the bit about rationalists having made it easy to distort themselves seems true, but in addition, comments like the one above add distortions that they had nothing to do with paving the way for, and which people just … like to invent, I guess?
This might count as a point in the other direction. “Pentateuch” just means “five scrolls”, and “Bible” derives from a word meaning “books”.
The Sequences (all caps) and the way it’s used — “read The Sequences” — does _not_ pass as neutral language to outsiders, fyi.
This prompts an interesting question, because this story sounds so familiar–is there some reason in human nature that turns wildly disparate versions of “rationalism” into cults (or is it just that human nature turns everything into cults?)
Progressive technocrats, Ayn Rand, and Yudkowsky all promoted something they call “rationalism,” all defined primarily by rejecting spiritual views of human nature (the progressives were pure materialists; Rand’s version of “rationalism” claims that all principles of philosophy can be reached through deductive reasoning; and Yudkowsky reverses Rand by promoting more or less pure empiricism, but none had much use for spirituality or tradition, except that Yudkowsky would to some extent recognize the Chesterton’s fence concept). And all of them look to outsiders a lot like religions, if not cults of personality (Rand was probably the worst here).
Or maybe it’s just that totalizing philosophies end up looking a lot like religions when you try to practice them? You don’t see a lot of Burkean conservative traditionalists becoming cult members.
Rationalists should believe that religion is an attractor, since they need a way of explaining its prevalence without its being true. Rationalists should not casually regard themselves as exempt. Rationalists should notice that they may even have a particular susceptibility, the one Ilya mentioned, where rationalists tend to be lacking in social contact, and may therefore be tempted to start exchanging adherence for acceptance.
Oh… so it’s the lack of spirituality that always makes the outsiders compare us to an organized religion!
😀
Okay, trying to steelman this, because I feel there is actually a good point…
How do typical people treat religions? With lukewarm respect. (Even those who don’t like it usually say something like: “I believe in god, but not in the church” or “religion is a great idea, but unfortunately many spiritual and political leaders abuse it for their selfish purposes”.)
How do cult leaders and cult members treat (the other) religions? They consider it important to explain why they are false. As a pretext for introducing their own solution.
That is probably a factor in why atheists and similar groups, ironically, often send cultish vibes. Because dismissing (existing) religions is exactly what one would do when trying to recruit people into their own.
“When men stop believing in God, they don’t believe in nothing, they believe in anything.”
Viliam,
I don’t think that’s it–skeptic types (Penn Jillette, the Amazing Randi, Richard Feynman) and even vocal anti-religion atheists (Christopher Hitchens, Sam Harris, Bill Maher) don’t send cultish vibes, but they all stop at “unfalsifiable beliefs are bad and shouldn’t be followed.” None of them follow up “dismiss all religions” with “and adopt my totalizing philosophy instead;” their promoted value systems are pretty much all “just be nice to each other, OK?”
There’s some point where a movement like effective altruism goes from “hey, maybe we should make sure our charity gets the most bang for its buck” to “follow these rules that read like something out of Leviticus to be a good person.” I’m not sure where that shift happens, or what causes it. Haidt’s idea that humans are coded for hive behavior seems as good an explanation for this phenomenon as any.
“Oh… so it’s the lack of spirituality that always makes the outsiders compare us to an organized religion!”
No, Viliam, what makes people compare you to organized religion is that Eliezer wants to be the pope, and you (the community) want to let him be the pope. If both parties want it to happen, it’s going to happen.
—
Have you ever noticed that Scott doesn’t want to be the pope? I hold Scott in much higher esteem than Eliezer for many reasons, but this is a big one.
I think this would have been much more accurate circa 2010-12.
Which part changed? When? Why?
Just to make sure… you mean the guy who doesn’t even post on LW anymore, right? Yeah, that behavior reminds me of Pope Benedict XVI.
What does posting or not posting on LW have to do with anything? It’s about the person, and the social dynamics, not about the specific medium.
The straw that broke the camel’s back for me was him openly trolling for sex on Facebook (since deleted). I am sure this type of iffy stuff is now done informally via the social network in the Bay Area.
—
I mean, I don’t really want to spend time digging up a long trail of disappointing shit EY said/did over the years. It’s also kind of a discussion-quality-lowering exercise for SSC. But if you want, we can go through it together.
—
“The pope” is a figure of speech. I mean a guru. A guy who writes epistles to the NYC community. A guy who officiates marriages. A guy who writes parables. etc. etc. etc.
What the heck kind of look is that?
—
As I mentioned before in another context, I am into virtue ethics (learning a model of a person), not into explicitly drawing lines in the sand (looking for rule/norm violations). The advantage of that view is you quickly get a read on a person even if no specific thing they did on its own was a particularly strong signal.
Exactly. I’ve never seen the pope post on LW.
The short version is “the community splintered”. Before 2013 or so, rationalism was basically synonymous with the Sequences and the Less Wrong blog, but then a bunch of stuff happened over the next couple of years: Eliezer largely stopped producing new rationality content (this overlaps with HPMoR, but there was a long hiatus even on that) and committed a bunch of embarrassing administrative and social gaffes; most of the other major LW contributors left to start their own blogs (like this one!) or to focus on work for CFAR or SingInst; the Bay Area meetups hit their Eternal September moments (Berkeley first, South Bay later). Rationalist Tumblr and Facebook became significant.
There wasn’t a sole turning point, but in 2012 it would still have been meaningful to talk about a single rationality community with a more-or-less unified agenda. By 2015, on the other hand, it was more of a scene or a movement: a collection of social circles and small institutions with different priorities, that just all happened to be pointed in roughly the same direction. And as far as I can tell, Rationalist Facebook and some personal social circles are the only ones that Eliezer still owns.
So we can agree that rationalism was a religion circa 2010-2012 😛
The Original Mr X:
That meme might well be true on a technicality, but it’s still monstrously arrogant and I wish it would go away. It boils down to ‘When men stop believing in [implausible and far-fetched belief A] they believe in [implausible and far-fetched belief B, or C, or…]’.
That is, the sanity waterline might, sadly, be fixed; people simply need to gravitate towards shared belief in almost-certainly-factually-mistaken claims in order to have nice things, but phrasing it like that looks like it is trying to smuggle in the assumption that ‘God’ is less unreasonable than ‘whatever people come to believe as an alternative’ – without doing any of the work of demonstrating that assumption.
I’m pretty sure the quote is from Chesterton, who did a good deal of the work of demonstrating that Christianity was not an implausible and far-fetched belief, whether or not successfully. You are taking one sentence and complaining that it doesn’t, by itself, do the entire job of justifying the author’s position.
I don’t know if I’d ever have called it a religion. I do think it had a lot more religion-y flavor then than it does now.
I suggested this was a bad idea in 2011-ish… it turned out I was the problem.
If rationalists recite litanies, I desire to believe that rationalists recite litanies. If rationalists do not recite litanies, I desire to believe that rationalists do not recite litanies.
Soooo, I remember a moment a couple of years ago that kinda crystallized the “This feels creepily like a cult” thing to me. There was a note on Scott’s Tumblr about how Ozy was very angry about some criticism of the rationalist movement, and so decided to go cam (i.e., do sexual stuff on camera for money) in order to donate to the rationalist movement. This was while the two of them were dating, and Scott posted this approvingly on his Tumblr.
And I can just barely see from an inside view how this might seem reasonable. My immediate reaction, though, was along the lines of “Holy crap, this movement has got people pimping themselves out for money to donate to it, and it’s brainwashed not just them but their romantic partners into thinking this is reasonable.” I’ve walked back from that a little bit, but I still think it was creepy as hell.
Then there was the bit about a year ago about the young lady who was basically the open mistress of some bigshot rationalist guy, got pregnant, and despite massive pressure from said guy and all his other bigshot rationalist friends to abort the baby, kept it and found herself somewhat ostracized.
All this sexual stuff sure looks a lot like powerful folks systematically taking advantage of less powerful folks to an outsider. Which is like a huge red flag for “STAY AWAY! THIS IS A CULT!”
If I hadn’t been reading this blog for a year now, I would probably be considering very strongly whether I should delete the bookmark.
RE: your second link, I’d forgotten how she actually came on here and commented about how guilty she felt, like she’d committed a serious offense against her ex-lover by not aborting their child.
Any community that encourages that sort of thinking, and regards the man as having acted properly in both requiring the initial promise of abortion and refusing child support later, deserves all the RED ALERT THIS IS A CULT RED ALERT it gets.
@Loquat
Oh, if you did any further reading on it, it’s worse than that. Can’t find the links any more (I think the blogs where I read it at the time got made private/comments got deleted from the thread) but he drove her to the abortion clinic with several other “friends” as “moral support” and then drove her to tears and collapse inside the clinic when she still refused to abort the baby. And, as you say, she was still commenting about how guilty she felt about not doing what everyone wanted her to do.
I think a community that can happily supply several people willing to try and browbeat a terrified crying woman into an abortion is one that has some problems.
Some critical parts of the story are missing… such as the fact that the biological father of the baby already had a wife (or at least a primary partner), and the lady was aware of that, and they had an agreement that this was going to be sex without making babies.
It’s still a bad story, I am not denying that. 🙁
Horrifying. Didn’t know about this. Makes me less likely to interact with the rationalist community in meatspace.
@Viliam
To a lot of non-Rationalists, that makes him look even worse. The guy had a wife and kids, but felt the need to risk it all by screwing around on the side with a mistress.
It also makes Rationalism look even worse if this guy’s actions are totally okay under prevailing Rationalist sexual ethics, and this is exactly the kind of situation that leads to arguments like this one posted in the Determining Consent comments the other day, that “everyone involved consented to it” is actually not sufficient to prove an act was ethical.
Wow, that thread is horrible. If your “decision theory” tells you to abandon your new born child, you deserve a good smacking and a lifetime ban on reasoning anything from first principles.
What? Okay, this thread can maybe use a bit less grandstanding and moralizing about other people’s private lives, okay??
—
That said, it is my personal impression that a bunch of people come to rationality because they are not mentally healthy and thus excluded from other social avenues. I’m fairly sure the resulting increase in mental issues should not be counted against the community; furthermore, having children in the current legal climate is a potentially horribly toxic topic, and if you think I can’t make a good case the other way, while knowing nothing about the particulars in this case, you have a lack of imagination that should immediately disqualify you from criticizing other people’s lives by itself.
As a random aside, this reminds me a lot of the Insane Clown Posse’s defense of Juggalo culture…
Yes, this Weird Sex Stuff with Poly-relationships Stuff is one of the reasons I want to keep a certain distance from the Rationalist sphere with a big R, and consider myself a member of the “I write and read comments on some blogs” sphere.
It seems to play out exactly like all the stereotypical “free love” experiments at every point in history where they have been prominent. (Polyamory is not a new idea, folks.) In practice, it seems to fuel exactly the kind of power dynamics that are the most damning evidence of cult-like behavior.
edit: Reading it now, that particular post of Vaniver’s and the following comment thread scream “outrageous”. I’m feeling slightly ill reading posters’ lines about feeling “incredible guilt and suffering” because of not aborting a baby. This is much more damning than any joke about Roko’s basilisk.
You made this up. The linked post just says “donate”. (Knowing Ozy, the donation was probably to AMF or something like that, but you don’t have to know Ozy to not make things up.)
I see that Greg Egan’s maxim, “it all adds up to normality,” has wider application than recondite philosophy.
“a bit less grandstanding and moralizing about other people’s private lives okay??”
I am staying out of the victim’s life here (or of this thread generally). I wish her and her child the best in life.
But let’s get one thing straight. Those people who drove her to the hospital and caused her to break down and cry? They are _abusers_. That is abusive behavior.
Please do not run interference for abusive behavior.
@FeepingCreature:
Look, I didn’t comment on those two incidents at the time because, as you say, they’re about other people’s personal lives and not my business. (Albeit once you post personal things on your public blogs you have a limited scope to complain about people discussing your personal life.)
This post, however, is all about whining about how unfairly the rationalist movement is treated, and how sorely it is misjudged. Since one of the major public concerns is “Wow, these guys are a little creepily cult-like”, I figured some pointed links to things that struck me as particularly cultish would be helpful in crystallizing that for others.
For the record, I don’t actually think the rationality movement is a cult. I think it had a reasonable chance of becoming one early on, but it didn’t turn out that way. (In large part because it seems apparent that Eliezer rather desperately wanted to be a cult leader.) What I do think is true now is that at best, rationalists don’t care about looking like a cult and quite possibly are deliberately appropriating various cultish things.
@Nick T:
If my reading of that post was wrong and Ozy was donating to general charitable causes rather than to the rationalist movement then I wholeheartedly apologize. I did not deliberately make anything up, but I may well have misread it. I hope you can see that my (potential) misreading is a reasonable one to make for someone not broadly affiliated with the movement.
If true, that makes me feel somewhat less creeped out by it, but not to the level of not being creeped out at all.
Maybe I just have a really bad model of people and sexual relationships, but imagine that in a completely unrelated debate, someone tells you the following:
“I know a guy who has a wife, and also a mistress. Yesterday, the mistress told him she was pregnant by him…”
Now, imagine that this is all you know. It’s just a random American guy. And his mistress got pregnant. And he has a wife. How would you expect this story to continue? Which endings would make you feel “yeah, I totally expected this”, and which endings would make you feel “wow, this is totally shocking; I can’t imagine anyone in my neighborhood doing that”?
I can’t speak for you, and I admit I am not an expert on relationships, but I would consider the following three outcomes, in random order, to be all within the “normal” range (i.e. this is what I would expect people around me to generally do in such a situation, without necessarily approving of any of that behavior) —
a) the guy tells the mistress to get an abortion;
b) the guy leaves his wife, and marries the mistress;
c) the guy makes a deal with the mistress that she will raise the baby, and he will secretly support her financially.
Somewhat less likely, but still plausible:
d) the guy tells his wife, trying to keep both women in his life, but the wife divorces him;
e) the guy kills the mistress — okay, maybe I am watching detective stories too much, and this option shouldn’t really make it into the top 5.
My point is, some of these options create a better impression of the guy than others, but none of them makes me go “that’s unpossible… there must be some hidden reason for all this weird behavior… the guy must be a secret agent, or a cult member, or an alien from Omicron Persei 8”. They all seem to me like everyday human behavior.
Now let’s turn it around. Suppose I tell you with 100% certainty that the guy is a cult member, and again, I leave the story unfinished. Which endings would you consider likely, and which unlikely? I may be obtuse here, but again, all five endings mentioned above seem like “yeah, that could happen”. (Actually, for a cult leader, I would also give a decent probability to an outcome where he keeps both women successfully, because he convinces his wife that god told him to do this.)
Also, correct me if I am wrong, but in America abortion is considered a more essential human right than having food or education (at least judging by revealed preferences, because many people are stupid or starving, but mere accusations that someone hypothetically could make abortions less convenient are used as a weapon during elections), so let’s not act shocked that someone actually considered that option. Imagine a gender-reversed version: a woman has a husband, and a boyfriend. One day she finds out her boyfriend got her pregnant. Despite her boyfriend crying and begging her not to do it, she goes and gets an abortion. The End. Such a non-story, right?
tl;dr — what I see here is normal human behavior (note: I didn’t say “nice”); without being primed, I think most people wouldn’t feel a need to look for unusual explanations, such as cults
That isn’t parallel because in the original, the mistress is having the abortion, and the man has power over the mistress, not vice versa.
Also, the cultishness of the scenario changes when you add in polygamy.
@Viliam
In what people around here call blue tribe culture (i.e. upper middle / upper class, urban, professional, U.S. coastal, center-left to left) it is considered uncouth, if not outright immoral, to pressure a woman either to get an abortion or not to get one. For the more Christian parts of the country it would certainly be considered immoral to pressure a woman to get an abortion. So for that part you have pretty broad agreement across American cultures.
That said, I agree it isn’t shocking that a man would pressure his mistress to get an abortion, the shocking part is that he would have the support of his social group in doing so.
To expand on what Brad said, it’s totally the social group. One guy on his own keeping a mistress, pressuring her to get an abortion, and then refusing to take any responsibility for the baby is a simple cad with no broader implications. A social group that approves of and defends all of those actions – and if you read the linked original thread you’ll see multiple people arguing that it is immoral to require a man pay child support when he’d explicitly asked his mistress to promise him consequence-free sex, and even one person suggesting that society would be fairer if women could be forced to have abortions in such scenarios – that’s a social group that’s going to raise some eyebrows among outsiders, to say the least.
Any community that encourages that sort of thinking…
… the shocking part is that he would have the support of his social group in doing so.
Look, I totally agree that what happened was terrible and that defending or supporting that kind of thing is just awful. But could we not use that event as the yardstick by which to measure the whole community? To my understanding this was an event involving just a few people; though I don’t know for sure, because although I count myself as a part of the rationality community, I don’t live where these people live and only heard about the whole story around the same time that it blew up and everyone else heard of it, too.
The rationalist community, by this point, is relatively big. Big enough that it’s going to have all kinds of fucked up episodes, because it contains a lot of people and in any community with enough people you’re going to get some pretty fucked up episodes sooner or later.
Scott actually wrote about this before: https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/
(As for the linked thread, it was in an SSC open thread where everyone can comment. Hopefully not much more needs to be said.)
I am simply fascinated to discover that apparently monogamous people never experience unintended pregnancy where the father and the mother disagree about what ought to be done about the fetus, and that no monogamous person has ever done unconscionable things due to being desperate and in an awful situation. How have you achieved this remarkable feat?
Nick T: IIRC, GiveDirectly, which is of course a rationalist front organization intended to bribe poor Africans into reading the Sequences.
caethan: You did misread it, and it seems to me that this sort of misreading can be easily avoided through not using strangers’ Tumblr posts to pearl-clutch about their personal lives.
@Viliam
“Also, correct me if I am wrong, but in America abortion is considered a more essential human right than having food or education”
You’re wrong. Access to food is subsidized at a greater level than all of family planning medicine*. Access to a K-12 education is subsidized entirely at a level which is the typical expectation of the broad populace (not to mention state funding for community colleges and public universities which drastically reduces the price tag).
If any politician on the right or the left dared state we should remove all state funding for SNAP or K-12 education, or even community colleges, they would be kicked out of their party. The most you ever see is proposed means testing and drug testing, or vouchers in the case of K-12. You can be a Republican, and even a Democrat in some states, and be totally against abortion.
* – SNAP alone is a legally obligated entitlement which costs the federal government ~$70 billion / year versus less than $600 million for Planned Parenthood. https://en.wikipedia.org/wiki/Supplemental_Nutrition_Assistance_Program
It’s not just how the participants reacted to the situation. The situation itself was created by the community through mores they were pushing.
Like somebody said upthread, polyamory is not a new idea. If you didn’t even notice that skull, why should we believe you’ve noticed the others?
Jaskologist: What about this situation is caused by polyamory? Monogamous people often have unintended pregnancies. If you are using the term “polyamory” to mean “people having sex when they don’t want to have children with each other”, this is a very unusual usage of the phrase, and your criticisms apply identically to e.g. the average college campus.
“Those skulls were there when we got here”.
Talk about skull-unawareness.
The argument isn’t “Those skulls were there when we got here,” it’s “That other road you want us to take instead has those same skulls.”
IIRC, the case involved a married woman having sex with a married man who was not her husband. It wouldn’t have taken deep wisdom to expect that to turn out badly.
reasoned argumentation: Most people– monogamous or polyamorous– have sex with people when they don’t particularly want to have children. For instance, a married couple may wish to delay children until they are further in their careers, or may wish to have no more children than they currently have. It is my understanding that even monogamous couples are not generally celibate in these situations, and therefore run a risk of one partner wishing to abort the fetus while the other partner wishes to raise it. I suppose one could become Quiverfull, but complaining about rationalists’ skulls as a Quiverfull person seems a bit like tossing stones from a glass house.
Jaskologist: It seems like your prediction is that polyamorous relationships will end poorly (whether or not there is an unintended pregnancy) while monogamous relationships will be happier (whether or not there is an unintended pregnancy). My prediction is that relationships (monogamous or polyamorous) in which there is a pregnancy and one partner wishes to keep the baby and the other wishes to abort it will end poorly. Can you explain why you think the former is more plausible? This seems quite strange to me.
Ozy –
You are literally incapable of even describing the problem being pointed out – all you can do is read what Jaskologist wrote, think “crimethink – must stop considering it” then spit out an entirely different argument to dismiss. This isn’t even failing to see the skull pile – this is having a mental block that requires you to see skull piles as rosebushes.
reasoned argumentation: Yes, I admit I’m very confused! In my defense, no one in this thread appears to have provided any sort of justification, instead saying “well, it would obviously end poorly.” I hope if it is so obvious then it will be easy for you to explain your causal model!
If it helps, I can explain my model of what went wrong here! The failures are as follows: the couple failed to use adequate contraception; Katie did not successfully predict her response to becoming pregnant and made a promise she couldn’t keep and preserve her mental health; the child’s father attempted to coerce Katie into an abortion; the child’s father failed in his ethical duty as a father to play a role in his child’s life. None of these are poly-related.
In fact, “a man arranges with another woman, who is not a party to any kind of relationship with his (other?) wife, to have ‘consequence-free sex’” seems so not-poly-related that I question whether it should be called polyamory at all. At least to my understanding the ‘mainstream’ polyamory movement tries to distinguish itself from old-school exploitative polygamy (and from cheating / open relationships) by only admitting relationships where everyone has more or less equal status.
This is of course orthogonal to whether it reflects well on the rationalist movement in general, or the rationalist movement’s “flavor” of polyamory in particular.
The big problem is not that they disagree or that he was polyamorous. If the father and mother were monogamous, most American communities would certainly expect the father to pay up and support the child. And it would be considered morally despicable for the community to put pressure on the mother to abort whether the father was monogamous or not.
I imagine the reason people are seeing added problems with polyamory is that, because the father wasn’t her husband and she found herself unable to fulfill her original commitment to abort, it shattered her relationships and left her even worse off than you would expect this sort of thing to turn out on average.
Polyamory has upsides, but it also has downsides.
quanta: Katie has deliberately chosen not to pursue child support because she thinks it’s wrong to force him to support a child he did not consent to. She is following her moral beliefs at a significant personal cost. These are Katie’s personal beliefs, and I do not think that the rationalist community as a whole has any consensus on child support.
I do not think a disagreement about an unintended pregnancy is any less likely to shatter a relationship if the relationship is monogamous. Indeed, it seems very beneficial to me that her spouse was a different person than the person who was trying to coerce her into having an abortion. Score one for polyamory!
random: That’s not what polyamory is.
Good thing that skull pile was actually a rosebush!
I am aware that it was Katie’s choice; but that choice was influenced by her choice of polyamory. I think it’s fair to argue that more money for the child = good, and thus it can be beneficial to hold a belief that the father should share in supporting the child even if it was an accident and you promised. The fact that Katie doesn’t hold that belief may be an argument against the law interfering against her will, but I don’t think it’s a good argument against the general principle that fathers have a duty to support their children.
And most people aren’t terribly libertarian about these things and would prefer that people have beliefs they view as more likely to lead to a child who is well provided for.
Unless I misunderstood the original story, she and her spouse separated as well. It’s hard for me to imagine any evidence short of someone’s death (in which case I am very sorry I dragged this around even more here, it’s already somewhat unkind of me) or a god-like view into someone’s else’s past that could convince me that this wasn’t influenced by the pregnancy.
And my personal view was that she lost two relationships instead of one, which is worse. There’s only so much time to spend with people, so I figure you’re either splitting time that would normally go to one spouse across multiple partners, or you’re spending time on additional sexual relationships that would otherwise be spent with platonic friends or alone. But thanks to what you say it now seems to me that how bad this sort of thing feels, and how it… scales? (I guess that’s the word), is really pretty contingent on your own psyche, so I can see how this could go either way. And of course, if you lose only one out of two relationships this may be an upside compared to the monogamous model.
But my other point (and I think most normal people would share my intuition) would be that you are more likely to end up with unbridgeable issues between parties in accidental pregnancies in a polyamorous setting because you can go from having only two parties involved to three. And maybe other people find differently about this, but I find the odds of an acceptable if unpleasant compromise or renegotiation drop very sharply as the number of parties goes from 2 to 3.
@Viliam
Access to food or education is not brought up in elections because it’s relatively uncontroversial. Access to abortion is highly controversial because some people consider allowing it to be a moral evil, while others consider forbidding it to be a moral evil.
It’s like saying that the fact that there’s a lot of Second Amendment law but very little Third Amendment law means that Americans care more about owning guns than not having soldiers in their homes. No, it’s just that agreement on “don’t put soldiers in people’s homes” is so universal that nobody disputes it and the government doesn’t even try, while people have major differences of opinion about guns and the government sometimes does try to ban or restrict them.
@ Ozy:
Unintended pregnancies are easier to handle when it’s your wife getting unexpectedly pregnant than when it’s your mistress.
The polygamous people here who have done awful things are not the woman who became pregnant; they’re the other two, who put pressure on her. They’re not “desperate and in an awful situation”.
And polygamy is relevant because power imbalances are one of the problems people suspect about polygamy in the first place. It isn’t a defense to “polygamy encourages this” to point out that polygamy isn’t a necessary condition.
It’s not just how the participants reacted to the situation. The situation itself was created by the community through mores they were pushing.
Like somebody said upthread, polyamory is not a new idea. If you didn’t even notice that skull, why should we believe you’ve noticed the others?
Polyamory comes with its own set of problems, yes. But so does monogamy.
If you want to conclude that the community is failing terribly by being accepting of polyamory (or for that matter any other set of social norms), it’s not enough to point out a single disaster to which polyamory arguably contributed. That’s like finding a single case in which Western-style mixed markets do worse than Soviet central planning did, and concluding on this basis that any community which doesn’t reject Western-style mixed markets in favor of Soviet-style central planning is failing horribly. You need to do a much more comprehensive analysis of the merits and drawbacks of both.
@quanta
As someone who has firsthand knowledge of this story (e.g. not just what’s online about this) this is actually the exact opposite of the situation; the spouse left for entirely different reasons and the pregnancy, if it did anything, caused the spouse to be more supportive of Katie.
(In addition, when I said someone thought the entire situation “was because of polyamory” I received a chuckle back from the spouse)
I am exceedingly skeptical about how equal that status really is.
I’m old enough to remember when this was called “free love”, and perhaps in hindsight it’s good that I wasn’t old enough to have enjoyed it at the time because I did eventually see the damage it caused then. Mostly to young women, and what I’m hearing now sounds like a really bad flashback.
“Old-school exploitative polygamy” at least had rules to mitigate the damage. The high-status man whose mistress shows up pregnant may quietly ask her to have an abortion (at his expense), but if she says no he doesn’t push it, and he does pay child support or suffers severe legal and social consequences. The mistress’s peers, if no one else, ought to be supportive.
The old rules were based on a sound understanding of how real people are actually wired. People do unpredictably bond with embryos they didn’t plan to create, to the point of seeing at least that one potential abortion as murder. People do unpredictably fall in love with their casual sex partners, to the point of sometimes suicidal despair when they find that, no, the partner still only sees them as a fuck buddy. And, yes, people get jealous, also unpredictably and sometimes lethally. Also, status equality in sexual relationships is impossibly difficult to pin down. Most of us are pretty good at coming up with rules to let people sometimes have sex and make babies while mitigating the harm caused by all of this.
What the free love folks had then, what you all seem to be reinventing now, is not any sort of enlightenment or improvement, but a new set of rules for maximizing the number of orgasms experienced by high-status people based on the assumption that all the very real problems can be willed away by Pure Applied Reason. And now, when someone points out that no, they’re still being hurt even though everybody is playing by the ingroup’s rules, a policy of trying to shame them into silence and sending them off to the closest thing your society has to a convent.
Bay Area Rationalist Polyamory isn’t big enough or old enough to have amassed the pile of skulls that Free Love did in its day, but it’s more than just one woman. And I’m seeing the same arrogant unwillingness to acknowledge the harm now as there was then.
What the free love folks had then, what you all seem to be reinventing now, is not any sort of enlightenment or improvement, but a new set of rules for maximizing the number of orgasms experienced by high-status people based on the assumption that all the very real problems can be willed away by Pure Applied Reason. And now, when someone points out that no, they’re still being hurt even though everybody is playing by the ingroup’s rules, a policy of trying to shame them into silence and sending them off to the closest thing your society has to a convent.
? Everyone in this discussion that I’ve seen has very clearly acknowledged that there was clear harm done and that what happened was bad. Your comment doesn’t seem to describe the discussion so far at all.
That’s actually also the general feeling that I get from reading many of these comments: that they seem to be describing an entirely different reality from the one that I, or anyone that I know who does polyamory, actually lives in. These always make it sound like there’s this top cabal of (mostly if not entirely male) “high-status people” who go around having sex with everyone, leaving everyone lower-status out and used.
Whereas my experience is much closer to Scott’s: that polyamory is so unremarkable as to be boring. There are just totally ordinary people who happen to have a few more relationships going on at once than usual. This experience is echoed by e.g. a couple I know, who after observing their polyamorous friends for several years figured that “well, if poly is this ordinary then we guess that we could do it as well” and opened their relationship, with no bad results that I’d have heard of.
As for the part about “high-status men and their mistresses”, the ordinary situation is that it’s the women who have more partners than men do. This is actually a relatively well-known problem in poly circles: that if you’re a man, opening your relationship may suddenly mean that your girlfriend is getting into a lot of relationships while you aren’t. (or as Ferrett charmingly put it: So these dudes open up their relationship, expecting to be drowned in sex, and then are astonished when they’re left dry on a beach and their girlfriend is out swimming in seas of strange dick.)
And then there’s the other side of this, which is that this can be great for low-status men. I’m saying this because I spent a long time being one (maybe still am? dunno), and up to age 28 or so, all of my romantic relationships had been ones where the woman already had a boyfriend/husband and dating her was only an option because of polyamory. In other words, if poly wasn’t a thing, my first relationship would have come about ten years later than it actually did. And while I admit that it wasn’t always so great to always be the secondary, it was still a hell of a lot better than not having any relationship at all.
My interpretation of this is that poly is great for low-status people because poly makes dating them feel less risky to people who are already in committed relationships. If you were in a monogamous culture where you could only have one partner, you’d have much more of an incentive to make sure that they were as good as possible, because you can only have one. Whereas with poly, if you’re a woman (or man) who’s already in a relationship, why not date someone low-status if they’re otherwise nice?
…assuming that you want to look at relationships and dating through a status lens in the first place. While I agree that status definitely does affect these things a lot, there’s also a lot that it doesn’t, and it seems like a common mistake to have a too status-centric view of relationships. Many relationships basically form because two people feel good in each other’s presence – some of that good feeling may come from status issues, but there are also other sources. Depending on the personalities involved, status differences may even inhibit those feelings of goodness, if the people would prefer to feel like they were on equal terms.
(Incidentally, I always feel a little bit weird reading these comments that characterize polyamorists as naive people who don’t really understand human nature and think everything can be solved by Pure Reason, and which then argue for this by making up simplistic models of human relationships in which everything seems to reduce to status…)
Okay, I’m very confused. Either I’ve massively misunderstood this entire metaphor, or you’re saying that it’s a bad thing that this woman’s spouse did not try to coerce her into having an abortion. In which case, with all due respect, you’re basically saying that the path with all the rosebushes is actually full of skull piles, and we should instead take a safe detour by ascending Skull Mountain.
You guys have robes? I knew about the T-shirts and Solstice meetings, but real actual robes?
Now I’m ~~jealous~~ envious! 🙂
We have bathrobes, does that count?
I have cloaks. Do those outrank robes?
More charitably, I would say it’s more likely that people are conflating rationalists-as-“people who try to advance the art/philosophy of thinking correctly” with Rationalists-as-“members of the technocratic subculture that think Bayes’ Theorem is great”. After that it’s just a matter of pattern matching.
And when you consider that this pattern matching turns up:
A “prophet/messiah” (Eliezer Yudkowsky), a “bible” (The Sequences), “Burial customs” (Cryonics), a “god” (superintelligent FAI), an “afterlife” (the Singularity), and a “holy land” (the Bay Area)
It’s really not surprising at all that the only ideas that most people can think of that match this pattern are either “religion” or “cult”.
My interpretation here is that Scott is saying “Among the many criticisms we receive are a frustrating number that are actually straw. It’d be cool if we could get fewer of those, and more of the good ones.” Yeah, he didn’t emphasize the good ones in this particular post (though he did go out of his way to point out that there’s no assumption of rationalists having all the right answers, and that there are almost certainly new mistakes being made), but I think it’s okay for a single post to have a single ask.
In this case, that ask was “Please, more of the criticisms that actually land, and less of the noise that drowns out the useful critical signal.”
Y’know, if I’m going to put words in his mouth, and all. I could be wrong.
Edit: By the way, I’m genuinely curious about your response to my reading, Ashley, if you have the time and are willing to spare it.
Well, that’s pretty close to being obviously wrong, simply because it is hyperbolic.
But given that EY spawned a certain kind of modern rationalism, it’s also wrong in another way.
#vaguebooking
No, it’s a weak man argument.
OK, this is a more accurate term.
https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/
The weak man argument would be “these bad arguments against rationality are wrong; therefore rationality is right.” I don’t think that’s what Scott is saying. The fact that he makes the same point about economics and psychology should be telling–does anyone seriously believe that Scott thinks current economists and psychologists have it all figured out?
This just seems to be “People, please stop making terrible and outdated arguments so we can talk about much more interesting criticisms?”
The weak man criticism is “these bad arguments against rationality are wrong; therefore the arguments against rationality are wrong”. You don’t actually need a “therefore rationality is right” in there; the weakman is being used to attack one’s critics.
That doesn’t seem right to me. Addressing a weak argument only means the weak argument is wrong, and if people are making it, it should be addressed.
The condition suggested by bbeck310 — that it must be stated or implied that the weak argument(s) are the only ones — is reasonable; if someone explicitly acknowledges that the weak argument isn’t the best one out there, as Scott has here, the accusation of weakmanning seems obviously wrong.
Maybe the rationality movement could take a lead…
Thanks for posting this. I’ve been reading the back and forth posts, and had been thinking about the Spock analogy as a good frame. But now I see the (embarrassingly obvious in retrospect, of course) point that the rationalist people have thought about it 10x or 50x more than I had.
There’s a softer version of this where, as Caplan argued, it’s about an aesthetic (along with certain kinds of personality types) that draws people to the movement. But of course no doubt there are many posts on this as well. So let me dig into that a bit more.
I’m excited to read the soon-to-be-printed articles from the people that say “the name rationalism implies that rationalists think they’re perfectly rational” when they find out there’s a charity called Cops For Cancer or that Microsoft “””Windows””” is actually an operating system and not a literal window.
Glad you’ve come around on Libya! Maybe add that one to your short list of Mistakes? (Unless I missed a follow-up piece on your old Livejournal.)
But this was pleasantly encouraging. Thanks as always.
He wrote a thing somewhere about updating against interventionism in general because Libya seemed like a good idea going in and a mistake in hindsight — I was actually wishing today I knew where he’d written that so I could post it.
ETA: Aha, Google likes me today
Is Libya now a universally acknowledged mistake?
From what I know about it currently, it seems fractured, but mostly peaceful, kind of like Somalia.
“Mostly peaceful” being in comparison to Syria. Since the rebels in the actual Libyan Civil War were able to depose Gaddafi so quickly, is it reasonable to assume they would probably have been able to carry out a lengthy campaign anyway without US intervention, just on a longer timescale with more bloodshed for an equally dismal final result?
I think “this country is currently better than Syria” is a pretty appalling standard to use. At a minimum, I think the standard to judge military interventionism should be “this country is not orders of magnitude worse than pre-intervention days”. By which standard, every single country that we know the US has successfully intervened in over the past few decades fails (I guess the last case where this is not true would be the Serbian intervention, though I’m sure many would disagree with me about that example).
I’m under no illusions about, e.g., Assad, Hussein, Ghaddafi, the Taliban, etc: they are/were monsters. But we’ve done the quite amazing trick of making things so much worse in their countries that it is destabilizing the entire world order. Interventionism is not looking good.
I think a better standard would be “the country is not worse than it would be if the intervention had not happened”. Of course that is not something that can be meaningfully assessed.
That is a better standard, and I don’t think it’s impossible to assess. It’s not possible to measure as cleanly as my previous standard (which obviously isn’t perfectly clean either), but someone with a decent knowledge of the situation in a given region can make informed arguments for what might have happened without intervention. And to the extent that I’ve heard those people who I trust, US intervention fails according to that standard, almost universally.
The sole exceptions over the past century that I can think of: WWII and Korea (which are obviously big ones), and… uh.. Serbia I guess?
I don’t want to argue for blanket anti-interventionism. Aside from the examples I just gave, I think the Tanzanian invasion of Uganda to depose Idi Amin and the Vietnamese invasion of Cambodia to depose the Khmer Rouge were entirely justified and made the world a substantially better place. But US interventionism has a very, very bad history.
(I suppose things like placing massive numbers of troops in friendly countries is a kind of interventionism I mostly approve of, but it’s not what we usually mean.)
>I think “this country is currently better than Syria” is a pretty appalling standard to use.
Why? As far as I can see that’s the most likely alternative. It’s another mostly Arab Middle Eastern country that entered civil war in an attempt to depose its dictator at about the same time. Seems like a pretty decent point of comparison. The Syrian civil war has lasted many years; Gaddafi was deposed in 8 months. Without US intervention, this presumably would have taken longer, and that’s a bad thing? The only way intervention is worse is if otherwise Gaddafi wins, but I find that kind of unlikely given the brevity of the war.
As for other recent interventions being bad, I think Kuwait is pretty glad it isn’t Iraqi right now. There are also cases where noninterventionism looks to have led to very bad outcomes for those involved; see the Rwandan Genocide.
Syria is possibly the single worst country to live in on earth right now; if not the worst, it’s certainly in contention. Saying “our intervention led to this country not being the single worst place on earth” is kind of a low bar, I would have thought.
Libya is currently far worse than it was under Gaddafi. Iraq is far worse than it was under Hussein. Syria is far worse than it was pre-civil war. Afghanistan, or at least large chunks of it, is as bad and probably worse than it was under the Taliban. These specific countries are literally destabilizing the entire world right now, and I think that American interventionism in each of them bears a great deal of the responsibility for this.
I’m not an anti-interventionist by any means. As you say, Rwanda is an example of a situation where there should have been a strong military operation, and I gave another few examples above. But by and large, the US and other western powers seem to be supremely incompetent at it.
@Enkidum:
“Libya under Gaddafi” maps to “Syria under Assad”.
Syria is still under Assad.
What we don’t have direct access to is the counter-factual where the civil war continues.
Now, a compelling argument can be made that the civil war would have been brutally efficient and ended quickly. But we don’t know that, and I haven’t seen anyone really try and fisk the idea that the Libyan civil war would have ended very quickly (although I think that was probably likely).
I’m saying that both the Syrian and Libyan civil wars are the result of Western intervention. Said intervention is clearly not the sole cause of these wars, but I think it is pretty clear that they would have gone very differently, or not started at all, without our meddling. In both cases, the end result is worse than the start.
The most likely alternative was that Qaddafi won the civil war he was about to win, and Libya ends up looking like pre-civil war Syria, which frankly, is not all that different from pre-civil war Libya.
It was extremely likely. He had penned up most of the people opposed to him in Benghazi and was about to invade the city to root them out. At the time of the intervention, the pro-intervention people stressed the need to intervene quickly, before it was all over.
@enkidum
Syria is the worst place on earth because of a drawn-out civil war. If US intervention in Libya only served to shorten the civil war, that’s a Good Thing. I’m trying not to treat American interventionism as a single discrete thing. Let’s try to take each case on its merits, temporarily ignoring what the American government did previously. Iraq wasn’t in a state of rebellion, and I think everyone here’s in agreement that it was a mistake. Egypt also had a rebellion and deposed their dictator all by themselves. Would you have supported US intervention to keep Mubarak in power?
@Cassander
I don’t know a lot of the nitty gritty details, which is why I was asking. I kind of had a low prior for Qaddafi easily winning the war without US help, since he was deposed relatively quickly and it’s hard to believe some airstrikes could topple him so easily when historically the US has struggled hard to take out regimes through traditional military means. Libya just seemed like a nudge through a terrible transition state that quickened a process that was going to happen anyway. But I guess there were some pretty crucial decision points? In that case I admit that intervention was a bad idea, at least in execution.
@Nelshoy
Agreed about treating each intervention at least somewhat distinctly. But I don’t think we can (or should) treat, say, every intervention in the Middle East since WWII as independent – they’re part of an ongoing and largely disastrously misguided policy that really could end up toppling the Western world in the long run.
I don’t know much about the tactical situation vis-a-vis Gaddafi winning or losing without airstrikes, but I do know that life in Libya is far worse than it was under him, and I strongly suspect that, as cassander says, the outcome would have been much better for the people of Libya without intervention.
I definitely would not have supported intervention to keep Mubarak in power. I might have supported intervention to keep Morsi in power, but honestly I think it was such a clusterfuck that I don’t know that there’s anything we could have done at that point. I do think that our decades of support for Mubarak have totally screwed over Egypt (cf virtually all dictators and their countries in the Middle East).
Somewhat tangentially…
I think that after 9/11 there was a grand historical moment that could have been seized by a competent American government. There was massive support internationally and within Afghanistan for a large-scale occupation and rebuilding of the country. With the right kind of rhetoric, we could have gone in with an explicit decades-long commitment and a Marshall Plan of sorts, and, critically, not invaded Iraq. But instead we wasted our time burning poppy fields, paying off warlords, pissing off every major player in the region, and essentially ignoring the needs of the actual people. Bunch of goddam amateurs.
we could have gone in with an explicit decades-long commitment
Problem right there. The American administration and public weren’t in any mood for that kind of long-term commitment; it was “go in fast, hit ’em hard, wipe out the bad guys (just like the movies about how we won the Second World War)”.
The idea that no, you’re here for the next thirty-forty years overseeing a complete rebuilding from the ground up? Nobody wanted to commit to that kind of money or manpower or, to be frank, occupation (a lot of that is based on “but we’re not a colonial power, we’re the plucky rebel underdogs who beat the big colonial power” image in popular history). Also learn from history! What do you think Britain and Russia were doing in Afghanistan playing The Great Game all that time and getting not very far in the end?
Back when the liberation of Iraq was being pushed forward, I was blue in the face posting everywhere that this would not be a Second Vietnam (as a lot of gloomy prognostication was forecasting), it would be America’s Ulster because you go in like that, you have to be prepared for the long haul or else you leave it worse than you found it. Fast and cheap solutions only work in the movies.
US intervention in Libya only served to lengthen the civil war, by about five years and counting. We’ve been through this before. The war was almost over, at what now looks like a laughably small death toll, when France and the US decided to intervene.
Our intervention only served to make sure the Guy Everybody Hates, didn’t win. Yay us.
@Deiseach:
The government certainly wasn’t in that kind of a mood, because it was composed of fools. A decent statesman could have made the argument, I think. But they are/were in vanishingly short supply.
I am probably (definitely) being overly naive here, but I think there were differences between this possibility and the Great Game, namely the support of a large number of the populace. But you’re probably right, in which case this is further grist for the anti-interventionist mill.
@ John Schilling
Technically, US intervention has prolonged the Korean civil war for 60+ years, but I don’t see anyone complaining about that. Libya is divided but there isn’t a lot of active fighting going on. I’m definitely of the opinion Qaddafi > current situation > active civil war a la Syria.
@ enkidum
It just seems like a lot of important American foreign policy questions have no good answers. When the US decides on a lesser of two evils, there’s still evil left over, but now it’s America’s fault. I think most interventions are probably wrongheaded, but think some, like South Korea, have been pretty unequivocally good. I just can’t stand this attitude where America supports a bad dictator and we’re helping him oppress people, America removes a bad dictator and we’re destabilizing the region. Why are you so convinced that supporting Mubarak would have been a bad idea? Did you have enough information to be sure it wouldn’t dissolve into a chaotic mess afterwards that leaves most people worse off? I sure didn’t.
There’s a very obvious third option that you’re not bringing up, namely neither supporting nor removing the dictator. This is, in general, the right choice IMHO. I’d say that intervention is only justified in the presence of immediate peril to the intervenor, or massive human rights violations in the country that can clearly be made better by intervention. There are very few cases where this is the case.
We did support Mubarak, to the tune of billions of dollars, right up until 2012. We are now about to start supporting Sisi in the same way. Both work in the way that a lid on a pressure cooker works. But at some point it’s going to blow up (and did).
@nelshoy
Most US interventions aren’t against countries with active civil wars. When we have intervened in them, as in Libya and Afghanistan, we’ve been very effective at toppling regimes. And John Schilling is right: the U.S. intervention extended the war, it didn’t shorten it.
@Enkidum
If you’re going to spend a ton of money on a huge rebuilding effort, Iraq was a far better target for that effort than Afghanistan was. We know this because we did eventually launch huge reconstruction efforts in both places, spent remarkably similar amounts of money in both, and achieved far more in Iraq than Afghanistan.
Gaddafi was winning by the time the NFZ was imposed. He’d have likely wrapped things up in a few more months.
The rebels made a strong initial showing, but most of it dissolved by the time NATO airpower came through; by May 2, the start of the NATO intervention, they’d lost Misrata and been pushed back to the suburbs of Benghazi. It’s not impossible that they could have persisted as a guerrilla force, but in terms of conventional warfare I’d definitely have bet against them.
how about:
“aspirators”
*quiet applause*
Rational-inators
What about the already existing word “aspirant”?
also pretty good
possibly better.
“aspirant”?
Too easily perverted into “aspie-rant”, aka https://archive.org/stream/IndustrialSocietyAndItsFuture-TheUnabombersManifesto/IndustrialSocietyAndItsFuture-theUnabombersManifesto_djvu.txt
😛
Asp-rat. It’s a special kind of dog-octopus.
“aspirators”
Too easily perverted into “aspie-raptors”, aka http://lesswrong.com/lw/78e/antisocial_personality_traits_predict_utilitarian/
On a minor and related note, does rationalist opinion consider HPMOR to be amongst the list of mistakes? Because there is plenty of bad writing in that work which makes rationalism look terrible. Not exactly the most sophisticated thing to attack, but it has likely reached a wide audience by now.
I enjoyed reading it. I think it may have been a mistake in the sense that now anyone who dislikes anything associated with rationality says “Oh, you’re the group that’s entirely about writing Harry Potter fanfics and thinks it’s the most important thing, right?”. But it also attracted a lot of neat people, so maybe it was worth it. From my perspective in an existing movement, I’m not going to criticize the steps needed to make it grow.
And from a different perspective, screw anybody who wants to dictate what kind of fiction people can or can’t write for PR reasons.
I enjoyed it too when I was younger. My primary point about it is the irony that the so-called rational decisions of its characters tend to be irrational, and the work generally having enough plot holes and sexism to make the rationalist movement look bad.
I don’t dispute that any analysis which draws conclusions about the movement from HPMOR is an unfair one, though I do wonder whether the flaws of the work should be said to reflect badly on the author himself.
I agree that there is absolutely nothing wrong with writing fan-fiction for PR reasons though, as long as it’s actually good fiction in the first place.
So, HPMOR is not a perfect story. There has been extensive criticism of HPMOR from within the rationality community, as can be seen here:
https://www.reddit.com/r/HPMOR/comments/3096lk/spoilers_all_a_critical_review_of_hpmor/
I include myself as someone who thoroughly enjoyed and continues to enjoy the story, and can still find flaws in it (the top comment there is mine).
But of the criticisms of the story, “plot holes, irrational actions and sexism” don’t seem to be justified ones, to me. Irrational actions are usually called out within the text, because the characters aren’t perfect and are allowed to make mistakes (indeed, if they didn’t that would be an even bigger issue). Accusations of plot holes tend to fall under the umbrella of “things that I didn’t quite understand” (not accusing you of that, but saying what I’ve observed). And sexism seems to just revolve around the females in the book not having a central enough role, which is not enough for me personally to call something sexist, and the story lampshades itself.
So if there’s anywhere that you’ve written about these flaws in the story in more detail, I’d appreciate being able to read them, if I can.
The actual problem with HPMOR is that while it’s supposed to be rationalist, Harry actually totally stops doing any investigation into things about a third of the way through and just starts making guesses which, since he’s an author insert, are mostly right. He decides that it’s safest if he doesn’t share his research, then assumes that, since no one around him explains how the magical world works, no one knows – rather than concluding that magical researchers came to the same conclusion that he did and are keeping their results secret. Of course, since he’s an author insert, he’s right – other wizards are just idiots (except for his other author insert wizard, of course).
The author mistakes memorizing social science results for being “rational” – it’s fan fiction that got bitten by the replication crisis.
A good (although extremely long) critique and review can be found here:
https://forums.spacebattles.com/threads/the-wizard-of-woah-and-irrational-methods-of-irrationality.337233/
I remember one bit (it was in a conversation with Draco, I think it was about heritability of magic [on an unaccountably naive Mendelian model] and supposed inferiority of muggleborn wizards) where Harry openly declares that if you get an experimental result you are not allowed to perform any other experiments to test the same hypothesis or any related hypothesis, because of the supposed natural inclination to only do this to results you don’t like until you get one you do like.
“The actual problem with HPMOR is that while it’s supposed to be rationalist Harry actually totally stops doing any investigation into things about a third of the way through and just starts making guesses which, since he’s an author insert, are mostly right.”
This is a mostly fair point that a lot of people have made in the subreddit. I don’t think quite all of his logical leaps are as lucky as portrayed, and he continues to get a number of them wrong, but it’s definitely not as satisfying as the initial premise of actually investigating magic and learning things by researching what works and what doesn’t as he tries to figure out why. My guess is EY realized that the story was going to be a billion chapters long and just started skipping that stuff for the plot, which I feel somewhat sympathetic to, since I’m fighting the same urge in my pokemon rationalfic.
“A good (although extremely long) critique and review can be found here:
https://forums.spacebattles.com/threads/the-wizard-of-woah-and-irrational-methods-of-irrationality.337233/”
Ugh. I’m sorry I stopped reading at the first post… when they spend so many words on explaining why the story is bad because it doesn’t respect the source material enough to their satisfaction (really? they’re upset that Petunia left Vernon because of shallowness? Like Petunia was some amazing character that’s being terribly maligned by this representation?), I just want to shake them and say “Do you know what a FANFICTION IS?!”
I saw their whole paragraph about being upset because EY didn’t read the whole series and how they like Luminosity more, but I find it unpersuasive in making up for the irritation. I may be oversensitive to this as a fanfic writer myself, but it’s seriously just really offputting as a critique, and it looks like it’s going to keep popping up throughout the whole thing every single time any character or aspect of the world does anything not like their canon self. There’s even a couple points where I think it would be a justified critique, but they seem ready to jump on it at the drop of a hat, and if it irritates me this early it’s probably going to become torturous later.
But I don’t want to throw the baby out with the bathwater, so if you have any particularly salient criticisms from there, please feel free to highlight them.
As someone generally unimpressed by Yudkowsky who nevertheless enjoyed HPMOR, I construed the irony you mention as the whole point of the work. Essentially, that this is what you get when a self-important child has a wildly exaggerated view of his own intelligence: a poor understanding of why certain norms exist and lots of bad decisions. To be fair, Harry doesn’t get punished that much for any of these decisions and ends up with a mostly positive outcome, but not before quite a bit of internal self-flagellation for not being smart or rational enough. I didn’t find the amount of plot armor to be too offensive or out of line with similar works.
Now, I’ve heard claims that Yudkowsky actually did mean for HPMOR to be a kind of guide to human rationality, as opposed to something closer to the opposite. That would be pretty funny if it were true! But I think the work speaks for itself regardless, and the author’s intent doesn’t matter too much here.
Right, for me a lot of people saying “This kid is supposed to be the uber rationalist? He makes so many mistakes!” are kind of massively missing the point. Double extra negative points if they also say “HJPEV is too perfect!” Like, you can’t have it both ways: either you appreciate a flawed character or you don’t.
I think it comes from the idea that a lot of people think Harry is meant to be this inspirational figure, when to me that’s very clearly not the case. HJPEV makes mistakes and is called out on them in the story, and yeah, he suffers consequences for them, fairly often. He’s an *aspiring* Rationalist, and a young one: not a perfect embodiment, and EY never meant him to be that.
I think the style of the story pushes readers toward thinking Harry’s supposed to be an inspirational figure. We see him making mistakes, but he hardly ever suffers significant consequences, so the story doesn’t seem to recognize them as mistakes. We see him being arrogant, but he’s arrogantly insisting on things the author supports, like the power of Science!. And what’s more, didn’t Eliezer say the story was meant to inspire us by that?
It is possible for a character to be presented as being perfect even while he makes mistakes, if the author doesn’t characterize the mistakes as mistakes and doesn’t think we should either. Also, remember that there are various categories of mistakes (and correspondingly, various categories of perfection), and it is possible for a story to show him making mistakes in one area but being too perfect in another area.
Also, Harry tends to have plot armor. In the real world, going around with no social skills saying “I know better than you” will fail badly, regardless of whether you actually do know something they don’t.
“…so the story doesn’t seem to recognize them as mistakes.”
I’ve said it before and I’ll say it again: I think a lot of people misremember just how many times Harry makes mistakes in the story and is called out on them and suffers consequences for them. I regularly get surprised when I see people say it only happened “once or twice” or “a few times.” By my last count it was well over 30.
I’m going to have to document it on my next read through, whenever that is, and make a post about it.
“It is possible for a character to be presented as being perfect even while he makes mistakes, if the author doesn’t characterize the mistakes as mistakes and doesn’t think we should either.”
Of course, but I’d contend that most people are pretty bad at assuming what EY meant to be mistakes and what he didn’t, just off of anecdotal experience.
“Also, Harry tends to have plot armor. In the real world, going around with no social skills saying “I know better than you” will fail badly, regardless of whether you actually do know something they don’t.”
Insofar as he loses allies and fails to ingratiate himself with many students, I think he DOES fail badly. And he suffers from being alone/being lonely quite a bit in the story, even after he gets his army.
Please do make that list! I’m especially interested in the first time he gets significant consequences – IIRC, it isn’t until McGonagall restricts his Time-Turner. (I don’t count almost being sorted into Slytherin, because it’s only “almost.” And I don’t count “failing to ingratiate himself with many students,” because it isn’t called out and portrayed as a consequence of that, nor does it clearly affect him in what he’s trying to do.)
I think Yudkowsky intended HPMOR to be a guide to rationality without being a perfect example of rationality. I suspect Harry is more intended as a reader insert than (as some shitty criticisms think) an author insert.
From memory, the first time Harry gets called out for doing something bad is when the Sorting Hat chastises him for being a bully. It’s not made explicit that McGonagall gives him less slack because of his antics regarding his money and spending at Diagon Alley, but it’s there.
The first flaw HJPEV faces is his inability to resist being clever. He only manages that when forced to avoid doing clever things that might destroy the world, and then only for that subset of clever things.
Sometimes the clever things that he thinks of actually work. Sometimes he uses dark arts of persuasion, like convincing Draco that he had sacrificed his belief in blood purity to science. Sometimes his reaction to a clever idea to perform a jailbreak is “of course, let’s do it”.
He’s called out – then and other times – but he doesn’t suffer consequences. The Sorting Hat’s criticism has no consequences for him; even his momentary sorting into Slytherin is retracted a moment later and he gets into Ravenclaw after all. The only lasting effect would be what he decides to do differently because of that criticism, which doesn’t count as a narrative consequence.
I enjoyed reading HPMOR but feel that it essentially amounts to false advertising for the rationalist movement. Specifically, it portrays rationality and science as a path to power (including political power), when in the real world they really aren’t. Its role in attracting people to the community has increased the fraction of people who are interested in acquiring power, but the tools studied and taught by the community are as useless as ever for actually doing so.
Rationality is about doing what works. Using rationality for power looks exactly like using the methods that work best for power.
Gaining power has been refined a lot over the millennia, so there’s little that rationality-specific focus can do over power-specific focus.
The specific skillset taught in Bay Area rationality, nerdy maths stuff, couldn’t be less suited to the goal of gaining power.
I would never have read this blog, or known anything about this community, had I not encountered HPMOR.
I agree with your last paragraph, nobody should *dictate* anything. One is free to critique, though.
And I think the critique of the HPMOR/rationalism relationship is not along the lines of “*all* rationalists care about is HP fanfics” – that would be an idiotic one indeed, and anyone who seriously advances such an argument is an idiot. There’s however a completely non-idiotic (at least IMO) critique that goes along the following lines: *when* rationalists (of course, this is generalizing, as far as you can judge multiple people by the actions of one) write fanfics, they do it in a manner that shows lacunae in their thinking and obvious literary mistakes, which makes it look terrible as an apologia for rationalism. And that reflects on other arguments, maybe not fairly, but it does. As much as HPMOR is perceived as being an argument for rationalism (maybe not a logical argument, but it’s not uncommon to argue for ideas by creating art promoting them), it is not a very good argument. It’s like a comedian combing his hair in a particular way and acting like a buffoon to criticize Trump – it may be hilariously funny (though too often it’s actually not) but it’s not a really good argument against Trump policies. And I think if one wants to criticize Trump policies one must be particularly careful to avoid ever mentioning comedians as contributing anything to that goal.
it’s the kind of book that, if you dislike it, you REALLY dislike it. but fwiw it continues to be well received on goodreads. wish goodreads made it easier to track ratings over time, but a quick glance at the last week:
5/5: 18
4/5: 11
3/5: 3
2/5: 1
1/5: 1
I had a fun time with it, but even though I’m a big fan of EY and his writings I do find it kinda embarrassing for rationalists. He’s free to write as he wishes and ideally shouldn’t be judged for it, but I think it does serve to lower the status of the community. Status plays a massively important role in recruiting people to help you accomplish goals, and EY’s goal is literally trying to Save The World.
Yeah, HPMOR’s gotten new people interested, which is great. But I think it’s also put off many others, and is a way easier target for ridicule IMO than actual LW beliefs.
Could you ever see a famous high status person like Elon Musk identifying as a rationalist? That would potentially make a way bigger difference for the community and AI risk than HPMOR. But think of the media scrutiny! Terminators and basilisks are bad enough, but throw Harry Potter fanfiction in there and LW is just something to point and laugh at. Rationality started out a low status group with weird beliefs, and HPMOR at best does nothing to improve that.
Is this all way too much responsibility to hold a guy to? Probably, but it follows from his own beliefs, and I worry his personal war on status is damaging to his higher goals.
I’m finally starting to see what Eric Raymond was talking about when he wrote about movements becoming independent of a charismatic founder by declaring him a nut and no longer relevant. Maybe it’s helpful in the long run? It’s kind of ugly to watch, though.
There are a number of us who deliberately push back against this. In a comment below, I note that Eliezer’s no longer particularly representative of the broader rationality community, but that’s not to say that he’s outside of it, either.
Eliezer is not a nut, and Eliezer is not irrelevant. I owe him a lot, and so do several thousand other people. There are definitely people that disagree, but making statements like this in public is part of how I push back against them.
Edit: Also, I don’t deny that Harry Potter fanfiction is already pretty inherently low status, but the vast, vast, vast majority of people who get a status boost out of sneering at it haven’t had a fraction of its impact on the world. As a guy who was a nerdy outcast in middle school, I take that as fairly solid consolation. I’d like to live in a world where we judge actual impact above sneerability anyway.
Edit edit: the above all sounds like I’m defending against an attack from sketerpot. I’m not—sketerpot clearly wasn’t making any sort of attack on me or people like me. I more took it as a chance to say words that were only vaguely in response. Thanks for the opening, sketerpot.
Agreed. Like I said, I wish people didn’t care about what kind of art you consume and create in your spare time. My criticism can be condensed to “pick your battles better”, and I’d hate to ostracize or disavow someone who’s done so much for us over a petty complaint like that.
Something which insiders know, and outsiders do not. The kind of mistake outsiders are making is understandable.
Agreed. It’s plausible that the world would be better off if MIRI fired Eliezer. Reasoning:
* Prospects for AI safety generally don’t look that good. Therefore it makes sense to try risky (high-variance) strategies.
* MIRI itself represents a tiny fraction of the world’s top math and CS talent. MIRI’s individual contributions are probably tiny compared to a small shift in the desirability of working on AI safety for the rest of the academic ecosystem.
* Academia is a Red Queen’s race. Reputation is the currency of academia. AI safety has lower reputational stock than it would otherwise due to Eliezer’s antics. Eliezer’s reputation as a public figure cannot be repaired (and even if it could, Eliezer would resist whatever steps are necessary).
* Eliezer did for AI safety research what Timothy Leary did for LSD: he made it popular and disreputable. The disreputable aspect is not inherent to AI safety. It’s a guilt by association thing.
* If MIRI fired Eliezer, that would be heard around the internet. It would represent a step towards the “gentrification” of AI safety. This wouldn’t do much to reduce MIRI’s research output. It doesn’t seem like Eliezer is super involved in MIRI’s research nowadays. And even if Eliezer is doing valuable research work, I’m sure he could find people to support him outside of the structure of MIRI.
The best counterargument: a non-amicable divorce could be harmful to the current ecosystem? Anyway, I think it would be pretty reasonable for MIRI to fire EY next time he does something substantially crazy, and maybe even before that.
Just a small note: in almost all cases when I actually talked to people about what they didn’t like about HPMOR, the specific criticisms were provably wrong. The “Harry is an author’s self-insert” thing has been addressed multiple times before, so I’m not hearing it as often as I used to. I’d very much like to see a more detailed critical review which doesn’t boil down to a variant of “Harry is not behaving like a normal kid would” (duh) or “Harry is often wrong even when he thinks he’s being rational” (duh), both of which are [Spoilers, rot13] gur prageny cybg cbvagf bs gur svp.
I certainly hope Potter-Evans-Verres isn’t an authorial self-insert because he’s such an objectionable little toad I want to feed him toes first and inch by inch to the lake monster 🙂
But that probably is part of the problem; I’ve certainly read the “No, he’s meant to be this annoying know-it-all in the start but once you get to a certain point all this gets turned on its head and he learns humility by finding out that he’s been so freakin’ wrong all along”. The problem is, he’s such a toad up to that point that I for one would rather spork out my eyes than keep reading to the part where he gets hit by the dropping penny.
But eh, fanfic is down to personal taste. For someone else, Harry Potter AU may be the very thing they are longing to read, but it’s not my particular cup of tea (even the ordinary Potter fanfic never enticed me in). So me not liking it says nothing more than YKINMK and shouldn’t be taken as a critique of the writing style, subject matter, or execution of content.
I kinda get the impression that Eliezer’s approach to this evolved as the story went on. In terms of the bare bones of plot, calling it a “comes of age and learns humility” story isn’t really wrong; Harry does turn out to have been wrong or naive about a lot of things, and this does turn out to be important to his eventual happy ending. But this isn’t remotely telegraphed in the early chapters; even from reader perspective, there’s no indication then that he’s supposed to be seen as anything other than awesome.
It’s hard to call that anything but bad writing if the plot was always supposed to go in that direction — surprises are okay, but there need to be enough hints that you think “oh, I should have seen that coming” — but it fits well if Eliezer realized too late that he was writing a plot that only works if everyone but the leads is an idiot, or that Harry needed a character-development arc. Which, let’s be fair, is pretty common in episodic formats.
And yeah, Eliezer is that kind of guy IRL.
This is an example of a ridiculous criticism, either blatantly false or defining away the entire plot. Harry’s identity is central to that plot. EY foreshadows it at least as early as the child-services freakout (though it may actually be less blatant after revision, I forget). It was in my mind as an explicit possibility at least as early as Harry’s conversation with the Sorting Hat, which talked about his flaws and told him that if his scar held anything like a ghost, “it would be part of this conversation, being under my brim.”
I think any honest critic will grant that Harry’s identity changes the whole story – as does the fact that Voldemort’s thinking was flawed both practically and morally. I also think any honest critic will grant that all this was likely intended from the start.
Harry’s identity is properly foreshadowed. But Harry’s identity only explains why Harry acts like an arrogant know-it-all. It doesn’t explain why Harry gets away with being an arrogant know-it-all. The fact that he can act like that and not be treated as a disciplinary problem (or even just like a person with no social skills) makes him more like a wish-fulfillment Mary Sue than someone being realistically shown to have Voldemort inside his head.
Something that had zero evidentiary value to the reader for his situation (vis a vis souls and horcruxes etc) not being the same as canon, because there was no third party to the conversation in canon.
I don’t disagree with any of this, but I also don’t see it as relevant to my criticism above. Maybe “anything other than awesome” was overstating it; Harry’s occasional callousness, his narrow focus, his penchant for evil-overlord theatrics were all pretty clearly meant to be reinterpreted in light of his, er, existential status. But those aren’t especially central to the early chapters, and except for some of the theatrics they’re not what we’re supposed to admire him for. I’m pretty sure readers were meant to take his early relationship with the setting (“child prodigy pulls back the curtain on a mad world”) at face value, and I’m also pretty sure Eliezer was angling for an analogy to rationalists in the real world (bits of the same worldview are scattered throughout the Sequences). It’s only later that some of that got walked back.
Jiro, since that’s a different criticism, I’ll only say you didn’t seem to respond to the fact that the author meant you to read the story more than once. You might want to read the following as well.
Nornagest: in Chapter 2 Harry exclaims that what he’s seen would allow non-locality or “FTL signaling”. Within Rowling’s world this is correct; there are non-local FTL Time-Turners. McGonagall alludes to this in the same chapter, and I think you’ll grant this was intended from the start. Harry has a chance to make an accurate prediction. He does not.
Later he buys a soda which repeatedly confuses him. He tells himself he wants to know how it works, that he has to investigate this (his emphasis) and that he should try an “experimental test” on occasion. He notices that his initial thought as to how the drink works does not make sense. Harry has a chance to make an accurate prediction. He does not.
In Chapter 13 (this is the author giving you a rough idea of how long you should wait for shoes to drop), Harry has experiences which I immediately attributed to time travel. Eliezer added a note to that chapter assuring people that it made sense and they should try to solve the puzzle, which tells me that he expected everyone to get it right off. Harry has a chance to make an accurate prediction. He does not. (He does guess that Dumbledore controls the game, but that doesn’t seem nearly true enough.) This is also when we learn that he has a self-recognition code, which I mention because it is bloody important.
Later in the same chapter, a painting outright tells him that “the one who awards or takes points is always you.” Harry has a chance to make an accurate prediction. He does not.
Going over the events of that chapter, as it were, he notices something wrong with his actions related to my earlier point. As I explicitly said before, this is connected. It is not a case of “narrow focus.” Ask yourself what Voldemort believed his own goal to be. Then ask when he ‘died,’ what happened in the next nine years plus four months, and what that implies in the most natural reading of the story.
Harry does make a clever and useful suggestion in the next chapter, and seems properly impressed with the import of time travel in general. It takes him until Thursday to try something that he seems confident will work, and produce a new discovery; only after the fact does he see that reality went easy on him.
Okay.
The big problem is that having such behavior eventually fail is not realistic. It should immediately fail.
Having it eventually fail makes it seem more like the author changed his mind than that the character was meant to be flawed all along.
Realism aside, it means we only get to the consequences after dozens of chapters of him by all appearances succeeding. Guess which one sticks in our imaginations?
The big problem is that having such behavior eventually fail is not realistic. It should immediately fail.
Yeah, biting your teacher when you’re in third class shouldn’t be treated as an amusing quirk, it should be treated as “okay, you really need to be taught how to behave in a civilised manner and if you can’t learn then maybe you need professional intervention”. Biting is a normal developmental stage you go through (and then grow out of) when you’re aged two to three, not nine years old and in third class. I can’t help but wonder what Harry Three-Names would have done if my mother’s cure for biting had been applied 🙂
As an aside, are American primary schools different? Over here, there wouldn’t be a separate maths teacher; there would be one teacher for the class who taught all the subjects. Or am I misunderstanding, and it only means ‘maths teacher’ in the context of why Harry Three bit her? Never mind that one explanation might be that she mightn’t have known the word logarithm but only referred to it as “this is what the log of a number is” – she would know the mathematical concept okay.
A lot of my resistance is because of the suspicion I have that Nornagest mentions; Harry Triple-Decker was intended to be The Only Sane Rationalist and Ultimate Bossy-Boots Know-It-All Who Would Be Proven Right from the very start, but as the story went on and reader feedback came in, the author had to swerve and adjust course so that Harry Triple-Barrelled-Surname would get an attitude adjustment.
And this is pure nitpickery of the worst kind, but “I’m Evans-Verres and he’s Verres-Evans because we’re just that unique and special and ultra-precious about signalling how progressive and right-on about inverting and subverting the patriarchal custom of the woman taking the man’s name we are” rubbed me up the wrong way. Pick a surname and stick with it, and thank goodness they didn’t spawn: would the kid have been named Intelligencia Evans-Verres-Verres-Evans? Or possibly Brainella Verres-Evans-Evans-Verres?*
And do we ever find out if Harry’s parents were plain Mr and Mrs Potter or were they also Potter-Evans and Evans-Potter? I have a feeling they were too sensible to mess around with “We positively need two bijou separate sets of surnames, one for each of us, personally customised”.
*Names ripped off from Private Eye’s “Mary Ann Bighead, a parody of journalist Mary Ann Sieghart, often writes columns trumpeting her own brilliance and that of her daughters Brainella and Intelligencia.”
No, there’s usually one teacher for everything (except gym and music) until sixth grade or so, when it starts getting broken down into subjects. Details differ between programs but it’d be very rare to see a separate math teacher in third grade, at least outside of private schools.
And it’s implausible that logarithms would come up that early, but that at least can be written off as HJPEV reading his dad’s math textbooks and being his charming self.
The names thing was probably supposed to be satire roughly along the lines of Rowling’s “Privet Drive”, but Eliezer wouldn’t have had a native understanding of the British connotations.
The most plausible way for logarithms to come up in an elementary school discussion is talking about the Richter scale for earthquakes, I believe, to explain why the difference between 9 & 10 is more than that of 3 & 4. How likely kids are to be familiar with this scale depends on where they live, of course.
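(A numerical aside of my own, not from the comment above: each whole Richter step corresponds to roughly a tenfold increase in measured wave amplitude, so every step is the same ratio, but the absolute jump near the top of the scale dwarfs the one near the bottom. A minimal sketch:)

```python
# Richter magnitude M maps to wave amplitude roughly proportional to 10**M,
# so equal magnitude steps are equal ratios but very unequal absolute jumps.
for lo, hi in [(3, 4), (9, 10)]:
    print(f"{lo} -> {hi}: ratio {10**hi / 10**lo:.0f}x, gap {10**hi - 10**lo:.0e}")
# 3 -> 4: ratio 10x, gap 9e+03
# 9 -> 10: ratio 10x, gap 9e+09
```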
HJPEV reading his dad’s math textbooks and being his charming self
Yeah, I was imagining Harold Thrice-Blessed With Nomenclature doing his plum in the mouth “Pedagogue, pray instruct me – or rather, my sub-par class mates, for naturally I already know all about it! – in the logarithmic method if you would be so kind” routine and getting a You wha’? reaction, whereupon he sinks his teeth into her, under the impression that she is ignorant and not that he has been a toffee-nosed git. (Also, I rather doubt the adopted son of an Oxbridge professor is going to the local bog-standard comprehensive, unless the Verres-Evans-Evans-Verreses are radically signalling their leftist cred, which not even actual Labour Party Corbynista politicians do).
Yudkowsky could have used Brit-picking help (some of his jokes don’t come off but induce wincing) but then again, writing it in the vein of “Hogwarts High” rather than actual British schooling etc. is very much what I’d expect from Americans doing HP fanfic, so he was being mainstream there 🙂
To be fair to Corbyn, he has principles on this issue: he got a divorce when his wife wanted to send their son to a selective school.
Yeah, that’s pretty much my reaction, too. I tried reading HPMOR once, but quit halfway through because Harry was just acting like a smug, obnoxious Mary Sue all the time and not suffering any real consequences for doing so. Sure, people say this gets better later in the story, but a good story should be enjoyable from chapter one, not from chapter sixty.
Yeah that ending was a travesty. I really thought the story was going somewhere.
I have a horrible feeling that I’m both of your improv annoying people :/
I’m honestly pretty ignorant about economics, so I know that it’s better to keep my trap shut about that one. But with the rationalists I have at least put the time in! I’ve read a lot of the original Less Wrong content by now, read all the Slate Star Codex posts, engaged a reasonable amount with rationalist-adjacent tumblr and poked around a number of the other associated blogs.
And I still just don’t understand, unfortunately (those are links to my version of being the annoying person). I like the community – that’s why I spent all this time on it – but I still just don’t see how it’s reconciled its interest in “domain expertise, hard-to-define-intuition, trial-and-error, and a humble openness to criticism and debate” (to quote your annoying person) with the sort of framework it started out in, in 2008 or so, which was highly focussed on very formal mathematical models of cognition. I’m not sure how far the rationalists really have left “the early days of our own movement on Overcoming Bias and Less Wrong”.
I know that I’m missing a lot of nuance from not hanging around one of the biggest communities in person, and maybe people do have sophisticated stances on how to reconcile the two. If so I want to know about them! But I think it’s still really hard work to get this information from the internet, so sometimes I just get frustrated and post ranty comments.
Update: I should maybe clarify that I haven’t read any of whatever this latest internet argument about rationalists is, so I’m missing context here. Maybe the arguments there really are terrible.
Any rationalist detractors, skeptics, and critics can email me at my work email (duncan at rationality dot org) and I’ll happily send you a ~200pg handbook of a fairly solid representation of this community’s up-to-date take on rationality, in exchange for you filling out a survey now and again six months later (you get the book and join the control group!).
It’s really left the early days of our own movement on Overcoming Bias and Less Wrong.
Interesting, and thanks! Have emailed you.
I assume from the email address that this will be related to the CFAR side of things. Do these ideas have broad traction in the wider rationality community now, or are they localised to a small part of it?
CFAR is generally pretty solid as both a magnet for trending community ideas and a shaper of community interest. I think that, due to both effects, we’re something like my-gut-tells-me 85% “up to date” on what the broader rationality community is paying attention to.
I’d be interested in reading the book but I don’t consider myself a skeptic or critic of the movement. Would you still accept me as part of the control group?
Sure!
I also emailed you with interest.
Thanks for this offer!
I don’t understand your question, but I’m going to try to answer it anyway. There is no way this can go wrong! (sarcasm) For bonus points, I also don’t hang around in person, so whatever nuance this makes you miss, I’m missing it too. Unlike you, I don’t know that it’s better to “shut my trap” when I don’t know what I’m talking about. Hence, the next few paragraphs.
Suppose you were to ask, say, Eliezer Yudkowsky, whether a perfect reasoner should “theoretically” need to know anything other than Bayes’ rule? What might he say? (This keeps getting better: I’m trying to answer a question I don’t understand by putting words into the mouths of famous people who I don’t even know.) I think he’d say something like this: if you have a good way to describe your hypothesis space, and an infinite amount of computing power, then using Bayes’ rule and almost nothing else, you could rapidly increase your knowledge and understanding of the world, using something like AIXI (see Wikipedia for more about AIXI). This sounds like “a formal mathematical model of cognition”; perhaps that’s the kind of thing you had in mind. If not, you’ll have to clarify your question.
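To make that concrete, here’s a minimal sketch (my own toy example – the coin biases and flips are invented, and this is enumeration over a tiny hypothesis space, nothing like full AIXI) of what “Bayes’ rule and almost nothing else” looks like:

```python
# Keep a probability for every hypothesis, and reweight them as evidence
# arrives: P(h|e) is proportional to P(e|h) * P(h).

# Hypothesis space: a coin's heads-probability is one of three candidates.
hypotheses = {0.2: 1 / 3, 0.5: 1 / 3, 0.8: 1 / 3}  # uniform prior

def update(beliefs, flip_is_heads):
    """One application of Bayes' rule over the whole hypothesis space."""
    posterior = {}
    for bias, prior in beliefs.items():
        likelihood = bias if flip_is_heads else 1 - bias
        posterior[bias] = likelihood * prior
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

for flip in [True, True, False, True, True]:  # observed flips: H H T H H
    hypotheses = update(hypotheses, flip)

print(hypotheses)  # most of the mass has shifted onto bias = 0.8
```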
But what’s the actual goal here? The goal is not to figure out an algorithm which we could use to program a robot to draw correct conclusions, assuming that robot has access to infinite computing power. There are several goals actually, but one of them is that we want an algorithm that we can actually implement on hardware that we have available, and that makes as efficient use of that hardware as possible. The hardware that’s most relevant here is the modern CPU: Cerebral Processing Unit, more commonly known as the human brain 🙂
(Hey, Scott’s not the only one who can’t resist a pun. It’s not my fault he’s better at it.)
Which brings us to things like “domain expertise, hard-to-define-intuition, trial-and-error, and humble openness to criticism and debate”. Can we formally prove that all those things are good ideas? Probably not, depending on how strict your standard for “formal” is. But that’s okay, most mathematical proofs are not maximally formal either. Let’s back them up informally instead. One at a time:
1. Domain expertise. Even a perfect thinker should respect domain expertise, because domain experts have seen information you haven’t. For instance, if you haven’t ever seen bacteria under a microscope, you should listen to biologists who say they know what they look like. If you’re not an ideal thinker, perhaps because you don’t think infinitely fast, you should also respect the fact that they’ve thought about this domain far longer than you have.
2. Hard-to-define-intuition. This goes back to what I said earlier about exploiting hardware resources as well as possible. What is a hard-to-define-intuition? It’s your brain telling you “I think this is the answer, but I can’t or won’t tell you why”. So, your brain computed the answer “for free”. Should you trust it, or discard the answer for fear it leads you astray? Depends! The ideal thing to do is collect some statistics about the sorts of situations in which your intuition tends to be right, and pay proportionally more attention to your intuition in those cases (a toy sketch of this bookkeeping appears right after this list). There’s that Bayes’ rule again! And sometimes, the way you formed that intuition is that your brain did some calculation akin to Bayes’ rule itself, and just didn’t tell you about it. And maybe over time your intuition improves (because your brain got exposed to more data and thus performed many Bayesian updates). Yet another reason to trust domain experts: their intuitions are better, and Bayes’ rule gives us at least one reason why 🙂
3. Trial-and-error. Again, even an ideal reasoner needs this a bit, and everyone else needs more. Trial-and-error is just a special case of getting more information about the world. For instance, Thomas Edison tried many different designs for a lightbulb before finding one that worked. You might be tempted to say that with a perfect knowledge of chemistry and a powerful computer, he wouldn’t have needed to actually build them, just simulate them. Fine, but first you need a perfect knowledge of chemistry, which probably requires chemistry experiments, which probably involves trial and error. Furthermore, even if you’re just simulating, that’s still trial and error, just faster, since the computer is doing it so you don’t have to wait an hour to find out that your light bulb design will burn out way too fast. Even things like solving Sudoku puzzles are trial and error. The best algorithms make fewer errors and catch them sooner, but I don’t think there will ever be a Sudoku solver that just goes directly to the solution, in the sense that if you want to solve for x in “x^2 + 7x – 9 = 0”, there is a way to do it “directly” (the quadratic formula gives x = (−7 ± √85)/2) that doesn’t feel like trial and error at all.
4. Openness to criticism and debate. This one is actually pretty useless for a perfect thinker who never makes any mistakes and thinks infinitely fast. But even if you never make mistakes, you probably only think finitely fast. So let’s say you’re a philosopher and you’re trying to solve a tricky problem, such as, say, the ultimate answer to life, the universe, and everything, to pick an example at random. One thing you can do when faced with a hard problem is parallelize. Concretely, you might be lucky enough to find other people who also want to solve this problem. Now, if this were an “easy” problem like sweeping the floor, then you just say “I do the left side of the room, you do the right side”. But in this case, the problem is so poorly understood that you don’t even know if the terms left side and right side make sense. It’s more like sweeping the floor of a space ship in zero gravity when the lights are turned off, or something. So if someone spends ten years to discover that a promising-seeming line of attack is a dead end, and someone else discovers that a line of attack that looked hopeless is actually yielding some tasty low-hanging fruit (hurray for mixed metaphors!), it would be nice if they were to tell you about it. But if someone comes up to you and says “this line of attack is hopeless; save yourself 10 years”, you may need some convincing. So you give your reasons why you think it’s not. And they tell you they thought the same thing 7 years ago, but it turns out that there’s a subtlety with XYZ and so it doesn’t work. And maybe you point out some things they haven’t thought of, and maybe they point out things you haven’t thought of, and this sounds a whole lot like a debate. So even if all people thought exactly the same way, debates seem like a quick way to get each other up to speed on each other’s progress: if before the debate you know X and I know Y, then after the debate we both know X and Y, on a much deeper level than if we had just skipped straight to the conclusion. Since debates of this form will often contain statements of the form “It seems to me that you are wrong” (or else what is there to debate?), openness to criticism becomes important.
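As promised back in point 2, here is a toy sketch of what “collect some statistics about your intuition” might look like. To be clear, this is my own illustrative construction, not anyone’s actual method; the domains and numbers are invented.

```python
# Track intuition hits and misses per domain, then weight future hunches
# by observed hit rate. Laplace smoothing (+1/+2) keeps the estimate sane
# when you have little data. Entirely illustrative.
from collections import defaultdict

record = defaultdict(lambda: {"hits": 0, "misses": 0})

def log_outcome(domain, intuition_was_right):
    record[domain]["hits" if intuition_was_right else "misses"] += 1

def trust(domain):
    r = record[domain]
    return (r["hits"] + 1) / (r["hits"] + r["misses"] + 2)

log_outcome("chess", True)
log_outcome("chess", True)
log_outcome("stock picks", False)

print(trust("chess"))        # 0.75  -> lean on these hunches more
print(trust("stock picks"))  # ~0.33 -> double-check these
```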
And that’s all. If you or anyone else has read this far, I apologize for wasting your time instead of hitting delete like I should have after writing all this.
But EY does not talk about Bayes only in the context of ideal reasoners. For instance, he thinks Bayes should replace science. Perhaps in his mind “Bayes” is a ragbag of heuristics that non-ideal reasoners could and should use — but to everyone else, Bayes is a mathematical rule. They are naturally going to hear him as recommending an algorithmic thingy as the only epistemology anyone needs, because they are not party to his idiosyncratic definition. The misunderstanding is down to the way he expresses himself.
Eliezer does not think that Bayes should “replace” science, Eliezer thinks Bayes could fill the holes around science; he thinks that Bayes is the computationally expensive general case of which science is the approximate but well-understood “simple” instance. (This is in the context of people seriously claiming that you shouldn’t consider any arguments that are not scientifically proven, which would rule out all speculation about qualitatively different futures.) It’s like accusing relativity of wanting to replace Newton’s laws; if it works as advertised, the simpler laws will just fall out as a special case.
That’s not relevant. My point was about the (mis)use of “Bayes” as a piece of terminology.
In Science or Bayes, EY says that the use of Bayes instead of Science would have led to MWI being accepted earlier. But his justifications for Many Worlds are not based on Bayes as a method of mathematical probability, they are based on handwaving conceptual reasoning (which is, incidentally, taken wholesale from the work of David Deutsch, who is not a Bayesian!).
So Bayes in that context does not mean maths…but to an outsider, it would mean maths.
Actually, AKA1Z, even in the early posts you’re talking about you can go read Eliezer talking about improved versions of Solomonoff Induction – specifically a version that assigned probabilities to sequences of data, which does sound like it would mathematically favor “MWI” over most if not all other interpretations.
Now, this business of improving SI is an open problem. Slightly more recent posts make this clear, and imply an argument for MWI that is not yet mathematical because we’re in the process of formalizing it. Feel free to engage with the actual work being done.
I predict that someone in the next twenty to forty years will come up with a definition of Bayesian “naturalized induction”, and insofar as we can apply it to quantum mechanics – perhaps in a simplified case – it will say that a person living in MWI would experience the Born rule.
“Sounds like it would” – which is to say, no mathematical proof has been offered, and instead what we have is conceptual handwaving that such a proof is possible.
Is it or is it not misleading to say you have Bayesian proof of something, when in fact you have only handwaving about the future possibility?
Well, I tried before and guess what happened…
I thought his point was that science is “officially” only about testing hypotheses (I’m simplifying this a lot), but the question of “where do these hypotheses actually come from?” is kinda taboo.
The process of generating a hypothesis does not have to be “scientific” at all — your way to scientific fame can start by having a dream about a mythological snake, as long as it allows you to make measurable predictions, and the experiments confirm them.
So, where do the scientific hypotheses come from? Officially, anything goes… but intuitively, just generating random strings, and experimentally testing the ones that happen to make sense linguistically, would most likely not result in a great scientific career. There must be some way that is, at least, better than completely random. But we can’t call that “science”, because science is the word used for what happens after the hypothesis is proposed.
Bayes is an explanation of how this could possibly work.
Well, OK, everyone is seeing something different in that ink blot. But I am a trained scientist and I never noticed any taboo. OTOH, you don’t have courses in hypothesis formulation because no one has boiled it down to an algorithm or a teachable set of techniques. Which might be a problem, but isn’t the same problem.
That’s very widely understood.
?????
Does “possibly” mean within realistic computation limits? It’s known that Bayes in combination with some other things does “work” at generating and testing hypotheses algorithmically, but only if you ignore computability. But that is a theoretical discovery with no obvious practical applications, and EY seems to be talking about doing science practically, as far as I can interpret his ink blot prose.
AIXI is all about Bayes’s rule, quite explicitly. That’s probably not what you meant, but why did you say something so precise that it was simply wrong?
Thanks – I’m very happy to receive detailed answers to my really vague comment. I considered being more precise than “a formal mathematical model of cognition”, but realised I’m not exactly sure what specific model MIRI are interested in these days. I haven’t read their logical induction paper, for instance. However I assume they are still interested in the general probability-and-logic cluster of ideas, and are still interested in the attempt to explain cognition as some sort of formalisable reasoning process involving explicit mental representations that are then transformed by mathematical rules.
I don’t personally think any of this is going to fly, for the same sorts of reasons David Chapman doesn’t think it’s going to fly. How are we defining this ‘hypothesis space’? What is the process by which these abstract representations take on meanings in the world? Why do we expect explicit rules like this rather than a more opaque black box process that happens to produce reasonable heuristics? Why are we even assuming that cognition is localised as some sort of ‘representations in the head’, rather than at least partly arising out of interactions with the environment that don’t need to be explicitly ‘stored’ anywhere?
Chapman talks about this at length in a far more coherent way than I would ever manage here, and actually has some sort of expertise in the area that may lead people to take him seriously. My interest is more from the other side: using the experience of successful reasoning in a particular domain (my personal hobby horse is mathematical intuition) as hints towards what a theory of cognition should maybe be like. Mathematical intuition tends to draw on a wide range of human abilities – Thurston’s On proof and progress in mathematics is wonderful on this, and discusses e.g. language, spatial sense, association and metaphor, and the ability to think through processes in time.
I suppose it’s perfectly possible that all this mess is built on top of some kind of clean Bayesian formal reasoning process, but it’s not obvious to me why the idea is so compelling.
I’ve also never really understood the relevance of the Solomonoff/AIXI stuff. We don’t have infinite computing power, so as you say we are going to need ideas that work with the hardware we have available. MIRI’s intuition seems to be that some of the ideas are still useful for thinking about agents with finite computing power, and I’ve never quite grasped why.
A programmer analogy.
You have to build a complex system. You can either try to think about how the complex system would work and then just code for a few months and see if you get anything useful out at the end. Or you can try to build a simpler system and hope that you can upgrade it into the complex system. Somebody watching from the outside who knows nothing of systems design might say “I don’t see why this person is so sure that the toy problem they’re solving is going to scale to the real problem.” That’s not it at all. It’s just that we know that tackling the big problem directly very probably won’t work, whereas tackling the simple problem and trying to work our way up to the complex one conceivably might. It’s not just a question of what problem to solve, it’s also one of what path can we take to solve it, and the path of “solve a simpler problem and see if it illuminates the more complex one” has a lot of evidence behind it.
Yes, this is fair. I’m a big fan of toy models, extracting out individual interesting questions, finding the simplest non-trivial example of what I’m interested in, etc. It still doesn’t really help me understand why they’re building these particular sorts of toy models, or why they’re so excited about them.
“Excited about them” how? Also, see my comment right below this.
Since your comments talk about MIRI, I’ll just respond to that:
Current “machine learning” techniques could be a flash in the pan. Now, they could perhaps be capable of creating AGI. We’ve had evidence in that direction since 2008. But in putting together a theory of AGI that doesn’t kill us, it seems like a good idea to start with the abstract laws governing all rational minds, since those would apply to the next hot paradigm if that’s the one that goes anywhere. Remember that timelines from people at MIRI range from a few decades to more than fifty years from now for the median advent of AGI.
MIRI should nevertheless analyze “machine learning” as well. They are doing so. They started that about as soon as they had the resources to do so.
Perhaps you’d like to clarify your objection?
My objection is about “rational”. Humans aren’t rational, in the technical von Neumann sense, and can still be dangerous. An assumption that all AIs worth considering will operate under vNM rationality underpins these “universal laws”, yet it is not a given.
I think the rationalist community has its share of idiosyncratic prejudices, which are more or less predictable given its demographics and its origin – excessive concern with IQ, attraction to solutions, problems, or arguments that involve future technology, strong desire to put things in terms of numbers or equations even when it is inappropriate to do so, etc. Because of the way communities and memes work, this leads to certain ideas, such as utilitarianism, getting uptake disproportionately to their merit.
That said, those idiosyncrasies have also, I think, led to some things being given attention that deserve it and which are overlooked elsewhere, and I don’t think other communities are better in this respect. Individual rationalists are inconsistently self-aware and intellectually humble (reading, say, Eliezer Yudkowsky talking about philosophy is typically cringeworthy), which can grate more than usual given their explicit concern for these virtues; but to say that they exhibit them inconsistently is to say they do it a lot more than most people, and some members of the community, like our host, do it quite well indeed.
I don’t mean to jump on you here (not saying these are your criticisms), but the problems mentioned seem pretty similar to the complaints Scott criticizes in the post, i.e. mostly question-begging “I disagree”s. Saying “they do bad things” (or care about the wrong things) without specifying why those things are bad (or why those things aren’t important) isn’t exactly substantive.
What qualifies as cringeworthy is also quite a bit in the eye of the beholder. I cringe every day at things I read or hear (sometimes including academic texts), and EY’s writing comes pretty far down the list of the badly argued. That fits into a general pattern I might be partially imagining, but rationalists seem to be held by such critics to a much higher standard than anyone else (which you also say). A lot of it isn’t so far from “eww, nerds! Let’s find fault!”
Think of this hypothetical exchange:
Rationalist: “The world doesn’t work rationally at all! Let’s try to be more rational!”
Critic: “Everyone already knows that but you, and nobody cares. You shouldn’t either.”
Sums it up, I think. But the critic is wrong, IMO (not completely wrong, just not completely right): typically, we’re not at all aware of how irrational we are, and a lot could be gained if society ran more rationally. In fact it runs more rationally today than it did historically, and is much the better for it.
But then again, “eww, nerds!”.
I don’t think that Scott’s criticism of Cowen and the other objectors to rationalism, at least in this post, is that they aren’t willing to engage the first order questions about whether utilitarianism, concern about A.I. risk, and other rationalist shibboleths are correct.
My comment wasn’t intended to give anyone who was sympathetic to any particular rationalist idea a reason to reject it. Obviously, if someone thinks that, say, utilitarianism is correct, they won’t find the rationalist tendency towards it any sign of failure. I would hope that most rationalists are self-aware enough to accept the general point that the pattern of concern of rationalists as a community is distorted in some way, so that an idea’s uptake among rationalists does not perfectly correspond to its justification, and that this has something to do with the cultural, historical, and demographic features of the rationalist community. The nitty gritty substantive disputes (of which I have relatively few with rationalists!) are for elsewhere.
In any case, I meant my comment more as a defense than an attack. All communities are subject to those kinds of distortions, and rationalists are going about things in basically the right way.
What bugs me about Yudkowsky’s writing, by the way, isn’t mostly about quality of arguments – I’m sure he hits more than he misses, and everyone pulls a stinker now and again. But in contrast to, say, Scott, he often lacks adequate intellectual humility, so when he misses it’s very embarrassing to someone who can tell.
I think an “eww, nerds” critique of rationalism would be very strange coming from me, for more than one reason.
Yeah, I remember reading some posts where Yudkowsky set up a straw philosopher Bob, claimed Bob’s position was untenable, and then, in my view at least, failed to properly refute Bob. It was kind of cringeworthy!
Oh, I didn’t mean that “eww, nerds” was coming from you, more like an undertone in many other criticisms.
Overall I’m not that bothered by EY being overconfident sometimes (I guess that’s a personal thing), because to a certain extent, in certain contexts, I share his apparent feeling of “am I the only one that thinks this is obvious? I feel like I’m taking crazy pills here!” (justified or not), which I suppose makes it easier to forgive the occasional overstepping. YMMV.
To be more specific, there’s too much trust in psychological studies, and I believe this is not just because of respect for science, but also because of a desire for simple solutions.
The rationalist community is probably doing better than most people on this– there are a good many rationalists who at least know about the problem and try to not be influenced, but psychological studies are sufficiently unlikely to be replicated that I think they should get almost no trust.
Which ideas from psychological studies seem to be solid? I think loss aversion is sound, isn’t it? What others?
Depending on who you’re listening to, published psychology studies have somewhere between a 30% and 60% chance of being replicable (which is to say, true, or at least true-ish). As numerous people (including, if I remember correctly, Scott) have pointed out, this is orders of magnitude better than chance (where “chance” is something like “the probability of randomly formulated statements about psychology being true”).
There are tens of thousands of psychology papers published every year. Even if we assume that only 10% of them are true/replicable (which seems far too conservative), this is still thousands of truths.
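To put rough numbers on the two claims above (the paper count is a round assumed figure, and the base rate is purely an assumption for illustration):

```python
# Back-of-the-envelope using the figures from the comment above.
papers_per_year = 30_000   # "tens of thousands" -- an assumed round number
replication_rate = 0.10    # the deliberately conservative 10% from above
base_rate = 0.001          # assumed: ~0.1% of random psych claims are true

print(round(papers_per_year * replication_rate))  # 3000 -> "thousands of truths"
print(round(replication_rate / base_rate))        # 100  -> orders of magnitude above chance
```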
Now as for “ideas from psychological studies”, it depends on precisely what you mean by this. But there are certainly plenty of effects that have been replicated hundreds or thousands of times, because they have become standard tools in the research arsenal. To name three that I have personal experience with: contextual cueing in visual search, test-retest reliability criteria for unusual traits such as synaesthesia, and deficits associated with attentional set-shifting. These, and many thousands more, are real/true according to whatever sane standards of reality/truth you would like to apply.
A good rule of thumb is to treat anything that has been published once without replication in a decent journal as an interesting hypothesis worthy of serious consideration, anything which has been replicated hundreds of times as true, and to use a sliding scale for anything in between.
This isn’t to say that there is nothing wrong with the publication mill in psychology, and certainly you should basically treat virtually all statistics as ballpark estimates, rather than anything precise. There are plenty of problems with the field, but that is true of any field, and it certainly doesn’t mean the whole thing is bunk.
Each subfield has its own replication rate. The relevant studies are those of systematic errors. I think Kahneman’s work is solid, but in his popular book he quoted a lot of social psychology that has since failed to replicate.
What trust in psychology studies do you mean? What simple solutions?
Eliezer told people to study the books edited by Kahneman and Tversky, and I think that those have held up. But the way he talked about them didn’t seem to me to be about addressing specific biases, but about the general need to be skeptical of your own thought processes.
Kahneman is better than most but he’s far from flawless. For example, Thinking Fast and Slow leans fairly heavily on priming results, which IIRC have not consistently replicated.
That’s what I said. The books edited by K&T of original research by their collaborators have held up.
I have not read that book, so I don’t know how heavily it used priming. I suspect that it used it as a flashy and easily communicated example, not as a cornerstone for a theory. Nor, as I said, do I think Eliezer really used specific examples, either.
>reading, say, Eliezer Yudkowsky talking about philosophy is, for example, typically cringeworthy
Any chance you could expand on/explain that?
“This criticism’s very clichedness should make it suspect.”
Why?
Because if you’re criticizing someone with a cliché, there’s a good chance they’re already aware of the content of your criticism (it being a cliché, and therefore the sort of thing people have heard before).
It depends on how you think people and organisations tend to respond to criticism. If they are mostly responsive and effective, then a cliched criticism will likely be outdated. But if they can’t or won’t change, then a cliched criticism will likely be true – others having noticed the same thing is evidence that your criticism is accurate.
It’s a cliche that Scientology is a cult, that Forever Living is a pyramid scheme, that Peter Beardsley is ugly, that Somalia is a dysfunctional mess. All arguable, but none refuted merely by noting that they are common criticisms.
Can’t / won’t is, in my view, at least as common as fixing the problem, but I don’t know how to prove it.
But it’s also useless to criticize Scientology as a cult, &c., because they already know the content of that criticism.
It’s useless (at least on its own) if your aim is to reform Scientology. It’s useful if your aim is to warn off prospective recruits.
It *is* useful to e.g. criticize Scientology as a cult etc, because while *they* know the content of that criticism, the subset of the population that are their potential recruits don’t know that.
The audience of criticism is often not the person/group/thing that you’re criticizing but other people.
The actionable purpose of criticism is often not to change the criticised person/group/thing but to inform others that they should avoid it and choose something else instead.
“Foobar sucks because of cliched reasons X,Y and Z; avoid it at all costs” is constructive, actionable advice even if X,Y or Z won’t ever change – because people can rightfully choose to avoid it.
Good point. So is it rational of the rationalist community to criticise things they don’t like, such as religion and “postmodernism”, with cliched arguments?
I also think that is odd. Common criticisms can be correct – criticisms against anti-vaxxers, flat earthers, creationists for example. Why should we give specific groups the benefit of the doubt in their ability to grapple with criticism? I would argue, only if we already believe that they have some mechanism for self correction.
Economists are in a somewhat better position than rationalists in that regard – many economists work for private companies that would only pay them if their models were worth something. But many economists are not under as great a pressure to be correct, particularly ones in academia, and I think the cliche criticisms are at least partly correct (looking straight at Bryan Caplan).
I suspect much of the reason people today continue to associate rationalists with the views espoused on LessWrong circa 2008 is that rationalist folk today do in fact still frequently include links to LessWrong articles from 2008 in their current writings.
If these old views have fallen out of favour, why keep bringing them back up?
The fact that people still regularly link to individual examples of philosophy or argument from ~2008 does not contradict the statement “the community doesn’t endorse the general scope and content of the ~2008 zeitgeist.”
People don’t keep bringing up the ideas that have fallen out of favor, which is a set that includes the majority of the ~2008 era stuff.
In which case, why not do a rewrite? The problem with linking to a small number of old posts you do like, embedded in a much larger collection of posts you mostly don’t like, is that people will, quite naturally, see all of the links to the surrounding content and assume that’s part of what you’re pointing to.
That’s fair. I think the “why not” is “it takes a ton of time and effort to recollect, rewrite, and rehost a newly curated selection.” I guess people jumping to conclusions is the opportunity cost of that action.
Indeed. If you have made every effort to put forward your POV and people still don’t get it, the blame is on them — otherwise, you have nothing to complain about.
Eliezer has done this already.
Has he edited that? I thought that was just a compilation of everything from c. 2008.
I have not actually read it, but I have read the whole of the Sequences, and I saw the discussion of AItoZ at the time. My understanding is that it is an edited selection. Whether it was edited to the extent of counting as a rewrite is less interesting than a list of that majority of ~2008 material that tk17studios thinks is wrong would be.
I guess I’m rather out of it, because I don’t actually know which parts have fallen out of favor since 2008.
If you want people to think that you have drawn a line under certain outdated views, why not make a big public announcement?
You want a criticism? Here it goes:
The pictures of Spock don’t exactly hit the mark but they don’t totally miss either. It’s just that “rationalism” as practiced is a giant “act like Spock to come up with complicated rationalizations for your emotional urges”. Example:
https://www.facebook.com/yudkowsky/posts/10151804857224228
Sure, he’s acting sort of Spock-like “its pragmatic efficacy relies on your fat cells being willing to relinquish lipids before your body cannibalizes muscle tissue and otherwise starts doing serious damage to itself, [Captain]” but at the same time it’s just an argument for him not wanting to lift or eat less.
It’s reasonable for you to look at Eliezer’s profile as representative, since he’s one of the founding pillars of the community under discussion. But I note that a) that’s not particularly representative of the tone and content of his FB feed generally (it contains lots of silliness and jokes and a mix of the extremely serious and the strikingly odd that vastly outweigh this more Spock-esque example), and b) he’s not particularly representative of the rationality community as a whole anymore.
So it feels like a stretch to me? As someone rather deeply embedded? It’s like you’re saying “examples of the bad thing still exist!” while Scott’s trying to say “the bad thing is nowhere near as prevalent as you’d think if you assumed all the criticism was representative and proportional!”
There’s good and useful critique out there. But it’s being lost in the sound of all this straw shuffling around.
Eliezer has the same problem as Malcolm Gladwell, bright guys whose writing quirks are so distinctive that they wind up pissing people off.
Malcolm Gladwell’s problem is that a large number of longform journalists are eating their livers out in envy of his success. If he wore a Cosby sweater with “igonvalues” as a tessellated text pattern through the lobby of the New York Times, the Grey Lady would exhaust the resources of its dental plan paying out on bruxism complaints. Eliezer may have the problem that too few well-placed people envy him.
lol, uttering Gladwell and EY in the same sentence is an act of cosmic injustice. EY has the actual knowledge to back what he is saying (most of the time); Gladwell is just grasping in the dark.
Except regarding physics, AI, economics…
Aside from the people who swoon over him.
http://scarygoround.com/sgr/ar.php?date=20080521
“tophats and monocles, that’s what I like!”
Fortunately, he has about 50 IQ points on Gladwell, so his quirks are more forgiveable.
…and here I was going to say that comparing Yudkowsky to Gladwell was selling Gladwell short.
“Tell me how to solve this problem. Comments that mention the only ways to solve this problem will get deleted.”
It’s easy to say “diet and exercise”. What’s the best diet? Well every publisher who has had a windfall bringing out the latest “this will make you lose lbs and keep them off” diet book for the past thirty years is thanking Mammon that there isn’t one particular diet that works for everyone and can’t be improved upon. As for exercise, now the thinking is that it makes you fitter but it won’t shift weight of itself, not unless you’re doing the equivalent of training for triathlons. (Something I personally have noticed, as I’ve had to walk everywhere since I can’t drive and even though I get the miles up I don’t get the inches down).
I’ve been fat all my life. I’ve heard “diet and exercise” all my life. I’ve had a doctor recommend me the Rosemary Conley Hip And Thigh Diet back when that was The Smash Hit Diet of the Moment, I’ve recently had a consultant nephrologist recommend (sight unseen, not willing to see me for an appointment unless my kidney function degrades to a certain point, going only by information in my GP’s letter) that I go for bariatric surgery (which for various reasons I’m not thrilled about), I’ve heard about the high-fat low-carb diet, the low-fat high-protein diet, the Atkins diet, every diet that’s come down the pike.
Getting weight off is half of the struggle. Keeping it off is the other half and the harder one. Yo-yo dieting is definitely a thing. To repurpose the joke about stopping smoking “Losing weight is easy, I’ve done it hundreds of times!”
Less than the fat person is eating at the moment. Typically a lot less.
Yes. As with the previous one, the public health establishment has done people an enormous disservice with their messaging. Just a small amount of exercise makes you much healthier, they say. Well, yes; a little bit of exercise, a brisk walk, is probably far better than lying around in bed, but most people (even most fat people) already do that, doing a little bit more won’t help that much.
Same goes for food. Public health officials talk about “healthy eating”, which seems to translate in people’s minds to eating some leafy greens as well as all the saturated fat, junk food, etc. Doesn’t work, especially when you dump fat (“but it’s olive oil, it’s healthy!”) on top.
Yeah, because once the weight is off, you’re still hungry all the time.
Interventions requiring the equivalent of taking on a part time job and having the willpower of a saint are by definition not for everyone, and are probably not an effective medical recommendation.
Do you think our pre-farming ancestors had hunger pangs every single day they didn’t have three big meals? Wouldn’t that make it hard to hunt?
Can’t blame him. He’s tired of the same trite advice.
See eg http://amptoons.com/blog/?p=22049 . I think Eliezer is basically right about this one. Will be reviewing Guyenet’s book on the neuroscience of body weight soon and it should hopefully convince you.
I would be very interested in that review as well, since I basically agree with reasoned argumentation (the user, as well as the concept).
Thank you. That link is a perfect example of the kind of terrible reasoning I’m trying to point out.
The objections listed were:
1) NO ANECDOTES PLEASE
2) SIGNIFICANT AMOUNTS OF WEIGHT LOST
3) WEIGHT LOSS WHICH LASTED AT LEAST FIVE YEARS
4) MOST PARTICIPANTS DIDN’T DROP OUT
5) NOT A STUDY OF ONLY SUCCESSFUL DIETERS
6) PLEASE DON’T TELL ME ABOUT THERMODYNAMICS
Very science-y, right? But no, not really, because you’re not looking for a diet that can take 100 fat people and make them all thin – you’re looking for a method to make one particular fat person (you) not fat. Taking those objections in order:
1) [Anecdotes] Anecdotes are perfectly fine because they tell you that something is possible. Sure, there are likely circumstances that caused that person to succeed exceptionally well but the solution isn’t “dismiss the data point” – it’s “understand the circumstances and see if they apply to me”. When you go to weightlifting fora there are hundreds of “anecdotes” describing how lifting weights and consuming protein will reshape your body. As far as I know there are zero anecdotes about lifting not leading to gainz.
2) [Major weight loss] “Everything or nothing” is almost always an excuse to not try at all. Can’t damage that self image by trying and failing so better not to try at all since success isn’t guaranteed.
3) [Sustained weight loss] This one is a trap (combined with two later steps). Fat people are people who got fat in the first place. That some of them got thin for a while is interesting. That lots of those people got fat again isn’t that interesting – they were fat to start with – maybe people’s schedules changed at work, maybe they got stressed and turned to food for comfort, maybe one of a billion things that happen in people’s lives happened. The interesting part isn’t the failure rate, the interesting part is distinguishing the long-term failures from the long-term successes. Paying attention to the rates alone is pure “scientism” / cargo cult science – observing the forms of science (check the rates) without considering why you check the rates – you check the rates so you can design a program to get as many people in the success bucket as possible given your constraints. However, in this case you care about an n of 1, which is either going to be in the success bucket or the failure bucket (not strictly true of course – being more fit than you would have been otherwise is still a success, so there are shades of success and failure – but to simplify). Getting at the object level here: “restricting calories works for a time until hunger overwhelms willpower” is one reason for failure that might be unavoidable; “eating fewer carbs works until the person is tempted by the delicious carbs” is a totally different type of reason for failure. The solution to the latter failure mode is built right in – the former, not so much.
4) [Drop out rates] The given reason for looking at drop out rates is the assumption:
Well, maybe – but if the study is well done, then someone dropping out because they didn’t lose any weight shouldn’t be put in the “dropped out” bucket; they should be placed in the “failed to lose weight” bucket. Of course, social science is rarely done well, but then why does your objection to weight loss advice center around the lack of social science support? That really looks suspicious – a way to avoid taking action that the arguer is likely to find unpleasant, combined with a risk of self-image-damaging failure. It’s a fallback from “show me some ‘science’” to “well, the science is bad”. Start with “the science is bad” if that’s your position – but it’s not the position – it’s a rationalization.
For a steelmanned version of the objection – “lots of people dropped out of program X because it’s incompatible with human nature” – that’s actually a good objection. On the other hand, if 15% of people didn’t drop out, then maybe it’s possible for people to follow it (maybe it’s not, and only people who are one standard deviation from the norm on some measure can follow that program). Keep the context in mind though – maybe the person knows they’re not 1 stdv from the norm in trait x that allows success on that program – if that’s the case, then find another program where you are. The context is individual weight loss – not “design a solution for everyone for all time”.
5) [Don’t study only successes] Sure, good advice as far as it goes, but look at this in context with the other objections – don’t take anecdotes, don’t look at why some people drop out of studies, don’t look at programs that succeed for lots of people to a limited degree. “Don’t only study successes” is good advice – “don’t study how successes differ from failures” is terrible advice.
6) [Don’t talk about thermodynamics] That people start to talk to you about thermodynamics or POWs isn’t because they’re trying to convince you of the merits of the starvation diet – it’s because they’re reacting to you presenting arguments that imply that weight loss is impossible. It’s a reductio ad absurdum of an argument you’ve made – not a point they’re trying to make.
[Back to the object level] The solution is simple but hard – lift, eat protein, cut carbs. You can go a touch easy on the third part if you don’t mind being a bit doughy. Sure, most people who were the type to get fat in the first place are going to find this hard to do, but most people in that category find everything in life hard, because they’re below average in intelligence, motivation and willpower – which is exactly why* people want to lose weight – they’re sending signals they’d rather not be sending. What does “rationalist” Eliezer do about it? Does he do the gwern thing of trying every diet and exercise program and checking the results? No. He turns to social justice language to reclaim status using his high verbal IQ – rationalizing with a veneer of cargo cult science plus SJ.
* Part of the reason anyway – I’m sure they don’t find it pleasant to be winded after climbing a flight of stairs.
“[Drop out rates]”
This is a standard issue, with standard solutions in statistical analysis.
I agree with your larger conclusion, but I think your criticism of point (3) misses why Ampersand et al are looking for studies specifically showing sustained weight loss. One common criticism of diets is “yo-yo dieting”: someone follows the Special K Diet of eating just a bowl of cold cereal for lunch and dinner, loses thirty pounds after however many months of this, declares success and goes back to eating normally, and then promptly regains the thirty pounds plus maybe a little more. A well-designed study should continue tracking participants after the conclusion of the diet to catch undesirable outcomes like this.
The simple explanation for this, IMO, is that the person’s “normal eating habits” mean she eats too many calories to maintain her post-diet weight, so of course she gains some weight. A more complicated explanation, however, might be that the Special K Diet is unhealthy and the weight loss is inherently unsustainable. I think that’s not the case with a literal “cold cereal and skim milk 2x/day” diet, but it probably is with some; I don’t know enough biochemistry to be sure.
Point 3 has the surface appearance of a reasonable critique but without investigation as to why the weight loss wasn’t sustained it’s meaningless.
Special K + skim milk 2x per day diet? Unsustainable because you’ll literally die if that’s all you ate. Eating steamed fish and buttered vegetables only? Unsustainable because the people who tried that diet couldn’t resist the tasty doughnuts. Different category.
The “fails to show sustained weight loss” is inevitable (for some people) for every diet. The why is the interesting part.
Exactly: you need to investigate why it wasn’t sustained, and preferably demonstrate by example some way it could be sustained.
(On the tangent of Special K + skim milk, I was referencing the old weight loss campaign they ran: eat their cereal with skim milk for breakfast and lunch, and eat a medium-sized healthy dinner. Never had to try it myself, but I liked their cereal back then, so I saw it on the side of the boxes pretty often. And hey, if you get the “medium-sized healthy dinner” part right, it sounds like it’d work… but that’s a big “if.”)
Sweet God Almighty, what is this obsession with weight lifting? I suppose if you’re a guy who wants to look like someone stuck a bicycle pump up your backside and inflated you like a frog, it has some appeal, but this mantra of “lifting…lifting…lifting” annoys the ever-loving heck out of me.
And I suppose there are some women who like men with the bicycle pump look. I’ve never found weight lifters attractive, whether it’s the Olympic competition guys, the “crêpe skin competition” guys or the ordinary guys who spend time in the gyms with the machines and the seasonally adjusted routines and the whey protein powders and creatine and who, to be frank, look to me like beef cattle reared and conditioned for slaughter – the same beefy, soft musculature that doesn’t say “strength” to me but does say “inflated frog”.
Apologies to those who love their weights. I just don’t like the look or the cultus around it. Plainly, as someone who is “below average in intelligence, motivation and willpower”, I haven’t got the mental power to understand the virtues of the rule.
You need to train like a bodybuilder to look like a bodybuilder — which basically means treating the gym like a part-time job and controlling your diet to a degree that would make even the weirdest and most restrictive fad diets look half-assed and uncommitted. And even then, a lot of people need chemical help. It’s really not a great idea.
On the other hand, weight training is the best way to make yourself stronger, which has lots of benefits. Especially if you’re into athletic hobbies, but even if you’re not: in the context of losing weight, it’s important because it’s probably the best option for increasing your lean body mass, which translates directly into the calories you’re burning at rest.
It’s extremely difficult to look like someone stuck a bicycle pump up your backside and inflated you like a frog even if you lift. I lift 2-4x a week most of the year, mainly for strength and injury prevention, and I am not buff at all.
I think some people have an obsession with lifting when it comes to weight loss, because extra muscle mass tends to increase basal metabolic rate. From what I’ve read, though, the extra calorie consumption per unit muscle gained is small enough that other ways to lose weight are easier. Gaining muscle mass is hard.
My intuition is that lifting is good in general for overall fitness, injury prevention, and looking better, but when it comes to weight loss, caloric intake is by far the most important thing, and the effect of any exercise you do, whether it be lifting or cardio or other, is dominated by the calories. That’s mainly from my personal experience: I struggled with being overweight/obese for many years until I decided to just severely limit calories and managed to lose 60lbs in 9 months. I did run during that time, but I found that my rate of weight loss didn’t seem to be much affected by my how much or often I ran, and as long as I kept the calorie restrictions in place, the weight loss continued at about the same rate.
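For what it’s worth, those numbers roughly check out under the common (and only approximate) rule of thumb that a pound of body fat stores about 3,500 kcal:

```python
# Back-of-the-envelope check on 60 lb lost in 9 months.
lbs_lost = 60
kcal_per_lb = 3500   # rough rule of thumb, not a precise constant
days = 9 * 30        # ~270 days

avg_daily_deficit = lbs_lost * kcal_per_lb / days
print(round(avg_daily_deficit))  # ~778 kcal/day average deficit
```

That’s a large but plausible sustained deficit for someone deliberately eating well below maintenance.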
But obviously thermodynamics isn’t helpful for everyone, even if it was for me. Resisting the hunger can be very difficult, and I honestly have little idea why I was able to do it, because I never considered myself to have particularly strong will in that regard.
I lift because I want to add muscle mass so I will still be healthy in old age.
It’s really really hard to get that “bicycle-pump” look, and I probably started too late in life to achieve it any way.
@lvlln
Muscle is denser than fat, so people who do gain muscle may actually get heavier. AFAIK there is a genetic component to how easily you gain muscle, and you can change that by using steroids (don’t, btw).
IMO, weight is a bad goal anyway, fat percentage is much more important.
Nornagest and lvlln hit the nail on the head: it is very difficult to look like the guys you linked. Very few people want to or can achieve that. Dollars to donuts most, if not all, the men you would find attractive lift weights on a regular basis. Perhaps not. But “lifts weights regularly” covers a wide swathe of bodytypes; from hardgaining ectomorphic skinny nerd all the way to the aforelinked Lee Haney.
I’m sure these two things have nothing to do with one another, but I was just reminded that (13 years ago anyway) about 20% of the male US population lifted weights regularly, and something like 20% of men are rated by women as above average in looks (if okcupid data is anything to go by).
http://www.cdc.gov/nchs/fastats/exercise.htm
https://theblog.okcupid.com/your-looks-and-your-inbox-8715c0f1561e
>When you go to weightlifting fora there are hundreds of “anecdotes” describing how lifting weights and consuming protein will reshape your body. As far as I know there are zero anecdotes about lifting not leading to gainz.
Do you think people are gonna jump on bodybuilding.com to write posts celebrating that the plan didn’t work and they’re still fat and disappointed with themselves and the world?
But anyway, you’re missing the whole point here. The guy isn’t asking “how to lose weight” in general, or for you to swoop in and save him from his own ways; he’s asking for specific information that might be helpful, so your thesis on how to lose weight in general is off topic.
You can give the same advice for how to be good at anything: how do I get good at maths, business, tiddlywinks, being a stunt man, etc. It’s all the same basic process, but to what extent you’re willing to dedicate yourself to it and throw yourself into it, is determined by your current abilities and priorities. You’re basically taking it as a given that EY should be way more desperate to lose weight than he is, but that’s none of your business. The guy isn’t begging for help, he’s asking for specific information.
Also, anecdotally, I can eat as much as I want and exercise as little as I want and not get fat, and maintain some decent strength, as well as put it on pretty fast. Sure, maybe “how much I want” is less and more respectively, but if I recall correctly that’s the kind of thing EY was interested in: looking for a way to short-circuit the process and make it easy(er). So where’s the contradiction? EY wants to lose weight but hasn’t? You realise lots of people vaguely want things they haven’t yet made happen?
Anecdotes are Bayesian evidence, and Bayes trumps Science, right? So here is my anecdote:
I was fat most of my life. So is my nearest family. Diets didn’t work, except for one that made me lose a little weight temporarily, but I spent most of the day thinking obsessively about food, which was not sustainable. I hate all sports, and what’s the point anyway; if you exercise for an hour and then eat an apple, I heard you get all those calories back. I spent decades like this.
Then I had two options: either accept Eliezer’s reasoning, also supported by my own experience, or… try harder and smarter (ironically, inspired mostly by texts Eliezer wrote on other topics). I am lucky I tried the harder and smarter way, and had a supportive environment. These days, I still have some fat to lose — and I am planning to — but people who haven’t seen me for a while keep spontaneously complimenting me on a visible change.
I did three things, not sure about the exact degree each of them contributed to the outcome:
First, I had my blood checked. I had some iron deficiency, so I started taking pills. It made a lot of difference at the beginning; later it became less of a difference, and now I only take a pill once a month; maybe the problem is already mostly fixed. — To explain what iron deficiency can feel like from inside: you feel tired, despite not really doing anything hard. If this was your normal, and you take your first iron pill, you feel as if you were a superman, or as if gravity was lowered; suddenly it starts making sense why other people are full of energy.
Second, I started to eat a lot of unprocessed vegetables. Like, some days maybe 50% of what I eat is unprocessed vegetables, without exaggeration. The main challenge was to find a solution where I don’t have to keep buying and preparing those vegetables every day, but someone does it for me.
Third, I started doing strength training, really seriously. Aiming for every day, in practice more like every other day on average. The first important step was buying my own weights, so that I don’t have to go to a gym, because that would be a waste of time I couldn’t afford daily. The second step was a switch to exercising using my own weight (link). That means I can exercise different parts of my body intensely, without having to go anywhere, or having an exercising machine at home. And it takes me only one hour a day, any hour during day or night, and I can e.g. browse the web between the sets.
Other things I tried to do but failed: fixing my sleep cycle, which would probably give me even more energy. Not eating tons of chocolate. In both cases, my willpower was insufficient in the long term, and I didn’t find a smart way to do it sustainably. I mention this just to say that I achieved success even without doing everything correctly.
Probably an important factor was that I precommitted to “do the right thing” even if there were no results. Like some kind of exercise in virtue ethics. And it made sense, because for the first month there was probably no visible outcome. And one month is a lot of time to wait for feedback on something you are doing daily.
In hindsight, I see many things that I was doing wrong in the past. Probably the worst thing was that as a solution for losing fat, almost everyone recommended some variant of eating less, so I kept thinking about this class of solutions. Wrong! As Eliezer correctly says, eating less mostly makes you feel weak, and in extreme cases unable to think about things other than food. Such suffering may help you signal great virtue, which is probably why everyone keeps recommending this, but signals of virtue are not what you should be optimizing for.
Instead, strength exercise makes you feel strong, so if you are already inspired to become stronger, this is how you do it completely non-metaphorically. But you should optimize to make the exercise simple and safe, because we are trying to win, not to signal virtue. Exercising using your own body weight is in general safer; and cheaper; and you don’t have to go anywhere. And the key to eating more unprocessed vegetables is to add something tasty to them (try many things and find what works for you), and eat as much as you want. Again, not trying to signal virtue by starving yourself or eating something you dislike.
Also, psychologically… focusing on “becoming stronger” is positive, focusing on “losing fat” is negative; focusing on “trying a tasty veggie recipe” is positive, focusing on “eating less” is negative. It’s not enough to do the technically right thing; you also have to make your own mind support the process.
Anyone, feel free to do a peer-reviewed study on this. I told you all my secrets. (Well, except how to find a group of friends who will support you in the process. But if you make a study, the participants can support each other.)
As a sidenote, I may be imagining things, but it seems to me that people perceive strength exercise as something… politically incorrect. It’s like “right-wing people lift, left-wing people do cardio”, but of course it sounds stupid when you say it like this. I suspect it could be about signalling class: right-wing people don’t shy away from lower-class behavior, and lifting heavy things is what many poor people do for living.
I dunno man:
https://www.youtube.com/watch?v=-c8ZWA2sFm4
That body weight stuff looks pretty dangerous to me.
Lifting weights isn’t low class, it’s sexist.
Scott Adams of Dilbert fame has a bunch of tricks in his book for how to eat healthily without having to do a lot of meal preparation, tricking the mind and desires, etc. Worth a glance, imo.
I am fascinated by this and will save this post somewhere in case I ever get to the point where I want to try it myself.
I do however have one potentially too personal question: how much did you weigh, at what height? I am mostly asking because I have noticed that most people have very different understandings of when somebody is chubby vs fat.
I apologize if that is too personal.
(It’s perfectly okay; it was my decision to share the story here, and I feel pretty proud about my achievements. It would be hard not to, with everyone in real life giving me positive feedback.)
I am 180 cm tall; my weight was around 93 kg previously, now it’s 87 kg. But — and I believe this is a very important point — mere weight does not tell the full story, because one kilogram of fat weighs exactly as much as one kilogram of muscles. I did more than merely “lose 6 kg”.
Optimizing for lower weight could even be harmful, because you might lose muscle by starving yourself, or temporarily lose a kilogram or two by becoming dehydrated, and the metric would declare that a success, while your health was actually damaged. (I suspect many diets do exactly this.) Losing weight is a bad goal. A better mindset is that you try to become more healthy (and increase your expected lifespan), and also stronger and more attractive (maybe less important, but hey, these things correlate); and losing some weight comes merely as a side effect.
Before writing this comment I actually had to measure my weight, because I stopped watching it on purpose. As long as I gain muscles and lose fat, I don’t really care about the total weight. (It’s like adding two numbers that correctly should be subtracted.)
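To illustrate with invented numbers why the scale can mislead here:

```python
# Weight change conflates fat lost with muscle gained.
# The muscle figure is purely hypothetical.
weight_change = 87 - 93  # -6 kg, from the figures above
muscle_gained = 3        # kg, invented for illustration

fat_lost = muscle_gained - weight_change
print(fat_lost)          # 9 kg of fat lost, despite "only" 6 kg off the scale
```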
I don’t care about the “chubby” vs “fat” distinction. I am not saying I was the fattest person ever, just that my body was sometimes a source of inconvenience to me, and it was gradually getting worse: I got easily tired, had more difficulty manipulating things, was perceived as less attractive than now. And also, I don’t have a proof for this, but I probably had a greater risk of some health problem happening (although, luckily, nothing happened). There is still a lot of space to improve, but that’s what I’m planning to do, and based on recent developments, I feel quite optimistic about it.
(And when, maybe two years later, I become a walking mountain of muscles, I expect many people to say: “Yeah, that was pretty easy for him; some people are just lucky to be born with a perfect metabolism.” — By which I am not suggesting that genes play no role; just that their role is probably exaggerated in most cases. Maybe some people achieve the same outcome with half or a tenth of the effort, but I still regret not having the knowledge I have now ten or twenty years ago.)
Thanks a lot for sharing.
Can confirm this.
When I was fat people told me I was a lazy cunt.
When I got into seriously good shape people said I was lucky to be so naturally fit.
It’s all sour grapes.
>As a sidenote, I may be imagining things, but it seems to me that people perceive strength exercise as something… politically incorrect. It’s like “right-wing people lift, left-wing people do cardio”, but of course it sounds stupid when you say it like this.
Oh, I’m the common clay, so I have no bias about right-wing or low-class (indeed, I think my bias is the other way: people who go to gyms/have equipment to exercise are more liberal or slightly higher in class). I think mainly my kneejerk grumpiness at being told “lift! lift! lift!” is that (a) I have treetrunk calf muscles from walking and cycling everywhere all my life. This means that, for example, when I was in jobs that involved wearing wellingtons I had to wear the men’s boots because my feet are too big. Muscle mass there didn’t and doesn’t mean fat came off the hips and stomach and bosom. Same with all the lifting and hefting I did; strong enough in the arms when younger, but not getting me svelter by any means. (b) The people I grew up amongst who did hard physical labour were blocky and stocky and strong, so I don’t have the association “muscles mean strength, fat means weak and lazy”; I have the association “muscles mean copious spare time to work on getting muscle, and not real working-strength muscle”.
@Viliam
Pick up artists tend to really like lifting.
There must be some kabbalistic connection between “picking up” and “lifting”.
Scott, I agree with you on a million things, but this isn’t one of them.
So, what does Eliezer’s personal trainer recommend for his weight loss? Because if that guy is throwing up his hands and saying “Well shit! We’ve tried everything and nothing works!” then I’m much more inclined to believe this is anything but straight-up Ignatius Reilly levels of rationalization. If there’s no personal trainer, this feels like a central example of “did not do the due diligence before complaining”.
I’m trying to make this next bit as snark free as possible, and to phrase it delicately. But has he considered approaching HungerHacking the same way that he does polyhacking or orientation hacking? The latter two seem like much, much more difficult tasks to accomplish than maintaining dietary discipline in the face of low level hunger from operating at a small caloric deficit.
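For scale, a back-of-envelope sketch of what that deficit implies (assuming the widely quoted figure of roughly 7700 kcal per kilogram of body fat, which is only a rule of thumb, since real metabolisms adapt):

```python
# Back-of-envelope: how long a small caloric deficit takes to shift weight.
# Assumes ~7700 kcal per kg of body fat -- a common rule of thumb, not exact.

KCAL_PER_KG_FAT = 7700

def weeks_to_lose(kg: float, daily_deficit_kcal: float) -> float:
    """Weeks of a constant daily deficit needed to lose `kg` of fat."""
    return kg * KCAL_PER_KG_FAT / (daily_deficit_kcal * 7)

print(f"{weeks_to_lose(6, 250):.0f} weeks at 250 kcal/day")  # ~26 weeks
print(f"{weeks_to_lose(6, 500):.0f} weeks at 500 kcal/day")  # ~13 weeks
```

Even a “small” deficit is a months-long project, which is exactly why the discipline question matters.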
Failing at something so straightforward and commonplace (though admittedly not easy, which shouldn’t be a problem for someone in the practice of “systematized winning”) really injures my faith in those of his abilities that I’m less able to measure. And that definitely generalizes onto the movement that still seems to believe in him.
JayMan has some convincing (and startling) stuff which supports Eliezer’s use of the term “metabolic disprivilege”:
https://jaymans.wordpress.com/obesity-facts/
https://jaymans.wordpress.com/2013/08/18/even-george-w-bush-has-heart-disease/
More at the links, of course. With links to relevant material etc.
I’m glad Scott brought up amptoons because this issue of “metabolic disprivilege” is something that fat-acceptance activists have been talking about for years. It’s easy to make fun of them–I myself have done so in the past–but maybe they have a point?
The diet studies counted were only those with a 2-year follow-up period, with the diets themselves lasting “a few months to 1 year”. Of course they gained the weight back. There can be no end date to a diet if you want to keep the weight off. This isn’t “metabolic disprivilege”.
While this kind of response is correct in the details, it concedes too much. The critics say “Rationalists say they’re so good, but they aren’t!” and some of this is along the lines of “We don’t think we’re that good”, which is weak. For example, while it’s true that rationalists aren’t perfectly rational and generally don’t claim otherwise, let’s not fall to the vice of humility: they’re significantly more rational than is typical, even among similar demographics. If anything, there’s an excess of self-doubt and self-criticism, and the founding willingness to be contrarian has sadly faded.
I find Will Wilkinson’s critique quite irritating…
“Bayes Law is much less important than understanding what would ever get somebody to ever care about applying Bayes Law”
“I see no interest among rationality folks in cultivating and shaping the arational motives behind rational cognition”
“Good things, like reason, come from co-opting and re-shaping base motives”
I see no interest among non-rationality folks in cultivating and shaping the arational motives behind irrational cognition, and Bayes Law is much less important than the base motives which lie behind Will Wilkinson’s whole complicated screed about rationalists. The problem is quite basic: the behaviours of the rationalists scream out ‘low status’ and ‘nonexistent levels of social panache’ to everyone who is watching. But if that is what you really feel, then just be straightforward and taunt people already, instead of couching it in all this moralistic rhetoric.
On the other hand, I really don’t think most people here behave the same online and offline: there is a special persona associated with communication here that may not match up with real social life. Maybe not for the most committed, or public-facing, members, but certainly for the huge halo of observers and incidental participants. I especially appreciate the role rationalists have played in getting us closer to the truth and disseminating information on various topics pseudonymously, ‘behind closed doors’, so to speak, and I suspect many other people (some quite famous) also appreciate this service that the community renders. But because of the whole ‘low status’ thing, no one will be caught defending the community publicly; all the incentives point in the other direction.
I don’t dispute that there are people who like to bully the low-status, and that a large part of the mocking of rationalists stems from that desire, but things can’t all be that bad for us. Between all the polyamory, and the high IQ silicon valley types endorsing rationalist stuff, and highly visible people like sinesalvatorem and theunitofcaring on tumblr, I think the rationalist community is doing pretty well in terms of status.
No matter how high status a subculture is, there’ll always be naysayers who care nothing about that culture’s standards of status. The example I have in mind is left-wing or liberal university students who are fairly well-off, on top of the latest social justice happenings, and have some kind of journalism job lined up–perfectly respectable within the blue culture they’re embedded in, but a lot of people on the right will call them nu-males and SJWs no matter if the students are undergraduates or postdocs.
“Bayes Law is much less important than understanding what would ever get somebody to ever care about applying Bayes Law”
I thought this criticism was spot on. I certainly don’t see it as a taunt about low-status behaviour. At work, I sometimes write tools to automate tasks, tools that are extremely useful to me; and since they were so useful to me, I thought I’d also spread them around to my co-workers in the hope that they would help them as well.
It turns out that there is a huge gap between making and refining a tool to be useful, and getting people to care enough to incorporate it into their workflow. Any disruption to an existing workflow necessarily means an initial slowdown in efficiency while the new tool or method beds in. If no sufficiently compelling case is made, and if the tool is not packaged in a way that is easy for people to pick up, the tool may as well not exist.
In that sense, Wilkinson’s argument is spot on. Presumably a lot of effort is spent on making, refining and perfecting these mental tools. But tools are meant to be used by people, and if people are not interested in using the tools, then their existence is much diminished. Therefore, absolutely, understanding that there is a need to get someone interested in using Bayes Law is more important than Bayes Law itself.
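(For the record, the tool itself is almost trivially small, which only sharpens the point. A purely illustrative sketch of what “applying Bayes Law” looks like, using the textbook base-rate screening example with hypothetical numbers:)

```python
# Illustrative only: the textbook base-rate example of applying Bayes Law.
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)

p_disease = 0.01        # hypothetical 1% base rate
sensitivity = 0.90      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

p_positive = sensitivity * p_disease + false_positive * (1 - p_disease)
posterior = sensitivity * p_disease / p_positive
print(f"P(disease | positive test) = {posterior:.2f}")  # ~0.15, not 0.90
```

The arithmetic takes ten lines; getting anyone to reach for it when it matters is the hard part, which is Wilkinson’s point.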
Economists having very simplified views of the world is basically a meme amongst academics at this point. My professor in economic geography responded to a quote he brought up, giving a definition of economic geography, by saying “well, he is an economist…”. There are also stereotypes about a divide between economic and social geographers, but this hasn’t really been my experience with the few I’ve seen so far. Not that I necessarily agree with the direction geography is moving in, which seems to be about the same direction economics is taking.
I keep seeing examples of homo economicus; there’s some truth to it.
The standard response to examples where people supposedly aren’t acting as a homo economicus is “well, they were obviously optimizing for something else”. See that study about poor people making actually good choices by going to check cashing stores.
Caplan’s criticisms seem hardest to deny.
Peak LW was annoying and wrong about a lot of things, but it was also phenomenally productive, focused and challenging. The community coalesced in that era because it was genuinely changing minds and being provocative in the best way. And then it ran out of steam. Because sure, people are aware of the criticisms, but they haven’t really answered them so much as accepted them and retreated from their most advanced positions. The rationality wave broke, and on a clear day, with the right sort of eyes, you can still see the high water mark.
So now it’s a fractured diaspora, linked and governed mostly by aesthetics.
The level of thinking here, the genuine attempts at truth-seeking, is extremely high. But it would be much higher if we could get past the aesthetics.
Wanted to register my appreciation of that Hunter S. Thompson paraphrase.
It was a little weird to see Caplan basically dismiss utilitarianism by way of a link promising “many well-known, devastating counter-examples”, which led to a… study guide? homework page? where those examples are immediately followed by some reasonably compelling utilitarian rebuttals.
Like, maybe you’re not ultimately moved by those counter-counter-arguments, but are they so obviously, laughably weak that this link serves as the knockout punch Caplan clearly intended? Does anyone think that Mill’s Utilitarianism is accurately described as a “hasty, dogmatic reject[ion]”?
Was he dismissing utilitarianism as false, or pointing out that it isn’t a done deal?
I guess that particular post isn’t really an outright dismissal, but Caplan’s written before that he’s a sort of deontologist.
He’s right that utilitarianism is subject to many well-known, devastating counter-examples, but it was a weird link for him to choose. My guess is he googled “utilitarianism criticisms” and linked to the first thing he found without reading it carefully.
I thought so, too, but it looks like he posted that same link 8 years back, and with near-identical wording. I guess he has it bookmarked under “utilitarianism, devastating counter-examples of”.
(The link in the older post is broken, but clearly points to a previous version of the same page.)
Can you source this?
I’m not being snarky, I would honestly love to see it. I understand utilitarianism as generating a lot of horrifying conclusions, but a lot of utilitarians meet those with “yeah, and are we wrong?”
Saying that utilitarianism allows utility monsters is interesting, but not really a rebuttal. (“Yeah, and? Most people live like this in practice, people who are less stoic get more resources.”) The repugnant conclusion has been both accepted and denied by various people, I find it challenging but far from devastating. (It’s better with the companion problem attacking average utilitarianism, but still not ironclad.) Failing Pascal’s Wager is substantially harder, but shared by a lot of other decent-looking ways of making decisions.
…what’s the well-known, devastating stuff? I’m honestly not thrilled to be a utilitarian, but I am one and I’ve never seen something knock it down all that well.
If the defense of current rationalism is to distance itself from circa 2008 LW High Yudkowskianism, then it’s very unclear what rationalism is now. Since the fall of LW, rationalism is so fragmented that the Yudkowskian roots are all that holds it together.
I have been meaning for years to write down my beef with rationalism. There is so much I am drawn to in rationalism, but there are fundamental flaws. The big one is the far right politics. Much of it is so obviously wrong and horrible, that a philosophical system that fails to filter that out cannot claim to be the path to enlightenment.
Let me try to be more positive. Let me say what I _like_ about rationalism:
(a) Certain customs, specifically the steelman, and the taboo. These are excellent argumentation tools. More generally, trying to argue in good faith is a great part of the culture. Raising the sanity waterline is a great project for any community.
(b) I think a big chunk of the community is well-meaning and “good people.” This is important: regardless of which ideas are floating around, they require human heads to live in, and the type of human you attract matters a great deal.
—
There are lots of bad things, but in my view they can all be traced to the fact that rationalism is _also_ a global social club for folks who might otherwise have difficulty having such a club. Having a social club is very valuable for humans because we are social, and we need that sort of thing in our lives! But having a social club also means you are in thrall to social club dynamics, like the founder effect, like peer pressure, tribalism, etc. My “sympathetic outsider” advice for rationalists has always been to treat it more like a job and less like a social club (sort of like what academics do).
—
I don’t think rationalists have far right politics. I think you might be thinking of our edgy friends (a very few of whom split off from LW in the early days, but they are not formally “in full communion” with rationalists, per EY’s ruling a while back).
One of the things I like about the rationalist community is the respect given for admitting mistakes and changing one’s mind in response to new information.
Absolutely — but some practice this more than others (because it’s so hard…)
100%. I’d rather argue with an idealized opponent, just like I’d rather fight an opponent who’s up to the standards expected by the community (not elaborating on that; I have shameful gaming hobbies that can be used to track me). Because I know I’ll have to one day, and it teaches me about that argument or playstyle.
Also as to far right politics: if a group of people calling themselves rational and seeking rationality end up on a certain political sphere…well, that’s not in and of itself vindication of any type of politics, because those people can still be wrong, whether because irrational or because “rational” isn’t the correct measuring tool. But you could at least consider that you might be wrong – either about the sphere or its justification.
(This also leads me to another thing I like about the rationalist movement: the distinction between what is likely and what is actually true, is well understood.)
Not unique: this is known as charitable interpretation and unpacking in mainstream philosophy.
Yes, there is not a lot novel in the rationalist circles. But that’s ok! Old good ideas are also good to use.
I agree. There are some really good tools and practices in rationalism. That’s why I keep reading all this stuff. The faults are mostly overconfidence and regular human flaws, that rationalism fails to counteract.
You are right that most rationalists are not far right. Every survey shows very few are. I am just disappointed that mainstream rationalism fails to counter those elements. The little I see is just standard lefty arguments, not making much use of rationalist tools. If you want to raise the sanity waterline and end up with a bunch of insanity, something is wrong.
Well, if the project didn’t work on young adults, perhaps go full Jesuit, and get em while they are young?
Various people, including Scott, have said here that LessWrong got a lot of things wrong in its early days. None of them have said what. A concrete set of examples would help us to actually be talking about something.
Internet rationalists share this trait with a lot of previous intellectual movements: they are vastly more effective at criticizing others than at understanding themselves. If you asked me to rank my perception of the level of self-knowledge of the median members of various internet constituencies, I would feel compelled to place rationalists near the very bottom. I find the project quite useful for thinking through certain kinds of bad reasoning; I find the converts almost impossible to talk to.
Also, you guys, for fuck’s sake, the Singularity is not science, and that “Demon” whatever-the-fuck Yudkowsky is always talking about is like something a schizophrenic would come up with. It is so hard to take other parts of your project seriously when you develop these utterly fanciful imaginary constructs and then talk about them as though you have actual tangible proof of their reality.
Speaking as a non-rationalist, I never knew anything about Less Wrong, and had my exposure to rationalism (or Rationalism) been via Eliezer Yudkowsky, I would probably be very much of your opinion.
Whatever about the Singularity, I am not going to knock people’s personal religious/spiritual beliefs (and anyway, we all have mildly embarrassing enthusiasms we went overboard about in our younger days, about which we may hold more mature and considered opinions a few years down the line). But what I find here is a group of people interested in all kinds of things, where there is no One True Path To Rationalism, and where, if somebody wants to have a discussion about the Singularity or AI risk, it can happen with both the pro and the con side getting to put their points and generally nobody going off in a huff.
I don’t know if most of the people on here are capital R Rationalists but many do try to be rational in their approach to understanding “why do I think this? why do I believe this? am I being honest about the reasons or am I just rationalising a bias or preference? what is the best way to make decisions?”
And the big one, whether you approach it as an ethical or philosophical or religious question: what is the way to live a meaningful life?
Besides, where else are you going to get godawful pun streaks, discussions about battleships, pop and high culture references, and the chance to get exposed to a lot of different viewpoints outside one’s own bubble in a handy one-stop site? 🙂
Hear, hear.
since socialists advocate for what is more or less a false god and can’t admit it to themselves because that would mean that there is no god, are you even further down the list?
bonus question: didn’t your blog say you wouldn’t be talking about politics, and then start ranting about Trump and charter schools and the Mercers?
Isn’t there some rule on this blog about not being an asshole?
By “ranting” you mean making data-driven policy arguments about education, a topic on which I am very well qualified, which I specifically said was going to be part of a blog on education.
Do you already have the URL for the blog? I’m very interested in reading it once it starts and would hate to miss it.
found it through google https://medium.com/@freddiedeboer
fredrikdeboer.com/anova
Thanks to both of you.
Guess I shouldn’t be surprised that you broke a promise to stop talking about politics, to be honest.
To the thrust of the article itself, and this is what bothered me: progressive ends are rarely achieved by progressive means, which is the problem you run into. An easy example, to harp on a topic you already expressed no interest in discussing, is capitalism vs. socialism: capitalism having lifted huge portions of people out of poverty, and socialism having returned some of them to it.
Now I’m not so sure that charter schools work, as such. But if they do, then maybe you should just let conservatives win and let everyone benefit, instead of complaining that conservatives like it.
Internet rationalists share this trait with a lot of previous intellectual movements: they are vastly more effective at criticizing others than at understanding themselves.
Intellectual movements tend to be the opposite: they tend to reject absolutist thinking and ‘weak man’ arguments, and are quite hard on each other (there is perhaps a tendency for Rationalists to be too charitable to opposing views, although the EY ‘worship’ may be an exception to this). You see this nitpicking on the intellectual far-right too.
@Freddie deBoer,
they are vastly more effective at criticizing others than at understanding themselves
This kind of statement becomes self-defeating really fast. How is your comment not falling into this very trap as you say it?
You are inferring “Freddie cannot understand himself” from “Freddie did not self-flagellate during that one comment”?
I mean, I’m not saying this is the case, but couldn’t both be true?
As someone who is only toe-deep in the internet, would you be willing to elaborate on this point? I am unaware of any other internet constituencies, but I find the level of self-knowledge and personal understanding here on SSC a welcome reprieve from my day-to-day interactions; if there are better places I could go, I would be interested.
that “Demon” whatever-the-fuck Yudkowsky is always talking about is like something a schizophrenic would come up with
Maxwell’s Demon? A 19th century thought-experiment by a Scottish physicist, not generally regarded as being a nutter (I learned about it in secondary school science classes). If there is some other demon Yudkowsky talks about, I don’t know enough about LessWrong to recognise it (granted, he does take concepts and run with them or create his own riffs on them, so one of those may be what you mean).
I figured he was referring to Moloch.
I thought the Basilisk, or maybe the God-AI.
Man, now you’re just handing ammo to your enemies. Just imagine the 10000+ times reblogged article:
“Scott Alexander says, and I quote, ‘Economists think that they can figure out everything by sitting in their armchairs and coming up with ‘models’ based on ideas like ‘the only motivation is greed’ […]All they ever do is talk about how capitalism is perfect and government regulation never works, then act shocked when the real world doesn’t conform to their theories.'”
” They don’t pooh-pooh academia and domain expertise”
Really, Scott? I can find three examples of prominent folks in the community doing just that, starting with the “diseased discipline” on down. By your lights, are those just youthful indiscretions?
The rationalist main man Eliezer is _explicitly allergic_ to reading and writing academic papers.
—
I think the difference here is, you don’t do this. And your idealized headcanon of the community is the same. But I don’t think the community really lives up to this standard. In fact, while I find quite a few things to like about rationalists, this specific issue is one I always thought the community had, and one that always annoyed me.
I’m not sure if I’m a member of the rationalist community. I only learned about this site about a year ago, and before that, I had never even heard of the rationalist community or Less Wrong.
For what it’s worth, I think this site is spectacular. It’s as smart as it gets. Far more intelligent and consistently insightful than MargRev, and certainly better than Noah Smith or Will Wilkinson. I’ve never read a blog that made me think so many times, “Damn, I wish I had written that.”
No community is a monolith. To the extent that I have a criticism of the Less Wrong community, it’s that it doesn’t always live up to the values espoused in wonderful pieces such as this one. Of course, about what community could you not make the same criticism?
Reading Tyler Cowen’s post about this community makes me think that he should be lowered in status, particularly in comparison to this site. It was just such a ham-handed, uncharitable, blunderbuss criticism. Part of me thinks it was just an attempt to poke a bear at the circus and have the spectacle centered on him.
Either way, when all this nonsense has passed, I think that you will end up looking better at the end of it.
He’s attacking my tribe, lower his status!
Thank you for reminding me why I don’t usually comment.
After lurking for 2 years, I created an account just to thank you for your comment. You said exactly what I’ve been thinking: “Far more intelligent and consistently insightful than MargRev, and certainly better than Noah Smith or Will Wilkinson”.
I don’t know about the “rationalist” community (have never been grabbed by many older linkbacks to LW and Overcoming Bias), but SSC and its commenting community has virtually no peer on the internet, imho.
It’s not a “tribal” thing, it’s an *actual quality* thing.
I loled hard at this
Look. I’m the last person who’s going to deny that the road we’re on is littered with the skulls of the people who tried to do this before us. But we’ve noticed the skulls. We’ve looked at the creepy skull pyramids and thought “huh, better try to do the opposite of what those guys did”. Just as the best doctors are humbled by the history of murderous blood-letting,
Hmm… unfortunately, you inadvertently made the mistake of associating Rationality with murderous dictators (unless someone else specifically made this critique). Bad choice of title and example imho, unless this is an example of Poe’s law?
The example of doctors and medicine is a good one. In the past, medicine was iatrogenic, but great advances have since been made in treating disease and prolonging life, which is an example of science succeeding.
It’s a pretty standard critique, and Scott specifically mentions Marx and the Soviets (you could also add the Chinese communists, the French Revolution, etc). If you want to rule such nasty examples out of the rationalist tribe, you’re probably guilty of the no true Scotsman fallacy.
Here’s my complaint about rationalism, which is I admit somewhat specific.
In HPMOR, Harry lectures everyone about rationality a lot, but ultimately, he solves his problems by being smarter, more creative and better educated than his opponents. If he uses the rational principles that he introduces, I don’t see it. Maybe rationality helped him to get so smart, but maybe he’s just really smart.
One of Harry’s students in rationality does in fact adopt EY’s philosophy whole hog in a way that changes this character’s life dramatically. Rather than show us how rationality leads to positive results, this character then mostly disappears from the story.
——-
My overall opinion is that rationality attracts unusually interesting and smart people, which is its primary virtue. Its secondary virtue is that the community has some values and tools that tend to lead towards effectively discussing and hopefully solving problems, although at the cost of hundreds of thousands of words.
wasn’t one of his biggest-used abilities to transfigure down to the atomic level?
Sure, partly that’s just being better educated (he knows about atoms!), but it takes him a while to actually do it, because he tries to rationally analyse magic and so forth.
I don’t know that Harry’s analysis of magic is particularly Yudkowskian – it’s pretty much stuff that Sir Francis Bacon would recognize, if Bacon were well read on atomic structure and had very little regard for his own life.
Are you talking about Draco? It doesn’t seem to me that he adopts Yudkowskian Rationality any more than Harry himself, and I don’t see much evidence that he goes much farther than even Hermione.
My recollection is that the last time we see Draco before the climax, Draco has become some kind of rationalist answer to Sherlock Holmes, who applies Rationality to strip through mysteries like, well, Sherlock Holmes, but that he drops out of the plot immediately after that until the denouement, so we never get to see how that works.
Yeah HPMOR is geniusfic for sure.
Like the Ender’s Game series, except Harry’s domain is broader than ‘violence and tactics’ or ‘escalation and manipulation’, etc., like the various characters have there.
Which btw makes sense in story because, nvm spoilers
Huh, did we go to the same university? That sounds exactly like my old monetary economics teacher, who among other things claimed that buying lottery tickets is a perfectly rational act if you just draw participants’ utility curves in such a way that they are effectively risk maximisers.
In fact I think that there are many fields where leading academics are spouting things which are *blatantly crazy* in a way that’s obvious to anyone with a smidgen of common sense, but which goes totally unacknowledged by those within the field. The last time I went to a meetup of philosophers I jokingly asked how many pages of their PhD they spent on defining the concept of ‘truth’; the answer, without a trace of irony, was “three”. The latest I heard from social studies was that ‘race does not exist’ and there’s ‘no correlation between IQ and crime’, and there was a whole room full of IQ 140+ people all nodding along like this was a totally reasonable thing to say.
I fully approve of the rationalist project, but a bias that you all have is that you tend to make things too complicated, too meta, too shy of obvious solutions. I agree that you’ve gotten much better at this, but I still remember how everyone insisted on using the “principle of charity” to reconstruct totally indefensible views as completely different arguments. I remember when Yudkowski proudly declared that he voted libertarian during the W. Bush election instead of Democrat because he “didn’t want blood on his hands” (because democrats are against free markets, I guess?) I remember his ‘politics is the mind-killer’ post being used to argue that only ‘rational’ arguments like free-market economics should be discussed, and not anything as ‘political’ as global warming – which culminated in Robbin Hanson claiming that noise externalities are not a problem because people can just individually work out contracts with the noise-makers and pay them money to stop making noise.
I remember that even after describing how well-kept gardens die by pacifism, no mods were appointed (because an upvoting system is like free markets!) and the community was allowed to be overrun by schoolyard bullies who openly advocated for block-downvoting those with “undesirable political views” (i.e. anyone to the left of Hitler). And then, when everyone with a grain of common sense noted that “golly gee gosh, there seem to be some strange people on that there forum”, the community replied without a trace of irony that everyone only thinks that Less Wrongers are asocial weirdos because clearly the critics must have all watched Spock on Star Trek and that’s the real problem.
I can easily imagine a feminist Scott Alexander having written the following post instead:
Listen. I don’t think that the people criticizing the Rationalist community are criticizing us for not being rational enough, any more than we criticize feminists for not being feminist enough. I think these people took one look at us, saw all the junk I described above, and immediately lumped us in the same category as Ayn Rand followers and Silicon Valley in general: i.e. those strange nerdy people who are constantly inventing weird reasons why it’s definitely okay to torture people in some cases.
But that’s a PR problem second, and a genuine problem with the movement first.
But the Rationality community isn’t really a ‘movement’… it’s not trying to win a popularity contest, where things like PR matter. Rationalism should be kinda esoteric and ‘nerdy’; otherwise, it risks becoming like any other ‘boring’ political forum where the same predictable stuff is repeated over and over.
Less Wrong was pretty explicitly founded on the idea that rationalists should “win”, and the original motivation for creating it was not only effective altruism in general but specifically the idea that creating a base of rationalists would create a greater recruiting pool for the Singularity Institute. So I would say that it was certainly intended as a movement, even if some members prefer not to take part in that aspect of it.
Edit: Wow, how did I manage to mispel the names of both EY and Robin Hanson in a single post? I am impressed with myself.
My impression as an outsider who came late to the party and never hung around LessWrong is that a particular group coalesced around a particular person who was aiming for a particular purpose, and once he got that or near enough to it, he peeled off and followed what was his primary interest and goal.
And that’s fine, because everyone is perfectly entitled to say “Okay, I’ve had enough of this game, I’m leaving, have fun guys!”.
But Rationalism/rationalism having produced all the other blogs and groups and people going forth and spreading the message and no longer being tightly tied to “this one site and this one group” is a good thing, because it means the idea/movement/cult/philosophy (take it as you will) is alive and healthy and thriving. It’s spreading, even if that means changing in ways that were not considered or if considered were not thought optimum, because growth is change. The very fact that you’re getting outsider criticism is because outsiders are becoming aware of your existence. This is a hopeful sign!
It’s exactly what is not happening with Effective Altruism (again, a view as an outsider who came late to the party). Looking at the last couple of conferences organised, to me there seems an unhealthy emphasis on networking, on “if you’re interested in getting into the field, come along and meet possible employers” and a turning-inwards in speaking to the self-selected little group(s) who are becoming incestuously clannish. I know that sounds very harsh, but I don’t see EA as growing, changing, getting into the mainstream, becoming noticed, and spreading in the same way. (And Peter Singer as a guru never made me warm to the movement anyway).
Hmm, I have the exact opposite impression. In terms of books published, physical groups of affiliated people, media exposure, and endorsement by famous people I think Effective Altruism is much more successful. I could easily see it going mainstream in the next few decades, but I don’t think internet rationalism has much more room to grow.
Honestly, I didn’t think the critics were particularly inaccurate or uncharitable. Noah Smith was mostly just talking about how the people attracted by the rationality movement are sometimes… odd or rude (and hardly in a way out of the ordinary for the internet). And Tyler Cowen… well Tyler Cowen’s writing is kind of silly. You can’t take him too literally so to speak.
But even without knowing about the stuff you wrote, my impression of Less Wrong and Eliezer was definitely not great: a little bit kooky and off for sure, and mostly just much more arrogant than Scott or the people here are (and it’s hardly like ego is lacking here). On the other hand, this place feels pretty great! Even the people who irritate me sometimes actually seem pretty genuine, and relatively less interested in just verbally stomping opponents than most places on the internet. I also get the impression the groupthink level is relatively weak here. I would ballpark that conventional plus far left-wing posts are outnumbered roughly 2 to 1, which is a small miracle (even considering Scott’s moderation policy) when most places that ever discuss politics rapidly self-segregate to ratios more like 10 or 100 to 1. And the right and libertarian wings here cover a really weird and broad portion of the spectrum.
I can vouch that some are, because I’m one of them. There is no reason for all you critics to be on the same page… political movements may have critics towards both the right and the left; and there are people who think that Dennett isn’t reductionist enough…
I’ll second this. The Rationalist community certainly talks a lot about how rational it is, but when it comes to demonstrating that in their writing and behavior…
…they suck lol!
Thanks for your sophisticated and refined contribution.
And for signposting it to us by wearily trailing off while perhaps fanning yourself and discussing the finer points of Sartre or Foucault. We get it, you took English lit; you’re wise.
>The last time I went to a meetup of philosophers I jokingly asked how many pages of their PhD they spent on defining the concept of ‘truth’; the answer, without a trace of irony, was “three”
What’s the problem with that?
Actually hearing that increases my confidence in the field. Isn’t the whole point of the ‘field’ to question intuitions and try to ground things in the most fundamental way? Thought they’d moved away from that.
Fixed this for you, Scott: “If any moron on a street corner could correctly point out the errors being made by bigshot PhDs, why would the PhDs never consider changing?”

So you claim that this criticism is unfair.
But what about a certain rationalist guru with no academic credentials or demonstrated domain expertise (but allegedly a very high SAT score) who claims to have found the solution to the problem of the interpretation of quantum mechanics, a problem that eluded physicists such as Einstein, Bohr, etc. for decades, and who further claims that the solution was obvious and anybody who does not agree with it “does not have enough g-factor”? What about when he claimed, again with no demonstrated expertise, to be better than professional VCs at predicting which startups would succeed? And don’t get me started on his claims about cryonics…
Do you think that the quoted criticism is unfair in this case? Do you think it was fair in the past but does not apply anymore because mistakes were made, but now the Rationalist Movement™ has recognized them and moved on?
The criticisms are not fair of most rationalists, but they are fair of one very prominent one. Controlling who your leaders and spokespeople are is part of controlling the message.
> claims to have found the solution to the problem of the interpretation of quantum mechanics
No, he didn’t. He claims, very plausibly, to have read the solution, its already having been worked out by actual physicists, over a period of decades. Einstein, Bohr, etc were dead well before this work was done.
He also said that the solution was obvious in retrospect only, not prospectively.
And the g-factor post you linked… note that the community downvoted that REALLY HARD. He overstepped, and was called on it. And that happened right away, so it wasn’t something we’d look back on and say we learned. So as TheAncientGreek said, this evidence doesn’t generalize.
18 upvotes and 23 downvotes counts as REALLY HARD downvoting relative to EY’s usual, but it’s not REALLY HARD in absolute terms — not like some of the dogpiles I’ve seen, anyway. (Point is, 18 upvotes isn’t exactly universal opprobrium.)
But MWI has not been worked out in the maths sense: the derivation of the Born rule is still an unsolved problem. I think you are missing that the I in MWI stands for interpretation, and interpretation means a conceptual understanding of existing maths. Also, neither Einstein nor Bohr had anything in particular to do with MWI.
Technically, Bohr was still alive and active when Everett published the first version of the MWI, though quantum decoherence was introduced after his death. Anyway…
And many physicists disagree.
-5 doesn’t look that “REALLY HARD”, and that comment was probably the lowest point of the debacle.
Anyway, I’m not claiming that everybody in the “rationalist” community mindlessly follows EY as a cult leader. I just wanted to point out that the kind of criticism of the “rationalist community” that Scott considers unfair does actually apply to one of the most prominent and founding figures of the community.
The fundamental issue is with the very nature of writing. People who read and publish things on the internet take a lot of things for granted. This applies equally to Scott Alexander or Tyler Cowen. Both authors assume that:
1.) What they write can be meaningfully interpreted by someone else.
2.) There is purpose to their writing.
How do these authors, let alone anyone else on the internet, receive knowledge from what they read and see on their computer devices? What is lost in the process of translating lived experience into language?
What is the difference between writing about World of Warcraft and writing about European presidential elections?
Is it rational to interpret the symbols on a computer screen as reality? What is different about experiencing World of Warcraft compared to the European presidential elections through a computer screen? What are the limits of what can be interpreted and experienced through written language? What are the limits of what can be communicated through written language?
Would Socrates have a blog if he were alive today?
Does it make any more sense to argue on the internet about the intricacies of European presidential elections than about World of Warcraft?
As a follow-up question, does it make more sense to declare oneself a Rationalist or a Neoliberal and defend your position on Twitter than it does to declare oneself a Paladin or Death Knight and defend your position on World of Warcraft? How is the former character class more real than the latter?
What makes you think arguing about World of Warcraft is pointless?
It isn’t pointless, but the interesting argument is between the warmongers who want the Alliance to fight the Horde (and vice versa) and the reasonable people who realize that conflict between the factions only helps the Lich King, or the Legion, or whoever the current big bad is.
I do enjoy those arguments, myself.
Possibly a bit too much.
Under what conditions is it possible for an outsider to ever level a legitimate criticism at a movement?
When the outsider understands the movement well enough to see problems with it? I mean, that sounds kind of tautological, but I’m not sure what you’re getting at otherwise. One doesn’t have to be a member of the Naa woo-woo cult to see that the woo-woo cult has gone pretty wrong.

Seems like Scott was promoting the principle that if you, an outsider, think of a criticism, it is overwhelmingly likely that insiders have thought of that same criticism and hashed it out and either incorporated the legitimate parts of the criticism or found the criticism to be wanting. In that case, as an outsider, unless you know as much as a well-versed insider, then you should have a low confidence that your criticism is a good one. Given that it is very rare for an outsider to be motivated enough to research a movement as much as a well-versed insider, then it seems exceedingly rare that an outsider will ever have a legitimate criticism (that is, it can happen, but it almost never does).
Compare, for example, a field that everyone likes to make fun of: X studies, with their various methods of autoethnography and whatnot that seem absurd to outsiders. Instead of poking fun, RealPeerReview-style, at all the seeming nonsense that gets published in that field, should we assume that insiders know those criticisms well and have dealt with them, and thus their dismissal of outsider criticism is not sticking their heads in the sand but just the same type of frustrated response a rationalist would have when someone says that Spock is not a good model of human behavior?
But if rationalists consistently believed that, they would have to withdraw their criticisms of philosophy, etc.
But the actual dynamic is: insiders know the standard objections, have answers to the standard objections, and think the answers are good. Outsiders think the answers are bad, and therefore the objection stands.
If that’s the actual dynamic, then Scott’s post basically boils down to “smart people make dumb criticisms of things they haven’t bothered to look into”, which is true as far as it goes (and rationalists can be as guilty of it as anyone), but I was trying to draw a broader epistemological point out of it.
But the actual dynamic is: insiders know the standard objections, have answers to the standard objections, and think the answers are good. Outsiders think the answers are bad, and therefore the objection stands.
In my experience, “insiders know the standard objections, have answers to the standard objections; outsiders are uninterested in learning the answers to the standard objections, because it feels satisfying to think of yourself as superior to a big group of people, and actually having to engage with those people’s arguments would get in the way of that” is way more common.
As someone who got my degree in X Studies, YES PLEASE THAT WOULD BE A VERY GOOD IDEA.
I did religious studies, and you get a good dose of theology doing religious studies (religious studies itself is, strictly speaking, largely secular). Modern theology, as something many people think is nonsense (or at least partly nonsense, since different religions have their own theologies), can I think be considered an “x studies”, and theologians vary in how they deal with outside criticism. Some heads get stuck in sand, others don’t.
It’s dangerous to assume “ha! Those dummies don’t know anything; if they did, they wouldn’t be studying it!” but it is also dangerous to assume “they must know what they are talking about, so any criticisms must already have been dealt with.”
To give an example, consider the different Christian and Jewish responses to 19th century onwards scholarship on the authorship of scripture and other issues coming from textual criticism, etc. Some denominations (mostly liberal, but including some conservative denominations) seriously grapple with the issue that a lot of what was traditionally thought about who wrote what was wrong. Others just dig in their heels and deny the scholarship’s validity (this is almost always conservative denominations). Others still just sort of ignore the whole issue (this is mostly liberal denominations) – they aren’t literalists, but they aren’t especially curious either.
In any academic pursuit, you’re going to have some people who seriously consider criticisms, and others who get them out of the way by hook or by crook because their personal beliefs/congregation/faculty position depends on it.
Seems like Scott was promoting the principle that if you, an outsider, think of a criticism, it is overwhelmingly likely that insiders have thought of that same criticism and hashed it out and either incorporated the legitimate parts of the criticism or found the criticism to be wanting. In that case, as an outsider, unless you know as much as a well-versed insider, then you should have a low confidence that your criticism is a good one. Given that it is very rare for an outsider to be motivated enough to research a movement as much as a well-versed insider, then it seems exceedingly rare that an outsider will ever have a legitimate criticism (that is, it can happen, but it almost never does).
This principle is admittedly inconvenient, but in my experience it is entirely correct, and the faster that everyone internalized it, the better.
My experience has been that *each time* there has been a sizable community that lots of smart-seeming people support, but which has well-known objections that have led me to dismiss it, the very moment I started actually looking for the community’s strongest responses to those standard objections, it became obvious that there existed strong responses which the outsiders were totally ignorant of.
And I have *also* been in several communities that made some counterintuitive claims, had lots of people dismiss those claims based on what seemed to them like obvious objections… and been immensely frustrated by the fact that we’d spent enormous amounts of time analyzing those objections and making what I felt to be very strong counter-arguments, but basically none of the critics had even bothered looking up what our answers might be. (If they had at least read the answers and then disagreed with them, they would have been *trying*. But they were literally just going with the obvious objection and then making *absolutely no effort* to find out whether we might even have tried to answer those objections.)
I think the principle would be fine adopted as a norm, but given the intense shift towards global skepticism and agnosticism on most subject areas that it would necessitate, I don’t think it’s a norm that could ever be adopted among the population at large, much less among smaller, more epistemically fastidious communities.
I am genuinely curious. I’m a traditionalist Roman Catholic (very strongly formed by Chesterton), and I don’t entirely understand why a rationalist would care about other people. I’ve only started reading your blog fairly recently, so please forgive me if you’ve written extensively about this.
To me, when I was at a crossroads decades ago, exploring multiple different religions and philosophies, there were only two paths that made sense to me: fully embrace Catholicism and all of the consequences of its philosophy and tradition, or conclude that there is no Creator, thus no real teleology, therefore no meaning to my actions, and I should become a nihilist and work towards maximizing hedonic pursuits.
I’m not a sociopath — I do feel empathy — but I was looking at it from a perspective that I felt was rational. If there is no teleology of man, why ought I view man, either myself or others, as worthy of time and effort?
I believe you are genuine in your altruism. I don’t think you’d write what you write otherwise. But I must ask, why? I’d be grateful for a real explanation.
By the way, you are on the way to convincing me on a basic income guarantee and related topics. I am a reactionary monarchist, but not a modern conservative. And though I am reactionary, I don’t think we can (or should) put technology back in the box, and even a good and righteous Catholic king would have to deal with robots displacing workers and the level of specialization and globalization that modern communication allows. I also strongly believe in a sense of noblesse oblige, and having the wealthy and privileged pay for the basic needs and health of those who cannot provide for themselves makes sense.
As I’m sure you are well aware, there are literally thousands of books dedicated to precisely the questions you’re asking. One of the common threads of these, as I’m sure you’re also aware, is that maximizing short-term hedonism tends to have a strong detrimental effect on long-term hedonism, so if we’re interested in maximizing overall pleasure, we need to think long-term, and moderation and cooperation become the order of things. (This is the basic insight of Epicureanism.)
Another common thread would be that “fake” teleology that is just as motivating as “real” teleology is good enough for most people. This has been one of the central messages of, e.g., Daniel Dennett (see Darwin’s Dangerous Idea for probably his clearest statement of this).
Yet another one is simply that we care about others because we feel like it, due to a combination of upbringing and genetics. I don’t want to beat other people up, because I like other people. What more do I need, on a personal level?
It’s worth noting that societies that are not largely based on cooperation will fall to pieces, and so most of us tend to end up being socialized to care (at least somewhat) about others. Which is a good thing (at least if you’ve been brought up in one of these societies).
There’s plenty of attempts to find a rational grounding for being nice to each other, Kant and Mill being two of the most famous examples, I suppose. Surely none of this is news to you. So what’s the real question?
First of all, whilst that may be true on a societal level, on an individual level you’ve given me no reason not to shaft my moderate, co-operative neighbours if I can do so without anyone discovering.
Secondly, whilst you may disagree, most people’s moral intuitions seem to include some degree of categorical force — X is just wrong, period, not “X is unlikely to advance some goal you happen to have”. Even if I accept that moderate and co-operative behaviour is likely to increase my hedonism, that still doesn’t get me to a moral system as it’s usually intuited.
Missing the point. The question is “Why should we be good?”, not “Why do we think we should be good?”
Lots of people obviously don’t “feel like it”, though. If my personal “combination of upbringing and genetics” leads me to want to commit genocide, what are you going to say to me? “It looks like your preferences are different to mine”?
Again, that does nothing about the free rider objection. Society isn’t going to stand or fall based on whether I manage to scam some little old lady out of her widow’s pension, so if I can get away with it, why not?
That statement only makes sense if you have some sort of criterion for judging what is and isn’t good, which you don’t, as far as I can see.
Society doesn’t want lots of people getting away with it, so it sets up rules where no one does. That’s where your obligation comes from.
Back in Nazi Germany, society set up rules that everybody had to hand over any Jews they knew to the authorities. Back in Soviet Russia, society set up rules that anybody who heard a family member saying something counter-revolutionary had to report them. Had I lived in these countries, would I have been obliged to hand over Jews to the Gestapo or shop my parents to the KGB?
In the sense that you might have been punished for not doing so. But if you understand societal rules as intended to fulfil a purpose, you don’t have to accept them as absolutes, even in the absence of some fundamental moral law that is part of the universe.
>In the sense that you might have been punished for not doing so
and what sense is that exactly?
Morality is not a personal matter. It only makes sense in the context of a society. And I think many people agree that it can only be fully justified when taking the society into account.
I think this is just simply wrong? Surely most people would agree that Epicureanism, Utilitarianism, Rawls’ theory of justice, etc, contain some elements of a moral theory? Clearly a lot of people disagree with your intuitions on this one.
Nope. Dennett’s point is that you get good-enough-for-any-real-purposes teleology from the real world. Those “purposes” include “justifying morality”.
I would say that I will do anything, up to and including murdering you, to stop you.
Society is going to stand or fall based on whether it actively penalizes defectors. If you need more of a grounding than that, Kant seems a good place to start?
What’s your point here?
As I said, a moral theory needs to have some sort of normative force — “Maximising happiness [or whatever] is the right thing to do”, not “If you, personally, happen to want to maximise happiness, this might help you do it”.
If Dennett thinks teleology doesn’t actually exist, then it can’t actually justify anything, morality included. If he does think teleology exists, I’m not sure why you’re using him in an argument that we can have morals without teleology.
Note that you don’t say that I’m actually doing something wrong, because, under your view, right and wrong don’t really exist. All you have left is an appeal to brute force, which isn’t the same as morality.
For the purposes of this thought experiment I’m able to scam the widow without anybody finding out, so the issue of societal punishment never arises for me.
Kant’s categorical imperative doesn’t really give me much reason to behave morally, either. Sure, if everybody went around scamming people that would be bad, but so what? I’m not talking about everybody scamming everybody else, I’m talking about me scamming one person.
I don’t think we’re going to do much more than argue in circles here, but I will add that Dennett’s point is that what philosophers have typically thought of as teleology doesn’t exist, but there is a perfectly good form of teleology in the natural world.
So the claim isn’t that you can have morals without teleology. It’s that you can have morals without Teleology(TM). Specifically, you do not need a supernatural (or otherwise magical/transcendental) source of teleology, you get enough from the real world.
I am fully aware based on what you’ve said that this will not strike you as enough, precisely because it is contingent and limited. And here I think we have to accept that we have remarkably different intuitions – I’m ok with a contingent and limited source of morality (and would argue that most people are as well).
As I recall, his point was actually that teleology doesn’t exist, but that evolution means that something a bit like teleology does. It’s all pretty incoherent, of course, because evolution is itself teleological, as are genetics and most of biology in general.
As far as I can see, the distinction between “teleology” and “Teleology(TM)” is an artificial one, made up by philosophers and scientists who can’t deny that teleology exists but don’t like the implications. Basically it’s the naturalist equivalent of the micro-/macro-evolution distinction.
Hard to respond to this unless you’re very clear about what you mean by teleology. He thinks teleology exists, and for that matter free will. He doesn’t think that either of them have all the qualities that have been traditionally insisted upon by philosophers.
Not really, no. The specific argument Dennett was involved in in this case was against philosophers (like John Searle, Jerry Fodor, Colin McGinn, David Chalmers, and many others) who have insisted at one time or another that (a) there is a definite teleology in the world, (b) which bears a great deal of resemblance to the traditional Christian conception, and (c) cannot be explained by natural processes.
Dennett agrees with (a) only, and seeks to explain it through our evolutionary history. There are some philosophers who deny (a), but certainly not the ones involved in this debate.
Dennett might think that something he calls teleology exists, but if he denies that it has the qualities traditionally ascribed to it, it’s misleading to call it teleology in the first place.
Teleology applies to more things than biological life-forms, so even if Dennett were successful, it still wouldn’t explain teleology per se, merely one form of teleology. Plus, evolution itself is a teleological concept, and hence cannot be used to explain teleology.
And at this point it’s clear that you have a very particular vision of what “teleology” means that many of us do not share. What, specifically, is missing from an evolutionarily-grounded form of teleology that you think is critical? You say that non-biological entities have teleology – can you provide an example? Is there anything in the entire universe that does not have teleology? If not, is it even a meaningful term? And why is evolution teleological?
My “vision” of teleology is the standard one in philosophy, namely, a thing’s goal-directedness, purposiveness, or pointing to an end beyond itself, as (to use traditional illustrations) the moon is directed towards movement around the earth, fire is directed towards the production of heat, and so on. In classical philosophy teleology was taken to be a fundamental aspect of the physical world which explained the existence of causal regularity in the universe. Early modern philosophers like Bacon and Descartes thought that teleology didn’t really exist, and that causal regularity was imposed on matter by God (hence the term “laws of nature”, which was understood rather more literally than most people understand it today). Later philosophers kept the abandonment of teleology, but also abandoned the idea of divine laws which had explained how, in a world without teleology, causal regularity could still exist. Hume correctly saw that this rendered the entire notion of causality suspect; most other philosophers haven’t been willing to accept such a radical conclusion, but also haven’t successfully found an alternative to teleology/divine commands, which is one of the main reasons for the incoherence of modern naturalism.
I am of course familiar with this vision of teleology. But I’m mostly struck by the fact that modern naturalism seems to work pretty well. Scientists have long leaned toward making naturalist assumptions. I am aware that Feser and his ilk think this is only possible because scientists are really being covertly teleological, but for a few reasons I find that highly implausible. Modern science is much more successful than earlier science, not merely comparable; if teleology is essential, why would making it covert produce better science than we had when it was overt? Plus, though this would go far beyond what would fit in a comment, as something of an expert on philosophy of science I think the amount of covert teleology in modern science is greatly exaggerated.
Hume noticed that the lack of teleology had some impact on causation, but he did not do away with causes. I would be inclined to say that subsequent philosophy and science have pretty successfully indicated that a stripped down notion of causation, however suspect you may find it, seems more than adequate for all of the purposes for which we need causation. Indeed it seems to be much more useful than old-fashioned teleology-encrusted causation.
@Protagoras:
I’m fairly certain I’m not very good at philosophy as practiced by modern philosophers.
But, in support of what you are saying, it strikes me that modern scientists are more “turtles all the way down” than teleological. Once they understand atoms, they look to understand electrons, protons and neutrons. If they come to a “complete” understanding of quarks et al., they will look for something deeper/smaller. If they reach a limit beyond which they determine it is impossible to explore, they will fall back on something like Gödel incompleteness, not teleology.
@Protagoras:
You can follow the scientific method without understanding the philosophical justifications behind it. That doesn’t mean that the scientific method makes sense absent these justifications.
Well at least there’s something solid to grapple with now.
Are you saying that the ancient teleological vision is correct? That the moon has a goal of orbiting the earth?
I suppose you’re right that Descartes marks an explicit break with the idea of teleology being a fundamental part of the natural world. He, of course, keeps teleology around as a fundamental feature of minds, which are inherently disconnected from the rest of nature (precisely because they are teleological and it is not). And his thoughts about this became dominant within philosophy.
Dennett is explicitly opposed to this split (and I follow him here). Minds are part of nature, minds have teleology, therefore teleology is part of nature. The difference from the old view, however, is that this is simultaneously opposed to pan-teleologism. The universe writ large has no goals, but evolutionary processes ended up creating beings with goals.
I think you will still argue that this is not sufficient, that we need a universal teleology. But at least we know what we’re arguing about now.
Yes.
That depends on what you mean. If you mean “Is the moon’s nature such that it reliably orbits the earth instead of, e.g., bouncing up and down like a yo-yo, or flying off into space, or turning orange?” then yes. If you mean “Does the moon consciously want to orbit the earth?” then no, and nobody’s ever thought that.
“Morality is not a personal matter. It only makes sense in the context of a society.”
I’d like to address this point, as I’ve had interesting conversations about it with my SO.
By morality I’m assuming an absolute or relative code of conduct that one would feel bad about violating, and good about acting in accordance with.
1) No other people, no society:
One can conceive, and follow, a morality toward one’s environment. A very basic one would be a morality of sustainability and possibly of utility. Anyone who has farmed, herded, hunted, seen a desert form where there used to be fertile land, or been cast away on a deserted island immediately realizes the applicability of such a morality.
2) Another person incapable of forming a society:
Most parents also reflexively understand a morality of some sort with respect to their children, who are as yet too young to form a society with the parent.
Yudkowskian rationalism is about fulfilling your values efficiently, and places almost no constraints on what your values are. So if you care, care, and if you don’t care, don’t care.
That makes Yudkowskian rationalists sound almost exactly like the sophists one finds in Plato’s dialogues.
Is that intended to be a criticism?
Exactly. The sophists argue that it’s best to be a vicious man where everyone else is virtuous. Socrates ultimately believed in a higher good, and that virtue was more rewarding.
Thrasymachus argued that. Just Thrasymachus. He’s the only one. Really. Even Gorgias wasn’t on his side on this issue, never mind Cratylus or Hippias or Prodicus or (cough) any others we might name. Please do not attribute this to “the sophists.”
Sophists get too bad a rap, man.
True, although I was actually thinking more of the “Give corrupt politicians good rhetorical training so they can be corrupt more effectively” angle.
Rationalists do not argue that vice is better than virtue.
Quite. Orthogonality thesis, anyone?
This is what rational agents do in economic theory. Rationality in economic theory usually takes preferences as given (or exogenous to the model) and then assumes that the agent will maximize his welfare given his resources and preferences.
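To make that concrete, here is a minimal sketch of the textbook picture (my own toy illustration, not anything from the economics literature; the Cobb-Douglas utility, prices, and budget below are all made-up assumptions): preferences come in from outside, and “rationality” is nothing more than maximizing them subject to a resource constraint.

```python
from itertools import product

def cobb_douglas(x, y, alpha=0.5):
    """A standard toy utility function; alpha encodes the given preferences."""
    return (x ** alpha) * (y ** (1 - alpha))

def best_bundle(budget, price_x, price_y, utility):
    """Brute-force search over affordable integer bundles of two goods."""
    best, best_u = None, float("-inf")
    for x, y in product(range(budget + 1), repeat=2):
        if x * price_x + y * price_y <= budget:  # resource constraint
            u = utility(x, y)
            if u > best_u:
                best, best_u = (x, y), u
    return best

# The model says nothing about what the preferences *should* be:
# swap in an altruistic or a paperclip-counting utility and the same
# maximizing machinery applies unchanged.
print(best_bundle(budget=10, price_x=1, price_y=2, utility=cobb_douglas))
```

The point of the sketch is just that the orthogonality lives entirely in the utility argument; the optimizer never inspects it.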
The short answer to your question is “because it’s what we want to do”.
The longer answer is that philosophers, psychologists, biologists, and lots of people with no special training have all tried to answer that question and come up with a zillion different answers. Purpose bottoms out somewhere; humans evolved with lots of different instinctive drives and the capacity to acquire more from the society around them. Altruism is one of them. We learn about people suffering and dying, decide “Fuck that shit – the world should not be this way!” and then do what we can to make the world closer to the one we wish we had. There’s nothing logically impossible about a creature that only cares about its own pleasure or about the number of paperclips in the universe, but we happen to be humans who care about other humans and don’t want them to suffer and die. That’s what it all boils down to.
Judging by their behaviour, lots of people actually want to murder, steal, rape, and do assorted nasty things. On what grounds do you judge that they should ignore these desires whereas you should follow your own desires to be nice to people?
a lot of people do want to do those things…and they are in jail
So? You haven’t actually given any reason to think that those things are wrong. And no, “There are more of us than there are of you, and if you do this we’ll lock you up” isn’t actually a reason.
they impose a negative externality on society, meaning that the actions of some hurt those who don’t consent to them
Again, so what? If, to quote Doug S., hurting others is “what we want to do”, I guess we just have to accept that “we happen to be humans who don’t care about other humans and want them to suffer and die.”
I’m more concerned with a more lawful-evil, live-and-let-die attitude. Society has laws against doing overt harm to other people. But what about those who just want to use 100% of their resources to maximize their own pleasure, are unconcerned about the suffering of those around them, but do nothing to directly cause harm to anyone?
@The original Mr. X
On what grounds do you judge that people should ignore those desires?
Edit: I think I misunderstood this conversation, are you asking ‘what justification’ does anyone have to push their desires on anyone else if there is nothing beyond just human desires?
On what grounds do you expect moral motivational systems to be independent of people’s identity and universal? I don’t think it’s reasonable to say that moral arguments must be able to argue people into not murdering others for morality to exist. Morality is necessarily grounded in individual people’s motivations, because otherwise everyone would quite sensibly ignore moral obligations entirely.
Christian Teleology is just a way of pretending that everyone’s preferences are secretly identical, when a straightforward analysis would lead us to conclude that they’re obviously not. You don’t get to have it both ways and claim that people have a desire to murder but also claim that they have an intrinsic sense of right and wrong that leads them to do good.
Yeah, Socrates himself couldn’t argue Thrasymachus into being virtuous. Hence the way that Plato’s portrayal sometimes represented Thrasymachus as a wild beast; if a lion is killing and eating people, you don’t try to persuade it to stop, you shoot it (or at least tranquilize and relocate it). Treating a human like a lion should be a very distant last resort, but sometimes it really is the only option left.
First of all, you might want to check your history of philosophy. Teleology and its application in ethics dates back to the ancient Greeks, centuries before the birth of Christ.
Secondly, no, teleology isn’t about people’s “preferences”, it’s about what, given the nature we have, best fulfils that nature. It doesn’t claim that all our preferences are in accordance with our nature, or that everybody’s preferences are the same.
ETA:
I expect moral arguments not to lead people to absurd conclusions like “Hitler was justified in setting up the Holocaust”. You may consider that an unreasonable burden, but I think most people would disagree.
Yes, but he said Christian teleology.
“Christian teleology” is a made-up concept. The role of teleology in Christian philosophers is exactly the same as the role of teleology in pre-Christian Greek philosophers.
Some people murder and steal because they have similar values to me but incorrect ideas about how to reach their values, and they can be reasoned with. Other people just have different values than I do, and I can change their values (if possible) or attempt to punish them to deter them from acting on their values. But it’s true that if I’m talking to Murderbot I will probably not be able to convince Murderbot not to murder people. (This is the insight that leads people to be worried about the AI control problem.)
That was basically Alasdair MacIntyre’s point in After Virtue, as I recall.
I just picked it up and was planning on reading it tonight.
As someone closer to the “hedonic pursuits” end of the spectrum than most, I can say that it doesn’t at all exclude caring about other people – if anything, it’s the opposite. A virtuous person with honest and otherwise positive mutually beneficial interpersonal relationships is happier than the stereotypical sociopath or hedonist.
“Positive mutually beneficial relationships” can happen without caring about suffering of people who aren’t your friends.
I’m not stating that maximizing pleasure involves spending all your resources on hookers and blow. It just means prioritizing your wants (long or short term) over anything else.
Still, that involves caring about people, so that desideratum is satisfied. Regarding strangers, a lot of people get something out of making them better off, so they have a reason to do it to some degree. But that depends on your psychological constitution, and if yours is different, you may have no reason to do it. If you genuinely get nothing out of it, neither instrumentally nor as a source of pleasure by itself, then you shouldn’t do it.
Elsewhere, you ask why you should care about anything if there’s no higher purpose. But one might well ask the opposite question: if there’s no higher purpose, must you not care about anything? Obviously not. And since it’s highly likely that you already care about something, there’s no need of convincing.
I don’t entirely understand why a rationalist would care about other people
The four cardinal virtues. Even pagans can be virtuous according to their lights. A rationalist may care about other people operating under the virtue of justice:
I, of course, subscribe to those virtues. Pagans also don’t reject teleology. A total atheist must reject teleology, no?
And I’m talking about a rationalist qua rational thought. Why is a rationalist, whilst trying to be rational, caring of his fellow man? If the answer is “because I want to”, that makes sense. Were I a nihilist, I wouldn’t go around punching people, because I don’t want to.
If the answer is “because that would cause the fall of civilization”, one person failing to participate productively in civilization would not cause it to collapse.
I’d guess the answer to that would be that society has created systems to prevent that from happening (police and law).
A complete rationalist would probably really go around punching people if he felt like it. Fortunately humans usually have built in empathy, lack of that is one of the criteria to be a psychopath.
Disclaimer: I don’t truly consider myself to be a part of the rational community.
If the answer is “because that would cause the fall of civilization”, one person failing to participate productively in civilization would not cause it to collapse.
That’s like the “one person’s vote means nothing in an election”. Yes, one person out of thirty million means little to nothing. But if each of those thirty million, or the majority of them, think “My vote means nothing, so I won’t bother voting”, then it means a very great deal. I think we see that already, where elections are being won on a portion of the electorate turning out to vote; in the presidential election 60% turned out to vote while 40% didn’t. That’s probably enough to count as a majority of the electorate, but less important elections often have drastically lower turnouts, to the point where I do think one of these days an election may be won on “only 40% of eligible voters bothered to cast a vote”.
A rationalist can care about their fellows because they wish to live well, and the best way to do that is in a secure, free society, and the way to get that is to treat your fellow citizens well and encourage the kind of behaviours and laws that induce a free, secure society where everyone’s rights are respected and there are ‘safety nets’. A rationalist could reason that in their own self-interest, persuading others to uphold rights is the right thing to do, and that if they consider themselves to be a conscious entity with the capacity for happiness and suffering, a society of mutual co-operation where all work to ensure happiness over suffering is both just and in their own benefit.
Conversely, a society where people go around punching other people because they feel like it means that our rationalist is at risk of getting punched a lot which is both unpleasant and may eventually lead to injury and incapacity. One punching person getting away with it encourages others to try it, and the more who get away with it the more the consensus about not punching people is weakened.
What about Judaism? Buddhism? Confucianism? Is there a reason why those philosophies don’t appeal to you? I’m honestly curious.
I explored those. Not Judaism so much, largely because there is a strong racial component, and if I’ve got any Jewish blood in me, it’s pretty dilute. But I looked into Islam, Buddhism, Hinduism, LDS, various protestant denominations, eastern/oriental Orthodoxy, a few pagan variants, Stoicism, Nihilism, and probably a few I’m not thinking of.
But I found the systematic intellectual rigor of Catholicism appealing. The theology and philosophy made sense to me in ways that others didn’t.
I don’t remember all of the details of my findings, but here’s a quick list of my primary objections:
– Islam: a lot of “Allah is perfect, so anything Allah does is good by definition, even if it seems evil and horrible to us dumb mortals.” Catholicism avoids that by having a rational God who can’t violate the rule of non-contradiction.
– LDS: too many things to go into.
– Buddhism, Hinduism: not very precise. Smells too gnostic for my taste.
– Stoicism: great stuff. Very masculine. But it doesn’t justify why one should live honorably. It appeals to the natural order, but whose natural order, and for what purpose?
– Paganism: similar to Stoicism but less so.
As far as Nihilism goes, I was more drawn to lowercase-n nihilism. Basically, “if there is no higher purpose, no higher good, then fuck it, why should I care about anything?”
To me, that and Catholicism were the logical extremes. And I’m a person who takes things to their logical extremes.
I thought I owed it to myself to take Pascal’s wager and at least try to be Catholic. I then had some religious experiences that convinced me at least some of what Catholicism claimed was true. And I went to that logical extreme.
And I’m not a regular American “conservative” Catholic that’s little more than a shill for the GOP. I’m a reactionary Latin mass devotee who wants a Hapsburg-style altar-and-throne order and a distributist, very local economic system, and I believe that there is no salvation outside the Church, and everything else the Church has at all times and always taught.
Yes! Someone else who not only takes Pascal’s Wager seriously but actually accepts it at face value!
If you don’t mind explaining, why do you think a “Hapsburg-style altar-and-throne order” would lead to the sort of government and society you want? The actual Hapsburgs were often quite happy to use the Church as a mere tool for political ends, and I don’t see why they’d be any different after being restored to political power. Sure, the current Karl von Habsburg seems pretty pious, but what about another generation or two down the tree?
There’s a maxim invented by Greg Egan, that goes: “It all adds up to normality.” What he’s saying is, no matter how crazy your physics of the universe gets, no matter how much it makes you want to despair, it doesn’t become any more or less true by you believing in it; it either was already false or is already true. In fact, it is the environment in which you evolved, so it’s the thing that all your existing intuitions are about. If your intuitions no longer make sense in the new model but they did in the old one, that just means your old explanation was wrong.
People don’t feel empathy because they find religion. People feel empathy first, and then religion adds support within their framework. When you discard religion, that doesn’t mean empathy and charity become wrong; it just means your support for it falls away. But the sentiments themselves aren’t caused by religion and so are not dependent on it. Religion supports charity because charity is good; it doesn’t become good by religion supporting it.
Rationality has a different story behind empathy and charity; it says that human beings are legible to other humans and empathic behavior is better for the group and thus more likely to be accepted by others, leading to a positive selection effect. Through legibility and iterated games, the collective benefit becomes an individual one and empathy is selected for. But this still isn’t a question of “it’s good because rationality says it’s good;” rather, first it’s good and then we try to explain why you believe that. The good predates rationality, just like it predates religion.
As our sermon says, “it doesn’t explain it away, it just explains it.”
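A toy version of that selection story, for the curious (entirely my own illustration; the payoff matrix is the standard prisoner’s dilemma and both strategies are the classic textbook ones): in one-shot play defection wins, but once games are iterated and behavior is visible, the cooperate-but-retaliate strategy out-earns pure defection.

```python
# (my move, their move) -> my payoff; the standard prisoner's dilemma matrix
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror whatever the opponent did last."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Iterate the game; each strategy sees the other's full history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
totals = {name: 0 for name in strategies}
for name_a, a in strategies.items():  # round-robin, including self-play
    for _, b in strategies.items():
        totals[name_a] += play(a, b)[0]
print(totals)  # tit_for_tat comes out well ahead over the whole tournament
```

None of this makes cooperation “good because the tournament says so”; it just gestures at how the disposition could have been selected for.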
>>I don’t entirely understand why a rationalist would care about other people
I feel that the answer to this question and the further ones you’ve posted down the line should really be broken into two parts:
1) Why do people behave nicely (like they care about other people) (the complicated part)?
– There are plenty of non-religious ways of defining one’s morality and behavior out there. The Golden Rule, for example, does not require that one subscribe to some absolute morality or believe in any particular deity – the “do unto others as you would like to be done unto you” thing is some pretty simple reciprocal altruism that doesn’t need any higher foundation.
– Religious / absolute values based morality, on the other hand, does not really stop people from being nasty. Firstly, a person with bad impulse control will be a person with bad impulse control, whether he believes in God-given morality or in some Utilitarianism-based morality. If a person hits me with an axe in a fit of anger, it doesn’t really matter to me whether he was an atheist or a faithful Christian – I’d be dead either way. Secondly, people are good at rationalizing their actions – if someone really wants you dead, they’ll think up a good excuse why it is ok to kill you (and, IMHO, utilitarian/hedonistic morality wins here – at least, I don’t recall it explicitly saying anywhere that it might be good to kill people. Take the Bible though, find the verse that says “Thou shalt not commit murder”, flip some pages randomly, and it doesn’t matter which way you go – you will find an explicit excuse to kill someone with some juicy examples of how to apply it). Thirdly, some people are just dicks. Sucks to admit it, but yeah, there are people who would do bad things no matter what the primary moral system in their society is.
– Finally, some people are just (surprise surprise!) actually nice and they care and actually like to do nice things. (Hells, you can even see some nice behavior in animals, who definitely don’t give a crap about either the moral systems or God.) Moreover, the majority of the people are actually pretty neutral and feel that the whole rape-murder-steal thing is not actually that hot, once they think about it. Actually, as I’ve heard on good authority from a dude with a, let’s say it, broad range of life experience, the whole rape-murder-steal thing is actually a lot overrated and it “sucks ass” – if you do it in a nice, stable society, then people are easy victims, but the police will get you sooner or later (there is no perfect crime) and then you are in deep shit. If you do it in a society where no one really cares anymore – there is no police and no jails, but the living conditions tend to deteriorate quickly and the place is generally full of dicks like you, so you have to constantly watch your ass so that you don’t get killed, raped and robbed (in that exact sequence) yourself, and that quickly gets tiring too, so from a purely hedonistic standpoint, the best thing is to be a reasonably nice dude in a reasonably nice society.
And, finally, question part 2: Why do people care? (not just behave nicely, but actually care). The answer here is – because. You either care or you don’t, it is not really something that comes from believing in a certain set of rules. People can be made to behave a certain way by a system of reward and punishment or by explaining to them why they should or shouldn’t do something, but you can’t really make someone care (well, you actually can, but it involves some creepy brainwashy rapey shit that we probably shouldn’t practice, no matter what our particular religious affiliation is). People just don’t work that way.
>I am genuinely curious. I’m a traditionalist Roman Catholic (very strongly formed by Chesterton), and I don’t entirely understand why a rationalist would care about other people.
Because favouring yourself compared to other entities, just because you happen to be yourself, is biased. QED.
I think one should still focus on oneself and the area around them, because that is the centre of their influence, and thus responsibility, but that’s a later extrapolation of the above position.
Fundamentally being rational means avoiding the natural delusion (or convenient fiction, or model) that other people are any less real than you are.
(Also because short-sighted hedonism is really far from the best way to be happy, but you probably didn’t mean that.)
A couple years ago when I was a more raging anti-SJ type, I made some comments along the lines of “The XYZ community is bad because some people in the community did some bad things in the name of said community and the rest of the community was not sufficiently self-aware to condemn it, therefore they must all approve of this bad thing and are bad people.” In retrospect, I realize I was being uncharitable, and most people just want to identify with a community and not worry about shaping their whole lives around who said what in what context. With that in mind, I think we should keep in mind “In Favor of Niceness, Community, and Civilization” and realize that a few people making dumb criticism is probably not a real attack on rationalist values, and probably part of some complicated status game that they’re playing in their own social circle, and it would be better for everybody’s blood pressure if we worked on something else.
I think this position becomes more and more charitable as the number or prominence of the bad actors in the community increases.
Why do I have this strange urge to shout
BLOOD FOR THE BLOOD GOD!!! SKULLS FOR THE SKULL THRONE!!!
after reading this post?
On a mountain of skulls
In the castle of pain
I sat on a throne of blood!
Yes, we have noticed the skulls
It is probably the coolest blog post title, I have to admit 🙂 A little defiant (whatcha gonna do about it, huh?), a little nonchalant (oh the skulls? yes, they’re there), a little metal (doin’ it for the aesthetic!)
I think yours is the first comment I agree with. I’m not sure what this battle was over, who the participants were, and what was at stake, but I can safely say, Moloch won.
Obligatory link: http://www.dorktower.com/2016/10/11/blood-for-the-blood-god-dork-tower-11-10-16/
My biggest complaint with the rationalist (and EA) community is the tendency to be vastly overconfident in their ability to meaningfully impact the world. They share this mistake with many, many previous movements and communities, and use motivated special pleading to ignore the fact that nearly everyone who thinks they can meaningfully impact the world is wrong. This speech is the one instance I’ve seen of engaging honestly with the issue, but its proposed solution of essentially becoming impact groupies seems unsatisfactory (the social climbing/inner ring dynamics in the Bay Area rationalist community especially are bad enough as it is).
I’d like to see more citations for this. Like, what exactly qualifies as “meaningfully impacting” the world, how many people are actually serious about doing this, and what fraction succeed. (Or perhaps more interestingly, what level of qualifications do you need to have in order to have a decent shot at success… for example, if I tell you I graduated from Harvard, what’s your new probability estimate that I will make a meaningful impact?) I believe this might be true for entrepreneurs, but entrepreneurs are playing in a competitive marketplace that’s probably at least somewhat efficient. “40% of millennials think they’ll have a global impact” doesn’t seem like an interesting reference class. Those are people who chose “I believe I can make a global difference” over “I don’t believe I can make a global difference” in some survey they were administered, not people who are planning their entire career around making a dent.
For what it’s worth, my perception is that most people fail at making a big impact through one or more of these common failure modes:
* Not Giving A Shit
* Getting Distracted By Facebook
* Not Being Very Smart
* Not Having Original Ideas
* Lack Of Grit
Etc. I suspect that the odds that individuals will succeed varies a fair amount, and also that individuals can increase their odds by e.g. installing FB Purity so Facebook becomes less distracting.
Most people can’t make a meaningful impact in the world because the world is really, really big and individuals are small in comparison. There are very, very few people in all of history of whom it can be said that the history of the world, or even their country or their town, would be different had they never been born.
It’s basically a light version of the Total Perspective Vortex
Why does an impact have to be substantial relative to the size of the world in order to be meaningful? Is the impact of saving ten lives less meaningful in a world with a population of eight billion than it would be in a world of eight million or eight thousand?
Isn’t it more reasonable to define “meaningful” relative to the size of the actor rather than the population he is a part of?
Yes. Saving ten lives in a band of 80 people may mean the difference between the survival of that band or its near-term extinction. Saving ten lives in a world of 8 billion? Nobody’s going to notice, unless they’re all concentrated in a smaller subgroup.
This is also true of nearly everything else. I used to hang out on religious discussion forums. This criticism was true of basically all non-theist criticisms of religion. It was also true of basically all religious criticisms of non-theism. It was true of religious criticism of other religions.
And a thing which I’ve noticed is: I think it’s worth recognizing that this isn’t because people are idiots. It’s because people are making the reasonable assumption that the world is similar to their experiences.
The sorts of people who aggressively proselytize for their religion and are jerks to people about it are *more visible* than people who have nuanced and thoughtful positions.
So, a lot of my friends are “rationalists”, and I don’t imagine they’re completely unlike the rest of the community. But then, a lot of my friends are “feminists”. But my impressions of rationalism are definitely affected by the “rationalists” who insist that only the most extreme caricatures of feminism are “feminists” and thus that feminism is stupid and sexist. And yet, I know that they’re obviously not really representative; they’re just going to be a lot more obvious.
But if you want to know where these ideas come from, consider the “rationalist” who explained to me that, because he’d seen some people on rationalist forums state that they had “hacked their brains” to become polyamorous, obviously it should be trivial for people to change their sexual orientation or gender identity. When I pointed out that this theory had been tried extensively and had a hell of a body count, he accused me of trying to emotionally manipulate him and guilt-trip him rather than addressing his arguments.
And after many hours of discussion on religious discussion boards, I finally realized the thing: Those jerks are not necessarily *representative* Christians. But it’s important to admit that they *are* Christians. Same for the atheist jerks.
If you say “no, those aren’t real rationalists you’re attacking”, you will instantly lose people who have been harassed by jerks who are also (possibly not very good) rationalists. If you say “yeah, those guys, we think they’re jerks too”, it’s a lot easier to migrate this into the schema everyone has for “people who share my opinions at some level but are total jerks”. Every group has those people, everyone knows about them.
There was a post a while back (not sure if SSC or elsewhere) that went hugely viral, titled The Other Side is Not Dumb. We (the community, Rationalists, etc.) need to resist the temptation to create and fall for seductive, intellectually lazy, reductionist narratives that pigeonhole the ‘other side’. Arguments that seem obvious to ‘us’ have almost certainly been addressed by the ‘other side’, and reusing these well-worn narratives as if they are somehow revelatory and profound, when they aren’t, is an insult to the intelligence of both sides. A greater understanding of one’s ‘own’ side is attained by ‘steel-manning’ the opposing side: ascribing the most charitable view to your opponent.
I would amend that to ‘the other side is not dumber than our side’ since while ‘our opponents are stupid and intellectually lazy’ is not an argument we should be making, ‘everyone is stupid and intellectually lazy’ might be.
That’s… at best infelicitously worded. I assume that “people” = “modern rationalists”? Even “…20% of them above age 30…”? Or even restate “modern rationalists”, as it’s been a few sentences since you stated your referent.
In context, I assumed it meant that in the SSC survey, P(PhD | age >= 30) = 0.2.
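For anyone puzzled by the notation, here is a quick sketch of how that number would be read off survey rows (the records below are made up for illustration, not the actual SSC survey data):

```python
# Hypothetical respondents; each row stands in for one survey answer.
respondents = [
    {"age": 34, "phd": True},
    {"age": 41, "phd": False},
    {"age": 29, "phd": False},
    {"age": 52, "phd": True},
    {"age": 33, "phd": False},
]

over_30 = [r for r in respondents if r["age"] >= 30]
# P(PhD | age >= 30): PhDs among the over-30 rows, divided by all over-30 rows.
p_phd_given_over_30 = sum(r["phd"] for r in over_30) / len(over_30)
print(p_phd_given_over_30)  # 2 of 4 qualifying rows -> 0.5 on this toy data
```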
Now, we’ll know we’re *really* winning when we get a lot of 25-year old PhDs and people recording incomes in the millions 😉
The criticism of economics is maybe 50% correct, more correct than it would have been a century ago. There have been no paradigm shifts in economics. The Nobel committee has good taste, from Coase to Kahneman. All economists praise them, but virtually none of them follow them. Coase has been in the curriculum for a lifetime and has had no effect.
a lot of progress has been made in asset pricing
How would, say, the latest issue of the AER be different had economics incorporated those criticisms/theories: https://www.aeaweb.org/issues/449
There’s definitely more methodological diversity, at least, than you would expect from caricatures of economics.
If a movie or TV show depicts a mental hospital between 1870 and 1970, it’s likely at some point they’ll have a patient strapped to a gurney with huge electrodes, and screaming in agony as massive shocks are delivered. Getting into such a situation is a standard trope for time travelers. It may be difficult to get that notion out of the popular culture.
This post seems to summarise to “people are people, and that is a problem”. The problem of outsiders making criticisms of something based on inaccurate or outdated ideas is widespread — so widespread that it also occurs within the rationalist community.
It takes two to communicate, and it takes two to miscommunicate. I think Scott underestimates how hard it is for outsiders to understand rationalism. But there is a technology for getting your point across, and it is called PR.
Using standard terminology in standard ways is pretty helpful, too.
There’s extant material where they do exactly that.
Might I offer a different criticism of rationality?
In order to engage in rational argument (I hate the term argument, btw; it doesn’t mean the same thing to me as it does to most people here), one must have the humility to lose, and so to accept that they might be wrong. Without that, rational debate is not possible (I hope this is not a controversial point).
On matters that are sufficiently personal, many (most?) people simply do not have that option. If the consequences of “losing” or being wrong are sufficiently dire, it becomes impossible to admit defeat, and people will fight for their side as hard as possible, even if the evidence starts mounting against them.
This means that rationality can become a status symbol, a way to signal to others that the speaker is sufficiently secure that most of the issues that society worries about are not going to adversely affect them no matter which side is correct, similar to how in some societies the wealthy wore clothes or groomed themselves in ways that would make manual labor impossible in order to signal that they don’t need to do manual labor.
I’m not trying to make the case that all rationalists are privileged people with no real worries in the world, just that it’s a temptation to watch out for.
What’s wrong with that?
>On matters that are sufficiently personal, many (most?) people simply do not have that option.
With the exception of one person on this planet, there’s always someone in a worse situation than you are, so no, you don’t have the right to ‘not have that option’ just because some people have it easier than you. Because then you’re gonna run roughshod straight over the people who have it worse.
Sure some people do have that right. Some people really do have ‘no option’ but to go out there winging it trying desperately to survive with no consideration for anyone but themselves. (-or collapse/die).
-It’s not just the one person who’s worst off, but neither is it anywhere remotely near ‘most’ people. That’s fucking crazy. It’s borderline evil.
Most people are not ‘OK’, not stable or grounded or who they want to be, perhaps not acutely suffering, etc., but that’s exactly why your not being that way, either, doesn’t give you the right to never compromise. Your life being fucked up and not-even-tragic does not mean you are automatically the centre of the planet.
Beyond a certain point, sure, it kind of does, but we’re not living in a generally purposeful, ordered, and well-designed world here. Your tragedies are unique, and probably no one will ever understand them, but they are not unique in scale or level of awfulness.
Also, of course cutting that option out of yourself is very tempting, because not being able to compromise or listen makes it easier to escalate and get your way, and to sideline and trample over people in greater need of compromise than yourself. And obviously those people generally have much less of a voice, so it’s easy not to notice, because after all you’re so hurt, and ‘aren’t we all?’ is just a platitude, because sometimes liars use those words.
Anyway, TL;DR: basically what you’ve presented is an ideology of complete irresponsible solipsism.
One reason standard criticisms keep getting repeated is that the standard responses are just not that good. Rationalism should be a lot less religious than everything else … it is not good enough to be average.
People fundamentally have a problem with the cultural trappings of the rationalist movement. The science fiction. The fanfiction. The libertarianism. The polyamory. The group houses. The transgenderism. The fondness for coining words and inventing rituals. The fact that we are *countercultural.*
Which is, in fact, my favorite thing about this community, despite the fact that I think our *intellectual* contributions are fairly minor and should be treated with the same skepticism an honest intellectual takes to everyone’s theories.
But: many people feel we should be mainstream academics, or mainstream progressives, or something. And my reply is: well, they don’t get to decide that. My commitment to truth means I have to listen to Wilkinson’s, Cowen’s, and Caplan’s arguments about what reality is like. I have no obligation to change my *aesthetic* to fit theirs.
(I actually think Caplan is exactly wrong. The *best* thing about rationalists is that we’re inspired by science fiction, which traditionally has really good humanist and pro-science values. The strongest criticism of rationalists is that we haven’t come up with very much intellectual content, and get things wrong about as much as anybody else.)
The correct comparison point for rationalists is not Will Wilkinson. (In the world to come, you will not be asked,”Why were you not Will Wilkinson?”)
The correct comparison point is previous generations of countercultures of geeks, gays, and hippies. And I think we have some strengths and weaknesses compared to those cultures. 90’s Extropians and old-school SF fans were less sophisticated in their arguments but often more grounded in the facts of science and technology. Previous generations of gay culture had way more courage and aesthetic discernment than we do, but afaik none of the scientific stuff. Hippies get some things right psychologically and philosophically that we’re failing terribly at (in particular the importance of *chilling out* and *unplugging from the need for validation from the Establishment*) but they have the woo problem.
What is “the woo problem”?
A data point:
I started reading LessWrong before it was LessWrong (~2007) (and have commented some). I think the Sequences are an amazing body of writing and thinking. I think Eliezer, personally, is a pretty great guy (though his views aren’t gospel and some of his tastes in fiction are questionable). I’ve been to LW meetups and megameetups. I hang out in a LessWrong-related chat room on a daily basis. (I admit to having told someone to go read the Sequences on several occasions.)
I like science fiction. I liked HPMOR. Libertarianism is ok. I am not into polyamory or the sexual/romantic mores common among rationalists. I dislike group houses. I dislike rituals.
You will not find many stronger supporters than me, of the core ideas of LessWrong and “Yudkowskian” rationality. But I am not really on board with having a “rationalist counterculture”, and I am specifically not too fond of the one we have now.
Ideas and aesthetics are separable.
Side note about science fiction:
There’s a lot of it. Different kinds. Which are we inspired by?
You could say, we’re inspired by the sorts of sentiments and ideas and general outlook that’s pervasive in the genre as a whole. True, to some degree. But not the whole picture.
How much of rationalist thought and culture is inspired by…
… Robert Heinlein?
… Ken MacLeod?
… Iain M. Banks?
… Stanislaw Lem?
… the Strugatsky brothers?
… Philip K. Dick?
This short list of authors represents a very large range of political and philosophical orientations. And there’s a lot more out there.
I don’t have an issue with any of those things. But, if all there is to rationalism is a social club with certain aesthetic, then it is mostly meaningless to anyone who is not part of club or into the aesthetics.
Agreed!
I think people should think of it that way!
And if people in that social club make intellectual contributions (and some of them do), then evaluate those people *as individuals* and their intellectual contributions *on the merits.* If you do that, you see that it’s a mixed bag, some real stuff and a lot of fluff.
“The science fiction. The fanfiction. The libertarianism. The polyamory. The group houses. The transgenderism. The fondness for coining words and inventing rituals.”
These all share a zeal for the clear-cut and explicit, and for papering over the fact that life is complicated and nuanced and messy and compromised all over the place. It doesn’t seem apt, then, to insist that criticizing intellectual content is fair game but not the aesthetic content, as if those aren’t deeply intertwined. (Which is not to say that explicitness isn’t often good and useful. It’s just that it’s one particular paradigm, and not the best tool for handling every problem and topic.)
But I do like that you’re honest about a lot of the appeal being about the sense of community. What can get a bit insufferable is just the self-righteousness about it.
These all share a zeal for the clear-cut and explicit, and for papering over the fact that life is complicated and nuanced and messy and compromised all over the place.
Polyamory feels like the very opposite of “the clear-cut and explicit” to me, and much more in the direction of complicated and nuanced and messy and compromised all over the place than what monogamy tries to be.
As long as we are doing obligatory links.
I mean, I wouldn’t disagree but I’d put a positive spin on it. The “literary fiction” view of life is deeply pessimistic about human nature. Everybody goes through the same problems, again and again and again. Adultery, grief, betrayal, despair. The “science fiction” view of life (at least, the hard-SF/space-opera track) is fundamentally about alternatives to the Augustinian view. I’d rather spend my time with the people who believe in improvement, even if they make mistakes along the way.
Mr. Wilkinson’s critique was strange. Much of it seemed to have an odd enough perspective that I suspect there’s some very strong cultural divide between him and myself. I’m sympathetic to the idea that rational-aspirants should cultivate personal virtues as well as epistemic skills (which is what I think he was trying to get at), but most rational-aspirants I know do try to cultivate other virtues.
The part that rubbed me incredibly wrong was the idea that we must be afraid of being wrong, or have something else wrong with us, to want to know the truth. This, the idea that people need an excuse or some sort of personal damage to be virtuous or seek truth, is something I’ve always thought of as part of the fallout of post-modernism (yes, you, classical-liberal Cato Institute worker! I’m calling you post-modern!), and one of the worst aspects of post-modernism to percolate into popular culture.
Search for truth comes from Eros? I can believe it. Thinking of how an MRI works, all the hydrogen nuclei in bright array, responding to each disrupting pulse with a signal as they regain their position, it’s impossible not to think of “And all the sons of God shouted for joy.” It’s difficult not to think of the universe as some endless, unconscious, brilliant song and dance. I’m not afraid to be wrong, I’m in love with reality.
I didn’t see the must. It looked more like “if you haven’t examined your motivations, how do you know what they are?”.
My problem with rationalism is that there’s a motte and bailey that’s involved.
On the one hand you have the highly defensible version of the movement that encourages, well, rationalism. Trying to understand and avoid cognitive biases and generally encouraging people to use effective tools to understand the world. On the other, you have the version that says we are a bunch of really smart people that have done a lot of work to avoid cognitive biases and we’ve come to these conclusions about AI (and other very specific predictions about the future), the proper ethical framework, the best way to organize one’s romantic life, dieting, the overwhelming importance of status games in human affairs, and so on. If you aren’t convinced, did we tell you about the need to avoid cognitive biases? Maybe keep working on that. Or maybe you just don’t have a high enough IQ to get it. Did we mention that most of us have really high IQs?
To take a concrete example, look at EA. The messaging is that the non-profit sector just doesn’t do a very good job of making sure it is accomplishing goals effectively. Let’s see what tools we can create to do better. That’s a really fantastic critique and plan! But when you get ready to write a check and start digging into who you’ll be giving it to, it becomes very difficult to determine if some of your money is going to end up in the hands of someone working on a lemma to Löb’s theorem.
Scott is an extraordinary writer and thinker. He writes about many things, sometimes including advocating for the bailey part of rationalism, but almost always in a humble, tentative, and thoughtful way. He never resorts to “go read the sequences” or “maybe you just aren’t smart enough to get it”. But that existence proof isn’t enough to save the entire movement (even god wanted at least ten men).
I understand that an amorphous community that anyone can declare himself a part of can’t reasonably be held responsible for every last thing anyone says or does in its name (see e.g. BLM). However, at some point it is fair for outsiders to form a general impression based on many interactions over a period of months or years. Particularly when we are talking about a relatively small group of people (in the single digit thousands?) rather than huge groups like feminists or conservatives. If Scotts predominate in the movement then I’d love to be pointed to some of them, so I can read their writings too.
This resonates with my view. Ultimately, everything is informal reasoning. Some is just better than others. Saying you’re a rationalist makes it seem (to an outsider) like you are claiming a difference in kind rather than degree.
I think this is well put. There’s also the risk that by identifying as “rationalist” you are implying that anyone who does not also identify as such is irrational, which is often considered to be an insult.
It’s rather like those who favor increased restriction of abortion referring to themselves as “pro-life”, as if to imply their opponents are somehow anti-life. It’s a fine rhetorical tactic if your goal is to antagonize your opponents, but not so helpful if your goal is to convert them…
Same rhetorical reasoning as pro-abortion activists preferentially calling themselves “pro-choice” and the opposition “anti-choice”.
What kind of totalitarian bigots are against choice? Americans have the right to choose life, liberty and the pursuit of happiness!
Though at least the media seem to have switched to the “anti-abortion rights” label now, which I have no problem with: I am anti-abortion and don’t consider it a right, so it doesn’t work to make me all flustered and blustering “Well…well… well, of course I’m not against rights, but but but…”
But let’s not pretend that everyone isn’t trying to score points and make themselves look of superior virtue, all right?
I did not intend to imply one side was doing it over the other. Was just an example. I lean anti-abortion myself, as a matter of fact.
well put
Yeah. Sometimes I wonder if claiming to be a contrast to general human tendencies can undermine attempts to go against those tendencies. In this case, the example would be thinking “we are so rational, it’s in the name!” and then overlooking ways in which irrationality might be happening. There’s a human tendency to be irrational, and it’s questionable whether trying to be rational works consistently.
>it’s questionable whether trying to be rational works consistently
This is too vague to mean anything. Trying for who in what circumstances to what extent with what support to what end?
For what it’s worth, some rationalists (like me) are opposed to the EA movement (for something like the reasons you’re pointing at).
I entirely sympathize with your criticism of EA, and it has indeed captured much of what may be called the “rationalist community”, but I think it would be very useful (epistemically and pragmatically) to delineate criticism of EA and criticism of rationality-in-general-not-counting-that-EA-thing. By no means should we ignore that EA is big in rationalist spaces, but I think if we separate EA out, and then look at what else there is, we’ll get a much clearer picture.
What about the AI stuff, would you say we should separate out that too? Or is that too intrinsic to be handled the same way?
My own problem with rationality hinges on this phrase: “Trying to understand and avoid cognitive biases…” Self-described rationalists tend to give the impression that they regard “understand and avoid” as two ways of saying the same thing, or at least that they see the latter as following almost automatically from the former. My own experience here suggests that while understanding about (for example) tribalism is probably still better than not understanding about it, it’s of remarkably little help when it comes to actually steering clear of tribalism. (I say probably because rationalism can also be used simply as a source of dandy new insults to hurl at the Other Tribe.)
I finally got around to reading Chronicles of Wasted Time on Scott’s recommendation (incidentally, I found it less maggot-intensive than he did). For the epigraph to one chapter, Muggeridge chose this, from Samuel Johnson’s Life of Savage: “The reigning Error of his Life was, that he mistook the Love for the Practice of Virtue, and was indeed not so much a Good Man as the Friend of Goodness.” LW-style rationalism seems to produce a lot of Friends of Rationality.
Speaking of the difference between seeing and avoiding: the owners of the most recent skulls presumably noticed all the older skulls when they tried to come this way. It doesn’t seem to have done them much good.
motte: it’s good to avoid logical/reasoning fallacies
bailey: AI will always be friendly
But almost everyone does this. The motte is the ‘end goal’ and the bailey is the ‘means’. For the ‘left’:
motte: poverty is bad
bailey: we need higher taxes
It’s possibly fallacious if one actually switches back and forth (“do you want to see people starve?”). But many times using a motte to justify a bailey is unintentional and not necessarily in bad faith. The question is trying to determine when one logically follows from the other.
That first bailey is reversed, and this confusion is worth clearing up. (I agree that neither part trivially follows from the motte, and also am at least a little worried that I’m falling for Something Is Wrong On The Internet, but hey)
motte: it’s good to avoid logical/reasoning fallacies
bailey: AI that is not provably friendly will almost certainly destroy all value in the universe
(or, alternatively: We need to devote lots of resources to ensure that when we build AI it is friendly)
(or, simply: AI will almost always be unfriendly)
By contrast, the people who say “AI will always be friendly” tend to be people who think the rationality community is a bunch of crazy people.
In my experience, people who think rationality is crazy tend to think the idea of an intelligent program is absurd, presumably because they haven’t even broken through the body/soul duality and figured out that biological minds are intelligent programs.
I completely agree with you in this case, but saying “if someone disagrees with [a belief which is prevalent in the rationalist community], it’s probably because they haven’t figured out we’re right yet” in a thread about problems with the rationalist community must be some kind of record in irony.
Gray Enlightenment, I don’t follow what you are trying to say vis-à-vis my post. Are you saying that the position “we are a bunch of really smart people that have done a lot of work to avoid cognitive biases and we’ve come to these conclusions … if you aren’t convinced, did we tell you about the need to avoid cognitive biases … or maybe you just don’t have a high enough IQ” is a perfectly fine and dandy one to take?
I think your Motte is just “Stuff Eliezer believes”, while the Bailey is “People who’ve read Eliezer’s writings on rationality and are trying to extend/apply them”, who may or may not accept any of Eliezer’s particular positions (though they are more likely to).
And then there’s a wider bailey which is something like “People interested in rationality” (in something roughly equivalent to the Less Wrong sense of the word). This includes the Eliezer cluster, and those directly influenced by him, but also many others.
I’m not sure what to make of this besides noting that, as a matter of historical fact, Eliezer did write a huge quantity of amazing material that got a lot of people interested in rationality, and that this is how a great many people connected with this blog learned about these topics.
The part that made me agree with you was the part about the ethical framework. The cultural parts, status games, whatever, I was able to write off as subculture (sub-subculture?) but I just can’t get past the ethics. I’m not a utilitarian, although I have sort of been coming around to general consequentialism lately, and it is incredibly infuriating to read a lot of rationalist-associated writing that just assumes all smart people are strict statistical utilitarians. It just comes off as smug and obnoxious.
Tyler Cowen making lazy overly broad generalizations based on pattern matching to cliched “deep wisdom” and cached thoughts? I’m shocked!
My own view is that TC runs a decent news aggregator, but that his opinions rarely contain anything original or profound. He mostly plays the deeply wise neutral referee.
Also: Gell-Mann amnesia applies to megabloggers too.
p.s. I only commented on Tyler because Noah’s not worth my time and I don’t know who Will Wilkinson is.
He is pretty prolific outside of this blog, but I find his blog kinda boring. Too much minutiae.
Is he?
I just looked at his CV. He’s written a few books, but I think they’re mostly pop econ (and an undergrad textbook). His last journal article in an economics journal I recognize is his 2007 JEBO. I would guess the U Chicago Law Review is good too, and he seems to be publishing in areas outside of typical economics journals that might be decent, but his last journal article is from 2011. Maybe he has others that aren’t on his CV.
So as academic economists rate things, I wouldn’t call him very productive, let alone prolific. I’ll admit he’s pretty successful as a megablogger / public intellectual / pop economics writer. And plenty of academics read his blog, so he’s not without influence there.
Don’t get me wrong, he’s smart and produces a lot of valuable stuff. He just doesn’t strike me as an incredibly perceptive deep thinker, such that I would take his critiques seriously. I think he could be if he really devoted himself to it, but it seems to me that he prefers to take a broad approach, which inevitably results in shallow surface-level analysis of most areas, particularly when he ventures outside of economics.
Considering that the whole essay uses the word “skulls” as a metaphor for “mistakes”, this probably wasn’t the best word choice. I couldn’t help reading it as:
Which I just find hilarious. But my sense of humor is more morbid than most.
they’re good skulls Brent
+1
Gracile? No sagittal crest? Boring middle-of-the-road omnivorous dentition? Fie upon your skull-judgement, Brent; these skulls are 7/10 at most, and they’d score a lot worse if the location of the foramen magnum weren’t so far forward — I have a soft spot for bipedalism. (Take heed, haters: bipeds have a wide field of view and incredibly energy-efficient locomotion. It doesn’t entirely make up for the unaesthetic skulls, I admit, but those are probably a contingent feature of their evolutionary history rather than an inherent disadvantage of standing erect. I will die on this hill.)
“Yes, We Have Noticed The Skulls” seems like it should be a Mountain Goats track.
And if you’re ever in an improv show and your prompt is “annoying person who knows nothing about feminism criticizing feminists,” you can find a wealth of inspiration from the commentariat at this very blog!
If the feminists are already aware of our criticisms of them, why do they keep doing the bad things we’re criticizing them for?
If the rationalists are aware of our criticisms of them, why do they keep on doing the… etc.
I’d like to hear the answer to both of these.
Both movements I view as “Love the goal, troubled by the things people do under the banner”
http://www.cracked.com/video_19536_why-its-impossible-to-advance-cause-online.html
I wish there was a “refutopedia” on the web, that had short and simple refutations of the top 1000 fallacies in Economics and other widely misunderstood fields.
Should we build one?
It’s hard to do this (and I think it’s one reason Arbital didn’t go anywhere). I think what’s going on here is that the critical shortage isn’t short and simple refutations but the cognitive infrastructure to understand them.
A very simple solution for Simpson’s paradox has existed for 20 years now, in a very clear paper, yet most people are still unaware of it, or of why it’s a refutation.
Yeah, let’s not start with Simpson’s paradox. There are many far easier to understand fallacies that tons of people believe.
Like the “lump of labor” fallacy. If you google that, you’ll find a lot of texts attempting to explain it, but they’re way too long-winded and hedging for what I’m thinking of.
I’ll probably have to write this myself to get what I want…
My prediction is, you are not going to get any traction. A good explanation is a binary relation between two people. You have to tailor your explanation to a person, explanations are not for broadcast media.
An approach which has had some success is to write a lengthy sequence of entertaining articles (blog posts, chapters, …) explaining the background knowledge, thus bridging the inferential gap for a fairly broad range of people. Obviously most people who disagree with you won’t read it, but some might!
And yet, I think Wikipedia, despite its obvious impossibility, does pretty well on content (judging it on subjects I know something about), while being spectacularly successful on traction. Scholarpedia has gone nowhere. Wikipedia itself was the offspring of Nupedia, which lasted only a few years.
How much traction does the Encyclopedia of Mathematics have? In the distant past, before the web, before public internet, I saw a review of it (I think by Paul Halmos), describing the content as magnificent, but the enterprise useless, a monument to sit on library shelves unopened. The EoM still exists and has moved online and transformed into a wiki, but I don’t think I’ve ever seen it come up in a Google search from that day to this.
And then there is TVTropes, which in its chosen field has both high-quality content and traction.
While the difficulty of talking to a large audience is a factor, there is a lot more to it. Textbooks are broadcast media. So are lectures, on a smaller scale, and academic papers. In my experience, one-to-one tutoring is a small part of how most learning on technical subjects happens.
Richard, I think Wikipedia does a pretty good job on subjects where you don’t have to be technically correct. On subjects where you do, it’s basically the luck of the draw — for pure math it seems great, for stats/ML it’s decidedly NOT great.
Like other such projects, Wikipedia is a creature shaped by incentives. And experts are not incentivized to spend time on Wikipedia, or to battle revert-happy bridge trolls there.
—
For expert-crafted content like encyclopedias or courses, the limit isn’t the content but the student’s mind. It’s true that universities do lectures, but that’s basically because they have to in order to parallelize. I don’t think it’s controversial that lectures (especially big lectures, which less resemble one-on-one interactions) are not great for learning anything.
The problem with lectures is that they’re both non-interactive and real time. Fine for TED fluff, not good for anything that has to be studied rather than just listened to. Written media, as per the original suggestion, sits there for as long as a reader cares to spend with it.
You mean the reader not being smart enough? The impression I got from Arbital and its five or six predecessors was a shortage of people who could write content.
I do mean the reader, but I don’t necessarily mean “not smart enough,” I mean lacking the background to understand the explanations. For example, most commenters here would understand the Simpson’s paradox explanation with sufficient background (in my class I teach it after a few weeks to undergraduates who had a single machine learning class by that point).
So what is the refutation of Simpson’s Paradox?
(I suspect that you and I have different ideas of what a refutation is. I don’t consider “it is possible to figure out which one of the apparently contradictory results is meaningful, given a particular model” to be a refutation.)
Read Pearl’s paper on this, and see if it makes sense to you. This is not a comment sized explanation, sadly.
I have no idea which paper you mean. The wikipedia article links a paper by someone named Judea Pearl, but this paper is from 2013, so cannot be the 20 year old paper you are referring to. It refers to a 2009 paper by Pearl which isn’t 20 years old either. At any rate, the paper explains, under certain circumstances, which of the two seemingly contradictory results you should accept, and does not use the word “refutation”.
I would use the word “refutation” to mean “the paradox says that X happens. X cannot actually happen.” Simpson’s Paradox has not been refuted by this definition; I can still write up a situation where Simpson’s Paradox happens.
Sorry, Simpson’s paradox is a veridical paradox (i.e. the reversal really happens; it is a property of tables of numbers). The explanation is why we think it’s surprising.
The paper is recent, but the explanation goes back to his book in 2000 (and in fact even before then).
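To make the “property of tables of numbers” point concrete, here is a minimal Python sketch. The counts are the classic kidney-stone textbook illustration (not data from Pearl’s paper): treatment A wins inside every subgroup, yet B wins in the aggregate.

# A minimal demonstration that Simpson's reversal is just arithmetic
# on a table of counts (classic kidney-stone illustration).
groups = {
    "small stones": {"A": (81, 87), "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

# Pool the subgroup counts and compare success rates both ways.
totals = {"A": [0, 0], "B": [0, 0]}
for group, arms in groups.items():
    for arm, (successes, n) in arms.items():
        totals[arm][0] += successes
        totals[arm][1] += n
        print(f"{group}, {arm}: {successes}/{n} = {successes / n:.0%}")

for arm, (successes, n) in totals.items():
    print(f"overall, {arm}: {successes}/{n} = {successes / n:.0%}")

# A wins in both subgroups (93% vs 87%, 73% vs 69%),
# yet B wins overall (83% vs 78%): the reversal is pure arithmetic.

Nothing statistical has gone wrong here; the reversal is a property of the table itself, which is why Pearl’s resolution is about which comparison you should act on (a causal question), not about whether such tables can exist.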
That’s an explanation, not a refutation.
Details? (I think I understand Simpson’s paradox, but I’m unclear on what a “solution” to the paradox would represent.)
I think a good “refutopedia” would be a listing of academic review articles describing the current state of play on major debates in various academic fields. Or high quality lecture notes from graduate (and good undergrad) classes, for a longer treatment.
Several years ago I went through a very nice set of philosophy lecture notes that described the major arguments on many philosophical questions. Off the top of my head, I think it was “Problems in Philosophy” on MIT OCW. HTMLing that (and equivalent sources) would be a good start.
Sorry, I realize this is somewhat different than what you’re proposing. I’m really talking about more of an “argumentpedia”, i.e. a reference for high quality arguments on various questions. Refutations of standard wrong arguments would only be a small part of this.
That sort of thing has been done in book form:
https://books.google.co.uk/books/about/Philosophical_Problems_and_Arguments.html?id=cRHegYZgyfUC
You may very well be right about rationalists, but I am not seeing that leftists have learned too much from the Soviets and socialist failures since then. Many of them continue to advocate the same solutions that brought disaster to the Soviet Union (or Venezuela). Of course, you said “the best leftists”, so there’s always a no-true-Scotsman claim available towards any leftist that does not fit the picture.
Yeah, I was about to say. I’ve actually never met or read a Leftist who was “humbled by …etc.” those examples.
I’ve met/read Leftists who try to explain them away; they’re not “humbled.”
I’ve met/read a few Leftists who are from traditions that were against those “experiments” before they were even begun; but they’re not “humbled” either, they’re proud, and they profess to offer an alternative to the mainstream of the Left.
IOW, I’d agree that the “best Leftists” would be ones who were “humbled by …” etc. But where are they?
And it’s actually a bit similar for rationalists. There’s something in the religious criticism that rationalists are overly proud, that “humility” is the last thing they have any idea about, precisely because they don’t have any sense of something beyond them and greater than them (imaginary though it may be).
In fact, I’d go further and say that the connection between rationalism and the Left is quite deep: when you’re a clever kid, you kind of sort things out in your mind quite early; possibly you never revise those foundational opinions. This leads to the childishness some have noted about the Left, particularly the modern Left (most notably Evan Sayet, whose acute and vicious analysis of the Left can be encapsulated in the claim that the Left is “regurgitating the apple”).
A sense that more is possible? The level above one’s own? Superintelligent AI? Rationalists absolutely do have such ideas.
A difference between the religious idea of something greater and the rationalist one (and, for that matter, the EA one) is that in the latter, you are supposed to actually move towards it. In the former, you are expected to piously beat your breast and proclaim your unworthiness, but you aren’t allowed to get even a little bit less unworthy from one year to the next.
I suppose it can be a sort of comfort to believe that you are, that everyone is, utterly evil, vicious, and damned without hope save for the infinite mercy of a loving God whose grace you are absolutely unable to attain by any act of your own. It would also be a comfort to believe that you are already above other people (but if you aren’t one of [insert your own list of remarkable people], more liable to refutation by observation). Neither of those stories requires anything of you. On the path of reality, it is possible to do better by your own efforts, but it requires real effort, rightly directed, and you may fail.
And if, as a religious person, you insist on something provably unattainably greater than ourselves, we have something for that as well.
The difference is that there seems to be a certain gung ho optimism about the whole affair. ‘We’re not there yet, but we will be!’.
As you yourself said, the religious view is if anything too humble and self-flagellating. It’s not so much that we should always remember that greater things and ways of being exist even though we might never reach them, as that they’re implausibly difficult for a lowly hominid to attain and we should not aspire to be more than ‘only human’ (or that’s how I see it, anyway).
I think that is a much worse view, but it’s obviously one far more applicable to humility than ‘we better get it right when we inevitably build a god‘.
So I think it’s a poor defence to say ‘sure we have something to be humble before just like religious people’.
You see that as a point in favour of religion. I see it as a point against. As I wrote, the religious view amounts to saying that you must be better, but you cannot be. You must try your utmost, but it’s against the rules to succeed. Everything is possible, but everything is impossible.
The rationalist view is that so much more is actually possible. As possible as steam engines.
The rationalist writes “Plus Ultra” on a signpost pointing into an unknown landscape, inviting explorers to enter. The religionist writes the same on a signpost pointing into an impassable abyss, forbidding anyone to go beyond it.
@richard I wrote that before even reading down because the comparison struck me as not relevant to humility. I’ve edited my post since to clarify that I don’t think this humility is worth it given what it’s tied up with. Separately, I’m also not so sure that ‘humility’ is a straightforwardly good thing. My apologies for being unclear.
Ok, sorry for misunderstanding.
Therefore it is written: “To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.”
If you’re abasing yourself before unattainable greatness, you aren’t being humble at all. Abasing yourself is never about humility. It’s a social action. You don’t bow to your calculator, though you’ll never be half as good at arithmetic.
The proper response to unattainable greatness is to figure out what makes it great and copy those bits as best you can. Striving for perfection is like following a compass: no matter how good you get, you’re always prompted to get one step closer to perfection. Trying to be a virtuous member of the tribe – to be better than the people around you – doesn’t work nearly so well, because you’re no longer looking for ways to improve.
If it’s inevitable that someone will eventually “build a god” (which I think it is, given that my brain runs on physics), humility is figuring out what the true goal is and taking extreme care to get as close as possible. Saying, “Oh, we aren’t superintelligences, how can we possibly decide what one should do, we are not worthy to contradict something greater than ourselves” is not humility, it is social modesty. Likewise, saying that we cannot possibly build something cleverer than a human is a social behaviour: it’s claiming that we aren’t special enough to succeed, or else that humans are too special to be exceeded by a mere machine. The humble choice is to take precautions anyway, because the stakes are high and this kind of argument has been wrong before.
It must have been eaten as spam.
This might have been too long before; breaking out the example text to shorten it up.
First off, I want to say that as part of the immediate present conversation, Scott’s argument is reasonable and reflects an honest view of the people he is defending.
But I don’t think that Scott’s “Improv Sketch” characterizations of anti-economics, anti-rationalist, and anti-psychiatric arguments are sufficient steelmen of these popular sentiments for the defense he mounts of these fields in their current forms. There are several possible levels of magnification/generalization at which you can analyse such vast subjects as “economics”, “rationalism”, or “psychiatry”, and Scott seems to be choosing the most convenient possible levels both for his foils and for his defenses.
For example, you could look at psychiatry at the following levels of magnification/generalization where each level in some way summarizes observations of the immediate lower level:
1) At the immediate personal level – Has a specific individual been helped or harmed by their interactions with psychiatrists and/or by policies and institutions over which the psychiatric profession has influence?
2) At the general personal level – Are individuals in aggregate helped or harmed by existing real-world implementations of psychiatric treatment combined with the psychiatric profession’s total influence upon policies and institutions?
3) At a paradigm/framework level – Do the dominant paradigms and theoretical frameworks within which psychiatric research is occurring at a given time map well to reality, and/or do they tend to be generative of beneficial therapies and/or policies?
4) At the current program level – Is it reasonable to conclude at the present time that the direction of development from past paradigms / frameworks is in a direction of improvement and self-correction?
5) At the overall program level – Is it reasonable to conclude, upon examining the entire history of the program we call psychiatry, that it reliably produces good, truth, or some other reasonable measure of value in excess of the harm, falsehood, etc.?
In Scott’s defenses of various fields in the linked essay, he consistently describes his foils’ positions at level 3 and then defends at level 3 and 4. In his discussion of psychiatry, the foil is criticizing the harm which was done and the healing which failed to occur in the past by electroconvulsive therapy, which belongs to a specific (past) level 3 paradigm. Scott’s defense is (level 3) we don’t do that anymore because we have better treatment regimes, in fact, we have better treatments now largely because (level 4) we are motivated and competent in our efforts to identify sources of error and correct them.
But there is a perfectly good level 5 argument which I believe would be the proper steelman for why laypeople are justified in being skeptical or even hostile towards psychiatry:
(That the track record of flawed and destructive psychiatric paradigms over time justifies the suspicion that the overall project of psychiatry tends towards error.)
D.C. al Coda for Rationalism and Economics
Based on this analysis, it seems to me that a successful defense of a field should take the form of answering the question: What positive contributions have been made by theories that have since been superseded?
Sample answers: in physics, pre-quantum understanding of atoms enabled chemical engineering; in astronomy, star charts that did not account for the precession of the equinoxes were useful in navigation; in philosophy of science, discoveries like Ptolemy’s were made by crude application of the principle of empiricism without an understanding of probability and statistics.
That is, the field should show a history of producing positive results even when wrong.
Yes, I think it is quite useful to ask in many cases. A researcher usually has to work within some framework of concepts, definitions, and assumptions in order to do productive work. But in any field which is under active, productive investigation, these frameworks seem to get overthrown and replaced quite regularly. Many contemporary experts are in the habit of speaking to laypeople about their fields’ current working models as if they are, in all of their particulars, just as reliable as their fields’ best-supported results.
This style of presentation implies that the expert is entitled to exert a sort of authority over the permissible conclusions of the non-expert with respect to their field. But it is this same style of presentation (conferring upon the whole of the working model the truth status of the best evidenced particulars) that empowers the very sort of armchair dabbler that serious researchers most often complain about: the smart-but-lazy layperson who wants to master the field through shallow study of the model, who proceeds by manipulating the terms of the framework rather than using it to ask testable questions.
So, the meta-model: instead of factually accurate maps of reality, scientific paradigms are just useful sets of assumptions for asking questions and communicating between peers. (In many fields, this is probably not a heretical idea in discussion within the circle of experts; it is only in dealing with the public that skepticism towards the paradigm is unseemly.) I think it is perfectly fair for the public to ignore the current paradigm of a field, as well as claims made based on it, and to accept only those compact, testable claims which have survived through many theoretical upheavals. And if no such nuggets of fact can be identified, perhaps to grant the field no deference at all.
I endorse this meta-model.
I strongly disagree. The current paradigm may not be right, but it’s likely to be much better than any alternative the public can offer. The best general algorithm is “believe the expert consensus unless you understand it and have specific reason to doubt a particular part”. The hard part isn’t saying that the experts are wrong. The hard part is knowing exactly which thing they’re wrong about, and coming up with a better answer, without throwing out all the things they’re right about. “Sell more Braeburns and fewer Golden Delicious” is a business plan. “Sell non-apples” is not.
Skepticism towards the paradigm is accepted between experts but disliked in non-experts because non-experts are almost always clueless. Being wrong is a high bar to pass. Most lay criticism isn’t even wrong: it’s so confused and ignorant that it isn’t even attacking the paradigm, let alone landing a hit. See, for example, most lay discussions of quantum mechanics. And even when the criticism is valid, it’s usually useless. Consider that the last time I went to a lecture demanding a revolution in economics, most of the “new paradigms” were older than me – because making productive use of the ideas is a lot harder than saying that we ought to use them.
Or as Scott put it in the OP:
“They don’t pooh-pooh academia and domain expertise – in the last survey, about 20% of people above age 30 had PhDs”
A lot of the criticism of academia I’ve read has come from people with PhDs. Andrew Gelman isn’t some anti-academic wrecking ball*, but he is extremely critical of a lot of the standards surrounding academic papers (specifically in regard to statistics and “statistical significance”). And your “Yeah, yeah, we learned” take actually reminds me of this post, in which he notes that much of academia still hasn’t learned from problems pointed out many decades ago.
*Bryan Caplan is actually writing a book titled “The Case Against Education”, but a more typical example might be Greg Cochran saying entire disciplines are worthless and should be dissolved.
The most pernicious criticism of academia (that I’ve read) tars the strongest research with quibbles regarding the weakest research … commonly this pernicious practice is largely or entirely subsidized by corporate interests.
As a typical example, here is this week’s “steelman” climate-science:
Summary figure here. Quibbling strawman-astroturfing corporate-subsidized denialism here.
I wasn’t thinking of criticisms of a specific topic, but a more thorough-going criticism. So Gelman’s point about publishability being determined by statistical significance, and publication then being treated as prima facie indication of accuracy (with even being published first counting for more than a replication with a larger sample size and an analysis fixed ahead of time precisely in order to replicate). Or Robin Hanson’s criticism of academia as being focused on affiliation with “impressive” (rather than insightful, as he professes to prefer) people. Paul Romer’s criticism of parts of economics as having abandoned the norms of science would be in that vein, although narrower. And I expect people in the “rationality community” tend to read people like Gelman, Hanson & Romer rather than whoever is at the Heartland Institute.
These guys are in their 40s-50s, and have tenure — and thus sufficient overview for a proper critique of a system they lived in their entire productive lives.
There’s no shortage of older and/or retired and/or non-academic and/or post-academic researchers, who are still vibrantly active in the STEAM-game, long after tenure (or the absence thereof) has ceased to exert any controlling influence upon their work.
• Marsha Linehan (age 73), Cognitive-Behavioral Treatment of Borderline Personality Disorder
• James Hansen (age 75), Storms of My Grandchildren
• Jorge Mario Bergoglio (aka Pope Francis, age 80), Laudato Si
• Jonathan Shay (age 76), “Casualties” (PMID: 21898967)
• Annie Proulx (age 81), That Old Ace in the Hole (also Barkskins)
• Wendell Berry (age 82), It All Turns On Affection (NEH Jefferson Lecture)
• Jane Goodall (age 82), Reason for Hope
• Amartya Sen (age 83), The Idea of Justice
• Ed Wilson (age 87), Half-Earth (also Anthill)
• Eric Kandel (age 87), Reductionism in Art and Brain Science
• Walter Munk (age 99), The Sound of Climate Change
These senior-works are notably consonant, aren’t they? What is the rationalist account of this mutual consonance? Varieties of rationalism that do not naturally explain this persistent, vigorous, unified (and post-economic!) creative unity, are notably lacking in explanatory power, aren’t they?
After all, these folks are plenty old enough to retire. So why won’t they just quit? Quit annoying rationalists, at least! The world wonders.
They do pooh-pooh academia and domain expertise because they do. Yudkowsky told people not to get PhDs. Muelhauser called philosophy diseased.
I thought Eliezer called philosophy diseased and Luke defended it.
It’s true that there’s some skepticism of academia in LW circles (I prefer that term to “rationalist”, because we’re talking about a particular set of people and ideas associated with Eleizer/OB/LW and offshoots here).
But from where I sit in academia, that skepticism seems well-justified, and I know many academics who express similar sentiments.
But skepticism of traditional academic institutions is very different from lack of respect for academics and their work, which Eliezer and company decidedly do not manifest. Well, with some exceptions: the disciples are ever less than the master.
And a lot of the internal criticism of Eliezer has been either that (1) his work is mostly rehashing the best of academic philosophy and behavioral psych/economics, and (2) his views on zombies/physics/whatever are against standard science, or at least minority views within science.
It’s true that LW folks have low opinions of certain parts of academia that shall not be named. But that opinion is widely shared by the better parts of academia. And of course, Sturgeon’s Law applies to academics as well, which is recognized by Eliezer. Of course, Sturgeon’s Law applies to rationalists too (hence the law of diminishing disciples, and all that).
The Diseased Discipline post was written by lukeprog (Muelhauser). It’s partly a criticism of bad philosophy, partly a defence of “good” philosophy and partly a criticism of EY’s writing style.
I think they have a slew of misunderstandings of philosophy specifically. They grumble about philosophers’ use of “intuition” without having shown you can get by without any[*] use of it. They say that it would help philosophers to know how brains work … how? They say it would help philosophers to know about cognitive biases … how? They state most philosophical problems can be dissolved … how do they know?
There is this repeated pattern where philosophy is lambasted with ingroup beliefs about how to do things better that have not been proven in practice, or shown to have high plausibility.
[*]The Use of Intuition in Philosophy
It’s not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning; it is that they have reasoned that they can’t do without them: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers *mean* by “intuition”, that is to say, meaning 3 above. Philosophers talk about intuition a lot because that is where arguments and trains of thought ground out … it is a way of cutting to the chase.
Most arguers and arguments are able to work out the consequences of basic intuitions correctly, so disagreements are likely to arise from differences in the basic intuitions themselves. Philosophers therefore appeal to intuitions because they can’t see how to avoid them … whatever a line of thought grounds out in is, definitionally, an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven’t seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for one’s foundational assumptions.
Scientists are typically taught that the basic principles of maths, logic, and empiricism *are* their foundations, and take that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods … somehow. Their subculture encourages use of basic principles to move forward, not a turn backwards to critically reflect on the validity of basic principles. That does not mean the foundational principles are not “there”. Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like enterprise, in the sense that it consists of problems that have been open for a long time and which do not have straightforward empirical solutions.
“But skepticism of traditional academic institutions is very different from lack of respect for academics and their work, which Eliezer and company decidedly do not manifest.”
I don’t think either EY or Luke have enough overview of academic institutions to properly criticize them. Here’s my favorite example of the kinds of problems the MIRI model suffers from that academia solves — who does oversight for how money is spent? In academia that is the funding agency. So, for example, every year we write a report explaining our expenses, but also talks we gave, how many papers we wrote and on what topic, etc.
—
Best show of respect is reading and using things people have done.
—
Calling an entire discipline “diseased” is basically an epitome of a status diss. I think we are just disagreeing on basic social conventions here.
—
The academic retort to that kind of language is: “MIRI is defrauding impressionable youngsters, and using their money to subsidize a comfortable Bay Area lifestyle, traveling to conferences, and working on the kind of theory safely removed from any kind of evaluation, by peer or practice.”
That’s a pretty serious status diss, right? But then I might say something like “it’s true, mainstream academics have some reservations about the alternative model MIRI presents, but we do like some of their stuff!” For example, I honestly think their functional DT paper is super neat.
This second thing is what it feels to me you are doing here, while the first thing is what it feels to me they are doing.
—
Phrasing matters — our good friends EY et al apparently are super bad at phrasing. I don’t think it’s any kind of disability, I think it’s simple ego.
—
“Well, with some exceptions: the disciples are ever less than the master.”
Hard eyeroll here. That this sort of language rolls off your tongue so easily is a cultural problem in the community I am talking about.
Yes, many of us academics notice the skulls in our industry.
More charitably, but to much the same effect, in the introduction to his Classical Algebraic Geometry: a Modern View, Igor Dolgachev reminds his readers:
Without skulls there can be no evolution, can there? And the vast majority of skulls are not noticed, but rather are buried and lost, without ever being appreciated at all, aren’t they?
This sobering academic reality is no secret (obviously); still, graduate students (and their professors too) commonly underappreciate it. “Every acolyte imagines themselves a prophet; every prophet imagines themselves a messiah.”
If I were an actor in an improv show, and the prompt was “person criticizing philosophers who’s never read any philosophy”
——after reading some philosophy——
Yep, that seems about right.
It’s great that you made this mistake, because I think it’s instructive. What you describe is a stereotype of philosophers that looks accurate when you read the philosophy that you’re most likely to read as a nonphilosopher. The people who talk in the ways you describe are those in the postmodern and continental tradition, which is what people who are not philosophers are likely to read, precisely because the philosophy they get access to is likely to be selected for being exciting and profound-sounding. In real philosophy (at least in the Anglophone world), these traditions are a frequently disdained minority fighting for recognition. The proper stereotype is closer to “overly precise and pedantic nitpickers working on impractical problems like the correct semantics of epistemic modals.” (maybe something about mental masturbation still applies)
The more general point here is that misleading stereotypes can get confirmed when the most salient examples of a tradition to outsiders are nonrepresentative. And I’m sure this effect happens with rationalism as well. “I hear that rationalists/feminists/etc. are X. Let me read some rationalists/feminists/etc. to see. *read the most salient nearby examples of rationalism/feminism/etc., which are selected for something other than representativeness of the group*. By golly, rationalists/feminists/etc. ARE X!” And of course, the fact that they actually look for examples gives people false confidence that their stereotype is grounded.
I would just like to say that I love your name and avatar.
It’s a portmanthree! I was quite pleased with it. And I used all my art skills to photoshop a Nietsztache on the cat.
Would you expect to be able to understand a random maths paper? Have you considered the possibility that you can’t understand random philosophical works because you need to go through a systematic process of building up background understanding and familiarity with terminology? The kind of systematic process that education is?
The more appropriate test is whether somebody with minimal familiarity with the discipline could write a paper good enough to fool the experts. Essentially it is a Turing test: if a random person with minimal domain expertise can fool the experts, then the experts are as expert as a random person, that is, their expertise is fake.
In post-modern philosophy/social sciences/X studies it is definitely possible to write such a paper, as Alan Sokal demonstrated. In maths I don’t think it has ever been done, and I don’t expect it to be possible.
well. if you meant continental philosophy, you could have said so…
Fails because of shibboleths.
I’m pretty sure I couldn’t write a fake homeopathy paper and not because homeopathy is proper science.
Passing the test is a necessary condition for true expertise existing in a field, not a sufficient one.
I thought this post was going to include a reference to the Mitchell and Webb skit, “Are we the baddies?“, but that’s kind of the opposite point being made.
You beat me to it. Speaking of which, who gets to design the rationalist uniforms?
Hugo Boss AG does some good work.
I think there are some weak criticisms of rationalism because there are some mediocre ambassadors for rationalism. Even if the field has developed as a field, many individuals are still developing as individuals and haven’t noticed the skulls yet. Add to that some personality quirks and I think you basically have an explanation for the phenomenon.
As for evidence I bring you… a synthesis of anecdotes!
I’m not a member of any explicitly rationalist community, but I’ve read most of the canon, agree with and try to practice a lot of it, and (I’m pretty sure) can “pass” as rationalist in a social setting. Furthermore, from reading your blog and a few others, I know there are at least some rationalists who are super smart, and whom I’d love to be friends with and talk with at length. So, occasionally I go to rationalist meet-ups and see if I can meet interesting people. Sometimes I do, and it’s great.
More often, though, I end up in conversations with people who I would describe as: high-ish IQ, very loud, often smelly*, fairly arrogant, socially graceless, not very worldly, and very into RATIONALITY. If nothing else, these people usually know a bunch of interesting facts, so I don’t really mind talking with them. But there’s no way that such people make good impressions on your average Joe, or even your average well-educated, high-ish-IQ Joe. More likely, such people hold forth loudly about RATIONALITY, possibly without taking the care to check their own thinking or to give their interlocutor the respect of a sincere back-and-forth. Maybe they speak reverentially of Bayes’ Rule without explaining why it’s special. Maybe they’re in Chapman’s “rationalist ideologies as eternalism” phase. One way or another, they make a weak impression on behalf of rationality.
I bet for every 1 top-tier rationalist there are 10 people essentially as described above. Add to that the selection bias wherein the loud arrogant ones are the ones who have the most conversations with people outside the community, and there you go. If Cowen, Smith, et al. meet a handful of people like this, and don’t take the time to figure out which people are the more fully-developed thinkers, is it any wonder they form the opinions they have?
*I don’t mean this with any malice and I hope it’s not a distracting observation. It’s an honest, common experience I’ve had. I’m not especially sensitive to bad smells.
After reading various comments here I have become more and more confused about what in the name of everything LessWrong is. Considering that the site itself tells me that it has changed significantly over the years can some of the Ealdormen of the rational community explain to an outsider what it started out as and what it became?
I’m sorry if this seems incredibly lazy on my part but I can’t trust that reading through the site as it is right now is a good representation of how it was (actually, I’m pretty sure it wouldn’t be).
Sometime around 3500 BC in Internet years, economist Robin Hanson founded a blog called Overcoming Bias, where he posted about heuristics and biases from an economics perspective. Most of this blog was his own work at the time, but he did and still sometimes does accept guest writers on it.
Later on — let’s say around 29 AD in Internet years — Eliezer Yudkowsky began posting a series of articles on that blog on the theory and practice of human rationality. This series started attracting an audience of its own, different from the traditional cranky economist crowd, and was spun off into its own blog, Less Wrong, where the series continued; they were organized into lines of related posts, and therefore came to be known as “the Sequences”. The blog housing them was intended from the beginning to be a group project, but in practice it was Eliezer’s baby — partly because he held administrative rights and partly because in those days he was just ridiculously productive (the Sequences total about half a million words). It did attract more writers, though, among them our host, who’re collectively responsible for a number of secondary sequences and a much larger variety of standalone posts.
This proved stable for a while, but eventually Eliezer moved his attention away from LW and toward other projects, most notably Harry Potter and the Methods of Rationality. While HPMoR was running, LW went through something of an Eternal September period: the fanfic was driving more people than ever before to the blog, but there wasn’t much in the way of new content being created. Many of the old guard split off their own blogs at this time, including this one.
Once HPMoR ended, there wasn’t much holding LW together but its history, and it went into a long decline. Other hubs sprang up to serve the community, of which this is probably the largest in terms of readership if not in terms of content. Much more recently, there’s been something of an attempt to revive LW, but I haven’t been following it all that closely.
Was there a specific reason why the other writers couldn’t provide new content during the Eternal September? It seems to me that this was kind of the deciding moment between having one supercommunity vs. a network of different blogs and various personalities with similar values.
I believe that abusing the karma system by mass downvoting and creating sockpuppets played a role, although I am not sure how big that role was.
Speaking for myself, it’s quite demotivating to know that there is an obsessed person on the website who can downvote any article or comment to oblivion, for reasons that have nothing to do with the content of given article or comment, and that there is nothing anyone can do about it. A few potential contributors became targets of the mass downvoting, so it didn’t make sense for them to try posting anything.
It was a real-life scenario, where the Bayesians completely failed to coordinate against a single barbarian. 🙁
My memory is somewhat different. As I recall when Overcoming Bias was created it was originally intended to be a group blog, with multiple contributors, Robin and Eliezer being only two of them. Eliezer became a major contributor, and continued posting on Overcoming Bias for a long time. Eventually Less Wrong was created as another group forum and Overcoming Bias became Robin’s personal blog. For the most part the sequences were written and posted on Overcoming Bias before Less Wrong was created. Many (all?) of the older Overcoming Bias posts were eventually ported over to Less Wrong, so it appears if you look at Less Wrong’s archives that it existed much further back than it actually did. (Check the older posts and you’ll see the older comments were not threaded – because they were ported from non-threaded OB posts – but newer comments made after the porting are threaded).
I’m no Ealdorman of the community, but I’ve been reading LessWrong since its inception, and Overcoming Bias before that (but not since). So I find myself in a position to give an answer. I have no personal acquaintance with Eliezer and do not know how much of this he would assent to himself. But here is my personal interpretation of the history, as I have seen it. Corrections welcomed from those closer to the history than myself.
Back in the day, Eliezer Yudkowsky and Robin Hanson started a joint blog, Overcoming Bias. It was about rationality, but their respective approaches diverged so much that it was not long before they split, Hanson retaining the OB name and Eliezer founding LessWrong to continue with his work. His ultimate motivation was to find people capable of working on what he saw as the vitally important subject of how to design really powerful artificial intelligences that will not promptly destroy us all. It is vitally important, because he foresaw that such machines will inevitably be built in the coming decades, and because designing them to not destroy us all as soon as they are turned on is a terrifyingly small target that cannot possibly be hit unless the task is set about with real understanding and meticulous accuracy, using mathematics that does not yet exist.
The reasons that this would be the inevitable outcome go to the foundations of rationality itself. And so, finding that even people reckoned as really smart were unable to pursue reason in the way that it must go, he undertook what he saw as a necessary groundwork by presenting those foundations as best as he could. This work is what is now referred to as “the Sequences”. It should be noted that with few exceptions, Eliezer does not claim originality for anything in the Sequences, but I think the synthesis itself is a great accomplishment.
Many came to drink from this well, but few deeply; but this he expected. He spoke of “raising the sanity waterline”, a valuable thing in itself, but his greater motivation was to attract by this activity people who might be capable of contributing to the greater work of the Friendly AI Problem.
And so it turned out, and he founded MIRI, and left LessWrong, whose continuing denizens say, “why does he no longer speak here?” And the reason is that he is pursuing his original work in another form, in another place. LessWrong has also spun off CFAR, which is likewise pursuing its own work of “raising the sanity waterline” elsewhere and by other means. Others with much to say on rationality-related subjects began their own blogs, of which Scott’s is one. Little remains of LessWrong, although since the beginning of this year there have been efforts to revive it. Time will tell.
Well, there’s your problem right there.
This concern would be weighty, in a LessWrong-world in which AIs operated by deductive reasoning from facts and sensor-data. But that is not how we biological minds work; neither is it how our most advanced and fast-evolving artificial minds work.
Instead it turns out that general intelligences — both biological and computational — operate most effectively by non-deductive pattern-recognition, with said patterns encoded (abstractly) as varietal geometries that are realized (concretely) as neural nets.
Ratiocination is one of several methods that are proving to be effective in sculpting cognitive varietal geometries, but the microscopic processes of cognition (both biological and artificial) are not themselves ratiocinative.
Right or wrong, this post-rational view of cognitive processes — in which intelligence is associated both abstractly and concretely to varietal geometries rather than ratiocination — has become the transformatively dominant AI research and engineering paradigm, hasn’t it?
Ditto for psychiatric medicine, needless to say! 🙂
Machine learning/AI is what I do to put food in my face, and I gotta tell you, I’ve never heard of “varietal geometries” in this context. The flavor of the month is “generative adversarial network” methods (which achieve some spectacular results).
One of the main claims of AI safety researchers/advocates is that how domain-general artificial intelligence is achieved is beside the point. The point is that there’s every reason to expect that intelligence and goals are orthogonal, and that getting the goals right is hard. The paradigmatic fictional example here is the genie in the bottle that does what you said instead of what you actually wished.
If you’re intent on having the things not try to kill you, then having “adversarial” in their name isn’t the most comforting thing.
New this week on the arXiv server is “Deep learning and quantum entanglement: fundamental connections with implications to network design”, by Yoav Levine, David Yakira, Nadav Cohen, and Amnon Shashua (out of Hebrew University of Jerusalem, arXiv:1704.01552).
Not every article that intimately mixes the literature of “deep learning” with the literature of “quantum entanglement” is a good learning reference …but this one is (as it seems to me).
In reading this literature, it is helpful to keep in mind that (to mathematicians) tensor networks are just classical varietal geometries from which lower-dimension subvarieties (namely, the tensor networks themselves) are “sculpted” by imposing further algebraic constraints.
The algebraic sculpting serves to improve computational efficiency — commonly by factors of “big” 🙂 — and the details of the algebraic sculpting are where the physical intuition and/or AI heuristics enter.
None of that implies that the target of a machine that will not destroy us all is any less terrifyingly small a speck in the vast space of ways it can go wrong. Catastrophic, existential failure is the expected result if the basic problem is ignored. “But we’re not building it out of logic” makes the problem worse, not better. If you don’t understand how a machine vastly more intelligent than you works, you have no chance of making it doing what you want. You will merely be matter that it has its own uses for.
For a machine that can just sit there playing chess or turning Monet paintings into photographs, magical explanations do no harm except for muddying the thinking of the people they circulate among. For a machine that can outthink us by orders of magnitude, such thinking will not do. It is the sort of thinking that the Sequences are intended to address, by going right back to the foundations underlying all effective ways of dealing with the world.
BTW, “varietal geometries”? My Google-fu only finds mentions of varietal geometry, dynamics, manifolds, and so on in advanced and somewhat speculative quantum mechanics.
Condensed from a Mathematics StackExchange question:
In the mathematical literature, by far the most common varieties are algebraic varieties, because there exist deep theorems — a paradigmatic example is Chow’s theorem — to the effect that various broad classes of functions having certain desirable properties, necessarily are algebraic functions (possibly disguised by reparameterization, e.g. Boltzmann machines).
Hence in respect to the mathematical literature, speaking of varietal geometry versus algebraic geometry is largely a matter of taste, with the second usage being (at present) more common.
Andreas Gathmann’s free-as-in-freedom book-length on-line class-notes Algebraic Geometry, in the introductory chapter, surveys the implications of this mathematical worldview:
One way to structure further study (that works for me anyway), is to regard varietal geometries (in practice, algebraic varietal geometries) statically as powerful tools for representing objects, and dynamically as powerful tools for simulating dynamical systems. Here ‘representing objects’ includes ‘representing minds’, and ‘simulating dynamics’ includes ‘artificial cognition’.
There is no shortage of literature associated to this worldview; for concrete calculations the textbook by David Cox, John Little, and Donal O’Shea, titled Ideals, Varieties, and Algorithms (2007), is one good start (among many).
Needless to say, it’s perfectly feasible to “get started” in AI work — either as an applications programmer or (less ambitiously?) as a philosopher — without ever acquiring a cognitive capacity to appreciate this varietal/algebraic/geometric worldview … a worldview that (in practice) is much more than a set of facts from which researchers reason. The literature of varietal/algebraic geometry, though immensely rich, definitely isn’t easy, for beginning students especially. The introduction to Igor Shafarevich’s Basic Algebraic Geometry (2007) soberly advises
On the other hand, for some AI development work (notably including proof assistants) a personal in-depth cognitive assimilation of this worldview is practically essential (as it seems to me anyway). As one example (among hundreds of diverse examples) see this month’s arXiv preprint “Algebraic Foundations of Proof Refinement”, by Jonathan Sterling and Robert Harper (arXiv:1703.05215).
As a viable path forward, it is no bad strategy to pick problems that you care about — ranging from dystopian AI scenarios to psychiatric connectome problems — and rethink those problems in varietal geometric terms. There is no shortage of articles, books, software packages, and (most important) colleagues to help.
I am now gripped by a terrible fear that J. Sidles is the first super-human-level AI.
Well, so much for worrying that AIs will be so convincing they can get us to do anything.
Thinking machines require heteronyms too! 🙂
(various translations)
The problem is the name.
What’s the name of truth?
If you want a truthful movement you have to change its name every 3 months, reject any social cachet you’ve accrued, and force people to engage solely with the content.
May I interest you in XFINITY brand cable internet?
I like Spock.
Can you point to examples of critiques of the rationalist community which have come from outside the community and which you think are worthy?
I believe I was this annoying person until I stumbled upon this blog a few months ago. It made me see what rationalism is really about.
But most people I’ve met calling themselves rational aren’t like that. Some of them just like to mock religion (and not even in a smart way). Some just use data taken out of context to justify their racist or sexist opinions.
Now I know they’re wannabe rationalists at best, or maybe just using the word “rational” to sound more credible. But this might be where you get these bad opinions from.
Nobody’s given the Orton quote, so I will:
Talking about being self-taught autodidacts: does anyone know if Scott (or anyone else) has ever published a list of textbooks to read in order to become a good rationalist? I have seen lists on rationalism before, but they only focus on what it means to be rational. I am more interested in what kind of knowledge (or background) you need. For example, textbooks on economics, psychology, evolution, ethics and so on.
Aren’t the Sequences the sort of designated “thing you have to read before you can call yourself a rationalist”?
To some people.
But actually, the fact that “Rationality from AI to Zombies” is the only thing people would generally point to as an answer to afirebearer’s question is probably a failure.
It’s frequently been asserted that much in the Sequences is repackaged from other sources. Having a list of sources would have been prudent.
This has been done on Less Wrong, ask on an open thread.
One important potential failure mode to be braced for: What happens when rationality memes escape into the wild? In other words, what preposterous ideas will result from people who get their opinions from a third-hand rumor of rationality? Right now, this is too obscure for that to be a major danger but that might change.
Some might say the preposterousness has already happened, after people got their ideas from secondhand renditions of Kahneman, Deutsch, Keynes, etc…
I had the opportunity to read CFAR’s latest handbook thanks to Duncan’s offer in the comments above. I read it with a fairly critical eye, with a focus on figuring out exactly what is going on in the dynamic this post illustrates. My default frame for the whole read was “Rationality is pretty excellent, so why do we have so much trouble convincing people to adopt it?”
If I could boil it down to the single most serious issue present throughout the whole body of work, it’s that CFAR seems to have a split focus between remedial work to get neuroatypicals up to speed with the baseline population in terms of life skills they may have missed, and optimization tactics to think better, faster, stronger, more accurately than the average bear. Both of these are excellent and useful goals. However, there is near zero overlap between the material that is relevant to each group.
The examples used to illustrate the various techniques include trying to remember to take the stairs as a bit of exercise, learning to walk through crowds at the mall, avoiding eating a piece of cake, trying to climb trees more (while tolerating having sap on one’s hands), getting off the couch to go for a jog, and learning to “Feel like a badass” until they realized they were a “hardcore spirit warrior”.
To the reader who can already interact with society, feed and dress themselves, pay their bills on time, motivate themselves to do things, and whose problems represent a higher bar than “Just get off the couch and show up”, this syllabus has vanishingly little of value. I’d be much more willing to buy in if more of the examples started with “When Anna was competing against 2000 people for a research position at NASA she…” than “When Joe was struggling to motivate himself to shave, put on pants, and leave the house he…”
I don’t mean to be judgemental towards the people for whom this is useful advice, though if this is the demographic we’re teaching, we need to drop every hint of arrogance right this second. None of this ‘Can’t the muggles see that this is the better way?’, ‘Don’t they know we have an average IQ of 135?’, ‘Hardcore spirit warrior’, and ‘Systematized winning’ stuff.
Essentially we need some honest signals of the accomplishments that the material has produced that are relevant to the audience we want to attract. The examples about parkour were a great bucking of the trend (which then got bogged down in some seriously questionable training advice: ‘If you want to get really good at climbing, just climb! Don’t lift weights or do squats, just climb!’ … ignoring that nearly every serious athlete in the world squats and weight-trains in addition to their sport-specific training; ok, not 100%, competitive jockeys are a notable exception), and we need more examples like that. Positions at NASA, Nobel Prizes, prestigious awards, Guinness records, championships … anything that demonstrates that these skills help one actually win in *competitive* systems. As much as he exaggerates absolutely everything, Tim Ferriss is probably our best model here.