I am going to do something very dangerous today, something that makes me acutely aware of my own mortality. I am going to disagree with Robin Hanson.
In my defense, he wrote an entire blog post called Don’t Be ‘Rationalist’.
The key section:
This blog is called “Overcoming Bias,” and many of you readers consider yourselves “rationalists,” i.e., folks who try harder than usual to overcome your biases. But even if you want to devote yourself to being more honest and accurate, and to avoiding bias, there’s a good reason for you not to present yourself as a “rationalist” in general. The reason is this: you must allocate a very limited budget of rationality.
It seems obvious to me that almost no humans are able to force themselves to see honestly and without substantial bias on all topics. Even for the best of us, the biasing forces in and around us are often much stronger than our will to avoid bias. So we must choose our battles, i.e., we must choose where to focus our efforts to attend carefully to avoiding possible biases.
Robin is choosing to treat rationality as a limited resource that must be budgeted, though he doesn’t explain why.
A commenter on his blog, Silent Cal, asks him: how do you know rationality isn’t more like weightlifting? The more weights you lift, the stronger you get. If you want to become strong, it’s a good idea to “blow” your weightlifting “budget” on as many useless tasks as possible, as often as possible.
Robin says that rationality may be both like weightlifting and like money. Money, too, has a way to “buy” future strength: if you invest your money today, you will get more of it tomorrow. But, he says, even when you’ve made a fortune off of good investments, you still have a limited budget. Or, in the weightlifting metaphor: no matter how strong you are, if you’re going to be asked to lift a weight at the very limit of your strength at 5:00, don’t exhaust yourself at 4:30.
But I propose a third metaphor: rationality neither as budget planning nor weight training, but as habit cultivation.
Wait, no. Not a metaphor. Whatever the opposite of a metaphor is. An actual thing.
During my abortive attempt at learning aikido, my instructor taught me some basic rules of efficient bodily movement and posture and suggested that I follow the rules not only in aikido classes but all throughout my daily life. The reasoning was: right now I have certain unconscious postural habits that I adopt without thinking about them whenever I need to sit, or open a door, or whatever. If an aikido instructor is standing next to me, telling me exactly what’s wrong with each of them, I can probably figure out what he’s talking about and correct them. But when I leave aikido class, I’m probably going to relapse into my normal habits, since after all I’ve been doing them for almost thirty years now and they require less effort. The fact that I relapse into my normal habits outside aikido class probably means I’ll also relapse into my normal habits in a stressful situation like a fight, when I really need to be thinking about other things besides posture. But if I can train myself to use proper aikido styles of movement even when I’m doing something stupid like opening a door, my body will become so used to them that they will be the style I default to when my mind is otherwise occupied trying to deal with the guy swinging a broken beer bottle at me. Or, even if I am thinking “aikido aikido aikido aikido” at that point, which I might well be, I will be thinking about the complicated impressive things that supervene on the basic movements, not having to worry about the basic movements themselves.
This whole affair started when someone asked “What if rationality were kind of like a martial art?” And part of the answer to that question would be: you had better make the fundamentals of it perfectly, entirely, down-to-the-bone natural.
But I have an even more relevant example of habit cultivation.
Lucid dreamers offer some techniques for realizing you’re in a dream, and suggest you practice them even when you are awake, especially when you are awake. The goal is to make them so natural that you could (and literally will) do them in your sleep. I can attest that this works. You cultivate the habit of worrying about whether you’re dreaming, and then once the habit is established it continues even in your dreams, at which point it becomes useful.
And I think this is a good metaphor for rationality because it’s about holding on to consciousness.
The problem with dreaming is that it depresses your natural ability to wonder if you are dreaming. Like, you’re being chased around Venice by a giant dragon with the head of your second-grade teacher, and you’re wondering a lot about whether you should swim for it in one of the canals, and not at all about whether this might, just maybe, be a dream. That’s why it’s so important to cultivate the habit of worrying about it, because habits can survive even in states of weakened consciousness where the logical thought that tells you it’s now time to dream-check can’t.
And the problem with irrationality is that it depresses your natural ability to wonder if you are being irrational. One of the fundamental skills of rationality is noticing that you are confused, which also happens to be one of the fundamental skills of dealing with being chased around Venice by a giant dragon with the head of your second-grade teacher. Things like “I should notice when it is a time when I should worry about whether I should notice I am confused or not” just collapse back down to “I should notice when I am confused”. This is not something you can do by conscious thought in the grips of the same weakened consciousness that is causing the problem to exist in the first place. Your only option is the same one the lucid dreamers use: cultivate the habit of always doing the right thing, and hope that the habit will be there when you need it.
“The more weights you lift, the stronger you get. If you want to become strong, it’s a good idea to “blow” your weightlifting “budget” on as many useless tasks as possible, as often as possible.”
What?! No, I think this is well understood to be not at all the case! There’s even a Wikipedia article on it: http://en.wikipedia.org/wiki/Overtraining . If you want to become strong, it’s very important *not* to blow your weightlifting budget on useless tasks that don’t actually increase strength, but do reduce your capacity to train.
Edit: but you weren’t actually arguing that, I guess. Ah, well. I’m leavin’ it. 🙂
Overtraining is when you exceed the “weightlifting budget”.
“Of course too much is bad for you, that’s what ‘too much’ means.”
Hmm, on the other hand, there are a lot of things that can make you tired but don’t make you much stronger. Or to put it another way, some exercises are much more effective at making you stronger than everyday activity.
Put like that, it’s actually quite disanalogous to most rationality skills, because we don’t have reproducible exercises that can bulk up your rationality muscles fast – take a rationality wimp and turn them into rationality Ahnold.
I am tickled by the idea that there might be rationality bodybuilders who lift for ‘muscle mass’ and rationality powerlifters who lift for ‘strength’, whatever those might map to.
I just want to know what shady website I need to go to to get rationality “steroids”.
Overall, I think you are right, but with a broader view of rationality, Robin Hanson has a point (though not, I think, for the reasons he provides).
I think there are two parts to approaching something rationally – noticing confusion/irrationality/bias, and acting to correct it. Or perhaps it is more accurate to say that the former is “rationality,” which is not budgeted, and the latter are supporting skills, which are. For instance, when you see an odd-looking statistic that just happens to confirm something you would like to be true, you would do well to have a mental habit that leads you to notice the potential for confirmation bias in this situation. Once you notice this, you have to decide how to respond to it, and this is where supporting skills come in. The ideal action might be to figure out where the statistic comes from, look for flaws in the methodology, and, for good measure, find out everything else you can about the matter, paying particular attention to the sources that contradict your desired outcome to avoid confirmation bias, so as to completely replace the biased statistic in your head. As someone who occasionally writes blog posts more or less doing just that, I am sure you realize that this is not a reasonable thing to do for all information that comes your way.
With CONSTANT VIGILANCE, I think it is a reasonable goal to approach everything rationally (not to say you will reach it, just that it’s a good place to aim). However, due to the need for supporting skills like knowledge, and the time and energy that they require (limited resources), it is impossible to “be rational about everything” in the sense of having corrected every irrational viewpoint and replaced it with one based on good evidence, for every subject you encounter. Rational outlook can be a habit and should be built up as such, but when it comes to combining rational outlook, opinions, and actions, you do have to prioritize.
Striving to be rational is not striving to have well informed opinions on everything you come across. In that domain it’s closer to striving to know how well informed your opinions are, which is much less resource-consuming. So yes, prioritize, but don’t prioritize by saying “oh, I can believe anything I want about this because I don’t have time to research it more” but rather “oh, I don’t know squat about this, and that’s alright”.
A lot of energy can be saved by not jumping to conclusions, and not continuing to weigh evidence for both/all sides either.
Wiping that from current RAM saves internal bandwidth. If the statistic is about vitamin content of fructose vs cane sugar, asking oneself “Do I really care about vitamins right now?” can lead to asking “Do I really want pre-sweetened cereal anyway?” Which can free a lot of space currently being used for comparing other aspects of those brands. Hopefully then defaulting to some obvious choice, rather than filling that space with similar nit-picking about which un-sweetened brand to buy.
>> Like, you’re being chased around Venice by a giant dragon with the head of your second-grade teacher, and you’re wondering a lot about whether you should make a swim for it in one of the canals, and not at all about whether this maybe, might be a dream.
The people who thought it was a dream were all eaten by dragons. Evolution!
I’d avoid “rationality” as a term; it seems to contain multitudes.
Doubt seems very much like a technique that can be always on. I’d argue that it has to be constant to be useful; otherwise, the illusion-of-truth effect wins. To know that you know nothing seems possible, or at least illusion-of-possible. If that’s rationality, it’s something that you should work on as often as possible.
At the same time, there are still limited hours in the day, and humans require sleep. If knowing the truth matters, then there are only so many things you can research, and many of the topics rationalists are dragged into aren’t very high-return. Speaking from personal experience, it’s very easy to become very knowledgeable about legal minutia or recent history or the modern social justice movement in ways that probably won’t change the world. Worse, at least some types of doubts are fueled by domain-specific knowledge, so there’s overlap. Even if you have the starting advantage of recognizing Gell-Mann Amnesia and Dunning-Kruger, even obviously false statements are only obviously false if you know how a double-blind test on parachutes would work, or where to find the odds of getting hit by a comet.
And finally, for some people, even doubt may require energy. I’m not sold on the willpower-as-limited-resource model, but it’s very popular among psychologists and there’s some evidence for it. If it’s the case and given what else we know of human psychology, it may well be very easy to go about being skeptical of folks we’ve trained ourselves to be skeptical of early in the morning, run out of juice by noon, and then take really dangerous mistakes at face value in the evening.
I’m not sure you disagree with Robin in any substantial way. I’d be surprised if he thinks that there are no generalizable rational skills or habits at all. Just that the skills/habits that do generalize are insufficient for reasoning accurately about any sufficiently complex topic; that being rational on a subject involves having and knowing how to apply a lot of specialized knowledge which takes a lot of effort to get.
At what point does CONSTANT VIGILANCE become too distressing and mentally exhausting to be worth the reward? Sometimes I wish I could just relax.
Maybe, if it is distressing, you are doing it wrong. (Which is how most people do it by default.)
In humans, rational habits probably also need some emotional habits to work correctly. For example, to practice the rational skill of noticing that you are confused, you also need the emotional attitude of “it is good to notice that I was confused” (as opposed to: “oh my god, I was confused, what a stupid horrible person I am, now everyone will laugh at me…”). To admit that you are wrong and to change your mind, again you shouldn’t have horrible emotions associated with being wrong and changing your mind. Depending on emotions, attempts to become more rational can be funny or painful.
One step further: your emotions probably depend on how people around you interact with you. If they will punish you whenever they find you doing X, of course you are going to feel negatively about X. Being confused or changing one’s mind often comes with a status loss (high-status people usually express certainty, and avoid debating topics where they can’t). People reward you for some specific beliefs and punish you for other specific beliefs regardless of whether those beliefs are correct or incorrect. It’s hard to have pro-rationality emotions (emotions instrumentally useful for rationality techniques), when your environment teaches you otherwise. A partial solution is to hide your beliefs and emotions from people who would punish you for them.
This is why a rationalist community can be helpful in becoming more rational. But we still have to deal with the rest of the world.
I think Robin would agree with most of this: trying to be rational all of the time will increase your ability to be rational when it counts, and allowing yourself to be irrational sometimes will increase your risk of being irrational when it counts. But this ignores the fact that trying to be rational always carries a cost in time and mental effort. And trading time for epistemic rationality reduces your time and mental energy budget for accomplishing your goals, one of which could be trying to be more epistemically rational in a really important area. And this is generally a poor tradeoff.
Practical case: if I want to build a tech empire, then I’m going to spend most of my time honing my relevant skills, [insert whatever it takes to build a tech empire here], and I’m not going to have time to do a thorough literature review on every health claim I hear, for example. I will have to make decisions about what to eat, and they will not be based on rational reasons, but that is the tradeoff that I need to make. This extends to pretty much any time I hear a claim – I don’t have the time to rationally examine everything I hear and I can’t always just say mu. Being irrational in some cases is the tradeoff that I would choose.
Although perhaps I should ask a clarifying question. What does it mean to be rational all the time or exercise “constant vigilance” as you put it? Does it mean doing your best with whatever information you have? Or actively seeking out additional information and arguments when you aren’t sure what to think? If it’s the second then my criticism stands.
> will have to make decisions about what to eat, and they will not be based on rational reasons, but that is the tradeoff that I need to make
This seems to fall under “rationalists win”. Working with limited resources, heuristics, and intuition/system-1-reasoning are part of being a rational human. Refusing to eat breakfast until you’ve read all the most recent studies seems like a form of Straw Vulcanism.
You go to optimize with the cognitive abilities you have, not the cognitive abilities you want to have.
Well, I didn’t accuse Scott of refusing to eat breakfast until he’s read all of the studies. More along the lines of refusing to say “I’m going to eat broccoli because it’s healthy” before reading all of the studies.
In any case, Tristan above has more or less convinced me. The rationalist is perfectly fine saying: “I don’t know all that much about this topic, but if I’m forced to choose then I think opinion A is correct with 10% confidence”.
I’m still concerned that figuring out how well calibrated you are on an issue still consumes time and resources. Which would be better put to use elsewhere.
I guess I agree with cultivating the habit of thinking.
Well, maybe I shouldn’t be so glib; there are a lot of groups that tell people to substitute whatever else for thinking and cultivate that as a habit. Present company excluded, of course.
In the end, the dude in the picture’s constant vigilance did not turn out well for him. Perhaps he should have saved up some of his rationality for when he needed it?
(I agree with all your points, just not the implications of your choice of illustration.)
Surely it did; he would have died even sooner if he hadn’t been vigilant.
Also he died knowingly walking into a trap as a decoy for someone more important. Not so much a failure of rationality as a failure to dodge.
What about the time he was ambushed in his own home, taken alive, and imprisoned for a year?
Hey, everyone has their off days! 😛
The big problem with the rationality movements is that they ignore the obvious and primary form of irrationality: Consensus, collective irrationality, the madness of crowds.
We are in fact pretty damned good at individual rationality.
Less Wrong, through its Karma system, systematically encourages the major and obvious form of madness, evil, and self destruction found among humans.
Given the obvious and notorious propensity of humans to collective madness, evil, and self destruction, if the Karma system marks someone or something down, that is probably evidence you should attend to it, rather than ignore it.
This seems to me like an attempt to “reverse stupidity”.
Groups of people don’t always lead to madness. Sometimes they invent science… which would be difficult for an individual.
As individuals, some of us are pretty good at being better than the average. On a relative scale, that’s awesome. On the absolute scale… there is still a lot of place above us. I would like to get there. Not by jumping very hard, but by using a ladder.
The slogan of the scientific revolution was “Don’t take anyone’s word for it”
Humans are prone to go by consensus: Ann believes something because Bob, Carol, and Eve believe it. Bob believes it because Ann, Carol, and Eve believe it, and so on and so forth. That way madness lies.
Scientific method is “Take no one’s word for it”, and “Science is belief in the ignorance of experts” – which means you have to trace any claimed fact to how they know it is true, to empirical data. How do they know this claimed fact? Which means that everything has to be replicated from time to time.
When Peer Review was introduced shortly after World War II, we stopped taking no one’s word for it, and instead started taking the word of secret cabals meeting behind closed doors, making decisions on secret grounds.
Sometimes, as in the climategate files, we get a look at what goes on behind those closed doors, and it is not pretty.
But karma is just a formalization of what people do anyway– for instance, if I tell my friend “this blog is really good” I am sort of upvoting that blog, and if my friend goes off to read the blog because people keep telling them it’s good, they are consuming the blog based on upvotes. I guess this principle implies that one should only read the least popular books and blogs they can find?
…also dude you are hella popular among reactionaries, shouldn’t we be concerned about consensus, collective irrationality, and the madness of crowds leading them into madness, evil, and self-destruction via liking you?
Formalizing social mechanisms is not always a good idea. Besides, upvotes and downvotes for individual items (articles, posts, etc.) are fine. It’s the per user counter that’s problematic.
Formalizing social mechanisms is great. Upvotes and downvotes do not formalize individual recommendations. Worse, they invite brigading, because what monkey could possibly ignore the chance to smash his enemies?
if *you* tell *your* friend that it’s good, and start marking things that *you* think are good with little yellow stars, that’s formalizing the way *your* opinion is known. I can learn whether you are thoughtful enough for your recommendations to be worth considering.
The masses will always upvote boobies and violent rants against their enemies. Your votes get aggregated with everyone else’s. Three votes for Moldbug, three votes for Scott Alexander, five votes for the FEMEN prostitutes in NO PLACE FOR HATE bodypaint, and that’s the winner.
DailyKos really does democracy the best. Not only do people upvote stuff they agree with, but their votes are permanently recorded; everyone knows who voted how on what. Not only does the usual garbage get upvoted, but people must also cravenly avoid posting or upvoting anything that they think they could get in trouble for.
Yes. A karma downvote, received, is an anonymous Pavlovian aversive stimulus. A karma upvote, received, is an anonymous warm fuzzy stimulus. You don’t know why you received the stimulus in either case, and you make up a reason in any case. A karma downvote, given, is a purely passive, anonymous way of dissipating displeasure or annoyance at someone without actually having to explain or do anything. A karma upvote, given, is a purely passive way of feeling like you have rewarded someone when in fact you haven’t done any such thing. So there are in fact four interactions, which aren’t symmetric, which are all opaque, which don’t accomplish what they are supposed to accomplish.
Given that all that’s true, why are complaints about abuses of LW’s comment system frequently met with disdain and mockery?
Once more, I have to vouch for 4chan’s system, of all places. Replies to a post can count as upvotes, downvotes, or extensions of the discussion, in a way.
In the end you don’t just have an aggregate of positive or negative votes, but also the substance behind them, ranging from just calling someone a ‘fag’ to an actual critique, to further information, anything. This way, feedback is content in itself, which may generate even further content, all posts standing on their own merits [or lack thereof]. Quite an organic form of organisation, productive too, considering how certain comments in some blogs inspire entire posts on their own.
Sure, it’s a mess and can be hard to read, but that’s easily solved if you install certain aids such as 4chan X [which I highly recommend].
Ialdabaoth: complaining that a fundamentally broken system is being used improperly is sort of pointless, don’t you think?
Not when those complaints resolve down to “The fact that this improper use is possible is evidence that the system is broken, and here’s how to fix it”, no.
If we think the system has deep flaws that make it impossible to repair, we are likely to be uninterested in your plan for repairing it by fixing certain surface flaws.
I don’t think rationalist movements ignore that. I think they talk about it quite a bit. I just had two posts on conformity effects on this blog in the past week or so, for example.
I think you’re confusing “probability of being mocked given that you have discovered a bold but unpopular truth” with “probability of having discovered a bold but unpopular truth given that you are being mocked”. The first is high as always. The second is tiny. Most people who go against consensus, especially on the Internet, do so because they are stupid, crackpots, or trolls.
I don’t think of the karma system as a principled way to guide deep discussion, I think of it as a hack to keep out the crackpots and trolls and stupid people – who seem to be attracted to LW like moths to a flame in a way my blog hasn’t had as much trouble with (I don’t know what your experience running a blog has been).
I agree that it could in theory lead to enforced conformity, but I don’t notice that. As everyone always mentions at this point, the highest karma topic on LW ever was criticizing SIAI, many (smart) critics have garnered high karma, Eliezer gets downvoted all the time, and people with unpopular political opinons like Samo still have quintuple-digit karma. I think we do a pretty darned good job of keeping discussion quality high without penalizing nonconformists.
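The base-rate point above can be put as a toy Bayes calculation (a minimal sketch; every probability below is a made-up assumption for illustration, not a figure from the thread):

```python
# Toy Bayes model of "mockery as evidence of truth".
# Assumption: mockery is near-certain whether a contrarian is right
# or a crackpot, so observing mockery barely shifts the prior.

p_truth = 0.01             # assumed prior: contrarian has a real insight
p_mock_given_truth = 0.95  # assumed: genuine truth-discoverers get mocked
p_mock_given_crank = 0.95  # assumed: crackpots get mocked just as often

# Total probability of being mocked, then Bayes' rule:
p_mock = (p_mock_given_truth * p_truth
          + p_mock_given_crank * (1 - p_truth))
p_truth_given_mock = p_mock_given_truth * p_truth / p_mock

print(round(p_truth_given_mock, 6))  # stays at the prior, 0.01
```

Because the two likelihoods are equal, the posterior equals the prior: the first probability (mockery given a bold truth) is high, while the second (a bold truth given mockery) stays tiny.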
If that was true, then the consensus would almost always be sane. If that was true, then if the emperor is reputed to have clothes, then he usually would have clothes.
You might claim that happens to be true of today’s consensus – but mysteriously happened to be wrong for yesterday’s consensus for every yesterday of the past several thousand years.
Mockery is what those who lack evidence do, hence a reliable indication of falsehood. If we actually have empirical evidence that someone is wrong, the natural response is to point to empirical evidence. If, on the other hand “everyone knows” that someone is wrong, the natural response is to mock.
For example, the fact that Global Warming scientists are refusing freedom of information requests for the evidence that the world is warming unusually is a pretty good indicator. The law of the land, the unwritten rules of science, and the written rules of the major science journals being that you have to show your evidence – which rule quietly died not long after peer review.
And indeed, almost all emperors reputed to have clothes do have clothes, so what’s your point?
It only really applies when referring to science.
Furthermore, the consensus in the past, on ideas related to science, in times recent enough that science was being used to study those ideas, has been pretty correct too. You just remember the rare times it was wrong and don’t remember the many times it was right.
Even cases where science was “wrong” are often cases where it’s right for the phenomena we know about so far–Newton’s Laws are still useful in the type of situations for which Newton used them, even though they ignore relativity.
Though the underlying facts of a consensus change. For example, rules and systems that work when the pinnacle of military technology is the compound bow don’t stand up well when technology has advanced to supersonic fighters and IEDs.
And as for mockery, it’s very very very tiring to present the same good evidence over and over, only to be told “If your evidence contradicts the Bible, it’s not valid!” or “But Fox News said the Earth isn’t warming!” or “You’re brainwashed by the Cathedral!”
“If that was true, then the consensus would almost always be sane. If that was true, then if the emperor is reputed to have clothes, then he usually would have clothes.”
Well … no, actually.
Assume most ideas are nonsense (because the truth is a small target in a multidimensional hypothesis-space), and most ideas are unpopular (because the Overton Window is pretty small), and there is no particular correlation between the two …
… then most popular ideas are nonsense, and most unpopular ideas are nonsense. Most correct ideas are unpopular, and most incorrect ideas are unpopular.
This is a toy model, of course. But I think Scott’s criticism is clear?
I think you’re missing my point.
Yes, today’s consensus is almost always sane. And past consensuses have been almost always sane. The overwhelming majority of points we have consensus about are things like “The sky is blue” and “There’s no boogeyman” and “The pyramids were not built by aliens”.
And even when people deviate from a false consensus, they are far more likely to deviate into crazytown than to deviate into a correct position. I know you doubt the current consensus on race, but even if you’re right about race, for every person like you there are two dozen of the people from way back when who were saying black people had no ability to play sports and Chinese people were near-morons and white people came from ancient Hyperborea and so on.
I am not making an argument one way or the other about the correctness of progressive versus nonprogressive views, I’m saying there are far more ways to be wrong than right, that the bottom 90% of people are just adding random noise, and that the karma system is a way of controlling that random noise. I don’t know how much signal it blocks along with blocking the noise, but blocking the noise is sufficiently necessary that blocking the signal is an acceptable sacrifice.
I found them a bit underwhelming. For example, in the family experiment, I would have told them that they were raving rabid moonbats for abolishing the family, and if they (quite correctly) compared family with race, I would have told them that they were raving rabid moonbats for denying members of superior races freedom of association.
I am reasonably subject to pressure to conform: I will believe stuff because everyone else seems to believe it, if the stuff is halfway plausible, but not stuff that is raving rabid moonbat crazy.
If everyone tells me that there are invisible leprechauns all over the place, I will probably believe in invisible leprechauns, but if they ask me to believe in visible leprechauns …
Going against the LW consensus isn’t really going against the consensus, because the LW consensus is often a minority opinion.
To take one example: I’ve seen a physics PhD being reviled for criticising EY’s views on physics.
I’m going to disagree with your aikido instructor here, with all due respect to your aikido instructor if he is who I think he is.
Some teachers will tell you that martial arts is all about natural movement. They think that way because they’ve been doing it for hours each day for years or decades, and it’s become natural to them. The truth, however, is that the kind of movement that benefits you in martial arts is profoundly unnatural outside of it. It doesn’t progress from everyday movement, you can’t get it from thinking hard about it, and it’s generally not optimal for everyday life. With a few exceptions, granted; if you have bad posture, for example, fixing that is going to improve your quality of motion both on and off the mat.
With that in mind, and if you’re a novice, I suspect that spending a lot of time thinking about how to apply the movement you’ve learned while off the mat is going to be counterproductive. Eventually you’ll have the experience to tell what’ll generalize well, but before you have that you’ll just end up trying to generalize all of it, and that’s not really a good thing: you’re just as likely to build bad habits, or to frustrate yourself, as to build good ones. Worse, you can easily create bad habits by associating the movements with random crap in the default world.
On the other hand, if your instructor’s just told you to think about some specific and relatively narrow thing, that’s far more likely to be generalizable.
(Source: am serious martial artist.)
Any mistakes in the above are probably my own rather than my previous aikido master’s (and if you think he is who I think you think he is, then he is who you think he is). He was just trying to teach me to have good posture and do actions with core strength and stuff like that.
What martial art do you do seriously?
Kuk Sool Won (an eclectic Korean system, mostly strikes), Western fencing, Toyama ryu battodo, and jujitsu, in rough chronological order. My highest rank is in Kuk Sool, but I’ve been actively studying Toyama ryu the longest.
“Natural” is a problematic term, related to “qi,” but do you not find that the movements of jujitsu eventually teach you how to move in the rest of your life? Or, for that matter, how to move in all your other martial arts?
Well, jujitsu doesn’t have nearly the emphasis on movement that some of the other stuff I’ve studied does. I can credit it with finally teaching me how to do a proper hip throw, and it’s done a lot for my ukemi. But I expect that’s not what you’re trying to get at.
Movement generalizes well between martial arts, as one might expect: there are only so many ways to hit someone, or to do a front fall. And certain aspects of movement do generalize outside them. My balance is a lot better than it was on the day when I put on my leopard skin and rode my dad’s brontosaurus to my first class. So’s my posture, and I’m better at issuing power from movement, etc. But the aspects that’re useful outside martial arts come from spending a lot of time doing things where balance and posture and power are important, not from directly trying to apply my moves on the mat to applications off it. It’s the latter that I object to.
It seems that the goal of detecting, correcting, or avoiding mistakes is shared by all creatures. When working towards that goal, human rationalists seem to place a relatively high value on staying vigilant against (some) biases.
Is it safe to say that Scott sees rationality as a virtue and Robin sees it merely as a utilitarian tool to use only when the occasion arises?
Given that Scott is a utilitarian, he would see all virtues as utilitarian tools. The disagreement would be only in how often it is deployed.
Perhaps he believes in the intellectual virtues, just not the moral virtues? I wonder what Scott or the rationalist community would think of Virtue Epistemology.
There is the ethics of virtue, the ethics of rules, and the ethics of consequence. Since LessWrong is rationalist, they reject virtues and rules out of hand, and then “rebuild” them from utilitarianism. Then they tell themselves that they are virtuous not because virtue is its own reward, but rather because they make that utilitarianism check every time.
I think we’re pretty explicitly virtue epistemological.
I don’t think I like the thought of “making the utilitarianism check every time”. You make it once. Then you forget about it. Same way you might research whether it’s a good idea to take a particular vitamin once, then try to cultivate the habit of taking it every day without even thinking about it.
Rules, i.e. deontology, aren’t irrational in the sense of maintaining crazily wrong claims. The LW crowd seems to reject the approach out of hand because it pattern-matches to Judeo-Christian religion.
The thought crossed my mind that Robin sees question triage as the key point because he has a gift for seeing questions everywhere. I get the impression that constant vigilance is already natural to him so he considers it trivial, given how often he posts “Why do we have [sacred thing we all take for granted]?”
(Incidentally this is a very SSC-inspired view. What’s the correct adjective? Alexandrian?)
I wish I could upvote this.
Eh, constant vigilance of a very specific sort, maybe. I don’t think that he’s constantly vigilant in the sense of, say, reflexive self criticism. Honestly, I feel like Robin Hanson often falls into the trap of wielding a hammer that makes every question look like a nail, and could use some additional “is this really a nail?” vigilance.
To be more explicit, he seems to treat literally everything as an economic question. And while it seems that most things can be fruitfully framed that way, that doesn’t mean *everything* can be fruitfully framed that way. Case in point: “Rationality is a limited resource to be allocated” doesn’t seem like a very fruitful model, but it is easy to think of/about it that way using economics.
That seems likely to be a me-inspired view, since I read your comment and thought “Oh! That makes total sense!”
The rationalsphere is generating too much good content this week to keep up with.
A recent post on Otium discusses the existence of more-rational-than-normal individuals. The cream of this crop may be the “super-predictors,” those supremely able to predict the future without bias.
What is interesting to me is that, from my prior reading on the super forecasters, they did not seem to be exceptionally successful individuals before they were discovered by the Good Judgement Project. (I welcome correction on this point if I’m wrong.) Meaning, these people aren’t really winning in the sense we often mean. They just aren’t biased. These people aren’t (sigh) Elon Musk, they’re more like Lt. Cmdr. Data. All their neurological and personality parameters are tuned just right such that they’re just really really reliable.
To vaguely steelman Hanson’s position, if you spend all your time pathologically trying to devil’s-advocate against your own thinking, you’ll never actually do anything. If you try to characterize every risk against your startup, you’ll simply never create a startup – in fact, startups are a godawful idea, you should be an accountant. Or an actuary. Being an actuary would be the perfect “rationalist” job in this sense. You just characterize risk and probability all day and ensure that you’re not missing any biases.
Obviously this isn’t what most of *us* mean when we use the word rationalist. Unfortunately I lack the time to write the post this deserves, as usual, but I think Eliezer addressed these issues in the Sequences. He understood the tension between the “protecting” and “striving” forces that underlie rationality and mapped out a path that allows rationality to work.
I’m already familiar with the Good Judgment project, but can you link the Otium post?
Rationality isn’t one thing. Devil’s-advocating against yourself is not good for instrumental rationality, but it is the essence of epistemic rationality.
I was actually thinking a lot about this yesterday. I’ve pretty recently discovered the less-wrong-o-sphere and have been wondering how deep I want to dive in. I’m pretty much on board with the idea of “better living through thinking better”. Where I get lost by “rationalists” is the amount of theory vs practice. My personal suspicion from trying to become more effective over time is that there’s very little correlation between understanding, say, the finer points of Bayesian decision theory, and actually achieving one’s ends. I think the failures of rationality I make on a daily basis are kindergarten-level stuff, like not pursuing a thought to its logical conclusion because I find the consequences scary or unsettling, or doing things I know aren’t in line with my stated objectives because (tired / lazy / upset / etc). An exploration of my own cognitive biases doesn’t seem that valuable to me relative to just pushing myself more on a daily basis to be more courageous, mindful, and reflective in general. Theory, in fact, feels like a form of procrastination, and I’m a little hesitant to dive into a movement that might be largely about enabling its members to spend more time procrastinating instead of actually growing into more effective versions of themselves.
Josh, here is a link to one of the most practical LW articles:
Thanks for the link. Amusingly, the first comment on that post is making a similar point to the question I was asking, although from a more informed perspective about the Less Wrong community.
>One of the fundamental skills of rationality is noticing that you are confused
You’ve said this before. Having read the sequences, I still have only a vague grasp of why this is. Could someone spell it out for me?
When you are confused, one of your assumptions is wrong. My best example is getting confused when trying to find an address that didn’t have the roads and neighbors I expected to find while delivering pizza. Turns out I misread the address, and went to blah st ne instead of blah st se.
FWIW, reading HPMOR was more valuable than reading the Sequences, in terms of actually inculcating this lesson. You watch the character demonstrate this lesson on a few test cases and it really hammers it home.
In real life, when you find yourself hammering against a problem over and over and not making any headway, that’s the time to stop and say, “I notice that I am confused.” It’s an opportunity to step back from yourself and admit that (as ThrustVectoring says) an assumption is wrong, or maybe even your whole model is wrong.
There are of course plenty of times in life when you’ll just automatically notice confusion, like when you give somebody a $20 and they hand you $10 in change and your brain returns a “something is wrong here” signal. These *aren’t* the times when the rationalist skill is called for. I’d say this is just a “normal confusion” signal. “Rationalist confusion” is the ability to step back and watch yourself behaving less-than-optimally and say, “Hey, from the outside it looks to me like you’re missing something. Chill out for a minute and think about what that might be.”
To use two rationality quotes:
Schulz’ argument is true only in some cases. Sometimes being wrong feels exactly like being right or like nothing at all. But other times being wrong is something that can be detected. The entire premise of noticing when you’re confused is that you can sometimes detect when you’re wrong.
When I notice myself being confused, it often feels disorienting. It’s the mental equivalent of walking down stairs and missing a step. There’s a jolt of surprise or shock. Other times, it’s like my mind bounces back and forth between two or three incongruent ideas and approximates what the problem is. If I’m extremely confused and don’t even understand the problem, it feels like I’m looking at a gigantic and totally featureless surface. If I’m confused and biased, it’s like when someone hears a distant noise and doesn’t process it until several seconds afterwards, if at all.
These are metaphors, but the brain has its higher thinking processes built on top of the circuitry of primitive processes, so it seems natural that mental experiences would be phenomenologically similar to sensory experiences. So such metaphors seem like justified approximations of truth.
Also, as general question to everyone: my own mental experiences are highly kinesthetic. I tend to feel/imagine various forces pushing or pulling inside my head. Gentle tickles correspond to thought processes in ways that are consistent and familiar though difficult to describe. There’s a sense of balance to it all that’s very relaxing to observe, though it’s not something I normally think about.
Is everyone like this? Do I have a super weird variant of synesthesia? Perhaps unusual self-awareness? Or am I simply deluding myself?
William James described his own thinking as mostly kinesthetic, and Einstein said he deliberately conducted his in sensory imagery, not putting it into words till the problem was solved. Mine is mostly kinesthetic (what NLP called ‘derived kino’) and visual (reading diagrams and charts by the eye-motions of tracing them).
“I am going to do something very dangerous today, something that makes me acutely aware of my own mortality. I am going to disagree with Robin Hanson.”
You won’t leave Skyrim alive!
Heh, I just realised that this disagreement is basically “Constant Vigilance!” vs “Variable Vigilance!”.
This is absolutely spot on. I have one rather large and depressing caveat:
No matter how well you train that ability, you can lose it when you need it most.
When I was younger, I lucid dreamt almost constantly. I also was much more rational and goal-oriented.
Now that I’m 39, no matter how much I practice, I CANNOT translate my “maybe I’m dreaming” hypothesis into my nightmares. Likewise, no matter how much I practice, I CANNOT translate my “maybe I’m being irrational” hypothesis into a panic-attack.
Because my subconscious has been training, too. And it’s become stronger than me. So when it says, “No. You WILL experience absolute terror and helplessness, and you WILL watch helplessly while I take control and perform actions to destroy your goals and ruin your social standing”, it wins.
As a wise person once said, “adults have more courage, not less to fear”.
From the comments he makes, I think Hanson thinks that many biases are small and specific to individual fields. I tend to see biases as overarching, crossing multiple domains of knowledge. To the extent that biases are overarching, it makes sense that we can have general habits which eradicate them. But if biases are small or specific (or perhaps if applying general knowledge about biases to specific areas is a difficult process) then Hanson’s view becomes justified.
I disagreed with Hanson earlier, and didn’t see anything justified in his position. But I now think the hidden core of the disagreement seems to be whether biases are big or little, and I haven’t thought much about that, so his position seems stronger than I thought.
By using that inflammatory title, Hanson must have meant that ganging up and forming a “rationalist” community basically destroys all the good work done to overcome bias, as the very act of living in a community of like-minded people who trust each other means you’ll stop thinking clearly and just be nice and agree with each other. Add a megalomaniac leader and you have a cult, no leader and you still have a lame hippie commune.
Happy rationalists you are, your instinct is to defend your community with bogus arguments such as the aikido-instructor anecdote, instead of using useful heuristics such as the obvious fact that Hanson is 10x smarter and wiser than any of you, so perhaps he has a point.
Was that sarcasm?
James A. Donald:
But, in fact, the earth is not warming – which is why you respond with mockery.
And people don’t respond to young earth creationists with mockery, but with the Grand Canyon – because they have the Grand Canyon and do not have global warming.
And, of course, the fact that you believe so many absurd things is pretty good evidence that you are brainwashed by the Cathedral. Indeed, the dark enlightenment is a reaction to the trend that the mandatory levels of madness are getting ever more frothing at the mouth raving moonbat demented.
“… frothing at the mouth raving moonbat demented.”
So presumably if you had evidence that mockery implied the nonexistence of evidence you would present evidence rather than mockery.
To be fair, that was bare insult, not mockery. Mockery would be if he quoted a Global Warming argument and then rephrased it to sound stupid.
I realize that debating the definitions of words is often unproductive, but I was concerned so I typed “mockery definition” into google and got for the first definition:
teasing and contemptuous language or behavior directed at a particular person or thing.
synonyms: ridicule, derision, jeering, sneering, contempt, scorn, scoffing, teasing, taunting, sarcasm
I think this supports my usage of “mockery”.
If someone said “the icecaps are melting”, and I replied “Ha hah, that shows you are frothing at the mouth moonbat demented”, that would indeed be evidence that they were melting.
But, of course, what I actually do is point to the graph of global sea ice. (And then I mock)
Similarly, if members of superior races were permitted freedom of association, it is obvious we could build higher trust institutions, and, regardless of utility, freedom of association is a natural right – and the fact that the Cathedral selectively violates this right for some races but not others admits what is denied – that races are not equal (And then I mock.)
“Frothing at the mouth moonbat demented” is a general summary, not a substitute for rebuttal. It summarizes the state of debate on the evidence.
I think you may be over-charitably interpreting your own actions and the actions of your side, and under-charitably interpreting the actions of your opponent’s side.
You said: “Mockery is what those who lack evidence do, hence a reliable indication of falsehood.”
But it can’t actually be a reliable indication of falsehood if mockery and evidence often coexist as you claim they do in your case. Just about everyone who mocks would claim mockery and evidence coexist in their own cases as well.
Reading Jim is an exercise in charity. He is often over the top, sometimes self-contradictory, and always impolite.
However he provides important evidence and analyses that you will not get elsewhere, because no one else is as willing to uncompromisingly contradict the official narrative.
You will find it is a waste of time to correct his ratio of self-mockery to other-mockery, but it is absolutely not a waste of time to take him seriously.
Mockery and the Grand Canyon serve two very different rhetorical purposes. I used both, back when I was young enough to still think that getting into arguments with young-earth creationists on the Internet was a good idea.
(I take the Grand Canyon to be emblematic of stratigraphy arguments in general, since if you point a creationist to the Grand Canyon per se he’ll point you to the global flood and flatly refuse to believe anything you say about erosion rates. It’s a little harder for creationism to explain the fossil sequences we see — usually the response to that starts with claiming that the Burgess shale represents antediluvian bottom-feeders and progresses into handwaving when you get more detailed.)
Granted, I see our host frequently uses evidence and argument, but this is exceptional and extraordinary, which is why we are inclined to engage him. It really is very rare for progressives to use evidence and argument. Progressive argument is “teabagger, teabagger, racist raciiiiisst RAAACIIIIISSSST teabagger yaaah yaaah yaaaah”
And whenever Less Wrong encounters a thought that lies outside the official overton window, they resemble monkeys in the trees flinging faeces.
It is a little known fact that “evil robots are taking over the world” is within the Overton window.
Now THAT is mocking.
It’s also an excellent point – which through alchemical fusion of opposites produces an excellent meta-point: mocking is not the opposite of facts; mocking can, in fact, be a perfectly good way to give facts salience.
Also, in irony of ironies, this:
is you mocking progressives, rather than pointing out facts about their argument style.
That is a simple fact about their argument style.
Where, for example, is an example of progressives respectfully replying to race realists with actual facts, arguments, and evidence, rather than hatred, lies, and mockery?
Um… He pointed out facts about their argument style, which facts also happens to be interpretable as mocking.
I do not recall ever seeing the phrase “yaaah yaaah yaaah” in a rebuttal to an HBD argument.
I am fnording the response
For example, it is entirely fair to summarize “the mismeasure of man” as “racist liar lying racist, yaah yaaah yaaah”. Supposedly Cyril Burt was a fraud because he found that identical twins were very like each other, fraternal twins not so much. Obviously fraud.
What the grand canyon reveals is layer after layer of rock, which rocks were laid down at the bottom of quiet, shallow seas. It is not that the grand canyon took a long time to erode, for we have seen that mighty canyons can be created in an extraordinarily short time by mighty floods, but that the rocks that it reveals took a very long time to form.
The Grand Canyon is quite young, and most of the grand canyon was cut in a few million years. If a few million, why not a few thousand? Or forty days and forty nights? But the rocks it reveals tell of vast times and quiet seas.
They tell of sea remaining sea for ages, then rising to become land for ages more, and then sinking to become sea yet again, tell of rock that over vast ages, moves.
That is very poetic, but also creationists are not stupid. In fact, Answers in Genesis has its own explanation for the Grand Canyon:
You will notice that while this explanation is wrong, it also includes a lot of evidence and not a lot of mockery.
A single vast flood would have dumped a single vast layer in a tumbled messy mass.
A respectful reply to young earth creationists tells of the quietness of the ancient sunny seas revealed by the grand canyon. And such respectful replies exist, and are reasonably common.
In contrast, respectful replies to race realists, anthropogenic warming skeptics, sex realists, and so on and so forth are pretty much nonexistent.
Anyone attempting any serious examination of the recent financial crisis simply gets called a racist, in place of argument or explanation.
You will notice, for example, that the movie “Margin Call” avoids telling the viewer what the toxic assets were that the firm was trading, or what led the analysts to conclude that they would be worthless when the music stopped, or what would constitute the music stopping. The big boss announces “the music has stopped”, but no one discusses what this music is.
The assets are called “shit”, a reference to a real-life evaluation of these assets, but the real-life explanation of why these assets were shit is not given in the movie, presumably because racist.
In fact of course, “the music stopping” was that house prices had stopped rising, and may well have fallen below the prices underlying the no money down mortgages made to people with no income, assets, or credit rating, with the result that large numbers of borrowers had stopped making payments, and even larger numbers of borrowers were about to stop making payments.
These no money down, no income, no job, no assets mortgages were theoretically equally available to whites and NAMs, theoretically lenders had ceased to consider job, income, and assets because these criteria had disparate impact, but in fact, if you were white and an American citizen, these criteria were apt to be quietly and unofficially applied, so the way lending actually worked was and is more like affirmative action than the abolition of criteria with disparate impact. If white, not so easy to borrow no money down with no income, no job, and no assets.
If you apply these criteria to non asian minorities, it is disparate impact. If you apply them to whites, not so much.
And to such discussions, the reply is always “racist neonazi”.
James A. Donald, I would like to make a request.
If you can provide me with a straightforward argument that does not descend into deliberate mockery-baiting, I will attempt to respond with straightforward facts that support or refute that argument without mockery.
HOWEVER. When you deliberately and consistently use language that is designed to elicit a hostile emotional response, you lose the right to complain that all you get are hostile emotional responses.
You know better. You are *deliberately* using incendiary language, so that you can feel justified saying “see? you guys can only react emotionally, you can’t provide facts.”
You should stop that. ESPECIALLY if you actually believe what you’re saying, and believe that people should be convinced by your statements rather than simply riled up by them, because you’re currently having the *opposite* effect.
No I am not. I am mentioning incendiary truths. If I used words like “n****rs”, that would be incendiary language.
How can one, for example, argue that races are different, that some are on average better adapted than others to an environment of artifacts, agriculture, property rights, and employment, without it being deemed incendiary?
Yet it is impossible to argue against affirmative action etc, or indeed that George Zimmerman was innocent, without implicitly invoking the facts one is forbidden to mention.
Here is an argument for inequality before the law. Let us see you rebut it without going “teabagger teabagger, denier, racist, racist, yaaah yaaah yaaah”
We treat men and women differently before the law in regard to acts of violence, for example the Violence Against Women Act. When women engage in public bad behavior, they are treated like children in that they are let off (think about incidents of drama happening in your workplace), but they are not treated like children in that they are not hauled off to the responsible male in their lives and he does not get told to keep them in line. They get the child’s exemption, but not the child’s discipline. We don’t treat them as equals, either in the letter of the law, or the actual application of the law in practice. If they are going to be treated as children, which in practice and in law they are, they should legally be subject to male authority, and that male responsible for their behavior.
If a Violence Against Women act to keep men in line, should we not have a Violence Against Whites act to keep blacks in line?
Many drugs are sold over the counter because it is quite rare for white people to abuse them, even though it is common for black people to abuse them, resulting in brain damage, liver damage, acts of violence, and so on and so forth. Should we not have different drug laws for blacks and whites?
In actual practice, we do have different drug laws for blacks and whites, the alternative being suicidally stupid, but we hypocritically pretend otherwise. Would it not be better to openly and officially have different drug laws for blacks and whites?
We have different laws for adults and children. For example, in many jurisdictions, an adult can carry a bottle of beer in public and a child cannot, even though doubtless many seventeen year olds can handle their liquor a lot better than many nineteen year olds.
If a boozed seventeen year old is, on average, a bigger problem than a boozed nineteen year old, it is also obvious that a boozed black is a bigger problem than a boozed white, and a boozed feather Indian is a much bigger problem than a boozed member of any other race.
The effect of alcohol on native Americans is so obvious, severe, and destructive, that it is cruel and hurtful to have the same laws for them as everyone else. If we restrict white children from booze, we certainly should restrict Native Americans, because almost any white child can handle booze better than almost any Native American of any age, the difference being obvious, dramatic, and undeniable. Indeed, almost any black child can handle booze better than almost any Native American of any age.
Now see, you actually brought up a series of legitimate, valid points, which deserve consideration. Thank you.
Give me some time (about an hour or so) to close out what I’m currently working on, and I will devote my full attention to them.
(Incidentally, I agree denotationally with about 70% of what you’ve just claimed, and connotationally with about 30% of it, so you may find there’s more common ground here than you thought.)
EDIT: Alright, here we go.
I agree! And this is a VERY big problem. If a society is going to commit itself to the Progressive value of equality, it needs to *actually* commit itself to treating people with equal expectations. The legal system often fails in this regard, treating women with Paternalistic kid-gloves as often as it dismisses them out of hand (BOTH of which I consider problematic).
I’ll get to the pragmatic problem with this solution in a moment.
I have no idea where you are from, but in most of the areas I have inhabited, white abuse of meds has been far, far more of a problem than black abuse of meds. (It being Spring, I particularly miss being able to buy real, honest-to-god Sudafed OTC – so this is particularly salient to me right now).
Having spent a reasonable amount of time among various racial and socioeconomic backgrounds, I have to say that I fear poor urban whites far more than I fear poor urban blacks. In my experience, a gang of black 20-something males will assault you over profit, over territory, or over defense, but I have only ever *personally* experienced assault for sheer thrills from white male youth – and I have witnessed it enough that the pattern is pretty well burned into my brain.
In general, I object to this set of laws as well. I’d prefer a ban-on-first-offense implementation, for various reasons which I don’t want to distract this discussion with; but suffice it to say selling race- and gender-based unequal laws by invoking age-based unequal laws is going to be a tough sale to this customer.
I know this. You know this. The native Americans know this. And you know what? They DO have separate laws. Hell, they have a whole SET of separate, Tribal laws for themselves. How’s that working out for them?
So, here’s a few ultimate points of disagreement:
1. If legitimate genetic racial differences exist, and I am not asserting that they don’t, I’m reasonably convinced that socioeconomically-imposed differences are dominating them so much that it doesn’t make sense to talk about the genetic component until we’ve got THAT elephant out of the room.
2. You’re absolutely right that a lot of the so-called “solutions” our society proposes are TERRIBLE at solving those problems, whether they’re socioeconomic or genetic at root.
3. Here’s where I get to the main point of refutation: Explicitly unequal laws are OFF THE TABLE, and here’s why. You guys had your chance, and you fucked it up big-time. Paternalistically justified separate-but-equal has so consistently turned into apparent self-justifying abuse of the very people it claims to paternally protect, that that dog no longer hunts. White people, as a whole, do not seem capable of being proper stewards of the (disputably) inferior races. And even if it did at some point do more good than harm (and in many cases, I actually agree with you that Colonialism was preferable to the alternatives), the abuses were so appalling and so seemingly pointless that you just can’t get traction. So rather than long wistfully for the days when you could just tie ’em to the whipping-post and have at, if we actually want to reduce suffering, we need to deal with the situation on the ground and figure out what we *can* implement that will reduce suffering.
No white person, not one, buys sudafed to get high. And very few blacks do either. The reason for controls is that sudafed is used in the manufacture of meth. It is probable that white meth cooks greatly outnumber black meth cooks, but if you believe that, it is because you furtively believe that few blacks are smart enough to manufacture meth, not because you believe that whites are lacking in self control.
And when someone goes nuts by abusing meth, it is usually a black.
Few whites, quite possibly none, buy Robitussin over the counter to get high. That is entirely a black habit.
If that is your actual experience, you are living a long way from blacks. And if you are living a long way from blacks, you are paying one hell of a lot of money to live a long way from blacks. And the reason it is so expensive to live a long way from blacks is that everyone else also wants to live a long way from blacks.
People do not pay a lot of money to live a long way from working class whites – for example the flat part of Pacifica in the bay area was full of working class whites last time I checked. Every location in Pacifica where you can see the sea is full of rich people, who are very comfortable living and shopping right beside working class white people.
Pacifica in the Bay Area, white working class, near zero crime. Not very far from Pacifica is Oatlands, black, intolerably dangerous. If a white guy strolls around Oatlands at night, he is at considerable risk of being attacked. If a white girl strolls around Oatlands unaccompanied at night, she will be attacked. During the day she will be threatened. If a white man strolls around Oatlands, police will likely arrest him on the assumption that the only reason a white man would do such a dangerous thing is to buy drugs or hire a prostitute – probably drugs, since people hiring prostitutes are not that desperate. The Tenderloin is considerably less dangerous for white people seeking to hire prostitutes, a difference that suggests that attacks in Oatlands are in substantial part racially motivated.
That, despite your claims, no white person fears to walk around Pacifica at night tells us that economic effects are not dominating.
When I went shopping late at night in Pacifica, rich old ladies went to the same street as working class boys, while white drug addicts were scared to buy drugs in Oatlands.
That the children of the black middle class are not markedly better behaved than the children of the black underclass tells us that economic effects are not dominating. That the children of the black middle class don’t do markedly better in school than the children of the black underclass, and do markedly worse than the children of the white underclass shows us that economic effects are not dominating.
Fifty percent of the children of the black middle class fall into the bottom quintile for educational attainment. There don’t seem to be good statistics for disciplinary and criminal outcomes, but it kind of looks as if the reason for that is that the statistics are too horrible to be revealed.
That is simply not true: Look at black life as depicted in the Amos and Andy show. The environment depicted in that show (functional black families and a substantial black middle class with the dignity of real jobs rather than the embarrassing shame of affirmative action jobs) was considered realistic at the time.
And, similarly, let us compare Rhodesia under whites with Rhodesia under blacks; likewise Detroit and South Africa.
Ending Jim Crow was a disaster for blacks. They fail to notice it, because now they get Obamaphones.
Applying the same laws to blacks as to whites simply does not work. Compare blacks under Jim Crow with blacks today. Under Jim Crow, they had fathers and were generally out of jail. Under the current system, no fathers, and a large proportion of the young males are in jail. Obviously Jim Crow was better.
When you try to treat unequal groups as equal, you wind up hurting both groups.
Which is why feminists hate men, and usually wind up single, and men's rights activists hate women, and usually wind up single: because they both favor treating men and women as equals, and then the women blame the men for the resulting suffering, and the men blame the women for the resulting suffering.
White males need to rule. Everyone will be happier under that arrangement. Everyone was happier under that arrangement.
This is PROFOUNDLY against my own experience. The black drug of choice where I’m from was marijuana. The white drug of choice was meth, and it led to a lot of violent white behavior.
The SECOND black drug of choice was crack, and it also led to a lot of violent black behavior. But not quite so much as the meth.
So at least in the enclaves of Arizona and California that I have hailed from, it just ain’t so. I don’t know about Idaho yet – if there’s more than a thousand blacks within 50 miles of me I’ll be deeply surprised. (And incidentally, I’m paying less to live here – by about a factor of two – than I EVER did in Arizona or California).
Regarding meth, don’t guess, use the data. In the US, meth users and abusers are more likely to be white/Hispanic.
I stand corrected: I was guessing on the basis of the behavior with crack cocaine.
But it is still the case that methamphetamine is not an over the counter drug – that whites do not abuse over the counter drugs, because if they did, they would not be over the counter drugs, while blacks frequently do abuse over the counter drugs.
I personally know a white math major at a fairly prestigious college who had, for a while, a habit of getting high off Robitussin.
I can’t find race statistics, but it’s not entirely a black habit.
(However, he never caused any trouble.)
My housemate in college used OTC cold medicine to get high in my presence a couple of times, although not enough that I’d call it a pattern of abuse (just “stupid”). He was a literature major, in my estimation a pretty smart if sometimes impulsive guy, and was white.
I haven’t been able to find any data on OTC medication abuse by ethnicity, although it looks like dextromethorphan is most common among high-school aged users. Sleeping pills and OTC stimulants seem to follow somewhat different patterns of use.
This idea of budgeting rationality occurred to me recently, so it's pretty gratifying that it hit Robin Hanson's mind too.
My framing was slightly different: I was debating some question of what I should do, and realising that I'd already reached my conclusion for a non-rational reason, knew it, and was attempting to fight it and reason afresh clearly. But it wasn't going to happen; I couldn't be bothered putting in that effort and was just going to find clever reasons to tell myself that I had deliberated properly, and engineer the conclusion I still wanted.
So in this situation I could either:
1) Put in a token effort and attempt to reason afresh, end up with the desired conclusion, but continue to think myself rational.
2) Acknowledge that it would be unsustainable to bring my best rational efforts to the fore on all occasions, let myself knowingly be biased in this decision, and hence not fool myself about the extent of my rationality.