I.
When I wrote about my experiences doing psychotherapy with people, one commenter wondered if I might be schizoid:
There are a lot of schizoid people in the rationalist community from what I can tell. The basis of schizoid is not all the big bad symptoms you might read about. There are high functioning people with personality disorders all the time who are complex, polite and philosophical.
You will never see this description because mental health industries center entirely around people Failing At Life, aka “low-functioning”. As many radicals have noted, mental health tends to constitute itself mostly around “can’t hold a job” or “can’t hold a marriage”.
The only thing you need to be schizoid is to dislike contact with other egos, and to shave off the experience of those other egos ruthlessly before they can reach the fantasy world you retreat to.
It doesn’t mean you’re evil. It doesn’t mean you stalk people and plan to harm them. It doesn’t mean you’re over-reactive or even bizarrely delusional. You could call it a form of delusion, but really the basic descriptions of perception like top-down processing and culture could all be called delusional thinking if you want to be properly pointed about it. It’s schizoid. It’s often quite gentle. And I’ve noticed from interacting with various people in high IQ communities that if you have sufficiently high intelligence, despite the inherent defining tendency to retreat from reality, you can in fact become aware you have a personality disorder.
Anyway, my guess based on projection (I’ve never met you) is that people aren’t being emotional around you because you can’t be reached by them emotionally, and they know that on some level.
I feel like I experience emotions and genuine human connection. You would think that ‘not experiencing emotions or having genuine human connection’ is hard to miss. But then I think of the stories in What Human Experiences Are You Missing Without Realizing It?
In the first, Francis Galton discovered that some people didn’t have visual imagination. They couldn’t see anything in their “mind’s eye”, they couldn’t generate internal images. None of these people knew there was anything “wrong” with them. They just assumed that everyone who talked about having an imagination was being metaphorical, just using a really florid poetic way of describing that they remembered what something looked like.
In the second, a user on Quora described their experience with anosmia – not having a sense of smell. They didn’t realize there was anything wrong until college. Until then, “I teased my sister about her stinky feet. I held my nose when I ate Brussels sprouts. In gardens, I bent down and took a whiff of the roses.” Though they didn’t say so explicitly, it sounds like they thought smell was just a metaphorical way of saying something was disgusting or delightful.
And in the third – well, this is awkward – I went years without realizing I didn’t have any emotions. I was getting treated for obsessive-compulsive disorder with high dose SSRIs. When these work well they dull your depression and anxiety; when they work less well, they dull all your emotions. For me they worked less well, but I never realized it until I came off them after five years and was suddenly overwhelmed by emotions I’d almost forgotten it was possible to have. In the interim, I’d understood that getting a birthday present was a positive and desirable event, and said it made me “happy”, without realizing something was missing. This was particularly inexcusable since I’d felt the full range of emotions before I started the drugs, but I guess the hypothesis “I have stopped feeling emotions” is a hard one to consider and collect evidence for.
So if someone says I’m incapable of genuine human relationships – well, I should stress that I think my relationships are genuine. But if they weren’t, maybe I wouldn’t notice. There would be something I was capable of, I would call that “genuine human relationships” since it was my only example of the concept, and I would never have anything else to compare it to.
II.
This post isn’t about relationships. This post is about ideas.
In high school I took a sociology class, and the teacher talked about how modern society was atomized and there were no real community bonds and so on. And I thought this was dumb. I didn’t live in an atomized society! My family knew our next-door neighbors, and we’d even been over at their house once for dinner. There was a Community Center a few blocks away, and when I was a kid I would go there a couple of times a year for some kind of Neighborhood Art Night. Sometimes my mother volunteered at my school, and my dad was too busy to volunteer but probably would have if he could. We weren’t devoid of community at all.
And then three things happened. Number one, I read some good anthropology about primitive and medieval societies, which actually described pre-atomized life and the way that there was barely even an individual identity and the community determined everything you ever did. Number two, I spent a little time in an honest-to-goodness Third World village and saw a little of what life was like there. And number three, I got involved in some good subcultures – including Bay Area rationality – which were slightly but noticeably less atomized than the neighborhood where I grew up. I realized that I’d mistaken the existent-but-weak forms of community in my suburban neighborhood for the really-strong forms of community that people complaining about atomization say we’re missing, because I had so little experience with the latter I couldn’t even imagine them.
This is the same kind of error as the SSRI/emotions problem. People talk about emotions/community. I have something sort of similar occupying that space. So I reasonably assume it’s the same thing everyone is talking about.
I think I’ve figured out the whole “atomization” thing. But I’m not sure. What if there’s some real non-atomized community that even second-hand anthropology plus some good subcultures can’t point to? Am I just making the same mistake as I did as a high schooler, only one level higher?
Some of these same sociologists worry about advertising and consumerism. They think capitalism turns people into perfect consumers who overwork themselves at jobs they don’t like to buy products they don’t need. They think people’s entire identities revolve around brands and consumption.
And once again, I think: “Good thing this isn’t happening to me.” I don’t really watch TV and I tune out online ads. I buy things occasionally, usually things that I need or things that I occasionally enjoy. But I don’t own much “clutter”. And I don’t care about brands, except ones that really signal high quality.
Is this the same kind of mistake as “I met the neighbors once, so I’m not atomized”? I don’t know!
Either understanding “consumerism” was so easy for me that I got it immediately and effortlessly, and I live a charmed life that has prevented me from ever encountering that problem.
Or I have only a superficial facsimile of understanding it, and when I actually understand it, it’ll seem profound and important, the same way “atomization” did.
When I see other people making a big deal out of seemingly-minor problems, I’m in this weird superposition between thinking I’ve avoided them so easily I missed their existence, or fallen into them so thoroughly I’m like the fish who can’t see water.
And when I see other people struggling to understand seemingly-obvious concepts, I’m in this weird superposition between thinking I’m so far beyond them that I did it effortlessly, or so far beneath them that I haven’t even realized there’s a problem.
III.
Last week, some people proposed it was useless to steelman/understand post-modernism. It was just people being stupid or having garbled thinking. Maybe. There are some post-modernists who even the other post-modernists say are probably just pulling it out of their asses.
But how would we know? There are concepts nobody gets on the first reading, concepts you have to have explained to you again and again until finally one of the explanations clicks and you can reconstruct it out of loose pieces in your own head.
And there are concept-shaped holes you don’t notice that you have. You can talk to an anosmic person about smell for years on end, and they’re still not going to realize they’ve got a big hole where that concept should be. You can give high-school me an entire class about atomization, and he can ace the relevant test, and he’s still not going to know what atomization is.
Put these together, and you have cause for concern. If you learn about something, and it seems trivial and boring, but lots of other people think it’s interesting and important – well, it could be so far beneath you that you’d internalized all its lessons already. Or it could be so far beyond you that you’re not even thinking on the same level as the people who talk about it.
I’m looking back on my book review of After Virtue, a seminal philosophy book which won a bunch of awards and recognition from important philosophers. My review was that it seemed very confused. It kept claiming to have an important insight, but every time it said it was going to reveal the important insight, it actually said a bunch of platitudes and unrelated tangents. This is a huge red flag. Which makes more sense – that I was the lone genius able to see that the emperor had no clothes and Alasdair MacIntyre is really dumb? Or that he’s saying something really hard to understand, and I haven’t understood it yet?
Maybe there are fields doing the intellectual equivalent of gaslighting, insisting they have really profound points when they’re just vapor. But err on the side of caution here. Most of us have had some hard-won battles, like mine with atomization, where after a lot of intellectual work a concept that seemed stupid suddenly opens up and becomes important. Sometimes it’s about anarchism, or reactionary philosophy, or privilege, or religion as a benevolent community-building institution. Erring too hard on the side of “that’s dumb, they’re probably just gaslighting” closes off those areas to you forever.
I don’t think it’s always worth delving deep into a seemingly-meaningless field to discover the hidden meaning. That rarely works – if you had the concepts you’d need to understand it right now, you would have done so already. But I think it’s worth leaving the possibility open, so that later if something clicks you’re not too embarrassed to return to it.
The url for ‘Privilege’ is broken- looks like maybe it was meant to go to ‘Fearful Symmetry’?
Always nice to hear from you, Scott- you’ve filled in more of the concept-holes for me than I can count.
I was only able to find the “Fearful Symmetry” post by going to this page (about halfway down).
This works, too: https://slatestarcodex.com/archives/ .
List of all posts, except the hidden open threads.
Fun to browse- at this point I’ve read them all (though not all the comments sections- I think I literally couldn’t do that fast enough). So having the whole list lets me pick one at random to reread if I want- perfect for killing time when I need to.
It doesn’t work for me – I keep getting “This page isn’t working
slatestarcodex.com didn’t send any data.”
(Just for that particular article.)
That’s funny. For me, at the top of the screen above the header there’s a set of links: Home- About/Top Posts- Archives- etc. Clicking on ‘Archives’ takes me to the, um, archive list. This works on both my desktop and my iPhone, so I don’t think it’s a device issue.
Great read. Strong contender for best post of the year, I think.
This feels more like a problem-stating post. The post that leads up to a really great post in a couple of months, when Scott has figured out all this mess for us.
Yes.
Actually I’ve changed my mind. His final paragraph was quite good, and good enough to be final.
This post is on kind of the same subject as Eliezer’s ongoing book: how can you tell if the expert consensus in a field is wrong? If it seems you’ve found a really simple way to prove them wrong, when should you trust your new insight and when should you trust the outside view and follow expert consensus?
IIRC Scott said recently on his Tumblr that he won’t post about a certain subject until Eliezer’s book is complete (i.e. in another 10 days or so) because he wants to understand Eliezer’s point first.
Eh, it was really just a confused bunch of tangents. I don’t see what all the fuss is about.
(Just kidding Scott, an excellent post as always.) I’m not sure it’s the best post of the year though… or am I just missing the point? Or is worrying I’m missing the point kind of like the “low-IQ” people worrying they won’t be smart enough? The one real issue I have with this is that I’m not sure it’s a good heuristic for finding useful/true things. It’s almost a trope that people in life-changing pyramid schemes, or religions, or whatnot will say they had some kind of epiphany, some moment of seeing the light when things clicked, and that until you’ve had the same experience you can never understand. So your mind can click, you can get that perfect-clarity, all-congruent-thought feeling, and you’ll be super excited about wrong things.
Hear hear! It reminds me of Robert Frost’s “A liberal is a man too broadminded to take his own side in a quarrel.” But with just the right amount of self-doubt, not too much.
The problem with this approach, as you have described it, is that it’s totally unfalsifiable. If it is a priori impossible for me to comprehend postmodernism, or religion, or reactionary philosophy, etc., then what’s the point of trying? If I can’t ever distinguish between a world where postmodernism (etc.) is a profound philosophy with powerful applications and a world where it’s just mumbo-jumbo, then I might as well pretend I live in whichever world is easier for me to compute.
Note that anosmia is not like that. It is pretty trivial to devise an experiment that will convince an anosmic person that you have a superpower — i.e. the sense of smell — that the anosmic person does not. You do this by applying your superpower to objects in the world that both you and the anosmic person can readily perceive.
I understand that you’re trying to encourage intellectual humility, and I agree, it’s a good habit to cultivate (and it is by no means easy). But the experiment requires both people to participate. Alice the anosmic person has to keep an open mind, but Oscar the osmic has to actually possess a sensory organ that Alice does not, and be willing and able to demonstrate it. If he can’t or won’t do so, that doesn’t mean that he’s being deliberately deceitful; but at some point, Alice will get pretty tired of hearing, “you’ve just got to try harder to believe in my power”. At some point, enough is enough and it’s time to move on.
Thanks for this, I was thinking about it but didn’t come to enough of a conclusion to fit it into the main post.
What falsifiable test could I have used to figure out I wasn’t understanding atomization? You could ask me and a wise person who did understand to evaluate whether a certain community was vs. wasn’t atomized. But I already know atomization-proponents claimed my community was atomized; maybe we would have given the same answers. In order to figure out exactly what questions to ask and what things atomization could predict, you would have to understand atomization already.
Back when I agreed with the New Atheists that religion was just people being dumb and wrong, what falsifiable test could have convinced me it was actually a potentially-benevolent community-building institution in a way I didn’t currently understand, and that I needed to study sociology further in order to get this? Is it a test I could design even if I didn’t already understand the benevolent-religion theory?
For atomization, I think you could construct some objective measures. How many of your neighbors do you know the name of? How many of them have you ever shared a meal with? For how many do you know how many children they have? The names of their children?
If your answers to these questions are consistently much smaller than the answers given by someone in another community and it’s reasonably clear he is telling the truth, that’s good evidence that your community is more atomized than his.
I’m not sure I would have been convinced by most versions of this in high school Scott’s position– I think I would have said something like, “We just aren’t that close with our neighbors, but I have plenty of other friends, so I don’t get why that says anything about my existential status as an individual in the modern world.”
It sounds like what persuaded him was actually seeing it (village), or at least reading detailed narrative accounts of what it was like (middle ages) because that was the only way he could be persuaded that what he had experienced wasn’t the same as the phenomenon being described. So I don’t know how you would design that for yourself. Maybe, if the textbook mentioned that medieval Europe was less atomized, you could go read about medieval Europe and then get there a little faster. And he might have the idea to do it because he was reading a claim in a book that other people thought made sense but seemed silly, which is often a good place to poke around.
But I don’t know why it would occur to anyone to ask other people to describe how it feels to smell something over and over unless they already suspected it was something interestingly different from their own experience.
It seems like that’s a different idea.
In one case, we’re talking about not noticing something that we’re missing. Like if I can’t perceive colors, I might just not notice the lack till someone calls it to my attention–I don’t have that file on my hard drive.
In the other case, we’re talking about noticing something we’re missing, but thinking we’re fine without it. Like, if I don’t drink alcohol, I may recognize perfectly well that other people are drinking and enjoying it, but think I’m better off without it.
In this case, he doesn’t even know what he doesn’t know, in order to construct proper questions. In the present example, I would have included far more questions about just how far those neighbors would stick their necks out for you, and less about trivia like their names (knowing somebody’s name is about the bare minimum when it comes to interaction; that we would use it as a measure at all indicates we’re already at the low end of “community.”)
What do? Get more perspectives. CS Lewis says “read at least 1 old book for every 2 new books.”
I think you could come up with a test better than this. How about:
How many people do you trust to come by and feed your pets while you’re away for the week? How many of those could you plausibly ask without feeling uncomfortable? Have you ever actually done so? Has anyone asked you to perform a similar favor?
What would be your reaction if your neighbor disciplined your child? How about your cousin? Has a neighbor ever disciplined your child? How about a family member?
Do you leave your door unlocked at night? If not, would you feel comfortable doing so? Why not?
Who in your community would help you if you, your spouse, or your child became seriously ill? Who would lend you money if you lost your job? Has anyone ever done either? Have you ever done either for anyone?
We could go on, but I think it’s pretty easy to create a falsifiable test which, when administered, would demonstrate to someone that they live in an atomized culture. Hell, just typing that out made me hyper-aware of just how small my and my wife’s families actually are, and how weak the bonds of our community.
I can tell you’re not Mormon.
This seems to be describing high-trust communities, but I’m not sure that’s the same as a community that’s not atomized. Given those criteria, most of the Third World is more atomized than the developed world, since there is more crime, greater insecurity, lower trust, and higher corruption. So if the Third World (and the Western society of the past) is less atomized than a developed modern society, then atomization must mean something other than a mere lack of trust.
I think the questions you’re asking here (and some parts of the original post) invert the burden of proof. If I believe there is such a thing as social atomization, and I want to convince others, it’s ultimately my job to recognize this is potentially an easy concept to misunderstand and to define the relevant tests myself. Those who aren’t already convinced can’t be expected to know a priori when it is or isn’t worth their while to try to Get Out Of The Car.
I can’t answer your questions (at least, not better than DavidFriedman did, above). But the answers weren’t really my point. Rather, my point was that if you made a good-faith effort to ask the right questions, seek out the right evidence, and thoroughly investigate the problem; and if, after doing so, you’d still found nothing — then you would’ve been justified in dismissing atomization/religion/whatever as insignificant.
Now, in your specific case, you would’ve been wrong; but your conclusion still would’ve been justified. At some point, you’ve got to dismiss a claim for which you have no compelling evidence. You can’t go through life investigating everything ad infinitum just because there’s a small chance you could’ve missed something at some point. You’ll never get anything done that way.
I think this one of those “different people need different advice” things. Some people are wandering around in a world constructed entirely of their own priors, and need to be told “hey! Get the heck out of your head!” Other people are sitting around asking all the big questions and meanwhile they can’t hold down a job / family / whatever, and need to be told “hey! Doesn’t matter! Stop thinking, start doing!”
I think this post was a good read for those in the former camp.
You eventually updated on empirical evidence, didn’t you?
Sure, it would have been impractical for your high school to organize a trip to a Third World village, but your teacher could have provided you with reading material about life in pre-modern societies or contemporary societies in the Third World, or ethno-cultural enclaves like the Amish or the Haredim. It was their job to convince you of their claim and they failed at it.
This is a disagreement over values, not over empirical facts. Any theist can provide you with a list of benefits of religion; any atheist can provide you with a list of harms of religion. How you weigh them to compute whether religion is a net benefit or a net harm is subjective.
“What do you think atomization is”
“Well I think it is x y and z”
“Actually it’s more like W R and S”
The problem is figuring out that you have a problem, which you seem to get already.
In a way, this is the opposite of the Emperor’s New Clothes. There you have everyone pretending to see the invisible cloth, and only one person (a child) brave and honest enough to say that there isn’t anything there.
Here we’ve got one person who doesn’t see the elegant suit of clothing that everyone else claims to perceive, so do they think everyone else is lying/pretending/using a metaphor or do they consider that maybe they are missing something others have?
Suppose Joe tells you he can see in the infrared spectrum while you can’t; do you believe Joe (and by extension most other humans) can do this while you lack the ability, or do you think Joe is claiming a superpower that may or may not be imaginary?
The Emperor’s Clothes fable tells us that we should judge that we (in not seeing what is claimed to be there) are correct, but in the case of someone who says he can’t smell what everyone else claims to be smelling, we who do possess the sense of smell know that he is wrong. But how do we apply this in a case where none of us know for sure? Is there something there or not? Maybe the guy claiming to go on trips to the astral plane really is doing something that I can’t do, the same way that I can smell things someone with anosmia can’t.
(Please note: I am not saying “all aboard the flight on my astral plane” is right, just that it’s not as simple as the Emperor’s New Clothes sets it out to be to know if you’re the Only Sane Man or the Guy With No Nose).
Ooh, I’m a guy claiming to have gone on trips to the astral plane: yeah, he really is! It’s a thing in the vicinity of vivid lucid dreams that most people can’t do without training.
Fractally, more experienced people say that I’ve only visited the “Disneyland” districts, not the real thing.
Whether either is at all useful is a separate question; I never got much out of it, but others claim they have. Thus back to the original problem…
I would suggest that as far as a meta concept goes (like atomization), the prediction would be that people who possess this concept are disproportionately good at coming up with certain insights.
Here’s an example I’ve noticed, taken from my own experience. I didn’t realize it was a special tool for a long while, but being able to cook up simple Bayesian models and solve them is a bit of a superpower. On a variety of occasions I’ve been able to just look at a problem someone had been struggling with, write down an equation, and say “do MCMC on this”. They try that and it works.
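(To make the “write down an equation and do MCMC on it” move concrete: here’s a minimal sketch of a random-walk Metropolis sampler estimating a coin’s bias from 7 heads and 3 tails. The toy problem, function names, and numbers are mine for illustration, not the commenter’s; real uses would involve a library like PyMC rather than hand-rolled code.)

```python
import math
import random

def log_posterior(p, heads, tails):
    """Log of the (unnormalized) posterior for a coin's bias p,
    with a uniform prior and a Bernoulli likelihood."""
    if p <= 0.0 or p >= 1.0:
        return float("-inf")  # impossible bias: zero posterior density
    return heads * math.log(p) + tails * math.log(1.0 - p)

def metropolis(heads, tails, steps=20000, step_size=0.1, seed=0):
    """Random-walk Metropolis sampler over the coin's bias."""
    rng = random.Random(seed)
    p = 0.5
    samples = []
    for _ in range(steps):
        proposal = p + rng.gauss(0.0, step_size)
        delta = log_posterior(proposal, heads, tails) - log_posterior(p, heads, tails)
        # Accept with probability min(1, exp(delta)); exp(-inf) = 0 rejects
        # out-of-range proposals automatically.
        if rng.random() < math.exp(min(0.0, delta)):
            p = proposal
        samples.append(p)
    return samples[steps // 2:]  # discard the first half as burn-in

samples = metropolis(heads=7, tails=3)
estimate = sum(samples) / len(samples)
# With a uniform prior, the exact posterior mean is 8/12 ≈ 0.667,
# so the MCMC estimate should land close to that.
```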
Similarly, evolution is a meta theory. It makes no direct predictions, it’s just an idea about how to come up with specific theories. But once you understand evolution you are suddenly weirdly good at coming up with a certain type of non-meta theories.
It’s not as directly checkable as anosmia, but I think it’s pretty real. So the question is, what kinds of things are the folks who discuss “atomization” or “postmodernism” really good at doing that others are not?
My answer to this would be to step back a little and share what I see. I read your post as being, in general, about perspective. You were given some data (here is what “community” is, and mostly there isn’t any) by your teacher. You compared that data to your actual experience using the criteria they set and came to the conclusion “I have community.” Then you read accounts of people who used the same or a similar definition, but a different perspective, and realized that maybe your take on “community” wasn’t consistent with everyone else’s. Then you observed other groups and added even more dimension to what you had read, thereby causing you to question your perspective.
In the U.S., community tends to play out as you experienced both as a child and as an adult. Yes, there are variations out there (the more rural you get – the stronger that sense of community is, collectives of choice tend to be stronger than collectives of geography, etc,) but you weren’t wrong to think you had community. Part of this is due, I believe, to the fact that many people in the U.S. are relatively self sufficient, or at least do not rely upon neighbors for very much. In the long, long ago, and in the far, far away, people depend/depended upon neighbors, so the sense of community is/was much stronger. But whether it was here or there, or now or then, community still exists; the difference isn’t bad or wrong – just different.
This sort of perspective difference exists in pretty much every area of life. I am a veteran, but of the Navy. My veteran experience will be different from someone who was in the Air Force, and as such, we both might be talking about the “military” and saying different or even contradictory things (a friend of mine was also in the Navy, and even we disagree). A more banal example would be shopping; the shopping experience is different depending on where you live. If your teacher had spent all their life in urban areas and was teaching a Home Economics class, they might have taught you that you need to arrive early, be aggressive in getting what you need as you are competing with others for finite products, and be as fast and efficient as possible at checkout. You, growing up in suburbia, might know shopping as a leisurely, almost fun affair where you take your time and enjoy yourself, which contradicts what your teacher told you. And you’d both be right.
It’s been my experience that the best way to understand a different perspective, especially one I just don’t understand, is to compartmentalize a specific thing about that perspective, then do my best to look at that segment as a whole (which is to say, try to figure out all the contributing factors, try to understand motivations, do research, look at historical context, try to understand the feelings of the person, etc.). While this approach isn’t always successful (I still just don’t understand why some people feel it’s necessary to buy $300 designer jeans), I do find that I usually get enough understanding to see whether my perspective is closer to or farther away from others’ perspectives.
So, to answer your question, there likely isn’t any one test. There might be “tests,” but even then, I doubt whomever constructed those tests captured *all* the variables. But you might be able to point yourself in roughly the right direction to get an answer by treating this (or anything like this) as a matter of perspective.
I’m not sure how much falsifiability matters when the world is forcing you to be a practical Bayesian. If you place a high prior probability that you live in a close-knit society and you’re presented with evidence that other societies are more close-knit, you’re going to exit with a lower posterior probability. Lather, rinse, and repeat.
Similarly, if you’re an anosmic who thinks you have the same set of senses as everybody else, your friend tells you that those eggs smell rotten, you disregard him because you don’t see anything wrong with them, and you get sick, your Bayesian posterior for “my senses are just like everybody else’s” ought to be lower than the prior.
Maybe a good way of looking at falsifiability is as a handy way of driving a posterior probability to zero. If you can falsify something, you can save yourself lots of time. But out in the real world where the odorants bind to the odor receptors, often all you’re going to get is trial and error and learning through experience.
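(The rotten-eggs update can be made concrete with a single application of Bayes’ rule. The probabilities below are made-up illustrative numbers, not anything from the comment; the point is only that strong evidence against the hypothesis drives the posterior well below the prior.)

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """One Bayes update: P(H | E) from P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1.0 - prior))

# H = "my senses are just like everybody else's"
# E = "my friend smelled rotten eggs I didn't notice, and I got sick"
# Illustrative numbers (assumptions): E is unlikely if H is true,
# fairly likely if H is false.
prior = 0.95
p = posterior(prior, p_evidence_given_h=0.05, p_evidence_given_not_h=0.60)
# p ≈ 0.61, a large drop from 0.95 - and each repeat drives it lower.
```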
To add another perspective, ideas have merit based not only on their correctness or conformity to reality, but also on their pragmatic value. If I write a book all about the equation 7+3=10, explaining all the wonderful things about this equation, its beautiful properties, its many dimensions of uniqueness, and so on, and somebody else finds the book stupid, it may be that that person has failed to grasp the deep, true ideas in the book–or it may well be that that person understands those ideas completely, but sees absolutely no value in contemplating them.
Physical powers of perception like smell usually have at least hypothetical pragmatic uses (if sufficiently broad and general). Real-world knowledge – such as the true degree of atomization of human societies – is similarly likely to be of at least hypothetical practical value. But there are plenty of abstract ideas that have no practical value at all. (Most of my opinions, for instance, would be considered by many to fall into that category.) If one loses nothing by ignoring them, then their truth, profundity and even popularity are ultimately irrelevant.
Somewhere down-thread, I wrote a comment that’s kinda like this. If you think of memory in the brain as a network of concepts that stimulate their next hops, and “thinking” as the act of paying attention to one or more of the stimulated concepts from each hop, some people are going to have a really well-developed, high-connectivity representation of “7+3=10”. They’ll think your book is silly, because it won’t stimulate anything novel in their network of concepts; everything you say will be confirmed by their network. On the other hand, somebody with a less well-developed math sense, which probably correlates to lower connectivity amongst the concepts needed to do math fluently, may find concepts in your book that are genuinely novel and useful for that particular brain.
But note that this is going to be a bit different from reasoning about knowledge based on whether it’s falsifiable or not. Brains are ultimately pretty Bayesian: neural networks basically start out with some fairly random connectivity, and slowly relax to form concepts that are informed by some set of Bayesian priors that work in the real world.
That no two brains are going to relax into exactly the same set of concepts, with connectivity that represents the same priors, is what makes people other than yourself pretty interesting.
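The spreading-activation picture in the comment above can be sketched as a toy model (the concept names, weights, and threshold here are made up purely for illustration, not taken from the comment):

```python
# Toy spreading-activation network: each concept "stimulates its next hops".
# For an expert, a strongly connected concept like "7+3=10" activates only
# familiar neighbours; a sparser network would surface novel ones.
network = {
    "7+3=10": {"addition": 0.9, "base-10": 0.8, "commutativity": 0.7},
    "addition": {"counting": 0.6},
    "base-10": {"positional notation": 0.5},
}

def spread(start, threshold=0.6):
    """Return every concept stimulated above threshold, hop by hop."""
    active, frontier = set(), [start]
    while frontier:
        concept = frontier.pop()
        for neighbour, weight in network.get(concept, {}).items():
            if weight >= threshold and neighbour not in active:
                active.add(neighbour)
                frontier.append(neighbour)
    return active

print(sorted(spread("7+3=10")))  # ['addition', 'base-10', 'commutativity', 'counting']
```

Note that the weak link to “positional notation” never fires: in this sketch, “thinking” only visits the strongly stimulated hops, which is why a dense, well-trodden network yields nothing novel.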
One of the things I like most about this blog is that you’re coming from a rationalist, science-emphasising perspective but fairly often, and especially more recently, think your way round to seeing the value of the kind of understanding that sensible people in the humanities and social sciences argue for.
There is valuable knowledge which you cannot get at by breaking the world into objective facts and then trying to integrate them and render them coherent through theory. You have to get them, you have to have a feel for them, you have to know them in the sense that you know a friend of yours. It’s a knowledge which cannot be found through the idiographic collection of data and nomothetic generalisation because it is about a whole that already coheres.
Have you ever heard of Iain McGilchrist’s The Master and His Emissary?
There’s nothing a-rational about unknown unknowns. There’s also nothing uniquely post-modernist about the concept.
What relevance do you think postmodernism has to my comment?
I’ve pointed out a number of times that the SSC crowd seems to be too charitable, especially on the subreddit (which has a rule saying to be charitable, but no rule telling people where to stop). As Scott himself has pointed out in the past, it’s hard to direct advice only at people who need it. Some people do need to be more charitable, just like some people need to stop harassing women. Some people, though, and especially the ones who don’t need it, listen to the advice and end up being too charitable, just like some nerds hear “everything I say to a woman can be harassment” and take it far too literally.
This is also a case of Geek social fallacy #1. It is not wrong to say “postmodernists are speaking nonsense” just because that smells of making postmodernists into the outgroup. You do not have to be accepting of everyone.
Indeed. We should aim for the most charitable explanation that is still accurate, not sacrifice accuracy to become more charitable.
For example:
– ‘Humans desire to murder others’ is false and uncharitable to humans
– ‘Human never desire to murder others’ is false and overly charitable to humans
– ‘Humans sometimes desire to murder others’ is true and charitable enough to humans
Similarly, “postmodernists are speaking nonsense” is probably too uncharitable because there is too much diversity in the movement to make such a sweeping statement, but ‘Foucault is speaking nonsense’ or ‘pomos who reason in a specific way are speaking nonsense’ are not necessarily uncharitable statements.
I feel like this test (be maximally charitable without sacrificing accuracy) misses the whole point of being charitable / intellectual humility (and if you want a controversial extension, / postmodern epistemology)–which is that you don’t know where accuracy lies between you and your interlocutor. If you are never sacrificing your >current< view of accuracy in the pursuit of understanding another point of view, at minimum going so far as to rephrase your views as questions, you are not really being charitable at all.
But of course you do. The one with less prediction-error wins. The model with the highest posterior probability wins.
“Charity” moves the balances not one bit.
It seems like you just said “of course you know where accuracy lies during a conversation, it’s [two definitions of accuracy that can only be evaluated after a conversation]”.
I don’t see that declining to say things like “postmodernists are speaking nonsense” necessarily has much or anything to do with social acceptance or outgroups, or whether the SSC crowd consists largely of geeks.
On a purely intellectual level that has nothing to do with one’s relationships with other people, it can be wise to keep an open mind to the possibility that something others find meaningful but sounds like nonsense to you, might have meaning that you’ve missed but might later understand. The OP here is all about giving examples of such situations. Whether or not you agree with the proposition that it’s always wise to keep an open mind to the possibility that what sounds like nonsense isn’t, your talk of social fallacies and outgroups makes me think that you’re reacting to something other than what the post is actually talking about.
In any case, it seems to me that a statement like “Postmodernism makes no sense to me” is functionally the same as “Postmodernism is nonsense”, except for dispensing with the implicit assumption, “I cannot be wrong”. I don’t see a downside to formulating one’s statements in the humbler way (while the last paragraph of the OP describes a potential upside).
I think it has a lot to do with it. SSC is considerably more inclined towards geekdom than the general population. I also think the impulse to not reject people from your group has a lot to do with the impulse to not say that people’s ideas are bad, or that they are insincere, or that they are incompetent. Both of them boil down to “I want to be nice to the other guy” and taking it too far.
“Postmodernism makes no sense to me” is functionally the same as “Postmodernism is nonsense”,
I think the latter has implications of “I’ve investigated the claim and come to the conclusion that it’s nonsense.” Which sometimes is an entirely reasonable result to come to.
The brain is a human organ, so it can go wrong like any other organ; finding someone who outputs nonsense is like finding someone whose eyes output nonsense unless they wear glasses.
Which is not to say that postmodernism is one of those things.
I, for one, have taken these admonitions to heart and now read your posts in the worst light I can plausibly cast them in.
If someone is constantly telling you that he is engaging in bad faith, at some point one should listen.
I did not say “you should take everything uncharitably”.
A more charitable reading of Brad would be that they are joking.
Arguably one way to test post-modernism would be a Sokal-style prank, i.e. submit a paper known to be pure nonsense to a journal which publishes post-modern essays and see if you can get it accepted.
Scott, that anosmia story made no sense when you quoted it the first time, and it still makes no sense. I wish you wouldn’t cite it quite so credulously.
(How do you not notice that you’re not smelling things? The person claims they yelled “gross” when someone farted—but how the heck would they know, if they had no sense of smell? Did the idea of testing their sense of smell—and there are many absolutely trivial and obvious tests that can be run—not occur to them, before they reached their conclusion about having no sense of smell? Did they really decide that they had no sense of smell based only on not being able to describe, in words, what things smell like? What the heck kind of reasoning is that?! It’s a non sequitur! What does the one have to do with the other? I can’t describe what smells are like either, except in terms of each other! How else would you do it?! I can tell you that I certainly do have a sense of smell! No, the whole story stinks, and using it as an example of anything, much less any idea so grand as the one in this post, is very, very silly.)
“No, the whole story stinks.”
I feel like saying this ought to make you more sympathetic to the possibility that someone would say farts smell gross even if they couldn’t literally smell them.
Imagine the following situation: You sit down with some friends to eat lunch, which in this case are a bunch of turkey sandwiches from the local deli. You take a bite, and it tastes fine. Then, one by one, the other people at the table take a bite, make a face, and say something along the lines of “Geez, there’s something wrong with this sandwich, I think the meat’s gone bad!”
I don’t know about you, but I’m:
a. Probably not eating any more of my sandwich.
b. Probably going to be thinking that hey, maybe something about that sandwich *did* taste a little off, now that you mention it.
I expect something similar happens w.r.t. smells and people with no (or not much) sense of smell. And I’d say something similar happens often with popularity and fashion and trendy songs and such.
Does it really, though? Often enough to matter?
Because with sandwiches, I don’t think I have ever had that experience in my life. Modern first-world hygiene and food-preparation standards are such that rotten meat basically never makes it to the table, and the worst you get is the sort of culinary mediocrity that everybody is too polite to overtly denounce.
And with smells, again, really? Because modern sanitary standards are pretty good too, and I can easily believe that someone made it to adulthood without ever encountering an environment so malodorous that their companions all found it necessary to call out that fact as they hastily departed – except when correlated with visual cues for disgustingness, e.g. a filthy outhouse or a trash heap, in which case observational experience confirms the usage that “stinks” is a synonym for “generically unpleasing”.
It happens to me and my wife with smells. For one example, she is extremely in tune with the smell of beef that has started to turn. I’ll pull a package of meat out of the fridge that is a day past its sell-by date and think it’s fine, but if she smells it, it goes right in the trash. Interestingly, the opposite is true for chicken. The moment it starts to turn it makes me gag, but she doesn’t notice it. There are several different foods like this, and we tend to just trust whoever thinks it smells bad.
The one test I have found is firmly indicative is smoke. If you cannot smell, you almost certainly will be the last to detect smoke. I once had a brief anosmic episode due to my sinuses (my sense of smell is usually impaired, but in this case I lost it completely for a number of days), and I discovered this while cooking: I was distracted by some reading, and let some meat burn slightly. When the smoke alarm went off I was completely surprised – even when the air had a visible tinge of smoke I could smell nothing.
Well, there usually is a sound when someone farts. I’m quite sure that it is in fact possible to fake being able to smell, as in most cases there would be visual or auditory cues. And, well, conducting a hypothesis test is not going to be the first thought for many, many people. Not being able to describe smells even in terms of each other might be inconclusive in and of itself, but enough to raise the person’s suspicions.
I don’t think that the story stinks per se; however, it does describe a quite exotic situation, so it probably shouldn’t be given much weight without proof.
Farts are often audible.
But I agree that not having the vocabulary to describe smells doesn’t imply you can’t smell. Almost everyone lacks that vocabulary. And if I thought I was anosmic I’d do blind tests. My daughter seemed to have no sense of smell when she was very young (never spontaneously commented on pleasant or unpleasant smells), and we did do tests and they seemed to confirm that, but then she suddenly acquired one at the age of six.
And if I thought I was anosmic I’d do blind tests.
That depends on you figuring out that you lack something everyone else has, which is harder than it sounds. We learn that someone saying “I nearly died when Sally said that” or “I was blinded by science” are speaking metaphorically and are not to be taken literally, so why wouldn’t you think someone saying “Oh I love the smell of fresh-cut grass” is also speaking metaphorically if you can’t reliably distinguish between vague smell of grass and vague smell of cabbage cooking? I think colour-blind and tone-deaf people also don’t really get what people are going on about with their “red” and their “beautiful music”, and if you can hear ordinarily why would you assume “I can hear everything everyone else does except this one particular quality, guess I must be missing something and need to conduct a test to prove this” rather than “Joe saying he prefers Bach to Mozart is like Sue saying she was knocked over by a feather”.
“The person claims they yelled “gross” when someone farted”
If you grew up and everyone yelled “gross” when someone sneezed and failed to cover their mouth with their hand, would you assume that you had a missing sense that was allowing them to perceive something extra horrible, or would you just go with assuming that’s what people say when someone sneezes?
I mean it’s not nice when someone disturbs the local slood field like that and I guess someone might just make up some justification about it being unsanitary or something as that would also apply… when the real reason is that everyone else is feeling the wibble in the slood field and they’re not.
Kids are good at slotting almost any observation into their worldview.
Without digging too deeply, the wikipedia article on anosmia basically seems to back up that this is a common experience
(although I admit that 1. I can’t read the citation for this claim and 2. the citation for a similar claim further up in the wikipedia page was a BBC news article which made no such claim (I’ve removed the citation))
I think it might prove insightful for you to look into the linguistics literature about how congenitally blind people handle vision-related cognitive metaphors (like “look into” or “insightful”). (Spoiler: they do it really, really well.)
How do you not notice that you’re not smelling things?
I mean, clearly the person did notice, once they got to adulthood.
Yeah, the description of how they found out might be a little… Flowery…
But it maps pretty well to what my friend with anosmia experienced.
He didn’t realize until he was in his mid-twenties.
He’s always had really peculiar culinary tastes. So no one was all that surprised by his discovery.
He managed to figure out he rated food by texture and not taste a couple of years before he realized he was anosmic, but we just figured he was being weird. (I mean, he was being weird, but now we know why he’s weird)
I once came up with the idea of “political anosmics” – people who on a physical level had a sense of smell, but who had decided that the sense of smell was Socially Constructed and therefore Had To Go, seeing as all of this smell-talk tended to get used to express one’s disdain for the lower classes and foreigners and suchlike, and were therefore determined to think and act as if smell didn’t exist.
I wrote a post about this on – I think it was LJ – and an actual literal anosmic friend said that given the sheer vagueness of what nosmics have to say about smell, there were times he could believe it was all made up.
Smell is wired to more primitive brain parts than sight or sound, which is why it’s so hard to express it in words.
As an anosmic person I can confirm that it’s very easy to go decades not realizing everyone is doing something you are not. When people say things like “That smells delicious!” you just chalk it up to personal preference and ignore it. It’s not until you start asking very specific questions that you realize something is happening beyond your senses.
Just last week there was a chili cook-off at work. Everyone walked into the building and commented on how amazing it smelled. I obviously didn’t notice anything. Now that I have the knowledge that I don’t smell anything, this fact gets raised to a conscious level, but before I realized it, I sort of ignored all these comments.
However, a counterpoint would be that a lack of smell came with other symptoms that were noticeable but confusing until I learned about this sensory issue. I had a lot of trouble with food and eating right. Very similar to Avoidant/Restrictive Food Intake Disorder. Since my issue started before that diagnosis was invented, I was just weird with food. After learning that I didn’t have smell and how that affects appetite and taste, a whole lot of my past made a lot more sense and I was able to make the needed changes to make eating an easier situation.
I would agree that a missing concept doesn’t exist in a vacuum and that real concepts leave their marks on our lives, but it is very easy to construct an idea of existence that just normalizes those marks.
> The person claims they yelled “gross” when someone farted—but how the heck would they know, if they had no sense of smell?
Perhaps they have the smell equivalent of blindsight: they do have a sense of smell, and can act on it, but are unable to consciously access it.
Data points in favor of people not noticing these things: I have a boyfriend who didn’t realize he was red-green colorblind until high school, and then his brother didn’t figure out that he was also red-green colorblind until a couple of years after that.
I did not know I was nearsighted until I was 9 years old, when I was formally tested for it.
I had a Health and Biology worksheet in class in school 2 years before that, which had a set of three pictures that showed how a scene looked to someone who was nearsighted, farsighted, and with normal sight. The nearsighted image looked correct to me, and the normal-sight one was obviously nonsense: having things be clear in the far field was something that only happened in photographs and on television, so I dismissed it as yet another example of Lies That Teachers Tell.
I didn’t realize it until I was about 14, and I’m really nearsighted! All of a sudden I had to sit in the front row and squint to see anything in class. Flunked my driver’s ed eye exam hard. I don’t know if something changed or I was really dumb beforehand.
You probably were not dumb.
The progress of your myopia was probably just such that your far field did not impact your life much. If you didn’t do sports or outdoorsing and did not have to drive, and spent most of your time looking at TVs, computers, books, and sometimes classroom walls, you wouldn’t have noticed, until suddenly the classroom wall one day went out of focus.
Myopia tends to start out less intense, and then get worse as one grows to maturity, and then the progression tapers off in adulthood, just in time for the presbyopia progression to start.
After I got fitted with glasses when I was nine, I went through a new pair every 2 years, each one with about a quarter diopter increase in strength.
Then from ages 13 to 23, I needed a new pair every year, each about a quarter diopter to half diopter stronger. The worst jump, at age 17, the new prescription was a full diopter different.
Once I was 25 onward, I just replaced glasses as they wore out or were broken, with no change in prescription. My glasses got thinner and lighter over that time, as higher and higher refraction index lenses became available, and I became prosperous enough to afford them.
In my mid 40s, I stopped being able to focus on things literally actually touching my cornea.
Now my default glasses are progressives, and I have different glasses for doing lab bench work, for computering, for reading, and for driving, and need to change them roughly every 2 years as the presbyopia gets worse.
My case is not atypical.
It’s fairly easy to not notice being red-green colourblind, because red-green colourblind people are still able to distinguish red and green (and the other affected colour pairs) most of the time and build up concepts of red and green as distinct colours, even if the concepts may be a little different from those of non-colourblind people. Anosmia seems like a sensory perception difference of much greater magnitude (but I’m not anosmic).
I knew I was red-green colourblind from visits to the optician from an early age, but I didn’t really have the fact that I was perceiving colours slightly differently from most other people highlighted at all to me until one time when I played LaserQuest (do people do LaserQuest in America?) some time in my early teens. There were two teams, identified by coloured LEDs on the vests; one team was red and one team was green. Because the colour was only seen in a small area, and it was very bright, it was impossible for me to tell whether anyone I met was an enemy or a friend and so I ended up performing extremely poorly (to the point that after the end of the game, one of the staff people asked me how I could possibly have gotten such a low score). But that’s the only time I can remember where I was clearly failing to notice a colour distinction other people were noticing and making use of.
It wasn’t anosmia, but I learned something new from the same post so I think I have some perspective.
Think about how metaphors are learned. How often does someone explain to you “here’s a metaphor”? How many times have you picked up that something was a metaphor because the literal meaning was obviously absurd? When people are missing “universal human experiences”, as the post calls them, they just have a slightly larger set of things that are obviously absurd and thus interpret slightly more communication as metaphorical rather than literal.
That’s me. The story is true.
I can hear farts.
No. The assignment caused me to reflect on smell and re-evaluate the evidence of my entire life. The conclusion was based on all my life experience up to that point. I had simply never considered the hypothesis before then.
If you want to know whether I have ever done tests, then the answer is yes, of course. There have probably been 20 times in my life when I told someone I can’t smell, they didn’t believe me, and we did some sort of test such as blindfolding me and asking me to determine when they’ve moved a stick of cinnamon under my nose or something like that. I’ve always failed these tests.
Every once in a while, I think I might be able to smell something, but I’ve never been able to pass tests in a way such that I can’t rule out using some other sense to pass the test, and I can’t really distinguish the possible smell sensations I sometimes have from just moisture, temperature, etc. I can, however, reliably detect cigarette smoke without first seeing or hearing that someone is smoking. I can detect fires in the same way. If I had a normal sense of smell, I presume I could pass that same test for, say, peeling oranges (i.e. I would know when someone around me was peeling an orange before I saw or heard them doing it), but I never have done that.
Are you allergic to tobacco? I am, and I can always smell right away if someone is smoking- but maybe what I call “smell” is actually the sensation of the allergic reaction. (Similarly, I’m allergic to mold, and a piece from a pepper or cucumber that’s starting to go moldy will taste off to me, but my husband might not notice anything.)
Hmm, I don’t think I’m allergic; I don’t seem to have any real ill effects and I can detect smoke from other burning things besides tobacco. I guess it just shows there’s lots of ways to detect things in the air!
Especially lots of ways to detect things that are probably going to kill you pretty quickly in the ancestral environment.
I wouldn’t be surprised if there was some kind of smoke detection apart from smell, to avoid death by forest fire.
And smoke also irritates the throat, eyes and lungs.
If you learn about something, and it seems trivial and boring, but lots of other people think it’s interesting and important – well, it could be so far beneath you that you’d internalized all its lessons already. Or it could be so far beyond you that you’re not even thinking on the same level as the people who talk about it.
I “got” this concept after years of doing yoga. At first, statements like “move with your breath”, “long spine,” “stand straight,” “clear your mind” sounded like “nothings.” Almost verbal tics that yoga teachers sprinkle into their classes. Then at some point I started getting hints of what those words might refer to and they have moved from “obvious stuff I tune out” to “concepts I am not sure I understand 1%, 10%, or 100%”
Overall I started to tune into cliches more through these experiences. I went from “it’s annoying that people say X all the time” to “do I understand what X is really about? It is probably important since it’s mentioned all the time.”
I think this is more to do with short labels for concepts. The yoga teacher is dropping reminders, which are sufficient keys to retrieve the relevant ideas for someone who already knows them. But you can’t infer the concept just from the key.
This is a pretty universal phenomenon. You see it all the time in jargon. We build layers of cached ideas, and after a while we stop thinking about all but the last few layers; this is why it’s sometimes hard for experts in a field to communicate with lay people. There’s a saying that if you really understand something, you should be able to give a succinct explanation: to “really understand”, you need to have enough jargon that you can express the idea succinctly, and you need to understand the jargon at the top and at all the layers in between.
There’s another saying that the secrets to the universe are so simple, you could write them on the surface of an emerald. Seeing such an emerald would not immediately enlighten, but anyone who already knew the whole secret would recognize it immediately. The short form is jargon, a key.
Hi Kisil, I follow what you are saying, but at least in my experience it was closer to a “concept-shaped hole”. I.e. if you look at the emerald without knowing the secret, do you think “I don’t get it” (in which case the emerald is indeed the key) or do you think “duh, obviously” and miss the whole point?
In my case it was the latter. I didn’t think “I wish people would stop saying move with the breath and explain what that is” – that would be better (because I would be able to ask) – the problem is I thought I WAS moving with the breath but really had no idea what it meant.
Have you read Made to Stick (re: why it’s sometimes hard for experts in a field to communicate with outsiders)? If not, Google the tapping experiment for a fun example of this.
Like E=mc^2 is kind of useless on its own, but, coupled with a good understanding of classical mechanics, can be backderived into special relativity.
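For what it’s worth, the “key” here is the zero-momentum special case of a fuller relation (this is standard physics, not something spelled out in the comment above):

```latex
% E = mc^2 is the p = 0 case of the full energy-momentum relation:
E^2 = (pc)^2 + (mc^2)^2
% Expanding for small p recovers the classical kinetic energy term,
E \approx mc^2 + \frac{p^2}{2m}
% which is the bridge back to classical mechanics that makes the
% "backderivation" into special relativity possible.
```

Someone who already holds the surrounding theory can unpack the famous four characters into all of this; to anyone else, the short form is just an opaque key.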
Schizoids do arguments best:
Some people seem to think that building a wall is really stupid. I think it’s kind of cool.
Some people think that allowing in refugees as a response to the death of Alan Kurdi was a good idea. I think it was crazy.
In these cases, it’s not really concept-shaped holes that are the problem – it’s refusal to engage with arguments for emotional reasons. Either side could be right, but their arguments aren’t good.
I think that’s the kind of thing that I notice – wilful ignorance of arguments. We just withdraw and ignore.
I’m less convinced that “concept-shaped holes” are a general intellectual problem, once you remove the emotional/identity element. If you are operating with a reasonable level of humility, surely you’ll just say “I don’t understand this.”
If something is in my conceptual dark-zone I tend to just get bored and stop reading about it. I can’t think of a time when I’ve read something and thought “this is so stupid” without it somehow relating to emotion/identity.
An example:
I’ve been reading on blogs for about a decade how the discrepancy between Y chromosome and mitochondrial mutation demonstrates the inevitability of intra-social male competition. And, I’ve been saying for about a decade, couldn’t it be evidence of inter-social competition instead?
Anyway, a few months ago I decided to go and read the original paper. Very much in my conceptual dark zone, however, at the end of it the author says something like “this might be caused by scenarios like Genghis Khan having lots of women”. So, I think I was right.
But over all those years no-one ever acknowledged that I might have a point, and I think there was only once when anyone actually bothered to disagree.
So, the intra-social competition people may be right, but their arguments are bad because of wilful ignorance.
I don’t understand what you are talking about. Do you have any non-culture war examples to use instead?
What I’m trying to say is that the feeling of “Hmmm… this guy is making a really stupid argument” or “this argument is really weak” only occurs in culture war/identity type situations.
Another example – I used to think that government debt was a serious problem. I used to get really angry at people who would propose more government spending. What idiots, I thought.
Then someone told me that government debt was the same as net private sector savings, and I realised that I was just reacting emotionally to the word “debt” rather than having any understanding of what was going on.
I don’t think it was a conceptual black hole. It wasn’t that I was incapable of understanding the ins-and-outs of macroeconomics, because I’m still incapable of understanding that, and I don’t feel the same way anymore. It was because I had an emotional reaction.
And I’ve never had that sense of “hmmm… what a dumb argument” with something like maths, where I have plenty of conceptual black holes, but no emotional investment.
I think you’re right. Most of my strong responses to things (not just culture-war type things) are due to issues that directly affect my ego/sense of self. The emotional response is actually very helpful, because when I do notice a strongly negative response to something, I try to recognize it as a red flag. I then stop, take a breath, try to figure out why that is eliciting that response, and try to work out a rational solution instead of lashing out or dismissing it out of hand. I’ve not been as successful with the positive responses, because they feel good, and I’m not an ascetic.
By the way, I can definitely see how some people could have a sense of “hmmm… what a dumb argument” with something like math, if the mathematical concept being discussed goes against something tied to their ego. I don’t know enough about math to think of a real example, but you definitely see it in medicine where people get violently emotional when their pet theories get discussed critically.
I’ve heard dumb homeopathy arguments, and it’s neither culture war nor ego-affecting (except on an extremely broad level that would let you classify almost everything as those).
Are you sure you aren’t now reacting emotionally to the word “savings” rather than having any understanding of what is going on?
I think you’re right. Most of my strong responses to things (not just culture-war type things) are due to issues that directly affect my ego/sense of self.
http://theoatmeal.com/comics/believe
@John
I don’t have anything like the strength of feeling on the issue that I used to. Too much saving may well be bad.
Though, there are a few arguments which I find to be bad. “The nation’s credit card” argument, which David Cameron used to trot out, is really bad. A bad analogy without any further elaboration. Probably just appealing to the electorate’s emotional responses.
Cameron presented himself as a hero fighting the debt, but it seems like his attempts at austerity failed on their own terms. Not only that but you have productivity stagnation, worse working conditions etc. So I’m ready to give the alternative a whirl. Cut some taxes, maybe. Pay for schools. Etc.
I’ve encountered ‘that’s really dumb’ reactions to non-political things. One guy proposed this theory that gravity is caused by the screening of blackbody radiation (i.e. how much less blackbody radiation something emits from its surface than would be emitted by its insides). I can perceive no possible political motive, but daaayumn that is one stupid theory.
The appropriate response to this is to get even madder.
Then someone told me that government debt was the same as net private sector savings,
Someone was wrong. It does, indeed, vacuum up some chunk of those savings. Which could be going to more productive investments. But not all. We would see the results if it were.
@Mary
I suppose it’s net financial savings – if you count all private sector assets, those financial savings are going to be a slice of the total.
But, I think for financial savings net private sector savings (+ foreign govmt savings) have to equal government liabilities. There is no-one else to take the other side.
Mark, what you’re not being told there is that this is a special case, since governments have arrogated to themselves the exclusive power of creating debtless money. Absent that power, the equation no longer holds.
It’s perfectly possible to be against the increasing indebtedness of one’s government while also being in favor of a monetary reform that removes the private sector’s dependence on government debt.
Do you have Asperger’s?
No…. why?
I do, and your comments about immediately seeing the flaws in emotional arguments struck a chord with me. I could have written something very similar. I was wondering if this trait occurs more often in Aspies.
I think I get what you’re saying…
For many years I thought that people who are into “spirituality” and magick and Tarot and Hare-Krishna and all that are all deluded morons.
Then I began to suspect that they have all found more or less different ways of tapping into some interesting bugs in our mental hardware that enables them, at the very least, to enter very interesting states of mind. Reading the book that you recommended (“Mastering the Core Teachings of the Buddha”) confirmed that suspicion.
I still think that many of the esoteric traditions have blatant failure modes that lead to believing all kinds of bullshit when you’re not carefully distinguishing what’s real, and what’s metaphor, and what’s just a useful mental shortcut, and that many followers don’t get that and thus effectively are deluded morons, but I’m no longer quite as sure.
On the other hand, there are some concepts that are held deeply by large groups of people, and that are fundamentally incompatible with those held by others. Either there is Eternal Life in the traditional Christian sense, or there is Reincarnation in the traditional Hindu sense, or neither, but not both. (It also bears remembering that theology was considered the most noble of the academic disciplines for hundreds of years.) Either Communism leads to a worker’s paradise, and Capitalism is evil, or Capitalism is the only way to prosperity, and Communism is evil, or neither, but not both. Both propositions are or were defended by smart people, so if you see some smart person proposing something that seems obviously dumb (or at least fatally flawed), and you see others enthusiastically agreeing, it’s not a priori clear that you’re the one in the wrong.
There’s a difference between “they are really experiencing altered mental states” and “their claims about the world are true”. It’s possible to be deluded morons in the second sense and not in the first. Furthermore, I suspect that most people who call them deluded morons intend that second sense.
Oh, totally. The thing is this, for example: when a practitioner of magick talks about “traveling to the astral realm”, is he making claims about the “real world”, or just his internal state of mind, and does he even care? The skeptic may be quick to assume the first, and dismiss the statement as bullshit, when in fact it accurately describes an unusual mental state, and that’s all it was meant to do.
Most people are not explicitly stating “I’m speaking metaphorically, of course”, or “this is obviously all in the head”, or “take this as an analogy”, or “I’m exaggerating for dramatic effect, take this with a grain of salt”, or “I’m being literally literal here, seriously” when talking about philosophical or spiritual matters, and that may cause some misunderstandings.
Also, mental hacking may well lead to real-world effects; because we don’t live in pure objectivity, our lives are lived subjectively. I doubt magic can really make people levitate or become immortal. But suppose a believer performs a ritual to get a job or pass an exam—it’s entirely conceivable that the ritual will subconsciously predispose them to take actions conducive to success (like interviewing with confidence, or studying longer). If this is correct, then the magic ritual worked.
Yup. It’s a little disconcerting, but carefully cultivated delusions may be really, really helpful. James Loehr recommends that athletes cultivate a normal self, where they are acutely aware of their abilities and limitations, and a competition self, where they believe they are essentially invincible and nothing is going to stop them. That’s a bit scary if your normal assumption is that removing bias from your thinking is always the correct thing to do…
I’ve come to believe this. Specifically that “taking things literally” is simply not the norm, and neither is making a sharp distinction between our experiences and the external world.
To flesh this out a bit:
.i na’igo’i (Lojban: roughly, “not exactly”)
People like David Chapman or John Michael Greer will talk your ear off about how the term “the real world” does not have a universally-accepted meaning. So the first thing to do is to taboo “real” and start talking about anticipated experiences instead.
In my experience, the more philosophical kind of occultist and, say, James Randi turn out to anticipate pretty much the same experiences. There are some testable disagreements, although our host will talk your ear off about how running the tests won’t necessarily end the argument.
Practicing magicians don’t normally bother spelling all this out because they’re usually talking shop with other magicians. If you have done the thing called “communicating with spirits”, then you know the experience that phrase refers to, you know the ways it can be helpful, you know the ways it can be misleading, etc. Calling it “real” or “imaginary” doesn’t normally add any useful information.
To be fair, these are not mutually exclusive. It could be the case that, for example, most “spiritual” people are either deluded morons or lying hucksters, but a few are genuinely able to meditate themselves into altered states of consciousness. People are diverse.
> I don’t think it’s always worth delving deep into a seemingly-meaningless field to discover the hidden meaning.
One thing I like about the state of modern culture is that it’s losing patience with people who think they have something to say, but can’t be bothered to learn how to explain themselves. Does this or that guy have something to say, despite obscuring it with useless jargon and poorly focused streams of thought? I don’t care, l2write or GTFO.
If it’s useful and true, someone else will say it better (edit: like Scott!). Because useful and true things are independent of any one person. a² + b² = c² is true regardless of who worked it out first. The mechanics of the front-kick are governed by anatomy and physics, and it doesn’t matter if it was passed down to you through tradition from some venerated ancient founder of a martial art, or if you learnt it while training for cage fighting.
Of course if I pick up the same insight from some popular source instead of directly from, say, Hegel or whoever, I won’t be able to claim to have read Hegel and sound intellectual and cultured…
congratulations, following your guidance we kill all the nonverbal autistics
(you could build models of *why* the person is being hard to understand & then be kind to them if they are one of the neurotypes which tends to be both useful and hard to understand)
Do they tend to have a lot of profound wisdom to share? Maybe they do but we’re just not listening right?
How is ignoring someone who’s making a public statement because they don’t express themselves in a comprehensible manner even remotely equivalent to killing someone?
@fluffy mu
@incurian they tend to be better at building enduring structures (physical and logical) than sounding profound, which is related to why neurotypicals tend to evaluate them as “not explaining themselves”
People trying to “sound profound” is part of the problem. My target here is deliberate obscurantism, the way jargon and using too many complex words unnecessarily seem an integral part of pursuing prestige – and its material benefits – in philosophy and some related academic disciplines.
I’m talking about the sort of thing Feynman (rightly) mocked by translating a sentence that went roughly like “The individual member of the social community often receives his information via visual, symbolic channels.” to “People read.”
This may be a Type 1 vs Type 2 errors sort of tradeoff. Perhaps the more patience a culture has with people poor at self-expression, the more vulnerable it is to obscurantist bullshit. And if it tries to immunise itself from the latter, it becomes less forgiving to the former.
And perhaps not. I am not sure. At least when it comes to me personally, it’s the people who want to sell me their books, to have me namedrop them, and to hold academic jobs paid for by my tax money that bother me, not the odd guy I meet in person who’s less than brilliantly eloquent. In person, I have near limitless patience for the latter.
I think it is uncharitable to think it is mainly about prestige.
When you work in a field, you tend to use the jargon, the way of thinking, and the sentence structures used in that field.
And most of the time it is more efficient.
And sometimes it is not, but it can be hard to detect that, and to switch to common language when it is the case.
You are first thinking with the jargon and bizarre grammar and categories, and then you have to translate to explain.
And the translation isn’t always obvious, even when it ends up seeming obvious after it is done.
Even without the jargon, the way to discover something is often sinuous, so at first the way to explain it is also sinuous.
And it can be hard to know that there is some much less sinuous way to end up at the same point.
I completely agree it could be good to ask people to be clearer, decrease the use of jargon, etc…
Doing it can even make them think about the subject in another way and understand it better.
But we have to acknowledge it has a cost, and not everybody is as good at doing it, and people don’t always know when they are unclear.
It is like source code: a lot of code is poorly written, with too many sinuous algorithms and structures, poor documentation, etc…
But that is much more a combination of being too lazy, not having the resources to bother too much about it, and not having the competence, than of being pretentious or trying to obfuscate what you are doing.
It doesn’t mean it is bad advice, or a bad reminder, to ask people to improve their source code, but implying they have pernicious intent for not writing perfect code is pretty unfair.
IME rationalists are *unusually* bad at this
Ditto.
Curious to hear about particular experiences of this from you (And same for hlynkacg).
How do you jump from “ignoring their opinions” straight to “killing” ? I think you need to be more explicit there in Step 2.
I could. But why bother, unless they’ve already demonstrated in some non-verbal way that they’ve got something valuable going on between their ears? I’ve found that, for a certain type of writing, the ratio of useful-insights to work-to-understand is very low.
I don’t think this comment was necessarily malicious, but this is a bad comment and you should feel bad. It was not kind, necessary, true, amusing, useful, or coherent.
how nice of you to provide an example of what I was talking about upthread
(note that “easy to understand” in the rationalist world tends to mean “kicks everything upstairs to the slow system”, i.e. either doesn’t have functional heuristics or is better at the sorts of language-games you find in the talmud than you are at defending yourself from said games. well, “kicks everything upstairs to the slow system and then takes great care to write them up in accordance with upper-middle-class norms and doesn’t assume the reader has any ~neurotypical superpowers~”)
How curious; that people are losing their attention and patience with more complex forms of expression is exactly what I dislike about the state of modern culture…
You can have complexity and clarity. Half the problem is people conflating complexity with impenetrability.
To be more specific, what I lament is the fall of the cultivation of complexity for the sake of complexity, clarity be damned; or, how we used to call it back then, “aesthetics”. Expressing oneself not under the logic of productivism, of limiting oneself to the most efficient way of saying things; but instead being overflowing, luxurious, extravagant, doing complex things with language for no practical reason but just because it’s fun to see how far you can stretch a sentence; it’s delightful to see how many dead languages can one ransack for cool, obscure words, and so on.
Ok, but then I think people should warn others when they are doing it.
I certainly don’t want something like that when I am already struggling to learn or understand something, and particularly in field like philosophy where it is so hard, even when we try to be crystal clear, to know when we really are understanding each other.
And in fact I mostly never want that, because I don’t have this sense of aesthetics, and I always feel dull when I read anything akin to poesy (I just mostly don’t get these things).
But poesy isn’t a problem, because it is clear when poesy is poesy.
The point of this post is that the problem might NOT be that they ‘can’t write.’
I suspect I’m moderately schizoid, never been diagnosed, but it fits more than any other described disorder. However, I don’t characterize it as an aversion to contact with other egos. To me it just feels like having a Dunbar number that is an order of magnitude smaller than everyone else. I do have room in my life for other people, but clearly not as many as most.
Isn’t that basically just introversion? It’s not that you dislike people, it’s just that dealing with people tends to sap mental energy, unlike an extrovert who can feed off social interaction. As such, you have to ration social interactions, which leads to a small number of close friends.
Some of the examples show more discrete “holes” than others (e.g. anosmia), but all can be framed in terms of we humans each evaluating degrees of some quality according to our own personal Overton windows, so to speak. As we grow older and live through more experiences, our “personal Overton windows” get wider and wider, hence we gain perspective. (A classic example that certainly applies to me, as I’ve spent different periods of my life in different geographic areas: one person’s idea of “hot” or “cold” is usually based on the typical temperature range of the region they’re used to and may change dramatically after spending time in a different climate. Altogether their “Overton temperature range” — sorry for butchering the original term named after Overton — will widen, and so will their standards for what qualifies as “hot” and as “cold”.)
I think there’s an application here to Theory of Drama as well: we often don’t realize that other people’s personal ranges are quite different, and are surprised when criticisms of being too X are rebuffed by “No, doing this other thing A would be too X; I’m clearly not as X-ish as someone who does A, so how dare you call me X?” (Here A is something we might view as so beyond the pale that we aren’t even considering it in this context, while our adversary’s experience compels them to see it differently.)
On the note of things we kind of assume are metaphors:
As a teen I never got the “overwhelmed with emotion” or “the rage took over” thing. Nobody in my family ever just shut down from feelings, I never became incapacitated with too much emotion.
Until I started dating someone who could actually be incapacitated by emotions alone I’d always sort of assumed it was just an intensity modifier, like saying something is “blindingly colourful” even though it doesn’t literally blind anyone.
I suspect it’s somewhat genetic, because I can think of no reasonably close blood relations who I’ve ever seen or heard of falling into an emotional heap. I’m quite certain I experience a reasonably normal deck of human emotions, but they’re not the only thing carrying me along day to day. I feel angry at people who piss me off. I feel sad and cry at funerals regardless of whether I really want to, etc. But if something needs to be done, I’ve never felt like emotions are some kind of roadblock that overwhelms all faculties, nor something that just takes the keys and takes over.
And I think it’s one of those issues where a fair fraction of the population falls on one side or the other. It seems to run in families, where it can be noticeable that quite a few members of a family appear to be operating with no moderator between their emotions and the outside world. Nothing in their head has the keys except their feelings.
I can relate to you on this. I don’t think of myself as particularly cold or emotionless– I feel sadness, anger, happiness, etc. in ways that feel very real and present. But none of those emotions have ever had the same outward impact on me as they seem to have on others. I’ve never felt incapacitated by emotion or seen anyone in my family be so. And for much of my life I assumed that when others were, they were in some way acting– no one close to me has ever really reacted that way. But as I got older and saw people react that way under stress, it became clear there’s something different than an act at the root.
Radical faith?
Concept-shaped holes can perhaps be impossible to notice… but I expect that concept-shaped roadblocks can also be impossible to notice. Or at least very difficult to recognise in most cases.
If you have a concept-shaped roadblock you may become aware of it but be unwilling to consider anything that undermines it. Confirmation bias, social peer pressure, letting go of perceived benefits, cherry picking, and so on.
I’d argue that the pursuit of rationality and the pursuit of spirituality can both be holes and roadblocks at the same time, but not both to a single person at the same time. Perhaps a ‘balanced’ person could form a ‘balanced’ view about rationality and spirituality, but then ‘balance’ may become a hole or roadblock too. And so it goes…
I mean, isn’t this the basis of tribes? This seems obvious.
At risk of falling into the trap, this seems like a conceptual error. Mental Health Industries focus on people Failing At Life in the same way that restaurants focus on hungry people and plumbers focus on people who have plumbing problems. This is why every entry in the DSM includes the necessary condition “…and it’s maladaptive.”
I found this really, really fascinating. You went into a black-and-white room, and came back out again. But instead of the experience of color, you lost and regained the experience of emotion.
I am also curious about Scott’s experience because I think I am in a similar situation, despite not taking any kind of anti-depressant or SSRI, currently or ever. By analogy: if the normal human emotional response goes from 1 to 10, mine goes from 1 to 2. I can try to tease out very small gradations (e.g. this restaurant is a 1.1 but that restaurant is a 1.17, so I suppose I like it more) but the level of my conviction in that opinion is commensurate with the absolute size of the difference (I don’t really care).
The strange thing is that I can recall experiencing powerful emotions, but I switched over to this new model around the time I completed university. What’s up with that?
@scott at this point in the chain of reasoning you can taboo “consumerism” and figure it out
you are older now and have a more solid epistemic foundation than you did as a teenager
waiting for the Scott post on how left-identitarianism is a product of capitalist ideology, which it absolutely is
(and right-identitarianism is Waluigi)
“Right identarians are evil self-pitying losers” is great clickbait but probably not good SSC insight porn.
Pretty sure he means Waluigi not in the sense of a literal mustache-twisting villain, but in this sense
…which is basically just Corey Robin’s (accurate, imo) take
Well, that’s a hell of a deconstruction of Waluigi, but I don’t understand how that describes right identarians. Unless I’m misunderstanding what a right identarian is. In my head I was picturing the European identarians who want to preserve their traditional cultures.
I could just as easily apply that characterization to left-identarians who see everything through the lens of oppression. I’m thinking of Ta-Nehisi Coates who could be described uncharitably as defining his blackness only in relation to white supremacy.
Well, for one thing, it’s not necessarily bad to be the Waluigi figure here. If the thing which you’re oppositionally mirroring is wrong, then you’re doing the right thing. I support the leftist agenda, so I think it’s bad, but I also think the goodness of leftism is a far more open question than the relations these different movements have to one another–and if leftism is a bad idea, then building a movement defined oppositionally to leftism might not be a bad idea.
Re: your first graf–Sure, they say that, but it sort of begs the question of which tradition. I’m not super-familiar with the specific European flavors of traditionalism, but over here traditionalists spend a lot more time lauding Greek art and Cathedrals than they do in actually carrying out any traditions they or their immediate ancestors were actually raised in. It’s a very particular, curated type of tradition that I think is extremely influenced by taking a negative image of Howard Zinn’s worldview.
And, (read the following with the disclaimer of me not necessarily believing it but finding it an interesting way of looking at things), your second paragraph correctly describes the applicability of the metaphor–but that’s why right-identitarian is Walugui and not Wario. Not about going back to the original, or reacting to something original, but rather revising the revisionism.
I guess I should have tried phonetics, but looking at the word I figured I wouldn’t get the reference without a better grounding in either Japanese literature or Mayan rulers.
Would you be talking then about people who marched in Charlottesville with Nazi flags and Confederate flags? I mean, they’re not actually Nazis. They’re not German, they’re not members of the NSDAP in 1930s Germany. They’re waving the flag of a thing that doesn’t exist anymore. The people with the Confederate flags are not Confederates. There is no Confederate States of America. They’re fake; phoney.
Marx said history repeats first as tragedy, then as farce. Is Wario the real NSDAP, the tragedy, and Waluigi is the fake tiki torch “nazis,” the farce?
And if so, doesn’t the left/right thing break down when you contrast Stalin (tragedy) with antifa (farce)?
I actually don’t think it applies to most of the Confederate flag wavers, at least in the south. The Confederacy may have died long ago, but the Confederate flag isn’t really about the Confederacy and at any rate there’s a real good chance that the tradition they’re espousing did come from their fathers and grandfathers. They don’t care about good memes.
But the Nazis guys? Yeah, total Waluigis. They’re reacting against leftist Warios who are reacting against capitalist Mario.
So what then are the people who want to go “punch nazis?” There are people who are legitimately threatened/enraged by the farcical “nazis.”
IMO that one’s really fun. Do you just say they’re hating on the Pepes, who are the most Waluigi of all the Waluigis, rendering them some kind of yet-unnamed post-Waluigis? Or do you focus on the antecedents in anti-racist punk gangs like FSU and ARA and the intergenerational and idiosyncratic nature of the punk scene (exactly who are you reacting against by combining violent anti-racism, arrogant teetotalling, and mosh pits, after all?) and say, no, this is an original organic culture, a Mario? Or do you privilege the modern self-conception of fighting against white supremacy, a bedrock and organic American value, and call them Warios? The possibilities afforded by odd and flexible Nintendo metaphors are endless.
I think it’s some kind of post-Waluigi. You can’t compare them to the anti-nazi punk scene, because the anti-nazis didn’t come into the punk scene; they were kicking nazis out of the punk scene. If they hadn’t done anything, the nazis would have had the punk scene. Correct?
On the other hand, the NPI people were having their meetings for ten years with no history of then pouring into the streets to beat up minorities. If the people who went out to confront the “nazis” had stayed home…nothing would have happened to anyone. The nazis would have marched and then gone home and that’s that.
And they didn’t show up waving American flags. If you look at the picture from the car attack, the counter-protestors were waving other Waluigi-esque flags, like the red and black anarcho-syndicalist flag. That guy’s probably not really part of an anarcho-syndicate. His name is probably Kevin, and he works at Panera Bread when not blogging about how any day now the workers will take direct action to smash all hierarchical power structures.
So you have people who are so clueless as to adopt 70 year old dead and despised foreign cultures that are not their own, confronted by people so clueless about politics and the world to swamp the streets against the other clueless people. I might as well get a cardboard sword and a horsey made out of a broom handle and call myself a Knight of the Round Table.
Left-identitarianism was engineered during the Cold War as an attractive alternative to pro-Soviet Communism, like abstract expressionism. Right-identitarianism developed by the process of white people noticing left-identitarianism and deciding to do that too.
This is a good post on intellectual humility, but as pointed out, there is the possibility that some maps people construct have very little to do with the territory they claim to describe. There is absolutely nothing wrong with New Atheists challenging a worldview that doesn’t hold up to scrutiny, for instance; and pointing out that a lot of people do not take the Bible literally is a weird way to frame that particular discussion, seeing as the points made by New Atheists are directed at the fraction of people who do. Now, failing to respect other people, keep proper manners, and avoid devolving into binary good-evil thinking are obvious issues, but they are unrelated to that matter.
An actual issue I have with a lot of modern philosophy is its obscurantism. As pointed out by minds wiser than mine, Orwell found clear thinking and clear writing to be of utmost importance. A lot of post-modernist literature is famous for being difficult to read and digest; Foucault and Butler, for example, not to mention some earlier philosophers.
I’m no expert on philosophy, but if I may ask: why the heck would someone want to write text that’s hard to understand? What are their motivations for that? Why exactly are the points made by the aforementioned writers of the sort that cannot be formulated in a clear, concise manner; and if they can, why aren’t they? I believe this alone is enough to alienate plenty of people from such literature, which I find to be a huge loss with few benefits.
So if I am to remain humble – as I will – I will also expect that people posing the ideas I can’t yet understand will do their best to say what they are saying as clearly as they ever can.
I have long suspected that philosophers are those with reasonable intellectual faculties but not the rigour to be scientists, the talent to thrive in the arts or whatever extra chromosome it takes to do serious math. Unfortunately (or perhaps fortunately) with philosophy they’ve come across a discipline whose rabbit holes are either four inches deep or infinitely long, and have to go on pretending they’ve come up with something important and/or useful until physics or biology demonstrate that the problem was empirical all along.
Well, I suppose it’s possible, but I wouldn’t dismiss an entire field without being very well acquainted with it. If you see no point for dat fence…
That is to say, there’s a remarkable amount of remarkable philosophy out there, and from time to time it so happens that people more familiar with science than philosophy stumble upon philosophical concepts and start discussing them almost as if all the work before them hadn’t been done already. That’s what happened on this blog just recently.
Now, that isn’t really a problem in my opinion. I have a blog myself where I have mentioned the importance of doing philosophy and preserving the right to philosophize no matter what one’s level of education or intelligence. After all, it is the process, not the results, which matters when doing philosophy. But I do think it gets troublesome if we a) claim to be among the first to stumble upon the ideas under discussion or b) claim that philosophers don’t have enough of this or that, so they just produce nonsense. Not claiming that you said that, but it’s not far from it either.
My original question remains: why be obscure unless the thought process itself is muddled? If it is, doesn’t one have the responsibility to clear it up a bit before publishing? Now, that may very well be due to deceiving or pretentious intentions, but I’m willing to err on the side of caution here. But that does trouble me.
EDIT: I ended up reading about deconstruction on Wikipedia and found something interesting – a computer scientist’s text on deconstruction.
Here’s a passage:
“Contrast this situation with that of academia. Professors of Literature or History or Cultural Studies in their professional life find themselves communicating principally with other professors of Literature or History or Cultural Studies. They also, of course, communicate with students, but students don’t really count. Graduate students are studying to be professors themselves and so are already part of the in-crowd. Undergraduate students rarely get a chance to close the feedback loop, especially at the so called “better schools” (I once spoke with a Harvard professor who told me that it is quite easy to get a Harvard undergraduate degree without ever once encountering a tenured member of the faculty inside a classroom; I don’t know if this is actually true but it’s a delightful piece of slander regardless). They publish in peer reviewed journals, which are not only edited by their peers but published for and mainly read by their peers (if they are read at all). Decisions about their career advancement, tenure, promotion, and so on are made by committees of their fellows. They are supervised by deans and other academic officials who themselves used to be professors of Literature or History or Cultural Studies. They rarely have any reason to talk to anybody but themselves — occasionally a Professor of Literature will collaborate with a Professor of History, but in academic circles this sort of interdisciplinary work is still considered sufficiently daring and risqué as to be newsworthy.
What you have is rather like birds on the Galapagos islands — an isolated population with unique selective pressures resulting in evolutionary divergence from the mainland population. There’s no reason you should be able to understand what these academics are saying because, for several generations, comprehensibility to outsiders has not been one of the selective criteria to which they’ve been subjected. What’s more, it’s not particularly important that they even be terribly comprehensible to each other, since the quality of academic work, particularly in the humanities, is judged primarily on the basis of politics and cleverness. In fact, one of the beliefs that seems to be characteristic of the postmodernist mind set is the idea that politics and cleverness are the basis for all judgments about quality or truth, regardless of the subject matter or who is making the judgment. A work need not be right, clear, original, or connected to anything outside the group. Indeed, it looks to me like the vast bulk of literary criticism that is published has other works of literary criticism as its principal subject, with the occasional reference to the odd work of actual literature tossed in for flavoring from time to time.”
https://www.info.ucl.ac.be/~pvr/decon.html
That might have something to do with what I’m concerned about here.
Too well acquainted with, unfortunately. But to be fair to that entire field, it does sometimes serve the practical purpose of allowing one to feel a certain companionship of thought . . .
Regards obscurantism in philosophy, I’m not sure that’s the precise problem. Parse Derrida and you’ll find him full of pop psychology, none of it especially obscure; no more so than Nietzsche, anyway. What philosophy has appeared to want to be, since about Schopenhauer, is significant. And there’s a fine art to getting significance just so, as anyone who’s tried their hand at poetry (and, likely, failed) will recognise.
The real fun is that a lot want to be significant without actually, like, signifying anything.
Chomsky made the point that the problem with so-called ‘theory’ is not that it’s wrong but that there’s nothing in it that couldn’t be explained to a bright twelve-year-old. (In fact I’m not sure he used the word ‘bright’.) Scott’s post, I’m assuming, is contra that — or at least open to the possibility of there being something more than the apparent sum of the parts.
I’m sympathetic to the possibility of missing the point with regard to certain ways of seeing, here and elsewhere. Equally, I’m sympathetic to the possibility that members of the critical studies faculty are taking part in a costly signalling ritual, where the cost is appearing silly to everyone outside the faculty. But actually, I think I’m just kind of impressed they’ve managed to make a go of it for so long.
> Chomsky made the point that the problem with so-called ‘theory’ is not that it’s wrong but that there’s nothing in it that couldn’t be explained to a bright twelve-year-old. (In fact I’m not sure he used the word ‘bright’.) Scott’s post, I’m assuming, is contra that — or at least open to the possibility of there being something more than the apparent sum of the parts.
I’m confused; what theory are you referring to? Do you mind elaborating this paragraph a bit?
EDIT: found this bit, assuming you’re referring to this. https://genius.com/Noam-chomsky-chomsky-zizek-debate-annotated
Interesting you should mention selection pressure, because it interacts with Chesterton’s fence oddly. If you keep applying Chesterton’s fence, you’ll be left with only the ideas that are capable of surviving longest under Chesterton’s-fence scrutiny. And even if only a few ideas have no good reason behind them, having no good reason is a trait that leads to survival: it makes the ideas immune to being torn down, since you can’t understand them, so you’re not permitted to tear them down.
tldr: If you reject ideas only when you can reject the reason behind them you’ll end up preferentially believing ideas that don’t have reasons.
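The selection effect in that tldr can be sketched as a toy simulation (entirely my own illustration, with made-up probabilities, not anything from the thread): ideas whose reasons can be inspected and found bad get torn down, while ideas with no inspectable reason pass the filter untouched, so the surviving population skews toward reasonless ideas.

```python
import random

random.seed(0)

# Toy model (illustrative only): each idea either has an inspectable reason
# or not, and among ideas with reasons, only some reasons hold up.
def make_idea():
    has_reason = random.random() < 0.8       # most ideas start with a reason
    reason_is_good = random.random() < 0.5   # half of those reasons hold up
    return {"has_reason": has_reason, "good": has_reason and reason_is_good}

ideas = [make_idea() for _ in range(10_000)]

# The "Chesterton's fence" filter: tear down an idea only when its reason
# is visible AND bad. Reasonless ideas can't be evaluated, so they survive.
survivors = [i for i in ideas if not (i["has_reason"] and not i["good"])]

frac_before = sum(not i["has_reason"] for i in ideas) / len(ideas)
frac_after = sum(not i["has_reason"] for i in survivors) / len(survivors)

print(f"reasonless before filtering: {frac_before:.0%}")
print(f"reasonless after filtering:  {frac_after:.0%}")
```

With these (arbitrary) numbers, the share of reasonless ideas rises after filtering, which is the point of the comment: the filter only ever removes ideas whose reasons are legible, so illegibility itself is selected for.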
On Chesterton’s fence: I mentioned it assuming that Spurious might be criticizing doing philosophy without having immersed himself in it enough to claim something of the sort he did. A bold assumption, indeed, concerning which I stand corrected.
On selection pressure, to Jiro: I thought the text I quoted made some interesting remarks, but didn’t mean to imply it explained all of obscure writing. I don’t think Chesterton’s fence should be applied like you describe, for reasons you describe; I guess it should be more of an exercise in humility and lateral thinking. But that is an entertaining point of view.
Non-STEM academics in Canada regularly have to explain themselves to interdisciplinary grant selection committees, so that people in the humanities and social sciences actually live on a rather big island together.
Ah, Chip Morningstar. I remember when he first published that. I wonder whatever happened to him, he would fit right in here, I think.
While this is usually a good rule of thumb, I feel no need to seriously consider the merits of crystal healing before I dismiss it. Using mainstream appeal isn’t really a good metric for this either: people who’ve decided not to vaccinate their children for whatever reason are a sizeable minority (perhaps comparable in size to Postmodernists?) and yet without having read any of the literature for or against I’m confident that they have no real basis to their platform.
It’s not weird at all if e.g. you don’t interpret the Bible the way fundamentalists do and New Atheists still insist that what they’re saying applies to you and to all other Christians. Which has indeed happened to me.
Some ideas are just hard. Or through no fault of the author the ideas are harder to express now, or cultural assumptions or words have changed in a way that obscures the point. Or the author values something like getting their ideas down or working through their ideas or getting the shape of the thing more than being clear and concise, possibly because they haven’t grasped it well enough yet themselves to formulate it clearly and concisely. I have little sympathy for deliberate obscurantism either, but not all difficult writing is deliberate obscurantism.
Unfortunately, if you’ve never encountered the kind of realization that Scott speaks of in Non-Expert Explanations, there’s not much I can say to you here. For obvious reasons, I can’t prove to you in the space of a blog comment that some particular idea is really hard to express.
> It’s not weird at all if e.g. you don’t interpret the Bible the way fundamentalists do and New Atheists still insist that what they’re saying applies to you and to all other Christians.
True, which is why I pointed out that there are obvious issues with New Atheism, including binary good / evil thinking. I haven’t personally witnessed anybody trying to prove to a non-creationist Christian that they’re in fact creationists, but I’m sorry to hear that apparently does happen.
As for obscurantism, I acknowledge that what you’re saying may very well encapsulate all of the obscure writing present in 19th to 20th century philosophy and late social sciences. But I am skeptical that it does. I didn’t imply that all obscure writing is so deliberately, but it seems that obscurantism also has characteristics of fashion of a certain time and of certain academic circles. Do you disagree?
I understand that some things are hard to express. The problem with pointing that out is that it’s impossible for someone who doesn’t understand an idea to know whether they should just put more effort into understanding it or whether the idea is somehow confused. We can’t expect others to entertain our obscure nonsense forever just because they have to account for the possibility that we’re talking about difficult things. Of course it’s a matter of probability for the listener: the more she tries, the more the pendulum swings one way or the other.
What Scott talks about here is the importance of not bashing something just because we don’t get it. What I talk about is the importance of not abusing that charitable intention by means of obscure writing, whatever purpose it might serve.
Having ‘your beliefs’ explained to you by atheists is extremely common if you’re a Christian with atheist acquaintances who
A) haven’t lived in the church or faith since before they were anywhere near to intellectual maturity
B) were only ever loosely affiliated with the church (their parents went on Easter and Christmas, and replaced Because I Said So in the parenting manual with Because God Said So)
C) grew up in a church that called itself Christian but was in communion with nobody, did not have an orthodox confession, was heretical on an extremely basic level, etc.
D) didn’t grow up in the church or faith at all, but closely associated with people in categories A, B, and C
Of course there are plenty of functional and/or avowed atheists who genuinely grew up in the church, are intellectually conversant with its concepts, and know what Christian belief and life is like on both a theological and emotional level. Likewise there are many who never had contact with the church and faith at all, and approach it from a neutral outside perspective. Conversations with those atheists on Christian beliefs are usually very stimulating and productive! But they are very few in our culturally-Christianized society compared to those who fall easily into the above categories.
I say this by way of explanation and apology for the above categorized atheists: many of them believe quite firmly that they *know* Christianity and what Christians believe. But, like the anosmic person who says the food smells delicious, they are missing a vital portion of the experience that would allow them to understand it, while often still being very confident due to their cultural exposure that they understand your beliefs as much as you do or more, leading to conversations where you are told that, as a Christian, even if you don’t believe crazy balderdash or hold simplistic idiot beliefs, all of your compatriots do, and therefore *that* is Christianity.
I can agree to that.
Sure. I don’t think we really disagree about anything substantive here, then; we’re just coming at this from different angles. I don’t have a one-size-fits-all solution for distinguishing obscure from obscurantist writing, but there are a number of heuristics that can help beyond just the repeated application of Scott’s “maybe there’s a there there” approach. (My phrasing, not Scott’s.) Like, how seriously is it taken by others? Is it the sort that requires a lot of background? What is/are the stated aim(s) of the work itself (“A Beginner’s Guide to x” vs. a collection of notes or technical essays)? As Scott implies in his post with the After Virtue example, these may indicate it’s worth a closer look.
I haven’t personally witnessed anybody trying to prove to a non-creationist Christian that they’re in fact creationists, but I’m sorry to hear that apparently does happen.
Not always atheists; I’ve had Protestants of various denominations (often leaning towards the Reformed side, but that’s what I get for getting in a fight online with Calvinists) earnestly explaining to me as a Catholic that no, [their version of a Catholic doctrine] IS really what I believe, or the Church states, and if I contradict that then sorry, I’m wrong and not really a Catholic.
Though you do get some of the young zealous atheist types who try the “gotcha!” trope that they’ve picked up from atheist websites on Catholics and then are confuzzled when “attack that works against Biblical literalist Protestant non-denominational group” fails. That’s always fun 🙂 (My favourite of that is when they try mockery about the Eucharist and ritual blood-drinking; I like to retort that excuse me, I indulge in weekly ritual cannibalism as well as vampirism, if you don’t mind!)
My favorite was a fundamentalist who told us (in a Catholic blog) what the Calvinist and Catholic doctrines were. I corrected him about the Catholic one.
Somewhat later, he told us what the Calvinist and Catholic doctrines were — with the Calvinist being my account of the Catholic doctrine, down to ripping off the metaphor I used.
Some things are just hard, but I expect that if it’s so hard to figure out what we mean when we use words like, “know” or “should” that we need semi-obscurantist papers, something has gone very wrong.
Fair enough, I suppose. I didn’t want to commit myself in my response to any particular obscure things being nonsense; I just wanted to defend the notion that something can seem or be obscure without being obscurantist. (Given the -ist, “deliberate obscurantism” was probably redundant on my part.)
“Why the heck would someone want to write text that’s hard to understand? What are their motivations for that? Why exactly are the points made by the aforementioned writers of the sort that cannot be formulated in a clear, concise manner; and if they can, why aren’t they?”
Note: I agree with most of what you said in your other replies in this thread. Lots of academic disciplines are internally focused on significance, prestige, etc., and people are only talking to each other, without nearly enough input from the rest of the world (empirical data, such as it is in each respective field).
However, I can think of one good reason: lack of adequate vocabulary, on the part of the writer *or* the audience. My favorite example is Isaac Newton. He invented calculus, and was able to figure out principles of physics no one else had been able to express. Look at any modern textbook, and Newton’s laws are summarized in a couple of equations, then explained at length. But if you actually read the original Principia Mathematica (translated for me, I don’t know Latin), there’s no calculus in it. Newton turned it all back into geometry, because he knew his audience didn’t know calculus (Awareness of inferential distance? Possibly, or just not wanting a book that much longer), and in the language of geometry it all looks really complex and obscure.
I think there are (or at least have been) brilliant postmodern philosophers (for example) who saw the outlines of something interesting and important, and tried to point at it as best they could using the best words that they had available that would make sense to others. Some of those others got it, some misunderstood, but since there was no empirical result to appeal to to tell those two camps apart, there was no way to attain or maintain high average quality of understanding in the community.
One defence of obscurantism goes like this. The language and metaphors we use affect how we see the world. If you want to transform [people/society/the world/whatever] you need to produce radical redescriptions. Anything less than a radical redescription is mere incrementalism rather than transformation, and so holds less potential. Radical redescription is a process of saying things in new and confusing ways, puzzling through them until one day your brain says “click” and all of a sudden the world is a different place. There are a bunch of assumptions here that people who think pomo is an insult would likely disagree with, but this is one of the reasons a theorist will give you for not worrying too much about, even being satisfied with, the inscrutability of their text.
If that didn’t make any sense to you, you can find a redescription of a similar line of thought in the first part of this blog post: http://www.critical-theory.com/defense-obscurantism/
The oddity with this argument is that a very clear and simple discussion can also produce this effect. People didn’t struggle with On the Origin of Species when it first came out. Instead we have stories like Huxley saying “what a fool I was to not think of this myself.”
Or there’s a great book by J.E. Gordon, Structures: Or Why Things Don’t Fall Down, which explains some very heavy materials science, much of it counter-intuitive, but the book is humorous and very easy to read.
There’s a thing by which people, e.g. authors, explain commonplace ideas in subtle ways to slow the reader’s brain down. E.g. “It is a truth, universally acknowledged, that a single man in possession of a good fortune must be in want of a wife” is Jane Austen expressing the ordinary idea of people wanting to marry money in a new way, giving her readers the mental reward of having solved a small puzzle. But that’s the opposite of the radical redescription approach.
So, if scientists can transform how people see the world with clear writing this casts doubt on the necessity of obscure writing in other areas (though obviously it doesn’t definitely disprove it.)
I agree with all of that including your conclusion. Obscure writing is not necessary to change, perhaps even to “transformative” change (whatever that is), but it might sometimes help.
Butler is actually a great example because although her writing is famously difficult (and, if I might add, just plain bad), she has had a *huge* influence not just within the academy but also, and I suspect people don’t realize this, in non-academic queer communities. I have faggy friends across all walks of life who have tried to read her or just gotten her second-hand, and despite the writing they find something liberating in the text. Butler’s redescriptions of gender and kinship (eg gender as performance or her theorization of chosen families) are now common parlance in my urban queer community. I do not hesitate to say that some of these people, armed with these redescriptions, are more free than they would have been.
I received some very good replies here. Having had a good night’s sleep, I’m starting to think that perhaps I simply object to intentionally obscure writing no matter what the purpose; although obscure writing might serve as transformative, I’m not sure the obscurity is essential for that, as pointed out. I find it troublesome that the reader must battle their way through dark woods just to find out whether there’s substance in the text, if the writer could simply cut through those woods or at least give them a flashlight. Also, as pointed out, often it might be that they do just that, but that the concepts are too novel or convoluted to allow for an easy ride.
As someone here hinted, obscure writing can hold the potential to be seen as more profound than it actually is. That possibility alone casts doubt on intentional obscurity’s intellectual honesty, in my opinion. However, it’s not an easy subject to discuss and my knowledge of it has met its limits – not gonna make any strong claims, unfortunately.
“seeing as the points made by New Atheists are directed to the fraction of people who do”. If I understand you correctly, that strikes me as a very odd way of characterizing the points made by New Atheists. Their main point was that gods do not exist and therefore religious beliefs are purely inventions. This applies to all religious people–fundamentalist or not.
More broadly, the criticism of the New Atheist movement looks like a typical case of historical amnesia. It’s easy to chide them for their stridency now, after the battle has been won and to forget how powerful religious groups were politically just a short time ago. In late 1970s England, for example, people like Mary Whitehouse were having publications shut down and movies banned for blasphemy.
Yes, it seems I have misunderstood, then. I was referring to the discussion between militant atheists and fundamentally religious people, whose worldviews and conceptions of facts are fundamentally different; discussions I have witnessed myself aplenty. Scott has published previously on the perils of pointing out that gods probably don’t exist and that the world is probably older than 6000 years to people who already believe it. Thing is, I slightly disagree with the way he sees New Atheists operating, which was the point of what I was saying. I have not witnessed atheists harassing secular religious people, although it seems my scope has been limited, as pointed out above.
So as far as I had understood, it was that New Atheists pointed their discussion of intellectual dishonesty and such towards people who actually attack science insofar as it is at odds with the Bible or other religious texts. I know personally people who do this, so it struck me as odd seeing Scott characterize New Atheism as a movement devoted to preaching basic science to people who already agree with them. I find it less odd if I incorporate what people have said above about the subject.
Not sure about modern philosophy in general, but what about cryptic writing to protect yourself? I believe Scott’s tabooed a few concepts just to protect himself from losing his job or being targeted in flamewars, just as atheists in prior eras wrote obliquely to prevent being burned at the stake. I remember reading a book on cryptic writing, and wondering how much modern writers are omitting to keep themselves from running afoul of someone’s taboos, and leaving it to the audience to put the pieces together.
I suspect this is common–having everyone yelling at you is a drag even if they can’t get you fired or wreck your career or anything, so lots of people write or speak obliquely when they’re close to taboo topics. You can see this in almost any intelligent public discussion of education, crime, or foreign policy.
Problem #1 is that this necessarily leaves a lot of opportunity for readers/listeners to miss the implications and walk away thinking the speaker was saying something rather different than what he really meant.
Problem #2 is that when this is common, we also get people trying to back-imply what was meant by the speaker’s ambiguous or oblique statements, and then accusations of taboo-violation against people who just speak unclearly or have some kind of verbal slip. This can lead to a kind of arms race of ever-more-subtle dogwhistles and ever-more-broad accusations of dogwhistling.
Problem #3 is that some of the mental effort that should ideally have gone into thinking through whatever the speaker was talking about instead went into how to phrase it carefully. If there was some genuinely hard problem (improving US education, working out what our foreign policy in the middle east ought to be), we’d probably be better off if that extra brainpower had been spent working out how to solve some problem *other* than “how do I keep the witch-hunters from burning me at the stake this week?”
Maybe this post explains a dilemma I had.
One of my breaking points in graduate school for counseling that made me decide to quit the program was the attitude to transgender people. I’m not trying to troll, this is just an example. A therapist must accept anyone who comes in the door, and I agree with that. A transgender person is a person. Treat them with empathy and humanity, they aren’t evil or wrong or broken. I understand that. It cannot be any other way.
I suspect that in another time it would have been some other idea that would have derailed me. Right now trans is big in psychology. It could have been something else just as easily. Keep that in mind while reading this.
I have a memory. I was a nontraditional student and I’m a lot older. And there just didn’t seem to be any transgender people when I was younger. I get that they must have existed, but been hidden. I understand closet LGBT people in the past existed and had to hide just to survive, in a literal sense. I remember violence toward gay people being acceptable in normal society. But… I never met anyone who was trans. I met people who were gay, either I could tell at the time or in retrospect. But not transgender. I understand the “Different Worlds” thesis and I agree, but… no one? I understand that life was dangerous, in a very literal sense, for trans people in the past. I get it. But.. I mean, I was training to be a therapist. I empathize with people, I get people. And I never met one in all those years?
I can read, I know descriptions exist from the past. Not many, though. And there were cultures in that past that had trans people. That’s scary, not reassuring.
Only once the concept of transgender became commonly accepted in my culture did I meet people who were trans.
Worse, from my point of view, they were people who previously would have been something else. Either gay, or autistic, or just generally weird misfits (I am one, so I know the type). I think that it’s certainly possible, and likely, that some amount of people will be transgender because of genetic predisposition or getting the wrong hormone in utero. But… there are too many trans people for that to make sense. I even had professors decry the teenager who suddenly decides that they are trans and want hormones and surgery. An ethical therapist would never endorse that. Making a decision with lifelong consequences on the basis of what, for that teenager, is a fad, is terrible practice. But… how can you tell who is following a fad and who was just repressed?
Bisexual people exist, and I’ve met lesbians who chose to be lesbian even though they could have chosen to be straight. If it’s a conscious choice for some people, is it unconscious for others? Are some people choosing to be trans? If so, does it matter?
When people come out about their disorders, they seem to have had a sense that there was something different about themselves, but often not a good explanation of what that was… until later… when they read or heard about it. This disturbs me greatly. We know we retroactively change our memories to fit the present. We do it all the time (and I do it, too). How many people latch onto a mental disorder in the present and justify it with their past experience? How many people look for “the answer” to explain their unhappiness?
If you talk to trans people, this all sounds like bullshit. They all have life stories that explain why they are trans. I know I must be wrong and blinkered and perhaps prejudiced by age. Or by being a cisgender male. Or just being stupid and/or ignorant. The reason I am posting this is so that I can be called stupid and ignorant, which would make me feel better. If someone said this about me or my culture, I’d be profoundly insulted to the point of rage. Of course I make my own decisions, so doesn’t everyone else? Maybe I don’t. And they don’t, either.
I am cursed with a good memory. You know how people dig up stories from the past on the internet to show how someone is a big fat hypocrite for saying something the opposite of what they said a few years ago? That’s how I feel every day. I remember what I read and retain it for years. I remember conversations from decades ago. I just have a good memory. When I test it with the internet, I’m usually right. And I notice that the rest of the world does not have a good memory. I think I am unusually tied to the beliefs that I formed when I was young, and naturally stubborn. Maybe this explains my problem.
I think people are far, far more suggestible than even psychologists and postmodern lit-crit types believe. I think that mental disorders are contagious and transmitted by culture. I think “labeling” people really does change some people into the label. For some reason the internet has caused more homogeneity within subcultures than existed before. I think that the real disorders are the ones where people have personalities that are overly suggestible, often because of low self-esteem caused by abuse or simply being low status. People have a need to fit in, and if they can’t have a place in the culture at large they can fit into a subculture by following the rules of that subculture. The smaller the subculture, the more alike the members become.
Now, before saying I’m an asshole for saying trans doesn’t exist… I think it does. In the sense that if it wasn’t trans, it would be something else. So it doesn’t really matter. Like Scott says, if treating it works and makes people happier, do it. My only qualm is that unlike some other issues, trans often means surgery and hormones. If I am right that society suggests disorders and changes its mind about which ones are acceptable at a given time, then pushing people into permanent changes is something we should be very careful about doing. If the autism diagnosis disappears it won’t be a big deal. If trans is found to have no biological basis… or just becomes uncool… think of all the people who were influenced into making permanent physical changes that they have to live with. Not their enablers and cheerleaders, but the clients that bought the idea, must live with the consequences. I don’t think it’s OK to encourage other people to do something so drastic in order to signal my own virtue. In ten years it will just be something else. I will have moved on, if I’m still alive. The client has to live with it.
Eating disorders are caused by culture, right? All those magazines and billboards of skinny women. Gender dysphoria looks like an eating disorder to me. Unhappy with your body? Willing to go way too far to fix it? Suicidally depressed otherwise?
You don’t meet the societal norms for your gender? It’s making you unhappy? Here’s a label…
Is gender dysphoria different than body dysmorphia? How? If someone would be happier if we removed an offending leg, should we do it? What if that becomes the next fad? Are we really going to endorse removing body parts? If not, why not? What’s the difference?
OK, so what, people can make their own choices. Except they don’t, really. The most vulnerable people are the most likely to make decisions based on what other people think. People like… therapists.
And that scared the shit out of me. It suddenly seemed completely unethical to push a diagnosis of gender dysphoria, ever. There are plenty of stories of unethical or misguided therapists making people crazy. Is that what is happening?
So I ran. And quit school. I don’t want to make people crazy. Nor do I want to be an enforcer of cultural norms. It’s a tough job, I respect therapists, but it’s not for me.
This is a blog where I can write something like this and have a chance at being understood. Have at ye.
For what it’s worth, I’m pretty blue tribe and I agree with you. I don’t doubt there are actual cases of gender dysphoria, as I remember the concept of the “sex change” surgery being pretty well ensconced in the 80’s, and I fully support people’s right to dress and express sexual norms however they feel. What we see right now seems much more than that, much too faddish and too strongly amplified. I worry that a young person could be railroaded into an irreversible choice simply by being curious or insecure.
If it’s any consolation: you are apparently not alone in your suspicion regarding trans people.
I was thinking about asking something along the lines of “where the hell are all these trans people coming from all of a sudden?” on an open thread at some point. And I’m not sure which of the two explanations I had considered would be more scary. The first would be an Alex-Jones-The-government-is-turning-the-frogs-gay type of explanation that some external biochemical influence is seriously fucking with the hormones of a lot of people. The second would be that some societal influence is seriously fucking with the self-perception of a lot of people, like you are suggesting.
ETA: The third explanation, following melboiko, would be that there have always been people who felt that way, and they’re now free to express that. Which would be the most benevolent interpretation, and I’m sure there are cases like that, but does that really explain the extent of the phenomenon?
I’m trans and I don’t discount the possibility that environmental pollution is doing something. But the cross-cultural and pan-historical existence of gender-crossing people, plus all the evidence for genetics, suggests that this is at best a partial explanation. If you actually listen to the experiences of trans people, there’s a much simpler explanation: people are more likely to act on their intrinsic gender dissonance when they have social support for it (i.e. I’m more likely to come out/not suicide the less likely I am to be beaten on the streets). Much the same thing happened with the LGB liberation, which caused much the same “where did all these gays come from?” reaction.
There is strong cross-cultural and pan-historical evidence for gender-crossing people, including with gender dissonance and bodily modification (Cybele priestesses, Sumerian temple prostitutes, fa’a fafine, muxe, voluntary eunuchs of several flavors, dual-spirits, onnagata, male-identified people through history who turn out to be secretly female-bodied, hijra…)
There is growing evidence for a genetic and epigenetic etiology for transgenderism, including but not limited to: strong evidence from twin studies, genes related to hormonal receptors, measurable effects from neonatal hormonal exposure, similar effects on nonhuman animals, a large and uncanny correlation with certain genetic conditions including autism and left-handedness, differences in brain anatomy, differences in reaction to pheromones and so on. This is why this is the current position of the international Endocrine Society.
The cross-cultural variation is exactly what we would expect from such a biological impulse: the same desire is given shape in different ways in different societies. If your society has no concept of “transgender” except as some sort of perverted pariah prostitute, and you’re terrified of it, you’ll repress your feelings and/or kill yourself. If your society accepts gender-crossing people as dual-spirit shamans, you’ll become a shaman. If your society calls people with gender dissonance “transgender” and uses that as a precondition to get access to hormones, you’ll use the label. You’ll use whichever label They want, if that’s what it takes to live as your intrinsic gender.
I’m guessing you just need to read more on the topic, and your doubts will be solved.
> If I am right that society suggests disorders and changes its mind about which ones are acceptable at a given time, then pushing people into permanent changes is something we should be very careful about doing.
No one pushes us to do anything. On the contrary, we have to fight gatekeepers in the System tooth and nail to get access to said surgery and hormones, invest enormous quantities of hard-earned money into hostile and skeptical health insurance systems, become infertile, sacrifice our libido sometimes permanently (in the case of MtF people at least), and, worst of all, become social pariahs; frequent targets for violence, excluded from jobs etc. We congregate on Internet forums to pass information on how to get around therapists so that we may transition, and we engage in political activism to get transgenderism out of the DSM; before the Internet, this information was passed by word of mouth on the streets, all the little rituals you have to submit to in order for Them to, reluctantly, belatedly, give you the goddamn hormones. In my country, our life expectancy is 35 years. I don’t have enablers or cheerleaders; if you know of any, please send them to me. It would be a nice break from the car-full of neonazis who stalked me on the street, all the people shouting death threats, the man who shook and pushed me in front of my kids, the one who punched me, the kids in school who kicked me in the nuts, that one crew who threw a bottle at my head, etc.
That we still want the hormones and the surgery after all that should give you an inkling of how strong the intrinsic sex dissonance is, and of the intensity of the relief that comes with transitioning. Another hint might be had from the fact that non-transitioning suicide rates reach up to 50%, and study after study shows that they’re significantly lessened by a) transitioning, and b) being socially accepted, post-transition, by one’s family and community.
> Only once the concept of transgender became commonly accepted in my culture did I meet people who were trans.
I did not come out as trans before moving from Brazil to Germany. I just adapted to life in my assigned gender. This has to do with taking bottles to my head, being stalked by neonazis, frequent news items of public lynchings etc.
You’ve met tons of transgender people before they were accepted in your culture. You just didn’t know it.
> I don’t have enablers or cheerleaders; if you know of any, please send them to me.
My best guess is that your experience is conditioned by where you live. Because, in major North American cities, there are plenty of enablers and cheerleaders.
> If the autism diagnosis disappears it won’t be a big deal.
I think it would be, because I’m currently working (in admin support) in an early intervention service and some of the very young children are on the autism spectrum and it’s a real thing. This is not America, where kids get diagnoses and medication very easily, so it’s not parents jumping on the trendy bandwagon of “my kid has….” What I do think was a political decision rather than medically based was the abolition of the distinction between Aspergers and autism and folding everyone into “on the autism spectrum”. There are high-functioning people with some quirks and there are people with severe damage who will need constant supervision and probably some form of institutional care as they get older because otherwise they will seriously injure themselves, and pushing this all into one basket was more about “I’m autistic, high-functioning, and don’t like the label of being a retard, so get the definition changed!” (And I’m saying this as someone whose paternal family, going back at least a couple of generations, probably are all over the spectrum, but this was before the days of diagnosis, when you were just classed as weird, crazy, stupid or all three.)
I do think the American psychological/psychiatric organisations are vulnerable to this kind of PR pressure where something that was an illness becomes a disorder becomes “not a problem at all, nobody ever said it was except bigots in the bad old days”.
So if there is a push to do away with autism diagnosis altogether (on the grounds, say, that this is not a medical problem, it’s simply another way of functioning) then I think that will be harmful to people who aren’t “what this means is I’m good at maths, focus intensely on things that interest me, and fit in as a computer programmer for a good-paying job while I have easily-controllable sensory impairments”.
This is true, I was casting about for something to make the point and I should have not said anything at all.
Sorry if the question’s too blunt, but wasn’t it possible for you to remain in the program, become a therapist, and just not push people towards a diagnosis of gender dysphoria, if you genuinely feel it unwarranted?
Like the first reply above, I broadly agree with what you wrote. I wrote the following 3 years ago on Lesswrong: “There’s no solid evidence for a gender bit in the brain. While many or most transgender people feel something, explaining that feeling as “I’m an X brain trapped in a non-X body” is essentially a memetic phenomenon. Additionally, genderqueer and non-binary persons are typically participants in a memetic fad.”
The culture war has intensified a lot since then, but I don’t think I’ve seen much evidence that pointed away from this summary. The way this issue intersects with kids is really really scary but there isn’t much I can do about this. When “positive” articles about this don’t even mention the studies about the majority of kids reverting to natal identity (recent example), that’s in a way even scarier than the “negative” articles with warnings and forebodings.
I wrote a longer answer but my browser ate it, so here is just the conclusion:
The amount of underweight people in western societies is 30%. If magazines and billboards had something to do with that I would rather expect those numbers to be the other way around.
The rest of my text was basically a lot of arguments for why this thing our host wrote also applies in this case.
According to my (now possibly dated) undergrad education, culture-bound syndromes (which definitely include Western eating disorders; outside of susto they’re basically the most canonical example) don’t really work in a direct stimulus-response way. It’s more about people experiencing very real stress for very real reasons, unconsciously searching for a way to display that stress (presumably pursuant to deep-seated instinctual processes that made sure that individual problems did not go unaddressed by the band in the primordial environment), and finding a means to do so that happens to be particular to the culture. The prevalence is therefore determined by stress levels, and the culture only really determines the form (except insofar as the culture causes stress, but that’s a different issue).
(p.s.: mass shootings are a culture-bound syndrome – indeed, one that was initially observed and described hundreds of years ago as a SE Asian culture-bound syndrome, in the form of “running amok”)
Sorry, I think I am doing something wrong:
Should say something else, but somehow I can’t even write the correct thing in a reply.
Is it possible that I hit some weird moderation scripts?
underweight->overweight?
I used these terms, yes.
My point was that the one usually associated with eating disorders is a lot less common in the western world. This is the opposite of what I would expect if advertisement had a big role in eating disorders.
But as I said, about half the sentence I quoted was deleted, so the meaning of the sentence is distorted. I only saw that after the edit deadline was over.
I’ll offer an explanation which I suspect answers at least part of what you asked… fashion. I suspect there always were people with ‘genuine’ transgender feelings, but until there was a social niche for them to occupy they occupied no niche in your observations (duh). However once the social niche was available a lot of younger people (or their therapists) found it attractive to associate the cause of their struggling to find their place in society with a fashionable niche, recently opened for exploration – whether the ‘cause’ was true or not.
I offer also the ‘fashion’ of suicide clusters amongst young people in particular town or occupations. Perhaps the ease with which ordinary people get swept up in newly fashionable political movements? The rise and fall of hemlines. The fashion against eating fat. The developing fashion against eating sugar. How smoking cigarettes has fallen from fashion in many parts of the world.
If people can attempt suicide as a fashion I’m sure they could opt for gender reassignment as a fashion statement.
I think people are far, far more suggestible than even psychologists and postmodern lit-crit types believe.
Some people are really suggestible. That’s incredibly important to know. But some other people are really ornery and resistant to suggestion.
I wonder if people who are highly influenced by the social environment are more willing to believe that society can be engineered. After all they seem to take everything to heart.
The sense I get from your comment is that you were afraid that as a therapist you’d be helping nudge people to transition when you feel that transgenderism is maybe a passing fashion and that therefore you’d be using your skills to cause harm (in the form of people’s regrets?). Am I getting that right?
This raises a bunch of interesting questions about what the role of a therapist is, how to judge when you’re abusing your authority, how to use power ethically, and about how that’s always complicated and imperfect.
But my main thought is wondering how deciding to transition is different from other big life-altering decisions people make based on whatever story they’re telling themselves at the time.
I’ve worked with some people who are transitioning, but also these other situations: people who decide to get married and have children and then decide later that was a mistake; people who decide to get bariatric surgery when it’s unlikely to address the underlying problem in their particular situation and then they face a lifetime of new health problems; people who decide to get divorced (and the consequent harm done to kids) rather than deal with problems inside their own heads (I’m speaking of specific situations, not that divorce is wrong in general); people who change jobs and move their families to different places with all the dislocation and cost involved only to realize that didn’t solve their problems; or people who go a quarter million or more dollars into debt for an education that doesn’t deliver them the career they want but now they’re shackled to paying off the debt.
In other words, we all tell stories that lead us to make big decisions in our lives, and we all do our best to make good decisions with the information we have available at the time. Some of those decisions are irrevocable or nearly so, or simply have huge consequences that are hard to anticipate ahead of time. Sometimes later, we come to tell different stories that, looking back, make our previous big decisions seem unnecessary or dumb or deluded or just not right for us in retrospect.
It seems to me that transgender people get to make these kinds of big decisions too and that our ideas about whether they are deluded or caught up in some culturally-and-temporally specific fashion are largely irrelevant. Just because we might have a different story to tell about what’s going on, doesn’t make our story any more right or real.
When I encounter these various situations with clients, I realize I don’t know what they need. I consider it my job to help clients clarify for themselves what they need. I don’t have a privileged viewpoint that says what they should or shouldn’t do, what they will or won’t regret or wish they’d done differently. I can listen to their stories and try to ask good questions and reflect back to them what seems to be guiding them to test whether those values or stories or factors are what they want guiding them. I can offer up data, alternate stories, research, other people’s experiences that I’ve seen. But I don’t see it as my job to make some ultimate judgment about their story or their choice.
If in ten or fifty years not as many people want to transition, that doesn’t make it a mistake in my mind that people want to now. And if one person transitions and decides a year later that was wrong for them, that doesn’t make their choice wrong in some grand scale. It’s just a mistake that they learn from like so many other mistakes we make in life. They aren’t even really mistakes in the sense that we made the best choice we knew how to make at the time and then we learned from it; this is more or less how life unfolds.
Also, some people tolerate more risk and are willing to make more big irrevocable changes that they may later regret than other people; I don’t consider it my job to turn one kind of person into another that way. I’m a fairly risk averse person, so I’m mindful that I can’t use my risk tolerance level as a measuring stick for other people.
This is a good response. Thank you.
The following (of course imperfect) heuristic may help:
If skepticism of the theory is predicted by the theory, you may do well to be skeptical about the theory.
As in: Resistance in psychoanalytic theory is used to explain resistance to psychoanalytic theory; I don’t believe in Chomsky’s manufactured consent because my consent has already been manufactured; and the widely held conclusion that post-structuralism is mostly silly is a sure indication that it’s anything but.
No lecturer will explain to the slow learner that his failure to master perturbation theory is a consequence of perturbation theory; though it might seem like it is at the time.
With atomisation the heuristic breaks down; though I have never heard the term used apolitically (as in, a nasty consequence of civilised living we’d all do well to stop enjoying so much).
Right, it’s not the case that you have difficulty understanding atomization because you’re so atomized — if anything like that is true it’s the exact opposite. I never had any trouble understanding the concept of atomization, because I’ve never had even the tiny scraps of community Scott had.
Basically, anyone using a kafkatrap ought to be ignored (at the very least) until they stop doing that.
Sorry to say it, I don’t have huge insight and long useful dialogue to add, but I just wanted to say that I appreciate this post.
I kind of operate at this level all the time, assuming I am missing some key component of understanding. It’s easy for me to look back at all the “nonsense” past-self dismissed and realize that I still likely have concepts missing, and to try to just not be a jerk when encountering foreign ideas from smart and well-meaning people.
I feel like this post may do more to explain postmodernism than the postmodernism post did.
If what is contained in this post is a good explanation of post-modernism, then the old criticism that what is true in it is not original, and what is original in it is not true, seems to apply.
I’ve been in this state ever since I heard about “The map is not the territory”. It seems to be quite a big deal, but I find it so incredibly obvious and trivial that I’m asking myself what I’m missing.
It’s still useful to explain and coin terms for things that are incredibly obvious and trivial, just to make sure that 1) you aren’t wrong about how obvious and trivial they are 2) you have the conceptual tools not to forget them. If everyone knows something but no one knows they know it, it’s easy to forget to take it into account.
It’s obvious in theory, but very few people actually keep it in mind in practice. Bringing it to conscious attention encourages people to pay attention to when they see people ‘eating the menu’ (and, hopefully, eventually identify when they themselves are ‘eating the menu’).
Isn’t it a statement of faith rather than an argument?
It is neither. It is a statement of a truth.
My perception of reality is not reality.
The statement only has meaning if we assume that reality has some relation to perception, but I think that has to be an assumption.
Do you think “I think” is a statement of faith, or a conclusion formed by a reasonable appraisal of all the evidence available to me, or are those the same thing? It seems fine to me to call it a truth if it emerges from a convincing web of evidence.
That said “the map is not the territory” is not a statement of faith, truth, or an assumption–it’s a methodology!
Hmmmm… right, so saying “I think” is different from thinking, but also a part of thinking.
“I think” is the map, thinking is the territory.
A word is not the same as the thing it represents, but it is the same sort of thing.
Is “I think” a statement of faith or a reasonable conclusion formed from evidence… it might be tautological, when you get down to it. What is “I” except a load of thoughts?
There are thoughts. There are things. Is that a statement of faith… I suppose that whatever assumptions underlie the statement would also be implicit in any statement. “There are things”/“I think” is implied to be true by us saying anything.
I would say that since it’s necessary for any statement, it must be an assumption. Must be a matter of faith.
Or maybe not…. I suppose it’s not a matter of faith from our own perspectives.
But yeah, so map is not the territory – a word is not the thing it represents – is that a statement of faith. I suppose not.
Much of the time we are only dealing with the words. When I look up where Italy is, it actually bears no relation to the territory whatsoever – I’m only interested in how the figures on the map relate to each other.
So, I guess “the map is not the territory” is just telling me not to sit at home reading atlases.
To me it was a big deal, so I will try to explain why:
– That made me understand that we only “see” the map, and not the territory. That there would be no fundamental difference between living inside a simulation or not, because in some way not living in a simulation is impossible: we always “live” in our own map. (which is quite different than not being able to falsify the hypothesis of living in a simulation)
– That helped me to understand that some concepts are about the map itself, and not the territory:
Like the concepts of possibility or probability. And so an intrinsic possibility would be nonsense.
And that helped me with a lot of other little conceptual problems.
That helped me because I was confusing, in various ways, the map and the territory on some subjects.
Mostly because the map seemed “so real” that I didn’t understand, or only vaguely understood, that it was a map at all.
But if you don’t make this kind of mistake, this reminder should seem pretty dull.
Also maybe I am completely wrong about how I understand what people mean by it.
It would have been good to define schizoid, it sounds like an awful mental illness featured mainly in Jethro Tull songs.
I’m not so sure atomized society is a bad thing. I’ve been on the edge of societies that function as communities living together and it sucks for them. You’re not you, you’re just a part of your family. You’re expected to slave away with zero thanks and then have your money, which is never “yours”, get taken away by endlessly needy relatives. These ungrateful people don’t feel gratitude in the slightest because it is their good fortune to have a relative with a well-paying job. The whole thing is ruled over by the grandparents, who were ruthlessly exploited in their youth in this way and are damn sure not going to waste their opportunity now that they’re of age.
I’ve also seen Americans marry into such communities and then be appalled that they’re now expected to contribute. “A walking ATM” is what they feel like. Hey, you should have known that before you signed up for it. They’ll do awful things like move thousands of miles away just so they can live an atomized existence instead of live within the bonds of a genuine community. On the rare occasion that there is an American who fits in, he’s derided as “been there too long” or “gone native” or other epithets and regarded as a total nutcase.
The whole “advertising and consumerism” thing is the same as the right-wing “we’re losing our morals” fretting and may be dismissed.
Maybe you’re thinking of King Crimson here, not Jethro Tull.
Do you think something in the middle would be possible or desirable? On the one hand from living in an atomized society it does feel like there’s a lot more isolation and social uncertainty that can cause serious issues in a large segment of the population. And beyond that an atomized society can lead to divergent social norms, values, and morality. We lose a sense that we’re a part of the larger society or we can come to see it as opposed to our own norms and values or our subgroups.
But you’ve highlighted exactly the reluctance I feel when actually considering what my life would be like if I did live communally. In my ideal imaginings we would all work to the best of our abilities for each other, and while the community would always be there for support, it wouldn’t be suffocating or overly hostile to those who have interests or behaviors that fall outside the norm. But that’s magical fantasy land; in reality it would operate a lot more like you describe, with the elders lording over everyone else while the most capable are drained of their accomplishments by the rest.
I think it depends on what kind of person you are. If you’re more ‘normal’, it’s probably great, because you’re able to pal around with a lot of other people like you. If you’re not, it’s probably hell because no one will understand you and you’ll be expected to do and not do all these things that run contrary to your preferences.
I for one am thankful for the atomization that I experience, and really want more. I’m just too weird to be able to deal with other people expecting me to be more like them.
A great thing about erring on the side of assuming things are meaningful is that, given enough effort, you can extract real meaning even from nonsense. It doesn’t really matter if the meaning you extract is intended — so long as the insights are useful. (And, “enough effort” is actually not very much.)
Interpreting explicitly nonsensical statements as though they definitely contain truth is a great way to produce novel true statements as well: the juxtaposition of a random input and your internal mental state is very likely to be pretty unique, so when you filter total noise by removing every interpretation that isn’t both seemingly-true and interesting, your list of hallucinated signals will have little in common with someone else’s.
Assuming that you have nothing to learn from something seems like a really bad idea, because even if what you’re reading is Not Even Wrong, you’re still going to gain something. And, sorting by what’s most likely to provide useful information is full of danger, because if your sorting mechanism is too similar to other people’s then you risk going along the same path as everyone else, avoiding extremely important ideas from other domains.
Semi-related: I have depression and, before it was treated successfully, had that common feeling of colors being dulled. But I was totally shocked when Scott wrote here that they actually were dulled in many depressed people. It’s such a common metaphor that even I thought it was some kind of metaphor in my own head, and that when colors looked brighter again it just meant I was happier so I could appreciate colors or something. I’m still kind of puzzling over how I managed to mistake my own perception for a metaphor.
Man, this so perfectly describes my daily experience. I guess it’s a shoreline of ignorance type scenario, but every day I feel more and more like I know less and less.
A few general rules:
1) Burden of proof
As a general rule, if somebody is trying to push a certain concept/theory/system of belief, then the burden is on them to provide compelling evidence and arguments in its support. This does not mean that a theory is not valid unless everybody can grasp it at an intuitive level; advanced math, for instance, is beyond the reach of most people even if professional mathematicians explained it to them, but most people can grasp simple math and then move to more advanced topics until they notice that they’ve reached their limit, and extrapolate from there that mathematicians are probably not talking out of their asses.
2) Practical applications
Does anything of practical value come out of the theory? If it does, then it is good evidence that the theory is worth something, assuming that it was not created post-hoc. If a theory has no practical applications, then it does not necessarily mean that it is nonsense, but it is a red flag.
3) Vulnerability to Sokal hoaxes
Is it possible for a non-expert to produce parody that can fool the experts? If yes, then it is a big red flag. For instance, at a modern art museum…
4) Chesterton’s Wall
If it’s been there a long time and you don’t see any reason why, you’re probably the blind one.
Counterexample: Medicine before 1900 or so.
It seems like lots of cultures had some variant on physicians (healers, witch doctors, whatever). In western civilization, we had physicians with an intellectual pedigree going back to ancient Greece. And before some point in recent history (sometime between widespread acceptance of the germ theory of disease and the invention of antibiotics), a sick person was much wiser to avoid them.
They persisted because they met a psychological and social need, despite the fact that they did more harm than good, or at best did cosmetic stuff that had no effect.
I’ve seen this assertion often enough, but how strong is the evidence that it is true in the general case? All I’ve ever seen is presumably-cherrypicked lists of specific treatments that sound so horrifyingly bad that they can’t possibly be any good, except even some of those turn out to be useful (e.g. letting maggots eat at your infected wounds). And equally-cherrypicked lists where if the physician was a formally-uneducated woman or pagan their herbal lore was a pharmacopia of medical miracles, but is there an attempt at objective analysis somewhere?
John Schilling:
I don’t have a link to anything definitive. My understanding (this is far from my expertise) is that when modern doctors read medical texts from far in the past, they often see recognizable descriptions of diseases alongside completely hopelessly wrong treatments.
However, I think we can bound this a bit:
a. From old medical writings, we can know that the medical theories of the times[1] were completely wrong. For any illness where they used those theories, the best they could possibly have been doing on average was harmless cosmetic stuff. Maybe sometimes they got lucky and bled someone with hemochromatosis, but mostly they bled someone with a cold and had no effect.
b. Nobody knew about the germ theory of disease, so anyone treating sick people regularly was almost certainly spreading stuff around, even if all they did was go from house to house and examine their patients.
c. We know some of their treatments were pretty hard on the patients–bleeding and purging, for example.
Because their knowledge was so limited, it seems to me that (b) and (c) almost certainly must have overwhelmed the limited times when they knew enough/got lucky enough to do some good despite (a).
If there’s any data on this anywhere, I expect it would be some military organization somewhere where they had doctors some places and not others.
[1] I’m thinking in terms of Western traditional medicine, but I’d be shocked if anybody’s witch doctors were actually a net benefit for most stuff.
> they did more harm than good, or at best did cosmetic stuff that had no effect
I wouldn’t say that; there are some remedies that do work (think of willow bark and digitalis) and physical treatment like setting bones and surgery, no matter how crude, were functional. We wouldn’t declare that farriers and the like were absolutely useless until the advent of veterinary medicine and the same for doctors. Agreed, probably the more “I’m prescribing this on a pragmatic basis because it works and we know this through experience of years of giving it to patients, even if we don’t know how it works” was more useful than theories about the humours, but doctors and surgeons did have some useful knowledge and practice.
Midwives probably were less lethal to women giving birth than the medicalised practice that took over where doctors didn’t wash their hands in between examining patients, but let’s not throw the baby out with the (lack of) bathwater, either! To quote from Conan Doyle’s collection of medical stories, “Round the Red Lamp”, about a very old-school doctor:
I’m not saying the medical profession in 1800 was completely 100% harmful, or even that they never made anyone better. But summing up over all illnesses that people had in those days, I believe a patient who avoided doctors had a better prognosis than one who did. The set of problems for which they had a useful remedy was small, the set of problems for which they had an equally-plausible useless or harmful remedy, applied with equal confidence, was large. And the doctors didn’t know about germs, so they didn’t know they needed to wash their hands and clothes and sterilize their instruments between patients, which means they were probably spreading disease like crazy if they were busy.
I’m not sure whether the same thing was true of injuries. Certainly setting a broken bone is something doctors in 1800 could do, and that’s useful. But surgery before the germ theory of disease and anesthesia had really horrible survival rates, even though sometimes surgeons could cure things. (In fact, I think amputation of an infected limb was often your only chance of survival.)
I’d love to see some actual data on this, but I don’t know where anyone would have collected it.
How is this a counter-example to the parable of Chesterton’s fence? It’s not the case that “you don’t see any reason why” Western Medicine before 1900 existed; rather you give a specific theory (“me[eting] a psychological and social need”). You’ve identified why the wall was there, determined that the reason no longer applies if it was ever a good one, and advocate getting rid of it on that basis.
Incidentally I have seen the parable of Chesterton’s fence misused in conservative arguments in a similar way before. They take the parable to be about needing to find a “good” reason for an existing or previous state of affairs. Requiring that a progressive find a good reason for something they want to change before you will be convinced to let them change it is fallacious and perverse. Chesterton’s point is fully addressed by finding a bad reason for an existing state of affairs, and in fact that is precisely when states of affairs should be changed.
Because you can claim that pretty much any idea serves a psychological or social need, so either Chesterton’s Fence can be trivially sidestepped, or it isn’t supposed to include that.
I was ambiguous. The reason you find needs to be convincing of course. It needs to convincingly explain why the fence is there. In that sense you need a “good” reason for the fence. You do not need a good reason in the sense I meant–a reason that looks like a good reason today. (So your disagreement is with albatross11’s theory about why old medicine existed, not with my claim about Chesterton’s fence.) Example: why was sodomy criminalized? An answer: sodomy was seen as immoral or a sin. This could be the reason sodomy was criminalized even though (to many people today) it is a bad reason. If I want to decriminalize sodomy, you can’t go telling me that I need to find the true good reason first (something something social cohesion reproductive adaptivity).
That has the same problem: It makes it trivial to find a reason behind anything. If Chesterton’s Fence lets you use such things as reasons, Chesterton’s Fence is useless.
Also, that means that whether something “has a reason” depends on framing. For instance, instead of asking for the reason “why is X illegal”, you could ask for the reason “why people are arrested for X”. The reason for that is known (because it’s illegal) even if the reasons higher up in the chain aren’t (we don’t know why it’s illegal. Or we know it’s illegal because people call it a sin, but we don’t know why people call it a sin, etc.)
Thinking about it, it makes sense that Chesterton’s Fence is about making sure you know why the fence is there, not about finding a good reason for it to be there. Though a smart person trying not to do unnecessary harm will also steelman the best reason for the fence to be there right now, along with knowing why it was originally built, in case it was originally built for a dumb purpose but is now having some kind of positive effect.
@Jiro
I don’t understand your point. I will enumerate my confusions.
1. Why do you think it is trivial to find a convincing explanation of the motivations behind a historical practice? Could it be you are not applying a high enough standard of what it takes for something to be convincing? If it’s not clear, I do not mean to assert that I am convinced by any of the particular examples we are discussing.
2. “Chesterton’s fence does not do what people think it ought to” does not seem to be an acceptable premise in this argument–rather it was part of my claim. (I am glossing your second sentence.)
3. I agree that the notion of a reason depends on framing. This is true in any case (ie whatever kind of reason you’re looking for). Turns out people put the fence there to keep the cows in and if you remove it the cows will escape. O wait, the reason the fence is there is actually because there are cows there to keep in. O wait, the reason the fence is there is because the other field has been tilled for crops and now the cows have to graze over here… This also isn’t just a linear “chain” of potential reasons, but a web. Eg, the reason the fence is there is really because we want the cows to be all in one place for convenient husbandry… The fact is that any of these reasons can become a bad one over time and undercut the original justification for the fence. In the future cowtopia where cowkin and humankind live in happy harmony, all such fences shall burn.
Another complication is that there are different kinds of bad reasons. Eg, is criminalizing sodomy because it’s immoral wrong because sodomy is not immoral (rejecting the minor premise), or because we should not criminalize things that don’t hurt anyone simply because they are immoral (rejecting the major premise)? Again, I think it ends up not mattering: best to understand the major and minor premise and check both.
All this is beside the point albatross11 raises that we should still try to understand whether there is a good reason for the fence to continue existing, even if it is not the reason it was put there in the first place. But that’s not what Chesterton’s fence demonstrates. It’s about making sure you understand why the fence was put there in the first place as a check to make sure you aren’t missing an important justification for keeping it. This check is satisfied even if the reason the fence was put there no longer holds.
Because “convincing motivation” is defined very loosely. I could ask why people who do X are arrested, and then answer “they are arrested because X is illegal”. This is convincing–I am very sure that the illegality was a factor in the arrests. And judging from your examples, it counts as a motivation–there is no requirement that I also be able to explain “it’s illegal because people think it’s bad” and “people think it’s bad because of Y”.
You might object “I want the ultimate motivation, the one higher up in the chain, not the immediate motivation such as ‘because it’s illegal'”. But your own example precludes that objection–you think that “sodomy is seen as immoral” counts with no need to go higher up in the chain to ask why it was considered immoral.
Your question “why is it immoral” seeks the reason for a different fence. “It’s immoral” might be why the sodomy-is-criminalized fence is there. But by hypothesis we’ve come to the decision that sodomy is not immoral. That’s the whole story about the sodomy-is-criminalized fence. On the other hand if you want to ask why is the sodomy-is-immoral fence there, you need to find a different explanation. To give another example: sodomy is illegal is in fact a great (though partial) explanation of the people-are-arrested-for-sodomy fence, and if it turns out that sodomy is no longer illegal, one probably should stop arresting people for it.
The post I was responding to said (talking about new concepts/ideas that might be worthwhile-but-subtle or might be bullshit):
4) Chesterton’s Wall
If it’s been there a long time and you don’t see any reason why, you’re probably the blind one.
The field of medicine seems to me to be a good contradiction to this. The medical theories of Western traditional medicine in 1800 had been around for a good long time, and were utterly and unfixably wrong.
I think it’s useful to look at ideas/theories/claims and try to figure out what visible effects they should be having on the world. Like:
a. Are there experiments someone can do that demonstrate these effects?
(i) Are they actually being done, and then checked by others?
(ii) Can just anyone do them, or is it only a few highly-placed people?
b. Are there practical applications of the idea that people are actually using?
(i) Can you see that the practical applications are working out better than ones not following the idea?
c. Does the idea predict observable things about the world that you can check?
(i) Do they actually agree pretty well with the idea?
(ii) Does the idea predict stuff the idea’s originators didn’t start out knowing?
There are a lot of fields full of very smart people, where none of those three seem to apply. For those fields, being successful within the field involves mainly convincing other people of the rightness of your arguments/ideas, with no real way to test them against reality.
Philosophy is an obvious example of this kind of field. A lot of the social sciences fit, as well.
That doesn’t mean these things are useless or wrong, but it does mean it’s probably really hard to tell if they’re bullshit or not.
Even if it is essentially critical, ie making no positive claims?
Is criticism of practical value?
Yes.
It takes time and energy to figure out how to sort things into what to spend the time to pay attention to and what to ignore. There is too much to see, to read, to learn, to do. There is so much it would take more than a lifetime to merely read the list of all the options. We need ways of systematizing, improving, and recording “what is worth the time, and why, for who”.
That is what critics are for.
They. Had. One. Job.
And postmodernism and critical theory are doing their best to stop that job from getting done.
You have misinterpreted “critical”, which would be more understandable if TheAncientGreekAKA1Z hadn’t explicitly defined it: “making no positive claims”. I have problems with this definition but it clearly delineates critique from review (and the words criticism and critic are used for both). If you are looking for a quality-sorting professional you are welcome to read the film reviews in the New York Post. This is not “the one job” of critical theorists.
I think your question gets at exactly my problem with burden-of-proof language: it can be arbitrarily applied. Raising “burden-of-proof” is a contentless defensive manœuvre that is sometimes used to avoid appropriate introspection. If your conversation has more than one person in it, there is nothing in the burden of proof line of thought to tell you who should raise it. I do not think there is a line you can draw between “you have the burden of proof to show me your claim is justified” and “I have the burden of proof to show that my claim is justified”, and that doesn’t change if the claims involved happen to be “my present beliefs” and “your critique thereof”.
(Note that I think there are at least two reasonable uses of burden-of-proof: 1) as a pragmatic claim about what you need to do to change other people’s minds; and, 2) as short-hand for “I’ve already got a bunch of evidence that you are wrong that you are going to have to overcome” [which is not actually about one person having the burden of proof, but rather about the other person already having met it].)
The scenario we are considering is not some sort of idealized debate where there are two sides that start with equal priors.
We are considering the much more common case of you having your own beliefs and coming across some group of people who want to convince you of some theory which is novel or inconsistent with your beliefs, or who at least want you to grant them the social status (and possibly actual power) due to advocates of a valid theory even if you personally can’t understand it. Do these people have a valid point, or are they bullshitting you with nonsense? How do you tell?
Burden of proof is an excellent heuristic in this setting.
The trick is, how do you avoid turning it into a way to filter out uncomfortable evidence or ideas without giving them a fair hearing?
You give them a fair hearing, and if there is nothing that makes sense, then you dismiss them.
If new evidence comes out you can give it another try, but of course at each iteration your prior for it being bullshit increases, until you eventually stop paying attention.
I think you are describing what I called “2” above. You come to the conversation with evidence already. In this context telling your interlocutor “I think the burden of proof falls on you in this instance” is sloppy short-hand for “I have a bunch of reasons to think your position, on its face, is wrong”. But if you conclude that they are “bullshitting you with nonsense” it should be because you had a bunch of reasons to think it is nonsense already. Whereas if you don’t have good evidence for a belief, someone asserts the opposite, and you say “well you have the burden of proof here because I happen to believe such and such, though I’ve never really thought about it”–that is a problematic deployment of the concept.
I would say that the proponents of such criticism have the burden of convincing you that their criticism is valid. By this I don’t mean they have to prove a negative (e.g. prove that God does not exist), but they have to convincingly point out the flaws in the arguments for the positive claims.
I guess it depends on the thing being criticized.
The only thing you need to be schizoid is to dislike contact with other egos, and to shave off the experience of those other egos ruthlessly before they can reach the fantasy world you retreat to.
My response to that is “And? You say this like it’s a bad thing!” (I also like the diagnosis at a distance: ‘I’ve never met you but based on my own conceit, Imma say you’re schizoid’).
I don’t like people or interacting with them more than I have to, so if that makes me schizoid then yippee and hurray, I’m a happy schizoid.
Either understanding “consumerism” was so easy for me that I got it immediately and effortlessly, and I live a charmed life that has prevented me from ever encountering that problem. Or I have only a superficial facsimile of understanding it, and when I actually understand it, it’ll seem profound and important, the same way “atomization” did.
Or you do not possess the vice of envy. Advertising and consumerism are predicated on envy, on stirring up emulation not in a good way but in a “see the shiny things? these popular/successful people have the shiny things, don’t you want to be popular/successful? then get the shiny things!” way. That’s based on assuming that everyone envies the status of others and judges their own status by the same measure, so to “keep up with the Joneses” you need to parade the obvious signs of status as presented by the advertising industry.
That doesn’t work for some people. They are perfectly happy for others to have things that make them happy, but they don’t want those things themselves, and trying to spur them on to buy things by “but don’t you want to be like Model Example here?” just gets the answer “Not particularly; if Model Example likes and enjoys gold jewellery, expensive whiskey, and tropical holidays good luck to them but I don’t want or need those myself”.
That’s a very… Christian… way to put things.
Sometimes, advertisement is based not on envy, but on what a Christian would uncharitably call “greed” or perhaps “lust”. For example, I’ve personally spent way more money than I should have on drones, because I wanted to experience the sensation of flight without endangering myself. Some of my purchases have been undoubtedly guided by advertising; nonetheless, I buy the drones so that I can actually use them, not just because I want to own them.
Wanting to use things that are not 100% essential for survival is absolutely a sin, I do admit.
I’ll admit I’m drawing that from the seven deadly sins, but we can think of it in a secular framework as well. People are familiar with the concept of jealousy, and often confuse jealousy and envy. And certainly advertising plays on the whole range of human desires, not simply envy alone. But mostly they present you with “Here is thing. You want thing. Why do you want thing? Because thing is desirable. Why is thing desirable?* Because – ” and this is where the hook gets the fish ” – thing will help you be like these people (cue the stock shots of happy families, beaming houseproud mother, young people having fun on holiday, happy boozing buddies, etc)”. The assumption is you want to be like those happy successful smiling people, that you wish to have that for yourself, and that you will be motivated to imitate them by buying whatever is being advertised (getting pop stars and athletes to advertise fizzy drinks by invoking sympathetic magic where their aura rubs off on you if you drink/wear what they allegedly drink/wear).
I’m not saying it’s a sin to want fun things! But when you can be coaxed to buy Fizzy Drink A instead of Fizzy Drink B not on grounds of “I prefer the taste” but “this is what Cool Role Model advertises”, then we’re moving away from “I like this” to “I want to emulate them by consuming this”.
*A thing that is desirable in itself needs little to no advertising, as does a thing that is a necessity. So getting you to buy Brand X rather than Brand Y needs that little extra tickle.
That’s not necessarily true. For example, in the past I have bought Kindle books that I enjoyed very much, based on an ad (or because a friend recommended them). The books were desirable in themselves, but there was no way for me to somehow infer their existence from first principles. But because my money was limited, I relied on advertising — such as the blurb, the reviews, etc. — to determine if the books were worth risking my funds on.
I suppose this depends on the precise doctrine one adheres to. The Bible at least appears to explicitly declare certain thoughts to be sinful, even if one never acts on them. It’s pretty down on desiring things in general, as far as I can tell.
Do you remember the titles or authors of the books you read?
I would also like to know. Sounds like interesting stuff
Awareness of the range and differential presentation of human experiences, such as the anosmia example, could be a useful addition to primary and secondary curriculum.
The upsides could include light bulbs, or equivalent non-visuals, going off in classrooms around the country.
Group discovery and investigation may ultimately be more productive for societal mental health than expecting individuals to stumble upon, or miss, what could be material to their growth and participation in life.
Downside examples include teasing and awkward discussions of differences.
Some will pursue, others will avoid.
Lots of people said MacIntyre’s book had no clothes. Or rather, lots of people pointed out that it was clothed in finest silks (style, argumentation, etc.), but was actually a mannequin underneath. This is something philosophy as a field is particularly prone to, and people have called them on it dozens of times.
Indeed, philosophers tell all the rest of us that we’re just failing to understand Real Philosophy, but when asked what that is, they lapse into a kind of Alex Jones-esque, “I’m just asking questions!” rhetoric, under which there is no need to arrive at true positive conclusions in order for anything to be Real Philosophy.
By which standard, we could note, MacIntyre’s book was a wonderful success. All the processes followed, all the right moves played, and if the game has no outcome at all, well, the rules never said it had to, apparently. Behold profundity!
This is incorrect. Alex Jones will tell you exactly what’s going on. Just ask and he’ll give you the basic gestalt on the interdimensional psychic vampire pedophiles.
“They are turning the frogs gay!” – which is actually true, sorta.
EDIT:
Btw, isn’t the insane rant in the video essentially all the same things that Elon Musk says (simulation, singularity, merging with machines)? Ok, I can’t fit the part about psychic vampire pedophiles, but… 😀
Epistemic status: speculative
“Scott Alexander” uses a pseudonym because his real name is Data Soong. The years he had no emotions were his first six years on the Enterprise. After getting stranded in the early 21st century, he went to medical school to go into psychiatry because every other skill he had experience with required 24th century tools.
The obvious objection is that he repeatedly claims to admire EY for exposing him to AI risk, when he should know the exact year humans first encounter AI without any help from a 21st century crank.
Is Data unable to type out contractions, or is he only limited when it comes to verbal speech?
To be fair, though, Data would possess a complete historical database, circa 24th century. Thus, he would know the full history of the Daystrom Institute, where “AI Risk” is practically their middle name. Dr. Soong is an outlier, and Data is basically a miracle; every other attempt to create an AI in the Star Trek universe has been an unmitigated disaster. This includes Data’s own brother, Lore, BTW. Thus, it would be perfectly reasonable for Data to try and put the kibosh on the whole AI thing.
Dr. Soong is an outlier, and Data is basically a miracle; every other attempt to create an AI in the Star Trek universe has been an unmitigated disaster. This includes Data’s own brother, Lore, BTW.
Yeah, it was becoming a cliché in Kirk’s time that AIs inevitably crushed the flourishing of sentient beings. Besides having his starship host Daystrom’s first unfriendly AI, he met a man-made genocidal AI that had FOOMed in space (Nomad) and another planet’s failed attempt to build Robot Jesus (Landru in “Return of the Archons”, a title that screams “the Gnostics were right!”)
So between those incidents, everything else about the Daystrom Institute, holodeck programs doing a limited FOOM into human-level sociopaths, and Lore, Data would find it perfectly reasonable to put the kibosh on AI.
ISTR there was a Star Trek novel set right after the events of ST:TMP, about the society left in the aftermath of “For the World is Hollow and I Have Touched the Sky”.
Kirk had acquired a nickname among xenosociologists for this sort of thing: “godkiller”.
How many gods did Kirk kill, including V’ger? Lots…
There is quite a bit of good, clearly written secondary literature on postmodernism out there, both from proponents and detractors, so if you read up on it, it isn’t hard to become reasonably informed on the topic.
One fundamental problem here is that there are only so many hours in a day, and there are a lot of complicated subjects that look like they might be bullshit or they might actually yield some useful insights.
I can spend the next year of my life trying to get a basic mental handle on postmodernism, Austrian economics, or population genetics. Which one is more likely to yield insights that help me understand the world better and do useful things?
+1
There’s a lot of concepts out there. The vast majority of them are BS. The only reason that most of what we read is approximately true is because the best get signal-amplified. For every Einstein there’s about a thousand crackpots with a not inconsequential following. Discarding a concept is “cheap”: there are very few core concepts that are essential to leading a happy and productive life. If you don’t grok something, it’s probably better to just forget about it and move on. Breadth-first search. Even if you do eventually crack that nut, trying to jam in a concept that doesn’t fit in your mind comes at the opportunity cost of foregoing more intuitive concepts that you could learn at a faster rate.
In contrast, falsely accepting a bad concept is quite often expensive. Being actively misinformed is often much worse than being ignorant, particularly if your awareness of your own ignorance is well-calibrated. I’m well aware I don’t know how to count cards. I won’t get rich in Vegas, but at least I won’t go bankrupt following a dubious system based on faulty math.
I forget where I read this, but there’s a maxim saying most intellectuals tend to spend a disproportionate amount of time thinking about their weakest area. Newton’s obsessed with alchemy, Kant won’t stop going on about kooky theories of space and time, Milton Friedman spends way too much time on school choice. I think this tendency manifests because the human mind spends most of its effort trying to rationalize self-identified beliefs. Theories that are wrong typically take much more effort to rationalize than those that elegantly and naturally describe the world. Let in ten good core concepts and one bad concept, and you’ll quickly find yourself intellectually obsessed with the latter.
Having no opinions is low status.
I wouldn’t know about that.
To comment on the schizoid comment, my understanding is that diagnosing a personality disorder has to necessarily focus on the disorder part. How do you determine when a person’s mental processing is “wrong”, over the whole spectrum of human mindstates? There isn’t a reliable metric beyond looking at outcomes (such as not being able to hold down a job or marriage).
So when OP says that people sometimes discover they have a certain personality disorder, I think what they are trying to convey comes across. But it’s a misleading characterization – much better to say that they discover that they have a schizoid personality. Which when manifested in people at the extreme can become a disorder.
Saying that modern society is ‘atomized’ suggests that it’s somehow wrong or abnormal. If someone instead told high-school-Scott that more traditional cultures were moleculized (or whatever the opposite of atomized is), would he have had the same resistance to the idea?
My rule of thumb is that if I don’t see the point of something I read in a textbook heavy enough to stun a moose, with a stock photo or some kind of abstract geometric design on the cover, that was put out by an academic press in a print run of 1,000, it’s probably my fault. (It might not be a good point, though.)
On the other hand, if I don’t see the point of something that I read in a 300-page large-format paperback with some friendly, brightly-colored high-end design work on the cover, that my mother gave me for Christmas, there’s probably just nothing there.
> And I don’t care about brands, except ones that really signal high quality.
And how do you know what brand signals high quality?
Shared context is needed for effective communication! Language is merely a subset of this.
I’m extraordinarily late to the table on Virtue Ethics (I didn’t discover this blog until this year), and I’ve never actually tracked down a copy of MacIntyre (I probably should). But I’m going to chuck in my thoughts on Virtue Ethics in general, and if they’re rubbish they’ll just get lost in the hundreds of comments, no harm done.
IIUC, the distinction between VE and everything else (apologies for stating the obvious) is that it is not about what you do so much as who you are. That is, while a consequentialist concerns himself with what should be done in a given situation, virtue is more concerned with having the strength of character, cultivated through habitual practice, to make the right choice, whatever that is, in the first place. Because morality is bloody hard, and doing it works against a lot of our strongest instincts. A consequentialist doesn’t deny this, per se, but consequentialism doesn’t seem to regard it as a constant and strong factor.
There’s a baseline assumption that, if the trolley is bearing down on the five people, the man at the switch has all the emotional equipment he needs. He doesn’t notice that one of the five is a rival or enemy; he isn’t paralyzed by the gravity of the situation; he doesn’t think about sneaking away ASAP so as not to be blamed for whichever catastrophe he would otherwise have to choose. The virtue ethicist, if I understand him correctly, is more concerned with forming the sort of person who does not say: yes, in the abstract, there is a right thing to do, and he’d like to do it, but [list of excuses]. Because trolley situations are rare, while the desire to half-ass and weasel is more or less endemic. I fail to live up to my own moral standards pretty routinely, and I imagine so do most other people.
I can’t speak for MacIntyre, of course. I’ve only read people who like him a lot, most notably Rod Dreher.
This is probably a naive question, but it seems like your description of virtue ethics is orthogonal to the question of what is right or wrong.
Suppose Alice believes that the right thing to do is to follow divine law as revealed in Scripture. Bob believes that the right thing to do is determined by some kind of utilitarian calculation. Both of them would probably want to do what you’re describing as virtue ethics–cultivate habits of mind and behavior and personal virtues that would lead them to do the right thing. But which thing is the right one is still not nailed down! And that probably determines what habits of mind and action, and what virtues, they should cultivate.
> But which thing is the right one is still not nailed down!
You’re still framing the question in terms of making some objective “right choice”. Virtue ethics isn’t concerned with the decision, it’s concerned with the mindset of the actor. What’s ethical for person A may not actually be ethical for person B, even if the objective action is identical. If person A switches the trolley because his wife’s ex-boyfriend is on the other track, that would be unethical.
I’m fundamentally a utilitarian, but virtue ethics acts as a good heuristic check. Particularly when reasoning under noisy information and personal biases. A little bit of error introduced into the inputs of our utilitarian equation can produce substantially incorrect answers. Witness 20th century ideology. Virtue ethics (and deontology) is more robust to bias and error.
The objective function is ultimately utilitarian. But only given perfect information. If you’re making a decision that’s rationally justified but “feels wrong”, it’s often a sign that something’s faulty in your chain of reasoning. E.g. killing children is very likely wrong, even if the commissar has convinced you that it’s all some sort of trolley problem writ large because the kulaks and their families are hindering the next stage of the dialectic.
I think it’s reasonable to think of them as complementary approaches, rather than competitors. Or possibly as distinct but related fields, like anatomy and physiology. I’m more interested in virtue than utilitarian calculations, because in most difficult moral situations (yes, this is a gut-instinct, unfalsifiable guess, mea culpa), I believe the difficulty springs from the hardship imposed by making the correct moral choice, not any ambiguity in the circumstances.
For example, when I’m at work, and I have to choose between being responsible and doing my job right, or doing things sloppy so I can go home on time. The right course of action is obvious, but it also sucks, and it’s very tempting to craft a theory involving the utility of my getting home somewhat earlier, the net happiness enjoyed by my wife and kids, etc. I’m not saying consequentialism is useless; it’s only my experience that it’s not the most relevant consideration in typical situations.
So how do I decide which habits of mind and action are virtues and which are vices? Alice never turns down a chance for some hard work; Bob never turns down a chance to spend time with his friends; Carol never turns down a chance for some fun consequence-free sex. Which of them should I make my role model?
Which one dies of old age happy in their own bed, surrounded by their grandchildren’s children.
Which one dies of old age happy in their own bed, surrounded by ~~their grandchildren’s children~~ the skulls of their enemies, in mounds reaching to the ceiling. The Nobel Prize collection. And of course the just-devirginized future progenitors of two or three final sets of grandchildren’s children to come.
Look, I’m just saying I have particular standards for the life well lived, and there may not be room for my early rounds of grandchildren’s children around my deathbed. I’ll try to remember to record an inspirational video for them beforehand.
Hubris – not just the minimized modern conception of it but the ancient concept in its full flavor – is I think one of the most essential ideas for any human mind to grok.
Agreed–and welcome back, Freddie! You’ve been missed.
Seconded! Welcome back! Good to see you again. Just went to check your site and read your most recent article. I think maybe I’ll need to re-read it after I understand the written measures of vocabulary better but the result looks important to me especially if it holds up across further samples.
I think I can explain all of this behavior. Bear with me:
Let’s assume that human memory forms a scale-free network of elements, where each element can be roughly thought of as a “concept”. A small number of concepts will be very densely interconnected to other concepts, while a larger number will have fewer connections. Tickle enough of the concepts connected to a specific concept and it will be activated, which will typically draw attention to it. Direct attention to a concept and it will tickle its associated interconnections, potentially drawing attention to those associates.
Let’s call this direction of attention from concept to concept in the network “thinking”, and the creation of a new concept and its connection to the existing network “learning”.
Now we can think about what’s involved in learning a new concept, i.e., carving out a piece of neural real estate, hooking it up to the web of previously existing concepts, and potentially changing the connections between the existing concepts to accommodate the new one. The first thing to note is that the existing concepts with high connectivity are very, very hard to change, because they’re constantly going to be reinforced through stimuli that have nothing to do with the newbie. On the other hand, lower-connectivity concepts are easier to dislodge or morph to fit new stuff.
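The claim that high-connectivity concepts are constantly reinforced falls out of the scale-free structure itself. Here’s a toy sketch of that part of the model (all function names and parameters are my own illustration, not anything from the comment): grow a network by preferential attachment, then fire random stimuli and watch the hub nodes soak up most of the reinforcement.

```python
import random
from collections import defaultdict

def preferential_attachment(n, m=2, seed=0):
    """Grow a scale-free graph: each new node links to up to m existing
    nodes, chosen roughly in proportion to their current degree."""
    rng = random.Random(seed)
    edges = defaultdict(set)
    targets = list(range(m))  # the first new node links to the seed nodes
    repeated = []             # each node appears once per edge endpoint,
                              # so sampling it is degree-weighted
    for new in range(m, n):
        for t in set(targets):
            edges[new].add(t)
            edges[t].add(new)
            repeated += [new, t]
        targets = [rng.choice(repeated) for _ in range(m)]
    return dict(edges)

def reinforcement_counts(edges, steps=10_000, seed=1):
    """Fire random stimuli: each stimulus 'tickles' a random concept,
    and attention then spreads to one random neighbor, reinforcing it.
    Hubs get hit far more often -- which is why they are hard to dislodge."""
    rng = random.Random(seed)
    nodes = list(edges)
    hits = {node: 0 for node in edges}
    for _ in range(steps):
        node = rng.choice(nodes)
        hits[node] += 1
        neighbor = rng.choice(sorted(edges[node]))
        hits[neighbor] += 1
    return hits
```

On a 200-node run, the handful of early, high-degree nodes collect reinforcement at many times the rate of the two-edge periphery, which is the mechanism behind “the existing concepts with high connectivity are very, very hard to change.”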
This leads us to four potential learning cases, which describe whether a concept is easy or hard to learn, and whether it’s likely to be learned correctly or not:
1) The new concept can be correctly decomposed into a set of existing low-connectivity concepts. Such a concept will be perceived as hard to learn, because it requires that more attention be applied to the various existing low-connectivity concepts to reinforce their connections to the newbie. Once learned, though, the new concept is likely to be learned correctly.
2) The new concept mostly tickles low-connectivity stuff, but forces some of the existing stuff to get rearranged to accommodate it. This is a concept that’s hard to learn for the same reasons as #1, but it’s more likely to be learned wrong, because it conflicts with the existing network. On the other hand, because it gets jammed in with a bunch of low-connectivity stuff that causes dissonance, the existing stuff will slowly get rearranged to accommodate the new concept. Eventually, a new connectivity will emerge that allows the fact of the newbie to co-exist with the now-altered low-connectivity nodes. Whether that new arrangement represents a “correct” learning of the concept is kinda up for grabs, though.
3) The new concept correctly decomposes into a bunch of existing high-connectivity concepts. It’ll get learned quickly and accurately. It will likely generate that “of course, what’s the big deal?” feeling as it’s learned.
4) The new concept requires lots of existing high-connectivity concepts, but its existence doesn’t fit well with the existing relationship between them. However, because it’s almost impossible to change the high-connectivity concepts, it too will generate the “of course, what’s the big deal?” feeling, but that feeling will be incorrect. Odds are that this is a concept that will never be learned correctly, because it’s just too hard to re-wire the core stuff.
Let’s apply this to Scott’s examples above:
The concepts associated with genuine emotion and human connection are partially instinctive and get developed very early in childhood, so they’re likely to be involved in almost everything that gets learned subsequently. In short, they’re high-connectivity, and all subsequent social development will get decomposed into that early set of connected concepts. Some people will have a set of concepts that have decomposed into a more useful form than others, and will likely do better socially. Some people will have a decomposition that got warped somehow, and will likely be socially stunted or downright sociopaths. But in all cases, everybody will have no problem categorizing new emotional and social experiences based on their network of core concepts. It’s just that some of those categorizations will be more useful/functional than others. In other words, whether learning new stuff here falls into category #3 above or #4 is strictly a question of how functional your core network is.
The lack of visual imagination and anosmia cases are very similar, but are good examples of #4. Because both sets of behaviors are going to be built on high-connectivity stuff that either pre-exists or is learned early, the deficits just become part of the network, and concepts like “smell” and “the mind’s eye” just get hooked into the core set of concepts without ever being questioned. That they’re objectively incorrect is beside the point.
The social atomization issue represents the #2 category: Concepts like “society”, “neighborhood”, “gossip”, and “obligation” are sorta-kinda abstract, but they rely heavily on core emotional concepts. So you can learn about something like “social atomization” and have it not really resonate with your world view, but there’s enough wiggle-room in the network that it can ultimately change the relationship between those more abstract concepts. That process of re-arrangement may ultimately cause you to change how you think about your place in society, and lead you to take actions to learn new ways of interacting with it. But you’re rapidly going to run into hard limits on that process as the things you change impinge more and more directly with the high-connectivity concepts, which simply won’t budge.
The concepts that nobody gets on the first reading are an example of category #1: You don’t understand the concepts because they’re based on highly abstract concepts that you already know. Inserting the new stuff almost certainly requires exercising concepts that are hard to reach in the network, but are plastic enough that getting them to accommodate the new idea simply takes time, not a radical reorganization of one’s world-view.
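The model sketched above can be made concrete as a toy simulation. Everything here is my own operationalization, not something the comment specifies: I grow the network by preferential attachment (a standard way to get a scale-free degree distribution), and I assume a new concept's "learning effort" scales with the sum of 1/degree over the existing concepts it decomposes into, since low-connectivity parents get little incidental reinforcement.

```python
import random

random.seed(0)

def preferential_attachment(n, m=2):
    """Grow a scale-free network: each new node links to m distinct
    existing nodes, chosen with probability proportional to degree."""
    edges = [(0, 1), (1, 2), (0, 2)]        # seed with a small clique
    pool = [0, 0, 1, 1, 2, 2]               # degree-weighted sampling pool
    for new in range(3, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(pool))
        for t in chosen:
            edges.append((new, t))
            pool.extend([new, t])
    return edges

def degrees(edges):
    d = {}
    for a, b in edges:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d

net = preferential_attachment(500)
deg = degrees(net)

def learning_effort(parents, deg):
    """Assumed cost of attaching a new concept to `parents`:
    low-degree parents need more deliberate attention per link."""
    return sum(1.0 / deg[p] for p in parents)

hubs = sorted(deg, key=deg.get, reverse=True)[:5]   # cases 3/4: core concepts
fringe = sorted(deg, key=deg.get)[:5]               # cases 1/2: fringe concepts
assert learning_effort(fringe, deg) > learning_effort(hubs, deg)
```

Under these assumptions, a concept decomposing into fringe nodes is always costlier to wire in than one decomposing into hubs, matching the commenter's cases 1/2 vs. 3/4; the model says nothing by itself about whether the result is learned *correctly*, which is the dimension the commenter adds on top.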
“In the first, Francis Galton discovered that some people didn’t have visual imagination. They couldn’t see anything in their “mind’s eye”, they couldn’t generate internal images. None of these people knew there was anything “wrong” with them. They just assumed that everyone who talked about having an imagination was being metaphorical, just using a really florid poetic way of describing that they remembered what something looked like.”
I was told by a fellow who was a grad student under heavyweight psychologist Leon Kamin (co-author of “Not In Our Genes” with Lewontin and Rose) that he didn’t really, when you got down to it, believe other people had visual imagination. He felt they were just kidding or being metaphorical. (I’m not saying this was Kamin’s public view, just the impression he gave in private conversation.) Kamin himself had no visual imagination but had prodigious text and numeric processing skills, such as being able to multiply large numbers in his head.
Given that everything is some smallish bag of neurons firing, I can sympathize with this viewpoint, but it doesn’t seem quite right.
Let’s imagine an apple sitting on a table. When you direct attention at your generic “apple” concept, stuff that’s normally upstream of recognizing a real apple sitting on a table will get tickled by the generic concept being tickled. For example, one of the concepts it’ll tickle might be the “kinda round” concept.
Let’s now direct our attention to the “kinda round” concept. Just as it was tickled by the “apple” being tickled, it’ll tickle things like “light shining off a curved surface” and “curved line segments”. If we direct our attention to one of these things, things still further upstream from “apple” will be stimulated.
Different people are going to have different propensities–and abilities–to follow these concepts upstream. I suspect that following things further and further upstream from the original source requires more and more attention, so it’s not going to be a choice that a lot of brains make. But the ones that do are likely to have excellent visual imagination. The ones that don’t will have less visual imagination.
This is the kind of thing that will be deeply influenced by what your early experience is, and consequently what the highest-connectivity concepts in your network are.
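The "tickling upstream" account above can also be sketched in code. The concept names, the link structure, and the idea that each hop upstream costs more attention than the last are all my own illustrative assumptions layered on the commenter's description:

```python
# Toy spreading-activation walk: follow "upstream" feature links from
# a concept, with attention cost growing at each hop (an assumption).
upstream = {
    "apple": ["kinda round", "red", "stem"],
    "kinda round": ["curved line segments", "light on a curved surface"],
    "curved line segments": ["edge detection"],
}

def follow(concept, attention):
    """Return the features reached before the attention budget runs out.
    Each successive hop upstream costs one more unit than the last."""
    reached, frontier, cost = [], [concept], 1
    while frontier and attention >= cost:
        nxt = []
        for c in frontier:
            for f in upstream.get(c, []):
                reached.append(f)
                nxt.append(f)
        attention -= cost
        cost += 1
        frontier = nxt
    return reached

# A brain that budgets little attention stops at coarse features;
# a larger budget reaches fine-grained visual detail.
shallow = follow("apple", attention=1)
deep = follow("apple", attention=6)
```

On this sketch, `shallow` contains only the coarse associates of “apple”, while `deep` reaches all the way to “edge detection”: the same network yields rich or poor visual imagination depending purely on how far upstream attention is willing (or able) to travel, which is the commenter’s point.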
Both this post and the last one reminded me of martial arts. If you’ve ever practiced a real martial art long enough to get good at it, long enough to absolutely destroy an amateur at it, you know what it feels like to (1) hear about a concept and sort of feel like you get it and (2) much later after thousands of hours of drilling and practice understand it for real and realize you really didn’t get it all in stage (1). And you can look back at people stage 1 and feel sort of helpless trying to explain it to them in words.
I think this applies to any sufficiently “deep” activity. From my own experience, Chess and Brazilian Jiujitsu definitely qualify. Other activities lack this, and are “shallow”, e.g. Tic Tac Toe and Twister. Some philosophical concepts are probably deep and other shallow. I’m slowly learning that management might be deep, whereas for years I made fun of it for being shallow.
Finding out that certain things are deep has been one of the great revelations of the last few years of my life. Reaching stage 2 in at least a few things is really important for personal development, because it gives you an appreciation for real expertise and the humbling awareness that you might be at stage 1 in a lot of deep concepts/activities/skills. But it’s really hard from the outside to know for sure if something is deep or shallow. In some competitive areas, depth can be inferred by the gap between professionals and (otherwise age/size/strength/intelligence-matched) amateurs. In other areas I’m not so sure. If you can conceptually circumnavigate the entire area in a short period of time, it’s probably shallow, but I don’t know how you prove that you’ve done that. Humility is warranted.
> In some competitive areas, depth can be inferred by the gap between professionals and (otherwise age/size/strength/intelligence-matched) amateurs.
Is that actually true though? The Spelling Bee offers an enormous gap between the top contenders and untrained amateurs. Yet I can scarcely think of anything more shallow than just memorizing the letter ordering of a hundred thousand words.
(Yes, yes I understand there’s maybe linguistic logic to understanding etymology and what not. If this example doesn’t satisfy you, replace with a competition to see who can recite the most digits of Pi)
Whereas management (which I agree with you about being deep), doesn’t seem to have an obvious gap between trained professionals and amateurs. Mark Zuckerberg went from a socially awkward hacker who never took a business course in his life to CEO of one of the largest companies in the world in under a decade. John Sculley looks much more like a professional than Steve Jobs, yet the outcome between the two could not be more stark.
Memorizing long strings of numbers (or really anything else) is a deep field. The trick is to assign pictures to two-digit or three-digit strings of numbers and visualize the pictures along a spatial route that you are familiar with, the typical example being a tour of your house (aka memory palace). This is because the brain’s capacity for spatial memory and landmarks is really, really good compared to other types of memory. An amateur at reciting pi digits, who tries to memorize pi like one would memorize their phone number, might get to 50 or 100 digits. But an expert can memorize tens of thousands of digits.
If there is a deep gap between amateurs and experts in a field with a strong mental component, it’s probably because amateurs don’t understand some concept that experts have internalized and mastered. However, if there is a high luck component to the field, that can obfuscate who is an amateur and who is an expert. A real estate agent working in a housing bubble might look like an expert compared to other agents in other areas with more normal market conditions. I think luck can play a big factor in the management field, if your metric to judge expertise is the success of the company.
Also, Mark Zuckerberg might be untrained, but consider how many people take these business courses and suck at management! Maybe the training is not actually providing the concepts and skills needed to become an expert manager.
I am now rehearsing the sentence,
“I enjoy Schizoid Personality Disorder”.
Sounds like the 21st Century Schizoid Man to me.
This really resonated with me:
It’s annoying because, when I have this feeling, it’s really hard to actually talk out our misunderstanding because I’m worried I’ll either insult the other person or reveal myself as a moron.
I feel like this confusion is especially bad even if you know where the point of misunderstanding is in an object level sense because it’s almost impossible to strike the right tone to avoid both possibilities at once. Sound confident –> accidentally insult other person. Sound questioning –> put giant moron sign on forehead. Sound neutral –> either mistake possible.
Thank you for this. This puts into words the sensation I have been grappling with in the wake of #MeToo: I am aware that sexual harassment is a problem. It is a bad thing, and it must be eradicated. But my understanding is academic: I haven’t experienced sexual harassment in the workplace, personally. (I’m not sure that I’ve experienced sexual harassment outside the workplace, either.) And my experience feels so completely alien to all the other experiences I’ve seen people recounting that I have been questioning my own perception of reality – to the point where I find myself asking questions such as “You can experience sexism that doesn’t involve harassment, right?”
Oh yeah. (If you’re on the spectrum or somewhat geeky, it’s possible you missed it or wrote it off as nonsexual unpleasantness.)
For my part, I keep hearing about men joking about treating women as body parts, and bragging about how they took advantage of this one or that. I’ve seen enough allusions to it that I believe it must exist, but I’ve never been able to get into a situation where enough manly-men have been around me in a relaxed fashion that I’ve actually seen it in person. Donald Trump, yeah, I saw that video, but this was always something more-manly-men did somewhere else that I didn’t have access to.
But their name means
“Modernists? we’re sooooo past that”,
so it is in fact pretty safe to assume that they’re not scrupulous humble philosophers.
Like, calling yourselves “the new vanguard” isn’t even a dog whistle.
The name fits perfectly with the whole exercise, which is maintaining plausible deniability under the most trying of conditions. A urinal in an art gallery? Let me explain its hidden brilliance for you real quick…
And I’m sure it’s a great discipline, like driving or boxing or a lot of things really, that demands a lot from practitioners and makes them better people in some ways.
But their mistake is not being merciful to unsuitable opponents. By all rights, ideally speaking, a society should be tough enough and smart enough to handle such a challenge, so what obligation is there to restrain your sophism?
It’s to trollish arguments what provoking bar fights (with non-fighters) is to boxing. Truly, a grand game, with its own unique brilliance, heights, and aesthetics (etc), just one some people aren’t willing to compromise on for the sake of generosity/mercy etc. A joke or a dance taken too far.
_
Practically speaking, it’s entrenched and stuff, so it’s not just some kids playing at sophism, but in terms of the basic idea there, I don’t think there’s any mystery.
TL;DR: imagine that the emperor enjoys himself as he strides through the crowds. He may or may not know that he is wearing no clothes, but he wouldn’t care either way if he did, because he knows he likes how walking under this particular mantle makes him feel. Daring, brave, testing his soul, etc.
_
In the case of the emperor, that would be abhorrent, because he’s the boss.
-And that points out the essential problem of modern post modernism.
Artistic-whole-life existential-self-assertion is great for a side show – Evel Knievel is a lunatic that we are all better off for having around – but poor taste from a person (or movement) in power.
_
It’s as if the guy from ‘sexy and I know it’ was made emperor, and strode down the column to mild applause and no one’s surprise.
Audacity is admirable in the precarious, or unknown, not so much in the comfortably entrenched. One ‘bursts onto the scene’ with audacity, one does not venerably hand down the wisdom of the elders with audacity.
So: post modernism is a victim of its own success. The more successful it gets, the worse the joke gets, and the worse the class of people it attracts.
_
You are an A+ postmodernist and this was an excellent ironic deconstruction of itself. Bravo.
technical note: I wrote the Quora answer on smell. I wrote about “freshman year”, but meant high school, not college.
Another example of this: my non-religious friends don’t really have a conception of “reverence”. Occasionally they’ll try to ask why I do a certain thing, and reverence comes up, and they’ll go, “sure, but what’s the actual reason?” When I look for things in their life to use as analogies, I have to admit that (at least from my perspective) reverence just isn’t part of their world. There’s a little more sense of it among my Protestant friends, but even there I get the sense that they’re “colorblind” to something.
Interestingly, back when I was Protestant, Yudkowsky’s reverence for the truth was my best example of the concept, and one of the reasons I got into Rationalism.
Perhaps ask what they would be really offended to see in a jar of piss. At “Piss Christ” they might shrug, but perhaps they would be appalled at “Piss Martin Luther King Jr.” Whatever they don’t want in the piss jar is what they revere.
Perhaps in line with knockoffnikolai’s point (I am certainly non-religious), I cannot think of anything I would be really offended to see in a jar of piss.
Just a quick OT note about the site: I’m suddenly getting warnings from AdGuard that SSC is pushing a blockchain miner to the browser. I suppose it’s possible that I’ve picked up some kind of creepin’ crud in Chrome, but thought you ought to be aware.
Anybody else having problems?
There is a genre of apologism for just about everything that consists of saying “no, you don’t get it, there’s a concept-shaped hole here for you; please don’t come back until you’ve read a stack of books on $IDEOLOGY and indoctrinated yourself into believing the same stuff that we do”. If you’re very lucky they will then tell you which stack of books.
I am concerned that taking this line of thought too seriously amounts to writing that genre a blank check.
You can be afraid of spiders without ever seeing one, and that fear may affect your decisions in general – especially if you haven’t seen one before.
Same thing with anxiety. It often seems unnecessary to worry about the past, but you are your past, and therefore a (possible) bright future holds no value if it doesn’t contain the past, i.e. you. Or, like Louis C.K. put it: There’s no limit to what you can do if you don’t give a s*** about people.
Louis C.K. “Of course … But Maybe”
Information bottleneck
In Game Theory, No Clear Path to Equilibrium
I think this actually plays a big part in how people understand—or don’t understand—different socioeconomic classes. Or more broadly, how people perceive other cultures, for the broad definition of “culture” that includes not only different countries or ethnic groups but things like social classes or self-selected subcultures.
To give an anecdote, when I was younger, I grew up in a blue-collar working-class neighbourhood. For many kids I interacted with at school, their dads worked in construction, or some other typical blue-collar job, and this is the kind of future they saw for themselves as well. And it’s hard to convince yourself to pay attention in science class when you think you’re going to be hauling lumber for a living.
When it came time to go into high school, there were several different schools available to me. Most of the students in my schools, including my friends at the time, were moving to nearby schools within the same school system. My dad, on the other hand, was very adamant that I not attend those schools, but instead transfer to another school system, that tended to have more academically-inclined students. He said it would be a better environment.
For the life of me, I couldn’t understand what he meant. All I knew was that the friends I’d had since childhood were going to one place, and he was trying to send me to another.
The question ended up being moot, because my parents both got jobs halfway across the country and we all moved away. Only when I got there did I understand what my dad had been talking about: I was now living in a more middle-class area, and I was able to find friends who I could relate to in a very different way than my old friends: people who had been brought up more intellectually, who cared about school and were interested in it, people for whom university was, if not expected, then at least a major option. In contrast, most of my old friends from back where I used to live, nice as they were, tended to bond with me because we were all kind of misfits, in a variety of different ways. I ended up maturing a lot when I moved, and being in that kind of environment was, as my dad had anticipated, a very positive experience for me.
But until I immersed myself in that environment, I had no idea something like that could be possible. I just assumed that my own lived experience was the norm. I didn’t even know what I was missing.
These days, I’m living in the Bay Area, interacting mainly with people in the upper middle class, and I’m seeing the opposite of what I experienced. A lot of people here just do not comprehend the way working-class cultures like the one I grew up in tend to work, because they try to understand it by extrapolating from their own experiences, and their experiences are missing something fundamental, in a way that’s very similar to what’s described in this post.
I think this is a big part of why it’s so difficult for some people to understand other cultures or subcultures.
I’m a little late to the party here, but I don’t have depth perception and didn’t realize it for a long time.
Can anyone point me in the direction of books/articles to read on this?
I’m having a difficult time understanding the whole drift of this. Atomization. Does it have a useful definition? If so, do you know it? If you do, can you use it to tell whether your community is atomized or not? If not, what failed: your ability to apply your knowledge, your knowledge, or the lack of a useful definition?
I think that what makes more sense is not putting it into a false dilemma between two polarized positions like this.
You are in the company of many others that think the emperor has no clothes. Even if the criticism is correct, that doesn’t make Alasdair MacIntyre ‘really dumb’. The value of a book usually doesn’t depend on its ultimate ‘conclusion’ being wrong. Consider that a book can be useless for the purpose for which you read it, while being very useful for its target audience.
Philosophy as such, as a subject, is like this as a whole. There’s the Philosophy 101 level, at which every educated person has read a few books and knows a few things – what Hume’s Fork is, Descartes “cogito”, etc. At that level, it’s tempting to dismiss philosophy as an unimportant subject.
Then you might go deeper, perhaps 4 years at university studying it. Then you start to see that the problems are a bit harder than you initially thought, but you feel you’ve got enough of a handle on the subject that you can plump for one or another of the well-known positions with some confidence.
Then there’s an even deeper level where you’ve been round the houses on the main positions several times for many years, and you start to understand just how difficult and perplexing the big philosophical questions actually are.
In some cases, perhaps it’s because the relevant concepts haven’t even occurred to any of us yet.
Fascinating post. When I read this part I thought “and that’s why AI will be very very difficult if not impossible”.
I try to follow CS Lewis’s rule here. In Surprised by Joy, during the chapter on his boarding school:
> Alasdair MacIntyre is really dumb?
Three comments here
1. One only has to read historical philosophers to know that eminent philosophers often say some utterly illogical, unfounded, and wrong things. Philosophers are often smart but they can be just as irrational as I am, and possibly as irrational as the reader of this comment.
>> philosophers can be and often are really wrong
2. Smart people can do apparently dumb things if they are playing a different game from you. Example: “I know these statistics are stupid but they are what you need to do, to get published”. The game in philosophy is to convince other philosophers that you are smart/interesting.
How do we know that a philosopher actually knows something? It is not like they can point to a bomb they built as proof. The proof is mostly social, not massively unlike the proof that Taylor Swift is a great musician.
>> the proof that philosophers know much is pretty weak
3. I remember when learning a particular foreign language I first noticed that when people use complicated confusing language they are usually covering something up. Reading this book, as a result of your blog, triggered this recognition. Eventually I came to suspect that he was trying to justify Roman Catholic ethics without admitting it. I have seen this sort of thing before. And when I checked:
> (wikipedia) MacIntyre converted to Roman Catholicism in the early 1980s, and now does his work against the background of what he calls an “Augustinian Thomist approach to moral philosophy”
[note: Which was of course derivative of Stoic philosophy]
I think the confusion is a result of a hidden agenda. And nowhere in the book does he actually justify Virtue Ethics as being qualitatively better than other ethical systems.
> AM has a hidden agenda which explains why the book is so confusing