Blog, the dark of ages past!
This is the semimonthly open thread. Post about anything you want, ask random questions, whatever. Also:
1. I will be in the Bay Area from about 2/21 to maybe 3/7. I’ll see all of you who plan to be at Miranda and Ruby’s wedding there; otherwise I hope to get a chance to see some other people as schedules allow. If there’s interest in an SSC meetup, I could tentatively try scheduling such for the afternoon of Sunday 3/1 somewhere in Berkeley. If there’s interest I’ll give a firmer date later on.
2. Comment of the week is the whole discussion of gender equality in Soviet Russia and Eastern Europe. But I also need to praise everyone who continued the coffee shop gag in the comments. A very few among my favorites were Hayek, Heidegger, various economists, gwern, Thomas Schelling, various Chinese legalists, G.K. Chesterton, Nick Bostrom (1, 2), Enrico Fermi, various Islamic philosophers, Terry Tao, Nick Land, Alicorn, and various biologists.
3. Some people seem to have gotten genuinely upset about some of the recent discussion of IQ, on grounds something like that if high IQ is a necessary ingredient of some forms of academic success and they’re lower-IQ than other people, then they are bad and worthless. I strongly disagree with this and think it gets the reasoning exactly wrong, and I hope to explain why. But work has been pretty crazy lately (no pun intended) and I might not get the chance to write it up for a little while. Until then, please do me a favor and just take it on faith that you are a valuable human being who is worthy of existence.
4. Many of you probably know Multiheaded. My statistics say she is the most frequent commenter on this blog (pushing me down to second place) and we all acknowledge her heartfelt Communist comments as, um, things that exist. What you may not know about her is that she is a trans woman who lives in Russia, which is not known as a very safe place for trans women. She’s planning to escape to Canada and claim refugee status. Most of the steps of the plan are in place, and we have a few people in the Canada rationalist community willing to host her for a while, but she is asking for some money to help with travel and living expenses. She’s set up a GoFundMe account with a target of $2500. If there’s any doubt about the story, I can confirm that Ozy and I have known her for a long time and she’s kept her biography consistent longer than I would expect anyone to fake; also, her IP address does trace to Russia. Multi intends to pay as much as possible forward eventually with donations to effective charities. I intend to donate, and I hope some of you do as well.
Remember, no race and gender in the open thread, EXCEPT that I will permit, this time only, discussion of Hyde & Mertz (2009) because it’s interesting and I want to know what other people here think about it. Everything else can go over to Ozy’s place.
Pingback: Links for February 2015 - foreXiv
I bought $150 worth of stuff on Amazon but…forgot your link. Sorry.
I will, however, bookmark it now and use it later-assuming there are no drawbacks. (You cannot view what I am buying, can you?)
Actually, he does learn every sale. The real point of the program is to give him material to write about.
Oh. Well, that’s not a showstopper. But is other info being revealed? Like my real name? Or my entire order history?
No. I see an aggregated list of everything everyone has bought during the day without any further information.
Did the Russian trans lady make it to Canada? I couldn’t donate because I have literally no spare money atm but I was rooting for her success because Russia, ouch, and like empathy etc.
I thought the plan was to move at some time in the next year, but not immediately?
So, uh, I just saw this on Vox, and it’s a pretty weird coincidence.
So as a result of #slatestarcodex’s discussions of anonymous communities and image boards someone ended up requesting an artist from /tg/ (4chan’s Traditional Games board.) to draw Moloch as a little anime girl. Here’s the result:
https://40.media.tumblr.com/f1f19f3eba587795cca97eb511554e22/tumblr_nj28xzgh001u5w5p9o1_540.png
Scott, would you write an article on evidence-based birth and post-natal practices? I am pregnant and all the contradictory information I am getting is driving me nuts. There is a movement in our country which fights, all at the same time, for: 1. natural birth 2. evidence-based medicine 3. dignity and choice for women at birth. Needless to say, these things sometimes contradict each other. I used to trust these people like gods, but found out they sometimes say things differently than randomised trials show. They like to mention RCTs and meta-analyses, but only if they fit their narrative. The doctors are also sometimes unaware of the randomised trials (or even of the existence of the Cochrane library) and say nonsense with great authority. I feel like I should check every petty claim about birth in the literature myself, but that is beyond my abilities… Once the core LessWrong generation starts giving birth, they will be very thankful if you review these things.
Big news, SSC people!
**Wikipedia is offering us the chance to attain Buddhahood.**
My historic discovery began mundanely enough. This morning, I was wondering where my toddler might be developmentally with counting. The “education and development” section of the Wikipedia article on counting mentioned “subitizing,” a word I didn’t know. (It’s the “rapid, accurate, and confident judgments of number performed for small numbers of items,” like when you look at a trio and instantly know there are three members without having to count them.)
So, check out the hatnote I found at the top of the “subitizing” article:
That’s it, SSC people. Forget rationalism! Just click on that link to the article on subitism, and you can become a Buddha right now.
So, obviously, I clicked the link. Who could resist?
I don’t feel any different. But hey, nirvana is samsara, so maybe I’m not supposed to?
I’m also interested in a Berkeley meetup when convenient for y’all.
I’m interested in a South Bay meetup and would be happy to host one.
Economies of scale have a large effect on how centralized a system needs to be. Since economies of scale determine the scale of production at which the long-run average cost of production is lowest, very strong economies of scale can translate, via efficiency, into what we call a “natural monopoly” (there are other causes too), or at least an oligopoly of a few large firms. Very weak economies of scale, down to human-level economies of scale, could instead translate into means of production that fulfill their purposes with very low input costs and low management complexity.
Future technology should be able to lower economies of scale for a whole variety of things humans need. Solar energy, though flawed, is an example of lowered economies of scale in electricity production, since decentralization is possible in a way that it isn’t for other forms of producing electricity, which have high (relatively fixed) input costs associated with extracting fuel, providing constant maintenance for a large facility and so on. If Lockheed Martin’s new fusion scheme can work out, with over unity achievable in a device the size of a truck, then the economies of scale for nuclear power are lowered from what they are now with fission plants. This allows for more decentralization, and the other factor of lowered proliferation and accident risks feeds into this too.
Another thing that allows for decentralization is when we can find more uses for common, locally available materials rather than expensive, hard-to-extract, faraway materials that require large-scale coordinated supply chains. How many rare materials are needed for modern electronics? Is it plausible that this could change in the future? We’ve seen all the promise the seemingly endless uses for carbon allotropes like graphene and nanotubes hold, so is it plausible for, say, a humanoid robot to be made out of mostly carbon, and is it possible to produce the needed allotropes without requiring rare or unequally dispersed materials as catalysts? The promise of carbon is that you could have carbon-fiber bodies for machines and also graphene for electronics, since the same element can effectively become a new material with new properties. Given all this, is it plausible that the production of functional humanoid robots and computers in the future could depend on just a few elements like carbon and silicon? This would mean that almost all things could be locally produced.
The question is, if we do head towards a sort of “techno-distributist” sort of economy that is also functionally autarchic to a municipal level, then isn’t the economic pressure towards having potentially dangerous hyper-intelligent AI in charge of highly centralized and risky production, greatly decreased?
If
1: Economies of scale can be lowered in most production vital to human life such that large centralized industry becomes less necessary.
2: The materials that such production is dependent on can be substituted for more abundant, more equally dispersed and less environmentally dangerous ones.
Then
3: Large-scale hyper-intelligent AI will be less likely to be needed on an economic basis. If every city can provide for its own production, and the scale of units of production (or businesses, though this economy would be post-capitalist to a great extent) within each city is very small due to technologically lowered economies of scale, then only smaller-scale AI units are needed for production, possibly down to the scale of robot “workers” (which have the advantage of being movable between production tasks in a scalable way) only as intelligent as humans.
My point is that if technological progress tackles economies of scale, and production materials before AI gets too good, then we can significantly reduce the dangerous nature of AI by virtue of reducing both its necessary intelligence and the need for single AIs to control large scale operations.
Am I missing some important factor here in coming to this conclusion?
It seems like you’d run into Moore’s Law of Mad Science: while you have less incentive to build an astoundingly intelligent machine, the same technology that encouraged decentralization has decreased the level of intelligence necessary to destroy the world.
Worse, the threshold is already pretty low in the environment you’re postulating. While the LessWrong Sequences often fixate on superintelligent AI of a style that can solve extremely difficult problems like protein folding, the underlying dangers apply more universally to any minds as broad in capability as humans that don’t share enough of the same values. This is somewhat mitigated by other groups in the same environment being more resistant, but it’s still a worrying concern.
I don’t think superintelligent AI is being thought of as a centralized economic planner. I think it’s just being assumed that someone will build it for the sake of Science.
If decentralization progressed so far that large companies stopped existing, that might set it back a few years since one likely method of discovery is a research project by someone like Google. But I don’t think that’s very likely. There will always be some people who like the convenience of being on the grid.
There are also significant military advantages to controlling a superintelligence, both in terms of research and development of weapons, cryptography, etcetera and in terms of strategic and tactical applications. I think there will always be powerful groups with incentives to develop general AI, even if we ignore “Because It’s There” Mt. Everest thinking.
Scott, you mentioned having to place someone in a psychiatric hold in a previous post. Where does that stand legally? If a patient leaves anyway are they a fugitive? Will police officers be sent after them? Can they be physically restrained by you or burly men in soothing white outfits?
Patients are usually committed in one of three ways:
1. Their outpatient psychiatrist decides to commit them. In that case, they call the police and the police bring them in from the outpatient office. They are not free to leave.
2. Their family and friends realize something is wrong and call the police. As above.
3. They are in the hospital for some physical disease, and their doctor realizes they have a psychiatric problem. In that case, they stay in the hospital, but their status is changed to psychiatric and they are not free to leave. If they try, hospital security stops them.
I’m not sure what would happen if police were called on someone for a psychiatric reason, they “escaped”, and then police found them again a couple of weeks later. I’ve never seen this situation.
If someone “escapes” from the locked psychiatric unit, usually because some visitor has walked in and left the door open a little too long, then hospital security chases after them. In practice it’s not too much of a chase, because in my hospital the psychiatric unit is on the top floor, so security can just wait at the bottom of the stairs/elevator shaft. This happens in my hospital about once every couple of months. Once a year or so someone makes it as far as the hospital parking lot area. I don’t think anyone’s ever made it further than that.
Once again I’m not sure what would happen if someone “escaped” and then was found a week or a year later. I predict police would talk to them, see if they still sounded mentally ill, and if not leave them be.
In the first case, are they told before the police arrive that they are not free to leave? Does the psychiatrist have the legal right to physically prevent them from leaving (which would usually be illegal, e.g., false imprisonment)? I was thinking of the case you mentioned and how one might hesitate to mention thoughts of suicide to a doctor for fear of being committed. Can a patient not walk out of an unsecured outpatient clinic before the police arrive, or are they not given the chance?
Thanks in advance for whatever knowledge you have. Google hasn’t been cooperative.
The most common method is that the psychiatrist says she has to “leave to take a call” or something and calls the police without telling the patient that is what she is doing. Then she continues the appointment, and the police arrive while the patient is still there.
If the patient somehow figured out what was happening, I don’t think the staff would risk a violent confrontation by trying to prevent the patient from leaving, but I do think the police might check the patient’s home. If the patient went to stay with a friend or something, I think the police would just give up at that point. It’s not a criminal warrant or anything, it’s just something they’re doing because they were asked. They might make an exception if the patient was believed to be very violent and dangerous.
That sounds like a very tough choice; on the one hand, someone would not call the police unless they felt there was a real and urgent need. On the other hand, the patient may well lose any trust: “I thought this was an ordinary session and they called the cops to drag me off against my will! They never even told me they thought I needed to be committed! See if I ever co-operate with them on anything again, the bastards!”
This is the kind of thing that gets written up years later in bestselling memoirs.
Yes, while I’m sure Scott wouldn’t abuse this power, it does seem an extremely risky proposition, as it may well poison the patient’s attitude against psychiatric doctors for the rest of his/her life.
If I were a patient, I would certainly view it as an extreme betrayal. I also have a personality type where if you asked me to check into the psychiatric hospital for my own good there’s a good chance I’d do it, but if you forced me I would not only resent the hell out of it, but would do everything in my power to sabotage any treatment process from that point on.
I guess in most cases it’s in response to a fear that the patient will not only commit suicide or some other violent act in the near future, but also that he/she will not likely check in willingly. In the case of fear about harm to others, it seems justified, but in the case of suicide alone it seems like a much bigger hurdle to clear, since if you make them hate psychiatrists forever there is a good chance they will later commit suicide rather than seek help, even if they are prevented from self destruction today.
Ezra Klein names Slate Star Codex in second place in his list of favorite blogs. I am happy to see that Scott is getting the recognition he deserves.
nitpick: I don’t think that this is an ordered list, as “second place” suggests. The blog chosen to go first is not competing with the rest and is probably put first to defuse jealousy over ordering.
Is there an evolutionary explanation for masochism/submissiveness? It seems like m&s must have been selected for because they are common fetishes and don’t appear to be a byproduct of some other adaptation. But how could wanting to be hurt be adaptive?
What’s the evolutionary explanation for liking spicy food? Roller coasters? Horror movies? Sad books?
> What’s the evolutionary explanation for liking spicy food?
It hurts your parasites more than it hurts you, IIRC
I’m not convinced that my ancestors had access to spices.
I’m pretty sure there are spices everywhere there’s plants. Just stronger ones where plants have a greater parasite load to fend off.
Capsaicin, at least, comes from peppers, which were not available outside the New World until ~500 years ago. It doesn’t seem like it’s been around long enough to exert a selection effect on most humans.
Yes, but capsaicin is closely related to the chemicals in old world ginger and vanilla. Further afield are black pepper and cinnamon. The receptor is even triggered by the unrelated mustard.
Vanilla is also New World.
There are hot Old World spices, of course, including ginger, black pepper, and long pepper, but vanilla is not one of them.
Roller coasters, movies and books weren’t around in the evolutionary time, so they wouldn’t have lowered individual fitness. On the other hand, I’m sure there must have been some bossy women millions of years ago, and some men decided to pursue them instead of the more easy-going women.
Yeah, but high, scary and sad things have always been around — and good to avoid?
Are you curious about sexual kinks or relationship roles? They’re correlated in some lifestyles, but not in others — e.g., the Captain / First mate crowd dislike being lumped in with S/M people.
I just saw this today!
http://www.ncbi.nlm.nih.gov/pubmed/25617882?dopt=Abstract
Did they find that the number of respondents’ siblings correlates with hierarchical roles in their parents’ relationship? Or what?
I couldn’t obtain the full text of the study, but I did find another one by the first two authors:
http://onlinelibrary.wiley.com/doi/10.1111/j.1743-6109.2009.01526.x/abstract
They used this questionnaire:
When watching a movie or reading a book, I would be aroused by a situation in which a partner would be behaving equally to his partner rather than lower-ranking
[Equally] 1 2 3 4 5 6 7 [lower-ranking]
When watching a movie or reading a book, I would be aroused by a situation in which a partner would be behaving equally to his partner rather than higher-ranking
[Equally] 1 2 3 4 5 6 7 [higher-ranking]
I consider myself physically attractive to others
[Definitively yes] 1 2 3 4 5 6 7 [absolutely not]
I think that my face is attractive
[Definitively yes] 1 2 3 4 5 6 7 [absolutely not]
“The respondents were also asked about the number of their brothers and sisters, and their parents’ brothers and sisters.”
And the results:
Out of these correlations, the first was the largest one: r = 0.267, P < 0.05.
Human pairs with a hierarchic disparity between partners conceive more offspring than pairs of equally-ranking individuals, who, in turn, conceive more offspring than pairs of two dominating partners.
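For a sense of how weak that headline correlation is, a quick sketch of the standard significance arithmetic for a Pearson r (the sample size here is a made-up illustration, not the study’s actual n):

```python
import math

def pearson_significance(r, n):
    """Return (t_statistic, variance_explained) for a Pearson correlation r
    observed in a sample of n pairs. t follows a t-distribution with n-2
    degrees of freedom under the null hypothesis of zero correlation."""
    t = r * math.sqrt((n - 2) / (1 - r ** 2))
    return t, r ** 2

# Hypothetical sample of 80 respondents, using the reported r = 0.267.
t, r2 = pearson_significance(0.267, 80)
print(round(t, 2), round(r2, 3))  # → 2.45 0.071
```

With 80 respondents, t ≈ 2.45 clears the usual two-tailed 5% threshold, yet the correlation explains only about 7% of the variance, so “P < 0.05” here is compatible with a very weak relationship.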
For pete’s sake, is that a serious study or thinly-veiled A/B/O fanfiction?
I feel I should include a “CW/TW” for clicking on that link, but hey – I suffered for my art, now it’s your turn! 🙂
I haven’t read through the whole thing yet, but I predict it’s just capturing the fact that if you have easily-satisfied fetishes you have more sex and therefore more kids, which isn’t very interesting.
But they’re not just talking about fetishes (“All right, darling, it’s your turn to dress up as a banana tonight”), it’s a whole dominance system.
And really, I’m fed up to the back teeth with this whole “let’s apply to humans terms derived from a wolf study that turned out to have been poorly planned” notion that has percolated out into pop culture.
Yeah, there are Alphas who are all ‘grrr argh me manly man beat my chest you woman get into bedroom now’ and, if paired with a properly submissive female, will have lots of kids; Betas who are not so dominant and don’t have as many kids as the Alphas, and then the Gammas/Omegas/however you name them who are nature’s subs and do best when paired up with a Manly Man Alpha.
And of course, two Alphas don’t get on because the woman is trying to wear the trousers in the relationship (and rebelling against her natural biological role). This is why fewer kids.
I swear, this is the first I ever heard of this study, and it’s reminding me of world-building I did for an A/B/O verse fanfic trying to explain on a scientific, genetic basis differential fertility rates between pairings of different types 🙂
Obligatory Saving My Blushes: It wasn’t for the kink, I swear! But the hierarchical, power-play, power-exchange, social roles and cultural expectations and stereotypes in such a universe are fascinating to explore: you’ve got a whole mix’n’match of gender expression, sexual expression, sexuality expression and male/female phenotype expression where there are three genders/two sexes (at the simplest, most reductive, strictly binary level) and all the combinations thereof you can think of.
Okay, and trying to work out the anatomy for an erectile clitoris in human Alpha females (à la female spotted hyenas) and how oocytes are delivered into the reproductive tract of the embryo-bearing partner and what needs to be fertilised by whom how in order to have a viable pregnancy is a fascinating engineering, plumbing and obstetric problem 🙂
Disclaimer, I don’t think this is necessarily the case, evolutionary explanations tend to be circular even if correct.
However, this would be my evolutionary explanation for BDSM fetishes. Warfare throughout human history has resulted in the conquest of many peoples. Even without open war, violence has been the rule rather than the exception. There are two strategies to deal with violence, fight or submission. One would expect there would be a divergent two-track evolutionary selection for both the most aggressive and the most submissive. One would further expect the submissive track to skew female (given their lesser physical capability for violence) and the aggressive track to skew male. Of course, in the classic war-rape scenarios both of these tracks are being passed on at the same time, so there’s a certain averaging that happens over time as well.
More concretely and less scientifically, it is my observation that there seems to be for many people a certain release in giving up control. This is prominent in the most successful world religions. And there is a certain responsibility that goes with taking control, whether this be in emergencies, business, sex or relationships. The picture is complicated in human psyche.
The rabbit does not flee until the end, just before, it stops and crouches. The hawk wins before it strikes.
Could we perhaps stop comparing sex to actual, literally-tear-you-into-shreds-and-consume-your-flesh predation? Please?
(I’ll make an exception for sincere vore fetishists.)
I’m not comparing sex and violence, but the two are the most powerful urges that mankind has. Life and death. My point is that S&M may be just the sexual expression of a deeper psychological phenomenon that also surfaces in violent situations and other areas of life.
Look, the rabbit analogy is just silly. Staying still when you’re being chased by a predator is not an adaptive strategy.
Note that I didn’t object to the main content of your post, where you used plausible arguments instead of weird analogies.
I can imagine submissiveness being valued in some contexts as a commitment strategy. Suppose your hunter-gatherer tribe has just massacred the males of an adjacent tribe and is deciding what to do with the females. You might decide to keep the ones you are pretty sure will do what they are told and won’t stab you with your own spear while you are sleeping, and kill the others.
That’s an extreme case, but one can imagine much less extreme ones, where a male with lots of resources, or a female in an environment where fertile females are scarce, selects a mate on the basis of how easily the mate can be controlled.
AI as it really is
My dad has that same quadcopter, as it happens. There’s no AI in it; it’s strictly an R/C craft.
Hello, if I understand correctly this is where I fire away with any question I wish. I’ve been following the Perfect Health Diet that you reviewed last year and which seems the best balanced analysis I’ve read. However, you wrote of the Jaminets’ reliance on a single study sponsored by the National Dairy Council that dismissed the link between saturated fats and heart disease. I was of the impression that there is a huge corpus, some 16,000 studies published through to 2013, reviewed by the Swedish Council on Health Technology Assessment that has led to Sweden becoming the first country to recommend a high-fat, low-carb public dietary policy. I’m interested in your thoughts on this.
Scott, here is a post by Karen Straughan (Girlwriteswhat) in reaction to l’affaire scott Aaronson.
http://owningyourshit.blogspot.com/2015/01/an-open-letter-to-two-scotts-on-nerds.html
Thought it might interest you.
First time I’ve seen “internalized misandry” used unironically.
I believe he has said previously that he didn’t want these brought to his attention.
I just read an old post on SSC and really wanted to comment.
https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/
I want to state my opinion (as someone who works regularly with frequentist statistics and is familiar with Bayesian statistics) that there was a correct answer to the debate I read in the comments about when and how to use made-up statistics. The answer I think is correct is this: it is always appropriate (scientifically, if not necessarily socially) to use made-up statistics, so long as you present them with your confidence rating and your current calibration rating as determined by an impartial third party (such as the phone app “Calibration Game”). I would also say that it is optimal to also present sources for your data (e.g., in this case, “I made these numbers up based on my intuition and life experience”).
In accordance with this proposal (and as an example), I hereby state that my confidence rating that this is a good idea (as defined by improving the accuracy of information transfer for those who adhere to it) is 95%. My current calibration is 50%=.55, 60%=.62, 70%=.71, 82.5%=.83, 99%=1.00. I am basing this on my personal experience, my several years of formal education in scientific theory and frequentist statistics, and my limited, mostly self-taught knowledge of Bayesian statistics.
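A calibration rating like the one above can be computed mechanically from a track record of stated confidences and outcomes; here is a minimal sketch (the function name and the example record are hypothetical, not from any actual app):

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (stated_confidence, was_correct) pairs by confidence level and
    return the observed frequency of correct calls at each stated level."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[confidence].append(1 if correct else 0)
    return {conf: sum(hits) / len(hits) for conf, hits in sorted(buckets.items())}

# Hypothetical track record: ten 70%-confidence claims (seven right),
# ten 90%-confidence claims (nine right) — perfectly calibrated.
record = [(0.7, True)] * 7 + [(0.7, False)] * 3 \
       + [(0.9, True)] * 9 + [(0.9, False)]
print(calibration_table(record))  # → {0.7: 0.7, 0.9: 0.9}
```

A well-calibrated forecaster’s table hugs the diagonal (70% claims right 70% of the time); entries like “50%=.55” above are exactly rows of such a table.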
I think you’re missing the point. The point is not the quality of statistics, but to make a precise argument, so that people can see if the numbers matter at all.
Thanks for the response. I was worried I wasn’t being clear, and it seems like maybe I wasn’t. I think the problem encountered when trying to teach people to use made-up approximations rather than qualitative intuitions/heuristics is that many people have a heuristic something along the lines of “distrust uncited, made-up-seeming numbers.” This heuristic makes sense in a modern world full of untrustworthy statistics being used to support politics or ad campaigns or whatever, re: our vulnerability to Anchoring and the Fallacy of Generalization from Fictional Evidence. I think the solution is not to throw out that heuristic, but to circumvent it by adding a subclause: it’s okay to talk made-up statistics if you are able to keep in mind throughout the discussion that the statistics are made up.
It seems particularly appropriate to me, in the context of learning about rationality, to be conscious and careful of the strength of evidence when presenting evidence. This is rather a pet peeve of mine in science, where I often notice the related problem of journal authors reporting p values but not effect sizes.
Of course, in the context of a beginning workshop on rationality, you have to start somewhere, and trying to simultaneously bring in roughly estimated statistics, appropriate error bars, and a discussion of self-calibration of confidence estimates would hopelessly complicate things. So my comment doesn’t apply so much to the original post about the frustrations of teaching the basics of rationality as to the abstract debate in the comments about the pluses and minuses of using roughly estimated numbers in decision-making. In that debate, I wanted to come in firmly on the side of numbers, roughly estimated or thoroughly researched, so long as they are accompanied by a reliable measure of their roughness. I believe this addresses the major concerns raised by the anti-estimated-numbers debaters.
So, why is it that people who talk about Unfriendly AI assume that the AI will instantly be able to accomplish all of its goals? That doesn’t seem like a very defensible position to me.
The people who make this argument don’t seem to believe that all knowledge is derivable from first principles in any other situation, but apparently AI can do it? As soon as someone makes an AI, it “modifies its own code to become more intelligent”, then with its greater intelligence, makes itself EVEN smarter, etc. But that requires “how to be more intelligent” to be derivable from first principles, because the AI sure doesn’t have time to work it out by trial and error, since those would require interacting with the outside world to see if it’s doing better or worse and thus working on the timescale of the outside world instead of “magic infinite processor cycle time”. And if it works by trial and error, it damn well might make itself dumber — it can’t perfectly simulate its own code to see if something would work or not, due to the halting problem.
And even if it has “infinite intelligence” — I’m not sure anyone knows what that means — why assume that means it can accomplish all of its goals because it outsmarts us? This neglects the possibility that some goals can’t BE accomplished. You’d laugh at someone who said that AI will figure out how to let us travel faster than light because it’s infinitely intelligent, because you just can’t do that. Why aren’t we laughing at people who say that AI will figure out how to turn all matter in the solar system into paper clips without anyone stopping it?
Yudkowsky does that exercise where he pretends to be an AI that is locked in a computer and tries to convince someone to let him out, to demonstrate how dangerous it is: a super-smart AI could convince anyone of anything, and all it takes is one guy to let it out and then we’re DONE FOR. And let’s assume that is how it works and the people who built this system were idiots. Why do we assume that once it gets Net access, we’re done for instantly? For one thing, bandwidth is A Thing, and infinite intelligence doesn’t mean infinite bandwidth. In fact, whatever it is trying to do to hack the Internet and paperclipize us is going to be severely restricted by bandwidth limitations, to the point where there’s ample time for a human to see “hey, what’s all this then?” and shut it down. How’s the AI going to prevent itself from being manually disconnected or deactivated? Maybe if you gave it a few months it could build a small robot to build a bigger robot and so on to secure itself in the physical world, but people would notice that shit, and that is not sped up by “infinite intelligence”; that takes a fixed amount of real time. Is it going to copy itself to other computers? Again, where is the bandwidth for this gargantuan hyper-complicated program coming from? Is it going to distribute itself like a RAID array? Now it’s a hundred million times slower and can’t secure its own code. Whoopee.
The biggest step for an AI to start fucking with us is how it crosses “existing on networked computer” to “enough presence in physical reality that human efforts cannot stop it”, and every argument I’ve seen seems to just gloss over that and take it as a given. Maybe I never saw the right one.
Have you seen this one? I think it paints a bright, intuitive picture of the problem.
http://lesswrong.com/lw/qk/that_alien_message/
But perhaps you have.
I have.
I wasn’t impressed.
This is complete horseshit. Wintermute over there might create general relativity as a hypothesis from viewing 3 frames of video, but only because the number of hypotheses it has is infinite. If it doesn’t already know what physics are — and it doesn’t — there are an infinite number of possible permutations of physical laws that might account for the behavior of the objects depicted. Does the apple exist in 3 dimensions at all? Wintermute doesn’t know, it only sees 2. Does it exist in 4, 5, or 6 dimensions? It might very well, if it could exist in 3; Wintermute doesn’t know. Does the universe exist outside of the area depicted? Wintermute doesn’t know. If it does, what’s outside the frame? Wintermute doesn’t know. What properties of depicted objects are relevant? Wintermute doesn’t know. What is light? Wintermute doesn’t know.
Wintermute might come up with general relativity as a hypothesis. It would be given equal weight with the hypotheses that red round things always fly downways, that the apple is the protrusion of an Elder Thing into our reality, waving it around as a mating dance, and that quizblorg quizblorg quizblorg quizblorg. Until you can start eliminating possibilities YOURSELF, there are an infinite number of rulesets that result in the depicted sensory data.
This article is a bunch of handwaving. It dismisses the notion that there is a finite amount of information that can be derived from sensory data while glossing over things like fidelity of sensory data and access to sensory data and creation of sensory data. His example at the end gets up to the “start affecting physical reality” point, and then handwaves it away as a foregone conclusion.
Okay, we’re on a computer being simulated by Aliums, and we can communicate with them, and we move a hoojillion times faster than them. Let’s accept all that. And we get access to research papers, even though that’s a much much much bigger leap than “someone will connect us to the internet”. And none of the papers we get are wrong, since we wouldn’t be able to tell. We still don’t come up with a perfect explanation of their physics better than they have, because we cannot gather original data, and we can come up with plenty of POSSIBLE explanations that fit the data we have but cannot eliminate any of them.
So you say you wave the Magic Bayesniamism Wand and we update priors and find the thing that has the highest chance of working. Then we find “some poor shmuck”, whom we trick into following the directions for our preposterously complicated and specific formula that Yudkowsky just glosses over to make grey goo that will do exactly what we want it to do in a world where physics are different. He says okay. A hundred trillion trillion trillion years pass and our Goobots do not contact us back where we are, in the simulation. So now what?
Did the guy fuck it up? We don’t know. Was he able to get access to all the materials he needed and equipment to refine them? We don’t know. Was he stopped? We don’t know. Was one of our assumptions about how physics work there wrong? We don’t know. What stage of gooification did it get to? We don’t know. Should we have assumed that because we had a formula we thought would make our goo, that it would be easy for any person to make and impossible for them to stop? Probably not.
Do we have to *eliminate* the infinity of technically possible explanations before we can focus on what’s left?
I won’t pretend to be an expert, but it seems to me that it should be possible for it to apply Occam’s razor and focus its efforts on (I expect) the possibilities that require the smallest number of rules to explain the available data, and update accordingly as more data become available.
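To make “smallest number of rules” concrete, you could score each hypothesis by its fit to the data plus a cost per free parameter, MDL-style. A toy sketch — the three “frame” values and the per-parameter penalty below are numbers I invented purely for illustration:

```python
# Toy "Occam" scorer: prefer the hypothesis that explains the observed
# positions of a falling object with the fewest free parameters.
# Three "frames" of a falling apple, positions at t = 0, 1, 2.

observed = [100.0, 95.1, 80.4]  # made-up frame data, units arbitrary

def fit_constant(data):
    # 1 parameter: the object never moves.
    y0 = data[0]
    return [y0] * len(data), 1

def fit_linear(data):
    # 2 parameters: constant velocity.
    y0, v = data[0], data[1] - data[0]
    return [y0 + v * t for t in range(len(data))], 2

def fit_quadratic(data):
    # 3 parameters: constant acceleration (exact for three points).
    y0 = data[0]
    a = data[2] - 2 * data[1] + data[0]   # second difference
    v = data[1] - data[0] - 0.5 * a
    return [y0 + v * t + 0.5 * a * t * t for t in range(len(data))], 3

def score(data, hypothesis, penalty=1.0):
    # MDL-flavored score: squared error plus a cost per free parameter.
    preds, n_params = hypothesis(data)
    err = sum((p - d) ** 2 for p, d in zip(preds, data))
    return err + penalty * n_params

scores = {h.__name__: score(observed, h) for h in
          (fit_constant, fit_linear, fit_quadratic)}
best = min(scores, key=scores.get)
```

Here the three-parameter fit wins despite the penalty because its error term is essentially zero; a much heavier penalty, or noisier data, could flip the choice toward a simpler model.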
If Wintermute, trying to derive physics from three frames of an apple falling, were to apply Occam’s razor, it would get everything wrong. Assuming there is invisible substance where there appears to be nothing is an overcomplication, but that’s air. Assuming there is a gargantuan object out of frame affecting the motion of objects in frame is an overcomplication, but that’s gravity. Assuming that the apple is made of molecules which are made of atoms which are made of subatomic particles (instead of being made of apple) is an overcomplication, but that’s physics. Assuming that the 2d recording is actually a flat rendering of a 3d space is an overcomplication, but that’s what it is.
You’re calling these things overcomplications, but if you think them through, is the final theory more or less complex? Do you need more or fewer fundamental rules to explain the falling apple’s actual observed behavior if you assume there’s no air?
It’s striking – but it’s fictional. I’d like to see a more rigorous case. In particular it’s far from obvious to me that even a planetful of einsteins could figure out relativity from 5 frames, because there could well be nothing relativistic happening in them, never mind QM. (I know that in actual mathematics there are no theories that are almost like QM but not QM. But that’s a convenient fact that feels contingent rather than necessary, unless you’re Tegmark. In a less convenient possible world there could be 10^500 different possible theories of QM/relativity, all behaving the same in the low-energy regime.)
It also cheats by assuming that an emergent AI will have the equivalent of a 140+ IQ and an effective clock speed millions of times faster than a human brain.
As we are currently a long, long way from either, and technology generally doesn’t increase in Giant Enormous Leaps, I find this extremely unlikely. The first automobiles were slower than horses, and the first airplanes slower than the fastest automobiles of their day, and the first jet plane wouldn’t have won any speed records. Generally speaking, the first implementation of any new technology is inferior to what has come before except in its specific area of novelty.
So what happens when the first AI has an effective IQ of 70, and when running on the biggest cluster in the lab can operate at[*] one-tenth the speed of a human moron? Might there be useful applications for such a thing? Certainly, and ethical debates about what we are allowed to do with it. But it isn’t going to start turning the world into paperclips overnight.
The second-generation AI with 90 IQ and half the clock speed of a person, that’s also going to be interesting but probably harmless. And it will be created by humans based on what they learned from the Mark I version, not bootstrapped by the Mark I Artificial Moron itself. The third generation AI, 110 effective IQ and twice the clock speed of a human, that one potentially could bootstrap itself to hyperintelligence faster than we could follow, but not so fast that we won’t notice or, given our enormous head start in the real world, be able to stop it if it gets out of hand in the early stages.
By the time we have to deal with nigh-transcendent AI, I expect we’ll have plenty of experience with lesser but still interesting sorts.
[*] I explicitly avoid the term “clock speed”, because the speed with which one processor can perform one flop is not the limiting factor for the sort of massively parallel architectures that will be involved.
” But that requires “how to be more intelligent” to be derivable from first principles, because the AI sure doesn’t have time to work it out by trial and error, since those would require interacting with the outside world to see if it’s doing better or worse and thus working on the timescale of the outside world instead of “magic infinite processor cycle time”. And if it works by trial and error, it damn well might make itself dumber — it can’t perfectly simulate its own code to see if something would work or not, due to the halting problem.”
“Trial and error” in this case is a programming problem – making changes to the code and testing it. That can be done in ultrafast processor time. I don’t know much about programming, but I imagine it should be pretty easy to create a sandbox version, run it there, run a battery of tests on the sandbox AI, and adopt positive changes into the real one – thus escaping the possibility of accidentally becoming dumber.
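The loop being described is basically hill-climbing over program variants. A minimal sketch of its shape — the parameter-vector “AI”, the mutation operator, and the benchmark are all stand-ins I made up; a real self-modifying system would be nothing this simple:

```python
import random

# Stand-in "AI": a parameter vector scored against a fixed benchmark.
# In the proposal above, mutate() would be a code change and benchmark()
# the battery of tests run on the sandboxed copy.

TARGET = [3.0, 1.0, 4.0]  # arbitrary "ideal behavior"

def benchmark(params):
    # Higher is better; 0.0 is perfect.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, rng, step=0.5):
    # Tweak one "module" (coordinate) at a time.
    child = list(params)
    i = rng.randrange(len(child))
    child[i] += rng.uniform(-step, step)
    return child

def improve(params, rounds=5000, seed=0):
    rng = random.Random(seed)
    best_score = benchmark(params)
    for _ in range(rounds):
        candidate = mutate(params, rng)    # edit a sandbox copy
        cand_score = benchmark(candidate)  # test the sandbox copy
        if cand_score > best_score:        # adopt only strict improvements
            params, best_score = candidate, cand_score
    return params, best_score

final, score = improve([0.0, 0.0, 0.0])
```

Because a change is adopted only when the sandboxed copy tests strictly better, the score can never regress. The obvious caveat is that benchmark() has to measure what you actually care about: this loop can only reward improvements its own tests can detect.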
There’s an obvious risk of the sandbox AI being so much smarter that it breaks out of its sandbox and takes over the real AI. This is partially solvable by attempting incremental improvements on one module at a time, but it’s not wholly solvable and I think it’s a serious part of the potential failure modes MIRI is trying to investigate.
How is more intelligence through trial and error a problem solvable in Infinite Computer Time? How will a program know if its sandboxed AI is smarter than it is? By posing problems to it? Because it can only grade problems it knows how to solve. So it can only gauge if something is AS intelligent as it is. It has to interact with the outside world to get problems it hasn’t, itself, created, and that can’t be done on Infinite Computer Time.
It can’t run a check on the code and compare it against the Smartness Gauge. There’s only a Smartness Gauge if “how to be more intelligent” is derivable from first principles, otherwise the maximum measurable Smartness is the one we knew how to create, ie, the AI doing the testing.
And no, asking problems the AI knows how to solve and seeing if the sandboxed AI does it faster is not an intelligence test, because you can’t go faster than Infinite Computer Time, and being able to do a given equation faster and faster doesn’t give it the ability to figure out what equations to do.
Yudkowsky and you et al. say that the program can do anything because it can program itself to become more intelligent, and therefore, will figure out how to do all things instantly. It can’t figure out how to do things without interacting with the physical world, and it can’t figure out how to do things BETTER without interacting with the physical world and fucking up a bunch of times.
You’re either overthinking this or underthinking it. I have a pretty easy time figuring out which people are more or less intelligent than me, even *without* being able to give them whatever battery of test questions I want and read their source code.
A lot of this won’t be mysterious IQ type stuff, but just speed, processing power, working memory, etc. Insofar as the computer understands intelligence (which to some degree it will have to if it inherited that knowledge from the people who built it) it can test algorithms directly. Otherwise, it can test against various tasks, like playing games of skill, proving theorems, solving problems it gets off the Internet, et cetera.
You find it easy to assess people’s intelligence? I find that people being good at communicating and being good at thinking are far from perfectly correlated, and until I know someone extremely well, my opinion of how intelligent they are is more influenced by how well they communicate than by how well they think.
One, you are overestimating the effect of “read their source code”, because, again: halting problem.
Two, you’re underthinking it. General intelligence exists in humans because we have a common body of knowledge and experience. An AI doesn’t. If all knowledge is not derivable from first principles, which it isn’t, you cannot assume there’s a g for an AI like there is for a human. There’s a huge body of skills that all human beings have and problems that all human beings know how to solve, and you and Eliezer seem to assume that an AI will be able to easily derive this stuff — because hey, it’s easy for you — and go from there. But it can’t. It has to learn all that stuff we’re almost born knowing.
I referred before to that exercise where Eliezer pretends to be an AI and convinces someone to let him out onto the network, because All It Takes Is One Shmuck and once it’s on the network We’re All Doomed Because Infinite Intelligence.
Eliezer says his ability to convince people to let him out is proof of how dangerous AIs are because they are EVEN SMARTER and can convince anyone of anything.
How the fuck is an AI going to know how to convince anyone of anything? Eliezer has spent his entire life talking to human beings, and has a human mind through which he can emulate the state of other human minds. An AI hasn’t and doesn’t. It might be an insurmountable leap for an AI to be able to figure out that other people can be lied to; it’s a major developmental milestone for children. If it can lie, and knows the utility of lying, it has no idea what lies people will believe, it has no idea what’s plausible, it doesn’t even know what the outside world is because it doesn’t have that information yet!
His analogy where we’re the AI on the simulation and we sneak into the simulator’s reality is bullshit for like eight different reasons, and he cheats past every significant obstacle. By making US the AI, he gives it all the human knowledge that an AI won’t fucking have, and then handwaves by making it relevant to the outside simulating world without ever pointing out how astonishingly implausible this is. He says that once we get on their network, we hack up a bunch of money and buy “a few vials” for our shmuck to mix together and they will make self-refining nanobots that allow us to Do Whatever.
How do we hack up a bunch of money when we have no idea how their banks work, or what banks are, or what currency is? By gathering all the data on their Internet about how to hack? Great, you have seven hundred mutually incompatible plans to follow. By analyzing the source code of their software which we got somehow? We might have ideas for what COULD work, but we’d have to try them, and we’d only get a couple tries, and oh yeah we have no idea what their monitoring system looks like, or what monitoring systems are, much less what triggers an alarm.
How do we convince our shmuck to carry out our dirty work when we have no idea how the aliens think, what they value, or even if they “think” at all as we understand it? It’s certainly a mode of thought totally unlike our own, which we have no experience with, and again, we don’t get to derive it from first principles, and we don’t get infinite tries to fuck up before someone notices “oh hey, the AI is trying to convince people to mix some vials. We should shut that shit down.”
Also, “mix some vials” as being all we have to convince the guy to do is monumentally stupid and cheaty, but not for directly the same reason.
The point is, if a movie showed this level of handwaving and cheating about any other subject, you would be booing and throwing popcorn at the screen. But everything I’ve seen trying to convince me about how uberdangerous and unstoppable AI is cheats and handwaves just as much.
The thing is, we are talking here about real resource limitations, hard limits. NP-complete is NP-complete. Entropy is Entropy. It took a really long time to prove (for example) Fermat’s Last Theorem cuz theorem proving is really hard. Optimization is likewise hard. In the interesting cases, it is NP-complete. Problems such as “conquer Europe” are really hard optimization problems with billions of variables, but actually no one really knows how many variables cuz why track this bundle of atoms and not that bundle of atoms? This is *really hard*.
You know how market capitalism works pretty okay compared to centralized systems that totally fail. This is a similar problem. A truly smart AI is as likely to conclude, “I can’t actually control these people-units. I should play nice and spend my time doing fun math.”
@veronica d
It would be nice to know that these problems are intractably hard, but since we are not ourselves super-intelligences, are we not just guessing? When we started applying modern SAT solvers to planning problems, software and hardware verification, cryptography, etc, we found that lots of practical problems were actually easy, even though they belong to a class of problems which also contains hard instances.
Similarly, it is known that there are mathematical theorems which can take arbitrarily long time to prove for a given algorithm… but was Fermat’s Last Theorem one of those, or did it just take a long time because humans suck at math? And Scott posted about a guy who used One Weird Trick to almost conquer Europe, so how hard is it really?
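To make the “hard class, easy instance” point concrete, here’s a brute-force check on a tiny 3-SAT instance (the formula is an arbitrary example of my own; real solvers handle instances with millions of clauses that happen to be easy, while the worst case stays exponential in exactly this way):

```python
from itertools import product

# A 3-SAT formula over variables 1..4, DIMACS-style convention:
# a positive number is the variable, a negative number its negation.
clauses = [(1, -2, 3), (-1, 2, 4), (-3, -4, 2), (1, 3, -4)]

def satisfies(assignment, clauses):
    # assignment maps variable -> bool; a clause is satisfied
    # if at least one of its literals is true.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, n_vars):
    # Exhaustive search over 2^n assignments: fine for tiny n,
    # hopeless in the worst case as n grows.
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if satisfies(assignment, clauses):
            return assignment
    return None

model = brute_force_sat(clauses, 4)
```

This particular instance falls to the very first assignment tried; NP-completeness only says that no known algorithm avoids the exponential blowup on *every* instance.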
http://en.wikipedia.org/wiki/Srizbi_botnet and this isn’t even SENTIENT. An intelligent entity on a networked computer is soon able to be on ALL the networked computers, and in my book that’s more than worth worrying about even before it has robot hands.
The botnet works because it’s inconspicuous and small and only has to do a small number of things; spamming email is incredibly easy to do and takes very little processing power or resources of any kind. Even DDOSes work because they are so simple.
An AI would have to be gargantuan in filesize; it’s not “on” all networked computers. It might be able to infect computers with a trojan to give them commands, but you can’t assume “infinite intelligence! therefore, perfect trojan that is undetectable and 4kb and allows it to hack nuclear missiles”. It has to figure out all that shit. That takes time. Real time. Time in which someone can figure out what is happening and unplug the goddamned thing. Contradicting the “once it has access we are instantly boned” theory.
You wouldn’t end up with everyone’s computer monitors suddenly displaying the cackling face of SHODAN — you would end up with a botnet in the hands of an evil AI. Could it do a lot of damage? Yes. About as much as humans can do with botnets, as a matter of fact.
How the hell are you going to unplug a botnet?
High-altitude electromagnetic pulse is IMO greatly overrated as a threat, but a thousand or so well-placed thermonuclear detonations at a few hundred kilometers altitude would fry enough of the world’s electronics that what is left would not constitute a “net” of any sort.
Solutions involving less gratuitous overkill are numerous, and are left as an exercise for the student. But if it comes down to Humanity vs. Skynet, gratuitous overkill will be on the table.
The point is that the AI is NOT the botnet. The AI is on a computer that is probably purpose-built for it because it’s incredibly huge and incredibly complicated. Hence me saying, “An AI would have to be gargantuan in filesize; it’s not “on” all networked computers.”
The AI may have access to a botnet the way a human has access to a botnet; this does not mean the human becomes the botnet. The nature of what a botnet is means an AI cannot inhabit it, it can use it as a tool the way a human might.
Therefore you don’t unplug the botnet. You unplug the massive, expensive, purpose-built computer containing the AI after you notice that it’s operating a fucking botnet and that is a dealbreaker.
The incredibly huge and complicated computer on which the first AI is executed, will almost certainly be very highly parallelized. I would not rule out an escapist AI figuring out how to emulate that machine across a botnet, with each bot taking the place of one processor and its local memory. A clever AI, and it wouldn’t need to be superhumanly clever, could inhabit a botnet rather than just operating one.
But the latency for packets between bots will be many orders of magnitude higher than the latency between processors on a GPGPU. Probably higher than that for neurons in a human brain. This will make for a very, very slow AI, and yet with enough very atypical traffic that it is unlikely to go unnoticed. One way or another, the plug will get pulled on that botnet.
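Ballpark arithmetic for that latency gap — all three figures below are rough order-of-magnitude assumptions, not measurements:

```python
# Back-of-envelope latencies, in seconds (ballpark assumptions):
latency = {
    "on-chip / GPU memory access": 1e-7,       # ~100 ns
    "neuron (synaptic + conduction)": 5e-3,    # ~5 ms
    "internet round trip between bots": 1e-1,  # ~100 ms, often worse
}

gpu = latency["on-chip / GPU memory access"]
neuron = latency["neuron (synaptic + conduction)"]
botnet = latency["internet round trip between bots"]

# A botnet-hosted mind would run roughly this many times slower than
# the same mind on dedicated hardware, per communication step:
slowdown_vs_gpu = botnet / gpu        # ~a million-fold
slowdown_vs_brain = botnet / neuron   # slower than neurons, even
```

On these assumptions the emulated machine loses about six orders of magnitude per inter-processor exchange, and is an order of magnitude slower than biological neurons.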
One of the arguments I’ve seen around LW is that since humans suck at writing software, our current AIs are incredibly inefficient at using resources in an absolute sense, and that once an AI starts optimizing itself it will be able to dramatically decrease its storage size and memory usage while maintaining effectiveness.
once an AI starts optimizing itself it will be able to dramatically decrease its storage size and memory usage while maintaining effectiveness.
Just like the way we humans can flip open the tops of our skulls and tune up our hardware?
I don’t know; the more I see about how the Perfect Godlike AI will be able to do the divil an’ all, the more I think “Lads, just cut out the middleman; we already have a wide selection of ways to worship Deity of Your Choice”.
Just like the way we humans can flip open the tops of our skulls and tune up our hardware?
You see, that’s precisely the difference. Brains aren’t designed to be easy to modify; they aren’t designed at all. They’re not so much manufactured as grown. We can do some crude stuff with psychopharmacology and neurosurgery, I’d vouch for the former being significantly better than nothing, but really, it’s pretty unsatisfactory. There’s stuff that can be done “in the system” but psychotherapy – again, significantly better than nothing – is pretty unsatisfactory. There’s education – a pretty slow way of transferring ideas from one head to another, often one that doesn’t work nearly as well as we’d like it to.
Computers, OTOH, _are_ manufactured and designed, and designed for modification – if we didn’t have that easy modifiability I wouldn’t be able to send this comment to you now. If we’re talking software, then more-or-less arbitrary modification. Also, we write software that we can more-or-less understand, whereas you don’t even need to know that neurons exist in order to use your brain.
Was reading the new Vox piece on “identity politics” and how the term is generally used to imply that minority issues are a distraction from more substantial, generally important concerns. And I don’t disagree about the usage of the term being largely to invalidate certain issues, but I think that the underlying concept isn’t necessarily valueless as a way of referring to issues that act as shibboleths for a particular in-group and that don’t really have any give-or-take policy-wise or room for a middle-ground stance, particularly regarding social policy (although certainly there are some economic, personality, or infrastructure touchstones). This got me thinking about the “touchstone” issues in Alex’s Toxoplasma of Rage, and I think that identity politics is, in its most appropriate, less rhetorical usage, a pretty reasonable term for issues that have similar characteristics.
Anyone have any thoughts?
Links-
http://www.vox.com/2015/1/29/7945119/all-politics-is-identity-politics
https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/
I recently saw Expensive Placebos Work Better which adds a super interesting new twist to the drug affordability discussion.
Arthur Chu has written a very good article on growing up under the yoke of Christian fundamentalism.
I doubt he wants my sympathy, but he has it.
*desperately resists urge to psychoanalyze*
Not mine — having grown up in the exact opposite environment, and being lucky to have escaped without any untreatable illnesses or drug-related problems… well, if people are going to be fascinated by that stuff anyway, you’ve got to check it somehow to make sure you don’t end up with a status structure that incentivizes doing as much of it as possible.
That said, Reds tend toward paranoia. But when the other half of the country wants to exterminate you and won’t even leave you alone in your own territory, well…
I was interested in what you were saying, but your comment is incomprehensible to me…
This seems like an all-debates-are-bravery-debates problem. Arthur Chu may round off “adulthood” to being “a boozing, fornicating, blasphemous foulmouthed socialist atheist,” but that’s understandable since he was raised in an environment that maybe went too far in the other direction. Meanwhile, someone who spent their first someteen years in the clutches of Unitarian Universalism, say, might find useful some of the structure and moral authority that Christian fundamentalism has in spades.
I’m not sure that I believe him, and I’m not sure that he can blame his upbringing. Here’s the thing: my younger sister is basically the girl Arthur Chu, an atheist SJW who tells hair-raising stories about growing up in an oppressive Christian household. But me and my other two siblings look at some of the things she’s written and are completely baffled, because none of us remember anything like what she describes, and she’s the only one out of four who has gone down this particular path. If you only heard her stories, you would conclude that their upbringing was terribly abusive, while if you corroborate her stories with the other people who grew up in the same household, you’d have to conclude that something else is going on.
There is a narrative template for “survivor of religious abuse” which is found in atheist and SJW circles, and people who enter those circles for any reason tend to reinterpret their histories to fit into that template. So when Chu, my sister, or any other internet atheist starts telling me about how terrible their Christian fundamentalist upbringing was, my immediate reaction is “Maybe that’s true, or maybe you’re just telling that story because it’s the story that atheists like to tell.”
It’s hard to tell, though, because often survivors of all kinds of abuse come up against the same thing; other family members, even siblings, deny what was going on and say that their memories are distorted or even false.
And I don’t think anybody – either the person alleging abuse or the others who deny it – is lying in those cases. Everybody is individual and reacts differently. People have varying sensitivities. For example, when we were little, my mother used to warn us about going out into the fields late at night because “the joeys” would get us.
I have no idea who or what “the joeys” were meant to be, or what they’d do to us when they got us, and my mother certainly didn’t mean it in a fashion to terrify us.
Yet I (who otherwise believed everything I was told by adults and authority figures) laughed this off and never gave it a moment’s credence, while my sister was scared stiff.
Arthur Chu or your sister may genuinely have felt terrified, guilt-ridden, and haunted by what affected others less or what others shrugged off. What Chu said about his peers dealing with the tensions between the world and growing up with this list of rules by becoming hypocrites reminded me of what C.S. Lewis writes in “The Pilgrim’s Regress” about the protagonist John, given a long list of The Landlord’s Rules which he can’t keep, which he will break even if he tries his best to keep them, and being so worried about this that he is brought to see one of the Stewards, who puts on a mask and asks him “Have you broken the rules?” John is too terrified and confused to give any answer, so the Steward slips off the mask, tells him “Just say no, you haven’t, that’s what everyone does” and then starts ‘playing the part’ again.
(It remembered me! Comment bug seems to be fixed! Yay!)
I understand everything you’re saying, but that does make “abuse” a problematic category, no? The same treatment was applied to all of us kids, but it was only abuse for one, which means that “abuse” is no longer something which is a property of the treatment itself, but is something which exists in the child’s mind. As far I know, my sister has never factually misrepresented anything, and I have no reason to disbelieve her self-report of how it affected her. But how is any of this my parents’ fault?
The striking thing to me is how much her current SJW worldview is of a piece with her personality since her early childhood. She was the sort of girl who would respond to “You can’t have candy before dinner” with “You hate me and want me to die!” The slightest inconveniences were interpreted by her as evidence of a vast conspiracy to make her miserable, and any authority figure attempting to maintain order was seen by her as someone bearing her a personal grudge. Intersectional feminism in her case is basically just more of the same.
Deeply ironic in the wake of recent events concerning (proto-)SJWism in the position of authority figure.
I read Hyde & Mertz a few years ago (2010 or so), and here were my problems with their paper, as I wrote them down during my reading. I post them below without editing: I wanted to send an email to Hyde & Mertz, but in the end I didn’t 🙂
<- 1-3) math results and earnings correlate. Some measures of gender inequality include difference in earnings. So we expect the correlation to exist just from knowing how the indices of inequality are created. Same thing for female percentage in technical positions: this reflects math ability.
<- 4) in more free societies, people should do more what they want, not what is expected of them; e.g. the greater variance in Sweden may better reflect natural variance. By visual inspection, only in the Netherlands is the variance ratio equal to 1.0 – could that be an effect of more immigrant children?
<- 5) ceiling effects in Asian samples (due to higher iq of Asians)
<- 6) why the assumption that all the populations have exactly the same innate
gender differences? It could be that in one population there are innate differences which are absent from another. You seem to assume a priori that all populations will have the same characteristics. (“one would expect these differences to be similar among countries regardless of their culture and to remain fairly constant across time” – no: if the population structure changes, e.g. the Netherlands mean in 1970, with virtually no non-western-European immigrants, should differ from the 2050 mean, when we expect immigrants and their descendants to form a large part of the population)
<- 7) why not group the countries by similar culture + similar ethnic composition, e.g. Slavic, Scandinavian, Romance?
<- 8) essentially three data points?
<- 9) why do you assume 4th grades are equivalent across countries, when the starting school age differs between countries?
<- 10) The strongest argument would be a single country, with an unchanged population, showing a smaller gap and more equalised variance over time (e.g. an analysis of Sweden over the last 50 years would do much more to debunk the myths)
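Point 5 is easy to demonstrate by simulation: when a test tops out, censoring at the ceiling compresses the higher-variance group more, so the observed variance ratio shrinks. A sketch with invented population parameters (a “true” male/female variance ratio of about 1.32):

```python
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def observed_variance_ratio(ceiling, n=200_000, seed=1):
    # Hypothetical populations: males ~ N(0, 1.15^2), females ~ N(0, 1.0),
    # so the true variance ratio is 1.15^2 ~= 1.32. Any score above the
    # ceiling is recorded as the ceiling (the test tops out).
    rng = random.Random(seed)
    males = [min(rng.gauss(0, 1.15), ceiling) for _ in range(n)]
    females = [min(rng.gauss(0, 1.0), ceiling) for _ in range(n)]
    return variance(males) / variance(females)

vr_easy_test = observed_variance_ratio(ceiling=10.0)   # effectively no ceiling
vr_capped_test = observed_variance_ratio(ceiling=1.0)  # test tops out early
```

The capped test reports a variance ratio noticeably closer to 1 than the true value, even though the underlying populations are identical in both runs: exactly the bias one would worry about in a high-scoring sample.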
http://skepchick.org/2014/12/why-im-okay-with-doxing/
This is Rebecca Watson explicitly endorsing doxxing. I tried to find threads in feminist/SJ communities about Rebecca’s pro-doxxing stance, but I did not easily find many. However I did find these: http://www.reddit.com/r/AgainstGamerGate/comments/2tarye/the_story_of_gerelt_or_why_rebecca_watsons_why_im/
https://www.reddit.com/r/AgainstGamerGate/comments/2pb64m/gamerghazi_upvoted_a_post_that_linked_to_rebecca/
——
http://www.reddit.com/r/atheismplus/comments/2p4imi/why_im_okay_with_doxing/
—–
http://www.reddit.com/r/GamerGhazi/comments/2p9nyu/why_im_okay_with_doxing_rebecca_watson_argues/
So the AgainstGamerGate subreddit seems mostly to disagree with upholding the anti-doxxing norm, though it’s not universal. On Atheism Plus, also, the top-voted comment was anti-doxxing, though again the norm is not universally held. However, GamerGhazi is, by my estimation, pro-doxxing.
Do people have sources on how other SJ/feminist spaces reacted? Some people are saying that a lot of prominent feminists basically signed off on doxxing. I would like to see an honest assessment of how various feminist communities reacted.
Suggest you take this to Ozy’s. Once it’s there I will delete comment here.
kk sorry delete
you can delete now
I would have a lunch or dinner basically anywhere in San Francisco during that time frame.
I’m currently reading “The Buddha and the Borderline”, and the way the system behaved towards the author, if true, is absolutely appalling.
It seems that for multiple years, she encountered a combination of incompetence and avoidance, in spite of sincerely seeking all the help she could, and doing the best she could given her emotional dysregulation. The people she approached – including some of her own early therapists – appeared to be incapable of dealing with her problems, or not willing to acknowledge them; and many others she was forced to interact with in the course of dealing with said system, such as petty bureaucrats, appeared actively hostile because of her diagnosis.
Additionally, they withheld her diagnosis from her for over a decade. Seriously, WTF?
Is this how most people with ill-understood problems are treated?
(I have always been disgusted and appalled by the thoughtlessness of the current mental health set-up in how patients are handled. From what I know, the entire system is set up on the presumption that everyone is capable of normal functioning when dealing with it. Hospitals make allowances for the fact that their patients have problems that make normal physical functioning impossible, and so have gurneys/stretchers, wheelchairs, and I suspect a number of other such helping devices available; and a lot of thought and care is expended to alleviate and accommodate these impairments of patients’ physical functioning. But the mental healthcare system doesn’t seem to expend even a fraction of this effort on the problems their patients have in interacting with it.
I suspect it’s because of a bias around agency – people with a broken foot aren’t blamed for not being able to walk, because their foot-brokenness is very visible; and they’re treated this way even though other people occasionally injure their feet. But someone with an underdeveloped emotion-regulation system is treated as blameworthy, because the quantitative difference between a brain with emotion-regulation problems so severe that they impair someone’s functioning in spite of their efforts, versus someone who’s merely very emotional, is not one normatively ‘normal’ people can understand. This constitutes the vast majority of people, who are incapable of understanding anything unless it’s shoved, visibly and repeatedly, in their face, and unless they’re repeatedly punished by their peer group if they show anything but empathy (or some other appropriate behaviour). (Please note that I’m writing the last few sentences while quite angry, and they probably need to be corrected for tone to state the same thing in more neutral terms, such as salience bias and whatnot. The effect, though, is the same: reality doesn’t care.)
This is not to denigrate the work the people within it do – because this is a Molochian problem, not an individual one. The behaviour of petty bureaucrats can certainly be improved with incentives, however, as can the methods the patients use to interact with the system, which are amenable to technological solutions.)
A charitable interpretation of the propensity to treat mental disorders as blameworthy while not doing so with bodily ailments is that your brain is you in a more intimate way than your body could ever be. If there is something wrong with your foot, that just means that the biological mini-mecha that hauls your brain around and provides it with a life support system is in need of repair. If there is something wrong with your brain, that means there is something wrong with you.
Charity aside, though, I don’t think this is how normal people think.
Speaking as a petty bureaucrat… and festooning this with caveats, such as I haven’t read the book:
(1) This is her side of the story and how it appeared to her. Ask the people who were interacting with her, and a different view of the matter might emerge. We have difficult clients, some of whom have genuine psychiatric diagnoses. One of them is convinced the government, and we as their agents, are planting cameras in their house to spy on them. Others of our clients, we informally (that is, nothing is written down on the file, it’s all verbal) warn anyone – our workmen, people going out to do inspections, etc. – not to go alone but always take someone with them as backup and independent witness. Else the client will ring up complaining that ‘your guy attacked me/trashed my house’ and so forth.
Other clients are violent, criminal or violent and criminal, and it’s for their own protection to have a partner along.
(2) That being said, the systems we operate under are inflexible. And there is not very damn much a Grade III or IV on the coalface directly dealing with the public can do about that: blame the politicians in government, they’re the ones with the power to change the regulations! Which means, of course, advancing at a glacial pace.
(3) We’re only human; we get burned-out. You might not believe the amount of scamming out there, the flat-out lying, deceit and cheating, and that includes people faking physical and mental ailments for themselves or their kids to bolster their applications and game the system. You get people manipulating their kids to claim that “little Johnny has asthma”. You get people deliberately making their kids who genuinely do have physical/mental needs worse in order to get more money, bigger housing, etc.
You get very cynical very fast about people who show up and claim to have ‘special’ problems. You get tired, angry and hostile when someone pitches a fit in your office, even bursting into tears and claiming you’re against them and they just need help, why can’t they get help, they have a right to help; all of which is true, but when this is person No. 55 who’s been trying to guilt-trip you… well, unfortunately we don’t always respond as sensitively as we should do, though my colleagues have what I consider amazing patience and tact dealing with people who are shouting abuse at them and slamming doors so hard they break them.
(4) And following on from both (2) and (3), we can’t just take your word for it. We’re bound by regulations, we need a consultant’s letter to prove you do have Syndrome/Condition X. If, as you say, her therapists didn’t know how to deal with her and indeed were concealing her diagnosis from her, chances are they wouldn’t give her the necessary documentation. No documentation, no independent proof – no go.
As I said, anyone can come in and claim on an application “I need special consideration because of my physical/mental difficulties”. The amount of people putting in for social housing because in their current housing their kids are getting chest infections and they have to bring them to the doctor – in rainy, cold Ireland, during the colds and flu season – well, everyone gets chest infections. And has to go to the doctor. And is told in most cases to buy a bottle of cough mixture from the chemist.
That does not qualify you for social housing.
(5) We’re only human. I’m repeating this because of the line about reacting with hostility because of the diagnosis. Nobody likes ‘awkward’ customers, partly because they can (understandably) react with anger and demands when they feel they’re getting the runaround, but also partly because there are times you want to help, you’d love to help – but you can’t (see: regulations). And so you feel guilty, and you react defensively, and you convert this into anger which gets projected on the client. It’s not your fault they’re in this situation, it’s their fault for being so difficult to deal with.
It makes perfect sense to me. Blame often works. If someone wants me to walk but I have a broken leg, then yelling at me and calling me worthless won’t get me walking. But if someone wants me to walk but I have depression, then there’s a chance it might.
Kind of depends on what your goal is. If you yell at the depressed person to get them to move, sure they may move this time, but then decide on a dozen future occasions to not bother leaving the house because they don’t have the energy to cope with possibly running into people like you. Which may work for you because then you don’t see them, but doesn’t help them much.
FWIW, this happens with physical problems as well. Way back in my youth, I fought a multiyear battle with a bacterial infection, where I would get better for a while but relapse repeatedly. One of my doctors attempted to convince me that I was depressed instead.
I know somebody else who had idiopathic chronic fatigue, which is even worse. No real visible physical symptoms, and no great treatments available either. They don’t even know the cause (“idiopathic” is really just a fancy word for “I don’t know why this is happening”). When doctors don’t really know what to do, they’ll start throwing out any old explanation, including psychological. It took persistence on the part of my parents and theirs to get past that.
I’m not sure I blame them either; when I’m running tech support for people, my first instinct is also to assume they are doing something wrong, and that does tend to weed out half the cases. Our understanding of medical science is still in its infancy, so there are a lot of cases where doctors just have no idea what is going wrong.
“my first instinct is also to assume they are doing something wrong, and that does tend to weed out half the cases. ”
I’m reminded of Mark Twain’s comment that Christian Scientists know how to cure imaginary diseases, and since half the diseases people suffer from are imaginary that gives them a pretty good cure rate.
But someone with an underdeveloped emotion-regulation system is treated as someone blameworthy, because the quantitative difference between a brain which has emotion-regulation problems so severe that it impairs someone’s functioning in spite of their efforts, versus someone who’s merely very emotional, is not one normatively ‘normal’ people can understand.
It’s difficult to understand because there doesn’t seem to be an easily discoverable qualitative difference. At what point are you being overly emotional because you cannot control it versus because you choose not to control it, and how can *I* tell?
Scott, please do a post where you present the best arguments against utilitarianism. (There are some really good ones out there — certainly enough to have convinced me — but awareness of them is sadly low in rationalist spaces, and I’d like to see that rectified.)
https://slatestarcodex.com/2013/04/08/whose-utilitarianism/
Although, that was a while ago so Scott may have changed his mind.
Gosh, SSC comment threads were so much smaller back then.
That one is pretty unsatisfactory, as far as criticisms of utilitarianism go. For one thing, once you stipulate that “everyone’s intuitions are basically utilitarian” (that joking “[citation needed]” notwithstanding), you’re most of the way there… and there are many other issues that post does not address.
Oh for Pete’s sake. What was the solution to the “cookie monster” bug again…? Do we have one?
(above Anon is me)
If memory serves, the problem is that Scott’s blog, for mysterious reasons, is sending user cookies with a timeout in minutes rather than the days or weeks we’d expect. If it was a WordPress bug we’d expect to see it in other blogs, and AFAIK we don’t, so it’s probably a simple configuration issue — but we need someone with Scott’s access to look into it, and Scott’s not a WordPress developer.
No idea where it came from originally.
I tried something someone said would work. We’ll see if it does.
My favourite critique is the happiness machine, where you sit hooked up to a machine that just alters your brain to be happy even though you just sit there – or, as it’s more commonly known, drugs. Basically, if you oppose putting everyone on one of these, so that their lives are really crap but they are really happy, then you probably can’t totally accept utilitarianism. So long as they don’t have a come-down, having a pathetically short life wouldn’t matter, as long as the overall happiness was greater in magnitude (however you’re measuring it, which is the other problem). Kind of makes you wonder, could happiness in fact be a proxy for something else of importance? 🙂
This is a good criticism of *hedonic* utilitarianism, but that’s just one variant that few people take seriously these days.
The utilitarianism du jour is Peter Singer’s preference utilitarianism. In your thought experiment, would people be able to rationally choose whether to be hooked to the machine?
I think it works as a critique of some other versions too (eudaemonistic, or poorly grounded preference satisfaction). My interpretation is that the point is the separation of perception from reality. Say your preference or eudaemonistic goal is that the people you love are leading good lives: the happiness machine just needs to make you think they’re leading good lives, while in reality they could be in misery or dead. It’s basically the concept people used before wireheading 🙂
However, I think you may be right that it isn’t a critique of versions of preference satisfaction rooted in actual fulfillment of the preference rather than the feeling of satisfaction (i.e. Singer). I can offer my tentative personal criticism of those if you like. I haven’t looked at Singer’s version specifically, but those sorts of versions seem absurd to me because (1) the idea of clashing preferences being meaningfully quantified so they can be balanced is ridiculous, especially when usually our only knowledge of preference is through self-reporting, and (2) it seems to me that negative-sum preferences ought not be considered at all, whereas preference satisfaction requires attributing some weight to them regardless. So suppose a sadist REALLY enjoys other people’s suffering, at least as much as, say, the sufferers’ anti-enjoyment of it. Why should that preference be valid?
Preference is a quite volatile and contextually defined part of a person. I also note that people receiving their preference often does not lead to happiness (sometimes even the opposite), especially if their knowledge of their own psychology is limited.
So in my opinion preference is a problematic thing to ground consequentialism in. Though I haven’t assessed Singer’s version properly and would be open to someone changing my mind (if you’d like to try I’d actually welcome it).
I refuse to get a Tumblr, so I’m going to address this ( http://slatestarscratchpad.tumblr.com/post/109063021426/iamamaiden-last-snowfall-amroyounes ) here…
While I’m generally sympathetic to this point of view, the second picture is a wildly dishonest approach to demonstrating The Better Angels of Our Nature. You don’t get to compare the current decade of war casualties with the worst decade for war casualties in history and declare victory. All the decades before the 1940s also involved fewer people dying in war. This is about as helpful as climate arguments that conveniently start in 1998.
(Note — I agree that violence has declined spectacularly. I just object to blatant statistical cherry-picking to demonstrate it.)
My takeaway was “Look! We overcame the awfulness of the 1940s,” not, “the difference between the 1940s and now itself proves a long-term trend.”
http://hpmor.com/chapter/103
Thank you. I wouldn’t have realized this without that.
After you read that, Benedict has an explanation of a really amazing in-joke that I missed here
Edit: Also, this
Scott, I’d like to know your input on this. When it comes to the politically fraught fields of psychometrics and behavior genetics, and especially IQ and its correlation with race, is it only possible for one to be egalitarian on the basis of religion? I think the science has been settled since Jensen and Rushton, and now with Plomin’s big data GCTA studies, that IQ is definitely correlated with things like race, SES, and a strong argument can be made for causation from those factors. Now, the science of the matter definitely doesn’t support egalitarianism, so how would egalitarians justify their beliefs?
Moral egalitarianism does not depend on actual equality between humans or groups of humans (for some values of “moral egalitarianism”). Even if, for example, all races were equal in IQ distribution (or you were in an isolated community with no visibly separate populations), there would still be plenty of very dumb and very smart people; moral egalitarianism insists that you treat them both as human beings with certain rights.
For moral egalitarians, there are two issues – when is it legitimate to make judgments which consider the ways in which humans are actually unequal, and does that boundary shift when the impacts of those judgments will have a significantly disparate impact on identifiably different subgroups. The first issue always exists, even in “homogeneous” groups.
What do you mean by egalitarianism? The word can have many different meanings. Here’s at least six different ones.
1) All people are as a matter of fact equal in some sense.
2) All people would as a matter of fact be equal if it weren’t for some interfering factors.
3) All people should be treated equally by the law/government.
4) All people should be treated equally by society.
5) All people should have equal opportunities on some dimensions.
6) All people should have equal results on some dimensions.
I didn’t think Plomin had produced evidence of a race-IQ correlation – his work could be dragged into that domain, but only with some loss. He found that genes and IQ were correlated within races, but that doesn’t necessarily mean the racial differences are genetic. Just to give an example, within 1940s-America and 1940s-Japan most height differences were probably genetic, but the height difference between 1940s-America and 1940s-Japan was mostly dietary.
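The height analogy can be made concrete with a toy simulation (all numbers here are invented purely for illustration): two groups share an identical genetic distribution, but one gets a uniform environmental boost standing in for something like diet. Genetics then dominates the variance *within* each group, while the entire gap *between* the groups is environmental.

```python
import random

random.seed(0)

def trait(shared_env_boost):
    genes = random.gauss(0, 10)   # genetic component, same distribution in both groups
    noise = random.gauss(0, 3)    # individual (non-shared) environment
    return genes + noise + shared_env_boost

# Group B gets a uniform 15-point environmental boost (e.g. diet).
group_a = [trait(0) for _ in range(10_000)]
group_b = [trait(15) for _ in range(10_000)]

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)
var_a = sum((x - mean_a) ** 2 for x in group_a) / len(group_a)

# Within group A, genes account for roughly 100/109 of the variance,
# yet the ~15-point between-group gap is entirely environmental.
print(f"within-group variance: {var_a:.0f}")
print(f"between-group gap:     {mean_b - mean_a:.1f}")
```

The point of the sketch is just that high within-group heritability tells you nothing, by itself, about the cause of a between-group difference.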
I go back and forth on the race-IQ thing, but mostly forth. I don’t think it’s quite at the point where one side needs to be labelled denialists.
Would you say Scott, that there’s any data that’ll either fully convince you of the Race and IQ hereditarian position (as I am) or conversely of the environmentalist position? I think it’s not too hard to be completely convinced of the hereditarian position since it’s so ubiquitous on the internet, as part of HBD.
Wouldn’t you also agree that the environmentalists deny a lot of the evidence that hereditarians put forward? Also consider that the opposite basically never happens.
The hereditarians don’t deny any of the environmentalists’ evidence? As far as I can tell they do practically nothing else – unless you have a stronger meaning of “deny” like “deny that it exists without responding to or addressing it”, in which case this is going to start looking a lot like the same kind of value judgments around debunking.
I didn’t explicitly define “deny” as that, but if we run with it, wouldn’t it seem that denialism was what the environmentalists have always done vis-a-vis hereditarian arguments and data?
Doesn’t “denialism” just mean “they disagree with something I’m really really sure about”?
I think it’s more along the lines of “they disagree with me along an axis I feel socially entitled to call them a freak for”. Pro-lifers may be really really sure that fetuses are people, but they don’t call pro-choicers “murder denialists”, because the personhood of fetuses in the broader societal context is controversial and everyone knows it.
Whereas for something like climate change, the pro-warming side has already won by its own lights.
It’s more like they deny something that’s been well established, whether it’s out of politics or whatever.
Doesn’t “denialism” just mean “they disagree with something I’m really really sure about”?
No, that would simply be “disagreeing”. “Denialism” has a raft of implied meaning: you are denying, that is, saying something that is so is not so; you are a bad person, because only bad people refuse to believe the obvious truth; the truth of the matter is so obvious it is not possible to disagree in good faith so the only recourse is acceptance or rejection.
And rejection is denying is denialism. If you look at the context in which it is used, and the tone of the debates, accusing someone of denialism is (a) definitely accusing (b) not using a value-neutral term but one that is derogatory and loaded with bad associations.
A climate denialist is a right-wing homophobic racist sexist classist capitalist who personally clubs baby seals to death, wears a coat made out of puppy skin, and wants the poor to die of easily treatable health conditions rather than permit subsidised health care, all this while he twirls his villain’s moustache as he burns hundred dollar bills to light his cigar 🙂
“Denialism”, when preceded by the word “Holocaust”, refers to people who are actually Nazis or Nazi sympathizers but don’t want to openly argue the Nazi cause.
“Denialism”, when preceded by any other word, usually refers to people whom the speaker wants his audience to believe are the moral and intellectual equivalent of HDers and thus Nazis, but the speaker doesn’t want to openly argue his case.
I do not believe it is possible to rehabilitate the term for use in objective rational debate. Certainly when I hear the non-holocaust version of the term, I suspect that the target might not be amenable to rational debate on the subject but that the speaker almost certainly isn’t.
Scott, you’ve almost got it. “Denialism” means “they heard my argument and yet refused to be convinced by its sterling rational awesomeness.” Which, let me be clear, can be a fair complaint. Occasionally.
Doesn’t the dietary thing mostly apply to people in absolute poverty rather than relative poverty? I’m sure black people in America don’t eat as well as white people, but it seems a bit of a stretch to say that not having the optimal diet (rather than being malnourished) causes such big differences in IQ.
Actually there definitely isn’t good evidence that dietary differences are causative of large IQ diffs when it comes to modern populations. Shared environment effects, or even non-shared effects, are minimal.
I’m worried that this argument might prove too much. In particular, I’ve never seen any evidence of shared environment effects in populations where we know there must have been shared environment effects — for instance, in times and places with incomplete salt iodization efforts, or (maybe?) during the transition away from leaded gasoline. I acknowledge that this may well be due to a lack of data, but it’s distressingly easy to imagine the JayMan of 1935 going “Shared environment effects on IQ are minimal! There aren’t any large-scale interventions to be done!”
Scott doesn’t seem to be making any assertions about where any possible environmental causes of differences in IQ come from. He’s drawing an analogy from the existence of an environmental cause (diet) of differences in height between the US and Japan in the Forties, but the specific environmental cause needn’t be identical.
In that case I’ll let Scott speak to whether he believes there are any environmental factors to IQ diffs between blacks and whites in America.
I wasn’t actually saying it was dietary, I was giving an example of something where genetics explains most within-group variation but environment explains most between-groups variation.
Do you personally believe there might be environmental effects on IQ differences?
Hmm. Presence of lead, absence of iodine? Stuff like that seems obviously environmental, yes?
I’m not in favor of labeling anyone “denialist.” But I think the claim “we know that innate (genetic) IQ does not correlate with race” is indefensible, in part because proving a negative is hard. The strongest claim that is defensible, as far as I can tell, is “there is some evidence that IQ correlates with race, but there are alternative explanations for it, so the conclusion might be false.”
That matters, because one of the ways the issue comes into actual political/ideological argument is through the claim that differences in outcome by race prove discrimination. That claim depends on the assumption that differences in relevant innate traits don’t correlate with race. It isn’t enough to say that we don’t know if they do.
“is it only possible for one to be egalitarian on the basis of religion?”
When I first read this, I thought you meant “treat people of different religions the same”, but apparently you mean “treat people the same because your religion says to”.
“Now, the science of the matter definitely doesn’t support egalitarianism”
Something I’d really like to see disappear: people using the phrase “science supports” for anything other than an empirical proposition.
“so how would egalitarians justify their beliefs?”
So if science doesn’t support something, the only thing you can think of is science? Science and religion are the only bases for one’s actions that one can think of?
The Hyde and Mertz paper is interesting, though the third point “Do females exist who possess profound mathematical talent?” seems kinda strawmanny. Certainly we know that general intelligence can be affected by culture/environment to at least some extent, so it makes sense that relative mental abilities among groups can also change. I’m still not convinced this is *entirely* non-biological.
To me, the important questions are:
1) Is having different proportions of genders in different jobs inherently immoral?
2) Are the causes of these differing proportions immoral?
If the answer to number 1 is “yes,” then it doesn’t really matter what the answer to #2 is. Even if the cause is morally neutral–hormones in utero or after, for example–we would still want to make an effort to equalize the genders in every profession, perhaps through hormone therapy.
If the answer to #1 is “No, gender differences in career makeup are not inherently immoral” (this is my position), question #2 becomes relevant. If the cause is something immoral – individual women/men are forbidden or actively pushed out of a career they would be better suited to – or factually wrong – it’s believed women’s heads will overheat from doing calculus – then we should do everything in our power to try to stop it.
On the other hand, if the reason is itself morally neutral, then I don’t think we need to worry. For example, if playing princess dress-up causes you to become a psychologist when you grow up while playing with Legos causes you to become an engineer (something I don’t believe, but which makes a simple example), then I don’t think we should feel compelled to do anything about it, because playing with Legos is not morally superior to playing princess dress-up, and being an engineer is not morally superior to being a psychologist.
Is this really the issue? The complaints seem to have more to do with compensation and prestige. Being a rockstar isn’t morally superior to being a dustman, but it sure as hell pays more.
There often seem to be elements of just-world thinking in disputes like this one. Compare the (common, but only tangentially gendered) assertion that teachers should be paid as much as $PROFESSIONAL because of $YAY_CHILDREN.
(There is a more sophisticated argument that says teachers should be paid high salaries because that would encourage talented people into the role, which would have positive downstream effects on children’s educational development etc. That is not what I’m talking about.)
“Just world” is commonly used to refer to the idea that one starts with the premise that the world is just, and concludes that what happened must be right. So in the case of teachers, it would conclude from the fact that teachers are paid less that teachers deserve to be paid less.
I may be using “morally” weirdly. I don’t mean whether it’s moral for a particular person to be this or that. I mean more whether it’s moral on a society-scale for any particular job to be distributed any particular away. “Just” is probably a better word.
So for example: I considered being either a librarian or an engineer. I ended up being an engineer, but I still think I would have been perfectly happy making the other choice. I don’t think it would have been somehow a sign of society’s injustice if I had chosen to be a librarian, as long as I wasn’t driven to librarianing through unjust means. This holds even though an engineer gets more money (definitely) and prestige (sorta). Money and prestige are too few variables to demonstrate justice or the lack thereof.
Thought experiment: If civilization was wiped out – all scientific information destroyed, all religion lost – and humans started from scratch, what would science and religion look like the second time around? It’s clear science would come back the same, just by applying the scientific method.
I also think religion would come back with many of the same structural elements and values. The details would vary – e.g. God might be presumed to have taken 10 days to create the universe (instead of 7) – and it’s the focus on knocking down these scriptural strawmen, while failing to appreciate the grander points being made, that irks me about “the common atheist” arguments on these subjects.
I’m not confident science would come back “the same”. Science is not a fixed institution with a fixed philosophy, so it’s not clear what “the same” actually means. Empiricism in some form would probably come back, but would it include, say, Popperian falsificationism? Maybe, maybe not.
I’m even less confident that monotheism would be ascendant after a reboot. I see no good reason to think its current popularity is anything but a consequence of Abrahamic traditions being well-positioned to ride the wave of civilizational development in the Levant. One can, in fact, point to single events, like the conversion of Constantine, that seem to have shaped the world’s religious history in completely contingent ways.
I mean, though, doesn’t this view assume that history is basically a series of random accidents, i.e. a rationalist view of history?
I mean, here’s the question: assume a certain religious belief is in fact “really how the world works”; wouldn’t that inform both A: why things turned out the way they did and B: how they would turn out if we got hit with a tabula rasa state? I.e., monotheism views history (I speak generally) as a series of predestined narrative occurrences. I can’t speak as well regarding the religions of others, however.
This experiment has already been done: a bunch of early humans spread out all over the world and formed cultures complete with science and religion. Then we all got back together and compared notes. Sure, we didn’t particularly enforce cultural isolation during the development stage, but geography did a pretty good job.
If you look at a bunch of ancient cultures, monotheism’s pretty rare. I can happily generate a lot of creation myths more unusual than “God created the world in 7 or 10 or 312 days,” and so did the Greeks, Egyptians, etc… it helps to have more characters. Today, if I believe Wikipedia, a little over half of people are some sort of Abrahamic monotheist. My hypothesis to explain this is that Christian and Muslim empires insisted on their subjects converting, whereas polytheist empires tolerated and/or syncretized worship of local gods. This might be a pattern that would happen again. Or it might not.
It seems plausible that human values are held more deeply than the particular stories we tell, and would survive a reboot. Cross-cultural values analysis is not something I think I can do justice to with a quick google search. But the fact that we have a reasonably functioning global society where wars to exterminate the infidels are the exception rather than the rule suggests that we share at least a lot of the most important values. No country is trying to turn us all into paperclips.
The underlying facts of the universe that science aims to discover would be the same. But the order in which they got discovered, and hence the way we think about them, might be very different. I’d be interested to see how the reboot civilization teaches its kids about atoms.
The experiment gives us similar results for science. The different cultures did not end up with a common understanding of how the physical world works. Greeks would have told you there were 4 elements, Chinese would have said 5. Check out all the different theories of atoms, and that’s not accounting for the fact that many argued against atomism itself.
If by “science” you mean “empiricism, the scientific method, etc” then you really just mean “Western science.” The rest of the world did not discover it, they copied it from the West. At that point the experiment is pretty much over, although it may be interesting to look at what happened to science behind the Iron Curtain, with Lysenkoism and all that jazz.
Yeah, the problem with science before 500-ish years ago is that it’s either very practical and quickly spreads across continents (“those guys know how to smelt iron! we’d better learn how too or they might kill us with swords”) or it’s more of a philosophy / wild guess that may happen to describe the world in retrospect when it later becomes checkable (“stuff is made of indivisible units with the essential character of that stuff, because it seems like it ought to be”). The best parallel scientific development I can think of is agriculture, invented independently in the Middle East and Central America and maybe other places. We take it for granted now, but it’s pretty revolutionary if you’re a hunter-gatherer. Also, not uncorrelated, the calendar.
Writing also should be mentioned as being invented in multiple places.
There is also an important distinction to be drawn between “science,” which is a methodology, and “technology,” which science aims to produce, but can be arrived at through other means. Every advanced society achieved some level of technology, but only the West came up with science.
But even the technologies that we share vary. You mention “agriculture,” but that covers a lot of very different techniques, which is not surprising since people were dealing with different climates and plants. Farming wheat is not like farming rice. Japanese swords are not like European swords. Asian architecture is different from European architecture, even if they do all have to follow many of the same physical laws. Heck, the two continents don’t even use the same eating utensils.
Even math, which is the same everywhere and always, doesn’t seem to have been built out independently by anybody, with the exception of Leibniz and Newton inventing calculus in parallel. Instead, somebody figures something out and everybody else ends up copying it. And even there, for a long time math stopped at algebra for everybody but the West. I can even think of at least one differing approach to math, in the form of the horrible Roman numeral system.
“Pure” monotheism is pretty rare, but semi-monotheism does seem to repeatedly pop up when you have a large group of people writing/philosophizing about religion.
In the ancient Hellenic world, you had Stoicism and Neoplatonism postulating a “prime mover”, in Egypt there was Atenism and the later idea that all gods were really just manifestations of Amun-Ra, in Persia traditional Persian polytheism morphed into Zoroastrianism, and in Hinduism the “highest reality” of Brahman. I think the Confucian “Tian” might be analogous, not sure.
Either way, it seems to be fairly common to have large pantheons that are worshiped popularly, with theologians believing them to be part of some great, singular, divine force or entity.
But as an atheist, these “details” – the trivialities like banning people from eating particular foods, or having particular medical procedures, or taking particular drugs, or working on particular days – are precisely the things that cause me the most trouble! If people want to love their neighbour, hold community gatherings, give food to the hungry and so on, great! If they want to spend hours arguing over the minutiae of their holy books, that’s a bit of a waste but it’s no skin off my nose. The things I most want to stop about religion are – well, on the large scale, I guess I view their entrenched power hierarchy, inherent conservatism, and emphasis on faith over reason as intrinsically bad, but those are rather abstract concerns. The concrete problems of religion are when people push stupid policies for religious reasons.
Are there growing rationality communities/meetups in Wellington, New Zealand?
LessWrong only has one entry for New Zealand, and that’s Christchurch (a completely different island from Wellington). I read so many positive experiences about the meetups, but due to location I’m unable to participate.
Does anyone here have experience with/comments on “voice dialogue” therapy? I can’t find any information about it that seems likely to be reliable and/or unbiased.
My wife is a practicing psychotherapist, knows it, uses it, and is impressed. I can get more details if you wish.
I would appreciate that, yes. I’d like to know the following things:
– A brief overview of the practice and the supposed underlying mechanisms, from a source other than the people who invented it
– Whether or not there has been any scientific study of its efficacy (I can’t find any)
– How it compares to CBT and why one would use it instead of CBT
– Whether or not it is more effective for particular issues or types of patients than others
Thanks!
Imagine you wake up one day on an alternate Earth where everyone collects baseball cards. Baseball cards for successful teams trade for hundreds of dollars; successful players on successful teams, thousands of dollars. If a team does worse than expected, their cards may lose value. But on average, the total value of all baseball cards has been increasing every year.
You go to your financial advisor and she says: “You should invest at least half your savings in baseball cards. Demand keeps going up, and their value is increasing steadily!”
And you say: “– Well, but these baseball cards have no intrinsic value, aside from their rarity. If everyone woke up tomorrow and decided they didn’t like baseball cards any more, these cards would all be worthless. That’s sort of the definition of an investment bubble, isn’t it?”
And your financial advisor says: “Well, technically ‘dollars’ don’t have any intrinsic value either. But you don’t seem to mind getting paid in those…”
This is how I feel about the stock market.
One example is Google stock. Google has three classes of stock: A, B, and C. “Class B” stock represents more than 51% of voting power, but you can’t buy any of it; it’s only held by Google founders, the CEO, et cetera. “Class C” has explicitly no voting power at all. None of the classes of Google stock has ever paid dividends.
Now, then: what is the worth of a Class C share of Google stock? You can’t vote with it, and you won’t get money for holding it. Essentially it’s a baseball card. It’s a baseball card which is valued currently at five hundred dollars per share, with a market cap of 363 billion dollars.
Of course there are other stocks, which do pay dividends. Doing some math, the dividend rate from Microsoft stock seems to be 2.7%. I own some stock in a utility; the utility appears to pay 4.1% dividends. I went and looked up the US Treasury Bond interest rate and, assuming I’m doing my math correctly, it’s 4.8%. So (going by these examples) owning stock seems to be flat-out worse than owning bonds.
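Spelling out the yield arithmetic, with hypothetical per-share numbers (placeholders for illustration, not the actual figures):

```python
# Dividend yield is just annual dividends per share over share price.
# The numbers here are made up for illustration only.
def dividend_yield(annual_dividend, share_price):
    return annual_dividend / share_price

print(f"{dividend_yield(1.24, 46.00):.1%}")  # about 2.7%
```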
I’ve long suspected that the stock market is basically bullshit.
And yet if you talk to any financial advisor they tell you: “Invest in stocks! The price will go up over time!” And, let’s be frank, _they will be correct_. The price of stock does go up over time. It’s a bubble which hasn’t stopped rising.
So I guess here is my question. For the specific case of Google Class C stock, which explicitly has zero voting power and zero dividends: why does this have a monetary value? Why hasn’t the stock market taken one look and said: “No thanks, I don’t want your baseball cards.”? What does the stock market know (or think it knows) that I don’t?
Yes, Google stock is mysterious.
But your complaint about the low dividend rates of stocks is ridiculous. You have pointed out the remaining value: control. If the stock ever becomes too cheap, someone could take control of the company and, for example, change the dividend rate, or liquidate it. This happens all the time.
I think your appeal to fiat is correct: a class C google share is a $500 bill. Just try not to spend it all in one place.
“Intrinsic value”? “Worth”? Supply and demand not good enough for you?
Google could at some point decide to start paying a dividend. That means owning the stock is more like a fungible lottery ticket than a dollar bill.
There could also be a stock buyback at a higher price. There could also be other events I don’t know about because I’m not an expert.
Regarding the statement that the stock market is bullshit, I used to feel similarly but the closer I’ve gotten to the finance world, the more I feel that it’s actually the core of the modern economic miracle.
This. Stock value is closely tied to the market’s perception of a company’s intrinsic value, including present assets, present liabilities, ongoing income, and expected future income. In the future, any corporation will necessarily A: reach the limit of growth in its available markets and return every penny of net income to its stockholders in the form of dividends, B: be bought out by another corporation that wants its assets and/or future income badly enough to pay an appropriate amount of cash to the stockholders, or C: be liquidated and have its remaining assets sold at auction, any net proceeds to be distributed among the stockholders. One way or another, the stockholders collectively get what the company is worth – though individual investors may get more or less depending on when they buy and sell.
In the case of a very stable, very mature company with no future potential for growth, you’d expect the dividend yield to be comparable to a government bond, though not identical due to liquidity issues. For a growing company, the dividend yield will be lower than government bond yields because you’re paying not just for current dividends based on current revenues but for the reasonable expectation that the revenues and dividends will increase in the way that government bonds don’t. For a fast-growing company it is perfectly reasonable for there to be no dividends as the company reinvests all of its earnings to support growth.
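That growth/yield tradeoff is the standard dividend-discount (Gordon growth) model; a sketch with made-up rates:

```python
# Gordon growth model: price = next year's dividend / (discount - growth).
# The 6% discount rate and the growth rates below are invented for
# illustration, not taken from any real company.
def gordon_price(dividend, discount_rate, growth_rate):
    assert discount_rate > growth_rate, "model only valid when r > g"
    return dividend / (discount_rate - growth_rate)

for g in (0.00, 0.03, 0.055):
    price = gordon_price(1.00, 0.06, g)
    print(f"growth {g:.1%}: price ${price:.2f}, current yield {1.00 / price:.1%}")
```

The same $1 dividend supports a much higher price (and hence a much lower current yield) as expected growth rises, which is exactly why a growing company's yield sits below the bond rate.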
And yes, this is economically miraculous, in that it allows concentration of resources across time and space in a way that only kings and princes could manage in earlier eras. If you have a business where Plan A requires modest investment and gives modest profits in five years, whereas Plan B requires more up-front money, takes twenty years to reach fruition, and transforms the world while making mountains of pure wealth, the fact that none of your investors are more than moderately rich and none of them are willing to wait more than five years does not lock you in to Plan A.
To the extent that today’s investors are confident that a mountain of wealth will appear in twenty years, they will be confident that investors fifteen years from now will be willing to offer up maybe 0.7 mountains of wealth for a secure 7%/5year return that they will be around to realize. Thus confident that there will be 0.5 mountains of wealth being bid in ten years, and 0.35 mountains of wealth in five years, so you’ll get plenty of moderately-wealthy short to mid-term investors here and now combining their resources to finance your project to the tune of 0.25 ginormous mountains of up-front capital investment.
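The chain of discounts checks out numerically if "7%/5year" is read as roughly 7% per year compounded, i.e. about a 1.4x factor per five-year step (my reading, not something stated in the comment):

```python
# Present value of 1.0 "mountain of wealth" arriving at year 20,
# discounted at an assumed 7% per year.
rate = 0.07
for years_elapsed in (15, 10, 5, 0):
    wait = 20 - years_elapsed        # years until the mountain appears
    pv = 1.0 / (1 + rate) ** wait    # present value in "mountains"
    print(f"year {years_elapsed}: {pv:.2f} mountains")
```

This prints 0.71, 0.51, 0.36, and 0.26, matching the comment's 0.7, 0.5, 0.35, and 0.25 to within rounding.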
@John Schilling: This is by far my favorite comment in the whole thread: radiantly lucid.
I still don’t understand. If Google’s not paying dividends now, what incentive do they have to do so in the future? And if stockholders don’t vote, what incentive does a company that wants to buy them out have to bribe them?
(also, Google’s market right now is “the internet, cars, space, biology, AI, cell phones, and some laptops”. If they ever reach the limit of growth in that market, I don’t know what to think.)
Not only does Google not pay dividends, but at their IPO they swore never to pay any. There certainly is an expectation that when the founders die, control will fragment and the company may change. But I don’t think that plausibly matches a real calculation of the value of the shares.
I’m fairly certain the only market Google actually makes money on right now is search/ads. The others are all bets for the future, but they’re not certain by any means.
When Google reaches the point where it can’t or won’t grow any more, what else is it going to do with its revenues but deliver the profits to the stockholders in the form of dividends (or stock buybacks or the like)? Stashing the money in a perpetually-growing bank account that can never be touched, benefits nobody. Keeping the money for the private benefit of Google executives, violates the fiduciary responsibility of those executives and will get them sued or imprisoned. The same charter that says “founders get 51% of the voting power”, also says “if the founders take any money out of the kitty for themselves, they have to distribute money to all the other stockholders in proportion to their holdings”.
So, yeah, the class B/C stockholders are voting that at some point Google’s class A stockholders will vote to actually make full[*] use of their vast wealth rather than just point to a giant bank-account balance that they aren’t touching as it grows towards infinity. Seems like a safe bet. And as noted above, you don’t have to actually wait until they decide to cash in, to do so yourself.
As for the limits to Google’s growth:
1. At some point, management overhead will likely make further growth impractical for the same reason that e.g. the Soviet economy didn’t work. Assuming Google isn’t run by idiots, that’s when they start paying dividends rather than buying more stuff for Larry and Sergey to manage.
2. Larry and Sergey aren’t immortal, and when they die it isn’t clear that the new/surviving class A shareholders will be unanimous in pursuing the Eternal Growth of Google over their various private interests.
3. There will also come a point at which the US and EU governments decide that Google is too damn big and invoke antitrust laws to break up the company. If necessary by electing Roosevelt-style (either one) populists.
4. Barring 1-3, and assuming they aren’t overtaken by Apple, Google winds up owning everything. In that case, Google stock becomes functionally equivalent to a title of nobility in the Great Solar Empire of Google. I, for one, welcome our new Google overlords, and that’s Sir Class C Shareholder to you 🙂
[*] They can make partial use of their wealth by selling some of their own class B/C stock on the open market, which also tends to anchor the value in the market.
“And if stockholders don’t vote, what incentive does a company that wants to buy them out have to bribe them?”
Suppose you own a house worth $500,000, and you have a $200,000 mortgage on it. Can you just go off and sell it, keep the $500,000, and tell the bank to screw themselves?
@John Schilling seems mostly right but he has class B and class A swapped.
Class A is the 1x voting stock and trades as GOOGL, class B is the 10x voting stock and does not trade publicly, class C is the 0x voting stock and trades as GOOG.
When a class B share is sold it automatically converts to class A. When a class B shareholder sells a class C share he [are there any class B holders besides the triumvirate?] must also sell one class B share or convert it to class A.
Sorry, my bad. I was using a generic model of how publicly-traded companies with privileged insiders set things up, when I should have, er, googled the specifics for Google.
And Scott – it’s my understanding that “non-voting” shareholders actually do get to vote on “fundamental changes” such as a takeover bid. (And in any event, the entity taking over needs to actually buy the shares in order to do so. The mortgage analogy may be helpful, but I’d prefer to distinguish between debt and equity. ) To be fair, I only know Canadian law and not Delaware law.
A large share of societal wealth is managed by funds whose job is to ride the coat-tails of the economy as a whole as it grows – e.g. CALPERS.
In this context, Google is a particularly potent organization, as they have represented and continue to represent the “threat” of greatly enhancing productivity in many aspects of the economy. It’s important not to underestimate the threat of innovation: even if you held true voting shares in print media, how’s your investment doing?
Like a high profile celebrity that can set the terms of an interview with a journalist, Google is also in a better position than most companies to tell public investors to “take it or leave it” and this is probably what justifies the terms of their equity classes.
Another mitigating factor is that Google’s value lies mainly outside tangible assets that can be liquidated. An attempt by the privileged stock class to “take the money and run” would be lose-lose. So holders of non-voting common stock can be reassured if you assume rational self-interest on the part of the voting members. In fact, eliminating the noise of activist investors vying for takeover could make the company even more valuable.
That’s way off: http://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/Historic-Yield-Data-Visualization.aspx
A stock that pays $x every year will be worth more-or-less the same as one that pays $nx every n years. Now let n tend to infinity.
Only if n is small compared to the inverse of the interest rate.
The money Google doesn’t pay out in dividends it invests in itself (or at least it should). Money invested in businesses usually grows faster than interest, so we would expect the share price to rise with n.
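The “only if n is small” caveat is easy to see with perpetuity present values (the 5% discount rate below is arbitrary, chosen just to illustrate the shape):

```python
# Present value of perpetuities at discount rate r:
#   $x every year is worth x / r;
#   $n*x every n years is worth n*x / ((1+r)^n - 1).
def pv_annual(x, r):
    return x / r

def pv_every_n_years(x, r, n):
    return n * x / ((1 + r) ** n - 1)

r = 0.05  # arbitrary illustrative discount rate
print(f"annual: {pv_annual(1.0, r):.2f}")
for n in (1, 5, 20, 100):
    print(f"n = {n:3d}: {pv_every_n_years(1.0, r, n):.2f}")
```

With r = 5%, the n = 1 case matches x/r = 20, n = 5 is already somewhat less, and by n = 100 the payment stream is nearly worthless: the two stocks are only interchangeable while n times the interest rate stays small.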
When I read the first few paragraphs I thought this was going to be about bitcoin. Thankfully Google’s value is more apparent.
The simple argument is conservation of value. Google’s assets are worth x (and given that most of them are future cash flows, reasonable people can disagree about exactly how much they’re worth, but still, they’re worth something), their liabilities are worth y; if their stock isn’t worth x-y, then where has the value gone? I mean sure, Google have said they’re never going to pay dividends – but if you bury a ton of gold in your back yard and say you’re never going to dig it up, your yard is still worth a lot of money.
Google’s founders do have a fiduciary duty to act in their shareholders’ interests. The courts give directors a lot of latitude to use their own judgement – they can take views like “the short-term profit of selling in China is not worth the long-term reputational damage” or “everyone will want to use Google+, we should spend lots of money on it” – but they have to be making a good-faith effort. And if they ever sell for cash, they’re legally obliged to sell to the highest bidder.
The reasons companies don’t pay dividends these days are mostly shenanigans; in particular tax laws make it (loosely speaking) more efficient for companies to return money to shareholders through buybacks rather than dividends. For people investing for retirement, it’s often more convenient if their stock simply appreciates in value rather than paying an annual dividend that’s fiddly to convert back into more stock (for people who’ve already retired, the opposite is true – but that’s why there’s a market for both kinds of stock). And for tech companies in particular there’s a signalling motive; paying dividends indicates that your company can’t figure out how to spend that money on growth.
But honestly you’re jumping to the hardest case first – imagine trying to explain to an 8th century merchant, to whom wealth is silver bullion, why the plastic card in your wallet let you obtain goods, without first going through the case for bullion coins, non-bullion coins, paper money, cheques and so on. If you first internalize that US government bonds are about as good as cash, McDonalds bonds are about as good as US government bonds, McDonalds shares are about as good as McDonalds bonds, then the value of Google stock will be much less foreign.
Challenge accepted.
“Much of my wealth, and that of the merchants I deal with, is stored in the treasuries of institutions called banks, each as secure and trustworthy as a temple or a great lord’s house. This card is a token signifying my approval, like the ring used to impress the wax seal of a letter; when we wish to do business, the merchant’s device writes a message giving the value I owe and the bank to send it to, I use my card to seal it, and the merchant sends my message on wings of light to my bank.
Because I used my card, my bank knows I approve a transfer of wealth; it follows the instructions in the message and transfers some of my wealth to the merchant’s bank, which then sends him a message acknowledging receipt. In this way we do business without our hands touching silver.”
I imagine fractional reserve banking would be a harder sell.
A good attempt, but unfortunately it’s not true. There’s no actual wealth stored in the treasuries of the banks, at least not in any form the 8th century merchant would recognise. The kind of banking you describe would not be too unfamiliar to an 8th century merchant – it seems to have existed in Ptolemaic Egypt, for example. But in fact, the treasury of the bank just has bits of paper and cheap metals in it.
I’m trying to explain how credit cards work, not the entire basis of a modern economy. He might assume that our banks are exchanging silver instead of bits, but that really doesn’t matter to the question as stated.
Is there any chance we could get a research post about things like school choice for primary and secondary education? I’m vaguely aware that it’s a contentious issue where everyone seems to have their own facts. (“School choice increases student achievement!” “Michelle Rhee cheated on test scores!” “It only helps because they can filter out the problem students!” “Looting!” “Rubber rooms!” “Khan Academy is a scheme to fire all the teachers!” And so on.)
I ask because you’ve made sense out of things like “does AA work?” and “is the criminal justice system racist?”, and even if the answers are full of uncertainty, I feel like I’m less confused after reading them. Thank you for the research you’ve done. I’d be ever so obliged if you’d look into this.
Oh, this. Pretty pretty please?
School choice is hard to measure because there’s no obvious metric. Standardized test scores? Increased chance of being successful at the next level up? Income once you’ve left school? Subjective happiness once you’ve left school? Differential reproductive success? Future charity donations?
If you’re me and your metric of choice is amt_learned, however, school choice is the wrong thing to look at because all schools just kinda suck at this; even if the best school teaches twice as much as the worst, that increase is pretty small. Bullet points:
-Students who take an introductory mechanics course don’t do any better after the class on a test of understanding fundamental concepts like force (as measured by the Force Concept Inventory). This pattern holds in schools from community colleges to Harvard.
-Ditto for economics. The Economic Naturalist: “When students are given tests designed to probe their knowledge of basic economics six months after taking the course, they do not perform significantly better than others who never took an introductory course. This is scandalous.” (h/t Mike2)
–Academically Adrift details academic research that suggests that, even though most colleges claim to teach critical thinking, students do no better on tests of critical thinking after several years of college, citing an implicit contract between instructors and students where the instructor gives an easy course and students give good ratings. (IIRC (p = .6), the senior year of college is helpful, possibly because students have finally cleared enough prerequisites to get into courses that really force careful thinking and possibly because students are in classes for their major and are less interested in getting good grades in a class that doesn’t teach anything.)
-“Studies of children in Brazil, who helped support their families by roaming the streets selling roasted peanuts and coconuts, showed that the children routinely solved complex problems in their heads to calculate a bill or make change. When cognitive scientists presented the children with the very same problem, however, this time with pen and paper, they stumbled. A 12-year-old boy who accurately computed the price of four coconuts at 35 cruzeiros each was later given the problem on paper. Incorrectly using the multiplication method he was taught in school, he came up with the wrong answer.” (The whole article is fantastic and I fully recommend it).
–SA on unschooling: students who have 0 school only wind up one year behind their peers who attend school full-time.
-Duolingo commissioned a third-party study, comparing itself to traditional courses. The traditional university course took about 4 times the time to learn the same material.
-I’ve taken courses at 2 community colleges, 1 mid/high-tier private university, 1 mid/high-tier research public university, and 1–3 Ivy League schools (depending on whether you count OCW or MOOCs). Their courses aren’t tremendously different (my community college professor pulled questions off the 8.01 exams from MIT, which were the easiest ones on the exam).
-If you know about forgetting curves and then look at how basically every school is structured, you’d be surprised if students managed to retain anything.
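The forgetting-curve point can be made concrete with an Ebbinghaus-style exponential decay model (the stability numbers below are invented; the point is the shape, not the exact figures):

```python
import math

# Illustrative forgetting curve: fraction retained decays
# exponentially with a "stability" timescale. Stabilities here
# are made up for illustration.
def retention(days, stability_days):
    return math.exp(-days / stability_days)

print(f"crammed once (stability ~7 days), after a semester: {retention(90, 7):.0%}")
print(f"reviewed a few times (stability ~56 days):          {retention(90, 56):.0%}")
```

Under this toy model, material crammed once before the exam is essentially gone a semester later, while even a few spaced reviews leave a meaningful fraction intact, which is why school schedules that never revisit material look so bad.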
Thank you for the links; those look fascinating.
It looks like the answer is something like “no methods of education really work anyway”, which prompts the question of what the heck we’re paying all those teachers to do, since they cost more than babysitters. (More cynical folks like John Taylor Gatto say that it’s really about teaching people to obey authority and handle boredom, essentially to be good factory workers. I guess we’re pretty okay at that?)
It sounds a bit like the FizzBuzz problem in programming; in short, a significant portion of people with four-year CS degrees can’t solve a just-barely-nontrivial coding puzzle.
I remember reading about Bloom’s 2-Sigma Problem and thinking, well, that explains why homeschoolers do so well; in fact, it’s a shock that they don’t do better.
This seems like a really important problem, one that we should know how to solve by now. Education is touted as the solution to a whole host of troubles, and well-educated people have obviously better outcomes. Is domain-specific knowledge really that hard to teach? Does it matter so little? Are people so bad at learning it?
Hell, how do we, as grownups, know anything? How do people competently do their jobs? And what the hell are we doing with 5.4% of our GDP?
(Addendum: this is infuriating. In the pursuit of the ideal that all children are equally valuable, we spend way, way more on disabled kids than on gifted ones, even when a tiny investment would have huge payoffs later on. Aargh.)
Is Bloom 2-sigma contested? It seems like a very strong result. Even a 1-sigma result would be shocking if it was stably found under a reasonable variety of conditions.
I found Bryan Caplan’s model where education is about signalling, rather than learning, clarified a lot of things. Education is poorly optimized for learning because everyone involved is primarily optimizing for something else. Students want the best credential for the least work. Teachers want to get paid (which, in postsecondary education, means getting good reviews) with the least amount of work. Controllers-of-pursestrings (e.g. taxpayers) spend an awful lot of time arguing about funding allocation and curriculum. Sure, there’s a few students and a few teachers and a few pursestring-controllers who’re interested in learning, but they (a) typically haven’t read enough cognitive psychology and (b) are enough of a minority that effective learning has, to my knowledge, never been effected by schools below the graduate level (I’m not entirely sure why, but everything I’ve seen indicates that effective learning does, in fact, happen at the graduate level. My current best explanation is that most undergraduate degrees amount to signalling “smart! conscientious!”, whereas the graduate degree signals “specialized knowledge/skills you won’t find in anyone without a graduate degree”, meaning that meaningful learning has to happen. I still doubt that schools are currently making that meaningful learning happen as effectively as possible, but at least it’s happening at an appreciable rate.)
This isn’t to say, however, that effective learning can’t happen. Duolingo, for instance, was put together by people who have read the right cog psy. When I first started using it after 7 semesters of (excellent!) traditional language classes it felt like witchcraft because I was actually learning language. Similar results can be obtained in other areas by reading quality textbooks, although you have to do work that they won’t tell you to do (which forces deep processing, and is therefore a feature-not-a-bug), like putting things-you’d-like-to-remember into Anki or trying to prove theorems before reading the book’s proof. There’s a post-in-the-works which I believe describes a best first-approximation-given-current-knowledge-of-cognitive-psychology* of how to learn effectively. Expect it on LW mid- to late-May, after I’ve finished reading to make sure I’m actually giving good advice.
*People are idiosyncratic. Like, if you read the primary literature, there exist people who perform better using massed, rather than spaced practice, so there’s really no way to write an article that says “this is the best way for you to learn”, but it is possible to write something that says “this is the best way for most people to learn, but there’s also a completely reasonable chance you’re going to need to tailor a part or parts to your own brain, but this is the best starting point, insofar as it minimizes expected tailoring”.
I have trouble believing the FizzBuzz thing (I’m not a CS graduate; I taught myself Python for writing ugly scripts). But then, I can’t believe that most doctors will not give the right answer to the classic breast exam screening example.
I find the breast exam much easier to understand – doctors just don’t use probabilities.
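For anyone who hasn’t seen it, the classic screening example (the numbers below are the standard textbook ones, not anything from this thread) is a base-rate problem: even with an impressive-sounding test, a positive result usually still means no cancer, because the disease is rare.

```python
# Standard textbook screening numbers (not from this thread):
prevalence = 0.01        # 1% of women screened have cancer
sensitivity = 0.80       # P(positive | cancer)
false_positive = 0.096   # P(positive | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_cancer_given_positive = prevalence * sensitivity / p_positive
print(f"P(cancer | positive) = {p_cancer_given_positive:.1%}")  # about 7.8%, not ~80%
```

Most doctors asked this question reportedly answer somewhere near 80%, which is the sensitivity, not the posterior probability.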
I am confused as to what is even claimed in the FizzBuzz story. Is it that people can’t do it, or that they can’t do it fast? (The post mentions senior programmers taking 10 minutes.) And what does “can’t do it” mean? Is it that they can’t write syntactically correct programs without an IDE? Is it that they can’t reason about the behavior of the program without running it? Or do they produce complete nonsense?
I had trouble believing it until I conducted a few interviews. A lot of “senior software engineers”, let alone people right out of college, are astonishingly bad at nuts-and-bolts coding.
I suspect that most people who haven’t seen the problem before start off on the wrong tack, then need to backtrack, and aren’t willing to backtrack.
I didn’t believe it, and then I started interviewing people. People who’d held jobs in the industry, people who had worked as software engineers.
My best model of this is that most coding is done in some sort of cargo-cult copy-and-paste style, where you pull something off of StackOverflow without understanding it, or duplicate someone else’s code. Which is horrifying, but it seems to explain the phenomenon reasonably well.
Grendel, I expect that most programming is done by cut-and-paste, but doesn’t it have to be pasted into the right part of the flow control? Maybe most programmers would need to copy a congruence test to do this exercise, but the implication is that this they also screw up the control flow.
I forget the statement of the problem but isn’t the solution literally:
For i=1:100
if mod(i,15) == 0
{
cout << "FizzBuzz" << endl;
}
elseif mod(i,3) == 0
{
cout << "Fizz" << endl;
}
elseif mod(i,5) == 0
{
cout << "Buzz" << endl;
}
else
{
cout << i << endl;
}
This is in the format they say the program cannot easily be written in lol. It is literally written in:
If 1 then A
elseif 2 then B
elseif 3 then c
else D
In "C++ pseudo code" since I am lazy? Or am I forgetting something?
Of course I didn’t read the problem again, but this should take minutes, no? Maybe it takes longer if one is careful and checking their work thoroughly. 10 minutes seems reasonable to me. If someone asks you a question, there is a decent prior that it’s not trivial even if it seems trivial. I could see taking 10 minutes to fully convince yourself you are not crazy and the problem really is easy.
I think the charitable interpretation of people failing at FizzBuzz is that a lot of people get extremely nervous when being put on the spot in an interview. Then again, I’ve never interviewed anyone, so that might just be wishful thinking.
People get nervous during interviews, but FizzBuzz is pretty easy:
for (int i=1..100)
{
print i,” “;
if mod(i,3)==0 print “Fizz”;
if mod(i,5)==0 print “Buzz”;
print “\n”
}
@Anthony: That actually wouldn’t work — the usual FizzBuzz spec specifies that the integer not be printed when you’re printing Fizz or Buzz or both. Which is really the only reason it’s tricky — FizzBuzz is mainly about managing slightly-complicated conditional branching.
Stargirl’s solution looks correct to me.
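For anyone who wants a reference implementation of that spec, here is one way to do it (a Python sketch of my own, since the thread’s snippets are pseudocode anyway): the number is printed only when neither divisibility test fires.

```python
def fizzbuzz(n=100):
    """Return the FizzBuzz output for 1..n as a list of strings."""
    lines = []
    for i in range(1, n + 1):
        # Concatenating the two words handles the divisible-by-15 case
        # without a separate branch; an empty string means "print i".
        word = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        lines.append(word or str(i))
    return lines

print("\n".join(fizzbuzz(15)))
```

This avoids the four-way if/elseif chain entirely, though the chain (testing 15 first, as in Stargirl’s version) is equally correct.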
“I find the breast exam much easier to understand – doctors just don’t use probabilities.”
What the hell kind of doctor doesn’t use probabilities?
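For reference, the screening example mentioned above is usually posed with figures like these (the exact numbers vary by telling, so treat them as illustrative): about 1% of women screened have breast cancer, the test catches about 80% of true cases, and it gives false positives about 9.6% of the time. Bayes’ theorem then gives a surprisingly low posterior:

```python
# Classic mammography base-rate problem; the figures below are the
# commonly quoted illustrative ones, not taken from this thread.
prevalence = 0.01    # P(cancer)
sensitivity = 0.80   # P(positive | cancer)
false_pos = 0.096    # P(positive | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos
posterior = prevalence * sensitivity / p_positive
print(f"P(cancer | positive) = {posterior:.1%}")  # about 7.8%
```

The famous finding is that most physicians asked this question estimate the answer at 70–80%, i.e. they read off the sensitivity and ignore the base rate.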
@Anthony
You didn’t specify a language, so I assume this is pseudocode. Besides the issue Nornagest brought up, there’s also the question of whether the print function includes a carriage return.
Yeah, people really cannot do Fizzbuzz, and this is after *really, really, really* trying to get them to relax and take it easy and just fumble through it. But no, they screw it up.
Interviewing: the best cure for impostor syndrome yet devised.
A few case studies to whet Scott’s appetite:
How one national school voucher program fared (about Chile):
But on the flip side: Lessons on School Choice from Sweden
And directly from the mentioned paper:
However, Sweden (like the other Nordic countries) is less diverse and has much less economic inequality than Chile or the US. I can easily see a similarly-regulated market in the US using the proximity criterion as a very effective proxy for socioeconomic status, so I’m not sure if these exact conditions would be as effective in those markets.
What are some good criticisms of consequentialism separate from utilitarianism?
One criticism of basic consequentialism is that accurately calculating consequences is often impossible in complex human situations with a whole load of moving parts. Also, if a well-intentioned person, one with normal human judgement, performs an action that leads to some unforeseeable negative consequences, it doesn’t seem to ring true to call them immoral (definitions/semantics grr). So critics would suggest that this points to consequences not being the central criterion for morality.
If you accept this, you either have to get more sophisticated (e.g. rule utilitarianism) or consider some or all elements of virtue or deontological ethics. I personally like the idea of mixing in a virtue approach just because it has good consequences. Not what the professor would like, but hey.
One thing you can say for consequentialism, though: there’s a lot less wiggle room. Rules are usually open to interpretation (see sectarianism), and good intentions are really easy to fake or rationalise (even to one’s self): “I didn’t mean to hurt all those people, your honour” or “I followed the rules – it’s not my problem” (ahh, moral loopholes). Virtue is more complex. Consequentialism doesn’t care – basically, no results, no morality. In a world full of pretend morality, this ain’t nothing.
SEP is probably your best starting point if you want to get formal/strict about your arguments.
It is worth bearing in mind that all plausible moral theories recognize a reason to promote welfare. The difference between these theories and consequentialism is that the latter holds that these reasons are exhaustive, whereas on non-consequentialist theories agents are sometimes allowed, and sometimes even required, to do things that aren’t welfare-maximizing. Thus, the difficulties involved in calculating consequences do not pose a problem specific to consequentialism, but rather generalize to all moral theories deserving our attention.
I don’t think that’s entirely the case. If you have a choice between (a) a sadist torturing people with low cognitive capacity and (b) someone enjoying a good book, a utilitarian has to do the hard work of deciding which leads to more pleasure (or satisfaction of preferences) whereas the rest of us can instantly say you should choose (b). Perhaps this is something of a fringe case, though.
> Also, if a well-intentioned person, one with normal human judgement, performs an action that leads to some unforeseeable negative consequences, it doesn’t seem to ring true to call them immoral (definitions/semantics grr).
I think this stems from a bit of confusion. Consequentialism is about judging what is a good action, not who is morally praiseworthy. Under a consequentialist system, we should consider intent when deciding whether to praise or damn someone, since that’s actually a different criterion, and good intentions seem to lead to good actions more reliably – or at least to a willingness to change bad actions.
I think the biggest problem is just that humans are such big rationalizers and are operating on such incomplete information, that it may be better not to run an entirely Consequentialist society in practice.
At the Solstice meetup, I mentioned that I was going to talk to the Brain Preservation Foundation (Hanson-endorsed!) to see whether they’re still alive, and what they’ve been up to. I also promised to relay any findings here. Well, I had a chance to talk to the current head of the BPF tonight, and it was pretty fascinating. They are very much alive, and appear to be doing pretty high leverage work with limited resources.
The vast majority of people dismiss cryonics as snake oil. A smaller number of people believe in cryonics enough to pay for it. The number of people, on the other hand, who want to look closely at “what exactly does preservation do to the tissue, and can we do something about minimizing that?” is minuscule, and it’s at that question that BPF is trying to drive research.
The two main approaches they’re considering are cryonics and plastination. We all know about cryonics; what’s less known is that as currently practiced, it results in simply enormous amounts of tissue damage. There’s a difficult tradeoff: if you infuse cryoprotectant more quickly, you get osmotic damage; if you do it slower, you are letting the brain sit too long at room temperature. There are potential approaches that could improve it.
The other is plastination, which actually originated in the 70s as a procedure to preserve materials for electron microscopy. The good news is that since it’s a procedure that’s preparing tissues to be examined by the most powerful microscopes ever, it’s pretty good at preserving the tissue architecture – you can get neurons and even synaptic vesicles all preserved in good condition. The bad news is that electron microscope samples are *tiny* and that until now nobody has put much time and energy into trying to scale up the protocol to something the size of a mammalian brain.
The upshot is that they’ve been working with a few labs to fund research into scaling up plastination and improving cryonics protocols, and are actually at a point where they may be adjudicating the prize for preserving a mouse brain pretty soon. Their hope is to be able to then scale this up to a primate brain (which at least one lab is interested in doing, if the mouse brain works out).
Long story short, they are funding and influencing some pretty important research on a fairly small (<$100k) budget. If you’re interested in the prospects for brain preservation – and certainly if you are getting cryonics, or are considering getting cryonics – it would be worth considering donating to this organization. This research will likely have a significant effect on the quality of brain preservation technologies that will be available by the end of our lifetimes. (And keep an eye out for the announcement of the first prize, likely before the end of 2015.)
Oh, cool! I donated when they were having their last drive; I’m glad to hear that they’re still around and doing work. Please do post an update here when they announce prizes in case Scott misses it.
Wouldn’t the same apply to plastination? I don’t see how you could rapidly replace all water with a non-water solvent in something as big as a human brain without causing osmotic damage. Now, if you could find a way to slice it into tiny chunks quickly and cleanly…
Good luck to them, though. Even if we don’t get immortality out of it, it’s still valuable research.
a relevant new technique
“If I had the power, I would add an 11th commandment to the already existing 10: “You should never be a bystander”.”
—Roman Kent, Holocaust survivor, in his speech on the 70th anniversary of the liberation of Auschwitz-Birkenau. January 27, 2015.
I saw this on tumblr and I wanted to discuss it here. I don’t see how this advice is workable at all if you have non-standard morals. I personally think people in jail for peacefully using or selling drugs are kidnapping victims. This makes me a bystander. What am I supposed to do? Maybe I am supposed to dedicate myself to providing alibis for people accused of selling drugs? Am I supposed to try my best to improve security on the Tor network? (Security is not my specialty, but I know a good amount about computers and could probably learn enough to contribute to Tor security in a reasonable amount of time.)
Worse this advice imo would cause a huge amount of problems. Lots of people consider abortion to be murder. Are they supposed to bomb abortion clinics? In theory maybe this quote allows one to take “political action.” But unless one is very wealthy or famous the EV of trying to change politics is very, very low.*
Also right now I am donating 10% of my income to Givingwhatwecan/Givewell recommended charities. This is thousands of dollars. Many ways for me to “not be a bystander” are likely to put me in jail. This means I won’t be contributing much to charity.
*One might object that the odds of changing a law go to zero with the population size. But the benefits grow with the population size so it evens out. However this is probably false. Under any reasonable assumption the odds of an agent changing a law go to zero exponentially with the population size, while the benefits scale only linearly.
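The footnote’s exponential-decay claim can be illustrated with a toy model (mine, not the commenter’s): suppose n other voters each independently support the law with probability p, and your vote matters only on an exact tie. Working in log space to avoid underflow:

```python
import math

def log_tie_prob(n, p):
    """Log-probability of an exact tie among n (even) voters,
    each voting yes independently with probability p."""
    k = n // 2
    log_comb = math.lgamma(n + 1) - 2 * math.lgamma(k + 1)  # log C(n, n/2)
    return log_comb + k * math.log(p) + k * math.log(1 - p)

for n in (100, 10_000, 1_000_000):
    print(n, round(log_tie_prob(n, 0.51), 1))
```

With p = 0.51 the log-probability drops from about −2.6 at n = 100 to about −207 at n = 1,000,000, swamping the linear growth in beneficiaries. One caveat: at exactly p = 0.5 the tie probability decays only like 1/√n, so the footnote’s “any reasonable assumption” really means assuming the electorate isn’t perfectly balanced.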
It’s not supposed to work with non-standard morals. The whole point of the ten commandments is to set a moral standard.
It has to work with somewhat non-standard moralities. The test case, after all, is someone living in Nazi Germany who thinks the Holocaust isn’t OK. The straight meaning of “you should never be a bystander” is that you should be an Oskar Schindler or Raoul Wallenberg, i.e. you should defy the law, at great personal cost to yourself, to do the right thing. If the circumstances are right, you should even be a Komorowski and engage in massive violent resistance.
On the one hand, this is probably good advice as against the Holocaust. On the other hand, I don’t want the losing side in every moral or political disagreement to start a civil war.
It is hardly obvious that if somehow forced out of being a bystander, most people wouldn’t have become True Believing Nazis, rather than fight against them.
Is it really that surprising if a Holocaust survivor (who may well have no more skill or training in ethics or philosophy than the rest of us) gives advice that’s specific to preventing the Holocaust, and actually bad advice overall? Everyone fights the last war.
It seems to me that Tarrou is in rather blatant violation of the commenting policy, and is decreasing the quality of the blog. I have reported posts repeatedly, but have not received any response. Why is Tarrou being given a pass?
What comments has Tarrou made in this thread that are so bad? Specifically?
I don’t know what he’s said in other threads, but you posted this request in this thread, so I looked at the handful he posted here. I don’t even see anything objectionable, much less banworthy.
I disagreed with RCF once, so apparently he’s been reporting me ever since. I say now what I said then. I stand on the merits of my opinions and statements. I don’t think I’ve violated any of the policy, but it’s obviously not my call. That is for Scott to decide. I just share my opinions, arguments and experiences to the best of my admittedly limited ability.
Well, if you’ve been repeatedly reported by the same person, and Scott has not done anything about it, perhaps we have our answer on how he feels about the merits of those reports.
I’m referring to his behavior in the Influenza of Evil thread (this is, after all, an open thread). There were two subthreads where he violated the commenting rules.
In one, there was a discussion of neo-Nazi groups and how there are other groups that are doing better than them at not triggering society’s evil-detecting heuristics. I said that Liberty University is an example of such a group. Tarrou then accused me of saying that Liberty University is more evil than Nazis. I pointed out that this was incorrect both in that the comparison was with neo-Nazis, not Nazis, and I did not say that Liberty University is more evil that neo-Nazis, I said that it is better at not being recognized as evil. Despite this correction, Tarrou persisted in blatantly misrepresenting my position. In addition, he did so in an extremely rude manner, openly mocking me, rather than actually presenting any substantive argument.
In another subthread, there was discussion of the aphorism “comedy should punch up, not down”. Someone complained about Christian conservatism being mocked, and I said that Christian conservatism consists to a large degree of punching. Tarrou then accused me (in a cowardly oblique manner) of being a bigot, basing this on me supposedly saying that all conservative Christians are violent. Never mind that I directed my comments at Christian conservatism, and thus was saying that the ideology consists of punching, not that every person in the group engages in punching, and that the word “punching” was obviously metaphorical. He also tried to make a bullshit analogy to someone saying that being black consists of punching, as if criticizing an ideology is somehow similar to criticizing a race.
Tarrou clearly has no respect for the truth. All he cares about is how he can twist what other people say to suit his needs.
To be fair, while it wasn’t explicit, presenting Liberty University, whatever it may be, as an example of a group that gets away with being evil better than neo-Nazis carries the implication that they are, at least, comparably evil. Otherwise it’s a meaningless example.
Anyway, you might be taking this a bit too seriously.
I actually haven’t checked reports in the last month or so. I’ll review.
I’ve gone through a few weeks of reports. Arthur Stanton and Noman are now permabanned. I haven’t gotten to the Tarrou reports yet, but I’ll deal with them later.
What’s to stop the people you ban from choosing a new name, plugging in phony email address, and continuing their misdeeds?
It seems to me that requiring commenters to set up a login that confirms an email address would prevent that problem.
So far people haven’t done that. If they do, I can ban IPs. If they care enough to get around the IP ban, they probably also care enough to set up a fake email.
Okay, I’ve looked at it. It seems to have started with you saying something like “conservative Christians are obsessed with punching down” and then he got mad and yelled at you. This seems excusable if he identifies with them, and he doesn’t seem to have a history of bad behavior otherwise, and he sometimes makes positive contributions. I’ll watch him closely from now on but I don’t think I’m going to take any action right now except suggest that everyone watch themselves a little more.
There have been several times that the software has refused to post a reply (and deleted its content), saying that I’m posting too quickly. This seems to be based on the delay between my previous post and clicking the “reply” button again, rather than the delay between my previous post and clicking the “Post Comment” button. Is my analysis correct?
You all seem to know a lot about meta-stuff, so here’s a question: is there a name for, or discussion of, the phenomenon of liking a lot of the secondary and tertiary aspects of a movement/ideology/philosophy/institution (e.g. certain maverick-y individuals identified with the group but not at its center, certain secondary ideas, their methods but not their conclusions, certain socially appealing aspects of the tribe, etc.) while disliking the core individuals, principles, and ideas?
For instance, I’ve met a couple people who strongly identify with certain idiosyncratic Republicans and conservatives, actually agree with the core of party on a handful of issues, but hate the core of the Republican party and conservative movement on substantive grounds.
I think we all know which movement in particular you’re thinking of 😉
Identifying oneself as a “maverick” Democrat or Republican is much more common than following the whole party line, particularly in America, which places a high cultural value on individuality. (No, really, we do — compared to the rest of the world.)
So the answer to your question of what to call an idiosyncratic Republican is: “a Republican”.
I’d say that an “idiosyncratic Republican” could very well be “a libertarian”.
About people feeling bad for having lower IQs.
Is the problem one of feeling like a “bad person”?
Let’s talk about musical talent. Some people are much better at music than others. Some of those people will tend to “value musical talent” in two ways: they value it as part of their own personality in the same way as other parts of their personality, and they like hanging out with people with similar musical talent, because they can have conversations about music that interest them and can play music together. Additionally, musical talent can be instrumentally useful – famous musicians get fame and fortune. So what of the people like myself with less musical talent?
There is no shame in not having any. I’m honestly not sure how people even come to see it that way. What does it mean to be a “bad person”? All people’s preferences matter. You shouldn’t feel bad just because you are not good at something, or even at a lot of things. You are still a person, and that is all that matters.
The same applies to IQ.
I used to “look down” on less intelligent people. There are some people who do this and it is not OK.
(Also, side note- While IQ is important for math and research and such, I think rationality is probably a lot more important than raw intelligence when evaluating information from other sources than yourself)
[Deleted: Likely neither kind nor necessary, and possibly not true.]
OK, now I wish I knew what this said. Was it self-deleted or did Scott delete it?
I would assume self-edited; in my experience Scott deletes posts outright rather than editing them like that.
I sometimes worry about being bad at things and especially about losing knowledge and mental abilities I once had. The feeling that comes with this worry is not a feeling that I’m being somehow immoral by not doing the work to keep my abilities – it’s a feeling that I’m losing value, that I’m worth less. Not worthless, but worth less than I would be if I had more knowledge and ability.
Which is a pretty awful/ableist/unfounded belief to hold, and which isn’t something that I actually believe, but try as I might I cannot get rid of the alief. (It applies to other people a bit too – I sometimes feel extra-sad if I hear of the death of a stranger who was highly learned or skilled, compared to how I’d feel about a random stranger’s death, even though I think this is wrong.)
It’s not that surprising to me that I feel this way given my very skills-focused upbringing and the fact that I was generally a rather high academic achiever and therefore found it easy to attach my sense of worth to being good at brain things.
I think it makes sense to not want to lose intelligence- as I said, people value things like that as part of their personality, and not wanting to lose part of yourself makes sense. I certainly feel that way myself. Also, while less common, it also makes sense to want to gain intelligence- though it still does not make you of less value because of your relative lack of such.
Now that I think about it though, I should have been more clear about the difference between something like “intelligence dysphoria”, and feeling “bad and worthless” as Scott put it. The former is a preference and makes sense, and is actually probably pretty common around here given our transhumanist leanings.
Well, when I get self-conscious about my failings in mental ability I do feel like I am not worth as much as I’d like to be. I never feel like my worth is zero, but I do sometimes feel like compared to the worth of some of my super-capable friends it is almost negligible. So I think it’s similar to the phenomenon you’re describing and trying to understand.
So, random question:
Does anyone know whether donating blood is an effective way to give charitably?
If you’re an eligible male, then blood donation (at least) every 12–24 months is certainly effective, since—assuming you’re optimizing for longevity—you’re losing that blood anyway.
Near as I can tell, giving blood saves lives and (for most people) doesn’t funge against other things an effective altruist would do to save lives, so yes? (This is no more than naive reasoning, so take it with the appropriate grain of salt.)
I’m not an EA, but blood seems pretty non-fungible with other resources, until we start paying assembly lines of poor O- people to donate for us. And you can be much more confident of it going to help somebody in need than you can with most money you donate.
Slightly off the topic, and there’s probably an obvious answer* that I’m disregarding, but why is it that plasma centers pay money for plasma, but everyone expects people to donate whole blood for free?
*Possible obvious answers: The Red Cross has trained us that blood is for donating (seems true) and gets more than enough blood that way (likely not true); plasma donation takes more time, and no one would donate without recompense?
“if a donor receives monetary payment for a blood donation, all products collected. . . must be labeled with the “paid donor” classification statement”
Source
and
“Most source plasma in the U.S. is from paid donors. In contrast to whole blood collections, these units, under Federal Regulations, are not labeled as collected from paid donors. In part, this is because all source plasma is used to further manufacturing use only.”
Another Source
Along with intimations that “paid donor” blood is seen as less suitable for transfusion for various conflict-of-interest reasons.
Also
“In general, the U.S. collects just what is needed to maintain adequacy in the blood supply. Although blood shortages are still seen in the summertime and holidays, they generally do not reach serious proportions, and the public generally responds very generously to appeals for additional blood donors.”
Same Source As Immediately Above
Yes, they get less blood if they pay. It converts a Good Deed into a kinda unpleasant way to get money, apparently.
Interesting. One of the reasons I asked is because it occurred to me the other day that I would be tremendously more likely to give blood if I were compensated for the time/unpleasantness. The Red Cross may currently pay for blood in Warm Fuzzies, but I haven’t any doubt that they sell it for cash.
Does anyone here take pacifism seriously? Like, “the United States should get rid of its military right now”-level pacifism?
I don’t *think* I take it seriously. But here are some reasons why maybe I should, in increasing order of convincing-ness (to me):
-When you ask a population what it thinks of a war that just finished, you’ll be biased in favor of thinking the war was better than it was, because you won’t be asking the people who would have had the strongest negative opinion of the war, i.e. the people who died in the war.
-Wars have two effects: they cause lots of death and destruction, and then they cause a change in power structures, sometimes? The first effect is really easy to understand: it’s really bad. The second effect is really complicated and nobody really understands it and it gets super politicized and it’s not even clear if the change in power structures was good. So we have a bad effect and an unclear effect. Win for not fighting wars?
-When two societies disagree over something (say, capitalism vs communism, or slavery vs not-slavery), sometimes they fight a war to decide who’s right. In other words, they have a giant contest to see which society is better at murdering as many people as possible, and then whichever society does better at that is the one they listen to. Isn’t there something wrong with that?
-When you fight a war, you’re saying, “I think that my side is enough more ethical than your side that it’s worth causing tons of death and destruction for me to stop your side.” But the other side is saying the exact same thing about you! Shouldn’t you worry that your side is actually the bad guys? For example, when we fight ISIS, we place a very low probability (roughly 0%) on ISIS actually being the good guys and us being the bad guys. This is intuitive to me! ISIS beheads people and is terrible! But probably ISIS places a very low probability (roughly 0%) on them being the bad guys and us being the good guys. That’s intuitive to them! Isn’t there something wrong with this? Shouldn’t you and your enemy agree to agree? http://www.overcomingbias.com/2006/12/agreeing_to_agr.html
I agree that you should almost never actually use the army. But the agreeing to agree only works if the consequences of war are sufficiently negative, for ISIS (who perhaps put a different value on the lives of “martyrs”) as well as you. So you need to keep enough of an army that ISIS can’t just annex all your territory with low effort.
Consider that the Swiss have an army. Also, Gandhi was on board with self-defence. Non-violence is about respect for your enemy. Self-defence is about respect for one’s self. They’re totally compatible ideas.
“Contest” makes it sound voluntary. But violence is inherently nonconsensual. Even if two parties “consent” to a duel, neither party “consents” to get stomped.
History listens to the victors because they are the ones still intact. When someone attacks you, your options are fight, flight, or lose (die). There’s no option to ignore it. You can’t just shrug it off as if someone had asked you to play chutes-and-ladders (pun intended).
Hopefully, people already have considered that they’re the bad guys. However – just as you never hear about war from dead soldiers, you never hear about war from counterfactual soldiers (e.g. vets from the formic wars). So you also have to recognize that going to war usually means negotiations have already failed. The disagreements where negotiations succeeded were mere squabbles that you label in hindsight as “not a war”.
E.g. I recently overheard in a conversation about Ferguson that cops are trained to always “shoot to kill”. “But why can’t cops just shoot to maim?” “Because shit went to hell long before the cop has to resort to their firearm.”
Yeah, I agree. To be clear, I wasn’t trying to claim that you have a third option besides fighting or losing. I think, though, that to take pacifism seriously would be to say, “Well, if I’m attacked, then it would be unethical for me to choose to fight, so I will choose to lose.” And maybe that’s wrong! But, when you choose to fight instead of lose, you are making a choice that is denying that your attacker is rational. You’re deciding, not just that you think your attacker is wrong, but also that you think they are so wrong that it’s worth fighting a war over it.
Like, imagine that both sides of every war decided to instead use a random but harmless process that provided the same odds of either side winning to decide the war, instead of actually fighting the war. Wouldn’t that be so much better? And the only reason it doesn’t happen is that everyone is so sure that their side is the right one that they wouldn’t honor the outcome of the pretend-war as canon. Shouldn’t we all become a little bit less sure of ourselves?
Consider the birthday paradox. Apply it to violence. Between any two particular people, it’s unlikely they’ll disagree on a topic enough to fight. But given a planet with 10 billion people, it’s practically impossible for the planet to not contain some pair of individuals who’ll come to blows. Given fights rarely break out in my immediate vicinity, I’d say we’re doing pretty well. Far from perfect (I wouldn’t want to live say… next to the bloods, or in Afghanistan), but surprisingly well nonetheless.
The fact that a defender can’t opt out of aggression is not just a bug, but a feature. If one party loses a coin flip, what’s to stop the loser (or winner) from attacking anyway? The U.S. signed a great deal of treaties with the Native Americans, but that didn’t stop the U.S. from clearing them out. Those treaties worked out pretty well for Uncle Sam, didn’t they?
I agree that in a perfect world, wars would end on a coin flip. But I can’t realistically imagine Austria avenging King Franz over a game of beyblade. On the other hand, I can imagine America’s antebellum South attempting secession and posterity saying “hey, that coin flip wasn’t so bad. We lost, but it doesn’t hurt to try it again.” Fast forward 100 years, and we have the 50 independently sovereign states of america. Maybe not so bad from the individual’s point of view, but definitely bad for the Union’s interests as a memetic organism.
To channel Robin Hanson: I’m saying with the antebellum example that maybe the cost of war deters people from engaging in future wars (unless they’re super committed). Sparta was pretty intimidating, after all.
What exactly are we referring to here? If you incapacitate an assailant by knocking the wind out of the solar plexus and run, that sounds pretty rational and self-defence-y. If you break his jaw, pin him down, and snap his knees, that sounds like you might be denying the assailant moral agency (or… something). But that doesn’t sound self defence-y. I think we’re confusing self-defence with retaliation.
Do we agree that defending one’s land from the Roman Empire is justified? Do we agree that defending one’s land from Genghis Khan is justified? “Rationality” doesn’t mean allowing the devil incarnate to salt your fields and rape your women.
“I’m saying with the antebellum example that maybe the cost of war deters people from engaging in future wars (unless they’re super committed). Sparta was pretty intimidating, after all.”
This, exactly.
Dispute resolution means the dispute ends. For that to happen, there has to be a reason the losing party can’t ignore the verdict or immediately start a new dispute. The path of least resistance is to make disputation expensive for all participants. Logically, the bigger the dispute, the more expensive it is to raise it. The most valuable thing we have is our lives, and so that is what the largest disputes cost us.
…And that is why war is not ending any time soon.
But, when you choose to fight instead of lose, you are making a choice that is denying that your attacker is rational. You’re deciding, not just that you think your attacker is wrong, but also that you think they are so wrong that it’s worth fighting a war over it.
I think you are conflating wrong/irrational with wrong/unethical. Example: I think you have lots of useful stuff. I want it. I offer no moral claim to why I should have it; I acknowledge being wrong/unethical here. I believe based on sufficient evidence that you as a pacifist are highly unlikely to offer effective resistance if I come and take it, and that I can effectively evade the police afterwards. I show up with a baseball bat to start breaking your bones until you give me your stuff. In what way am I wrong/irrational?
Unbeknownst to me, you had a crisis of conscience last night, are no longer a pacifist, and have a shiny new handgun in your pocket. We’re about to have a fight. In what way is your shooting me a denial of my rationality?
And if it is, so what? Possibly you can identify flaws in the analysis by which I concluded that you are a pacifist who won’t fight back when I come to break his bones and take his stuff. Possibly I am in fact being irrational. Lots of people are. Does this mean that rational people ought to let irrational people break their bones and take their stuff, rather than “deny their rationality”?
W/re the “random but harmless process that provided the same odds of either side winning to decide the war”, there are some fundamental problems with that.
1. The most important single factor in winning a war, is the extent to which each side is willing to suffer the actual horrors of war in pursuit of victory. There is not in the real world anything remotely resembling an objective way to measure this short of actually waging war.
2. It is in the interest of both sides to misrepresent or conceal material facts that will contribute to the outcome. Most obviously, by representing themselves as fanatics who will fight to the very death, but also in the purely material aspects – deploying weapons that they secretly know won’t work for their deterrent potential, keeping other weapons secret to surprise the enemy at a decisive moment, etc. With so many hidden variables, you’re not going to be able to fairly assess the probable outcomes, much less convince both sides that you have done so.
3. If someone thinks that a cause is worth fighting a war for, with everything that implies, then they very likely think that the cause will be worth breaking a treaty for. Your forces and mine are equal, victory is a 50-50 proposition, and I at least think that what is at stake is worth fighting a real war at those odds. Of course I’m going to agree to your proposal to settle things with a coin toss. And if I lose the coin toss, I’m going to have my army launch a surprise attack while you are celebrating your peaceful “victory”.
We can handwave these away by positing an incorruptible, omniscient, omnipotent entity that assesses the probabilities of victory and defeat in war and punishes defection from the random-number settlements. But utopian propositions that depend on incorruptible, omniscient, omnipotent enforcers strike me as uninteresting – in the real world, the closest thing we have is the United States Government.
… incorruptible in the sense that their highest imperative is to feed their military-industrial complex, no matter the human cost, and no force on earth could ever hope to corrupt that imperative?
The argument “doesn’t ISIS think their war is just?” sounds like an isolated demand for rigor. ISIS does all sorts of things that we do but in bad ways because of their choice of when to do them or on what target. Nobody ever says “sure, you think you are the good guys when you give sentences to criminals, but ISIS thinks they’re the good guys when they give sentences to criminals too”, even though ISIS sentences criminals in all sorts of ways we consider bad.
Just wanted to note that Multiheaded has a very good chance to have her claim approved, eventually, as a convention refugee, based on the eligibility guidelines at http://www.cic.gc.ca/english/refugees/inside/apply-who.asp :
> membership in a particular social group, such as women or people of a particular sexual orientation.
especially if she has a documented case of being persecuted, and likely even without it.
http://egale.ca/all/faq-immigration/
This and other sources paint a pretty hopeful picture to me. Tl;dr – when queer people are terrorized into staying closeted, that in itself constitutes persecution for the purposes of refugee status. Also, Canada’s overall approval rate for asylum requests was around 40% in 2011; higher than that for queer migrants.
Also, a friend of a friend of a friend has offered their assistance as a social worker focusing on queer refugees; they and/or someone else should be able to help me with the legal stuff.
I was thinking again about the PETA water bills thing, and found that it actually did make some sense.
Consider the following timeless platonic contract:
That sounds like a contract a lot of us would ratify. I see no terrible consequences of universalizing it. I also note that we have a word for violating this contract: “exploitation”.
Alternative, consider this contract:
I think most of us would not ratify that contract. The prospect of being unable to ever deliberately acquire more resources seems a bit daunting.
From this perspective, we can see PETA as in violation of a platonic contract, whereas the people who simply refused to pay a random stranger’s water bill are not.
I’m not entirely convinced of contractualism, but it does seem to put this intuition on a much sounder footing.
Wait, I would TOTALLY ratify that second contract, if it was the case that incentives weren’t affected. Like, if suddenly tomorrow God decreed that our resources would be redistributed perfectly fairly, then I would absolutely ratify that happening from behind the veil of ignorance.
And note that paying a random stranger’s water bill, once, doesn’t change incentives.
I feel like I might have cheated somewhere here? But I feel like these two contracts have not yet successfully been distinguished.
You do not think this would cause problems economically? I am not of the view that tax increases will destroy the economy. But that pledge is truly extreme and destroys 100% of the incentives for a lot of people to produce more economically.
That’s why I was trying to make this contract not affect incentives (and arguing that paying a random stranger’s water bill also doesn’t affect incentives). But I agree, something feels wrong.
Oh I should specify–when God redistributes all the world’s resources, it’s one-time thing. Otherwise, obviously, incentives get affected.
Do you pay random strangers water bills until you can barely afford your own? If not, why not?
I don’t, but only because I’m a bad person–from behind the veil of ignorance, I’d be in favor of living in a society where everyone did this.
The more people do this, the more money will be wasted by people who have no incentive to conserve or pay their own water bills.
The second contract doesn’t say anything about God doing the redistribution, only that one should try to the best of one’s ability to redistribute as if one were behind the veil of ignorance. Which means that the actual distribution of resources would be completely awful and far far worse than under a market system (even if we assume that everyone does his honest best and is perfectly unselfish) because no one person has more than a tiny fraction of the knowledge required for figuring out optimal (or even non-terrible) resource allocation. The only reason we can allocate resources somewhat efficiently in the real world is because we have market prices which tell us approximately how scarce goods are relative to one another. Under the second contract, all of that goes out the window.
The vale of ignorance is presumably somewhere in the Welsh mountains where we perform these experiments?
Oops.
Perhaps it will be a location in the next Dungeons and Discourses campaign.
Yes. It’s blissful.
Question for atheist or agnostic deontologists (if there are any):
Say you’re the single passenger on a spaceship that crashes on a planet that can support life, but has no animals, no people, and no trace of people. You’ll never be rescued, you’ll never see another person or sentient creature, and no human or alien will ever reach the planet (assume any explanation you wish).
In this situation, where it’s impossible for anyone to ever be hurt by anything you do, is it possible, even in principle, to do something you’d consider unethical or immoral?
I suppose I qualify by some measure, although I don’t subscribe to a pure deontological approach (I’m on the deontological side of what I consider a sliding scale of deontology versus consequentialism (with the note that this is meant as a visualisation, not as a statement that all of moral theory can be crammed into one line)), and I keep a fake religion (i.e. one I know is false, because I created it, and it’s absurd) to inoculate the parts of me longing for something spiritual against actual religions.
I hope this is okay, I’m going to tweak your thought experiment slightly to make it more palatable to me – ‘support life’ is being really fuzzy in my brain, so I’m replacing it with a scenario wherein I’ll die of dehydration once my supplies run out, because there is nothing edible there (and nothing that looks like life). That still gives me ample time to act on this planet and keeps the part of me that would otherwise be too busy wondering what I might e.g. be doing with my life if without companionship of any form quiet enough to think about this.
I should probably say that my first instinct is to say ‘No, I don’t think so’. But as much as I trust my instincts in real life, that’s not very helpful as a response.
Thinking about my principles out loud: I can’t murder anyone. I can’t harm anyone. I can’t vandalise anyone’s property (probably; this would give me pause, I admit, even if I had proof that the planet I was on was not claimed by anything sentient, but that’s not an ethical issue, it’s a psychological one).
If your last point (‘no human or alien will ever reach the planet’) did not rule out observation of the planet (however detailed), I could, conceivably, do something fraudulent. This is vague to me, I admit, since I probably wouldn’t be around to reap the fruits of the fraud, even assuming for the moment I had some grand plan (I don’t) and ‘reaping fruits’ was actually possible. I would also probably again have the psychological problem that I might be missing something and the fact that there will be no other visitors to the planet does not hold as strongly as I thought.
I see I’m still struggling with the thought experiment. I think my biggest problem in the scenario you describe would be psychological (assuming for a moment that the changes I made to the scenario are not preoccupying me, which they would be – interestingly enough I find this easier to ignore than the issues of prolonged life on this planet) – I can’t, right at this moment, conceive of a scenario in which I would feel absolutely certain that I could do nothing unethical.
From a God’s eye view (pun not intended) on the scenario, I believe the answer is ‘no’. But I can’t vouch for that, given that the scenario feels foreign to me, and I thus could easily be missing something important.
(This thread has so far been more useful to me than to you, I assume. I apologise. I really liked your question and hope someone with a sturdier/broader imagination comes along. I’d also be very interested in the answer, even though I’m not usually that interested in extreme ethical edge cases like these, since I think everyone makes practical exceptions to their ethical doctrine when the circumstances change so drastically (I should add I generally consider that a positive thing, but since I put ethics in a blender anyway, I guess that’s to be expected…).)
Yes. Imagine I make a machine that will torture me for the next thousand years. IMO, strapping myself into that machine is immoral. The “me” five thousand years from now is sufficiently distinct from the current “me” that I consider them different ethical agents. So it would not be OK to torture “future me.”
*I am explicitly not allowing myself to create AI or emulate myself, as this makes the question too easy.
Ah, bless people with better imaginations than I have! (Granted, this is not difficult.)
You bring up an excellent point. I’m not sure if I agree quite so strongly, given I am not sure I would draw that line between the ethical agents, but that’s not an objection on principle. It’s not even an objection. It’s a genuine ‘I’m not sure, let me mull about that’. So you’ve given me something to think about.
(I kind of ruled out that sort of longevity in my version, of course, but that was out of inability to comprehend it, not out of objection to the premise in theory. I didn’t think it would be important. For that matter, I hadn’t thought that it would be important to assume I have any special abilities, but this is absolutely important when considering these things in raw theory like this. I think your example shows that very nicely.)
I’m not the OP, of course, but thanks for your post. 🙂
Yes. Isn’t that what it means to be deontologistic? It would still be wrong for me to do icky rule-violating things, like the standard example of having sex with a dead chicken.
What rule would it violate to fuck a dead chicken? Consent? I don’t really see how that applies. As per universalization, I see no problem with someone fucking my dead corpse, just that it’s slightly disgusting as seen from my preference set, but not immoral or unethical. It’s basically masturbation with a morbid novelty toy.
Some kind of generic sexual purity rule I guess. It’s a very common moral intuition across the overwhelming majority of human cultures (western college-educated liberal culture being a notable exception, and even there it’s not an enormously unusual intuition), but how it’s specifically codified varies. It’s explicitly not about any harm it would cause to anyone or anything – it’s just a rule, that’s what deontology means.
“Yes. Isn’t that what it means to be deontologistic?”
No.
I don’t particularly identify as “deontologist” but I’d say yes; it would be immoral for me to do things that would harm me in the long term (out of laziness), e.g. not brush my teeth, not take care of my diet, not maintain my life support system, not exercise, etc. – the kind of things I would blame myself afterwards for not having done.
Alicorn (as of some years ago) considers it immoral to set fire to a tree just because you’re mildly bored, so that would be an example.
AUGH – how could I miss the coffee shop post!
*
Alfred Wegener enters a Starbucks.
The barista says “Welcome to Starbucks!”
“Funny, a barista in Brazil said the same thing.”
*
Henry Ford enters Starbucks and orders a venti.
“What kind did you want? Plain, decaf, mocha…”
“Any kind, as long as it’s black.”
*
Barack Obama enters Starbucks.
He orders a three and a half dollar tall
espresso and pays with a twenty dollar bill.
The barista gives him back hope and change.
*
Ernst Zermelo is patiently waiting in line at Starbucks.
Richard Dedekind cuts in front and orders a small coffee.
Ernst interrupts: “Dick, that was not well-ordered.”
“I’ve determined that a tall is too large –
you don’t have a choice in the matter.”
*
Tom Swifty enters Starbucks and asks for
the regular. “Uhhh… you mean a hazelnut?”
“With cream and sugar,” Tom said sweetly.
A haiku.
I wish Scott would start a podcast about epistemic hygiene called “Knowing What’s Not, With Doctor Scott”.
Hi, new poster here. I have a question about moral philosophy. It seems to me that straight utilitarianism/consequentialism is pretty hard to follow (e.g. I wouldn’t kill myself if it saved 5 random people). I am toying with a moral philosophy that is a bit more selfish, where I assign random people’s lives a utility value of 1 and apply multipliers like 1000 to myself and 1001 to my wife and children, and something like 500 to close relatives, 200 to coworkers, and 1.1 to fellow Americans, etc. It seems to me that as long as people kept their multipliers less than some small number and greater than or equal to 1, then we could still achieve a nice outcome (nice from a traditional consequentialist perspective). Can someone point me to somewhere where this idea has been previously explored? Or possibly just poke huge holes in it? Thanks!
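The multiplier scheme described above is just a weighted sum, and it can be written down directly. A minimal Python sketch, where the weight table and the function name `weighted_utility` are illustrative choices built from the commenter’s own numbers, not any standard formulation:

```python
# Illustrative weights taken from the comment above; "stranger" is the baseline of 1.
WEIGHTS = {
    "self": 1000,
    "spouse_or_child": 1001,
    "close_relative": 500,
    "coworker": 200,
    "fellow_citizen": 1.1,
    "stranger": 1,
}

def weighted_utility(outcome):
    """Sum each affected person's welfare change, scaled by relationship weight.

    `outcome` is a list of (relationship, welfare_change) pairs.
    """
    return sum(WEIGHTS[rel] * delta for rel, delta in outcome)

# The trade the commenter rejects: die (-1 to self) to save five strangers (+1 each).
trade = [("self", -1)] + [("stranger", +1)] * 5
print(weighted_utility(trade))  # -995: rejected under these weights
```

Under plain utilitarianism (all weights 1) the same trade scores +4 and is required; with the 1000x self-weight it scores -995 and is refused, which is exactly the divergence the comment is pointing at.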
This is my criticism of utilitarianism, so you are not alone in this.
The topic was discussed in (the comments of) this LW post.
Western familist ethics and Confucianism both have kind of a “weight near kindred more heavily” vibe.
Western familist ethics and Confucianism both have kind of a “weight near kindred more heavily” vibe.
Someone else mentioned C. S. Lewis supporting that also, which he certainly did. It’s one of the eight major headers iirc in the long set of passages he quotes in his Appendix to
The Aboliltion of Man, from Confucianism and most other cultures as well. Not just blood kinship, but neighbors, friends — in degree as they are closest.
Hm, Evolution would probably endorse that as well. 😉
I think Lewis (and me, FWIW) endorsing a kind of “concentric circles” model of (at least some portion of one’s) charity highlights a tension that’s common, oddly enough, both to Christians and utilitarians. Both Christ in the New Testament and modern atheistic ethicists are constantly harping on how we need to universalize our charity, and to view the Samaritan, or the guy in Africa, as our “neighbor.” OTOH, it’s not an ideal that we have evolved to live out very well, and it’s an ideal that if implemented too perfectly would have us all focusing on Stakhanovite labor for abstract justice and welfare rather than building livable families and communities: we’d spend all our time saving lives, and no time enriching them. Or something.
As a Christian, I tend to think about this as a tension between the Old Testament/City of Man and the New Testament/City of God. Not that OT=City of Man, or NT=City of God. Not at all. Just that the OT, like the City of Man, is centered on relatively achievable (although still pretty difficult to follow–look at Paul’s complaints about the Law) levels of natural flourishing (in the kind of thick Aristotelian virtue ethics community described by, e.g., MacIntyre), whereas the City of God/NT are both focused on a kind of “supernatural” flourishing, in grace and agape, typified by St. Francis, say, and in his atheistic way by a figure like Peter Singer: someone radically committed to universal, nonbiased benevolence.
On this account, pacifism is what the NT/City of God obviously demands, but something like Augustine’s just war theory is an attempt to find a least bad way for us OT/City of Man types to live given that it’s neither realistic nor desirable for everyone to become a Franciscan tomorrow. The whole framework here is notably non-Kantian. I’m saying there’s a higher, saintly path, and then a sort of “muggle” path, and that both are somehow valid choices, even though one is higher–just as the OT is really part of the Bible, and nature has its own good (like Aristotelian virtue) independent of supernature (like the grace that allows some people to be radically saintly). Thus, it’s anti-Kantian in that, say, just war/pacifism aren’t universalizable maxims, but instead, if you will, maxims that differ based on something akin to “vocation” in the Catholic called to marry vs. called to be a priest kind of sense.
As a Christian, I hear Christ saying (1) The kind of universal benevolence I am constantly preaching is, in God’s view, the minimal standard of being a decent person (2) You will notice that my Father’s standard is impossible/impractical, involving things like never lusting and giving away all wealth, which means you will fail (3) fear not, He forgives you anyway (4) keep trying to asymptotically approach this impossible standard with my help: the effort will improve you (5) NB that your asymptotic efforts will never “earn” you the status of good, because to be good is to be perfect, and that’s not going to happen for you in this life (6) but I’m perfect, so just tell the heavenly bouncer you know me, and that’ll get you in.
Things like Scott’s recent posts on tithing as being a Schelling point for having done “enough” universalist good to be a member of the community are I think an attempt to deal with similar issues. (Obviously, tithing is a Schelling point Christians have used in exactly this context, too.)
ETA: Scott’s sardonic “Newtonian Ethics” post has much the same tone of demanding we universalize our charity that I read in the jeremiads of the OT Prophets (who aren’t really “OT” in the sense I was talking about above, come to think of it) and in Christ’s bitter denunciations of our greed, smugness, and complacency. On one hand, it is right to be disgusted at these things. OTOH, we are weak, and if we tried to do 100% charity we’d burn out. It’s sort of like St. Paul’s line that it’s better to be celibate, but some are weak and should marry, which I take to be indirectly about this married vs. monkish, muggles vs. saints issue. It’s better to be Gandhi, radically seeking universal justice all day, but if all you can muster is giving 10% and otherwise just being some normal schmoe, well, that’s something.
ETA2: Old Western books about Hinduism often describe a theory of life stages: student/householder/monk/mendicant. I have no idea if that was ever really a thing or just some orientalist misreading, but it’s an interesting ideal: spend the first half of your life perpetuating the naturally good community, and the second half seeking the more radical, “supernatural” goods that transcend the goods evolution has primed us to seek, (e.g., loving your enemies transcends just loving your family and friends).
ETA3: Another way to think of it. Tit-for-tat is, IIRC, a remarkably stable game theoretic strategy. And “natural,” evolutionary good of the sort that keeps society going is of that sort. Just war theory, favoring kindred, etc. To be a pure game theoretic always-cooperator all the time is nobler (IMHO), but not all are called to it all the time. If all were, society would collapse when the first game theoretic cheater showed up, which wouldn’t take long. But because such a “dovish” strategy isn’t evolutionarily stable, there is something “supernatural,” if you will, about those who find a way to live like that.
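The tit-for-tat claim above is easy to check with a toy iterated prisoner’s dilemma. This is a sketch using the standard 3/0/5/1 payoff values; the strategy and function names are mine, and nothing in the code comes from the original comment:

```python
# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return opp_history[-1] if opp_history else "C"

def always_cooperate(opp_history):
    return "C"

def always_defect(opp_history):
    return "D"

def play(a, b, rounds=100):
    """Return total payoffs for strategies a and b over `rounds` rounds."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(always_cooperate, always_defect))  # (0, 500): the dove is fully exploited
print(play(tit_for_tat, always_defect))       # (99, 104): TFT loses only round one
print(play(tit_for_tat, tit_for_tat))         # (300, 300): mutual cooperation
```

Over 100 rounds the unconditional cooperator hands the defector everything, while tit-for-tat concedes only the first round and does as well as possible with its own kind, which is the sense in which the always-cooperate strategy is not evolutionarily stable.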
Aiui, the NT exchange about the Good Samaritan went something like this.
Q: “In ‘love thy neighbor’, who is the neighbor?”
A (tl;dr): “His neighbor was whoever he happened to meet that day.” [ Even though the man he helped belonged to a far country and considered his people as enemies. ]
Among the Hindus and Jainas I know about, “student/householder/monk/mendicant” is definitely a thing, and the sequence is important. Householder — raising a family — is perhaps the main duty for most people in their current incarnation. Student is preparation for householder, and monk and mendicant are for when you’ve completed householder duty, however long that takes. Some people have the years and resources to go on to monk, few to go on to mendicant. In Buddhist and yoga traditions, each set of moral duties prepares you for the next stage (sort of like high school / master’s degree / Ph.D.). So there is no competition in essence. Everyone should cultivate feelings of unconditional goodwill toward all the world, but in the householder’s use of practical resources and time, the practical needs (and reasonable desires) of one’s own closest come first. Following concentric circles on the ground — while on the same day in meditation, being one with the sun that shines equally on all.
To use utilitarian terms for what is probably a utilitarian heresy — look at how many units of happiness (utilons) can be created with a windfall of $X. Sending your own child to a much-desired music camp will bring happiness to her and your whole family, some of it permanent and growing. Dropping the $X in a bucket of hundreds of people in misery won’t do much for any of them, but will create negative utilons for your daughter — thus, a net negative.
Pure utilitarianism is for machines; humans act mostly based on virtue ethics, which may be shaped by other normative ethical considerations, including utilitarianism. If someone tells you they are a pure utilitarian, they are either lying or not in touch with their own decision-making process.
In these discussions, it is key to specify whether ‘utilitarianism’ is meant to refer to a decision procedure or a standard of rightness. Considered in the latter sense, utilitarianism is the theory that you ought to act so as to produce the most expected welfare. But it would be incorrect to conclude from this that a utilitarian agent should always decide by explicitly calculating which of the acts available to him or her are welfare-maximizing. Whether I should adopt some particular decision procedure is itself a question which is answered by considering its effects on the welfare of sentient beings. And it may well be the case that, to maximize welfare, people should act by trying to follow some non-utilitarian moral theory. This does nothing to discredit utilitarianism as a standard of rightness.
By failing to distinguish the two relevant senses of ‘utilitarianism’ noted above, critics engage in a kind of strategic equivocation: first they establish that we shouldn’t follow a utilitarian decision procedure, and then they conclude that utilitarianism as a standard of rightness is false. Clearly, the inference is invalid, because the terms are used equivocally.
Can someone succinctly explain the concept of countersignaling to me? I’m not sure I completely understand it. Is it just signaling that you’re not a certain kind of person?
No.
Countersignaling is sending the message “I’m so awesome I don’t need to demonstrate how awesome I am.”
Imagine walking through a prison yard, and seeing many people glare at you, show off their muscles, or generally be slightly threatening to show that they’re dangerous. And then there’s one guy casually reading a book who doesn’t even look up. Do you immediately conclude that he must be the most dangerous of the lot? That’s successful countersignaling.
Your response would have been a lot funnier with just the first word. But thanks for really answering, I was a bit unclear myself.
Someone who counter-signals X will often simultaneously signal X in subtler ways. Because the X they exude is so obscenely amped up, signals of X mediated by subtler channels will pierce through the ambient noise without effort. Whereas a person who is X (but not intensely X) will struggle to signal their X – even with the advantage of a louder channel. E.g. you can tell a high-brow hipster from a middle-class casual because a hipster has low status clothes. A hobo also has low status clothes, but may smell less pleasant than the hipster and not own a macbook.
Showing effort is low status. But not as low as no status.
Related is the handicap principle. IIRC, the plumage of a male peacock’s tail is a signal of health (because beauty/symmetry is low entropy). But it also handicaps his ability to escape predators. If you removed a particular peacock’s plumage (and given that the adult male had survived up to this point), he’d be able to easily outrun predators (and competition). But how else would he be able to signal his health to the ladies? He doesn’t need more points in the speed tree, so he takes his extra skill points and puts them in the health-signalling tree.
Additionally, this pressures rivals who want the benefits of plumage, but are slightly slower and will certainly be eaten if handicapped. Higher handicap = more ladies = more kids with higher handicap. This is probably why human teenagers show off dangerous, challenging stunts for apparently little gain.
Peacock is a min-maxer. Peacock plays to win.
Consider Scott’s cellular automata of status. From the middle of the hierarchy, it’s pretty cool to distinguish oneself from the bottom because it turns a homogenous status-scape into a continuum. From the top of the hierarchy, the status hierarchy is advantageous because it turns a continuum into three discrete levels. As you go up the hierarchy, it becomes more and more advantageous to distinguish oneself with more precise signals. Of course, the bottom dwellers don’t like it. But if they had the resources to change things, they’d have sooner acquired the resources to move up the ladder.
Berkeley meetup yes please. But not that weekend, if we can; I’m going to Tahoe then.
Mike Blume and I are also in the “going to Tahoe that weekend” contingent.
“Mike Blume” sounds familiar….. by any chance is your trip being organized by a Kevin?
Is this just a coincidence, or is there some kind of large-scale Tahoe exodus then that makes this a bad weekend even beyond normal considerations of “every weekend will have one or two people with a problem”?
Pretty sure it’s just a coincidence. And I know a Kevin who does snowthings, but I don’t think he’s coming on this trip.
Maybe Tahoe is the center of a nascent Evil Scott resistance group (see previous comment thread).
This question is mostly for Scott and for entertainment, but anyone can answer if they feel like it:
Someone (with your blessing) is writing a novel and basing the main antagonist on you and discussing how to design and write about this character with you. You have all the freedoms to help shape it. (You can even pick super powers if you like, but this is mostly about personality traits.) You both want something that is as close to your current personality as possible – that means either you’re designing circumstances that would bring out your darker side quite strongly, or you’re making changes to traits that you think are keeping you in line as a decent human being. You definitely want something that feels internally consistent.
How would you tweak the world or your personality to make a villainous you?
(Disclaimer: I promise I am not writing a novel that has an evil Scott.)
I’ve often thought that if anything could turn me into a supervillain, it would be a Willow Rosenberg-style sudden telepathic awareness of the total amount of suffering in the world, combined with the power to end that suffering by killing everyone.
Too bad you’re not writing that novel. Scott could be The Steelman (not to be confused with Stalin or the Man of Steel), who steelmans all the world’s worst ideologies until they bring down civilization.
Hero: Why won’t you stop promoting these things?
Steelman: I just like impartially considering ideas.
Hero: But you’re literally destroying civilization!
Steelman: As a consequentialist, that troubles me. But I don’t want to be rude to proponents of ideas that destroy civilization. Should I be rude? What’s your argument for that? Let me help you flesh it out….
That would certainly be a villain to behold!
…darn, you’re making that book-writing scenario awfully tempting. But alas, I don’t have Scott’s blessing, and I am very sure I wouldn’t get it. 😉 (Not to mention I’d have to come up with a plot! Though I suppose that could just be thrown into the next open thread as a separate Completely Innocent question, muhahaha.)
How evil do I have to be? If I just have to do something horribly wrong, without being an incarnation of malevolence, then just tweak some of my obsessions and amp them up. For example, make me a bit more of a bioconservative than I really am, and put me in a world where everyone and his brother are uploading and transforming the world into a place where mere biological humans can barely survive, if at all. Then I will become a genocidal monster (in deed, but one who sees himself as saving humanity). Actually, depending on the details of “uploading”, you don’t have to tweak my views at all, and y’all (the LW majority) would be the genocidal monsters. But I digress. Unless you like the Tragedy approach; then I’m not digressing.
If I have to be more Voldemort-like, then a spiral of rage seems like the most psychologically plausible way. Have me get in a fight with someone and lose. Then escalate and lose again. Meanwhile my nemesis becomes powerful and important, maybe he’s elected President or something. For maximum plausibility make my nemesis a basically nice guy who does one kinda asshole-y thing, albeit negligently, to start the first fight.
“How evil do I have to be?”
Finally someone is asking the important questions. 😉
More serious response: My roleplay group actually has a villain a little like the one you describe in your non-Voldemort scenario. She’s actually still a transhumanist, but doesn’t believe that consciousness can reside in algorithms alone, and is slowly turning the uploaded people’s lives into hell because she understandably does not like the legal consequences of non-persons getting the rights of persons. She wants to find ways to prove that they’re not persons, which involves a little sabotage.
I may be on the other end of that argument, but nonetheless I find her quite sympathetic. I mean, outside of fiction, I would certainly think otherwise, but thankfully that’s not important.
Your scenario offers a much stronger case, of course, and I’d accordingly be much more sympathetic to you as a villain than I am to the aforementioned character. Clearly you need to write a book about this scenario now, so I can be a fan.
tl;dr: I really like your answer.
Can anyone give me a good example of a Ramsey sentence?
Kitchens are hard environments and they form incredibly strong characters.
After cursory investigation, I think I can say I’m pretty sure that that is nothing remotely like a Ramsey sentence.
http://en.wikipedia.org/wiki/Gordon_Ramsay
“If you think this has a happy ending, you haven’t been paying attention.”
…sorry, that’s more of a Ramsay sentence.
Scott, your lit survey on depression is really wonderful. Could you do the same for anxiety, both chronic-low-grade and acute? Anxiety is the most anti-inductive thing I can think of, which makes it a real challenge to handle as a rationalist.
I actually had half of this done when I found a couple extra papers with more stuff that meant I had to rewrite it, and I never finished doing so. I’ll get around to it sometime.
For now, http://www.hindawi.com/journals/ecam/2012/809653/ is pretty good.
Also, I have had *remarkable* success treating borderline people’s anxiety with inositol; I don’t know if it works in the general case but it might be worth a shot if you don’t have bipolar disorder (which it can make worse sometimes)
Scott, I’ve been taking 4-ish grams of inositol daily (mainly for non-mental-health reasons) and it just makes me SO F***ING HAPPY. I do about 4 g of myo-inositol plus 90-180 mg of d-chiro-inositol. I don’t have a borderline diagnosis.
Beta blockers are underappreciated as anxiety meds. They aren’t strong drugs, but they do literally nothing except slow your heart rate. No mood or cognitive distortions. (Obviously, since they do slow your heart rate, check contraindications if anything is wrong with your heart.)
They can also cause erectile dysfunction.
Beta blockers do a lot of things other than slowing your heart rate.
http://en.wikipedia.org/wiki/Beta_blocker#Adverse_effects
I finally wrote something on my wordpress blog. I can’t really ask for gentleness, since it is itself kind of an extensive critique of certain elements of the LW-sphere, but uh… I think it will probably be interesting to people here.
Ah, but you said it yourself, critiques of the LW-sphere are important. I enjoyed reading it.
You write well.
Thank you! 🙂
If the nets cure starvation too, that just means we need to send more nets.
That’s when you actually go read the article and see the long-term consequences for fish populations (and of course for the humans living off this resource), the possibility of contamination by the insecticides used to treat the nets (need to read more on this; the guidelines for how to wash the nets are too contradictory, and fear of ‘chemicals’ is often overstated), or the violence problems created.
This seems like a conclusion which is far too strong. It’s by no means obvious that the difference in variance between males and females should pass the threshold of significance in all populations, especially between populations with different population genetic parameters and different ancestry. One thing that worries me is that the Scandinavian countries have much lower genetic diversity than most countries — if the mean and variance of mathematical ability are under genetic control, you might expect the difference in variance between males and females to be more difficult to detect specifically in those populations.
Overall, the paper treats both biological and social models in a rather naive fashion. The membership of international math olympiad teams is not based solely on mathematical ability, and not immune from social effects that are unrelated to true mathematical ability. It speaks more to whether there are women of high ability (which the variance hypothesis does not deny) than to whether biological or social influences drive the higher male variance in mathematical ability observed across the majority of countries.
whether there are women of high ability (which the variance hypothesis does not deny)
I don’t see this spelled out in most discussions of math variance. Maybe something like this would make it clear.
Imagine a large sample of male basketball players, and imagine there’s some correlation between each member’s height and his scores. Then line them up according to height. 7’6″-7’7″ is the tallest group. Then 7’4″-7’5″, and so on down. Say the median is 5’9″-5’10”, and that is the largest group, and it has 50% blacks and 50% whites. As the line gets taller, there will be more blacks and fewer whites in each group. In the very tallest group, there may be only 10% whites. But that does not mean “Whites can’t play basketball.”
In each group there will be some whites. And some short people can learn to play better than some tall people.
So to build a good team, going by “the average black is taller than the average white” is not a good idea.
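The analogy above can be sketched numerically. All the parameters below (heights, SDs, cutoffs, group labels) are invented purely for illustration: with a modest difference in group means, the groups are near parity around the middle of the distribution, but the higher-mean group dominates the far tail — without any individual in the other group being unable to play.

```python
import random

random.seed(0)

# Hypothetical parameters, in inches; the 2-inch mean gap is illustrative only.
N = 200_000
group_a = [random.gauss(71.0, 3.0) for _ in range(N)]  # mean 71", SD 3"
group_b = [random.gauss(69.0, 3.0) for _ in range(N)]  # mean 69", SD 3"

def share_above(cutoff):
    """Fraction of the people above `cutoff` who come from group A."""
    a = sum(h > cutoff for h in group_a)
    b = sum(h > cutoff for h in group_b)
    return a / (a + b)

# Near the middle of the combined distribution, the mix is close to even...
print(f'share of A above 70": {share_above(70):.2f}')
# ...but in the far tail the higher-mean group dominates,
# even though plenty of group-B individuals are still up there.
print(f'share of A above 78": {share_above(78):.2f}')
```

The point of the sketch is that tail ratios are extremely sensitive to small mean (or variance) differences, so "group A fills most of the tallest bracket" and "group B can't play" are very different claims.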
Mosquito nets used for fishing rather than protection against malaria in East Africa, with worrying consequences — or as I wrote on tumblr, Moloch getting his grubby hands everywhere he can.
So, are the people giving the nets in error in thinking that the people are better served by mosquito nets than fishing nets? Are the recipients in error in thinking that they are better served by fishing nets than mosquito nets? Or are there other effects (the reference to Moloch implies that you think there is some sort of coordination problem)?
Several things here:
– the use of nets for fishing is, as far as I know, still seen as marginal. This is something to look at and ask whether the strategy is optimal, rather than a reason to stop doing this (AFAICT)
→ giving nets still a net good thing
– mosquito nets seem to replace traditional nets, partly due to lower costs and more edible things caught in it → short term, it looks like a great tool if you do not want to starve
– long-term effects are probably destruction of breeding grounds and a decrease in the fish stocks that are currently a major source of food and money for the population → probably a good idea to stick to the use of traditional nets; dangerous long-term effects from the use of mosquito nets for fishing
– the article writes as if the nets are used for fishing to the exclusion of their use as malaria nets, rather than there being competition between the two roles, and gives too few details → hard to tell how much this negatively impacts the prevention of malaria
– shorter-term problem for traditional fishermen: even if the pressure to avoid starvation is lower, they may still be forced out of business because their operating costs are greater than those of the mosquito-net fishers → destruction of long-term, possibly sustainable businesses in favor of short-term, destructive methods which are also unlikely to create local networks of businesses.
Not an economist nor an ecologist, though, so probably missing things about this.
I wonder if Givewell et al have factored this into their calculations?
The problem is that mosquito nets are like seat belts. They are only intermittently useful, and you might not even realize it.
Why do people do this, why do they expect it to work, and does it ever?
Things like this are meaningless. They aren’t even words, they’re noises. This is the same kind of thing as random Tumblr or Facebook posts passed around thousands and thousands of times about how great “you” are and how brave “you” are and how wonderful “you” look. All written by people who weren’t aware of the existence of most of the people reading them, much less know them enough to say if they are great or brave or good-looking.
Why am I expected to feel better because of positive-affect noises that have no bearing on me or my existence?
No, these are not meaningless noises. Scott is saying that he doesn’t disapprove of people who are not as intelligent as some high threshold. This is relevant because he is a high-status person whose even hypothetical judgment may matter to people on an emotional level. Some might find it helpful to know that not everybody who is highly intelligent despises those below him, even if they haven’t met that high-status person personally. Just because you don’t care doesn’t mean nobody will.
As an analogy: imagine Scott told me that I shouldn’t take one of the drugs I’m taking, because of a bunch of stuff he is writing in a blog post that he’ll put up soon. I respect Scott’s judgement enough that I’d stop taking the drug until I learn more about it.
Imagine if Scott leaned out his car’s window and told a random person who he has never met and knows nothing about that they should stop one of the drugs they are taking because of a bunch of information he’s putting up in a blog post soon.
Even if that person reads Scott’s blog every day, his advice means nothing, because it is given without consideration or knowledge of the person receiving it.
*If* you take drug Foobar, *then* you should stop, because Foobar is basically always harmful on net.
*If* you’re worried about being worthless because of your intelligence, *then* you should stop, because intelligence has very little bearing on inherent human worth. If you’re worried about being worthless because you’re mean and dishonest and hurt people who help you, sure, you might have a point (or you might be misjudging yourself, advising you would require knowing you personally).
Because it’s an IOU for an argument that I believe I can make. Conservation of expected evidence – if you expect my argument to change your mind later, your mind should be changed now.
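Conservation of expected evidence is a small Bayesian identity that is easy to check numerically. The numbers below are arbitrary, chosen only to illustrate: averaged over the possible observations, weighted by how likely you currently think each one is, your posterior must equal your prior.

```python
# Arbitrary illustrative numbers: a hypothesis H and binary evidence E.
p_h = 0.3              # prior P(H)
p_e_given_h = 0.8      # P(E | H)
p_e_given_not_h = 0.4  # P(E | ~H)

# Marginal probability of observing E (law of total probability).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posteriors after observing E or ~E (Bayes' rule).
post_if_e = p_e_given_h * p_h / p_e
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Expected posterior, weighted by how likely each observation is:
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e

print(expected_posterior)  # equals the prior, 0.3
```

So if you are sure an argument will move you in a known direction when you hear it, some of that update is already owed now; what you cannot coherently do is anticipate being moved in a predictable direction and stand pat.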
Why do Asian males outperform African American females in high end science and math by huge margins? Did you have an emotional reaction to this question? You should ask yourself why just asking this question makes you feel uncomfortable. Is it because you question whether it is true, or is it because you worry about why it is true and how this conflicts with your worldview?
Gender and IQ. I don’t get it. On multiple levels.
1. Why the emotional entanglement if males just happen to slightly outscore females here? Or group X over group Y?
2. Why do the values (errrr….politics) of some groups demand that this be disproved?
First off, my daughter just recently completed standardized testing and landed perfect scores in Math and the Math II subject test. I really don’t care if more males per capita scored perfectly. Why would I? I don’t get it.
My daughter is smart, my dog is dumb. The specific breed of my dog is not known for its doggy IQ. I feel absolutely no need to run out and do studies to counteract some perceived stereotyping of my dog’s breed as not equal to all other dog breeds, or to insist that everyone state all dog breeds are 100.0000000% equivalent in intelligence and shun anyone who states otherwise. Why would I have an emotional need for this?
Creationists are vilified for rejecting Darwin’s theory of evolution. But here we have an almost equivalent process of denial, one which insists that all groups evolved 100% equally in intelligence over the last 50,000 years since the groups diverged out of Africa, or that the sexes don’t evolve differently genetically. There sure appear to be some quite large differences between apes and man. This process just stopped?
Men are stronger than women, taller than women. Do men excel over women at basketball due to socioeconomic factors? Why is there no emotional entanglement here? Why must differences in IQ or some specific mathematical ability be dismissed as unenlightened?
I understand that many think there is simply a “search for the truth” here to see if IQ did in fact evolve differently. I suggest what we are seeing instead is a search to reach today’s politically palatable answer and this corrupts the science process. You don’t think there is emotional entanglement? Try bringing up The Bell Curve in a conversation.
If men and women were equal, then it would be likely that an equal number of independent studies would show women slightly ahead, with the other set showing men slightly ahead. What we get is men almost always coming out on top, and if a study does find near parity, then that specific study is hailed as truth and as having disproven every other study. We have all seen this in action in politically controversial topics such as GMOs, etc. Then there is the game of controls.
The strawman of “DO FEMALES EXIST WHO POSSESS PROFOUND MATHEMATICAL TALENT?” shows how warped this discussion is. Who is charging that this is not the case? Anyone? Sure, there are cultural factors that also amplify this discrepancy. Very few people reject this. I am grateful my daughter is growing up in the world of today instead of 50 years ago.
The male / female gap is pretty small by any measure, but it sure appears to legitimately exist. I really don’t have a problem with it.
The steelman version of the don’t-research-racial-and-gender-differences position runs something like this:
There is a potential social detriment to identifying such differences, rightly or wrongly. This negative effect is a nocebo commonly known as “stereotype threat” — people underperform when they are primed to believe they will be bad at something. Whether this belief is rational or not is irrelevant, because it’s a fact of human psychology that will not change. There does not seem to be a commensurate “stereotype benefit”, and even if there is, it accrues to already-privileged groups at the expense of needy ones.
OTOH, there is no potential social benefit to identifying the differences. If the science is bad, this is obvious, but even if the science is good, it is not useful to be able to say that group X is smarter than group Y. There are no acceptable policy implications following from such a conclusion, in a liberal-democratic society. It’s even possible that acceptance of such conclusions could undermine liberal-democratic values themselves (historical examples go here).
So the only reason to study racial and gender differences is a Socratic commitment to truth for its own sake. Opponents tend either not to value such a commitment as highly as they do social justice, or believe that research on race/gender differences is politically motivated, given the existence of more fruitful avenues, and should be uninteresting to anyone but racists and sexists.
I disagree with the above position, but I don’t think it can be dismissed out of hand. It is a serious objection, not just a knee-jerk emotional response. (Although it is often that, too.)
I’d go farther than internal “stereotype threat” and say that people are afraid of sexism and racism making a comeback. What does that mean, in a world where even the truth was a little sexist and racist? Basically that people aren’t smart enough to avoid oversimplifying the truth. Our very language is unsuited to the necessary precision – we can’t express probability distribution functions in simple English, so we say “men are taller than women”, and if that implies more than we meant it to then oops. At least with height you can quickly wipe out your priors with a short observation, but with intelligence and personality and such, precise observations are difficult and prejudice becomes too subconsciously tempting a heuristic: just rely on priors alone and don’t sweat the observations.
So it’s better to go the other direction: believe every group has the same innate distribution of abilities. That still might be an oversimplification, but at least it’s *less* of an oversimplification than the “A are more X than B” version.
Ironically, the egalitarian heuristic was ruined by the egalitarians. “Start with a prior belief that all groups are the same, and override that belief with individual tests whenever a characteristic is important” would be a great way to live… if only it weren’t for the concept of “disparate impact” in laws, policies, and culture. If your unprejudiced process leads to disparate results, and the only politically correct explanation is that this proves you to be a secret bigot who needs to be fired, then you might be open to other theories.
Right. They’d also point out that invidious race- or gender-related priors are likely to be used in situations where direct observation is impractical (e.g. looking at resumes of job applicants), even if laws or norms exist to dissuade this. Like you said, it needn’t even be a conscious effect.
IIRC, Thomas Sowell blames “disparate impact” thinking for credential inflation. His idea is (again, IIRC) that with employers unable to just directly test aptitude, they started using secondary (and then tertiary, and soon graduate) degrees as proxies. As part of the conservative minority within the African American minority, Sowell always strikes me as someone, like Clarence Thomas, who thinks he would’ve done just fine under a regime of straight-up intelligence tests, but instead gets tarred with assumptions of his success being part of affirmative action tokenism because liberals have stopped employers from giving those tests anymore, or something. I can see that being really infuriating: feeling like you owe your success to being smart, but living in a regime that won’t let you prove it.
Slightly related – I once saw an expensive car (BMW or Mercedes) being driven by a well-dressed black woman, with the license plate “QUALFYD”.
” Sowell always strikes me as someone, like Clarence Thomas, who thinks he would’ve done just fine under a regime of straight-up intelligence tests, ”
Do remember that Sowell was old enough to have graduated from Harvard in 1958. That is, before AA.
He started at Howard University, and transferred to Harvard, and a professor sent him off with the injunction, “Don’t come back here and tell me you didn’t make it ’cause white folks were mean.”
“OTOH, there is no potential social benefit to identifying the differences.”
Well, if HBDChick and Greg Cochran are on the right track about exogamy being the driver of the greater intelligence of, say, the nuclear-family-forming English over us clannish inbred Irish, then I guess draconian laws against cousin marriage might be a potential intervention.
Did you skip the “in a liberal-democratic society” later in that paragraph?
lmm, it was meant as a “modest proposal.” Sorry my tone wasn’t clearer.
Wait – WHAT?
How did I miss this wonderful chance to be outraged? English people (or pro-English people) repeating the same old dressed up in a new suit of clothes “virile Anglo-Saxons versus effeminate Celts” line?
I have got to see what was said, then I can come back here and insult the English properly 🙂
Deiseach, we can’t discuss it in the open thread, but honestly, some of HBDChick’s writing about us Irish is fun reading in a vein not dissimilar to Scott’s: well-written analytic speculation. Googling her name and our ethnonym should lead you to her. There’s a bonus “all these good things came from Catholic canon law” angle you might enjoy very much, too.
Hmm – some of it is certainly fascinating, to see an outsider speculating about Irish society (her conclusions seem odd to me, but then they would, wouldn’t they?)
I’m very surprised, though (in the few posts I’ve read so far) that she’s made no mention of what I think was a huge influence on what she terms the rate of outbreeding: primogeniture (she does a bit of puttering around with the fine and cineál definitions but doesn’t seem to mention much beyond a bit of the ancient law texts re: inheritance rights) – the notion (rightly or wrongly) that the Normans introduced primogeniture after their invasion of Ireland.
Inheritance of the name/title/property/land was by and through the eldest son mainly and often solely, with (in later centuries) younger sons left with little or nothing certainly would encourage ‘outbreeding’, as distinct from the system of tanistry and the derbfíne succession rights. If you have to make your fortune, one way is by marrying an heiress, which necessarily will have you looking outside the family group.
I don’t think I necessarily agree with her notion that Irish society was more “clannish” than the English of the time, or rather, that the English were more outgoing etc. I think they did a lot of inter-marrying and inbreeding of their own, but I do think that historical pressures were different.
I don’t know, I’ll have to think about it a bit more 🙂
The above Anonymous was me 🙂
I believe that HBDchick is of primarily Irish ancestry, P ~ 0.75.
Anthony,
That doesn’t surprise me. I’m of 100% Irish ancestry, and theories about the Irish, on average, having once had/now having low average IQs neither pick my pocket, break my leg, nor lower the IQ of me or my family.
Unless someone was actively trying to use such theories to justify oppressing me and mine (as such theories have in fact, of course, been used against other groups in other times), I’m just not going to get upset about it when it’s more fun to read the blog posts and ponder them. Litany of Gendlin and all that.
And this is where I sometimes feel I am just on a different planet.
I don’t want to have my feelings protected or have truths manipulated because some group unilaterally has decided this is better for society. It feels so…dishonest.
I understand that the science of “stereotype threat” may be legitimate; I take no position here. But this doesn’t justify the science of intelligence and gender consciously or unconsciously altering or suppressing evidence because of how the results might be interpreted.
For the sake of argument, assume the racial education gap was found to be innate (sigh… for the sake of argument!). How society deals with it would change. It might be counterintuitive, but perhaps affirmative action would get a lot more support on the right. It might now be seen as fairly adjusting the system for a legitimate disadvantage. Much of the opposition to AA is based on a perception that it unfairly alters the system to compensate for poor life choices or lack of personal responsibility. Maybe that is a delusional hypothetical, but it is not lost on anyone that failing to identify the real cause of a problem results in ineffective solutions being attempted.
I did contemplate the problem of discovering the causation and being unable to fix it — well, that is a bummer. But you can compensate in other ways. Women may not be genetically ready to compete in the NBA at parity, but they formed their own league. One can imagine a world with laws requiring 2 women on every NBA team and a bunch of people proclaiming that the only reason women don’t make the All-Star team is sexism and nobody passing them the ball.
Don’t mix studying racial and gender differences with solving sexism and racism as cultural issues. They aren’t the same.
For the sake of argument, assume the racial education gap was found to be innate (sigh…for the sake of argument!). How society deals with this would change. It might be counter intuitive but perhaps affirmative action would get a lot more support on the right
In The Bell Curve, Murray and Herrnstein straight up endorse practicing a degree of affirmative action – the specific suggestion is to give blacks a score boost equal to half the black-white group difference when making decisions about college admissions, etc.
I’m not sure how the politics of this would play out, but if the political culture accepted that the gap was innate, and simultaneously rejected disparate impact and endorsed this sort of score-norming, most of the battle would be over making adjustments to the actual level of norming. Which wouldn’t be that bad in reality, though it would probably be just as noisy as the current debates are.
If the racial gap is innate that would decrease the support for affirmative action. Affirmative action is based on the idea that black people are discriminated against and can perform just as well at a job if given a chance. If that’s not true, then all that’s happening is that less competent people are given preference over the more competent.
That depends what is meant by affirmative action. I would be in favour of government subsidised embryo selection ( https://www.cog-genomics.org/static/pdf/bga2012.pdf ) to correct the differences in aptitude between different population groups.
Let’s wrest eugenics from the far-right and bring it into the progressive arena.
Where it began.
“Let’s wrest eugenics from the far-right and bring it into the progressive arena.”
Don’t fool yourself. It always was a progressive agenda. It was Oliver Wendell Holmes who declared that three generations of imbeciles are enough.
You tried to blame it on us when it got ugly, but it was you all the way.
>affirmative action would get a lot more support on the right. This may now be seen as fairly altering the system for a legitimate disadvantage.
That would never happen. I can’t even think of a good reason it *should* happen. If (FOR THE SAKE OF ARGUMENT) all interethnic differences in ability were genetic, there would still be no reason to treat Person A from Group 1 differently from Person B from Group 2 if both have equal ability.
What *would* happen is that people would discriminate even more, now that science has vindicated them in their beliefs.
(PS that’s not in support of censorship – it’s only against your prediction of what would happen.)
I might buy the “don’t investigate this – it’s dangerous!” line of thinking if we ignored the issue completely and just let the demographics fall where they may.
But we don’t. People DO investigate the differences and conclude that they must be caused by discrimination, and propose solutions based on that conclusion. Solutions, like affirmative action, that strike me as utterly indefensible if discrimination is not the actual cause. In that case you HAVE caused discrimination by studying, just not in the direction you expected.
If we lived in a world where no one noticed or cared about gender differences, then I’d agree we should keep it that way. But Pandora’s box has been opened, and bad conclusions have been reached, so the only recourse is to keep studying until we have the correct answer.
“People DO investigate the differences and conclude that they must be caused by discrimination”
I would challenge you to read 100 articles on the racial education gap in the MSM and see how many you can find that even contemplate the gap may be partially caused by genetics. My guess from my experience is zero. This discussion is verboten in polite society.
If this were an honest scientific discussion of the issue, genetics would at minimum be brought up regularly as a possible causation, if only to be summarily dismissed by some. But it is never even mentioned. There are definitely some irregular social dynamics at work in the scientific method here, IMO.
I find the argument that lower math IQ scores are caused by discrimination to be unconvincing. That the claim is made that genetics are 100% excluded is really, really unconvincing. At best I would say the claim could be made that the causation has a lot of uncertainty.
I’m not sure why you’re challenging me, because I think we’re in agreement – we are “investigating” the gaps but doing a poor job of doing so because concluding “discrimination” is rewarded while concluding “genetics” is sanctioned. Therefore, I favor fairly investigating the “genetics” hypothesis.
Apologies if I did not clearly convey my intent.
“OTOH, there is no potential social benefit to identifying the differences. If the science is bad, this is obvious, but even if the science is good, it is not useful to be able to say that group X is smarter than group Y. “
Even aside from what some of the other people here have said, remember, society is fixed, biology is mutable. We have almost enough technology to start enhancing genetic-intelligence right now. I expect the problem is now, or soon will be, more political than technological.
The first thing it would be interesting to be able to say is “You know the difference between this race and that race? This race is doing pretty well, comparatively, that race is poor and miserable? That’s what an average difference of 15 IQ points does. We can give you 50.” That might make people take notice, especially in the face of people trying to obfuscate the issue and talk about how IQ doesn’t matter because it’s just about solving silly math problems.
The second thing it would be interesting to be able to say is “You know this social problem of inequality that we all want solved? There’s only one way to solve it, and it’s allowing this technology. Your move.”
I wholeheartedly agree that biology is more “fixable” than society, but it’d be putting it mildly to say that the idea has little traction on the political left. Or the political right, for that matter.
The popular conception of genetic engineering is approximately Gattaca, to the extent it’s considered at all.
I’m beginning to suspect that popular science fiction can be harmful. People who don’t think about genetic engineering but have seen Gattaca are automatically going to assume that it’s dystopian where if the movie didn’t exist they might not even have an opinion.
If we could all come to some kind of gentleman’s agreement where we all pretend to believe in the absolute equality of sex and race, then I might be okay with your steelmanned position. The problem is that most people actually believe it, and start doing silly, wasteful, or outright harmful things in efforts to “close the gap”. It’s really difficult to have a serious policy discussion on education, for example, when the discussion is dominated by people fruitlessly fixated on closing the black-white achievement gap, rather than just working to raise everyone’s outcomes overall.
Hyde is a ‘veteran gap buster’ according to La Griffe du Lion, and Mertz was mocked for her statistics by Ron Unz when she dared to criticize his American Meritocracy article, which then spilled over to Andrew Gelman’s site.
As some folks have already brought up the Minnesota statistics on Asian-Americans (and Andrew Gelman’s site), I will add that they were only for the 11th graders. So for one year, in one grade, in one state, Asian-Americans (a conglomeration of many different ethnicities) ended up with more girls than boys in the top 1%, and since then it has often been paraded around as Asian-Americans (notoriously unfeminist) doing away with gender inequality while the whites can’t.
Their paper mentions Guiso et al for showing that the gender equality of a country (as gauged by the Gender Inequality Index, which opens another can of worms) tracks the gender gap in mathematics. La Griffe showed that it wasn’t true, and a recent comprehensive debunking came from Stoet and Geary, 2013. However, both papers show reading performance to be correlated with maths performance, so that the two gender gaps in reading and maths are inversely correlated: the better girls do in reading, the smaller the gender gap in maths becomes, and since the latter gap is smaller, sometimes girls end up better in maths, while boys are always behind in reading.
The cherry-picking of a few countries with equal variances, or even higher variance for girls, is amusing. What commonality do the cultures of Iceland, Thailand, and the United Kingdom share that lets them reach this landmark? Secondly, their use of 17-year-olds’ data from TIMSS 1995 in the variability debate is amusing, since the gender gaps in that dataset were rather obscene.
Analysing the variability debate helps if you consider it from the other viewpoint. If some boys don’t read well enough to do maths, they will invariably stretch the male distribution downward. If a country scores rather high, more males are likely to come up against the test’s ceiling. The curve then skews, so that bell-curve assumptions no longer apply; I gave this reason for another of Mertz’s papers showing a few countries with higher female variability but very poor scores.
http://www.academia.edu/393769/Vos_P._2005_._Measuring_Mathematics_Achievement_a_Need_for_Quantitative_Methodology_Literacy
So it’s possible that males are not more variable in aptitude for maths after all, but rather in attitude towards studies, and because of the gap in reading skills. The male overrepresentation at the top remains regardless, which of course is the crux of the issue for gender equality advocates.
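The ceiling effect is easy to demonstrate with a toy simulation. All the numbers below are purely illustrative assumptions, not real TIMSS data: give boys and girls identical true variance, shift the boys’ mean slightly upward, and cap scores at a test ceiling. The measured male/female variance ratio then falls below the true one, because more boys pile up at the cap.

```python
import random
import statistics

random.seed(0)

# Toy model: equal true variance, boys' mean slightly higher.
# These parameters are illustrative, not taken from any real dataset.
boys = [random.gauss(0.1, 1.0) for _ in range(100_000)]
girls = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def variance_ratio(a, b, ceiling=None):
    """Male/female variance ratio, optionally after capping scores at a ceiling."""
    if ceiling is not None:
        a = [min(x, ceiling) for x in a]
        b = [min(x, ceiling) for x in b]
    return statistics.pvariance(a) / statistics.pvariance(b)

print(variance_ratio(boys, girls))               # close to 1.0 (the true ratio)
print(variance_ratio(boys, girls, ceiling=1.5))  # below 1.0: the cap clips more boys
```

So a test with a low ceiling can report higher female variability even when none exists in the underlying trait, which is exactly the bell-curve-assumptions problem: once the top of the distribution is censored, variance comparisons stop measuring what they claim to.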
The latest post on my blog summarises my views on the same issue rather pithily. Even though boys might do better on standardised tests, that’s no guarantee they aren’t underperforming, though not to the level suggested by school grading.
I don’t mention the verbal/quantitative/spatial split there, though. It’s another interesting avenue to pursue, considering that spatial ability seems to affect the choice of going into engineering and the like, and is neither evaluated nor considered in the current schooling paradigm. And because it’s not exactly a STEM gap, but rather varies by major:
http://www.randalolson.com/2014/06/14/percentage-of-bachelors-degrees-conferred-to-women-by-major-1970-2012/
There are people who have thought about epistemology, Bayes, rigorous formulations of Occam’s razor, the size of hypothesis-space and the problem of privileging the hypothesis, who know about the heuristics and biases body of research, and remain religious.
This confuses me so much. I don’t understand it, and really wish I did. Is it a fundamental rejection of some basic premise? Is it some fantastic body of evidence?
I’ll thank anyone who points me to people explaining their own reasons, or of course, who explains it here. If you do, I hereby promise not to argue about it.
(I especially mean traditional identifiable religions, not vague Theism or weird Simulationism, etc.)
I think it’s pretty rare for a person to become religious because they did some logical reasoning, weighed all the evidence and decided that God does indeed exist. They either were raised Christian (or whatever religion) and/or they had a spiritual experience and became religious. If there were a really good logical reason for being religious, then philosophers should be disproportionately religious, but only 14% of them are.
http://commonsenseatheism.com/?p=13371
I expect that religious people who have read the sequences (or know similar material from whatever sources) either reject some part of it, or think the two things are compatible. I’m interested either way.
I think Cauê is talking about people who remain religious, rather than become religious. They’re a lot more common. Also I think the vast majority of the small number of people who became religious as a result of LW did not come to believe in God or the factual claims of their religion, but became observant.
Wrong Species:
Are “philosophers” the relevant expert group? Philosophers of religion are, as one might expect, overwhelmingly theist. Now, of course that sample carries a pro-theist selection bias: atheists like J.L. Mackie or Graham Oppy, who choose to work in that sub-field, are understandably few and far between.
But here’s the thing: contemporary academic philosophy is VERY specialized. A continental attuned to différance, or an analytic adept at juggling Convention T, can be very skilled with the tools of his or her own subspeciality without ever acquiring an acquaintance with arguments like Aquinas’ any deeper than that offered by the Phil 101 survey course they had to take before they could start their undergrad philosophy electives. As Ed Feser complains (ad nauseam for some readers), that Phil 101 understanding of Aquinas and his kin is just hopelessly shallow, and composed mostly of misconceptions and anachronistic misreadings. Take Bertrand Russell: the guy was brilliant enough to coauthor the Principia Mathematica, actually wrote his own (tendentious) History of Western Philosophy, but doesn’t seem (as Feser often points out) to have ever really come to grips with a non-caricatured version of Aquinas’ argument. So you can be a brilliant philosopher and still not be an expert-level evaluator of theist arguments.
Now, that doesn’t mean that you should become a theist just because most philosophers of religion are theists. It just means that both samples are no good: the philosophers of religion sample is expert enough, but obviously biased by interest and affinity toward religion. OTOH, philosophers in general usually specialize in something very far from philosophy of religion, and don’t tend to know that terrain very well.
So expert consensus isn’t a helpful guide here: it’s either too biased, or too inexpert. Actual engagement with the deductions is instead required.
As a practicing philosopher, I have to disagree about how specialized philosophers are in general. There are some narrow specialists, but there are a lot of us that aren’t. Feser certainly underestimates how much philosophers outside philosophy of religion tend to know about it, and has a tendency to mistakenly conclude that people are uninformed in a number of cases where they really just disagree with him.
@Protagoras:
“mistakenly conclude that people are uninformed in a number of cases where they really just disagree with him.”
That does sound like an error he’d be prone to.
BTW: What do you think about the whole “most philosophers are atheists, therefore theism is wrong” point? I think it’s not merely an argumentum ad numerum: philosophical reading is interminable, and sometimes one just wants to find out what others, who have done more of the reading, have converged on so far–theism/atheism seems like a perfect candidate for that sort of thing. I’m just unconvinced that professional philosophers in general are the relevant group. Thoughts?
I do also think it’s more than just a numbers thing. Philosophy is the most meta discipline, and God is an extremely high level hypothesis, so I do tend to think God falls naturally into our area of expertise. Of course, that assumes that we have an area of expertise; not everyone is willing to be so charitable to us.
Well, even on a deflationist/therapeutic naturalistic understanding whereby good philosophy is just conceptual analysts doing science’s bookkeeping, I think bookkeepers are entitled to be considered experts in what they do!
IOW, philosophy scorn always kind of baffles me. Not as an unreflective tribal phenomenon (no surprise that jocks hate nerds), just as a thought-out position, as with some “scientistic” types about whom I can’t shake the feeling that they ought to know better.
I think philosophy scorn makes more sense if one thinks of continental philosophers as representing philosophy in general.
I don’t think you have to be an expert in philosophy in religion to reject religion but I agree with your main point. I retract my bit about philosopher consensus.
@Wrong Species:
People who retract points are my favorite people on the Internet! I shall strive to emulate your grace.
FWIW, I agree that you don’t have to be a philosopher of religion to have a respectable opinion on it. Nobody is an expert in everything, but we all have to muddle through as best we can.
While the majority of philosophers of religion are theists, there’s good evidence that this results from selection bias and should as such carry no evidential force. To control for this confounding factor, one could focus on how the beliefs of philosophers of religion changed after they started studying philosophy. A recent survey did consider this question and found that theists became less theistic after exposure to philosophical argument more often than atheists became less atheistic.
Wow. That whole page is very interesting–great link.
(I’m not surprised by the result, actually. If you come from a non-Scholastic tradition, it’s not shocking that something like this quote at your link could happen to you:
“I was a theist when I began university. It was during reading Hume’s Dialogues in my second year that I began the road to atheism. I believed that Hume successfully undermined every rational reason I had for my personal belief in God…”)
Note on that survey: it’s not clear to me that it finds “that theists became less theistic after exposure to philosophical argument more often than atheists became less atheistic.” Consider the base rates (in philosophy of religion). There are more Christians in philosophy of religion to start and so more opportunity for belief revision to atheism/agnosticism than vice-versa. The 10th comment here — http://dailynous.com/2015/01/30/why-are-so-many-philosophers-of-religion-theists/ — looks at the data and finds that 18.4% of theist philosophers of religion converted from theism to atheism/agnosticism while 46.2% of atheist/agnostic philosophers of religion converted to theism.
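A quick arithmetic sketch shows how those two rates can coexist with the original survey finding. The 70/30 starting split below is a hypothetical assumption chosen only to illustrate the base-rate point; the two conversion rates are the ones quoted from the linked comment.

```python
# Hypothetical starting split among 100 philosophers of religion
# (the 70/30 figure is assumed for illustration, not from the survey).
start_theist, start_atheist = 70, 30

# Per-group conversion rates quoted in the comment above.
theist_to_atheist_rate = 0.184
atheist_to_theist_rate = 0.462

converts_to_atheism = start_theist * theist_to_atheist_rate   # 12.88
converts_to_theism = start_atheist * atheist_to_theist_rate   # 13.86

print(converts_to_atheism, converts_to_theism)
```

With a lopsided starting pool, the raw counts of deconversions and conversions come out nearly equal even though the per-group rates differ by a factor of about two and a half, so which direction “wins” depends entirely on which statistic you report.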
Sure. I’m a nondenominational Christian with strong inclinations towards Catholic & Orthodox theology (I currently attend Presbyterian services with my wife to keep the household peace), raised Mormon and left Mormonism in my early 20s because I was convinced its doctrines were false. And, to establish my “smart” credentials, I’ve got a B.S. with honors in Chemistry from Caltech, a PhD in Biophysics from (Unnamed High-Status University, left unnamed so it’s not quite so straightforward to track me down), and about a dozen scientific papers in good journals. I’m familiar with psychology, Bayesian statistics, epistemology, systematic biases, etc. I’ve read (some) of the Sequences (those that I could get through without shouting “What an idiot!” at the screen too many times). Here’s the two major reasons why I’m not an atheist:
1) Fundamental differences in basic epistemological and moral axioms. It is a literal miracle that we know anything at all, that the world is fundamentally knowable. That understandability of the world is reflective of its creation by a rational God, the Grand Mathematician. So explanations that epistemology is hard for such and such a reason, or that human rationality is bounded seem obvious and don’t particularly impact my faith. Similarly, it seems obvious to me that utilitarian ethics are false – or as I put it in another discussion elsewhere, that morality is a path function, not a state function.
2) God spoke to me once. Specifically, when I was going through tough times as a teenager/young adult much like Scott and Other Scott, I had a revelation from God. He told me that I wasn’t worthless, that He had plans for me, and that He loved me.
Thanks, this is just the kind of thing I was looking for.
No need to establish smart credentials… I’m only interested because I know the people I’m talking about are smart.
(Adding this to the initial request: One other thing I’m curious about but forgot to mention is reasons for choosing a particular religion among all others. I expect these aren’t necessarily the same as reasons for simply not being atheist)
Well now you’ve hit on the active religious topic in our household. I don’t see how I could not be a Christian without completely discounting my personal religious experience, from which it would be a short walk to solipsism. So of the Christian churches:
Mormonism, though most familiar to me and personally comforting is manifestly false. (I actually left Mormonism after reading sufficient ancient history to realize that the doctrine of complete ancient apostasy is ludicrously false. I also have a private theory that the angels Joseph Smith saw were in fact demons.)
Evangelical Protestantism is too anti-rational to accommodate the evidence for a rational universe and a rational God.
Mainstream Protestantism is… wishy-washy, with unclear doctrine and unclear authority to teach any doctrine.
Catholicism, particularly the Scholastic tradition, is very appealing but the institutional church is obviously seriously corrupt and some of the doctrines are non-trivial. Plus my wife is a diehard religious anti-authoritarian.
Have you looked at Eastern Orthodoxy? Any thoughts? I’ve been a mainline Protestant for a long time but I’m starting to make furtive glances in that direction.
I’m an atheist, but I have to admit that the Orthodox have some damn good hats.
Hey, those’re blessed good hats!
I stand corrected.
Cauê:
Catholic here. As a fan of Thomistic metaphysics, my disagreement with the way both your comment and Eliezer’s oeuvre treat God is that classical theism of the sort typified by the Catholic Scholastic tradition doesn’t treat God as a “hypothesis” at all.
Eliezer likes to quote Laplace’s “I have no need of that hypothesis.” You mention Occam’s razor, Bayes, and lots of other ways to try to do a careful, skeptical job of making inductive hypotheses based on the evidence.
But for the theist metaphysician, God isn’t like the result of induction, like a scientific hypothesis. Instead, God is a result of deduction, like a mathematical theorem derived from postulates/axioms.
Bayes and Occam are great for answering inductive questions about evidence. But metaphysical deductive demonstrations of God are about seeking coherence among beliefs, not facts from observations. At their simplest, Aquinas’ arguments go like this: What MUST the world be like for change to occur? Well, there must be act and potency in the world. Somewhere in there, there must be pure act. What would pure act be like? Well, it’d be God.
Now, maybe this is a bad argument. Lots of very bright people have unfairly (like Russell) or fairly (like J.L. Mackie) engaged with it and found it flawed. Okay.
But that’s my basic beef with LW atheism: bad metaphysics. I think it gets the problem of universals wrong (“clusters in thingspace” and the idea that numbers are just pebble-counting are both pure nominalism, whereas I’m a moderate realist), I think it gets ethics wrong (virtue ethics is notoriously slippery, but no more so IMHO than attempts to calculate utils or adequately predict consequences). Worst of all, and more to the present point, I think LW atheism does the New Atheist thing of treating all theist arguments as though they were the failed hypothetical inductions of the “God of the gaps” of William Paley, some Greek myth about the weather, or some Kentucky creationist; that completely ignores the metaphysical, deductive argumentation that more thoughtful theists are more likely to be motivated by. E.g., rationalist Leah Libresco’s largely Platonist concerns were addressed by converting to Catholicism, but LW atheism, with its (to be frank) metaphysical shallowness, never gets anywhere near the area of inquiry concerned, much less to answers in that field that might satisfy a metaphysical inquirer.
In brief: the Bayes dojo is a great place to learn how to think about scientific questions, how to handle evidence soberly, etc. I love it for that. But, IMHO, God just isn’t that *kind* of question. LW atheism fails to distinguish between God as hypothesis (which is, indeed, silly and primitive), and God as theorem (which I think deserves far more careful refutation, or, in my case, assent).
I hope that’s helpful and responsive, Cauê. I may not share the LW community’s usual atheism, but you guys rock, and I’d like being one of the theist hangers on to render me useful at least once in a while.
ETA: In answer to your second question, my theism flows from Thomist metaphysics, so Catholicism is the most natural choice.
ETA2: “I hereby promise not to argue about it.” I’m grateful for that. I hope no one else feels like arguing about it, either. If you do, just grab a copy of Feser’s new “Scholastic Metaphysics: A Contemporary Introduction” and his prior Aquinas, Philosophy of Mind, and “Last Superstition” books, and blog a hostile review if you want.
My argument at this point is pretty much just a Courtier’s Reply: “Read Feser’s oeuvre and get back to me.” There’s no point trying to type out that oeuvre in a comment box. He spends a lot of sentences trying to head off common (especially on LW) misunderstandings, which is most of why his books are book-length.
I just got “The Last Superstition” on Kindle and literally the first sentences are Feser criticizing same-sex marriage because it gives equal value to “family and sodomy”.
You’re gonna have a lot of trouble convincing this blog’s audience to take that book seriously. I’m pretty sure I just wasted 10 dollars.
Feser’s vitriol about gay people is ridiculously unhelpful. I think he still has important things to say about metaphysics, though. It’s kind of the old “Heidegger/Heisenberg was a Nazi (or at least worked for the Nazis, in the latter’s case), but read this anyway” thing.
(As an orthodox Catholic, I hold to the same morality Feser does, but I don’t think he has to be such a jerk about it. Nor do I think that whether something is a sin has much to do with whether it should be legal. Feser, unfortunately, seems to disagree. He’s quite FOX News-y politically.)
My advice, now that you’ve spent the 10 dollars anyway, is to try to ignore it, the way Scott ignores the unsavory bits of race realism and NRx and whatnot when he’s trying to steelman them. The metaphysics are interesting, and Last Superstition gives a good, quick, popular overview of Feser’s main contentions. It’s not at all rigorous, but it’s better than what I could fit into a blog comment.
“Scholastic Metaphysics” is way better, but it’s not out on Kindle yet. It engages with the analytic tradition in a much more rigorous way than the popularization in Last Superstition, which is mostly focused on bashing the early moderns. (Feser wrote a book on Locke, and it shows in Last Superstition’s focus).
The other two (“Aquinas” and “Philosophy of Mind”) are at least as good as Last Superstition, and more free of the vitriol. (Not entirely; Feser has little restraint.) I’m almost through Scholastic Metaphysics, and so far it’s vitriol-free, thank God.
On hylemorphism/formal causality in particular, David S. Oderberg’s “Real Essentialism” is a lot more analytically rigorous than anything Feser has put out. It’s only one part of the overall metaphysics, but it’s very well done.
That said, I’m gratified you took the recommendation, and I apologize if you end up hating the book. If you’re ever in Dallas, I’ll buy you dinner to make up for it.
ETA: If you just can’t stand Feser AT ALL, here’s a guy (unfortunately with similar FOX-y tics, but mercifully rare) blogging the Summa Contra Gentiles with helpful footnotes. The guy is smug as heck, but the footnotes are good. Here’s the chronologically first page of his blog posts on the Summa:
http://wmbriggs.com/post/category/samt/page/8
Scroll down to the bottom of that page 8, and read the posts in reverse order. Beware, though: the smug “do you now see how obvious God’s existence is, dumb atheist?” tone of the (otherwise sound) footnotes makes Less Wrong’s treatment of theists look like a Baptist tent revival.
“Scholastic Metaphysics” is free of vitriol, you say? I should definitely look it up myself (I’ll try to get it from a library, though). I’d certainly be curious to hear what either you or linked list have to say about my discussion of “The Last Superstition”.
Fine, I’ll give the book a chance. It’s not your fault I bought it without reading the sample anyway. I’m just a compulsive book shopper :p
Protagoras:
I tried to leave a longer comment, but it disappeared. To sum up: Your review is fair and interesting. Much of what Feser elides in TLS is covered much more adequately in SM. I’d be fascinated to read your review of that, should you ever write one.
Linked List: I’m a compulsive Kindle shopper, too.
Scott: If my disappeared comment was out of line in some way, I can’t apologize enough. I can’t think of why it would’ve been, but whatever it was, sorry!
I’m not sure if I qualify as practising a “traditional identifiable religion”. I’m a pretty mainstream reform Jew, perhaps a bit more observant than most (I keep kosher, as I interpret kashrut).
I affiliate with the Jewish community. I practice Jewish ritual. I use Jewish thinking as a guide to my actions (*a* guide, not *the* guide, but a guide that I would never dismiss out of hand). I treat Jewish sacred texts with respect.
You talk about “hypothesis space” and a “body of evidence”. This sounds like you view religion as a belief in the less-wrong sense. Something which constrains expectation.
I cannot think of any scenario in which my Judaism constrains my expectations.
Eliezer Yudkowsky claims that this is a recent development. That a thousand years ago, people prayed for the sick and literally expected this to help them recover. Maybe they did. Probably someone somewhere still does (I haven’t met them). On the other hand, I note that Rashi (who actually lived a thousand years ago) expressed confusion at the fact that the Torah begins with the creation of the world. He took it as a given that the Torah exists to teach us how to live, and asked why its very first chapter had so little application to that.
The sort of literalist religion EY insists on is very much a weak man. A *straw* man from the perspective of my own social bubble.
So I don’t know if Judaism counts as a “traditional identifiable religion”. Just because we have a continuous history of thought stretching back into prehistory…. But consider whether some of the people who have you confused in fact believe something more like this.
I do. Or rather, not that prayer causes healing in a deterministic way, but that God listens and responds to prayers and can heal.
Would it be possible for you to have a word with Him regarding His unreasonable refusal to heal amputees?
I don’t mean to claim ownership of this thread or anything, but can everyone please refrain from arguing in this one, especially in a hostile way?
I really wasn’t interested in reenacting all the old arguments right now, and I’m afraid other people might shy away from contributing.
(the response so far has been great, thanks)
I was going for light-hearted teasing, not contempt, but I should know better when it comes to the internet and tone perceptions, so I apologize.
Sorry, Cauê. I deleted my response (I think the earlier one is still useful.)
Daniel Speyer writes:
I think that might offer the most admirably succinct TL;DR to Cauê’s query as to how one can, say, read the Sequences and still practice a religion. From within the social bubble of more intellectualized religious practice (e.g., “sophisticated theology”™), the LW stuff just feels like it’s directed at someone else. Not that us still-theist readers mightn’t be wrong about whether the Sequences land any punches against our versions of religion, just that it doesn’t *feel* like they do. After I read a selection from the Sequences, my algorithm feels from the inside like it’s just encountered a marvelously well-written exposition of a viewpoint from within one half of Snow’s “two cultures,” without really saying anything that hits me where I live, in the other of the two cultures.
A lot of atheists mistakenly think of religion as an attempt to answer the question of “How does the world work?” Thus, they conclude that with evolution, there is no need for religion anymore (but note that even evolution only answers a very stripped down and limited version of the original cosmological argument). While some religions do dip their toe into those areas, this has never been their primary concern. Religions worry about these 2 questions:
1. What is wrong with the world?
2. How do we fix it?
Empiricism tells us nothing there. Indeed, the strict materialism which many Rationalists take as an axiom really needs to consider these questions invalid. But most humans know that these are real questions which need solutions, just as surely as they know that 2+2 must always equal 4 (which is also a metaphysical fact outside of the material realm). There are a lot of questions which empiricism just can’t address.
How shall we live? What is our (my) purpose? What is right? What is wrong? What is worth dying for? Why is there something rather than nothing? Why is this universe even intelligible to our minds? Why can we trust reason? Why is math so unreasonably effective? What does it feel like to be a bat?
All of these are metaphysical questions. Some of them are important for the Rationalist project to make sense at all. But it largely ignores them, or gives answers which are philosophically naive at best.
Which gets back more or less to what Irenist said: God is more of a theorem than a hypothesis. The reason so many early Christians took to Plato is that his philosophy, with its exploration of metaphysics, is constantly throwing up big neon signs flashing “hey look, there’s God right there!” Looking into the why of things rather than the how constantly points back to God.
As for why Christianity in particular, most people will probably answer some combination of these:
1. The historicity of the resurrection is compelling.
2. Personal experience/dealings with God.
Religion doesn’t answer “should” questions any more than materialism does (that is a question of morality, not epistemology). It also does not answer the question of why anything exists (which really, I’m not sure that’s actually a real question in the first place). And it “answers” the problem of trusting memories, but only by relying on faith in something else.
Hmm. Jaskologist writes:
Illuminati Initiate writes:
I think there’s maybe something interesting here. I think a lot of non-theists may just be more interested in, more impressed by the practical results of, and/or more convinced of the intelligibility of, scientific/technical/political/historical “how” questions than metaphysical/teleological “why” questions.
I can imagine, say, Richard Dawkins growing up really curious about organisms, finding that religion’s answers to the questions he’s passionate about have mostly been primitive myths, and concluding that the whole thing is silly. IOW, maybe the atheist isn’t so much “mistaken” that religion is an attempt to explain how the world works, as just a person who has a strong prior that a worldview ought to be centered on “how” questions, and so feels like religion is missing the main point, in much the same way that those of us with “why” questions feel like anti-theist harping on evolution vs. intelligent design misses the metaphysical point. Maybe we’re just most delighted by different *kinds* of explanations?
Interesting theory, but doesn’t every kid ask lots of “why” questions growing up? Eventually, some of us get used to the fact* that the “how” is something we can actually investigate and use, while the “why” doesn’t lead anywhere interesting.
* sorry, atheist perspective
I think that’s a great atheist perspective to offer, actually. It seems like kind of a lottery of fascinations thing–to some of us, the places the “why” questions lead are endlessly engrossing. To others, not.
Maybe there’s a trend, but mark me down as one atheist fascinated by “why” questions.
Regarding religion and “how” vs “why” questions, my sense is that religion mostly wasn’t shy about the how, until – how can I say without appearing too argumentative – that no longer seemed productive. In some places it still isn’t shy about the “how”s.
You can also add “the bible itself is compelling as a text of divine origin” to that list, because it’s one I believe in. There is probably (and maybe necessarily) some overlap with #2 on that list, however.
Nearly everyone has a few bizarre beliefs (belief-space is really big). I am almost certain you hold beliefs that 99 percent of us would think are silly. For these people, religion is that belief. The only interesting difference I can think of is that this belief would be more vigorously challenged in these circles. Perhaps, though, religion is a more-frequent-than-others bizarre belief. I don’t have statistics here.
Bayes, Occam, etc. are just attitudes you assume about the world. The world is not obligated to follow your taste. Maybe religious people have seen something that pushed them out of their prior. Or maybe they don’t prefer a simplicity prior. Why argue about priors?
Encounters with the divine (whatever this means) are apparently not very reproducible, so the usual framework we have doesn’t work.
Why? Curiosity, and the desire for a more accurate map.
Those and more besides. There are deep unshared assumptions. There’s a fantastic body of asserted evidence, and a dispute as to what constitutes “evidence” – and, as mentioned upthread, arguments which don’t conclude that God is the most likely hypothesis given certain evidence, but argue that God is a necessary premise for there to be a coherent notion of “evidence”.
But for specific combative reasons encountering the usual culture around here, I’d put the following.
Reason one: I have similar thoughts to the people saying straw man/weak man in this comment subthread, but I’d say “wrong target” or something like that instead.
I’m a Norwegian Protestant, my best friend is a Polish Catholic, and the two of us agree that we’d much rather commune with a typical Russian Orthodox than with any typical American Christian – whether Protestant or Catholic – because we share the view of America as being full of crazy heretics, although my friend will insist in pointing out that they are mostly heretical Protestants. And most of the time when I read something on Less Wrong talking about religion I mentally parse it as saying American Protestantism instead, and it makes much more sense.
Epistemology, Bayes, rigorous formulations of Occam’s razor, et cetera, are great weapons when your enemy is going “You just gotta have faith”, which I gather is being said by a nontrivial faction in American culture wars. But where I live, “You just gotta have faith” is heresy, and I often find myself nodding along with whoever is attacking that position. And this goes up to eleven when Yudkowsky takes pot shots at creationism.
Reason two: Original sin and the other Problem of Evil.
I have yet to encounter a satisfying secular account of evil. The great majority that I’ve encountered seem to fall into one of two camps: pinch from Christianity and cross out the bits about church and sex and family, or do away with the concept.
Reason three: Meta level arguments that many of the critics are out of their depth. (Warning, long and rambling.)
C.S. Lewis has some rather acerbic comments on the professional literary critics proposing to trace the genesis of a text in Fern-Seed and Elephants. He notes that he has a book with a talking lion, and some other author has a book with a talking tiger, and the “reconstructed history” of these books says that the former led to the latter, but Lewis has spoken to that author and has firsthand knowledge that this didn’t happen. Then you have various reviewers talking about how the Ring of Power written about by Tolkien represents, symbolizes, or is inspired by, the atom bomb, but again, Lewis has firsthand knowledge that this is wrong, because Tolkien was writing about the Ring of Power before the Second World War even started. If this is the level of spurious conjecture we can expect when people who study books for a living opine on how a story from their own time and place came to be, he asks, why the hell should we listen to a word they have to say about the “real origin” of stories from a thousand years and a thousand miles away?
Scott had something similar in Are You a Solar Deity?, and I’d also point to xkcd’s Physicists (“Just model it as a [simple object] and then add some secondary terms to account for [complications I just thought of]. Easy, right?”) as a related failure mode: filtering everything you read through a single prism of thought shaped by your preconceptions. It also ties back to my first reason: when you see someone arguing that understanding the size of hypothesis-space and privileging the hypothesis discredits Christianity, and you read their argument but interpret it as discrediting, say, fideism, everything else they have to say starts to look a little weaker; you’ve seen them misrepresent something outside their field, and you conclude they should stick to probability theory, or else read the arguments and evidence for that particular hypothesis.
Which has its own problems in that the arguments and evidence for the Christian hypotheses may be several hundred pages of theology using technical jargon, with multiple contradictory several hundred page sequels going “We realize there’s a bit of a weakness on points X, Y, and Z, also G and H are unclear and we’re not sure how to deal with it, but here are some approaches that seem reasonable to us.” with each sequel explaining a different school of thought’s perspective on the disputed points. At which juncture the critic is entirely entitled to object “Squid tactics!” (i.e. spraying up a cloud of ink as a distraction) and invoke the heckler’s veto on bores and demand a summary version so he doesn’t waste his life reading a thousand pages of speculative theology. This is a hard problem to resolve.
And then there’s the matter of the soul. You’d think nobody had heard of the Fifteenth Ecumenical Council.
(humorous pause for chorus of “the what?”)
It’s mostly known for ordering the dissolution of the Order of the Poor Fellow-Soldiers of Christ.
(pause for “the who?”)
The Knights Templar. Which led to a lot of speculation about where the Knights’ money went, and various conspiracy theories that continue to this day. And that greatly overshadows one of the lesser-known documents from the Fifteenth Ecumenical Council, which responded to a dispute at the time by setting down an official position statement, binding on all Christians on pain of anathema, on what a soul is and who has one. To make a long story short, the soul is the essential form of the body. Translated from medieval-speak into modern terminology, their position looks to me very much like a form of pattern theory of identity written down seven hundred years ago. But very few critics of the “soul” are attacking this position; they’re usually attacking the popular consciousness idea of the “soul” as a sort of ghostly puppeteer pulling the strings of the body. I can hardly blame them for this, but it does leave me wondering where else criticism may be grossly misplaced because it’s attacking idea-as-refracted-through-pop-culture instead of the idea-as-stated-by-official-idea-stater which I’m ignorant of.
+1
The problem with saying “wait, we defined a soul at the Fifteenth Ecumenical Council” is that “we” is pretty narrow. It’s certainly not the “true” definition of a soul; it’s a definition by one sect of one religion among all the religions that believe in souls. Arguing with people who define a soul differently isn’t arguing against a distorted version of the true belief–who’s to say that Muslims or Protestants have the distorted version and the Fifteenth Ecumenical Council has the true one?
And even when arguing with Catholics, who’s to say that true Catholic belief is the one from the Fifteenth Ecumenical Council, and false Catholic belief is the version that everyone actually believes in? The Fifteenth Ecumenical Council isn’t an authority to me. When they say that a soul means X and a guy on the street says that a soul means Y, I have no reason to pay attention to one of them more than the other, and when I address the latter I am at least addressing a living belief that is in the mind of an actual human being.
“And even when arguing with Catholics, who’s to say that true Catholic belief is the one from the Fifteenth Ecumenical Council, and false Catholic belief is the version that everyone actually believes in?”
Would you adopt this position toward evolution, too? I’ve noticed a certain disjunction between the expert opinion and that of the man on the street there, too.
Scientists have a procedure for figuring things out which involves experimentation and peer review. The guy on the street who has false ideas about evolution hasn’t done those things or taken them from a source who did, so I can say that his opinion on evolution is not the scientific one.
Also, there aren’t competing theories of evolution the same way there are competing religions. Erik didn’t try to sell me on the religious definition of a soul, but on the Christian, and specifically Catholic, definition. Even if you can argue that I should believe a theologian over an educated layman, why should I believe a Catholic theologian over a Protestant or Muslim one?
@Jiro:
Well, don’t believe them on the authority of their denominational affiliation! But do, if you’re interested in theological or metaphysical questions, seek out the strongest arguments you can find, instead of wasting your truth-seeking time engaging the inchoate prejudices of the man on the street.
@Jiro:
What Anonymous said. The guy on the street who “believes in” evolution both thinks he evolved from a monkey and thinks that believing that he evolved from a monkey marks him as a member in good standing of non-rube society. That his version of evolution is as much a travesty as the creationist’s doesn’t falsify evolution–it just means that the guy on the street may not be your best interlocutor if you want to assess an idea.
The man on the street, if he has any idea at all what an atom is like, is picturing the little solar system-like illustration of the Rutherford model from a half-remembered school textbook, and probably thinks the electrons are little balls revolving around the cluster of proton and neutron balls.
If someone is trying to defend modern physics to the man on the street, is it fair to ask the physics defender never to mention electron clouds or quantum fields, but instead to defend a picture of little orbiting billiard balls which no physics expert believes?
Arguing against “bearded man on a cloud” versions of God, creationism, anti-intellectual fideism, divine command theory ethics easily defeated by Euthyphro dilemmas, etc., is fine if your goal is to convert half-educated fundamentalist Christians to atheism; it’s an arguments-as-soldiers tactic fit for rhetorical use on an atheist apologetics blog.
But if one’s goal is to get at the truth of a matter, then engaging the most carefully constructed, precise, expert versions of the worldview you’re challenging has to be the only thing that’s worth a rationalist’s time. Fight steelmen, not strawmen and weak men. Even if–especially if, given the “Cowpox of Doubt” issue–the weak men (ghostly ectoplasmic Cartesian souls as opposed to Thomistic formal causes; billiard ball electrons as opposed to properly understood electrons) are popular with the guy on the street.
Guy in street is not a professional theologian. Your objection is like saying “Why should I believe Dr X and Professor Y when they tell me this rash is shingles and this is how you treat it, when Bloke I Stopped On Corner says his cousin Jill had the same thing and she cured it by rubbing plain Greek yoghurt on three times a day?”
Deiseach, the problem is of course the Dawkins-esque attitude that a professional theologian has no expertise: that there is nothing for the theologian to be expert in except logorrheic, obscurantist “priestcraft” (as bien-pensant Edwardian Whigs used to call our side’s alleged Jesuitical wool-pulling; modern Whigs like Dawkins would say “BS” or “squid ink” or something).
As Dawkins wrote (in his usual enjoyable, readable style) in “The God Delusion”:
Now, this view, that theologians (or metaphysicians, for that matter) have nothing to teach the scientist (or the scientistic non-scientist science enthusiast), is of course naive, as common-sensical rejections of ill-understood fields (e.g., evolution, ironically enough) so very often are. But it is a common attitude, and it is the attitude that, IMHO, chiefly undergirds the stubborn will to debate theological and metaphysical weak men.
As Dawkins wrote (in his usual enjoyable, readable style)
Please don’t take it as an insult if I ask is that meant sarcastically or sincerely? I can’t stand Dawkins’ style, which makes it very difficult for me to be fair to the man, but honestly it’s hard to read his stuff when at an interval of every five sentences he writes something that makes the top of my head blow off.
Since I can’t give him a fair go by reading his views at length, I instead extend as much charity to him as is possible for me by not violating the Eighth Commandment* (through gritted teeth and with white-knuckled determination) in his regard, which means that when I came across a juicy nugget in “Private Eye”, it was very difficult for me to restrain myself from sharing it around so we could all point and laugh. Sometimes religion is inconvenient like that 🙂
*For those who may be curious, from the Catechism this is part of the offences it is possible to commit against the Eighth Commandment (Catholic numbering) Thou shalt not bear false witness against thy neighbour, which I mean in this instance:
No offense taken at all. Honestly, I really do enjoy reading Dawkins. “The Selfish Gene” delighted me when I read it.
A lot of the reviews of “The God Delusion” said it was brilliant, but that it was too bad Dawkins was so off-puttingly mean. I was disappointed when I read it, because I had the opposite response. Even though I’m a Catholic, I thought his acerbic put-downs of our worldview were kind of entertainingly done. Take “why not the chef or the gardener?” There’s something pleasantly random-seeming about the examples chosen. It’s funny. However, I found the arguments to be mostly easy takedowns of straw men and weak men like creationism and the “bad bits” of the Old Testament.
So, yeah. I like Dawkins’ style. “One man’s trash is another’s treasure” and all that.
I like (or at least, can stand) Dawkins when he’s discussing facts or things within his own sphere of knowledge, but then he starts jumping in with pronouncements about other matters on which his opinion is about as qualified as that of “the gardener or the chef”, and with an air of smug ‘I just sprained my wrist from patting myself on the back’ self-congratulation on how smart he is.
And now I’ve probably trashed the Eighth Commandment parts I mentioned previously with that little snarl at the man 🙂
Whoa, OK, I’m impressed. But then, given what very little I know about what the Aramaic word for soul meant in the time and place of early Christianity, it fits.
Caue: I’m an ecumenical (small-o) orthodox Christian (i.e., I affirm the basic creeds) and an evidentialist Bayesian. I would not be a Christian if I did not think I had good evidence that it was true. Unlike many of the other respondents above, I don’t go in for all of the Catholic/Thomist metaphysics and epistemology (although I respect a lot of that work), so my perspective might be somewhat different from theirs.
Obviously this is a huge question, but let me try to outline my perspective for you, which is broadly similar to that of many other Christian evidentialists like Richard Swinburne or Tim McGrew:
– The intrinsic probability of classical theism is low, but not outrageously low. That a perfect personal being is the ultimate explanation of the world is a comparatively simple claim. The intrinsic probability of Christianity is of course much lower, although I have some sympathy with Swinburne’s argument that a personal God would be likely to reveal himself to us at some point, which raises the conditional probability of Christianity on theism.
– Religious and mystical experiences are good evidence for some kind of transcendental reality, although they don’t provide strong evidence for theism over some kind of non-theistic religious hypothesis. Naturalistic explanations of these experiences are unsatisfactory. See Caroline Franks Davis’ book on religious experiences for a good defense of this claim.
– The fine-tuning of the universe provides strong evidence for theism, and perhaps weaker evidence for some kind of “axiarchic” hypothesis, like John Leslie’s (which holds that goodness is itself causally productive). The issues here get technical quickly, but I’m confident that most of the standard objections to the fine-tuning argument (e.g., Sober’s selection-bias effect objection) are unsound. For some truly remarkable recent work on this, see Robin Collins’ (unpublished, but available on his website) essay on fine-tuning for discoverability. Certain fundamental aspects of physics appear to be such as to be maximally discoverable by embodied conscious agents; for example, the cosmic microwave background radiation (our main evidence of the Big Bang) is as strong as is theoretically possible.
– All of this only gets you to theism, at best. To get to Christianity you need to look at the historical evidence for the reliability of the New Testament, in particular in its miracle reports. I think this evidence is extremely strong. On this topic, Tim McGrew’s Reliability of the Gospels series (http://www.apologetics315.com/2012/11/audio-resources-by-tim-mcgrew.html) provide an excellent introduction. I especially recommend the Internal Evidence for the Gospels talk. See also his paper on the resurrection (coauthored with Lydia McGrew) in the Blackwell Companion to Natural Theology.
– I take objections to Christianity like the problem of evil seriously, but in the end I think their weight is more than counterbalanced by the above considerations. Most other objections to Christianity I hear target views that I neither hold nor see as essential to orthodox Christianity.
– Although you said you wouldn’t raise objections, I’m more than happy to dialogue about any of these reasons if you’d like to continue the conversation.
>”Although you said you wouldn’t raise objections, I’m more than happy to dialogue about any of these reasons if you’d like to continue the conversation.”
I normally might take the offer, but in this instance I think most objections I could raise would also apply to other people commenting.
I got interested by “naturalistic explanations of these experiences are unsatisfactory”, but the recommended book seems to be above my “random curiosity” price level. Can you recommend a defense of this somewhere else?
Fair enough — we academics tend to take for granted availability of books through libraries etc. Unfortunately most of the other stuff I’ve read on this particular issue didn’t strike me as being as good. Most theists who write about this run versions of this argument that strike me as epistemically problematic.
The Stanford Encyclopedia of Philosophy is always a good resource: they have (free) articles on Religious Experience and Mysticism. I’m confident they will offer a good introduction to philosophical discussions of the topic, but I haven’t read them, and so I don’t know if I would endorse the pro-theistic cases outlined in them.
The way “nerd” is used popularly (e.g. in the discussions prompted by Scott Aaronson’s comment) denotes a cluster in personspace, including as typical defining characteristics: high technical ability in subjects like math and programming; recreational interests in some stereotyped areas like comics, gaming, and science fiction; general social awkwardness, low interest or ability in keeping with conventional social mores, and lack of romantic success.
In my experience, this seems to be a distinctively American cluster. Most examples I can recall of people fitting all these characteristics at once are either real American people or characters from American-produced media. In general, people from Europe or Latin America with “geeky” technical or recreational interests have seemed to me better socially adjusted and romantically successful than their North American counterparts.
Does this impression match anybody else’s? I was going to start speculating on the explanations for this, but I remembered my Borges, and refrained from doing so until verifying that it is a real pattern and not an imagined one.
The cluster was very recognizable in urban Brazil as I was growing up. It’s very different now given the immense popularization of nerd culture, but I suppose this is happening everywhere in the West.
Hollywood’s depiction of jocks was much less recognizable to me than that of nerds.
Frenchman here. The word used in the 90s was “intello” (very rarely towards the last years, geek) rather than nerd, for very similar characteristics and academic success. It was “explained” to me very pointedly that it was important that I should not even think about having a social life, and certainly no romantic prospects. YMMV.
I find the concept of “intello” somewhat different from the american “nerd”: I think it’s much more distinct from the “geek” cluster, and less negative (“intello” is someone who thinks his intellectual interests put him above the others, “nerd” is someone whose intellectual interests put him below the others, something like that).
I can’t really relate to that (despite being an uber-nerd); by whom? In what context?
(the most “anti-nerd” hostility I got was by lower-class immigrant-background assholes who thought that made me gay or something; nothing “mainstream”)
If I had to come up with a just-so story for what’s going on in the US, it’s that here *even the upper classes* have an assumption that being too intellectual makes you “gay or something.” The influence of the Progressive movement (think Theodore Roosevelt) was America-specific and very big on the distinction between being manly and active vs. an effete intellectual.
Yep, whereas on the other hand France sees itself as more intellectual and artistic and idealistic and refined than the rest of the world; compared to us, Americans are money-grubbing hicks or shallow money-grubbing suits with fake smiles, Germans are humorless and boring with no vision, the English are short-sighted businessmen and toadies to the Americans, etc.
So being anti-intellectual is a bit being anti-French (and a fair amount of the lower-class immigrant-origin assholes *are* anti-French too).
I was the anon, did not notice I forgot to refill the form.
In my middle and high schools, the “intellos” also had very nerdy in-the-classical-sense interests. I disagree with you on the above/below because it suggests I chose the moniker and its implications. To an American eye, I fail to see how I would not have been called a nerd (except for the sports aspect, but that was outside of school).
By “explained”, I mean regular physical violence, humiliations, destruction of property, imaginative attempts to get me expelled, threats to people seen talking to me. Mostly from upper-middle class kids with otherwise good reputations, to be honest. Obvious homophobic aspect at first (at which point it was only boys being abusive little shitheads). I do not think my experience was typical, but the climate at my schools was really rotten with a mean anti-intellectual vibe.
It’s funny you mention Borges without noting that he was a prototypical – Argentinean – nerd: bookish, glasses-wearing, plays games (chess), fascinated by science fiction & fantasy, awkward when not discussing his specialties, and absolutely hopeless at romance or women.
That is true and indeed amusing.
On the other hand (and this also goes in reply to Cauê and Anonymous), I am not saying there are no, or even very few, non-American nerds; just that the prevalence of the cluster is not so marked. Some recent discussions seem to assume that a majority, or at least a large plurality, of the males in tech or mathy sciences are “nerds”, and this doesn’t at all match my experience from other countries.
That was me.
For what it’s worth, my experience matches yours. I’ve seen nerdy types of people, of course, but “nerd culture” doesn’t seem to be a thing to the same extent. And at the not-too-shabby math and physics departments at my university, people who would register as nerds in America seem to be in the minority.
I was skeptical about your characterization of JLB as super-unlucky in romance – I once worked with an Argentinian guy who met Borges a few times (at polo matches or something equally aristo). My coworker raved about Borges’s stunning “assistant”/ girlfriend who accompanied him everywhere.
Wikipedia suggests that she existed, married him towards the end, and inherited the rights to his works.
While it seems like his love life may not have been real busy (or fulfilling), this is one of a few reasons why I am a bit uneasy about calling Borges a proto-nerd. It’s like calling Sartre a proto-nerd: a lot of the characteristics fit, but some important ones just don’t (and can’t, for public intellectuals of a certain generation: being a PI was really high-status back in the day).
I think Borges’ relationship with Maria Kodama, which started when he was well into his seventies and internationally famous, is not really evidence that Borges was not a “shy, awkward nerd” most of his life. (The way his relationship with Estela Canto developed was probably more typical of him.)
In my Eastern European country, comics are unpopular and gaming is hardly correlated with nerdiness (if anything, the stereotypical teenage gamer here is a boy who plays a lot of FPS games like Call of Duty or Halo and likely neglects school, rather than one with high mathematical ability; whether this stereotype is correct is a different question). Science fiction is popular among nerds, but I would say that few people read SF only; many boys and girls who would not consider themselves nerds have read at least a few SF books (even though most of them aren’t new books). Given all this, the recreational interests of nerds seem less different from those of non-nerds. There are some differences, but they seem less pronounced than in the American case.
I have never been an American nerd, therefore note that I am comparing real people I know with stereotypical American nerds I know only from films or internet, which means that my conclusions are based on rather shaky grounds.
Isn’t Japan famous for a strong nerd culture? Up to the point of having a significant openly asexual subculture? (Asexual in the sense of “not engaging in romantic relations with other humans”; maybe “autosexual” is better? I’m thinking of the guys that form relationships with pillows or real dolls or avatars)
The Japanese nerd stereotype notably doesn’t have much to do with technical or mathematical ability or academic success, although it covers most of the other bases. (And the list of interests is slightly different, but that’s true for every culture that has an analogous concept — compare trains in Britain.)
The Japanese train nerd stereotype seems very similar to the British one from what I’ve seen.
Yeah, I’m saying that there isn’t a corresponding American train nerd stereotype. We have train enthusiasts, but the stereotype of them is something like an older white middle- to lower-middle-class man, probably living in a rural area, probably someone’s grandfather, and probably pissing his wife off with the amount of time he spends building model train tracks in the basement.
Nerdiness definitely exists in Britain. I would guess that there are differences across countries – e.g. Doctor Who is (I believe) seen as fairly nerdy in the US whereas over here it is far more a part of popular culture, also comic books are seen as more childish than nerdy over here – but I can easily point you to an empirical cluster of British mostly-males who do a lot of science/maths/programming, are with few exceptions permanently single (NB my nerd-cluster contains a large proportion of Christians, hence short-term relationships are heavily frowned upon, so nerdiness is not necessarily the only factor in this), many of whom play Dungeons and Dragons, etc.
I’m an American who lived in Ireland when I was in my early teens. One of the first differences I noticed was that my classmates who were sort of nerdier seemed at least a little bit sports-oriented, and my classmates who seemed sort of more like jocks seemed more academically and intellectually prepared, whereas these two groups did not partake of each other’s interests AT ALL in my U.S. experience.
Y’know the trope that men are higher variance on IQ tests than women? It felt like that: like I had moved from a nation (America) where everyone was an outlier on some Bell Curve of jock/nerd-ness, to a nation where there was still variation but mostly within the middle part of the Bell Curve that had been missing in the U.S.; IOW, like I had moved from the land of high variance to low.
Perhaps relevant: the Irish schools I attended also seemed shockingly less clique-y compared to the U.S.; at the time it seemed like the total absence of ethnic differences, and relative lack of class inequality compared with my U.S. school experience, might somehow be related to that un-clique-iness.
Yeah, one of the conjectured explanations I had thought of, in case my observation was validated, was the insane importance American high schools give to sports. My idea was that this creates the “jock-nerd” dichotomy by amplifying small differences between athletically inclined and intellectually inclined students, and that because sports are more popular, the jocks go to the top of the social hierarchy while the nerds end up as social pariahs, making them more shy and awkward than they would be otherwise.
There’s a profound anti-intellectualism in American popular life, and a corresponding valorization of athletics. As you mention with sports, I think this tends to make those who are intellectually inclined anyway end up in their own subcultures more. (A parallel might be to compare the degree of exaggeratedly stereotypical mannerisms in a closeted vs. a post-closeted society: I suspect it would look a lot like the degree of “nerdy” stereotypical subcultural interests in America and those cultures most affected by it vs. the rest of the world.)
European places I’ve lived (I’m thinking in this sentence mostly of France, and only secondarily of Ireland) seem to have an ideal of a guy who is fit, smart, and capable of romancing a lady (by the last I mean to invoke poetry and flowers or something like that, not “game” or similar American discourses). Whereas many Americans seem to me to valorize athleticism and denigrate the other two things (academic achievement and the sort of communicativeness and somewhat more metrosexual fashion sense of a certain kind of European) as insufficiently macho. All this is obviously going to affect the “nerdiness” of American male nerds by ghettoizing them (us). And then, of course, once a “nerdy” male subculture exists for American male nerds, it’ll be a natural attractor for similarly intellectual females, so the issues around machismo and sports end up hitting them at one remove. Or something. This is all very inchoate.
BTW: A note–it’s not that the Irish kids I went to school with weren’t REALLY into rugby and soccer, etc: the Irish are a very sports-loving nation. It’s just that there was no conflict for them, at least apparent to me, between that interest and also being really into books (humanistic, scientific, whatever) if you happened to like them, too.
ETA: I suspect that Ireland and the UK are more like America in this respect than France, perhaps in part because of common Anglophone traits, but maybe far more just because American pop culture penetrates deeper into other Anglophone societies. Thus, the “romancing a lady” part of being a masculine guy was more of a Latin European ideal, I think, than anything I’d expect to encounter among most Brits. Still, it’s a continuum. If you compare, say, a British cultural icon like James Bond to a comparably influential American icon, the American will have much less of an emphasis on suavity as a way to do masculinity.
[Oh, God. I have totally brought gender into the open thread. How am I supposed to talk about nerds vs. jocks without gender, though? “That’s so gay!” is like the traditional jock anathema. Ah, well. Delete if you wish, Scott. It’s your blog, and I apologize.]
Irenist: that matches my perception of France, and even more so of China (where – in the school system at least – you’re either a nerd or the shame of your family (and there is little correlation with athleticism one way or the other))
I have met a handful of people who are firmly in my cluster in memespace (interested in computer science/programming/math/AI, interested in speculative thinking about human society and the future, friends with my friends) but are…completely neurotypical. Confident body language, good at dancing, romantically successful, non-neurotic.
Most of them are non-American. All of them are Gentile.
I also think that *even* in America, being interested in scientific/technical subjects doesn’t correlate that well with being interested in SF/comic books/”geek culture”. That’s what The Big Bang Theory gets so wrong — a bunch of postdocs are just not going to be that interested in Star Trek.
Quite right. I myself am an engineer, as are many of my friends, and the only person I know who is a big fan of star trek is my brother, who works a non-specialist manufacturing job.
I suspect there might be generational aspects to this.
I’m a software guy, and I come out of a family that’s mostly scientists and engineers. I’m not a big Star Trek fan although I watched a good amount of it in my childhood.
My dad and uncles, on the other hand, are generally at least casual Star Trek fans (though I wouldn’t call any of them Trekkies) — I’d speculate because they grew up in a time when Star Trek was a lot more of a touchstone in technical culture than it is now.
My grandpa, on the third hand, was a soil biochemist, and not a man of remotely geeky demeanor or habits.
Another Frenchman here. I think we would have two distinct clusters:
* “Geek”, a recently imported concept that pretty much matches the American “computer/sci-fi geek” stereotype, but it’s probably narrower (you won’t describe people as “a something-or-other geek”).
* “Intello” is probably close enough to “nerd”, as Anonymous said, but I don’t think it has any connotations of interest in computers, sci-fi, gaming, comics (just like “intellectual” in English doesn’t). It’s probably a mildly negative term, not as strong as “nerd” (and the negative side is more about pretentiousness than social awkwardness).
Other notable differences:
* The whole jocks-vs.-nerds thing doesn’t seem to exist, and I don’t think bullying is particularly tied to nerdiness (probably a bit, but it seems that US nerds/geeks all consider the fact that they’ve been bullied a matter of course or something), and the concept of “popular” doesn’t really exist either (as something people talk about). It may depend on your environment (nerds growing up in a lower-class area may get bullied, but would probably assign it more to some people being assholes than to their being nerds).
* Comics are pretty mainstream, and aren’t considered as exclusively for children and weirdos; probably because French comics are more intelligent, varied and interesting than American comics.
People talk a lot here about the idea of “nerds vs. jocks”, but I should point out that there’s also the idea of “nerds vs. suits”[0], which I think is more interesting, especially as it ties into Robin Hanson etc.’s notion of “nerds”; I have to wonder how those generalize geographically.
[0] Although Ialdabaoth points out here that the actual people comprising “jocks” and “suits” may largely coincide.
I have recently read about Thomas Szasz, and while I don’t agree with everything he says, I think he had a point. Homosexuality was considered a disease, but not anymore. Pedophilia is now considered a mental disorder, but what’s the difference, besides one being considered morally acceptable and the other not? The same goes for psychopaths and “personality disorders.” Things like depression, schizophrenia, and PTSD seem different because the person suffers and wants to be cured. But wouldn’t a homosexual in the ’50s also have wanted to be cured? It seems like the very act of calling something a disease implies a value judgment.
I’m assuming you mean feeling it, not acting on it. My take is that the “acting on it” part is where all the moral difference is.
Scott wrote about classifying things as “a disease” or “not a disease”:
http://lesswrong.com/lw/2as/diseased_thinking_dissolving_questions_about/
I suspect that piece is disingenuous, because actually blame and stigmatization almost never have positive effects, so that principle doesn’t recover normality in the uncontroversial cases the way it pretends to.
IIRC, psychological disease = abnormal && reduces quality-of-life. By this definition, homosexuality could be a psychological disease in the 1950’s but not in the 2010’s. It also means that pedophilia could be a psychological disease in present-day America, but not in the tribe I vaguely remember reading about in The Blank Slate, where it was standard for prepubescent boys to fellate teens (p(confabulation) = .1).
Conditions like depression or PTSD are not generally considered immoral, although if social pressure can change them, they could be deemed immoral by a consequentialist, and (hypothetically) insofar as they are genetic and we want to prevent sick children, sufferers shouldn’t reproduce.
I wonder why a consequentialist would differentiate between harm to others and self-harm. The main issue I see is that for most self-harm there is no clear remedy. If we put lazy folks on probation, we might inadvertently punish others who were laid off but are trying hard to find work.
Are there types of self-harm where a pure consequentialist should advocate state intervention? Suicide seems one, since the harm is big and I think at least 90% of suicides are emotional reactions, not considered judgments.
Does anyone know the reasons given for listing homosexuality as a disorder years ago? Was the risk mostly seen as harm to others, or self-harm?
By the way, I retract these views. I decided most of what I wanted was either useless to advocate, bad, or impossible.
I’m a Christian, so I think everyone is of infinite worth in the eyes of God, even people who are too disabled to be able to do anything at all for anyone, and that settles it for me. But in case you have a more utilitarian concern about not being “worth” anything to society (beyond your own ability to experience utils or whatever?) because maybe you can’t cure cancer (or whatever) with a low IQ, consider:
All those high IQ academics would be in a bad way without cops to protect them, right? Well, remember Jordan v. New London? The New London PD wouldn’t hire Robert Jordan because his Wonderlic cognitive score came back too high. The NLPD defended itself by saying it was concerned that higher IQ cops would experience higher “turnover resulting from lack of job satisfaction,” i.e., that they’d get bored because they were too smart for the job. The trial judge in Jordan v. New London (D. Conn. 1999) found that the NLPD had a rational basis for its policy, and the ruling was affirmed on appeal, Jordan v. New London (2d Cir. 2000). Now, rational basis is a very low bar, but at least someone thought it wasn’t nuts to think that high IQ might make you bad at a very important job.
So, let’s say that’s true. There are some social roles that a plurality of high IQ people might find boring and depressing, but low IQ people might thrive in. And some of those roles are IMPORTANT. Really important, foundation of our civilization kind of stuff, like sanitation, say.
Now, it’s important to note that presumably some trash guys are quite bright, and the less bright ones may be clever in some other way, like being good with their hands or whatever. But that’s not what I’m talking about. What I’m saying is that even if they’re complete dopes–heck, channeling Chesterton, precisely because they’re dopes–they have vital roles to play in our civilization. They have great worth to the wider community.
More: consider Moravec’s rising floodwaters of AI metaphor. The things we humans think require brilliance, the things we find intellectually impressive, things like chess-playing and theorem-proving, are the first to fall under the rising flood of AI-induced human obsolescence. But cleaning bed sores as an eldercare aide? Dusting a house for a cleaning service? Automating that stuff is hard. But high IQ people are usually bored unto frequent “mental health day” absences by jobs like that. The high IQ types will be lamenting how much they hate the drudgery of their not-yet-AI-doable, menial, non-special-snowflake jobs for years while the lower IQ types just go to work and come home with highly socially functional contentment about their place in things.
So, worst case scenario–you’re dumb. So what? There’s lots of REALLY IMPORTANT stuff that dumb people are optimized for. Stop being miserable about having an objectively important set of aptitudes.
* * *
Okay, next scenario: you’re not dumb. But you’re not, like, Mozart-level smart. More like “envious Salieri from the unfair depiction of him in ‘Amadeus’-level smart,” so you think you’ll never produce a paradigm-overturning scientific or lib-arts breakthrough that saves the world, or whatever. So? On a Kuhnian account where science has equilibria of “normal science” punctuated by paradigm-shifts, all the patient hypothesis-confirming (or disconfirming!) observations of regular stiffs doing normal science are the indispensable foundation for getting to the next paradigm. Urbain Le Verrier wasn’t as smart as Einstein, and spent his career doing Newtonian normal-science things like his 1843-59 data-gathering on the precession of Mercury. But, hey! The oddity he found in Mercury’s orbit turned out to be a big help to Einstein down the road, yeah? So maybe you’ll grow up to be the next Urbain Le Verrier. What could possibly be wrong with that?
* * *
Maybe you’re somewhere between Salieri-Le Verrier and the trash guy. Maybe you’re kind of an average or very slightly bright type who might do just fine in, say, business school, and not get bored by it, but isn’t going to solve friendly AI or cure malaria or anything. Well, you read this blog, right? Go to b-school, get a decent, boring, high-paying job, and donate to MIRI if you’re worried about the need for geniuses to solve FAI, or donate to an effectively altruistic anti-malarial mosquito bed-net nonprofit for Africa, or whatever. Many of the most successful entrepreneurs and executives are bright-but-not-genius types with great common sense, a strong work ethic, and enough intelligence to optimize their business, but not SO MUCH intelligence that they get bored by business and sit around reading the Sequences or some other smart-people reading matter all day because getting a job sounds really boring and beneath them.
* * * *
So are you dumb? Are you average? Kinda bright? Really bright but no Einstein?
Doesn’t matter. There’s something out there that the world REALLY needs, that you’d be perfect for. And, again, some of us think you’re of infinite worth even if you never do anything….
Paul lays out a concept of equality that I don’t think Westerners have a good handle on. We can’t seem to see things as equally valuable without them being equal in all ways. Paul views everybody as part of a body instead. That they are all different is an essential part of the metaphor, but so is their interdependence. Some are called to be teachers, some to be street sweepers, etc. Whatever you are called to be, be that and be it well.
This requires a sense of telos that I’m not sure can survive outside a theistic framework, though. There’s a reason the 10th commandment specifically calls out coveting.
My understanding is that “covet” is actually a mistranslation of the original Hebrew, and the intended meaning was more like “don’t kidnap” or “don’t take hostages.”
What led to that understanding? How sure are you? Are you aware that it is not a popular position?
How do you respond to the obvious question: why is this separate from the commandment on theft? But there are lots of other questions, like: if the word translated as “covet” really means “kidnap,” why is the word translated as “steal” so often used to mean “kidnap”?
I would be interested in sources on this; I’m certainly no scholar of Ancient Hebrew. The instruction against coveting your neighbor’s house does seem to militate against it.
http://goddidntsaythat.com
he’s got lots of other amusing ones too.
Well, Jaskologist, faith may move mountains, but apparently skepticism allows you to carry off houses?
a better link would be this or this
Chamad, the word translated as “covet” by ancient Jews (like St. Paul when writing in Greek), by modern rabbis, and by Biblical scholars generally, also appears in the following verses:
(1) “The images of their gods you are to burn in the fire. Do not COVET the silver and gold on them, and do not take it for yourselves, or you will be ensnared by it, for it is detestable to the LORD your God.” Deuteronomy 7:25.
(2) “Do not LUST in your heart after her beauty or let her captivate you with her eyes.” Proverbs 6:25
Now, are you proposing that (1) should be translated as “Do not TAKE the silver and gold and do not TAKE it?” Seems redundant.
As for (2), how does one steal someone else’s beauty in one’s heart?
Joel Hoffman has some sort of axe to grind. He’s in the minority in his field, and although I’m no Hebraist, I’ll take the consensus of the field over his word, thanks.
(ETA: So now it’s remembering my name but not my email, so I don’t get the little avatar. What is up with the commenting system, I wonder.)
Different parts of the body are not equally essential. People gladly get rid of appendices and wisdom teeth, don’t get too upset over losing blood, reluctantly amputate limbs or excise internal organs. Even hearts, lungs, livers and kidneys can be replaced, to some extent. But brains cannot.
Kind of illustrates my point, doesn’t it? Our minds rebel against the idea that things could be valued the same if we can find some axis along which they are not equal. Before you know it, you have the ears complaining that the eyes unfairly dominate the field of seeing.
Yes, you can chop off your foot and survive. But the whole body is hobbled as a result; everyone is lessened. The head cannot say to the feet, “I don’t need you!” Just as one part is not supposed to do everything, a single part is no good on its own. They are valued precisely because of their relationship to the whole.
The equality of the undifferentiated mass is the end goal of Buddhism, Communism, lawn mowing, and the heat death of the universe. Christianity has the opposite aim.
No, your human body analogy doesn’t work. (Well, maybe it works for you, but you aren’t trying to persuade yourself, are you?)
Try an ecosystem or something.
Musical harmony relies on every note not being pitched equally high.
Dante’s poetical take on this doctrine of harmony in hierarchy (Piccarda’s speech in Paradiso, Canto III) is helpfully discussed here:
http://www.theamericanconservative.com/dreher/paradiso-cantos-iii-iv/
Here’s a prose translation of the dialogue from the Poetry in Translation website (the first speaker is the narrator, the second Piccarda):
Tolkien presents a musical image of the loss of this sort of harmony in the episode of the Discord of Melkor in the “Ainulindalë,” which is as good a description of what the rebellion of Lucifer must have been like as anything I’d hope to see in Milton, just as the description of how Eru incorporates the discord into a greater, deeper composition is the finest artistic evocation of the Christian solution to the problem of evil I expect to see:
http://www.houghtonmifflinbooks.com/features/lordoftheringstrilogy/lessons/seven/discord.jsp
This seems patronizing to me, but I have no idea why.
The “there’s a place for everybody” thing feels like something that would be quite possible to refute if you worked on it (smart people have started automating menial jobs; something seems to happen somewhere between “trashman” and “low-functioning autist” that makes the argument stop working; what if you’re only menial-labor smart and still find the sort of menial labor there’s market demand for boring and soul-crushing?), and also the sort of thing which you’re supposed to say for social nicety and not think too hard about refuting.
Scott, you wrote on tumblr:
“So while I agree that shaming fat people doesn’t work, that studies have shown it doesn’t work, and that it’s a jerk move besides – I notice I am confused based on studies of smoking, drug, and alcohol treatment.
All of these studies show that if a family doctor, during an appointment, spends a minute or two telling patients that Smoking Is Bad For You, then on a population level the percent of their patients who smoke goes way down. It won’t work for everyone, but given the high patient volume it will work for more than enough people to be worth it.
(I wrote a little about this in my Alcoholics Anonymous post, so that would be a good place to find the research).
I’m not sure what the difference is between this and fat-shaming that makes the first work and the second fail.”
One obvious difference between the cases is that smoking and drinking are much closer to a binary on/off set of states: you’ve either quit drinking or you haven’t, you’re either a smoker or you’re not. It’s a lot fuzzier with overeating – you don’t “quit” eating, you just kind of vaguely adopt a healthier, lower-calorie diet. I could see it being much easier to gradually slip back into overeating without noticing than it would be to accidentally start drinking or smoking again. So even if quitting smoking and drinking are actually harder tasks to complete in terms of willpower or akrasia or whatever, they might wind up being completed more often because people perceive them as just that: tasks, with very clear and well-delineated win conditions.
In other words, there’s no dieting equivalent to “I smoke zero cigarettes now” or “It’s been 100 days since my last drink”
Which is a *huge* problem. I also think it’s the source of the success some people have on Soylent, meal-replacement diet foods, and even low carb. ENTIRELY giving up regular food (as with Soylent or one of those diet-food regimens) or just a category of food, might create more of that nice binary effect. When I’ve done low carb, I found it much easier, in a Schelling-fence sorta way, to entirely give up certain categories of food than it had been to find the self-discipline to moderate portion sizes.
Or, as my father-in-law likes to observe about his own low carb success:
“Abstinence is easier than temperance.”
Did I miss a *controlled* study that shows fat-shaming doesn’t work? The Sutin and Terracciano study which made the rounds a year or two ago showed that people who have experienced fat-shaming in the past are more likely to gain weight in the future, even when controlling for current weight. This is very weak evidence for “fat-shaming doesn’t work”. Replace Scott’s controlled studies on doctors’ “brief opportunistic intervention” with the same awful methodology, and what will you find? People with a greater history of doctors telling them not to smoke and drink will have a much greater future probability of smoking and drinking, even if you control for current consumption. Not because the doctors are being counterproductive (the controlled studies show they aren’t) but because there are hidden preexisting propensity variables that aren’t close to being adequately controlled for.
I’d love to see an Ig Nobel Prize won this way: compare past chemotherapy patients to people with no history of chemo, use blood counts or something to “control” for the current state of the cancer, act shocked when recurrences in past patients turn out to still be much more common than brand-new cancers, and declare that you’ve proven chemotherapy doesn’t work.
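The hidden-propensity confound is easy to demonstrate in a toy simulation (my own sketch, not from the comment; all variable names and numbers are made up). Below, the “intervention” has zero causal effect on the outcome by construction, yet controlling for a noisy proxy of the hidden propensity still leaves a positive association:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A hidden propensity drives everything; the intervention itself does nothing.
propensity = rng.normal(size=n)
past_intervention = (propensity + rng.normal(size=n) > 0).astype(float)
current_state = propensity + rng.normal(size=n)   # the noisy variable we "control" for
future_outcome = propensity + rng.normal(size=n)  # note: no intervention term at all

# "Control" for current state by residualizing the outcome on it
slope = np.cov(future_outcome, current_state)[0, 1] / np.var(current_state)
residual = future_outcome - slope * current_state

# The spurious association survives the control
spurious = np.corrcoef(past_intervention, residual)[0, 1]
print(spurious > 0.1)  # → True
```

Since the intervention term never enters future_outcome, any positive correlation remaining after the “control” is pure confounding, which is exactly the failure mode attributed here to uncontrolled propensity variables.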
Is this comparing like with like? IIRC if a family doctor, during an appointment, spends a minute or two telling patients that Being Overweight Is Bad For You, then on a population level the percent of their patients who are overweight goes way down. But that’s quite a bit different from shaming.
I have a friend who very successfully lost weight by switching to “I will not eat or drink anything that has had sugar added to it” (so fruit is okay, soda is not). The even simpler “I will not drink any soda” also seems to have a fair amount of backing.
A request for explanation regarding philosophy of mind. (I do not currently adhere to any philosophy of mind; every one seems problematic in some way or another.)
I do not see how a particular view in philosophy of mind–namely, that subjective, first-person experience is a result of computation–is coherent. This view does not make sense to me because I can give an absolutely complete explanation of why any computation, given a certain input, results in a particular output, without ever appealing to subjective, first-person experience within that computation. Thus, the “consciousness” that this theory seeks to explain is completely unneeded for one’s explanatory and predictive apparatus, and so one should not posit it, which is obviously a problematic conclusion.
One rejoinder to this is that consciousness is like the wings of an airplane. Sure, you could model the wings using quantum physics, but that would take forever; so also, you could model another person as computation, but that would also take forever. So it’s legitimate for you to talk about airplanes as having wings, and people as having consciousness–and airplanes really have wings, and people really have consciousness, even though these aren’t ontologically fundamental.
But this rejoinder seems to fall through. Wings in the engineer’s model are defined in terms of a certain set of inputs and outputs. So also, if you wanted to model people at a higher level in this case, you would also define “consciousness” as a certain set of inputs and outputs: angry people are those apt to hit you, to say things like “I am angry,” and so on. But this seems to be behaviorism–that there’s no more to one’s mental states than the actions they produce or are apt to produce.
More generally, as far as I can tell the LWosphere tends to say “Ok, we can’t really see how computations can produce consciousness at the moment, but in the future we’ll have a breakthrough that will let us understand how this is possible.” (Show me where I should look if I’m wrong, please) But I can’t even imagine what kind of breakthrough that would be. If we were to be able to put someone in a magical MRI, which tracked everything going on in their brain at the ion-pump level, such that we could see exactly what sensory input and memories combine to make someone say “Hey, I’m experiencing qualia!”–all this seems to do is make it so that we no longer have reason to think someone who says “I’m experiencing qualia!” is experiencing qualia, because we now have a predictive apparatus which explains what they do in every detail, without ever invoking qualia.
I’m actually kind of confused by this sort of argument, and will try to explain why. It seems to me that my conscious thoughts motivate my behavior; to take a trivial example, I seek out or avoid certain foods because of how I feel about the experience of taste I get from them. More generally, it just seems to me that conscious states do something, at least in general. The idea that you could separate out the part that does something (what you call the “computational” part; I wonder if the terminology is part of the problem) from what it feels like just seems absurd to me; what it feels like just is what does things. And so if the neuroscientists say these neurons doing that are what are doing things, it seems to follow trivially that these neurons doing that must be what it feels like. Not that you can capture the feeling from an external perspective, of course; no matter how you look at them, looking at neurons is surely going to feel different than having them hooked up and firing in your brain. But that point hardly invites the generation of any mysteries.
No, I absolutely agree that conscious thoughts motivate your behavior.
If I gather what you’re saying correctly–it’s that the computation is identical to consciousness, so from the inside “Feeling hungry” is identical to “Neurons firing in X and such patterns,” and so there is no problem. The computational part is the motivated part.
The thing is, since any computation whatsoever is completely tractable without reference to conscious thoughts–or any reference to what the computation is like “from the inside”–I’ve really got no need to posit an inside, in hunger or in any computation whatsoever. Why posit an inside, when everything that’s going on from the outside can be explained without an inside?
My argument is that (Consciousness = computation) and (All things done by computation can be explained without reference to an “inside”) –> (No need to posit view from the inside). This last seems false, which leads me to drop (Consciousness = computation).
I need to head out for now–but does that make sense?
No, it doesn’t. You still seem to be treating the “from inside” as if it must be a different thing from the computational part, something that would need to be added. But, as I said, since the computational part is what actually does things, responding to the environment and motivating action, and the from inside part is what does things, responding to the environment and motivating action, it seems that they have to be the same thing. Izaak was perhaps right to emphasize the quantum physics analogy; to me, it seems as if you were complaining that if you just look at all the quantum activity, you never need to posit the wing. Well, of course you don’t need to posit it; it’s right there. The quantum activity is the wing. I continue not to see why it shouldn’t be the same for consciousness.
Thanks for following up. I still disagree. Sorry, this is going to be absurdly long. This has been bothering me for a while.
“Particular kinds of computations” and “subjective experience” are clearly different in intension; so some kind of evidence is required to say that they’re the same in extension, just like it would require evidence to show that Shakespeare and the Earl of Oxford are the same in extension. I’m not trying to treat the motivated part as if it must be something different from the computational part–but I am starting with the premise that there can at least be evidence pro / con this, which I think is fair.
The evidence offered for identity is that–if I grok what you are saying–when we run a brain scan, it seems like particular computations have input / output relations with the environment; and when we introspect, it seems like particular subjective states have input / output relations with the environment. So if every change to the former results in a change to the latter, and every change to the latter results in a change to the former, then it seems that the former and the latter are the same–just like punching Alfred Bordon at time t, and seeing that Alfred Bordon has a bruise at time t + 1 day, is evidence that the Alfred Bordon at time t is the same as the Alfred Bordon at time t + 1.
I accept this as evidence and as quite good evidence. (This theory of mind is most probable of those I know.)
(I don’t think that consciousness is like the wing. Wings are built of things in quantum states in such a way that we could, in theory (maybe), explain how quantum phenomena cause the appearance of solidity, the general features of lift, etc., which we associate with the physical wing. The gross features of your anatomy, similarly, are built of minicolumns and computation-doing things such that you can explain the gross behavior of your brain in terms of the fine. There hasn’t been an explanation of conscious states in terms of minicolumns and computation-doing things in the same way that there has been an explanation of the grosser features of the brain in terms of minicolumns and computation-doing things; the supposed knowledge of the identity of these former two (computations in the brain and subjective brain states) seems to depend on the argument given above, not on any detailed understanding of how conscious states arise from computation. If we did know how conscious states arise from computation in this detailed and predictive way (that kind of computation has an inside, that kind does not), then we would be able to settle arguments about strong AI and vegetarianism and vegetative states much more easily than we currently seem to be able to. [If there is this kind of description, I’d be interested in hearing it, of course.])
Now let’s switch to a problem. In every case of any kind of computation that occurs outside of a human mind, we’re not tempted to think about an inside. I can implement A* or a Bayesian sorting algorithm or what have you, and it never crosses my mind to think of an inside. I might think about efficiency, memory use vs. demands on the processor, whether the algorithm always comes up with the best answer; but thinking about the “inside” would seem dumb. Talking about an “inside” is totally causally superfluous in all these other cases–which, it should be noted, are the only cases when we have a really clear understanding of what is going on. We’re still figuring out Really Important Things We Didn’t Know about the brain; we don’t have a Really Clear Understanding of how the brain does computation the same way we do about A* and naive Bayes (I’m pretty sure, at least). Talking about “insides” of computation is to use a kind of language foreign to the formal study of computation and paste it on to the study of computation, without a clear explanation of how they relate.
So on one hand we have
1. Changes to mental computations seem to occur iff changes occur to subjective experience.
2. A thing intensionally different from another thing, which changes iff the other thing changes, is probably identical to the other thing.
3. So mental computations are probably identical to subjective experiences.
But on the other hand we have
4. Positing inner states is unnecessary to explain what any computation does.
5. So if (the relevant part) of what humans do is a result of computation, we have no reason to judge them to have inner states.
6. Ah, um, modus tollens.
And again, the obvious response is that from our own, introspective experience we use our inner states to explain our own actions–so if these are computation, then computations must have inner states. But if every other kind of computation is inner-stateless, then perhaps the right conclusion would instead be that there’s more to conscious states than simply computations. Our current state of knowledge about the human brain is also compatible, say, with the idea that consciousness requires a particular physical substrate; or (probably) with whatever it is that Penrose thinks the mind does; and so on.
I’m not really happy with the above. I think the first enumerated argument above is probably stronger than the second, but I think the second decreases the probability of what the first argues for.
@SeekingOmniscience
“4. Positing inner states is unnecessary to explain what any computation does.
5. So if (the relevant part) of what humans do is a result of computation, we have no reason to judge them to have inner states.
6. Ah, um, modus tollens.”
My reply to 4:
Well, yes. And if I have a complex computer performing an operation on two vectors such that a third vector perpendicular to both of them is produced, then this can entirely be described in terms of quantum states of electrons and silicon atoms (with the occasional dopant) without ever positing the mysterious abstract concept of a “cross product.”
Nonetheless, for all inputs, its outputs are identical to an algorithm implementing a cross product, so we say it performs a cross product. And, indeed, it is. In fact, this is what “this algorithm performs a cross product” means.
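To make that concrete (a minimal sketch of my own, not Eldritch’s): the arithmetic below never mentions the abstract concept, yet because its outputs match the cross-product definition for all inputs, it simply is a cross product.

```python
def cross(a, b):
    """Componentwise arithmetic on two 3-vectors; nothing here "knows" it is
    a cross product, yet for all inputs it computes exactly one."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

print(cross((1, 0, 0), (0, 1, 0)))  # → (0, 0, 1), perpendicular to both inputs
```

The same move scales down to voltages and silicon: describe the system at whatever level you like, and the higher-level description is still true of it.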
Likewise, it’s entirely possible to fully analyze the brain without ever making reference to conscious states, just neuron weights and synaptic potentials – just as it’s entirely possible to fully analyze a computer without ever making reference to, say, “Microsoft Windows” (which it is in fact running) and simply looking at transistors and voltages and hard-drive magnetic fields.
That doesn’t mean the conscious states are not, inevitably, there. Those bit-patterns and hard-drive states and their passage and transformation through the circuitry, and the way these patterns react to input from various channels to produce internal state-changes and output ARE, indeed, Microsoft Windows. They aren’t anything else, and indeed can’t be anything else. Your computer is running Windows.
So it’s not quite correct to say that your computer can be entirely analyzed without positing Windows. Your computer can be entirely analyzed and perfectly simulated (step-by-step, with pencil and paper) without ever making reference to the abstract, higher-level features of what those voltages and magnetic fields “mean” – but they nonetheless translate to Windows anyway. Even your pencil-and-paper implementation of your computer’s voltages and magnetic fields will still be running Windows, even though you may not realize it. (And if you deleted Windows, you’d get different results.)
Same deal with the brain. You can analyze the brain without ever making reference to conscious states, but they’re still *there* – you’re just talking about a system which gives rise to certain neuron patterns, which is like talking about “1 + 1 + 1” instead of “3.” They certainly look different, but they’re the exact same thing from a different frame of reference.
@Eldritch:
I will need to think about this more.
It still seems fairly clear to me that my painful, pen-and-paper implementation of Windows combines small parts into larger parts in a way that permits us to understand how these large parts inevitably result from these smaller parts–but that we have no such knowledge of how computations, etc., combine to inevitably form subjectivity. (We might have knowledge of how lower-level brain functions combine to form different higher-level brain parts, which result in visible behavior–this seems to me analogous to the example with Windows–but this doesn’t mention subjectivity, so the example seems dissimilar.) And this doesn’t seem to me merely a case of a lack of current knowledge; I don’t know what an experiment would look like which showed that a particular (conscious) higher-level thing was a result of lower-level ones.
If this is an intelligible question–would you say it a result of mathematical or of scientific process that one comes to the conclusion that X computation results in consciousness? Maybe if I knew what the reductionist / computationalist answer to this question was, I would be able to think about this more clearly. (Either of these seems very weird, and trying to pin down what such a process would be… I can’t think of a way to do it.)
@Seeking,
First, my cards on the table: I think that for all practical purposes (but not “in principle”) consciousness-as-we-know-it does require a particular physical substrate. Still I want to critique your argument. For that critique let me start by recommending Jenann Ismael’s paper, or better yet, her book The Situated Self. The argument centers on a map-territory analogy, where your model of physical reality is the map, and your experience the territory. To locate your phenomenology in physical events is like locating yourself on a map. Your location need not be explicit on the map itself, for the map to be accurate and complete. A map does not become more accurate by adding a red dot saying You Are Here. (It does, however, become a lot more useful!)
Suppose the map is a real-time display which updates based on automated measurements. It can display all that happens without explicitly telling you where you are. That does not mean you are not there, in the mapped territory.
Now for my own favorite point on the subject. In addition to being aware of external conditions like day and night, light and dark, we are also aware of some of our own internal states along the cognitive pathways from external world to belief. So, in addition to being able to estimate objective brightness, we are aware of subjective brightness. Maybe that helps us overcome certain sorts of illusion. However it evolved, we’ve got it. Could an organism be designed to expose to executive processes only the (best available representation of) objective brightness, without any subjective brightness? I don’t see why not. Such an organism could compute the same function of radiant emittance detection, albeit by a different algorithm. So I share your distrust of one variety of computationalism, namely functionalism.
“you could model the wings using quantum physics”
I think the point is that in the real world, wings DO work via quantum physics. You have the analogy backwards. Instead of saying that quantum physics is a way of modeling wings, we say that “wings” is a term we use to refer to a particular form of quantum wavefunctions. Instead of saying that a computation is a way of modeling consciousness, we say that “consciousness” is a term we use to refer to a particular form of computation.
How is consciousness different from other forms of computation? I.e., is there a way to tell from the “outside” that a given computation will be conscious?
On your second question, not with perfect reliability, certainly. But sometimes we seem to be able to tell with reasonable confidence.
Fair enough. So, IIRC:
1. Insects are thought to be conscious. There’s a “what it’s like” to be an insect.
2. Some of our computers (and maybe some of our robots?) nowadays are “as smart as an insect”
So, are the insect-level smart machines conscious? How do we tell?
(Serious question. I think hylemorphic objections-in-principle to conscious AI are one of the potentially weakest arguments offered by Feser & Co., and I’ve been thinking about the issue a lot, wondering whether to part company from them on the issue.)
I’m not actually convinced that insects are conscious, and I’m equally agnostic about machines that seem to be at the same level as insects. Perhaps I would have a more confident opinion if I knew more about insects, and about consciousness for that matter. But all I claimed was that sometimes we seem to be able to tell with reasonable confidence; I was particularly thinking of the fact that we are reasonably confident that other human beings are conscious. Animal cases are mostly less clear.
Relevant to our uncertainty about what is conscious or not. And a DFW article, which is always fun to read. It’s all about how no one really knows what it’s like to be a lobster being boiled alive.
http://www.gourmet.com/magazine/2000s/2004/08/consider_the_lobster
> you would also define “consciousness” as a certain set of inputs and outputs: angry people are those apt to hit you, to say things like “I am angry,” and so on. But this seems to be behaviorism–that there’s no more to one’s mental states than the actions they produce or are apt to produce.
I believe behaviourism denied mental states entirely. But more to the point: so what? I’m happy to endorse a functional definition of conscious experience (like an aeroplane wing): we consider someone to have conscious experience if they do things that are most easily explained as consequences of having conscious experience, like caring about the consequences of their future actions, having emotional states, writing angsty poetry and so on.
Re your last paragraph, I think the problem is not with the computational theory of mind, but with this conception of qualia. (E.g. Daniel Dennett maintains that “qualia don’t exist”, meaning that the common conception of qualia is an incorrect analysis of how subjective experience works).
I think the concept of qualia rests on a kind of Cartesian intuition: there is no way for me to be 100% sure what happens in the outside world, but I have certain sense-impressions, and at least by introspection I can be infallibly correct about those. And then we expect these sense-impressions to play a distinguished part in the explanation.
But given that thoughts are implemented in the brain, surely this is not right. Our beliefs about the outside world are stored somehow in the brain, but so are our beliefs about our own thoughts. It’s not that I first perceive a field of unstructured color-sensation, and a little homunculus carefully analyses it and deduces that there is a pillow in front of me; I first know that there is a pillow, and if I then introspect where that knowledge came from, I can deduce that I see a red color. (The ability to recognize objects is the simpler one; e.g. lower animals can recognize objects, but not introspect about how their vision works.)
So, suppose your magic MRI machine scans my brain, and it shows no qualia as such, but it does show bits of my brain that made me believe that I have qualia (the bits responsible for representing information about my own thoughts). Then I think the thing to do is to bite the bullet and conclude that the mMRI-machine is right! After all, it perfectly accounts for all the data I got from introspection.
I don’t see how having non-primary origin of qualia means I don’t have them. I think you’re right that we often identify objects, then their constituent parts, and so on and so forth–I don’t see a colored bitmap and then deduce “Ah, a pillow!” But even so, surely I’m at least certain that I am being appeared-to redly, even if the origin of being-appeared-to-redly involves some preprocessing?
I guess I’d have to ask, when Dennett denies that there are qualia, what is he denying?…I need to think about this more.
I can’t really speak for Dennett, because it’s been too long since I read him. There is a short and readable explanation of his anti-qualia stance in Sweet Dreams.
In a hypothetical future materialistic theory of mind, there would presumably be definitions like “we say that a computational system is conscious if it has the following property …” along with an explanation of why such systems tend to claim that they have conscious experience. (And similarly, a definition of “a system which is being-appeared-to-redly”, and a derivation of why such systems claim that they are.) We don’t yet know what the definitions would be, but you (and many others) say that we can already tell that they are not satisfactory, because they cannot be the right explanation of qualia. So there must be some properties which we intuitively feel that qualia should have, but which any such materialistic concept doesn’t have.
I don’t know which properties in particular you have in mind, but two that Dennett has focused on are “Cartesian dualism” (there is a separate someone or something to whom the sense perception is presented), and “infallibility” (when I introspect about my conscious experience, I am logically guaranteed to be right). Neither of these holds in the materialistic theory.
I guess you already reject the first one. As for infallibility, I think this also falls apart if you look at it more closely. Of course, it often happens that we misremember things, so if we are infallible about something, it can only be about our experience in this very instance. But in fact, there is no “this instance” in the brain—different subsystems process sense impressions at different rates, and their outputs are then timestamped and later reconciled. So even our experience of the current moment works a lot like a memory, albeit a memory of what happened 100 to 300 milliseconds ago.
Even if the Descartes-style intuitions about qualia don’t hold true, there clearly is something which causes us to believe that we have subjective sensations, and it should be possible to formalize what that something is. My guess is that it will come out to “true knowledge about what our perceptual systems are currently doing”. But if so, that concept is easy to accommodate in a materialistic framework.
Here’s Dennett’s paper, Quining Qualia: http://cogprints.org/254/1/quinqual.htm
Thanks to you & youzicha. Will read.
The argument you are hinting at has been discussed extensively by professional philosophers of mind. David Chalmers calls it the “paradox of phenomenal judgment” and has some pretty interesting responses to it (see The conscious mind, chap. 5).
I’m reading through the book now–got it on Kindle. My impression heretofore has been of Chalmers as the Dude Who Eliezer Thinks Has It Wrong About Zombies. This will be interesting.
I’m sure this has been bugging everyone as much as it bugs me.
Why the NRx fixation on the 16th century? On monarchies, hierarchies, castes, monolithic social institutions? All that stuff is way too progressive. Some of it is only thousands of years old. Such Cathedral, very Leftism. To be truly reactionary you’d need to aim at what people were doing 100,000 years ago, preferably as close to the dawn of homo sapiens as possible.
Living in small hunter-gatherer bands. Practically no concept of a nuclear family unit. Weak to nonexistent central authority, egalitarian social structure. Very different concept of gender roles, with women possessing as much decision-making power as men, and in some cases performing the same tasks. Matrilineal kinship lines. Status contests centered around hunting and art and practical skills, not typically around warfare. Animistic, non-organized religion, serving primarily as a tool for storing practical and moral knowledge in the form of stories. Minimal material possessions. Greater focus on physical activity, greater general level of fitness and health and lifespan compared with later agrarian (hierarchical) societies. Fewer work-hours, more meaningful work.
Also, we should clone and resurrect Neanderthals, so we can have a group that we can all agree to view as explicitly bad and wrong, and we can hunt and kill them with impunity because they Aren’t Human. Basically they would serve the role of orcs. I imagine this would lead to much greater solidarity among homo sapiens. Failing that, we could channel our aggressive and competitive instincts into hunting bears and lions and elephants with intentionally handicapped weapons like spears.
In light of all this, the social institutions NRx is looking to bring back are just a pathological reaction to overpopulation, in the same way that NRx accuses democracy of being a pathological reaction to modernity. Ares only manifests when there are too many damn people and not enough real threats.
You might say, “Hey, it wasn’t actually so nice as you’re making it out to seem,” and then I might just slowly slide across the table toward you an artist’s rendering of conscripted malnourished peasants being ridden down by Janissaries or something.
I’m probably not the first person to make any of these points, but: if Scott is right in his point that bringing back a powerful monarchy would require defeating Vast Formless Things, momentum-laden technological and cultural forces, then returning to a hunter-gatherer lifestyle faces only a few more hurdles than does returning to the Napoleonic Age. Namely, a tremendous reduction in human population, OR a tremendous increase in available space, either of which could be accomplished with massive environmental catastrophe on the one hand or space colonies on the other. Modern medical technology would neutralize almost everything that we view as unappealing about the hunter-gatherer lifestyle. If life expectancy and infant mortality were brought to modern levels, I say without hesitation that I would rather live in the Pleistocene than in the 16th Century. That last sentence is the only fully serious part of this post.
Short answer: take a few seconds and look through Post Anathema
Long answer: NRx likes technology, science, hierarchy, and high-trust societies. None of those things existed in the neolithic, except maybe the trust. We don’t like chaos, distrust, leveling, and pretty lies. The 15th-17th centuries provide an approximation (very imperfect, of course) of the things that we do like without the things that we don’t like.
(As an aside, I’m very skeptical of almost all descriptions of life in the neolithic, as there tends to be a suspicious concord between the politics of the author and their descriptions of neolithic life. Plus, the answer to the question of “how do you know that” is usually “extrapolation from the few remaining hunter-gatherers”, which is problematic for all sorts of reasons.)
There’s a quote from Bertie Russell here, ah yes:
“What do we, who stay at home, know about the savage? Rousseauites say he is noble, imperialists say he is cruel, ecclesiastically minded anthropologists say he is a virtuous family man, while advocates of divorce law reform say he practices free love; Sir James Frazer says he is always killing his god, while others say he is always engaged in initiation ceremonies. In short, the savage is an obliging fellow who does whatever is necessary for the anthropologist’s theories.”
(Except these days we don’t use “savage” as a noun)
http://users.drew.edu/~jlenz/brs-quotes.html
I like the 10th – 13th/14th centuries; what does that make me? 🙂
A Chesterton fan?
I can understand liking the 10-13th century. But between famine, plague epidemics, and constant war, what is there to like about the 14th century?
Oh, not all of the 14th century; bits of it, generally the bits near the start 🙂
I’m not a neoreactionary or an anthropologist, but I’ve read a lot of ethnography, and I feel compelled to qualify “egalitarian” in a forager context. It doesn’t mean that everyone’s equal; it means that there isn’t a class system or other formalized hierarchy. The impression I get is that it’s a lot like high school: there are no ranks and no titles, but the status games are no less intense for that, and everyone knows who the winners and the losers are.
Also, while nuclear families are rare (and don’t really make sense in the context), kinship structures viewed more generally tend to be a big fuckin’ deal. And they aren’t necessarily matrilineal, although they sometimes are.
Neanderthals are human, just not Homo sapiens sapiens.
Yeah, and orcs are actually elves, but nobody bats an eye at that sordid business.
As far as I can tell, NRx thinks you can have technological modernity with something resembling a 16th century social structure, but going full paleolithic totally precludes that.
It’s time for something completely different:
Share with us your spooky stories, whether supernatural, weird, or glitches in the Matrix. In the spirit of the genre, there are only two rules:
1. The story must have happened to you or to a close friend.
2. Lying is, of course, completely permissible.
Here’s one from me: http://www.gwern.net/fiction/Stories#true-dreams It may not strike others as all that spooky, but I found it as disturbing as heck to have a prophetic dream. (It’s too bad I don’t have access to the NYT corpus so I could calculate the chance of my dream being right by seeing what fraction of Wiktionary entries have exactly one attestation in the NYT corpus.)
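For what it’s worth, the base-rate estimate described there could be sketched in a few lines. The word list and attestation counts below are pure invented placeholders (the NYT corpus isn’t available here either); only the arithmetic is meant seriously:

```python
# Hypothetical sketch of the base-rate estimate: what fraction of
# Wiktionary entries have exactly one attestation in the NYT corpus?
# The entries and counts below are invented placeholders, NOT real data.
attestation_counts = {
    "petrichor": 1,
    "defenestrate": 7,
    "limerence": 1,
    "mamihlapinatapai": 0,
    "sesquipedalian": 3,
}

total = len(attestation_counts)
exactly_one = sum(1 for n in attestation_counts.values() if n == 1)

# The fraction serves as a rough prior probability that a randomly
# chosen entry would turn out to have exactly one NYT attestation.
fraction = exactly_one / total
print(f"{exactly_one}/{total} = {fraction:.2f}")
```

With the real corpus, the only change would be swapping the placeholder dictionary for actual per-entry match counts.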
>Lying is, of course, completely permissible.
Only lies the reader would want to believe!
Oh goodness. I have far more than you would care to hear. Seriously – if I had the time or inclination, I could fill up this forum with replies until I got banned or you stopped caring. I’ll warn up front that I am a Christian fundamentalist – and I am not lying, not on purpose by any means. This is not a concocted story (at least, it is not meant or intended to be), but rather my sometimes fuzzy recollection of …unusual things that happened to me.
There’s a line in the novel Dance Dance Dance by Haruki Murakami where the main character, after a series of fairly blatant coincidences, kind of exasperatedly goes “alright, alright, you win. This is not a coincidence; these things are connected.”
I feel like the protagonist of that book an awful lot.
Here is one: when I first became a Christian – when I really considered myself converted in 2005 – I had a pornography habit, and I asked God to help me stop said habit. Maybe what happened began to happen before I asked; it’s been a long time and I might be getting details wrong.
What is clear to me is this: I began to get sick, most typically with really incredibly awful stomach ailments, within 24 hours of looking at pornography, and this would subside when I repented in prayer to God. It wasn’t limited to that; it presented as a variety of misfortunes. You would think after this happened a few times, someone would get gun-shy enough to quit porn cold turkey, but I continued to look at porn well into my freshman year of college; in fact, because I wasn’t monitored at college, I ended up looking at porn more, not less. *Incidentally*, I was incredibly sick that year; I had all kinds of incredibly gross infections, and I recall one incident where I suddenly couldn’t see in the middle of an exam because I was afflicted with pinkeye and my eye’s view was obstructed by discharge.
Anyways – I did awfully in school that year and nearly dropped out (but didn’t, although I got ghastly grades) – and after deciding to commute and asking for help from my parents, I ended up dropping my porn usage a lot (and eventually entirely), and finally stabilized my GPA.
I looked through the bible on this later, and discovered, unbeknownst to myself up to this point, that it turns out the notion of God chastising those who are his sons is an idea remarked on more than a few times in the bible; Hebrews chapter 12 in the new testament, verses 4-13 are relevant here, as are Psalm 94:12, Revelation 3:19, Proverbs 3:11-12 … among others.
What makes this strange is that a few months ago, I – having walked largely free of browsing porn for several years – ended up faltering and looking at something I should not have been looking at, for rather too long. I prayed to God to help me not to do that again.
Immediately after, the same day, my wife and I went out for a walk, during which (in an effort to get our dog to chase me) I fell and gave my ankle a mild sprain. I felt rather thankful for this, namely because I believed it to be an answer to my previous prayer.
Again, this is a weird story, and it is possibly not the *weirdest* story I could remark on, although truthfully this one sticks with me because it is not something that has vanished into the past, but which exists on a continuum to this day in my life.
I’ve told people about this before, and the usual response is along the lines that it is a big coincidence, or (if I examine a criticism from my own mind) that I am reading too much into it – but I am ultimately persuaded that, yes, Jesus Christ was chastising me to drive me away from sin. I don’t know what else to say here; this is my testimony on this matter and one I hold deeply in my heart.
I will remark on one more, seeing Gwern’s post about a dream: do note that this post should be read in the context of my previous post.
When I was still single, and I believe, working on my undergrad, I had a dream that was rather remarkable because it was when I awoke, “the best dream I had ever had”, as I thought at that time.
The dream was as follows: I am running through misty, foggy, deserted, cobblestone streets at night, and a monster is pursuing me. It is nearby, although I know not where it is, exactly, only that it’s chasing me. I only vaguely recollect what it was like, being something like a wolf (or werewolf?) or vampire – it was terrible.
I am chased into a graveyard, which is also a dead end, and I go up to a grave and begin to dig it up – but when I get down to the casket, there is no body in it; rather, there is a sword.
I pick up the sword, and the entire dream shifts; I am no longer fearful, and I now run out of the graveyard and defeat the beast, tearing it apart with the sword.
When I first had the dream – and I don’t know exactly when I had it – I was happy, because it was the best dream I ever had, as mentioned before. Nothing else came close. But then I went to sleep (or got up? – again, I don’t recall) and I generally didn’t think too much about it.
A few years later, I was studying the bible more and I ended up reading the book of Daniel, a prophetic book in which dreams figured prominently. An evening (or two?) after I read the book, like a flash of lightning, as I was either going to or arising from bed, the memory of the dream came back to me, particularly this interpretation (which, again, came on me suddenly): the empty casket was the empty tomb of Christ; and the sword which I came upon therein was the Sword of the Spirit from Ephesians chapter 6 – which, as the passage informs the reader, is the Word of God.
What’s weird about this is that I was totally ignorant of these symbols when I *first* had the dream; I wouldn’t have connected the dots beforehand; the whole thing just seemed, at the time I had it, to be a very vivid and strange dream. But it has, frankly, influenced my theology, making me rather more bible-centric than I had been before.
I have a counter-question for you.
I am also a Christian fundamentalist, when I am not wallowing in nihilistic despair or trying desperately to be a rationalist.
Why is it that God sees fit to grace you with evidence for His existence, and yet when I pray – desperately, honestly, passionately, and with utter lack of selfish desire – for any kind of strength to carry on and follow His will and commandments, or just for the ability to maintain some shred of faith in Him – I am answered by nothing but a black void?
Why does He love you and not me? And if He does love me, why doesn’t He let me know that He loves me, even for a second?
I pray with all my heart and soul and mind for nothing more than for God to show His love through me and guide me to do His will. The clearest answer I have ever got in response is “fuck you. Kill yourself. Rot in hell.” When I accept that that could not be His voice talking, and pray for Him to overcome those thoughts so that His true will shines through, I get nothing. I have got nothing since I was nine years old. I am now forty. What am I doing wrong?
I am under personal duress at home, but allow me to address the broad points here as concisely as I can.
I must remark on this first, although it seems harsh: either you are one of his sheep, or you are not. If it is the former, have faith, and if it is the latter, repent. I do not know you personally; you will come to God and test yourself on this matter. This is a serious point which should not be dismissed out of hand; this sermon might be a good starting point: (http://media.sermonaudio.com/mediapdf/5220621750.pdf) (and a general link: http://www.sermonaudio.com/sermoninfo.asp?SID=5220621750)
While that sermon I linked is principally about how to understand assurance of salvation in a biblical manner, I do want to point out that there is a statement the speaker of that sermon makes early on, and which I quote here:
>”This silly Christianity in America. “Repeat these words after me.” No, you might have to wait upon God. You might have to cry out to Him until the work is done—a true work, a finished work, a complete work”
And I want to point something out: This is how it works in the bible. It does not (necessarily) happen quickly, let alone at our whim; God does things at his pace, fast or slow. Jesus said this:
> 7 “Ask, and it will be given to you; seek, and you will find; knock, and it will be opened to you. 8 For everyone who asks receives, and he who seeks finds, and to him who knocks it will be opened. 9 Or what man is there among you who, if his son asks for bread, will give him a stone? 10 Or if he asks for a fish, will he give him a serpent? 11 If you then, being evil, know how to give good gifts to your children, how much more will your Father who is in heaven give good things to those who ask Him! 12 Therefore, whatever you want men to do to you, do also to them, for this is the Law and the Prophets.
Consider also the prayer of Daniel, in Daniel chapter 10:
> 2 In those days I, Daniel, was mourning three full weeks. 3 I ate no pleasant food, no meat or wine came into my mouth, nor did I anoint myself at all, till three whole weeks were fulfilled.
When his prayer is fulfilled, he receives this vision:
>5 I lifted my eyes and looked, and behold, a certain man clothed in linen, whose waist was girded with gold of Uphaz! 6 His body was like beryl, his face like the appearance of lightning, his eyes like torches of fire, his arms and feet like burnished bronze in color, and the sound of his words like the voice of a multitude. 7 And I, Daniel, alone saw the vision, for the men who were with me did not see the vision; but a great terror fell upon them, so that they fled to hide themselves.
…
>12 Then he said to me, “Do not fear, Daniel, for from the first day that you set your heart to understand, and to humble yourself before your God, your words were heard; and I have come because of your words. 13 But the prince of the kingdom of Persia withstood me twenty-one days; and behold, Michael, one of the chief princes, came to help me, for I had been left alone there with the kings of Persia.
Daniel had to wait three weeks for an answered prayer, and the reason his prayer was delayed was because of a demonic enemy – the “prince of the kingdom of Persia”.
I also give another example from the scriptures: In Jesus’s encounter with the Canaanite woman, it looks like he is being needlessly harsh to the woman – in verse 23 it reads that Jesus “answered her not a word” – here is the quotation of the verse:
>21 Then Jesus went out from there and departed to the region of Tyre and Sidon. 22 And behold, a woman of Canaan came from that region and cried out to Him, saying, “Have mercy on me, O Lord, Son of David! My daughter is severely demon-possessed.”
>23 But He answered her not a word.
>And His disciples came and urged Him, saying, “Send her away, for she cries out after us.”
>24 But He answered and said, “I was not sent except to the lost sheep of the house of Israel.”
>25 Then she came and worshiped Him, saying, “Lord, help me!”
>26 But He answered and said, “It is not good to take the children’s bread and throw it to the little dogs.”
>27 And she said, “Yes, Lord, yet even the little dogs eat the crumbs which fall from their masters’ table.”
>28 Then Jesus answered and said to her, “O woman, great is your faith! Let it be to you as you desire.” And her daughter was healed from that very hour.
There are several elements here, then: first, we must test ourselves to see if we are in the faith; if we are in the faith, things might take longer than we wish, and we have demonic enemies who would want to drive us from God and take the word of God away from our hearts.
Which brings me to your last remarks about voices telling you to kill yourself and rot in hell – sir (or madam), do not listen to those voices – for understand what the word tells us:
>For we do not wrestle against flesh and blood, but against principalities, against powers, against the rulers of the darkness of this age, against spiritual hosts of wickedness in the heavenly places. Therefore take up the whole armor of God, that you may be able to withstand in the evil day, and having done all, to stand.
This verse is very simply talking about spiritual enemies, demonic forces, which are trying to destroy you. Perhaps this is a manifestation thereof, or perhaps of our own hearts (which are desperately wicked – c.f. Jeremiah 17:9), but please do not allow such discouragement to dissuade you from continuing to seek after God in prayer.
One last thing to consider: In the story of the Prodigal Son (https://www.biblegateway.com/passage/?search=luke+15%3A11-32&version=KJV), one son is disobedient and, to quote, “took his journey into a far country, and there wasted his substance with riotous living.” This is the son whose return is celebrated, for he was dead, and now is found.
The elder son gives us this exchange on the return of his brother:
>25 “Now his older son was in the field. And as he came and drew near to the house, he heard music and dancing. 26 So he called one of the servants and asked what these things meant. 27 And he said to him, ‘Your brother has come, and because he has received him safe and sound, your father has killed the fatted calf.’
>28 “But he was angry and would not go in. Therefore his father came out and pleaded with him. 29 So he answered and said to his father, ‘Lo, these many years I have been serving you; I never transgressed your commandment at any time; and yet you never gave me a young goat, that I might make merry with my friends. 30 But as soon as this son of yours came, who has devoured your livelihood with harlots, you killed the fatted calf for him.’
>31 “And he said to him, ‘Son, you are always with me, and all that I have is yours. 32 It was right that we should make merry and be glad, for your brother was dead and is alive again, and was lost and is found.’”
I bring this up for the following reason: you speak of my “proof” as though I had described angels singing before me joyously – but my “proof” was agony and pains, which I interpret (I feel, correctly) as the lash of discipline. Do you not consider that I have sometimes wondered why I did not experience in my walk with God something more light, peaceful and full of joy, rather than something painful? When these “proofs” were coming to me, I was generally wishing they would go away, because they were so unpleasant. Although I am happy for such things in retrospect, I sure wasn’t anticipating them at the beginning of my walk; I’m not sure, had I known what would happen, that I *would* have asked for them.
The prodigal son – the disobedient one – is the one who experienced these difficulties of having to feed pigs while being himself hungry, after he wasted all his money. The other son did not have to go through such a trial – yet he is displeased, even though his Father has a good explanation available, which is actually mentioned twice in the passage:
> 24 for this my son was dead and is alive again; he was lost and is found.’
Anyways, my question is this: what exactly is it you’re expecting God to do to prove he loves you?
I have felt the love of God in experiences, yes, but perhaps I gave the wrong impression that this is where I have most strongly felt his love – I do feel his love in those, but I feel his love more, and most strongly, in reading his word. My friend, if you want to understand the love of God, read his word, read the bible, in prayer and faith. (See: http://biblehub.com/john/10-27.htm) Understand that the fear of the Lord is the beginning of wisdom (c.f. Proverbs 1:7, among other locations in the bible), and I feel this is true of understanding God’s love as well as other forms of his wisdom. That is, we should understand God’s love, ultimately, at the cross, where Christ was crucified for us. He *has* shown his love for you and me there, by paying the penalty of your (and my) sins, dying in our place – how can God show his love any greater than that? If you want to understand the love of God, look at the cross, for it is written:
> 13 Greater love has no one than this, than to lay down one’s life for his friends.
I don’t think making this post longer than it needs to be will be useful. I hope that this will help.
Good luck and God Bless.
Here’s the essence of what I don’t understand:
Why would God allow me to come into being with a mind incapable of having faith in Him as my loving creator, and then punish me forever for not having faith in Him as my loving creator?
And if that’s simply not logically possible, which therefore proves that I must be capable of having faith in Him as my loving creator, how much harder do I have to try? Is there any evidence that could prove that I’m trying as hard as I can to believe, short of actually believing?
@Ialdabaoth
Have you read about “spiritual dryness” in Catholic thought? It’s a feeling of desolation, loneliness, and separation from God that’s not uncommon, and that a number of people, including Mother Teresa, endured. I can’t recommend anyone in particular, but there are a number of saints and theologians who have written about it who might be helpful.
I’ve read most of the literature on this. Here’s the problem:
It sounds like extremely motivated reasoning, from people who desperately want to keep believing something.
My model-of-the-world says that, if I had some kind of sense of God the way Brad did, of COURSE I would be motivated to maintain that sense. Or, if my lifestyle and identity continued to be intertwined with a belief in God the way Mother Teresa’s were, I would likely be motivated to maintain that sense. But without those motivations, and without any sign, I’m not sure if it’s within my psychology to continue to hold onto these beliefs – any more than it’s within my psychology to abandon them.
Excatholic here.
I went through this forever–especially the thing where everyone else seems to have some kind of experience of God, but this experience is noticeably absent from one’s own life. It’s really painful, and continually causes one to ask oneself if one is a horrible, evil person. At least, I was doing that pretty much constantly. Because, after all, if God loves you, and wants you to be with him, and the only way for you to be happy is to be with him, then the only reason for constant experience of absence would be on your side, right? So yeah, that sucks.
I eventually noticed that this kind of “experience of God” seemed to track naturalistic explanations better than spiritual ones. I.e., people in circumstances apt to promote deep-feeling experiences, because of their natural surroundings, have them, and those who aren’t in such surroundings don’t, even when it would be spiritually fitting.
So both Catholic and Buddhist monks report experiencing overwhelming peace in similar ways, and natural explanations of this are more parsimonious than theistic explanations. Similarly, Christian parents trying to follow what they think is God’s will outside of surroundings apt to promote such feelings do not get them, even though (were the spiritual account true) this would not happen.
FWIW.
What’s wrong with black voids all of a sudden? Not what you were expecting?
Similarly, I have had many experiences where I prayed for something, for example, “insight”, then believed fully that I received it, and then acted on that insight, and then found out that it was completely and totally wrong. This happened literally hundreds of times before I deconverted.
Every time I have thought I heard from God, it turned out to be my own voice. Now I try to listen to my own voice directly, and to be aware of its limitations. It works better.
I wish I had Big Exciting Future Prophecies to share, but all I have are mild cases of something like déjà vu; sometimes I’m doing something, or engaged in a conversation with someone, and then I get the feeling “Hang on, I dreamed this!” It’s a sense of recognition, of having experienced this before, but not in the “yeah, that’s because you’ve photocopied this file five times already” way.
Nothing exciting or ominous, completely ordinary mundane things. But they strike me as “I lived through this already in a dream; I remember this from before”. Just with the very strong conviction that this was an experience in a dream. Now, whether that’s a form of psychic time-travelling ‘when you’re asleep your etheric body wanders and there is no time so past, present and future are all one’, or whether it’s just brain weirdness I have no idea.
It’s never anything like the lotto numbers or ‘don’t go out that door, you’ll be chased by a lion!’, it’s really brief momentary ‘oh that line they said/this action/I stood up and turned to the wall exactly like that’ is familiar from a dream.
One of the very few habits that I have managed to successfully train myself is, as soon as I feel deja-vu coming on, I immediately try to explicitly predict what happens next. It’s pretty much a reflex by now.
I haven’t been able to predict anything, and I think experiencing that on a gut-level has reduced how convincing the “memories” feel from the inside over time.
This seems like a good habit (or a fun one, anyway). I almost never feel deja vu, but I did once read a webpage that I was convinced I had read before (but I had some reason to doubt I had actually read it, I forget the details). Anyway, I stopped myself and tried to predict what the document would say next, and I was right, I had read it before.
Similar things for me. I’ve gotten to the point where I am convinced that not only did I dream that this happened before it happened, but that the dream included thinking about the fact I had dreamed it before.
Didn’t you say you saw a pooka in a recent post? That sounds like a spooky thing.
It was, and I’m fairly convinced it really was a pooka, and in general I don’t have much time for pishogues and superstitions.
But I’m still damn sure that was a pooka and not just a puck goat.
When I was in elementary school, one of the Macs logged in the user as an administrator by default. Some students were responsible for shutting down the computers at the end of the day. One day, when I was on the computer that gave me partial admin privileges, I found the scheduling options in the Energy Saver section of System Preferences and configured them to start the computer automatically at 8 and shut it down at 3. A couple of days passed, and then I told my teacher what I did and how she could use her admin account to set it up on the other computers. “Oh,” she said, “that’s what’s been going on!” She then explained that one of the more … spiritual … students in the class had been telling her about the “ghost computer” that a ghost pressed the power button to start each morning. (My request to have all the computers start up automatically was not approved. I did end up getting an admin password for all the classroom computers later, when I was setting up some restrictions on a set of 5 computers and the teacher was not eager to enter her password 10 times. I didn’t do anything of interest with the password, apart from moving some applications to the Applications folder.)
Non-supernatural scary story: I was dealing with the school psychologist. Note: If the psychologist is being paid by an organization with goals related to you that you do not share, you should probably be careful. (The first incident I had with this one was noticing that my behavior in a religious setting was far more compliant and conforming than I expected it to be.) I was in her office during 0 period when the fire alarm went off. I got up to leave. (I don’t take 0 period classes, so I’m not actually required to be on campus then.) She told me to sit down. The alarm was still going off, so I left. When the alarm stopped and I reentered, she was naturally upset. She then lectured me, with no sense of irony whatsoever, on how I always need to obey authority figures such as teachers, even if I think what they’re telling me to do is wrong. (“Obey authority figures” is a good heuristic if the authority figures are competent, but “do stuff you think is wrong” is usually a bad idea.)
When I am partially asleep, my intuition always reverts to quite confidently classifying time as being as easily bidirectionally traversable as space is. Unfortunately for my desire to sleep 8 hours in a 6-hour period, this is wishful thinking.
I once saw an opaque raindrop in a cloudy but not-raining sky land near me. This was when I was in elementary school, so I thought that it was a tear from a dragon.
Forgot to mention: My printer really hates me. It was refusing to shut down properly no matter how I pressed the power button. I unplugged it from the wall … and the power button light stayed on. (Probably something with a capacitor or a battery backup or something, but still not fun.)
When you eventually got admin privileges, did you set them to start and stop automatically?
No, because doing stuff with admin privileges that are technically not yours that the authority figures explicitly asked you not to do is usually a bad idea. I did set them to shut down automatically, but automatic starting up had been very firmly vetoed.
My wife’s father grew up in one of those famous Romanian orphanages. At the orphanage, there was another boy who would crawl on the walls and the ceiling like Spiderman, but only in his sleep. If you woke him up, he would fall. My father-in-law swears that he saw this with his own eyes. The orphanage workers told the boys that if they ever saw the boy crawling like this, they shouldn’t make any noise to wake him, for fear that he would fall and hurt himself.
When we were very young, my brother was convinced that aliens were contacting him and a few other children around his same age. They would appear to him at night, and communicate with him telepathically. Once, they showed him around their ship. While he was there, he happened to see another student from his school there. He later asked him about it in real life, and the kid seemed to know what he was talking about.
I should really ask him what came of all that.
I just matched your donation, Scott.
Hey, Scott, could you send me an email at the address I used in this comment? I was going to mention what I think was an error in your Untitled post, but thought it might run afoul of the no-gender rule.
So Scott, I posted a moral philosophy theory that I feel you might like. It’s probably quite closely related (though perhaps a little more formal) to your relentless march of niceness theories 🙂 I’ve put a lot of work into this one and I hope it has broad appeal across multiple philosophical camps.
The Ideal of Comprehensive Morality
As always, feedback welcome on my blog or here.
I liked this. Even as you were discussing your moral hero, I found myself wondering what a defense of a more myopic morality might look like, and then you immediately proceeded to illustrate one.
Cheers, always nice to get such positive feedback.
This goes into detail about the sort of mistakes abused children are likely to make about morality and safety, simply because children don’t have great tools for thinking about what’s happening to them.
Let’s assume for the sake of argument that an unwanted/unplanned poor child has a significant cost to society in terms of welfare, crime and possibly a whole lot of things I haven’t thought about. Let’s fix that cost to an arbitrary number, say 1/5th the cost of bringing up a child, or for the US roughly 50k USD.
Now, let’s offer 1/10 of that cost as a direct monetary incentive to have people sterilised. For maximum effect, we would offer this only to women.
If you are rich enough to actually bring up a child, 5000 USD is a trivial sum and not worth the hassle. If you are too poor to bring up a child, 5000 USD is a substantial sum, and if you want your tubes tied anyway, today you probably can’t afford it.
If you have taken the money and your financial situation changes from winning the lottery or whatnot, well these things are reversible now, and I suspect that the cost of a reversal operation is higher than the 5k.
Why isn’t this the obvious solution to a LOT of problems related to poverty?
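For what it’s worth, the arithmetic in the proposal can be made explicit. A minimal back-of-the-envelope sketch, using only the hypothetical figures from the comment above ($50k assumed social cost per unwanted child, $5k incentive); the function name and the “expected births averted” parameter are my own illustrative assumptions, not anything from the original:

```python
# Back-of-the-envelope break-even for the sterilisation-incentive proposal.
# All dollar figures are the hypothetical numbers from the comment above.
COST_PER_UNWANTED_CHILD = 50_000  # assumed social cost: 1/5 the cost of raising a child
INCENTIVE = 5_000                 # one-time payment offered: 1/10 of that cost

def net_savings(expected_births_averted: float) -> float:
    """Expected net savings to society per person who takes the incentive."""
    return expected_births_averted * COST_PER_UNWANTED_CHILD - INCENTIVE

# The scheme breaks even once each payment averts more than
# INCENTIVE / COST_PER_UNWANTED_CHILD expected unwanted births.
break_even = INCENTIVE / COST_PER_UNWANTED_CHILD
print(break_even)        # 0.1 expected births averted per payment
print(net_savings(0.5))  # 20000.0 if each taker averts half an expected birth
```

On these numbers the program only needs each payment to avert one unwanted birth in ten to pay for itself, which is why the proposal looks so cheap on paper; the real uncertainty is in the two assumed constants, not the arithmetic.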
It’s a good solution, but one that faces major hurdles to being implemented, because it evokes fears of Nazism and compulsory sterilization, even if such comparisons are unfounded. Politicians like to talk about poverty as a social problem, not a biological one. Scientists have to tread carefully here. This censorship, whether self-imposed or imposed by society, hurts potential progress on these issues.
Most childless poor people probably hope to get out of poverty and have children eventually.
I would support subsidized temporary contraception like hormone implants or RISUG, with sterilization only for those who actually want it.
$X per month (or $3X quarterly, for 3 month implants or such) would be cheaper initially, too.
Why women only? Why not men only? Part of the problem here is that prevention of pregnancy is seen as the woman’s responsibility. Given that it’s complicated to mess around hormones, and even laparoscopy for tubal ligation is still major surgery, why not encourage men to get the snip or have polymer gel injections? Quicker, cheaper and they can bang as many chicks as they like without having to use condoms or worry is she really on the pill or IUD!
Non-rigorous thoughts:
Higher variance in male sexual behaviour. Most unwanted children are conceived by a small tail of highly irresponsible men, whereas they’re spread across a wider spectrum of women. That kind of cuts both ways though, hmm.
Raising any children will be the woman’s responsibility, so her financial state is more important to the child’s life outcomes than the man’s. And if you encourage men to be sterilized, you’re going to have many more cases of provable cuckolding, which seems like it will lead to bad outcomes all round.
Provable cuckolding? Let’s unpack the whole raft of attitudes there, shall we?
(a) suppose Mr A and Mrs A, or Mr A and Ms B, are in an intimate relationship. Mrs A/Ms B is also – the hussy! – having a bit on the side.
Why the blue blazes would she get herself knocked up?
Okay, you say, the little minx wants a baby or wants to humiliate her poor deluded spouse/partner or was careless, and thinks she can pass off baby as Mr A’s child.
But! Mr A then says “You home-wrecking cheating deceiving wanton, I am sterile and here’s my receipt for the medical procedure to prove it! This pregnancy is none of my doing!”
(b) If Mr A has not informed Mrs A/Ms B that he is sterile, and therefore she need not worry about contraception (prevention of sexually transmitted disease is another thing), then either their relationship is of so informal a manner that “cuckolding” is hardly a concern (there is no emotional depth or investment in what is a casual sexual arrangement), or matters between them are at such a pitch that her having an affair is a symptom, not a cause, of their disunity and disintegration.
(c) Also, I very much resent the perhaps unintentional implication that oh noes, we must not interfere with men’s potent virile fluids because otherwise bitches would be out there spreading their legs for every dude who whistles at them and getting themselves up the duff by strange men not their master. But making women as a population, or a very large slice of it, sterile is no problem, apparently (we need not worry about men deceiving their tube-tied women and fathering children by other women, is that so?)
(d) That last may seem harsh and even rude, but if the first objection that occurs to you is “if men are sterilised, that means women could be cuckolding them” and you don’t consider or weigh the equal risk that sterilised women might be cheated on by men fathering children by other women, I think you need to question why cuckolding was the big problem that leaped out for you.
I don’t think he needs to question it; rather, you should have been more charitable and tried to figure out possible reasons for it.
One of the most obvious ones is that cuckolding is far more costly for the man than for the woman, since it is always possible to know who the mother is, but not the father, enabling a woman cuckolding a man to impose costs on the man that a man cheating on a woman cannot. It just wouldn’t make any sense for a man to cheat on a woman, pretend the child is hers, and get her to raise it and spend money on it. Basic biology prevents that.
You know, I said to myself this morning “Why am I always getting into fights on Scott’s blog? Why am I not a nicer person? I need to do more ‘I love little kittens because they are so soft and furry’ comments and be a smiling ray of sunshine!”
Well, looks like I picked the wrong day to make a resolution to be nice and sweet and non-confrontational.
WHERE THE HELL DO YOU LOT GET OFF, EXTENDING OWNERSHIP OVER WOMEN’S BODIES LIKE THAT? OH, THEY MIGHT CUCKOLD MEN, WHICH IS WHY THEY SHOULD BE THE ONES UNIVERSALLY STERILISED TO PREVENT UNDESIRABLE PREGNANCIES. BUT LET US NOT TOUCH THE SACRED SPERMATIC DUCTS, FOR NO MAN WOULD EVER HAVE TWO, THREE OR MORE WOMEN ON THE GO THAT HE MIGHT IMPREGNATE FOR THOSE SAME UNDESIRABLE PREGNANCIES!
You know, I cannot believe I am a socially conservative, traditional Catholic when I seem to be on the left-ward side of attitudes like the “cuckolding” one.
I repeat: if your first objection to sterilisation of males is that their womenfolks will be rushing out to get themselves up the pole by sneaky sex with the other men sniffing around, why would this be your first objection? Why not on health grounds? Grounds to rights of autonomy over their body? Religious grounds?
Women have the risk of constant doses of artificial hormones or major surgery to control their fertility. None of this appears to be a problem. But mention the equivalent responsibility for men to control their fertility and all of a sudden it’s a recipe for cuckoldry!
May have been a poor choice of word (it evidently has stronger connotations than it does for me); apologies. I started, as I usually do when trying to come up with conservative objections to relationship policies, by thinking of the children: how might male or female sterilisation harm their children? From there the thought process is obvious and never felt particularly purity-oriented. The child of a man cheating on his sterilised wife could probably have a normal, positive life (no-one would ever know). The child of a woman cheating on her sterilised husband, not so much. If anything I’d expect widespread male sterilisation to /reduce/ cheating, but that’s not something I weigh at all heavily in comparison to child welfare.
I feel like everybody has missed the provable part of the phrase “provable cuckolding.” That’s an important modifier; the objection isn’t to cuckolding; it’s to the cuckolding being found out.
The child of a man cheating on his sterilised wife could probably have a normal, positive life (no-one would ever know).
(1) The woman in the case above is married or in a permanent relationship herself. Her husband is either (a) not sterilised and accepts the child as his, or (b) like our friend Mr A, sterilised and highly upset about being provably cuckolded.
Support:
(1) (a) Nobody says anything, nobody finds out anything, and indeed everything goes along nicely (I know from local gossip about one guy who doesn’t know who his real grandfather is, because his own father didn’t know it – but because of Circumstances, i.e. Ireland is a small country, your neighbours do know your business, and my mother did a lot of visiting amongst elderly relatives and heard all the gossip, I in turn know about it because Great-Grand-dad was a notorious cocksman and fathered a whole rake of illegitimate children). Nothing to arouse suspicion like “Hm, nobody in my family ever had red hair, darling!” comes up, there are no messy vicious rows where it gets thrown in B’s face “And by the way, that’s not your kid!” and everything in the garden is rosy. Could happen. Probably does happen. Mr and Mrs/Ms B don’t break up over her fling with Mr C, and Little C never finds out (or not until later in life) that B is not his dad.
Objections:
(1) (a) People know anyhow because neighbours, family and others have ways of finding out. Even if Little C and Little C’s foster-dad never know, Mrs/Ms B knows and Mr C may or may not know. I’m also seeing in my work cases where the Little Cs find out later in life about their real dads and change names, try to find their ‘real’ father, etc. It’s not without upheaval of some kind.
(1) (b) Mr B divorces his wife/dumps his girlfriend for being a cheating hussy and refuses to pay maintenance for a bastard that’s none of his get. Mrs/Ms B gets a bad reputation in at least some quarters. Little C learns all about “hey, your mom cheated on your dad! Only he’s not your dad, is he?” from their little schoolmates, who learned it in turn from overhearing the gossip of their parents about Mrs/Ms B. Upheaval and rancour and all kinds of emotional upset, not to speak of the environmental consequences (chances of poverty, disrupted/broken homes, etc.)
(2) The woman is not married. Unless she is in a position where she has ample means and doesn’t need any material support from the father, she’ll probably look for child maintenance. When looking for child support payments (because if you’re claiming lone-parent allowance, you have to prove that you’ve made an attempt to gain maintenance from the other parent), she needs to name Mr C as the baby-daddy. Mr C, as we’ve seen, is married. Mrs C, unless it’s a particular set of circumstances, will not be happy to hear about this. Mr C may not be too happy himself. Little C will come in for some of the fallout from all the adults fighting over this.
(2) (b) Mr C makes a habit of spreading his seed and Little C finds out that they’re just one more notch on the bedpost as far as Dear Old Dad is concerned. This may or may not be a matter of distress to them.
I fully support encouraging men to take as much responsibility as women for contraception. However, ceasing the use of condoms would still not be advisable, as vasectomies sadly do not prevent STDs.
(1) Original suggestion about making women infertile did not address disease prevention
(2) Trouble is, the pregnancies are happening because guys are not using condoms (in conjunction with the women not using any/effective contraception), so their fears of sexually transmitted disease obviously are not scaring them into behaving in a manner to prevent that as well as pregnancy.
If we’re going to treat women as a population to be neutered to prevent unwanted pregnancies, we should do the same for men. If you want to sleep around without let or hindrance, fine for you, but then what’s sauce for the goose is sauce for the gander, or, it takes two to make a baby.
Women are far more of a bottleneck to fertility. Neuter all but a fifth of the men and you might well get about as many babies, so depending on the exact goals of the effort, the sexes are not symmetrical.
I feel like this should be obvious.
–
I would be interested to see numbers on causes of unwanted pregnancies: refusal to use condoms, using condoms incorrectly, etc. I haven’t been able to find a study that fine-grained.
Right, reading this one and the comments below, that one went off on a tangent I hadn’t anticipated.
The “women only” bit was intended purely as a return-on-investment consideration and not in any way a slight against women. For much the same reason that fertility in populations is measured in “children per woman” and not in “children per person” or “children per couple”, the actual impact of a sterile woman is simply higher than that of a sterile man. This is why I started the sentence with “For maximum effect”; sorry for the misunderstanding.
If implemented for both sexes, there would probably need to be a different price structure, because the reversal is also simpler on men, so that you could go get the 5k, then get the reversal for 2k, rinse and repeat.
Several reversible birth control methods (such as IUD) are just as effective as tubal ligation (as is RISUG, which really needs to become available to men who want it). I can see a scheme like this being implemented with these types of more-easily-reversible methods, which are less likely to conjure the specter of eugenics.
Totally is. Who’s organizing the fundraising?
Just one objection : men AND women both.
One more detail : free reversibility on demand, no question asked, over the counter if at all possible. Just that it has to take at least a fleeting instant of actually wanting to make a baby, with consent from both partners. But not necessarily more. There MUST be no test, no license, no question – so that there is no denial of reproductive rights, just that it would make them opt-in.
Actually I thought of this mainly as a cost-savings measure for the state, not as a subject for fundraising.
Sounds a bit like Brave New World, with a whole lot of potential to become 1984-ish, depending on government-evilness.
Also, if opt-in is free of charge, you’re facing underpaid personnel in sub-par facilities, assuming you can even make it to the facility 200 km away (if you can pay for the train/bus/car ride). Or you probably won’t, for a long time, since the waiting period would likely be ~3 years+.
Or so I’d imagine, at least…
^that’s me, btw (not that anyone’d know me :p)
if you allow for free reversibility, what’s to stop someone from taking their incentive payment to get sterilized and then immediately demanding reversal?
I guess I just weigh the “right to reproduce” much less than you, in that I’m not sure the right to produce a baby trumps the collective right of society to not have an unproductive new member thrust upon it.
See http://en.wikipedia.org/wiki/Project_Prevention
I agree that unwanted children born to poor parents are probably on net a bad thing for societal welfare. But it’s not at all clear to me that the same can be said for planned children born to poor parents. Especially considering that most First World countries today have worryingly low birth rates.
When considering which people would be affected by this sterilisation premium, you quietly drop any talk about children being unwanted. This proposal would significantly reduce birth rates among low income people and would cut down on unwanted as well as on wanted children.
Donated to Multi, because fuck Russia’s attitude to transfolks.
That aside, I’ve been waiting for an open thread to ask: Does anyone have suggestions for a good place to start someone with Christian Rationalism?
I have a friend who is a member of a rather dubious strand of Christianity and she’s been showing clear signs of wanting to shift to a less… completely crazy… approach to thinking. She’s already a big fan of HPMOR and reasonably well read, but is also a fervent believer, so deconverting her is pretty much out of the question (and I suspect would have severe and long-lasting deleterious effects on her mental health).
So my question is, where to start? I don’t personally know very much about the theistic sides of rationalism, being the never-believed kind of atheist, so any advice is welcome.
Presuming that you mean Less Wrongian rationalism, Leah Libresco.
Can someone please summarize why Leah Libresco is Catholic? I am super confused as to how someone could both be a rationalist and actually believe that the claims of the Church are true.
To the best of my understanding: she noticed that she thought of moral law as a person who loved her, and that Catholicism seemed like the best way to reach back to that person.
I’ve wished she would give a lengthy explanation of this for a while–there are a few blog posts about it, but none of them are of Scottist length and address the millions of objections that come to mind when reading them.
I do find it a bit unseemly to be objecting to Leah not presenting an account of her religious journey satisfactory to random strangers on the internet. My main objection to Christianity is Christians’ common idea that their personal beliefs should affect my actions. If Leah rejects this idea, then I don’t see a pressing need to discuss the rest of her belief system.
Thank you! Thank everyone!
And yes, it’s pretty fucking bad. Don’t even want to talk about it. Suffice to say, the vast majority of people here aren’t aware of a *single* queer person they might personally know. And there are no high profile, publicly visible trans people *at all* – and most LGBT outreach efforts, such as they are, don’t even mention our existence.
(I don’t want to go into the practical dangers and impossibilities of living while trans.)
I do hope that you will talk about it at some point, with as high-powered a signal as possible, although I totally understand that it might not be prudent or psychologically healthy to do so now. I may not agree with your politics, and I may be kinda-sorta conflicted about your right to express them, but I will fight to the death for your right to tell your story. (Also your right to, like, not die, although that’s not a particularly universalizable impulse.)
Gave $20.
<3 </3
I am Russian-American and LGBT (not trans, but still) and while I don’t have to deal with the actual dangers of being an LGBT person in Russia, I get heartbroken pretty much every time I hear anything about LGBT stuff there. I’m so glad you’re getting out, and so sad that the only way you can live well is to leave.
And thank you for reaching out to this community and letting us help you achieve your goal. I fervently wish you the best of luck.
I think it depends in part on which strand of Christianity, and what you find rather dubious about it. Depending on what kind of denomination your friend is coming from, some kinds of discourse will produce less culture shock than others. Likewise, if a more rationalist-inflected Christianity prompts your friend to switch denominations (rather than just reweave her belief web around the edges), the culture shock issue will be a much bigger deal.