This is the semimonthly open thread. Post about anything you want, ask random questions, whatever. Also:
1. Obvious change is obvious. There are still some bugs to be ironed out, and ironed they will be. Thanks to Michael Keenan and Trike (especially Catherine Truscott) for some help. All positive changes are theirs; all remaining flaws are due to my pickiness alone. My obsessiveness finally overcame my laziness and I standardized the ads too; if you’re an advertiser and you don’t like it, send me an email and we’ll talk.
2. I’ve also added a bunch of new links to the blogroll. The categorization system isn’t very serious (some of you may recognize it?) and I’m still thinking about how I want to do it on a longer-term basis.
3. I’m grateful for all the interesting comments I received about cognitive-behavioral therapy, which helped me understand it a lot better. The impression I’m getting is that its ‘insights’ seem obvious/patronizing to everybody, but the real agent of change is forcing you to do the worksheets and exercises on a regular basis until they’ve really sunk in. Comment of the week is Mirzhan Irkegulov describing his experiences with it.
4. After the successes of other people in the community like Alicorn and Miri and Gwern, and multiple strong recommendations, I am opening a Patreon account so you can give me money. I DO NOT NEED MONEY (though, like many people, I like it). I have a job and am fairly comfortable. This blog is provided free and there is in no sense an implied agreement that if you like it you are under pressure to donate. I also don’t want to funge against anyone giving to charity or anybody who needs money more than I do. But if you feel a burning, burning desire to give me money, and giving it to me isn’t going to prevent it from going to anything more important, well, now you can. My Patreon account is here. I am a little proud of the subtitle.
This is the worst pitch for money I’ve ever heard. I can’t help but suspect it will be perversely successful.
He did just have a post on reverse psychology. I bet all of those accounts telling him they would donate are really just his alternate accounts.
Perhaps we’ll get an update next links post?
I’ll be honest, I clicked through just to see the subtitle. It was worth it.
Strongly suspect I would fail as an AI Box experiment gatekeeper if the AI just promised me puns.
As of this comment he will be receiving an extra $450 per month on average. That is certainly a nice chunk of spending money. I wonder if he will just donate all of it to a charity.
I’m definitely donating. ’Cause if we all give you lots and lots of money, maybe you’ll notice that writing your blog allows you to impact the world far, far more than by working at your day job; and you’ll quit it and start writing your blog full time! Yay!! A post every day! Nirvana.
After all, there are lots of people who are good psychiatrists. How many people can consistently say something new and insightful?
Psychiatrists have high incomes, which would allow a lot of discretionary income for effective altruism.
However, the side income may enable him to write more efficiently, by hiring someone to deal with the annoying but time-consuming things like first-level spam filtering, site maintenance, editing posts, and the like; so he can just spend more of his “SSC” time writing and commenting.
Prediction (not really serious): now that Scott is getting paid to write, it will feel like work, and he will be less prolific in his output.
Is raising children as Christians strictly superior to raising them as atheists? Either they remain Christian and accrue the benefits of faith (superior life outcomes, value preservation), or else they deconvert on their own and you get an atheist with the personal experience of having rejected religion.
I’ve heard similar arguments about teaching them of Santa Claus.
I think it depends on your experiences with it.
I was raised vaguely religious: my mother Protestant, my father nominally Catholic, and my school was Catholic. I grew up occasionally going to different churches. I didn’t have any terrible experiences with it. I wouldn’t say I “rejected religion” so much as it just sort of faded away, like belief in Santa.
I got to see the way it makes some people happy and the way it provides social structure for many who need it.
I’m an atheist but I don’t feel strongly about it. As I said to my SO recently, I have no compunctions about praising the good lord Santa or any other religion if there’s low or zero cost and some pragmatic social advantage.
If I were raising kids I’d probably get them baptized and bring them along to the occasional service if only to provide the immunization.
There’s a quote in Terry Pratchett’s Good Omens about some Satanist nuns who’d been *brought up satanist* who weren’t particularly evil, they just went to temple on Saturdays and did the chanting. They didn’t get excited about it or feel terribly strongly about it. There’s something to be said for that.
I’ve seen some families where the kids were brought up atheist with no contact with churches who went head-over-heels when they encountered the happy-clappy churches. Because they can have a tremendous appeal.
There’s something to be said for the satanist-nun type of upbringing even when it comes to atheism. Atheists who learned their unbelief at their parents’ knees, rather than as the result of a personal and readily self-dramatizable intellectual odyssey, are perhaps less in danger of breaking an arm patting themselves on the back about it.
My personal experience was that being taught about Santa Claus and the Tooth Fairy irrevocably damaged my trust in my parents. Maybe that’s a good thing, and almost certainly it was exacerbated by the way my autistic-like-not-autistic-brain thought about things, but it’s definitely something to consider.
And Santa etc are culturally accepted practices, which seems to blunt the effect of the betrayal of trust. Imagine if your kids found out that you were teaching them to believe in this God when you yourself did not believe in God.
I was furious at my parents for the deception involved in raising a child in the UK’s most populous religion, Vaguely Anglican-Flavoured Agnosticism (because the practices of VAFAs involve teaching their kids God exists despite actually not being sure themselves). I’m not sure my relationship with them would have recovered if they’d tried to feed me that stuff despite actually being atheists.
I will also agree with Dennett, Dawkins etc. in that teaching children Hell exists pretty much adds up to psychological torture.
I think the creepy thing about parents teaching children about Santa Claus is that at least some of them get a kick out of being able to deceive their children.
As for teaching the belief in Hell as emotional abuse, see Marlene Winell’s work on recovery from religious trauma. Belief in Hell is bad enough; teaching children that the Rapture could happen at any moment adds quite a bit to the PTSD.
Wouldn’t playing hide-and-seek, or even peek-a-boo, count as getting a kick out of deceiving children?
And of course we continue to engage in recreational deceptions through adulthood, but it may be particularly necessary in childhood as children learn to live in a world that is not always as it seems (and for reasons that go far beyond willful human deception). Doesn’t strike me as intrinsically creepy, if not taken to excess.
I think peek-a-boo is a very non-central example, or possibly not an example at all, of adults getting a kick out of deceiving children – I’m taking myself as typical here, but I think the kick is in how much children are delighted by peek-a-boo.
If you can set up a “just kidding” structure for when you’re deceiving a child, fine. I’m squicked by the idea of telling a child things you know are false, and not correcting it in a fairly short period of time.
I don’t recall finding out Santa wasn’t real ever being traumatic for me, or even really something you so much “found out” as just gradually came to know. At the age of 4 I was running round the house as one of my sisters jingled some bells, and a few years later I knew Santa wasn’t real and that it was a game that had been played entirely for my enjoyment at the time.
Mildly peeved and slightly disposed towards double-checking things they tell you in future I can understand, but it seems an odd thing to actually be “furious” about. That seems an overly strong emotion to associate with something as bland as Santa or VAFA.
Yes, it is an oddly strong emotion – from an adult perspective.
To a child, it’s literally the biggest deception they have ever encountered, perpetrated by the people they trust most in the world, with the help (in the case of Santa, at least) of every other adult in the world (or so it appears, from your limited perspective).
And yet I don’t think I have ever encountered a child, myself included, for whom the response was anything other than “That was FUN! Now it’s my turn to play on the grown-up team, which will be fun too!”
I’m interested in examples to the contrary, and you seem to be one at least in hindsight. But I’d caution against generalizing from personal experience with “To a child…”, when what you have is “To me…”
Agree totally with John Schilling. I found out Santa wasn’t real by spotting a present in my parents’ closet that “Santa” ended up giving to my sister. When I confronted them with this they were like “oh well, you got us, but don’t tell your sister, let her have the same fun you had at that age” and I felt really cool because I was now part of the game.
Even though she had no younger sibling to get one over on, I don’t recall her being particularly upset when she eventually found out about it either…
This is actually how I became an atheist at the age of six or seven. (As another data point, I also have an Autism Spectrum Disorder.) My parents lied to me about Santa, and when I found out I was *furious*. In fact I was so furious and hurt that I resolved never to be tricked by a lie like that again. So I went through all the things I could think of that were a lot like Santa:
– Things that you never saw or had any hard evidence of their existence.
– That were also magical.
– Associated with holidays and celebration and special events.
So I went down the list: Easter Bunny, Tooth Fairy, God…
This eventually culminated in a scene where, at six in the morning, I woke my parents up to tell them it was okay, they didn’t need to lie to me anymore, I’d figured out God wasn’t real. My mom looked over groggily and said something to the effect of:
“What? God exists. Go to sleep.”
I’m sympathetic to that view, but “strictly superior” generally just means you aren’t looking hard enough. Reverse causality may explain away all the benefits the faithful get, for starters, and it may also be a short-term approach that rejects the opportunity for innovation in favor of sticking to what was only a locally optimal strategy.
“…you get an atheist with the personal experience of having rejected religion.”
Perhaps, but depending on the sect (and other factors like how committed the person once was, how committed their family is, etc.) that personal experience of rejecting religion might include that person being rejected by family. This isn’t an issue if you’re asking about raising your own kids, but if we’re looking at the question of whether it’s better overall, then the difficulty and personal alienation that can come from rejecting religion should be taken into account. Having been raised in a (borderline?) cult, and then leaving, I can say pretty definitively that I’d rather not have had to go through that for 24 years of my life.
Given that you’re presumably talking about atheists raising the child, I don’t think the family is going to suddenly shun the new atheist.
How on earth is that supposed to be a positive thing? Sure enough, a lot of people out there consider suffering a virtue, and make statements along the lines of “without going through this much shit you’ll never understand and appreciate life,” but don’t we know better than to believe that?
Worldview converts tend to be a lot more knowledgeable about what and why they think than people who’ve stuck with their native belief set are.
OK. What’s the value of knowing exactly why religions are false? Is this value high enough to justify going through a major personal crisis for it? And is it even the most cost-efficient way to achieve it, as opposed to, say, reading a book?
The major personal crisis appears to be the active ingredient here. It’s not about knowing why any specific thing is wrong, it’s about knowing how to stop being wrong.
There are countless world views out there to examine and reject. Why does this have to be applied to Christianity? Raise them as Socialists or something.
If for some reason we’re looking for a foundational false belief for our kids to overcome as part of their personal development, the idea of routing all physical consequences to an all-powerful-but-mostly-hands-off deity is one of the less dangerous ones I can think of.
If for some reason we are looking for a foundational false belief for our kids to overcome, I think something has gone wrong at an earlier step in the algorithm.
That is also true.
If growing up as socialists had better outcomes, I’d think about it.
What if the originally atheist family accidentally converts themselves to the religion they were pretending to believe?
Then the entire family gets the superior life outcomes of believing in God and attending church regularly. Actually, looking at some of those, it’s starting to look as if “rational atheist” is the consolation prize, anyway.
This is why most organized missionary/evangelism projects have strict rules against arguing with atheists, especially in public.
Yes, but only because of extreme selection bias. It takes a lot to overcome the worldview you’ve been steeped in your whole life.
Escaping religion is a fundamental life experience for rationalists.
Well, I disagree. Unless your end goal is to reliably get upvoted on /r/atheism, there’s nothing crucial for systematic winning in life that can’t be achieved by less traumatic methods. People acquire a tremendous number of misconceptions in their lives, and there’s always something out there on which you can practice the skill of stopping being wrong. For one, history – it is presented to most people as a bunch of myths so heavily redacted that there’s always something you believe about it that’s blatantly false.
If we’re talking about learning the scientific method, well, to the extent that I’ve heard accounts of the upbringing of the world’s top scientists, none of them listed deconversion as a formative experience; much more likely the account would mention playing in nature, playing with computers or other machines, looking through a telescope for the first time, or something else along the lines of what you’d expect to see in a scientist’s childhood, rather than this reverse psychology trick. Instead of spending the formative years teaching a child how to think properly, in a way that becomes automatic for them, so they’re able to move on to more interesting problems using this thinking as a tool, you’re offering to spend those years knowingly teaching them bullshit, and then forcing them to figure everything out by themselves the hard way. How is that even supposed to sound sensible? Or else I have a couple of other brilliant ideas.
For example, did you know that non-native speakers of any language tend to have a much better explicit understanding of its structure than native speakers? Many times I’ve seen adult native English speakers open beginner ESL books and be completely shocked: “Oh, that’s how our language apparently works!” Thus, in order to make your kids better English speakers, raise them as native Korean speakers, and then let them learn English in school.
Or how about this: whatever sex your child will be, socialize them as the opposite gender. When they grow up, they will either end up transsexual and be fine (which is fairly unlikely), or they will go through dysphoria, rejection of the gender they were socialized with, and they will continue living as cis, but with a much better understanding of gender. Isn’t it amazing?
And while we’re on the topic of gender, consider the following. Aside from being polyamorous (this is information from his public yearly post on facebook) and atheist, Eliezer doesn’t belong to any group that’s a target of systematic trashing by religions. For him it was an epistemic crisis. But for a gay or transgender child it’s also a crisis of “I’m literally the worst person ever, because my moral authorities say so, and if I complain, they’ll probably kill me with stones.” And while you can probably make a case for ignoring the probability of your kids being transgender, the prior odds are that they’re literally more likely to be gay than to be myopic. Are you really gonna take the 10% chance of trashing the hell out of your kids’ self-esteem in order to give them the fundamental rationalist experience?
Can’t you just raise them Episcopalian? Or as one of the other numerous Christian or other theist sects that aren’t terrible?
Although repeated frequently, the 10% figure for the prevalence of homosexuality is most likely a serious overstatement. This is a difficult question to study since you have to rely on self-reports, so the numbers vary quite a bit based on what study you look at. But a reasonable ballpark figure is that homosexuals and bisexuals make up about 3 per cent of the population. See for example https://en.wikipedia.org/wiki/Demographics_of_sexual_orientation#Modern_survey_results
>Aside from being polyamorous (this is information from his public yearly post on facebook) and atheist, Eliezer doesn’t belong to any group that’s a target of systematic trashing by religions.
AFAIK, Eliezer is Jewish, which can cause a lot of systematic trashing by some believers of some denominations of Christianity.
I’ve heard two opinions on this.
Opinion One, backed by theory, is that having learned to reject religion, they will be more skeptical from then on.
Opinion Two, backed by my actual observations of people, is that having been brought up to seek certainty and fanaticism, they tend to jump headfirst into the first fanatical non-religious movement that doesn’t particularly contradict science or the mainstream that they can find.
Granted, this is from my observation of people brought up in very evangelical, cultish kinds of faiths. The person raised Unitarian might not have the same problem. But then, rejecting Unitarianism isn’t exactly going to be a life changer.
Opinion Two seems to leave us guessing when it comes to people emerging from the broad expanse of religious belief which lies between “certainty and fanaticism” and Unitarianism. For instance, which reaction should we expect from a person raised Catholic?
Well, I was raised catholic, and rejecting it was a fairly gradual process with no particular change in cognition or stress.
Likely this means I wasn’t raised with a religious mindset, only religious beliefs, and when the beliefs fell away, little changed.
“having been brought up to seek certainty and fanaticism, they tend to jump headfirst into the first fanatical non-religious movement”
Could be due to genetics.
Yeah. It stands to reason that if religiosity raises fertility, then a genetic basis for religiosity will spread. A deconverted, but genetically religiously inclined person would definitely seek something that pushes their religiosity buttons, even (perhaps sometimes especially) if it doesn’t call itself a religion.
It looks like religiosity (as well as things that correlate with religiosity, like empathy) is in the neighborhood of 40% genetic.
Is this theory based on comparing deconverted people with all people raised without religion, or with people raised by nerds, who were bringing them to natural history museums, teaching them science, and never refusing to give a comprehensive answer to the kids’ “why?” and “how?”? Because if it’s the former, there’s a very obvious action a reader of this blog may take to maximize the odds of having rational, skeptical children.
It certainly seems to have been a fundamental life experience for Eliezer, but I’m not convinced that that generalizes well. Especially given the massive religion-shaped chip on Eliezer’s shoulder.
It depends on where you live. In a place where there are very few religious people, raising your kids as a strong christian could make them feel left out. Think about how Jehovah’s Witness kids feel during holidays.
In a red county, raise as Christians. Benefits of increased social network.
In a blue county, raise as atheists. Benefits of logical thinking.
Purple county, take your pick.
Why does raising a child as an atheist confer the benefits of logical thinking? If there’s an essential connection between the two, I am blind to it. Perhaps it would be better to raise kids to be logical thinkers, whatever their religious attachments.
I suppose the idea is that if you raise a child religious, you basically have to teach them that contradictions are okay. But I’m not sure how much of an issue this is because my impression is that religious people are pretty good at compartmentalisation and that they find contradictions only okay when they concern the religious sphere.
As if being an atheist meant you don’t have to teach them contradictions.
“Important nonsense”, anyone? The Logical Positivists’ gallant attempt to paper over the glaring contradiction that was their sole tenet?
That isn’t a requirement of atheism. You can always go with “I don’t know”.
Is there any difference here between raising your child religious and raising your child in any political movement that contains a contradiction or is wrong about an aspect of reality?
Not all atheists are logical positivists, Mary; as a matter of fact, it’s rather fallen out of favor in the last few decades.
I suppose the idea is that if you raise a child religious, you basically have to teach them that contradictions are okay.
In what sense? (Also, the writing of my first comment was so slapdash that I want to crawl into a hole and die.)
And you’ve established that every single religion contains a contradiction? That would take pretty exhaustive knowledge.
I was admittedly thinking only of Christianity (because it’s the salient, easily available option in Western culture), which is famous for being contradictory and requiring cherrypicking to have any semblance of sensibility.
Perhaps Creutzer has a contradiction in his own worldview, which, via the principle of explosion, he used to infer every proposition whatsoever, including the proposition that all religions contain contradictions.
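(For anyone unfamiliar, the principle of explosion is real and the derivation is short: given both P and ¬P, classical logic lets you prove any Q whatsoever. As a sketch, here it is checked in Lean — the names `P`, `Q`, and the use of the standard `absurd` lemma are just illustrative:)

```lean
-- Ex falso quodlibet: from P and ¬P, any proposition Q follows.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp  -- absurd : a → ¬a → b
```

Informally: from P, infer P ∨ Q; from P ∨ Q and ¬P, infer Q by disjunctive syllogism.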
I was admittedly thinking only of Christianity (because it’s the salient, easily available option in Western culture), which is famous for being contradictory and requiring cherrypicking to have any semblance of sensibility.
Were you thinking of a particular variety of Christianity, or do you think that any and every version of Christianity contains contradictions?
Nah, I think paraconsistently. 😛
I think the Bible contains contradictions. Your preferred version of Christianity may contort itself to somehow deal with that, but it’s still a way of “contradictions are okay”, in some sense. Mind you, I’m not at all saying that this is a very strong argument against teaching religion, precisely because I know many Christians who are as capable at dealing with contradictions in everyday life as everybody else (which isn’t to say much…).
Assuming that you are right (I suspect you have in mind some examples I would dispute), this only makes Christianity contradictory if Christianity is committed to the inerrancy of Scripture. But many Christians do not accept the inerrancy of Scripture, and I don’t see why they (we) shouldn’t be counted as Christians because of it — it’s not like acknowledging that the Biblical writers got particular details wrong commits me to denying anything in the early creeds, for example.
Christianity may contort itself to somehow deal with that, but it’s still a way of “contradictions are okay”, in some sense.
Only in the same sense that saying that a historical source is generally reliable even though it contradicts itself on a minor point, or that so-and-so’s theory of Y is correct even though their theory of X is contradictory, are ways of saying that “contradictions are okay.”
I’m pretty willing to believe that there are no religions which a reasonable person would consider reasonably organized which don’t contain contradictions, because we would have heard about them by now. If your religion is, “God wants us to love each other,” I’m totally okay with you asserting that it is internally consistent. If there are any much more complicated than that that don’t contain contradictions (or, as they put it in When HARLIE Was One, are “systems of belief not consistent with observable reality”) I am not aware of them. Have I tested them myself? Nope. But as I said, if they existed, somebody would be pointing them out.
“…we would have heard about them by now.”
I really doubt that you would have. You do not seem particularly interested in such a thing, as your confidence that religions are all contradictory seems very high.
Forget religion, physics is contradictory. Our theories of general relativity and quantum mechanics do not yet align, and last I heard, it was gravity, of all things, that they had trouble explaining.
If you’re truly dedicated to keeping your kids safe from encountering any contradictions, better steer clear of science.
“I think the Bible contains contradictions.”
Please be more specific.
I’m pretty willing to believe that there are no religions which a reasonable person would consider reasonably organized which don’t contain contradictions, because we would have heard about them by now.
I think I’ve heard of many: Orthodox Judaism, Sunni Islam, Catholic Christianity, Orthodox Christianity, non-fundamentalist versions of Protestantism, Jehovah’s Witnesses…
I think most of these are false, but I don’t think any of them is contradictory.
@ Mary: I think my favorite is Jesus being descended from David. On his father’s side.
With all due respect, you have no idea what the extent of my familiarity with organized religion is.
And it doesn’t matter, because as I said, I wouldn’t have to go looking for it if it were there: it’d be prominently discussed in every mention of the topic, ever. It wouldn’t be deep in the Gnostic Gospels of the Duly Initiated: it’d be the first chapter in “Religion for Dummies.”
To use another quote from HARLIE:
HARLIE: THE HUMAN RACE HAS HAD TWO THOUSAND YEARS IN WHICH TO EXAMINE THE CHRISTIAN ETHIC – IT STILL HAS HOLES IN IT.
AUBERSON: WE’RE – NO, CHECK THAT – THEY’RE STILL IN THE PROCESS OF WORKING ON IT.
HARLIE: NONSENSE. IT’S STAGNANT AND YOU KNOW IT. YOU ARE A POOR ONE TO BE DEFENDING IT ANYWAY, AUBERSON. IF IT – OR ANY OF THEM – WERE PROVABLE, THEY COULD HAVE PROVEN BY NOW, SHOULD HAVE BEEN PROVEN BY NOW.
Like so many Socratic dialogues, that seems super-convincing at first, because the author gets to write the side he doesn’t agree with as a total dolt.
But then you think about it for a moment and realize that Harlie’s argument applies to literally every human endeavor ever.
And it doesn’t matter, because as I said, I wouldn’t have to go looking for it if it were there: it’d be prominently discussed in every mention of the topic, ever.
I presume this is hyperbole, but I’m not sure what the non-hyperbole charitable reading is supposed to be. Most contemporary anti-theistic arguments in the philosophy of religion, for example, do not argue that theism or particular forms thereof are contradictory; they argue that the evidence is against them.
I still have not seen an argument from anyone in this thread that every version of orthodox Christianity (including ones that deny Biblical inerrancy), much less other religions, contains contradictions (unless the quote from HARLIE is supposed to mean that the Christian ethic is contradictory, in which case I’d like to see an argument for that).
It would work that way if you substitute religion with the philosophy of religion, which is a very nice introduction to logical thinking in general.
Or, alternatively, just use all the time you’ve saved them from religious commitments and guide them to spend it in a semi-productive manner. Learn coding, woodworking, some sort of sports, camping, whatever. Just make sure they don’t spend it all on video games.
“Why does raising a child as an atheist confer the benefits of logical thinking? If there’s an essential connection between the two, I am blind to it. Perhaps it would be better to raise kids to be logical thinkers, whatever their religious attachments.”
Bravo! Most will miss the subtlety, but bravo!
I’ll amend that, then.
Raise them in the most desirable (however you define that) church around. (Note that this could involve converting to Judaism in some parts of the Northeast!)
Any interest in a longer post on someone who converted to Christianity from Atheism rather recently? I don’t just want to get dismissed as The Guy That Couldn’t Hack Rationality.
(It involves LSD and Milan Kundera at one point.)
You don’t need to go much farther than CS Lewis for a positive account of a very intelligent mind adopting the tenets of Christianity. I think agnostics and atheists should read CS Lewis if for no other reason than to test their own beliefs concerning the nature of the universe – that man could write a compelling case for God. The Abolition of Man is a wonderful account of morality as well that does not necessarily have to be considered Christian.
No doubt Lewis was a very smart guy, but his arguments in Mere Christianity were plausibly superseded by evo-psych accounts of morality, much as Divine Watchmaker arguments were superseded by Darwin. They’re not very convincing to a modern reader who’s familiar with naturalistic theories of behavioral ecology.
I haven’t read any of his other formal apologetics. I did enjoy The Screwtape Letters, though.
I’m a great fan of Lewis, but I’m not sure he’d be a good example of an atheist adopting the tenets of Christianity; he was never an atheist all the way down. He was raised Church of England, became ‘atheist’ around college age, back to CofE around age 30, iirc. His autobiography _Surprised by Joy_ is the story of all that.
@ stillnotking
I’m more interested in what Lewis’s set of tenets were and how they worked, than in his arguments supporting the set.
I was raving above about his subject headers in the Appendix to _The Abolition of Man_, and the system they form. It’s easy (for me) to imagine small communities of cavemen that followed such a system surviving — where those communities that did not follow it, did not survive (thus being unable to raise their children and grandchildren to reproduction age).
“he was never an atheist all the way down.”
No True Scotsman.
What evidence do you have that he was not an atheist at the time? Unless by “all the way down” you mean “would never leave,” which is not what atheism means.
” his arguments in Mere Christianity were plausibly superseded by evo-psych accounts of morality,”
plausibly?
superseded?
Serious philosophy does not deal with plausibility. And claiming that a theory can be superseded instead of being refuted is, as Lewis usefully labeled it, “chronological snobbery.” Which is a logical fallacy.
I can’t refute the theory that the universe was created last Wednesday by Odin. I believe the Occam’s Razor discussion in this thread has some relevance here.
> What evidence do you have that he was not an atheist at the time?
I said, “all the way down”.
Iirc, during that period he felt a great emotional draw toward myths, ‘Northernness’, the sort of things that Tolkien called ‘lies breathed through silver’. I remember him writing (probably in SBJ) something like, “The things I believed true [ ie the materialistic, mundane world that most atheists saw ] were the sort I found dull and repellant; all the things I loved, I believed false.”
Some other atheists do not feel that way. They may never have experienced that draw, and some may consider it repellant and dangerous.
Here’s Hardy being wistful:
I am like a gazer who should mark
An inland company
Standing upfingered, with, “Hark! hark!
The glorious distant sea!”
And feel, “Alas, ’tis but yon dark
And wind-swept pine to me!”
Yet I would bear my shortcomings
With meet tranquillity,
But for the charge that blessed things
I’d liefer have unbe.
O, doth a bird deprived of wings
Go earth-bound wilfully!
I should hope atheism doesn’t inherently involve a dislike of the classics.
“I believe the Occam’s Razor discussion in this thread has some relevance here.”
You should be more specific here. I can’t even tell which comment you’re replying to.
What evidence do you have that he was not an atheist at the time?
I said, “all the way down”.
And I commented on it. And you did not respond to it.
There is no difference between being an atheist and being one all the way down.
The notion that finding myths wonderful is evidence against his atheism has no grounding. There have been many atheists who have openly bragged that atheism is more “dull and repellant” — they seem to think that proves that they aren’t engaged in wishful thinking, and the religious people are. (Indeed, many of those are capable of coming up with fantastic contortions to explain that anything a religious person believes really is something anyone would wish for. They seem to neglect that one’s dislikes and likes have no necessary correlation with reality.)
@Mary,
In fairness, CS Lewis argues that one’s likes and dislikes do have correlations to reality.
@Mary: Sorry, I was replying to your comment:
I regard naturalistic explanations as more parsimonious, hence more credible, than ones that invoke a supreme being.
Lewis’ main argument in Mere Christianity was the lack of a plausible naturalistic explanation for our moral sense. Now we have one.
Lewis is not interested in the moral sense per se. You’re talking evolution of the eye and he’s talking Optics.
@Mary
Well, I think it would be fair to say some of the arguments in Mere Christianity were superseded in the sense that evolutionary psychology and game theory offer fairly plausible accounts of why we have the kind of conflicting urges that we have, and something that sometimes feels like a law within, without invoking a lawgiver, as he does.
And, I really think serious philosophy can deal with plausibility. “Probable” arguments are things scholastics talked about. And more to the point, attempts to get uber-strong foundationalist accounts of knowledge generally fail–you can’t get the metaphysical edifice of a lot of Christian philosophy with mathematical certainty. (When I did that kind of philosophy, and when I did mathematical proofs, they always felt very different). Probabilities are what we should work with in life, imho.
Are evo-psych explanations actually later than Lewis? I remember him arguing against an evolutionary explanation for Reason. (Tangentially, I think the problems of Reason and Free Will are essentially the same philosophically. Both posit a semi-magical force which our brains can access to do things that nothing in the physical world seems capable of.)
@Jaskologist: Mere Christianity was written in the early 1940s, a good thirty years before Trivers’ work on reciprocal altruism. At that time, the prevailing idea of “nature, red in tooth and claw” seemed to rule out the possibility of a natural process producing anything like our moral sense. As Lewis put it: “Conscience reveals to us a moral law whose source cannot be found in the natural world, thus pointing to a supernatural Lawgiver.”
Although — with the benefit of hindsight, of course — I probably would’ve responded to Lewis at the time that conscience doesn’t behave much like a “law”, unless it’s a law full of loopholes, exemptions, special circumstances, double standards, and naked rationalizations.
Except that Lewis’s arguments against it did not rely on “nature red in tooth and claw.” Indeed, he discussed it in terms of alleged instincts to preserve other lives.
“I regard naturalistic explanations as more parsimonious, hence more credible, than ones that invoke a supreme being.”
That’s nice. Lewis didn’t.
“Lewis’ main argument in Mere Christianity was the lack of a plausible naturalistic explanation for our moral sense. Now we have one.”
No we don’t. At least, not any better than the one he argued against. The pith and essence was that morality was a fruit of evolution. All we have done since is add some fillips on top… BTW, since our taste for sugar is also evolutionarily determined, do you regard it as on the same level as morality?
@Mary: “BTW, since our taste for sugar is also evolutionarily determined, do you regard it as on the same level as morality?”
I’m not sure what “on the same level” means here, but if you’re asking whether I think both of them are explicable as the results of evolution, then yes.
@ several people
Lewis fan here. His chunks of serious but readable theological stuff are buried under lots of “Radio Talks”-level stuff in his apologetics books.
_Mere Christianity_ has a list of what his precepts say, ~one short chapter for each, toward the middle or end.
The last chapter or so of _The Abolition of Man_ goes into is/ought. QFM (quoting from memory): “Yes, these are the behaviors that will preserve Civilization. But why OUGHT I act to preserve Civilization?” (Then the magnificent Appendix that I keep raving about.)
_Miracles_ has a chapter (#5?) about is/ought which he considers to come under mind over matter, which he treated (very well imo) in the first part of the book. The chapter/s (at the last part of the first section) that talk about Hume would be very interesting to some people here. (The second section of the book is not likely to interest non-theists).
I just ran into this C.S. Lewis doodle, in which it is clear that he is quite aware of naturalistic explanations for morality, and considers them bunk.
Yes very interested!
If you’re an atheist and can’t fake it convincingly, it probably won’t take. My parents tried having me and my sister go to church as kids, mainly to avoid a phenomenon they’d seen where kids raised in atheist households rebelled by getting extremely religious and staying that way. Both of us still believed in Santa Claus well after finding church pointless and not going anymore.
(Specifics: Our parents did go to church with us, we didn’t say grace or anything with any regularity, it was a Unitarian Universalist church (so while it was church, it might not actually qualify as religion), and when we stopped going I was 8 and she was 5.)
I very strongly suspect, purely from personal experiences, that going to a Unitarian church wouldn’t work. My family tried exactly the same thing as yours, when I was around 6 or 7. Curiously I don’t remember ever being told anything about God, Christianity, the Bible, or in fact anything even remotely religious when we went— the closest I can remember is being told that we must embrace the universe or something.
Which suggests another reason trying to fake it might fail: any church milquetoast enough for the average atheist to be OK with their politics is too milquetoast to actually provide any sense of tribal identity, or whatever other benefits organized religion brings.
To quote myself from the last OT thread: “If the thought of your culture being overtaken by outsiders doesn’t fill you with existential dread, that isn’t your culture.”
The thought of my culture getting overtaken by outsiders results in “so what, I’ll found it anew somewhere else”. My culture lives in Cyberspace and that place is big enough that I can exercise Exit Rights.
Seven billion people, half of them connected to the ’Net: I’ll always find People Like Me on there, because human-mind-space is not so huge, or so unclustered, that I’d ever be the last one in the subspace of “sufficiently like me”.
Does the prospect of the Butlerian Jihad fill you with existential dread?
But you are right; Alarune’s formulation does need to be tweaked to account for people who imagine their culture to be eternal and indestructible.
I’m not 100% convinced that’s true for everyone (I’m not sure there’s anything I identify with that strongly. Maybe “being overtaken by outsiders” needs to be defined more vaguely), but it’s a very interesting line of thinking and if reworded or possibly weakened a bit could be a very useful heuristic.
I was thinking of something like Roman Catholicism, Eastern Orthodoxy, or Mormonism. I agree that if you are going to try Unitarian Universalism, you might as well not bother; utter weak sauce.
I grew up with one foot in two churches; for that matter, two churches between which there’s an ancient social rift (Catholic and Protestant in Ireland).
Both pretty milquetoast but both with people who were really into it. Even at a young age that has a way of giving you a bit of an outside view. I even liked the social side, the people at the protestant church were much more likable.
But it’s also a little bit like watching a little kid excited about Santa.
I could see it brought them joy, I wouldn’t want to take that away from them but it was also a joy I couldn’t share except superficially for pretty much the same reason that I couldn’t get genuinely excited about the sound of sleigh bells.
But you didn’t rebel by becoming extremely religious, did you? 😉
So even if you didn’t get religion, the intervention had the desired effect.
I’ve considered such ideas as well.
The relationship between parent and child might be damaged. Also, it’s a costly investment and there might be better ways to spend effort on their futures.
I find it unlikely that taking your kid to church every week is the best way to teach scepticism. My father lied to me, in an amusing way, all the time when I was a kid. A game that involves stating convincing falsehoods and dissonant facts would be a much more efficient means of teaching the ability to question authority.
When you say lie to you, do you mean like Calvin’s father does?
Another possibly useful thing would be to get into the habit of betting small amounts with them on certain statements for calibration. If you’re Asian like me you could ask them to predict their test scores, or on how long it would take for them to complete a chore or homework, to teach them to plan better.
Come to think of it, nearly all the CFAR techniques sound like excellent tools for parenting.
As a former Christian, I’d say no. The genuine belief that I deserved to go to hell was but one of the psychological downsides, and my deconversion was a long and painful process.
I don’t know why annihilationism is such an unpopular doctrine, it’s more text-compliant and resolves something like half of the standard gripes about Christianity in one step.
I actually went through a period of annihilationism after reading the Bible (I agree that it’s a better interpretation of the text, or at least some of it) but that was shortly before I realised that I didn’t actually believe in God and hadn’t for a while. Again I attribute the reading of the Bible as one of the main drivers of my growing disbelief. (Sad to admit actually, that probably the main thing that drove me away from Christianity was feeling ostracised from fellow Christians, an emotional reason that I later successfully rationalised. I’m pretty sure my reasons for disbelief are solid, but I don’t know how I can be sure).
Of course growing up, I essentially believed what I was told about God, the Bible and the afterlife.
Yeah, as someone in the same boat, I agree. I was raised Southern Baptist (a ridiculous name for a denomination by the way, when you live in Australia) and I am pretty certain that if I weren’t I would have had a much healthier upbringing. All I really got from it was, at an early age I learned that life is a mass of contradictions ruled over by an unknowable monster who would probably damn me to hell even if I did exactly what he wanted me to do.
This came from three experiences that will forever remain seared in my brain:
The first, when I was 9 was my dad telling me that I had to fear God to love him. That God wanted me to fear him as that would show him that I loved him. At the time, and for the next ten years until he got help, this made me certain of two things – that my dad, a violent alcoholic who beat me and my brothers regularly, was trying to tell me that he wanted me to fear him, and that God was exactly like my dad except omnipresent, omniscient and omnipotent (although I obviously didn’t use those terms when I was 9).
The second – when I was 11 we were coming home from church at Easter, having been told the story of Jesus’ death and resurrection. I asked my parents why Judas or Pontius Pilate were thought bad, when Judas had done exactly what was necessary to fulfill God’s plan and Pilate did likewise and also let God’s chosen people decide his fate, which seemed to me like a neat thing for a ruler to do. They both seemed like good guys to me, I said, which was evidently the wrong thing to say – along with being thrown down the stairs by my dad, my mum made me stay inside for the rest of the Easter break and explained that they betrayed our Lord and saviour, and also that God works in mysterious ways.
The third, when I was 12, was after I had told my teacher that I wouldn’t let her teach nonsense about evolution (not using those words obviously) when the bible clearly stated that God made everything in seven days. This is really the only story that isn’t completely horrible – I was congratulated by my parents, they had never been more proud of me, and as this was well before the recent ‘teach the controversy’ nonsense, when we went to the principal to discuss the matter he didn’t know how to handle it. Public schools still had a religion class then, which I said proved we should believe the bible and he told me I could sit out of science class during this unit and get a passing grade. While I loved science, the idea of not having to go to class was too awesome for me to pass up, and I held onto that conviction – bolstered by the understanding that God doesn’t make sense and the only important thing was to believe in him – much longer than I should have because of it.
Obviously I went through a lot that hopefully no rationalist would put their child through, and I can’t blame all of it on religion. But the fear and terror I learned is all supported by the bible; it is a successful religion because it is a closed loop, and not even logic is necessarily enough to break someone free from Christianity.
It wasn’t until I started listening to Bart Ehrman’s lectures in secret that I found someone who provided an explanation for Christianity that both came from a place of biblical knowledge and actually explained anything in a non-contradictory manner. Of course, like Ehrman I could no longer continue to believe in the bible as the literal word of God, and with that out of the way there was no fear keeping me tethered to God, so I became an atheist. But I would never wish the mental torture I went through, believing I was a slave to a mad deity, on anyone.
I just wanted to add two more things – 1) My experiences are definitely not universal and I have no doubt that there are faithful Christians who have an understanding of God that both makes sense and doesn’t torture them. I was not so lucky however. 2) While those incidents were definitely part of my rejection of my faith, they are by no means the only reasons, and my other reasons are not based in emotion, but rational thinking.
Limi, I don’t know how much this gesture means, but as a Christian I want to apologize for the terrible way your parents treated you. The God your parents speak of is not the God whom I know, and I and many other Christians would completely reject the idea that your parents’ actions could find any justification from Christianity.
Teaching Christianity or Atheism/Humanism to your children in order to give them good values is like giving them a football jersey to ensure they’ll be good at sport. It doesn’t take that much life experience to see that there are complete a-holes in every worldview, so banking on that to get you across the line is unwise.
Teach them good values themselves through leading by example and expecting them to do the same. Make sure they identify as people first and not a member of a group which is defined by a judgement on a single life issue. Remind them to put themselves in other people’s shoes, expose them to a large variety of ideas, encourage them to think rationally, and you will have done your job.
“Strictly superior” by what criteria? If you want them to get the supposed benefits from religiosity, you could just as well raise them to be vegan effective altruists as Christians (oh dear, am I starting another fight?)
I’m more sympathetic to the idea that “At least if they decide to be atheists, they’ll know what they’re rebelling against” but again, it depends: if you’re starting from a position of raising them as Unitarian Universalists, for instance, that’s going to be very different to Five-Point Calvinism 🙂
I’ve often wondered to myself how, if I were atheist, I would handle the practical issue of religion. It seems pretty clear empirically that religion is:
a) Inevitable. Australopithecine had no religion, but every human society ever has. It’s something we evolved right along with intelligence. Those societies which tried to get rid of it turned into Communist hell-holes. Those where it just faded get out-bred and overtaken by more religious groups, whether internal or external (this is a cycle seen throughout history).
b) Beneficial, at least in moderate doses. Believers do better on pretty much every objective measure of well-being we have. This would probably bother me more as an atheist. As a Christian, I can/must believe that Truth is literally our friend. But outside that framework, stark reality would be indifferent to us at best. I’m not sure how I’d thread the needle of being very desirous of truth myself, but also recognizing that lies may be better by every other metric of human thriving.
Those where it just faded get out-bred and overtaken by more religious groups, whether internal or external (this is a cycle seen throughout history).
I think the cycle more often involves infiltration by new religions that don’t really pattern-match the previous cycle’s religion, as memetics can work faster than genetics when there is a favorable niche in the form of a population with a religion-sized hole in its psyche. Early Christianity was something very different than the classical Hellenistic religions that weren’t really meeting the needs of the 1st-century Hellenes, and Gaian environmentalism very different from modern Christianity.
Can you give examples?
Ancient Rome is one. Moral authority of Hellenism tanks, rampant immorality (even by Hellenic standards) reigns. New religion infiltrates up to the elites, who then impose it without much opposition upon the masses.
The spread of Christianity and Islam in general, all over the pagan lands is another. The ‘fading’ is debatable, but the comparative socio-memetic weakness of the Hellenic/Slavic/Norse/Altaic paganism compared to the proselytistic Abrahamic faiths is obvious.
The former Soviet bloc countries, which largely have reverted to their pre-Communism faiths.
In China, we see a reversion to Confucianism, and rapid spread of Christianity, despite the government’s best efforts to suppress it.
“Those societies which tried to get rid of it turned into Communist hell-holes.”
Isn’t that communism’s fault? Communists wanted to get rid of religion because they saw it as an obstacle to communism. It’s not like a bunch of people decided to exterminate religion and communism arose as a result.
On part B: As an atheist, I can’t bring myself to believe something that isn’t supported by evidence, no matter how much happier it would make me.
The French revolution got rid of religion because they wanted the Church’s land and money; they wound up with something that wasn’t communism per se but had the general socialism-with-piles-of-skulls nature. The early 20th century Mexican revolution is a similar example.
So this isn’t something specific to Marx, Lenin, et al, and may be a general consequence of artificially-imposed abolition of religion. Peaceful rationalist enlightenment does not appear to be a general consequence of the abolition of religion.
I’d have thought abolishing a religion which doesn’t also own 20% of the property in the country is a slightly different situation; we know from the various east-Asian mid-C20 attacks on landlords that aggressive property redistribution from people who have it by historical accident, think they have a right to it and don’t want to give it up is prone to piles of skulls whether the people you’re redistributing from are religious or not.
It is a little unfortunate that one of the side-effects of the collapse of communism in the Warsaw Pact is that the local quite-state-attached churches got their requisitioned property back en masse, meaning that somewhere like Romania has an Orthodox Church which is a landlord on the 1940s scale, rather than something like the Church of England which has been through sixty years of deconsecrating churches as the believers went away.
But, as an atheist, you can bring yourself to disbelieve something you can’t support by evidence?
Your implied argument fails because there is no symmetry between belief and disbelief. Disbelief is the default because nearly all possible beliefs about the world are wrong.
Neither atheism nor theism is the “default.”
Agnosticism is the default over both atheism and theism.
So if you are an agnostic, I agree with you. But if you are an atheist, I don’t.
Also almost all religious people think that the evidence supports religion, so don’t go around patting yourself on the back for thinking that your beliefs conform to the evidence–almost everyone thinks that. Instead, try to look for awards you can give yourself that the religious people can’t give to themselves. Such as “My beliefs on this question conform to what most philosophers believe.” Religious people can say that most philosophers who specialize in religion believe in God, but you can dismiss that as a selection effect probably.
Wrong, Daniel. My table is atheist. It is without theistic beliefs. Agnosticism and atheism are not mutually exclusive categories.
>Believers do better on pretty much every objective measure of well-being we have.
A quick google doesn’t back this up. There is a negative correlation between development and religiosity for both the countries in the world and states in the US.
I think there is a presentism bias here. How does the correlation look if we run it a few decades ago, when the Soviet Union, China, etc dominated the atheistic world?
I don’t have the link handy (I should really save it next time I find it, since it keeps coming up), but even our esteemed atheist-and-pretty-serious-about-it host has acknowledged that religiousness correlates with longer life, better physical health, and better mental health. Those are all things parents usually want for their children.
Is that the result you get when correcting for people not being in couples? It’s pretty clear that being single and living alone when the diseases of age start striking is bad for your mental health and is catastrophic for your recovery from heart-attack or stroke (and therefore for life duration) – having someone there to notice and ring 999 is incredibly helpful.
Tom, I don’t know. Join me in petitioning Scott to deep-dive those studies!
(As a practical matter, I don’t find it very important whether the story is “religious beliefs cause benefits,” or “religious beliefs encourage behaviors which bring benefits.”)
I don’t know how things looked a few decades ago. But the USSR only lasted for 75 years and that’s a pretty small chunk of human history. I think it’s been a consistent trend for the elites of society to be less genuinely religious than the masses.
I find your claim that religiousness improves life expectancy unlikely. Here is the first website I found:
http://www.gallup.com/poll/142727/religiosity-highest-world-poorest-nations.aspx
The avg life expectancy in the 10 most religious countries is 63.1. In the 10 least religious countries it is 78.6. That’s a pretty huge difference.
Why do you keep trying to look at country-level data? The question isn’t “should I move my kid to a religious country?” It’s “should I raise my kid religious?”
Anyway, I found the post where Scott mentions this:
He provides sources for each of those claims individually. And here’s a round-up from Forbes listing additional benefits. Scott’s study gives 2-3 years of extra life from weekly religious attendance; another study gives 7 years.
Assume the world is inconvenient like that. Would you really withhold 2-7 extra years of life from your kid?
The Soviet Union in particular went from a medieval agrarian peasant society to launching the first artificial satellite in 40 years. In general, Communist countries during the Cold War experienced extremely rapid economic and technological development and massive investment in infrastructure like railroads, dams, factories, and so on. It wasn’t reliably distributed, and some of those big initiatives resulted in famine, and in other cases the leaders used state control over the economy to deliberately cause famines. I wouldn’t want my country to turn Communist. But not because Communism or atheism isn’t associated with “development”.
However, the positive associations on an individual level between religiosity and both life expectancy and life satisfaction are well-established.
There were two large communist countries, and one of them had basically zero economic development during its 30 years of communism during the Cold War.
Joe Teicher: there are so many differences between different countries around the world that it’s in general impossible to draw significant conclusions from how country X and country Y differ with regards to attribute Z without controlling for lots of potential confounders. Religiosity correlates with many factors which correlate with or cause reduced development/life expectancy; this is compatible with religiosity itself increasing those things. Indeed, if there’s any direct causal link here, lack of development probably increases religiosity, not the other way around. (Crime rate correlates with number of police, but the most plausible explanation is that crime –> police, not police –> crime.)
Jaskologist is right that more careful studies find correlations between religiosity and most measures of well-being. See, for example, ch. 13 of Hood et al.’s Psychology of Religion: An Empirical Approach for an overview of the literature on religion and health.
Australopithecine had no religion
Do we actually know this? Do we have enough evidence to state with confidence that Australopithecines didn’t ever engage in ceremonial burial or other “religious” rites?
I have no actual evidence for the claim. Feel free to move up the family tree until you get to something sufficiently chimp-like that you can be confident the claim is true.
What evidence would you require that contemporary bonobos don’t have religion?
I was guessing he said ‘australopithecine had no religion’ as a contrapositive to ‘Neanderthals ritually buried their dead’, which is about the easiest archaeological cognate to religion.
Tom, they’d need to have better communication before they could have persistent, group religion.
Drawing a bright line around religion gets harder as material culture gets more scarce, but if memory serves, burial and other ceremonial behavior appears to be a fairly recent thing. (It doesn’t seem to be strictly limited to our branch of Homo, though; Neanderthals buried their dead and left other ceremonial-looking artifacts.) Wikipedia says that the oldest possible burials date to about 300,000 years ago, and the oldest unambiguous ones to 130,000.
Believers who go to church. Being a believer but being a loner has no health benefits. As a matter of fact, being an atheist who goes to church pretty much gives you the same benefits.
I think raising kids as atheists while instilling good values is probably ideal for several reasons:
– If a set of values is actually good, you should be able to teach them why they’re good. If the material reasons to live by a set of values aren’t convincing, then they’re not very good values. (Unless your kid is dumb, I guess. Then they might need GOD SAID SO AND YOU’RE GOING TO HELL IF YOU DON’T LISTEN TO HIM.)
– There are some important values you can’t instill with a religious upbringing, like skepticism.
– Not believing in things that are made up is better than believing in things that are made up, and you can’t count on people with a religious upbringing to deconvert. Most of them don’t.
The best argument for a religious upbringing is the sense of community one gets from belonging to an organized religion, but that can usually be found elsewhere.
I disagree that skepticism and religiosity are mutually exclusive. Ideally, you would use the religious framework as “training-wheels” while simultaneously cultivating the sense of skepticism which would ultimately destroy it.
Just going by my own experience, an atheist is not likely to have a child who develops early a religious belief that is more than skin deep, and it will be destroyed as soon as skepticism develops.
I don’t see why that would necessarily hold. Truth is not the same as goodness. I see no reason why the set of values for which there are the best arguments must be contained in the set of values which are good.
From a materialist standpoint, truth has to be equivalent to goodness in some regard for them to mean anything beyond personal aesthetic preference. If you can’t tell your children, “Don’t murder, lie, or steal because it will contribute to a society in which we live less fulfilling lives” and believe it to be true, in what fashion do you believe it to be moral to do so?
Or are you afraid a clever atheist child may deduce that if they are a sufficiently adept liar or cheater that there is no personal drawback and thus no immorality in any sense that they find binding?
What does it mean to impart “good” values, anyway?
Good question, I tend to support most forms of religion as useful forms of cognitive-emotional scaffolding, but I’ve never seriously considered raising any potential children as religious. My main objection is that it would involve deceiving my children about what I actually believed. I wouldn’t necessarily object to sending them to a religious school though, assuming it didn’t teach creationism or anything.
I’ve spent 12 years in religious schools. I’m also a fundamentalist atheist who never believed.
I simply had the good luck of being taught how to distinguish reality from fiction thoroughly enough that I never began to believe in anything supernatural.
I went to Sunday school, a Catholic middle school, and through Confirmation education and spent a long time wanting to believe & believing in belief, but I don’t think that I ever really believed either.
Eh. I disagree that it’s valuable to have rejected religion; it’s better to not make mistakes in the first place, and there are enough places to make mistakes that you don’t need to force any for pedagogical reasons. Might as well start them off in the best place possible.
I agree. Teaching children to reject superstition in all its forms, religious or otherwise, is highly beneficial. The human brain already seems to have an inherent tendency towards superstition and early teaching can help them overcome this natural defect.
The argument from “psychological benefit” is deeply disturbing. That argument implies that it is fine to tell people falsehoods if it makes them feel better. On a personal level it may be that telling someone a lie may spare their feelings, but that is not a very good general ethical principle.
I suspect the psychological benefits of religion come less from the beliefs themselves and more from the social aspects, like being part of a community and participating in rituals with other members of that community. Though maybe some people find comfort in simply having a structured belief system.
Also, atheism is the lack of a particular set of beliefs rather than a system in and of itself, so “raising a kid atheist” could mean several different things: it could mean explicitly telling them there is no God or gods and having them read lots of books criticizing religion, or it could mean simply never talking about God or religion, but potentially replacing that with some other positive belief system…which would be less “raising a kid atheist” and more raising a kid as something other than religious.
If you are not a Christian, raising your children as Christians teaches a bad lesson about honesty—unless you tell them “I don’t believe this stuff, but I think you should,” which might not be very convincing.
My parents once asked me if it would have been better if they had reared me in their parents’ religion (Judaism—which neither of them believed in). My response was that I preferred having been reared in their religion—18th century rationalism, the religion of David Hume and Adam Smith.
It occurs to me that one solution to the problem is to marry a religious spouse, tell your children that this a subject on which their parents have different opinions, and let them decide for themselves. In our case that seems to have produced a Christian daughter and an atheist son.
I was raised in Conservative Judaism by an atheist father and a deist mother (she says she believes in a God, but not Adonai), both ethnically Jewish. I’m an agnostic and the main thing I got out of it was an occasional craving for gefilte fish.
I feel like in discussions such as this, Judaism and Christianity don’t really compare. Neither of my parents believe in god, but I was very confused when my atheist schoolmates claimed that they had never been raised with any religion. Religiousness and belief in god get pretty uncorrelated in many Jewish households in a way that doesn’t seem to exist in other religions. Or perhaps my sample size is too small.
No, it’s common with Jews.
A friend of mine who was Jewish (parents Holocaust survivors) and had multiple degrees in philosophy, etc, liked to say that Judaism has a higher proportion of atheists than any other religion.
Interestingly, I think that the least angry and most well-adjusted atheists tend to be raised in Reform Judaism/Cultural Judaism/Philosophical Judaism whatever you want to call it.
My parents are atheists. I was still raised in Reform Judaism. We went to shul on the high holidays and for a few shabbats (the rabbi requested ten a year)*, we celebrated Passover and Hanukkah. I had a Bar Mitzvah and confirmation and went to Hebrew School until 12th grade.
A Deity probably does not exist. Yet I am rather happy with my very reasonable reform Judaism. The really angry atheists I know tend to have grown up in super-strict Christian families. Usually a form of evangelical Protestantism of the Jesus Camp/Duggar sort.
The way Reform Judaism views religious practice as culture/philosophy is pretty reasonable for me. I wouldn’t feel bad about raising my kids (if I have any) in Reform Judaism and I would probably tell them eventually that God probably does not exist but that Judaism connects them to 5000 years of history and culture and they should be proud of their Jewishness.
*It occurs to me that a rabbi asking people to attend services only 10 times a year would strike people from more religious households as very strange.
As someone who grew up secular-y Jewish (had an orthodox Bar Mitzvah and usually did kiddush on Fridays), I’ve always been a bit confused by Reform Judaism. It seems like “Let’s cut out only the bits of religion we like” – which is nice and all, but it seems weird to me that someone could pray to a god of a holy text while openly ignoring the parts of said holy text they don’t like. It seems like a formal way to be casual about Judaism, which is just strange to me.
(Hoping I didn’t come off as saying “hey everyone! Be more like me!”)
The central tenet of reform Judaism is that the Torah is old and doesn’t make sense in a modern context without significant reinterpretation. For example, the prohibition on lighting fires on Sabbath doesn’t necessarily mean that god doesn’t want us to browse reddit on Saturdays. So they don’t ignore the parts they don’t like, so much as trust their instincts when something seems off.
Orthodoxy, with its central tenet of “Follow everything to its logical conclusion no matter how ridiculous,” is certainly more fun but far from the way any sane person would interpret the Torah.
I’d think that Orthodox Judaism is a very sane way to interpret the Torah if you start from the presumption that God exists, knows the future, and wrote the Torah in full knowledge of that future. If God knew that accessing the Internet would involve lighting a fire, and He still said “don’t light fires on the Sabbath,” then – as crazy as it sounds to us – it still makes sense to accept that God knew what He was talking about.
(Of course, this also assumes that the Torah is still in effect today, and that “lighting a fire” covers turning on a computer in the first place. Christians would dispute the first, and I’m pretty sure Karaite Jews would dispute the second.)
Wait… what? Why would you teach them something you know to be false? And what’s this “value preservation” thing you’re talking about?
I think he means the “cultural Christian” aspect that both Christians and atheists/agnostics in the West share. Someone raised atheist will necessarily have a different memetic make-up from someone raised Christian, even if the latter happens to be an atheist too.
By “value preservation”, I mean the opposite of value drift; making sure your children will have similar values to you. For a simple example, imagine an atheist who grew up in a Quiverfull family and was raised to strongly value procreation, a preference which remains with him even after deconversion. He raises his children as atheists and finds, much to his chagrin, that they are easily infected by modern memes about the evils of overpopulation, the wonders of a childfree life, and the importance of one’s career, with the end result that he doesn’t have any grandchildren and his line becomes extinct; clearly a bad outcome from his perspective.
If instead this hypothetical atheist had raised his children in a religious tradition which emphasized procreation (whether Quiverfull, Catholic, or something else), his children would have had a good chance of believing that having children of their own was an important religious duty, and if any child of the atheist ended up rejecting religion there would still have been a chance that the child in question could have retained a strong preference for being fruitful and multiplying, much like the original atheist did.
>Is raising children as Christians strictly superior to raising them as atheists?
Yes. Next question? 🙂
Semi-relatedly, I recall a study of the religious beliefs of kids compared to parents. Turned out that in the vast majority of cases, the children tend to follow the father’s religion, and the mother’s religion is largely irrelevant. Makes sense, I guess, but suggests that it is the father who needs to provide the example rather than vaguely ‘the parents’.
What happens with homosexual parenting?
What do you mean?
If someone has two fathers with different religions, which religion do they pick up?
If someone has two mothers with different religions, which religion do they pick up?
I have no idea.
This may sound a bit tongue-in-cheek, but I mean it: if your ideal is a conservative atheist, raising a child atheist works better for that. Deconverting usually means becoming hugely liberal. A child raised atheist is not angry at religion, hence usually not too angry at conservative values.
Because the deconvert’s thought process is usually e.g. “all the biblical ways to suppress my sexuality were wrong, so fuck you, and I will rebel,” while the man raised atheist may think “hm, my parents urged me to fuck girls when I was 15, but it is surprisingly hard to find a partner; other guys don’t find one until 25; maybe there are less crazy, more biological reasons why conservatives suppress it?”
A child raised atheist can become that kind of atheist conservative who does not believe in religion but sees it as useful from a sociological viewpoint, for things like group cohesion. So he basically endorses it for people stupider than himself.
A conservative atheist usually thinks religion is really helpful for people who are stupid. If a smart kid has religion pushed on him, he will rarely think so, because he will feel strongly that it is a lie, and why should we lie to stupid people? A kid raised atheist will see religion not so much as a lie but as just one of those social things people say but don’t mean. Ritual, basically. Like when you say “good morning” to someone and you mean “fuck off and die.” So he accepts that stupid people can benefit from rituals that are never really meant to be believed as a real truth, only a communal truth.
This is interesting to me because my parents raised both my brother and me in a milquetoast strand of Presbyterianism. Importantly, when I was around thirteen it was also eventually explained (to me, at least) that they personally didn’t believe in God but I could believe whatever I felt to be true. After that I became an atheist, though I never quite believed before then either. I think my earlier disbelief was probably also influenced by the disbelief of my parents, even though they hadn’t yet told me explicitly, which might be a factor any atheists trying to raise their kids religiously might want to consider. I also don’t remember being particularly mad – it just seemed to make sense.
Although I no longer believed, and when the time came refused confirmation, I was still forced to attend church for four more years (being thirteen, my parents didn’t want me to simply choose whichever path was easiest). Forced, however, is a bit too strong of a word – even now, I still attend church when I come home for the summers, and would ultimately recommend it. Here are the reasons why:
Benefits
– Opportunities + Base material goods: My church is tiny and by most standards has few of either of those things, but I still benefited. One of my favorite jobs I’ve had is being the church pianist. Also, when I graduated high school I received a small scholarship from the church as well as gift money from most of the people in it. Bigger churches have things like volunteering trips and choirs.
– Community: This was the reason my parents raised us Presbyterian in the first place, and it’s probably the biggest benefit. Religion can serve as a very rough filtering mechanism between the kids who do well in school, don’t drink, etc. etc. and the kids who do drugs, skip class, etc. etc. Again, this is very rough and probably has less meaning in places where everyone goes to church, but in those places not going to church has the big drawback of excluding you from everyone’s social circle anyway. Also, prayer chains and the like may be more motivated by gossip than godly impulses, but church really does give you an automatic circle of people who care about you to at least some degree.
– Education: I know most SSC readers are probably in some sort of technical field, but literature is stuffed with biblical references, and history is difficult to understand if you just don’t get why religion exists at all. Sunday school as a kid gives at least a basic knowledge without any real study, and if someone is motivated to explore theology further on, that provides solid training in critical thought, even if it’s not the same flavor of skepticism as Less Wrong.
Drawbacks:
– Opportunity cost (in both time and money): Donations are socially enforced for adults that look like they can afford to give, and the money is probably not going to EA-approved charities. But some of the money is going to charity, at least, and I find the claim that (generic) you would have spent that money to help other people without the prodding eyes of the other people in the congregation slightly suspect. Similarly, every Sunday I’ve spent an hour on church that I could have used to develop useful skills, but I probably would have spent that time sleeping in if I hadn’t gone to church.
– The consequences of fervent belief: I’ve seen many testimonials of the trauma produced by belief in hell, etc. There’s also, of course, the fact that your kids would have a mindset that (you believe) to be false. I do think that the type of vaguely religious upbringing I’ve been discussing avoids most of that, though it has its own issues (a distinct lack of taking ideas seriously, for example), and this post has gone on too long and I have too few properly thought-out ideas. I’m sure others can find huge drawbacks in this section.
* Caveat: There are a lot of anecdotes in this comment, and aside from the usual problems with that my family is rather unusual when it comes to religion. Case in point – my father has now become a pastor (mainly for academic reasons) despite a continued lack of belief in God.
We wouldn’t have felt comfortable lying to our kids about what we believed, but we did try going to church for a while with my wife’s parents + siblings. We were hoping we could be somewhat odd members of the community, but it didn’t work out. (We just didn’t fit in well enough, and our kids, even though quite young, didn’t fit in either. Some things were actively off-putting. Nothing positive seemed to be growing.)
I would be perfectly happy if my kids had turned out Christian, but that seems rather unlikely at this point.
I think Christians who deconvert often find that process quite painful. If deconversion includes alienation from the people you love, that is no small cost. Fortunately my wife has managed to stay close to her family, although I think Grandma simply believes she isn’t “really” an atheist.
“Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires.”
— Black Belt Bayesian
That’s basically what I was going to do, but I won’t lie to her, and my wife is lazy, and my mother died, and my stepmother keeps my father away from my family, so there was no one to try to convert her.
The categorization system is fantastic, and most of the groupings are clever, though iirc they’re out of order relative to Borges’ list, which bugs me slightly. Which subcat was the hardest to populate?
WordPress automatically alphabetized them, nothing I can do. Also, Emperor really had to be neoreactionary, and I was sort of wary of putting them first as the first thing visitors to my blog saw.
Suckling Pigs was hard, and of course the mapping between “Stray Dogs” and econoblogs is pretty much arbitrary, although econoblogs do often go astray.
Oh, snap. I’d surmised correctly on emperors, pigs, and dogs, but completely failed to notice the categories were listed alphabetically.
Prepend every category title with a non-printing Unicode character so they collate the way you want, duh.
Actually, I have no idea if that would actually work in practice, but it is something I would try if it bothered me enough.
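For what it’s worth, the sorting side of the trick does check out, since an alphabetizer is ultimately just comparing codepoints. Here is a minimal Python sketch – the category names are stand-ins, and whether WordPress would actually preserve and hide the zero-width characters is exactly the part I can’t vouch for:

```python
# Zero-width characters with ascending codepoints; prepending them forces
# a chosen order under plain lexicographic (codepoint) sorting.
PREFIXES = ["\u200b", "\u200c", "\u200d"]  # ZWSP, ZWNJ, ZWJ

desired = ["Emperor", "Suckling Pigs", "Stray Dogs"]  # the order you want
tagged = [prefix + name for prefix, name in zip(PREFIXES, desired)]

# Alphabetizing the tagged names preserves the desired order...
assert sorted(tagged) == tagged
# ...whereas the bare names would come out rearranged.
assert sorted(desired) == ["Emperor", "Stray Dogs", "Suckling Pigs"]
```

The sort order is guaranteed by the codepoints; the gamble is purely on the CMS not stripping or visibly rendering the invisible prefixes.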
I suppose you could number them, too. After all, they are in the original. (Or lettered – my source is not at hand. But lettered would probably work even better.)
Interestingly, I hadn’t noticed that the categories were out of order until I saw “Other” at the end.
Yeah, but there is a meaningful distinction between ordered and unordered lists, in that in an ordered list there is a substantial, for lack of a better word, weight given to the order. One is specifically ONE, thirty-seven is specifically THIRTY-SEVEN, and III. C. 3. (j) is… well, you get the idea. An unordered list can be, and frequently is, presented in a definite order for (e.g.) stylistic or thematic reasons, but where the reader is not supposed to infer from the order.
I am not familiar with the source. I inferred from the fact that they are not numbered that Scott did not intend to give numbered-order weight to the ordering, but that it would be “kind of nice” if they could be. Matching someone else’s order when their website doesn’t collate and yours does falls in the “stylistic or thematic reasons” category.
If the source is in fact numbered/lettered then… nevermind!
In case you haven’t yet figured it out, this comment explains.
The original source is lettered.
Scott, do you even read the N-Category Cafe?
Instead of trying to prevent every possible catastrophe, why don’t groups like the Future of Humanity Institute try to put more emphasis on rebuilding a society after a collapse? Take a group of smart people, put them in a bunker strong enough to withstand anything that could happen with plenty of supplies and give them enough information to rebuild civilization.
You’re thinking small. We want orbital monasteries.
And don’t tell anyone you are doing this to reduce the chance they will become a missile target.
Right. At this point, pretty much everything that plausibly threatens human extinction, and most things that plausibly threaten human civilization, involve intelligence and purpose. And pretty much every exception to this, e.g. a nearby gamma-ray burster, is utterly beyond our ability to deal with.
So, if there’s an open internet site that says, “here’s a group of people building a bunker to preserve human civilization in accordance with values [XYZ]”, then whatever intelligent entity is purposely seeking the extinction of XYZ-aligned civilization will target that bunker. And this entity by definition has civilization-destroying weapons.
It doesn’t necessarily have to be something that destroys humanity. An engineered virus could wipe out 95% of humanity and that small bunker with all of the relevant info could be the key to ensuring that we keep civilization running. In that scenario keeping it a secret could be detrimental since the remaining people wouldn’t actively seek it out.
Have two. Foundation, and Second Foundation.
1: An engineered virus that destroys 95% of humanity, and the engineers are going to be OK with your bunker? Even though you are pretty explicitly trying to undo or at least mitigate their “good” work?
2: Why do we need the remaining people to actively seek out the bunker? Presumably the residents or guardians of the (secret) bunker can seek out the remaining people when appropriate. Arranging for the bunker’s true purpose to be forgotten by everyone involved makes a good plot hook for an SF story, but it’s lousy engineering. Oh, and those SF stories almost always have two plucky kids who figure it out anyhow, so it will all work out in the end 🙂
Well, not much actually does that, so it’s a much smaller threat in the medium term (next few millennia) than something that just knocks us down/slows us down on our path of development. And there are lots of things that can do that. Dysgenics, political repression, nuclear/biological war, some sort of weird disease/bioterrorism, extreme climate/ecological disaster… Plenty of stuff. Increasing our basic resilience to supply shocks and disasters seems like a great idea.
Pretty much no-one does this ever. Even Hitler didn’t. What is often desired is the complete destruction of some civ/population’s political/military influence within one’s own sphere of influence, which involves lots of slaughter, but has never involved searching the whole earth for members of that civ/population to destroy. Because that is never the point. The point is always to get rid of their ability to bother you, and killing 99% of them and driving 100% of them to where you will never see them again is more than enough. Even killing 2% of them and securing a surrender tends to be more than satisfactory.
Some alien civ could very well decide it’s in their interest to destroy our capability to harm them. I imagine that that would incidentally involve killing a whole lot of us, possibly all of us, but killing all of us wouldn’t be the *goal.* A few million humans cowering in scattered settlements while our system/s are secured and mined would make little difference to them, but quite a lot of difference to us – and it’s much more likely to happen if we took steps to make everything more resilient. Even though we want cockroaches dead they’re still around, because it’s more trouble than it’s worth to annihilate them. An instructive example. Maybe an even better one is mosquitoes, which are a genuine health threat, killer of millions upon millions, but still thriving. As the bumps on my leg attest.
Pretty much no-one does this ever
Agreed, but I’m not sure how much of this is capability vs. desire. If we gave Hitler a magic Jew-Be-Gone button where each push killed the Jew closest to Hitler at that instant, he’d certainly have had his engineers rig up a high-cyclic button-pushing apparatus by late 1941, and it’s not obvious that he’d have turned it off when the Jews who were dying were the ones in Brooklyn.
But that just highlights the fact that we are considering here, threats that are literally Worse Than Hitler, and imagining that they are going to leave a publicly-known bunker unmolested.
A bunker is useful for threats that are considerably better than Hitler, and it takes threats much, much worse than Hitler to make our bunker totally fruitless. I don’t think you’d build *a* bunker, anyway, better to build lots and lots of smaller ones, well stocked.
The Russians have been building shelters of various sizes. Seems like a good idea to at least match them. We cannot allow a mine shaft gap!
Certainly bunkers are useful if, e.g., you want to win a war. At any scale from border skirmish up through Global Thermonuclear War, warfighters are going to want bunkers.
And civilians may want bunkers for protection against local threats that they would prefer not kill them. Depending on the circumstances we tend to call these things “storm cellars” or “panic rooms” or whatnot; “bunker” makes your neighbors look at you funny.
But Future of Humanity bunkers (to the extent that FHI does bunkers) are a different kind of thing altogether. There the reasoning is that we should pool our resources to build bunkers that will shelter some small percentage of Us and of Our Stuff through an Apocalypse, that A: humanity, B: civilization, and/or C: our sacred values, shall not perish from this Earth.
That only makes sense if you expect the destruction rate of the apocalypse to be asymptotically close to 100%; you’re not realistically going to embunker even 0.01% of humanity or of civilization’s neat stuff, so if we expect a 1% survival rate for the apocalypse we aren’t really getting anything out of the bunkers.
Now, what sort of threats are these that are going to wipe out 99.99% of civilized humanity aboveground, but aren’t going to go around systematically tearing up all of the bunkers? Or melt/pulverize the Earth’s crust, at the other extreme?
One of the stories in my book My Mother Had Me Tested involves this sort of scenario, with the alien explaining why just knocking humanity back to the stone age isn’t an acceptable solution either morally or practically.
Mr. President, we must not allow a mineshaft^H^H^H bunker gap!
Making bunkers is extremely expensive, there are already bunkers owned and operated by militaries, and they don’t help with the most worrisome risk (UFAI) anyway.
The important thing is not just having people alive, it’s also having a group of people with the right expertise in dealing with a post apocalyptic situation. Also, I don’t think military bunkers have people inside them 24/7 but I’m not sure of that.
If you stuck a bunch of survivalists and scientists in a secret bunker 24/7, what’s to guarantee they’ll be in any shape to rebuild civilization after a couple years of that?
They could be rotated out. Have each person do a year in the bunker and have them replaced by someone else afterwards.
Eliezer Yudkowsky’s “Plan to Singularity”, written in 1999 and updated in 2000, contains a section on “survival stations” intended to be launched into space in the event of a nanotechnological disaster, preferably into the asteroid belt.
Right, now that EGS “hot dry rock” extraction is well-understood, you could totally run a deep-subterranean society off of energy extracted from the temperature difference between, say, 1km depth and 3km depth. Boil water down below, condense it higher up, put your society in a cavern between, and use active heat pumping to keep your cavern at the appropriate temperature. Grow hydroponic crops under LEDs. You need all the rigmarole of life support in a submarine, with the same threat of immediate annihilation if something goes wrong with the mechanicals (a supercritical steam explosion, say, or a leak that lets in poisonous gases), but you’re not vulnerable to nuclear war, nuclear winter, global warming, witch burning, non-direct asteroid impact, temporary loss of ozone to a nearby GRB, earthquakes, or tsunamis, and even gray goo might take a while to make its way down to you. A false vacuum collapse will still do you in immediately, though, and a paperclip-optimizer won’t take that long either.
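As a rough sanity check on that energy budget, here is the ideal Carnot limit for a heat engine running between those two depths. The 15 °C surface temperature and ~27.5 °C/km geothermal gradient are assumed textbook-typical values, not data for any particular site:

```python
# Rough Carnot-limit sketch for an EGS heat engine between two depths.
# Assumed values: 15 C mean surface temperature, 27.5 C/km gradient.
def rock_temp_k(depth_km, surface_c=15.0, gradient_c_per_km=27.5):
    """Approximate rock temperature (kelvin) at a given depth."""
    return 273.15 + surface_c + gradient_c_per_km * depth_km

t_cold = rock_temp_k(1.0)   # condenser level, ~1 km down
t_hot = rock_temp_k(3.0)    # boiler level, ~3 km down
carnot = 1.0 - t_cold / t_hot  # ideal upper bound on efficiency

print(f"T_hot = {t_hot:.1f} K, T_cold = {t_cold:.1f} K")
print(f"Carnot limit = {carnot:.1%}")  # ~15% under these assumptions
```

A real cycle would capture a good deal less than the Carnot bound, but even a few percent of a large rock mass’s heat flow is plenty for a small cavern society.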
The most likely threat, really, is that without a sufficiently large population, you’ll end up like the Tasmanians in a few generations.
Is that the Australian specific joke about Tasmanians, or is there something about Tasmania that I am unfamiliar with?
It’s the standard Australian “Tasmanians are inbred” joke.
There were about 3000 to 10000 Tasmanians, a population isolated for ten millennia. (There may have been a larger population prior to extended contact with foreigners, but it’s hard to tell.) During that period of time they lost the ability to fish, make bone tools, sew clothing, and possibly even start fires. A few thousand of their descendants, but none of their languages, survive today.
Strikes me as psychologically naive.
There’s a reasonable likelihood that smart people, once assembled with everything necessary to rebuild civilization after an extinction-level catastrophe, will begin to see rebuilding society in their image as an exciting/meaningful/noble/net beneficial destiny. And a non-trivial likelihood that some subset would attempt to bring about that destiny via inducing the very extinction-level catastrophe they were supposed to mitigate.
Compare the survivalist who, after constructing and stocking a fallout bunker, begins to scan the news eagerly for any tidbit that might portend nuclear war. Or the (American) homeowner who, after purchasing a gun to defend his household, becomes subject to persistent fantasies of break-ins.
“Compare the survivalist who, after constructing and stocking a fallout bunker, begins to scan the news eagerly for any tidbit that might portend nuclear war. Or the (American) homeowner who, after purchasing a gun to defend his household, becomes subject to persistent fantasies of break-ins.”
I know quite a few survivalists and gun owners. While I can’t speak universally, I can say with confidence that in all the cases i’m familiar with, you’re quite off the mark.
Which brings us to the problem with your other proposition. The only people I can think of ACTIVELY seeking to destroy the world are, basically, trying to immanentize the eschaton… and frankly, Iran is not a civilization-level threat (currently, or even with several nukes). Nor are, as demonstrated, conservative Christians.
And you run into the basic problem here- the resources required to Destroy All Civilization require lots and lots of smart, creative, hardworking people working very hard for a long time.
This is difficult for society-ending madmen to acquire. Hitler was never planning anything like a society-ending event – quite the opposite. The entire raison d’etre for his war was lebensraum… to BUILD his ideal civilization. If he’d succeeded… frankly, I see no reason we’d be any LESS advanced, scientifically. To quote Cyril Figgis – pretty sure the Nazis had scientists.
The genocide of the Jews and other undesirables is a complex question- Hitler does seem to have made some attempts to kick them out. He didn’t work particularly hard to stop escapes, pre-war. I’ve heard the bit about the Rest of the West refusing to take Jewish refugees when he offered, but I haven’t really evaluated that claim.
If it was as easy as pushing a button, I have no doubt he’d have done it. But if he’d succeeded in beating the UK and USSR, he probably would’ve stopped. He doesn’t seem to have really cared about the Jews in Brooklyn, because he never seems to have cared much about Brooklyn, period.
Seriously- he didn’t want THE ENTIRE WORLD. He wanted a lot of land, food, and other resources to build an empire. Attacking France and the UK was standard German military policy- they’re the dangerous ones- take them out while you’re strong.
We can, in the colloquial sense, declare that Civilization is a Thing That Nazis Do Not Have and that ought to be preserved against Nazi assaults. My only beef with this line of argument is that it conflicts with other useful definitions of “civilization” and so I’d like a different term for the Set Of Virtues That Nazis Do Not Have. So there’s room for bunkers to preserve stuff we value in the event of a Nazi Apocalypse. If they’d do any good, which they wouldn’t because as you note the emergence of an apocalypse-level threat requires lots of dedicated effort by smart, creative, hardworking people – who will be motivated to tear up the bunkers along with the rest of non-Nazi civilization.
Same deal if we are concerned that e.g. Nazis might qualify as tolerably civilized and Communists might qualify as tolerably civilized but that their civilizations are mutually incompatible in a way that leads to a global Nazi-Commie apocalypse with each side trying to exterminate everything Not More Like Us Than Like Them just to be safe. I’d be inclined to hide from that in a bunker, but it either wouldn’t be necessary or it wouldn’t be enough.
I have a feeling that if we came up with a term for the set of civilizational virtues that Nazis don’t have, it would be hijacked almost immediately by the millions of people of all political stripes that don’t really care about how Nazism worked politically or psychologically but very much care about having a club to beat their enemies with.
This may have already happened.
I always end up pointing out that for a radical left wing philosophy, it was pretty popular with Big Business and the aristos and for a radical right wing philosophy it spent an awful lot of time on free healthcare and the welfare of the Volk.
And then everyone ignores me and cherrypicks aspects of the regime to hurl at each other.
In other words, Nazi Germany looks like a pretty normal European 20th century society with this GIGANTIC goiter labeled “Genocide” on its face. And you know how hard it is not to look at a goiter.
Well, that and the bit with invading and conquering the neighbors less than a generation after the rest of us decided we weren’t going to DO that any more. You were there, Adolf? Remember, with the mustard gas? What part of “War to END ALL WARS” did you not understand?
If he’d stopped with Czechoslovakia, or if the Poles hadn’t resisted, yes, he’d have gone down as an ordinary mid-20th-century European autocrat with a tendency towards pomposity and anti-Semitism.
CJB, be careful what you mean by “Big Business” and “popular.” In particular, be careful about the revolution vs the final regime.
I might say that the USSR had a lot of Big Business run by apparatchiks. The Revolution wasn’t popular with incumbent industrialists, but that’s redistribution, not a policy against Big Business. And there weren’t a lot of pre-revolutionary industrialists, so it wasn’t much of a cost to kill them all. If a communist revolution had happened in an industrial society, maybe the relationship with Big Business would have looked more like Nazi Germany. (Or maybe not. I don’t know. I just think your example assumes that it would not in a completely unjustified way. Which is not to say that I disagree with your general point.)
(But communists do generally pretend that they don’t have Big Business. That’s got to count for something.)
We’ve already extracted the easily available resources for bootstrapping an industrial civilization. Rebuilding after a collapse may be much more difficult than you’d think – you basically have to drop at least working Thorium fission reactors into it, and that may not be enough if they end up constrained by rare-earth metals.
Those resources, except for the fossil energy and the helium, are for the most part carefully stored in landfills. In a lot of cases, like aluminum, getting them out of the landfills will be a great deal easier than it was to get them out of rocks.
(Also, rare earths aren’t rare.)
You might think that the fossil energy is irreplaceable, but actually, aside from EGS geothermal, there’s plenty of solar energy around. If you can aluminize some BoPET membranes and stretch them over vacuum-sealed frames, you can concentrate that energy on a receiver for almost no material cost, even without semiconductor fabrication. The first concentrated solar power system was operational in Egypt over a century ago.
The tricky part is not getting the materials out of the ground. Although productive capital in the form of machinery may be a bit tricky, I think the really tricky part is cooperating on a large enough scale that you don’t have to be Tony Stark or Jacques Mattheij to get a civilization rolling again.
Assuming a significant rise in sea level from global warming, how much of that metal would be under water?
It would still be retrievable, but with considerably more difficulty.
How significant? The rough rule of thumb is that the coastline moves in a hundred feet for every foot of sea level rise. So a hundred feet of sea level rise, which is a lot, shifts coastlines in by about two miles, which leaves most of the products of past human action still above water.
You’re right; a large rise in sea levels would put a significant number of landfills underwater. I don’t think it would be the majority, because governments tend to prefer not to waste beachfront property on landfills, but it would be significant.
I don’t know of a global database of landfill locations. Maybe there is one?
It’s somewhat out of fashion now, but dumping landfill into the ocean seems to have formerly been more common. I can think off the top of my head of two, maybe three beachfront landfill sites in my general area, dating from the early 20th century — I’m not sure if the third was a proper landfill or just construction debris.
Two of them are parks now, while the third is not a park but has some kind of murky public-land status.
That’s exactly why we need a back up plan. We can’t just rely on some random group of people trying to start from scratch. We need a group of people who could actually kick off an industrial revolution in their lifetime despite all of the challenges.
I think the private sector has this covered. Google “live in a bunker” or something. There are actually a bunch of groups selling spots in old refurbished missile siloes to millionaires, and a lot of people are biting. “Unusually risk-aware millionaires” actually seems like a pretty good genetic stock to build the New Society out of.
Here’s Nick Beckstead’s report on refuges for GiveWell:
https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxuYmVja3N0ZWFkfGd4OjE5ZjFjOTgwZWI5MTE3MjM
Tl;dr, there’s a substantial base of refuges, and they’re probably not the place for marginal effort.
And that’s the problem with this plan: you have risk-aware millionaires diverting resources into preparing for the New Society instead of embracing the one we have. Any success they have in doing this is at the expense of civilization.
Take a group of smart people, put them in a bunker strong enough to withstand anything that could happen with plenty of supplies and give them enough information to rebuild civilization.
Don’t worry, the Catholic Church has it covered — haven’t you read Canticle for Leibowitz? 🙂
The Church also has experience, which most others lack.
We all have the Catholic Church’s experience at this point. The guys who were Pope and Cardinals during the collapse of the Western Roman Empire and the collapse of Byzantium are already dead.
Anyone know if the Roseto Effect has been found in other towns? https://en.wikipedia.org/wiki/Roseto_effect It seems like close-knit towns should really stand out in the data. From what I can tell, even researchers who suck at controlling for things should find the effect since, if the summaries I’ve read are correct, the pro-heart effects of a tight-knit town will swamp all the poor diets and stressful jobs you can throw at it. So have other Roseto-like towns been found?
Is it weird that I was rooting for Dr. Trauer in the last post? He reminds me of me.
Do you also have a last name based on a German root whose meaning people debate?
No, my only German root hangs between my legs, and even that cannot compare to Paul “The Butt Batterer of Berlin” Nungesser’s coke can
Are race and gender still banned?
If not, I’d say it’s a lot less impressive than Emma Sulkowicz’s sleeping material.
Not at all; I feel that his conclusions, given the premises of naturalistic materialism, are basically correct.
Then again, I am Christian.
I feel his conclusions are still, not to say especially, correct given the premises of Christianity.
>When you’re a hijacked murder-monkey hurtling toward your inevitable death, sanity is a completely ridiculous thing to have. And when the universe is fifteen billion light-years across and almost entirely freezing void, the idea that people should have ‘coping skills’ boggles the imagination. An emotionally healthy person is a person who isn’t paying attention, and our job is to cure them.
In Christianity, death is not final; there is a resurrection and Death is the last foe to be defeated. In Christianity, people are made in God’s image, although they are yet sinners, as opposed to “hijacked murder monkeys.” In Christianity the universe is not primarily an indifferent void, but an ordered system created by a logical and loving God.
I don’t see how Trauer’s conclusions still hold under these circumstances.
Or so they say… I think rather different conclusions about God’s emotional dispositions and moral character are warranted.
Not a huge fan of the new links sidebar. It’s taking away from the “gateway drug” quality of SSC by making it ostensibly weird.
Ostentatiously?
This actually seems like a very relevant concern.
Yeah, I don’t get the categories at all. They seem like a reference to something I’m not nerdy enough to comprehend.
For reference, Scott is using these categories. I’ve come across the taxonomy before but didn’t recognize it either, and was expecting more of a deeper meaning behind the link classification. On the other hand, now that I know the reference I do like it.
You should get a mobile version, if only for Google ranking. You still do not pass this: http://www.google.com/webmasters/tools/mobile-friendly/
Does anyone actually prefer mobile sites? I can read SSC just fine on my shitty old iPod touch with a broken screen and out of date browser. Yet sites that are optimized for mobile suck. The text is too big, they tend to break zooming, site features are removed, javascript causes it to crash, annoying ads and headers get in the way, etc.
You are presuming that [increased] Google-ranking is a desirable thing. If the purpose of this blog were simply to deliver Scott’s writings to the largest possible audience, then that would probably be the case.
If the purpose is to deliver Scott’s writings to an appreciative audience, maybe not. And if our responses – because look, here we are, responding – are part of the purpose, then there are definite diseconomies of scale involved in an expanding commentariat. Perhaps selfishly, I would not wish to see this place become just another Reddit or Slate or usenet but with a smarter and more enlightened content generator at the top.
It’s up to Scott, of course. But if there is to be any selectivity in the pursuit of growth for SSC, I would suggest that “isn’t reading on a mobile device” correlates positively with “is willing to pay attention for more than five minutes”, which in turn correlates with the kind of audience/commentariat that works best with SSC.
Or possibly I’m the only person here reading this on a desktop machine and everyone else is horribly frustrated with the interface, in which case never mind.
I dislike them in general, too, but Scott seemed worried about his traffic. Also note, I ran the test on Gwern’s website and it passes fine. So a not-annoying mobile interface is possible. As for demographics, I wish Scott great success as a writer, even if there are downsides.
Edit: I seem to have merged Houshalter’s comment with yours in my mind before replying.
Only because last month I finally sat down with Chromium’s web tools and, by trial and error (and some help from Geoff Greer and others), tweaked the CSS until it finally passed and rendered nicely; if you had checked it before, it would have been nigh unreadable. I was most irked when, a week later, I checked my analytics and saw no visible increase in conversion rate for mobile browsers. What the heck, people.
I’m for having a mobile site, if it’s easy, just for the search hits.
I’d like people who are not going to wander here of their own accord to find this. Scott’s writing is sufficiently good to draw in people who might not immediately accept rationalist processes for evaluating evidence. Let’s get them all here.
I can’t help thinking the pushback is like fans of indie or weird musicians who don’t want them to go big because all the people following them after the breakout aren’t nearly as cool.
When a site I love – volokh.com – went to the Washington Post, the comments section went straight to hell. So there are downsides. But I want as much Scott as possible everywhere, all the time.
The Volokh conspiracy is the example I had most prominently in mind, yes.
And Scott as the evangelist who will convert the world to rationality if only he can reach them all, I’m kind of skeptical. At a minimum, I think that would require him giving up his day job. In general, I fear for the day Scott has to choose between this site and his day job, and if that is going to happen I would prefer it happen by deliberate planning on his part rather than as an accidental consequence of growth-uber-alles.
The comments section at volokh.com was ruined long before they moved to the Washington Post. It went to hell when they started getting linked by non-legal right wing bloggers and you had people coming in who didn’t know anything about the topics being discussed but very much wanted to yelp into the void about whatever talk radio & fox news were pushing that week.
The conspiracy was interesting because it was libertarian *law professors*, which was and is a small group. And in the early days they attracted a commentariat that was interested in the sort of things law professors are interested in, though with a broader mix of ideologies than most of the blawgosphere. When the comment section became a redstate.com clone, not only did the comments themselves become worthless, but the blogging itself changed. Witness the introduction of Stewart Baker, who doesn’t have a libertarian bone in his body, and Eugene Kontorovich, who just regurgitates press releases from AIPAC.
Well, that’s why every successful site eventually needs a Secret Comments Section, where the long-timers post free of newbie noise. Sorry you didn’t get your invite to Volokh’s.
(I’m on the list for SSC’s, right, Scott?)
I definitely would not want to read SSC on a mobile device, but I’ve always sort of hated portable things in general (Gameboys, etc.).
Piggybacking on this, is there a way to make comments for this blog work on the WordPress mobile app?
As someone who often reads on a phone, the one feature I really want is for links to open-in-new-tab. It is annoying to be halfway down a long post or comment thread, click a link, and have lost my place when I come back.
I wouldn’t get that excited about mobile search ranking, if the last time Scott did a post on search terms that have led people here is any indication. I feel like most people find new blogs to follow through recommendations (links from other blogs, social media, word of mouth) rather than searching.
In all the mobile browsers I have used, you can open a link in a new tab just by pressing and holding on it, and selecting an option from the menu that pops up.
I would prefer that the site keep links the way they are, and make mobile users press-and-hold if they prefer links to open in new tabs. It is much harder to make links open in the same tab if they open in new tabs by default than the other way around.
I’d just like to add that I think that while separate mobile sites are usually a horrible thing, a good “responsive” design with appropriate css media queries etc (like Gwern’s which the original poster mentioned in a nephew to this comment) is useful. It has, for example, the additional advantage (apart from being well-suited for all of desktops, tablets and phones) that when I have two windows side by side on my small laptop screen (for example a text editor next to the browser, to write a comment or make notes) I can easily read the page without horizontal scrolling.
Edit:
@Kiya
I personally find links opening-in-new-tab to be annoying and, even worse, not easy to disable on a desktop.
Has anyone here heard of SARMs? As an alternative to anabolic steroids it seems a bit too good to be true.
They’re not completely selective. However, some people on /r/nootropics use them and I haven’t heard any horror stories. If you’re up for guinea-pigging, Ostarine is pretty cheap, at least at the vendor I use for Adrafinil.
The ‘word on the street’ is that they’re just not that effective when compared to even conservative dosages of steroids. Given that it’s not particularly hard to run a decent cycle of testosterone with minimal risk (note that this doesn’t apply to high dosages or all kinds of steroids), I’m not sure why it would be better to take relatively unproven drugs for less benefit. I don’t think SARMs are particularly dangerous, but the side effects of testosterone are all well-understood and preventable, so it seems like a win-win.
Of course, this doesn’t apply to people who are already on potentially dangerous doses of steroids and are looking for ways to get an additional boost without killing themselves, but I don’t think there are too many of those people posting here.
Fairly well established for TRT in the hypogonadal and/or elderly.
Haven’t seen any evidence that it gets you to superhuman strength levels the way anabolic steroids do.
From “More Links For 2014”. I do not see how the response fails to make sense. The situation is analogous to the prisoner’s dilemma; no matter what other people’s cars do, I am better off if my car tries to save my own life as opposed to trying to save the most possible lives. Now, yes, if I could only choose a single rule that all cars had to follow, then obviously I would pick the utilitarian car rule, since I would not know ahead of time which car I would be on and that is the rule that would give me the highest probability of survival. But in the real world, I expect to only have control over my own car, either by programming it or (more likely) by choosing which type of car programming to buy, and therefore I always buy the car that attempts to save my life at the expense of others. Though I am, of course, assuming that my city is not populated by superrational decision theorists, in which case I might act differently.
Note that even if I am the car czar, and I get a kickback for each life that I save so that I have an incentive to make most cars utilitarian cars, I am still better off programming my personal car to prioritize my life and simply not allowing anyone else to do the same. The only way it makes sense to pick the utilitarian car for yourself is if every car in existence must obey the same rule, and if your choice determines which of the two rules the other cars follow. Dr. Alexander casually presumes this, but I find it an arbitrary and unrealistic assumption.
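The prisoner’s-dilemma structure being claimed here can be made concrete with a toy model. All of the probabilities below are made-up illustrative assumptions, not real crash statistics: each driver picks “selfish” or “utilitarian” firmware, a selfish car never sacrifices its occupant, a utilitarian one minimizes total deaths, and in any given fatal dilemma you are equally likely to be in the deciding car or to be the other party.

```python
# Toy robocar prisoner's dilemma. All numbers are illustrative assumptions.
# Per dilemma: probability the deciding car's occupant dies, and probability
# the other party dies, given the deciding car's firmware.
DECIDER_DEATH = {"selfish": 0.05, "utilitarian": 0.30}
OTHER_DEATH   = {"selfish": 0.50, "utilitarian": 0.10}

def my_death_risk(my_firmware, everyone_else):
    """Your expected death probability per dilemma, assuming you are the
    deciding car's occupant half the time and the bystander half the time."""
    as_decider   = DECIDER_DEATH[my_firmware]
    as_bystander = OTHER_DEATH[everyone_else]
    return 0.5 * as_decider + 0.5 * as_bystander

# Selfish firmware is the dominant individual choice, whatever others run...
assert my_death_risk("selfish", "selfish") < my_death_risk("utilitarian", "selfish")
assert my_death_risk("selfish", "utilitarian") < my_death_risk("utilitarian", "utilitarian")
# ...yet everyone running utilitarian firmware beats everyone running selfish.
assert my_death_risk("utilitarian", "utilitarian") < my_death_risk("selfish", "selfish")
```

Any numbers with lower total deaths under the utilitarian rule but lower occupant deaths under the selfish rule reproduce the same structure, which is what makes it a prisoner’s dilemma rather than an artifact of these particular figures.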
People *are* advising the Car Czar here. There’s only going to be one codebase.
Enforced by whom?
If someone is actually going to try this, let me know so I can make a killing selling DIY car hacking kits.
If you use a DIY hacking kit on your car and get into an accident, how do you think “I hacked my car so it could be more deadly for others and less deadly for me” is going to look in court? That’s pretty close to intentional homicide…
So I don’t think you’re going to find many buyers for your kit.
Since these sort of life and death accidents are the only way your modified programming will be detected, I would prefer to be alive to stand trial as opposed to being dead and buried.
Unless there’s legislation stating otherwise, I’m imagining a situation analogous to that of smartphones today, where the bootloader is locked down but can be broken if you have the necessary know-how. I’m sure that there would be many hobbyists who would love to have the option of tinkering with the OS just as they tinker with the engine and suspension today.
Well then, having a car that saves others at the expense of its occupants amounts to intentional suicide. I’m sure any half-competent lawyer could argue you can’t force people to choose one of those two options.
Besides, it depends how many people on the jury use my kit. Which is where my brilliant ad campaign comes in: A wholesome, all-American family with two small children being driven off a cliff by their car (voiced by HAL 9000) to spare the life of a homeless junkie rapist. Your move, Mr. Prosecutor.
I’m sure juries have had the prisoner’s dilemma explained to them. Maybe I’m too optimistic about the results?
Americans have been hacking their cars for performance for over half a century, often at the expense of safety, and I’ve yet to see anyone convicted of vehicular homicide over it. If “I overclocked my car for MORE POWER, and oops, some innocent kids died” doesn’t get you thrown in jail, “I bought the aftermarket kit that promised to save my babies in the back seat and didn’t mention other people’s babies…” certainly won’t.
This is the actual intersection of American car culture and American law. It will happen, at least in the United States. And the reality of American marketing is, it won’t just be a few hackers, it will be a thinly-veiled selling point by all of the major manufacturers. We sell a couple million SUVs every year to housewives who will never ever go off-roading but know full well that SUV = “uses that subcompact ahead and the babies in back as part of the impact-absorbing crumple zone to protect your own babies”.
> If “I overclocked my car for MORE POWER, and oops, some innocent kids died” doesn’t get you thrown in jail
Well, not any more than just “oops, some innocent kids died” does, anyway.
True. You can get yourself thrown in jail for irresponsible operation of a motor vehicle in the United States, but not so much for irresponsible tinkering with the hardware. I assume it is at least possible that such a thing could happen, but it would have to be severely and obviously malicious tinkering, not getting balance-of-risks wrong in your attempt at homebrew safety engineering.
The difference between the DIY kit and just tinkering with the engine is that the DIY kit is specifically meant for avoiding killing people, unlike most parts of your engine.
A bit like if an Uber driver deactivated his passenger airbag, and then a passenger died because of that. I suspect this case would be treated more severely than other kinds of hardware modification.
If that was indeed the only way they’d be detected, I agree.
But I think that if you get into *any* kind of accident, having tampered with the accident-avoiding routine is going to look super bad (especially when there were outside casualties). And of all possible accidents, those where the car could have saved more lives by sacrificing you is only a tiny minority. So you’re saving your life in a rare case, but getting into trouble in a (much more) common case.
It occurs to me that this is a real legal issue in the definition of negligence. Negligence, at least in the view of law and econ types, consists of failing to take all cost justified precautions.
Suppose a precaution increases the chance that you will die by .001 and decreases the chance that someone else will die by .002. Does it then count as cost justified?
That’s an issue for tort law. To bring in criminal law you would have to argue that failure to take that precaution would amount to negligent homicide if the accident actually occurred and the other driver was killed. I think that would be a hard argument to maintain.
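The cost-justification test in the hypothetical above works out as simple expected-value arithmetic. A sketch, using the probabilities from the comment; the dollar value of a statistical life is an illustrative assumption, and valuing both lives equally is itself the contested premise:

```python
# Hypothetical precaution from the comment: raises your own death
# probability by .001, lowers someone else's by .002.
VALUE_OF_LIFE = 9_000_000  # illustrative "value of a statistical life", dollars

cost_to_you     = 0.001 * VALUE_OF_LIFE  # expected cost of taking the precaution
benefit_to_them = 0.002 * VALUE_OF_LIFE  # expected benefit to the other party

net_benefit = benefit_to_them - cost_to_you
# With equal valuations, the precaution passes the cost-benefit test
# (expected benefit exceeds expected cost), even though it is a net
# expected loss to you personally.
assert net_benefit > 0
```

The tort-law wrinkle is exactly that the party who bears the .001 is not the party who collects the .002, so a bare cost-benefit total papers over the distributional question the comment is raising.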
I’m trying to think of real world legal cases on someone saving his own life in a way which imposes risks on others.
Perhaps some sort of evacuation-type situation? Structure fire, lifeboats, etc.
“It’s something that was itself created due to state intervention. In other words, someone saw some sort of perceived problem, and said “something must be done” and created a commons.”
Actually, the concept of common land pre-dates Parliament and statutory law.
I personally think this is one of those cases where intuition trumps numbers in most people’s minds – a car that’s programmed to run off a cliff in certain situations would, in real life, lead to a lot more deaths than one that hits the oncoming car (cars have surprisingly good crash defenses), or tries a less extreme move to avoid the crash.
This.
It also brings the certainty vs. probability error into the mix; even though in a world where everyone drives with one of the two options they both convert into probabilities of death per mile/time/whatever driving, “100% chance of death” doesn’t sound like that to people.
Basically: Bad experimentation form.
Although I’m not sure what experimentation form would fit well, but “50% chance to kill the other driver vs. 25% chance of killing you” might be closest.
If we were actually doing something like voting with the poll results, rather than studying human psychology, we’d want to actually 1. ask “what would you like *everyone’s* car to do” and 2. explain the Rawlsian math of how the option that kills the fewest people makes you as an average individual driver less likely to die. Maybe we should teach the latter bit to children in school.
Seeing how the one car I’m in is less likely to crash than the sum of all the cars around me, I’d really like them to drive off cliffs instead of hitting me, please.
Look, what’s even the point of having a government if they can’t even force people to cooperate in this completely obvious form of the prisoners’ dilemma?
I can’t imagine that a congressman would vote for such a bill. That would be too unpopular.
Why do you believe that forcing people to cooperate in the prisoner’s dilemma is the purpose of having a government?
Isn’t that sort of the obvious thing that governments seem to be needed for? I have the same feeling now that I got in introductory math classes when we had to prove some things that were intuitively so obvious that it was hard to see what a proof could look like.
There are many other ways of resolving prisoners’ dilemmas. For example social norms (you get approval for cooperating and are shamed for defecting), reputation (basically turns the situation into a kind of iterated prisoner’s dilemma, even if you don’t play with the same person multiple times), revenge (if you defect against me, I’ll go out of my way to harm you in the future), and entrepreneurial action (finding some way in which both players can credibly commit themselves to cooperate).
Moreover, while governments do resolve many prisoner’s dilemmas, they also create many new ones. For example, corporation X can cooperate by just focusing on making a good product, or it can defect by diverting some of its resources into lobbying the government to make legislation which favours X. If X and its competitors refrain from lobbying, that’s a pretty good outcome for everyone. If X lobbies and no one else does, that’s even better for X. If everyone spends on lobbying, that’s bad for everyone, since the lobbyists will (at least in part) counter each other and everyone would have been better off with no lobbying. If X doesn’t lobby, but its competitors do, the laws will be very unfavourable for X and that’s the worst outcome for X.
Other examples of prisoner’s dilemmas introduced by governments may include being honest and public-spirited for bureaucrats and politicians, as well as being well-informed and rational for voters.
There are also many things we need government for which don’t fit the PD mold at all. I don’t want to either cooperate with or defect against the guy who wants to steal my car; I’d rather not interact with him at all.
Other examples of prisoner’s dilemmas introduced by governments may include…
…the actual Prisoner’s Dilemma, in its classic formulation? The one where the government’s victory condition is for both players to defect?
One may expect or imagine that the purpose of government is to incentivize citizens to cooperate for their mutual benefit. The observed behavior of governments is to incentivize their subjects to cooperate, compete, or defect, whichever works best, for the benefit of the government. A condition that was obviously noted by the sort of game theorists and ethicists who gave us the prisoner’s dilemma; it is, after all, why they gave us the prisoner’s dilemma.
One question is “What would a benevolent Creator create governments for?” but a different question is “What kinds of governments tend to survive and prosper under what circumstances, and which of their attributes promote their survival and prosperity and thus tended to be intended by the people who happened to design the governments that have happened to survive so far?”
By “governments seem to be needed for” you seem to be talking about the first question, but the second one may have better predictive value.
“Isn’t that sort of the obvious thing that governments seem to be needed for?”
It’s a popular justification for government, at least among economists, who tend to assume individual rationality. If individuals are rational, then the situations where coercion can improve outcomes are the ones where individually rational behavior doesn’t produce group rational results, which is my definition of market failure, of which prisoner’s dilemma is a two person version.
A different justification is paternalism, the idea that individuals are not rational, do not know what is good for them, and so must be compelled by some wiser power to do it.
And an economist’s counterargument to the economist’s case for government is that the conditions that lead to market failure, individuals not bearing the net cost of their actions, are the exception on the private market, the norm on the political market, hence that shifting decisions from the former to the latter makes failure more likely, not less.
Question for John Schilling: It seems to me that there are lots of inevitable “failures of the market” in the private sphere, particularly relating to group prisoner’s dilemmas, tragedies of the commons. E.g. industrial pollution, land mismanagement, overfishing, workplace safety, skipping the line (queue), etc. Isn’t government intervention necessary to realign incentives to disincentivize grazing your cows on the commons (h/t Thomas Hobbes via Steven Pinker)? Some of these things could be alleviated by social norms and reputation (as Jon Gunnarsson has said), but one bad actor could cause significant damage. For example, without the threat of OSHA, an unscrupulous meat packer could cause the deaths of multiple workers and abscond with his/her profits before a reputational hit took down the business. Do you think government intervention somehow would cause more worker injuries than it prevents?
@buckyballs,
The threat of OSHA simply isn’t that significant relative to the threat of tort law, which is itself dwarfed by the estimated risk premium in salaries.
http://caveatbettor.blogspot.com/2008/02/why-government-program-osha-did-not.html
If you’re concerned about systemic risks, the labor premium for risky work is the most powerful incentive by far. If your concern is that one (or a few) actors may attempt to profit from deliberately understating the risk inherent in their work, tort law dwarfs OSHA as a threat.
@Buckyballas: Isn’t government intervention necessary to realign incentives to disincentivize grazing your cows on the commons?
That’s perilously close to the classic, “Something must be done, this is something, therefore this must be done”.
Sometimes the right answer is to do something else. A religion could probably do something about excessive cattle-grazing on the commons. So could a collection of anarcho-capitalist protection agencies, or we could just privatize all the grasslands. And sometimes the right answer is to do nothing and suffer the problem, because all of the solutions are worse.
In the specific case of traffic safety, I will note that Pournelle’s Iron Law strongly applies. It is not in the interest of the NHTSA that the NHTSA’s 600 employees and billion-dollar annual budget be reduced to a few quality-control testers for the Universal Robocar Guidance Algorithm, so that’s not going to happen. The next government-approved traffic safety solution is going to require at least a thousand civil servants and a couple billion taxpayer dollars a year to implement – and it will leave at least ten thousand dead bodies on the highway every year, lest anybody get the crazy idea that highway travel is now safe and we don’t need to give all those billions to the NHTSA every year.
And that’s an improvement over “Mad Max” or “Why Johnny Can’t Speed”, which is why I’m not an anarchist. But if you’re looking for something better, you’ll need to look beyond the government. Say, an insurance-industry coalition like the UL…
“or we could just privatize all the grasslands.”
ding ding ding
The problem with most “tragedy of the commons” arguments is that most often (especially in modern societies), the “commons” isn’t something that just magically exists organically. It’s something that was itself created due to state intervention. In other words, someone saw some sort of perceived problem, and said “something must be done” and created a commons.
Now there’s a problem with overuse of the commons, so we shout “something must be done” and the result is an even greater restriction of property rights than before. Somehow, the solution of “reverse the stupid thing we did last time that created this mess in the first place” never seems to be suggested…
Thanks for your responses. I’ll need to continue considering this. Sometimes I want to be more libertarian, but I can’t stop thinking about The Jungle and the Grapes of Wrath.
Regarding the OSHA thing, perhaps the optimum role for government is to pass some kind of worker’s comp law and then just let the market work? As you say Lupis42, that seems to be a much stronger incentive than OSHA penalties.
“but I can’t stop thinking about The Jungle and the Grapes of Wrath. ”
Believe it or not, McDonald’s and Walmart are both examples of big companies that can enforce safety or quality standards on their suppliers. You can in certain cases have the market solve the issue of quality. The requirements are:
-an identifiable end product attached to a large business
-a clear chain of responsibility
-large amounts of market power from the firm that interacts with customers
If these are true, the end business can pressure its supplier to increase quality which often spreads across the entire field as the other end businesses aren’t willing to be seen dealing in subpar goods.
It isn’t entirely self-organizing – it requires scandals or lawsuits to incentivize the effort – but it is enough to ensure that even with lax regulation, food is safe to eat. Obviously, if you don’t have that kind of organization, things behave significantly less optimally.
The Grapes of Wrath is about farmers in a world where there are too many farmers, which drives down the price they can sell their goods at. You can’t really solve that with regulation – even with subsidies, the number of farmers drops.
@Matt M:
Privatising the commons only works if someone owns all the commons, or the number of people who together own all the commons is small enough and committed enough to reputational concerns that they’ll all cooperate. And now you’ve got a monopolist with all the usual behaviours.
We’ve had this discussion before, but privatising the commons is /hard/. How do you get someone to own all the atmosphere and thus have an incentive to prevent industry emitting CFCs? How do you get someone to own the entirety of a river system and thus have an incentive to not sell so much water upstream that the downstream silts up and/or becomes saline? How do you get someone to own the entirety of an aquifer to prevent people extracting so much water that you get rising salinity in the water table?
In an AnCap world, I very much doubt we’d have any of those things solved until one of the protection agencies reinvents government. Having an Environmental Protection Czar is kinda convenient.
Keep in mind, of course, that your single GrasslandsCorp has an incentive not to fuck it up by allowing too much grazing, but it also has an incentive to allow as much grazing as possible to make more money. Being a business in a free market doesn’t confer immunity to mistakes, it merely suggests that if you make enough mistakes, you die. So GrasslandsCorp will likely go down at some point in the next couple of decades, and will likely ruin the commons in the process before selling it off to NeoGrasslandsCorp, who’ll repeat the process. I don’t think that’s any better than a government, which has less of an incentive to run as close to the limit of the commons as they can manage – at least, to the extent it avoids regulatory capture. Regulatory capture, of course, is pretty much the same thing as GrasslandsCorp with renters under it…
@James Picone
That argument depends on the particular commons you’re talking about being difficult to divide. It certainly applies to the atmosphere, and it applies to the ocean. The difficulty with privatizing parcels of ocean is that water flows around in a way that is difficult to easily contain. Pollution in the ocean will affect everyone’s bit of ocean more or less equally. Same goes for pollution in the air and different people’s bits of atmosphere. Many species of fish will travel huge distances and won’t be contained within one person’s bit of ocean. And how am I supposed to work out who has benefitted from the UV protection provided by my bit of atmosphere, and charge them for it, and exclude people who don’t want to pay from using it?
But my point is that these are real problems that are specific to the kind of property we are talking about, not something that applies to anything that could be termed ‘the commons’. If you cannot easily exclude people from using the property who have not gotten your permission, and if people’s actions can have significant effects on the property in a way that is difficult to detect and prevent, then that makes it more difficult to privatize this kind of property. But you have to show that these effects are actually meaningful – of course to an extent this applies to any kind of property, but the question is whether the extent is small enough that privatization is feasible.
Parcels of ocean? No. Parcels of grassland? Yes. Putting up fences in the ocean is prohibitively difficult; putting up fences around a field is not. Cows obey fences; fish don’t. Pollution in one part of the ocean will flow to the other; pollution in one part of a grassland won’t.
In fact if you were to take your argument seriously, then this would require a total rejection of any kind of private property at all.
“Regarding the OSHA thing, perhaps the optimum role for government is to pass some kind of worker’s comp law and then just let the market work? As you say Lupis42, that seems to be a much stronger incentive than OSHA penalties.”
The worker’s comp law is likely to be unneeded – the existing structure of tort law is already fairly robust in that domain, and has been, in common law countries at least, for a very long time.
@Anonymous: I agree that most of that only applies to non-excludable resources. I think most of my examples were non-excludable? Well except for GrasslandCorp but that was mostly riffing on actual commons.
James,
My intent here was never to have the same argument over again about whether it is sometimes practical to have a commons – but rather to point out that the existence of a commons requires some sort of action and coordination. It is not a naturally occurring state, and many (but possibly not all) times, the “tragedy of the commons” can be prevented by simply eliminating the commons. It MIGHT (although I’m not conceding this here) be the case that in some examples, eliminating the commons will create *other* problems that are greater than the initial tragedy was in the first place, but that is a different issue entirely.
The point people in this thread seem to be missing is that the same logic that gives you market failure on private markets also gives you market failure on political markets, and that the conditions that produce it are much more common in the latter context.
Market failure, situations where individual rationality does not produce group rationality, is a result of situations where individual actors do not bear the net cost of their actions. That can happen in the private market—to first approximation (standard perfect competition model) it doesn’t, but that’s only an approximation. But it’s the normal situation in the public market, for voters, politicians, bureaucrats, judges, government employees.
So the question isn’t whether there are situations where a sufficiently wise regulator could improve on the outcome of a laissez-faire market – clearly there are. It’s whether shifting decision making over some range of questions to the government results, on average, in better or worse decisions.
@Matt M:
How does someone have property rights on the atmosphere, on an aquifer, or on some species of fish without a government to enforce their claim to property?
I’m vaguely familiar with libertarian ideas about where property rights in land come from – one part development of the land, one part ability to defend it, yes?
I don’t think either of those really work for atmospheres, aquifers, or fish stocks (if some entity has the ability to defend access to those things, they are essentially a government).
@David Friedman:
I’m not sure I agree with the full sweep of that.
I agree that bureaucrats are decoupled from the consequences of their decisions to an extent. But I don’t think they have the strong incentive to do the thing that’s bad for everyone that’s present in commons-style problems. They mostly have the incentive to not rock the boat, excepting regulatory capture scenarios (which, again, are pretty similar to having a private commons-owner). The people hiring bureaucrats tend to have very similar don’t-rock-the-boat incentives, all the way up to politicians, who are incentivised to only rock-the-boat if people will vote for them over it. That doesn’t seem to me like regulators have incentives to fuck everything up.
Not only that, but I would argue that the kinds of problems we regulate are often really, really big deals – significant ozone loss by now would have been pretty unpleasant, leaded petrol was costing a lot of IQ for nearly everybody, etc. I’m willing to trade off some inefficiency in a lot of areas for solutions to Really Big Deals.
I guess broadly I just don’t think that money matches up very well with my utility function, or what I’d expect the utility function of most people to be.
“I guess broadly I just don’t think that money matches up very well with my utility function”
Compared to what alternative?
I don’t know how much economic theory you know. The relevant bottom line is that in the simple perfect competition model in which there are no commons, no monopolies, no externalities, … money prices are a perfect measure of utility and market outcomes maximize utility, provided you are willing to do interpersonal utility comparisons by willingness to pay–something I’m willing to pay exactly a dollar for is considered worth just as much utility as something you are willing to pay exactly a dollar for.
That’s a very imperfect approximation to the real world and to utility. But there is no comparable simplified model of the political process that gives you anything like that close a result to what you want. A steel firm that pollutes the atmosphere has, say (I’m making up numbers, but I don’t think that matters much) private costs of $90/ton and external costs of $10/ton, so the market price of steel will understate its social cost by a bit—externality 10%. A random voter who bears the cost of figuring out which candidate for president is more in the interest of the nation and votes accordingly receives about one three hundred millionth of the benefit he produces–that’s an externality of about 99.9999996%.
You mention not rocking the boat. I recently heard a talk by a woman with a professional background in drug development (doctorate and past employment) who struck me as a pretty reasonable person. Her estimate of the mortality cost due to the FDA sharply reducing the number of new drugs brought to market, increasing the time to market, increasing the cost, was upwards of four million lives. Permitting a drug that has a side effect that kills a hundred people is a disaster for the agency. Preventing a drug that would have saved a thousand isn’t.
@James Picone
The point I meant to make was that there really is no clear line between excludable resources and non-excludable resources. Every resource is non-excludable to a certain extent. It will never be possible to charge everyone who gains some benefit from your resource or damages it in some way, and to require you to reimburse everyone who suffers in some way from how you use your resource. To get total efficiency in light of this limitation, every kind of resource would need to be owned by a single all-knowing benevolent monopolist. In practice, many, perhaps most, resources are excludable enough that private ownership works fine – far better than the alternative, considering the chance of getting a single all-knowing benevolent monopolist to run things is effectively zero.
Furthermore, your claim to any resource is always ultimately backed up by violence – in our case, the government. The reason your house is your house and I can’t use it without your permission is that I will be stopped if I try. In that sense, you will never be able to do without a government, or some kind of institution to protect your property. The meaningful sense in which the government does or does not control something is in how ‘hands on’ they are with it: whether they make decisions regarding its use, or whether they allow people to own the thing, allow the person owning it to decide how it is used, and then simply enforce the outcomes of these decisions.
As for fish stocks, I don’t see them as being particularly more difficult to privatize than something like the radio spectrum. I’m not seeing the problem with rivers either, provided some reasonable specification of what constitutes damage – but then you have that problem with any kind of private property, including ownership of your own body. Legal questions like that are a separate problem entirely.
@David Friedman:
Apologies if this post is a tiny bit incoherent – I’m very sleep-deprived right now.
I’ve never studied economics, formally or informally. I’m pretty sure I know more than the average layman, but I’m very aware that I’m somewhat out of my depth here.
Compared to the alternative of each person’s preferences being equally-weighted and without the interesting side-effect where certain kinds of preferences make all your preferences worth more, I guess? I obviously don’t have a magic solution that makes utility comparisons fall out perfectly, but I do think that a world with governments is likely to match parts of my utility function involving poor people and sick people living satisfying lives better than AnCap world, and that that works mostly in spite of monetary incentives.
I don’t see how that can be true – doesn’t that run afoul of Arrow’s Impossibility Theorem, analogizing money to votes? At minimum I’d expect you’re only guaranteed a local maximum because of the thing where when you’re poor your best options aren’t necessarily good options (I know, revealed preferences, but just because something maximally satisfies your utility function right now in the situation you’re in doesn’t mean it’s on the way to maximal satisfaction of your function).
I’m not sure if comparable simplified models of politics have the same kinds of spherical-cow factors. Homo-economicus and perfect market knowledge and so on don’t seem as relevant to governmental situations to me. Part of it might just be that some of the commons problems I’m concerned about are sufficiently Really Big Deals that I’m willing to insure myself against risk via government. Part of it is that I know a number of people who are rather dependent on the public healthcare system and unemployment system over here, and I’m not sure I want to trust their continued survival and non-homelessness to the active kindness of strangers (as opposed to the more passive, paying-taxes-and-not-abolishing-the-system kind).
Did she calculate how many lives the FDA has saved and consumer benefits of having someone vaguely certify a drug is mostly efficacious? I’ve run into problems with drug regulations myself – melatonin is not only prescription-only in Australia, it’s in a special class of prescription-only medication that you’re not even allowed to import unless you have a script. If it was over the counter I would’ve been able to save myself time and money. But on the plus side, I suspect the alternative looks like the homeopathy section of the local pharmacy, except everywhere.
@Anonymous:
Agreed, non-excludability is analog, not digital. But I think there are plenty of things that are important and sufficiently non-excludable that you basically have to be a government to enforce exclusion (radio spectrum is an excellent example, thanks for bringing it up – I don’t think that would be excludable in the absence of an overarching government or something of similar power). I don’t think Privatise The Commons works for enough of the commons that we care about.
@James Picone
The question of whether a government is necessary to maintain property rights is a separate question from the question of whether a government is necessary to make decisions about how a resource is used. I’ve been arguing that most resources are excludable enough that private ownership works better than any alternatives. But private ownership only means that how the resource is used is determined by its private owners, not by the government; it doesn’t mean that the government (or something of similar power, as you said) is not necessary to enforce the outcomes of those decisions, i.e. to prevent other people from stealing or damaging it, and to require them to compensate the owner if they do.
Personally, I’ll take a self-driving car that has simple rules like avoiding head-on collisions with other vehicles. I’ll take my chances on cliffs, and bicyclists can take their chances when they are next to me. I think a car company would have to be nuts to write software that sometimes explicitly decides to kill somebody. That would be asking to be sued. Much better to have general rules that work well most of the time.
The correct answer, BTW, is “Brake to a complete stop, pulling over to the right (US rules) to the extent that this can be done safely, until the danger is passed. If the oncoming car insists on ramming your stationary vehicle, that’s on them”. And to the next obvious question, the meta-answer is “Then you were going too fast to begin with; go no faster than will allow you to stop short of or otherwise safely avoid any hazard within or emerging at the limits of your present vision”. These are the rules we (pretend to) expect human drivers to follow; the first generation of robocar programmers would be exceedingly foolish to do otherwise.
No matter how much fun they are as thought experiments, the real world persists in not giving us actual Trolley Gods in need of sacrifices. It does, in this case, give us an interesting problem in how to deal with people implementing the obvious hack to actual robocar logic, which is to say making their car a little less trigger-happy about braking to a stop in traffic because it can’t handle risk management under imperfect information. But that’s not so much a dilemma as a straightforward risk-benefit calculation of the sort we’ve been dealing with all along.
Not actual Trolley Gods, sure. But self-driving car manufacturers are considering fairly related things like “should the car endanger a cyclist by swerving out of the way of an out-of-control school bus?”
See this article about self-driving cars and the societal change they could bring about:
Ooh, I love the new Beeminder ad! Thank you so much! I also love how 100% of the ad copy is yours. You (unsurprisingly) are better at that than me.
Quibbles: We don’t camel-case it. And the right edge of the infinibee (as we call it) got a little smooshed.
Will fix later.
Occam’s razor: believe the simplest theory that accords with your evidence. In Bayesian terms, we can make sense of this as a constraint on which prior probabilities are reasonable: if two scientific theories have the same experimental consequences, it’s rationally impermissible to assign a higher prior to the more complicated theory. That, at least, is one way to cash out Occam’s razor.
Is this a sound epistemological principle? Is it really irrational to suppose that more complicated theories are more likely to be true than simpler ones?
There is no inherent reason you need to have a prior for simpler theories. But it would be very strange not to have such a prior. The universe appears to be explainable by simple laws, and the more we learn the simpler they seem. If the universe was infinitely complicated, we wouldn’t be able to comprehend it.
Another note is that you are rarely just comparing two theories in total isolation. Generally there are many complicated theories which could fit the data, but only a few simple ones. If you average together all the possible hypotheses, you’ll generally get the same result as the simple theory. E.g. averaging a bunch of curvy polynomials that fit some data produces a straight-ish line that doesn’t curve very much.
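That averaging intuition can be sketched numerically (a toy example – the line, the noise level, and the polynomial degree are all invented for illustration): fit many exact degree-4 polynomials to noisy resamples of points that really lie on a line, and compare the average of those wiggly fits to the line itself.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 5)          # 5 sample locations
line = lambda t: 2.0 * t + 1.0        # the true, simple law
grid = np.linspace(0.0, 4.0, 50)

fits, worst = [], []
for _ in range(2000):
    y = line(x) + rng.normal(0.0, 0.2, size=5)   # noisy observations
    coeffs = np.polyfit(x, y, deg=4)             # a "curvy" polynomial that fits exactly
    pred = np.polyval(coeffs, grid)
    fits.append(pred)
    worst.append(np.max(np.abs(pred - line(grid))))

avg = np.mean(fits, axis=0)
# Individual degree-4 fits wander away from the line, but their
# average hugs it closely.
print(np.max(np.abs(avg - line(grid))), np.mean(worst))
```

The individual curvy hypotheses disagree with each other in opposite directions, so their average ends up straight-ish, just as described above.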
“Another note is that you are rarely just comparing two theories in total isolation. Generally there are many complicated theories which could fit the data, but only a few simple ones. If you average together all the possible hypotheses, you’ll generally get the same result as the simple theory. E.g. averaging a bunch of curvy polynomials that fit some data produces a straight-ish line that doesn’t curve very much.”
This is a really smart and cool way to think about it! Thank you!
While I too find the concept pleasing, I don’t think there’s anything of substance behind it. Forget about polynomials (we’re investigating the ‘less simple’ theories anyway), and widen the view to discontinuous functions passing through your data points. Then, at any other point, the value of the function is totally unrelated to the value of the data points. You’d be trying to average over all numbers without anything distinguishing any.
In math terms, you need to specify a measure to take an average (it’s an integral, after all). Saying “prefer simpler functions” is one way to specify a measure (or at least to hint at how to come up with one). Without that, saying “average over all the theories” is literally meaningless.
It would be a very interesting result, however, to show that the average using the Kolmogorov Complexity measure would turn out to just give the simplest theory that fits. Hmm, seems hard to solve.
“Averaging” is a simplified way to view it, yes. But when you have multiple models/hypotheses, you can combine them into one big model/hypothesis.
You just give each model a weight based on its probability. The weight determines its probability of being drawn. So if you are predicting the weather, you can draw thousands of predictions from this combined model and determine how many of them predict rain. Or answer other queries.
In general, the results of combining a big model together will be similar to a simple model. There is a mathematical justification for it, but I’m not sure how to put it into words. The complex, uncertain bits tend to cancel each other out, and can just as easily be modeled as a simpler function which is just uncertain at those points.
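A minimal sketch of that weighting idea (the three models, their weights, and their rain probabilities are all made-up numbers): drawing a model with its weight and then a prediction from it is, in expectation, the same as taking the weighted average of the models’ predictions.

```python
# Three rival weather models, weighted by how probable we think each model is.
# (Both the weights and the rain probabilities are invented for illustration.)
weights = [0.5, 0.3, 0.2]   # P(model), summing to 1
p_rain = [0.6, 0.7, 0.1]    # P(rain | model)

# The combined model's answer is the mixture: sum of weight * prediction.
p_rain_combined = sum(w * p for w, p in zip(weights, p_rain))
print(p_rain_combined)  # about 0.53
```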
> There is no inherent reason you need to have a prior for simpler theories
There is a very good inherent reason: the prior over all theories is a prior over programs, and thus a prior over integers. The probabilities need to sum to 1, over all integers, thus the probability must decrease for larger integers, which (usually) correspond to more complex theories.
In the limit it must decrease for sufficiently large integers, but of course the fact that a > b does not imply that P(a) < P(b) in general.
Also, you pulled a sneaky in adding the requirement that the theories be computable!
If they aren’t computable what does “complexity” mean?
I suspect that a slightly different statement is true:
for every probability distribution P(x) on the integers, there exists a (possibly large) constant c such that if a > b+c, then P(a) < P(b)
This is false. Consider giving 50% probability mass to the string of length 0, 25% evenly among all strings of length 1 to 2, 12.5% among strings of length 3 to 5, etc.
No matter how large your constant, you will eventually reach a point where the step to the next smaller probability level is even larger.
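The counterexample can be checked mechanically. Here is one hypothetical way to concretize it over integers rather than strings (the block layout mirrors the lengths-0, 1–2, 3–5 pattern above): block k consists of k+1 consecutive integers sharing total mass 2^−(k+1) evenly, so integers within one block sit arbitrarily far apart with equal probability, defeating any constant c.

```python
from fractions import Fraction

def block_of(n):
    """Which block integer n falls into: block k covers k+1 consecutive integers,
    so block 0 is {0}, block 1 is {1, 2}, block 2 is {3, 4, 5}, etc."""
    k = 0
    while n >= (k + 1) * (k + 2) // 2:
        k += 1
    return k

def p(n):
    """Block k's total mass 2^-(k+1), shared evenly among its k+1 members."""
    k = block_of(n)
    return Fraction(1, 2 ** (k + 1)) / (k + 1)

# The masses over the first 10 blocks (integers 0..54) sum to 1 - 2^-10,
# so the whole thing really is a probability distribution.
total = sum(p(n) for n in range(55))
print(total)           # 1023/1024

# Block 5 covers 15..20, so 15 and 18 are 3 apart with *equal* probability:
# no constant c can force P(a) < P(b) whenever a > b + c.
print(p(18) == p(15))  # True
```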
Oscar, good point. Sorry.
>There is a very good inherent reason: the prior over all theories is a prior over programs, and thus a prior over integers. The probabilities need to sum to 1, over all integers, thus the probability must decrease for larger integers, which (usually) correspond to more complex theories.
That’s a really weak justification. It could be a flat prior over all possible programs – any program as likely as any other, all the way to infinitely long ones. It could be a prior that is Gaussian-distributed around some level of complexity. Or it could be a prior that is flat all the way to some (arbitrarily huge) upper limit of complexity, if you don’t like dealing with infinities.
You can have whatever prior you want. That’s why it’s called a prior, it’s a subjective thing you have prior to actually looking at evidence.
Well, the point here is that this is actually impossible: there is no flat prior over all programs. The other examples you give all implement a sort of Occam’s razor, in that sufficiently complex programs really are disfavored.
It doesn’t converge to a nice simple mathematical structure. It implies that the true laws of the universe are infinitely complicated. But I don’t see why that is implausible. Sure it might break existing assumptions or mathematical notation, but that can be dealt with.
The reason Occam’s razor exists is that there is an infinite number of more complicated theories. It comes from metaphysical arguments where you can add on infinite layers of forms, true natures, souls and the like*. If one category is utterly indistinguishable from another, you can simply collapse them into a single category.
* It’s been a while, and I don’t speak Latin or Old English, so if you want high confidence about its origins, you’ll need someone willing to go through 14th-century British writing.
I think you’re right. It’s not about preferring theories that are simpler in terms of computational complexity. It’s about pruning layers of causation that do no additional explanatory work.
I think this is right. But then, why not prefer theories with layers of causation that do no explanatory work?
Strunk and White’s guide to style rule 13.
Yes. The probability that the person you are talking to is in California is necessarily at least as high as the probability that they are a feminist in California, and if there is some nonzero probability that they’re a non-feminist in California, then the California theory is strictly better than the feminist-in-California theory. This doesn’t immediately tell you that they’re more likely to be in California than to be a feminist in, say, Europe. But you don’t have to pile up that many conjunctions before you start getting into ridiculously small probabilities: it’s conceivable that they’re more likely to be a feminist in Europe, but they’re probably more likely to be in California than to be a sex-positive feminist and sex-work activist in Berlin with Portuguese ancestry who is also a well-respected Wikipedia editor and regularly exhibits their photography in art galleries. In fact, I think there’s only one person who fits that description.
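The rule doing the work there is just that a conjunction can never be more probable than either conjunct. A throwaway check on a made-up toy population:

```python
# An invented five-person population: (location, is_feminist).
population = [
    ("California", True), ("California", False), ("California", False),
    ("Europe", True), ("Texas", False),
]

p_california = sum(loc == "California" for loc, _ in population) / len(population)
p_cal_feminist = sum(loc == "California" and fem for loc, fem in population) / len(population)

# P(California and feminist) can never exceed P(California).
print(p_california, p_cal_feminist)  # 0.6 0.2
```

Every extra conjunct can only shrink (or at best preserve) the count of people satisfying all of them, which is why piling up conjunctions drives the probability toward zero.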
Yes, and it can be illustrated quite simply. It has to do with the question: “how significant is the fact that the theory can fit the data?”
Let’s say that the facts you want to explain are n points in the 2d plane, picked randomly from an unknown set of “lawful points”. Our goal is to find a theory that matches the rule behind the lawful points, given a finite number of observations (let’s say, 10).
Alice says: I have 10 points. Let’s use a 10-degree polynomial to find a theory. Alice picks 10 points, finds a 10-degree polynomial that fits them, and says: here’s my theory, and it matches the experimental data. It must be true!
Bob says: Let’s go for the simplest explanation. Let’s try a linear relationship. He picks 10 points, finds that they fit on a straight line, and says: here’s my theory, and it matches the experimental data. It must be true!
Do you see the trick? That’s right: even if the underlying law is not even polynomial, Alice will always be able to fit the data. It’s always possible to find a 10-degree polynomial that fits 10 points. OTOH, if the underlying law is not linear, it is much less likely that 10 random points will fit on a straight line: the fact that Bob could find one is by itself good evidence that his theory is good.
The short story is: the event “a simple theory can fit my experimental data” is far less likely than the event “a complex theory can fit my experimental data” (because for any data there is always a sufficiently complex theory that will fit it, even if said data is just random noise); therefore, when that event arises, it is more significant.
You only need a 9th-order polynomial to fit 10 points, and if you use a 9th-order polynomial to fit them, and they’re exactly on a line, that 9th-order polynomial will have 8 zero coefficients, leaving you with Bob’s model. Aside from these nitpicks, and the larger issue of noise, your point is correct.
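Both halves of this are easy to check numerically (a sketch with invented points; x is kept in [−1, 1] so the degree-9 fit is numerically well-behaved): a degree-9 polynomial interpolates any 10 points exactly, and when the 10 points lie on a line, the eight highest coefficients come out numerically zero, leaving Bob’s model.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 10)

# Alice's case: even pure noise is fit exactly by a degree-9 polynomial.
y_noise = rng.normal(size=10)
c_alice = np.polyfit(x, y_noise, deg=9)
print(np.max(np.abs(np.polyval(c_alice, x) - y_noise)))  # ~0: exact interpolation

# Bob's case: 10 points exactly on the line y = 3x + 2. The same degree-9
# fit collapses: the eight highest-order coefficients are numerically zero.
y_line = 3.0 * x + 2.0
c_bob = np.polyfit(x, y_line, deg=9)
print(np.round(c_bob, 6))  # only the last two entries (3 and 2) survive
```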
But there are an infinite number of tenth-degree polynomials that fit any ten points, which is why “prefer simpler functions” is significant.
What most rationalists actually seem to follow is something like: adding more knowledge or information to an explanation does not necessarily lead to it having more truth or accuracy, which is kinda related to Occam’s razor but not really it.
Occam’s Razor is a bit more specific if I recall correctly. It says you shouldn’t add things to your theory that don’t improve its explanatory power over the data. So for instance, if you have a theory that a set of equations dictate the way electrons orbit atoms, adding “God makes them do it” is unnecessary. It doesn’t give you a better prediction, so it is useless.
In machine intelligence, a related concept is regularization. The basic idea is that if your model has too many free parameters, you will be able to fit any data perfectly and so you may overfit the data. So you make your models pay a complexity penalty that grows as your number of free parameters grows.
It’s a more gray version of Occam’s Razor. Extra complexity allows for overfitting so you need to make sure that the extra complexity pays for itself by explaining enough of your data.
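Here’s a minimal version of such a penalty in plain numpy. Everything in it – the noisy line data, the degree-9 feature basis, the penalty weight of 1 – is an invented example, and the penalty used is the simple ridge-style squared-coefficient one, a stand-in for the parameter-count penalties described above.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 10)
y = 2.0 * x + rng.normal(0.0, 0.3, size=10)   # noisy samples of a true line
X = np.vander(x, 10)                           # degree-9 polynomial features

def ridge_fit(X, y, lam):
    # Minimize ||Xw - y||^2 + lam * ||w||^2 via the normal equations.
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

w_free = ridge_fit(X, y, 0.0)   # no penalty: chases every wiggle of the noise
w_reg = ridge_fit(X, y, 1.0)    # penalized: forced toward small coefficients

# The unpenalized fit wins on training error; the penalty trades some of
# that away for smaller, tamer coefficients (i.e. less overfitting).
print(np.linalg.norm(w_free), np.linalg.norm(w_reg))
```

The extra complexity (ten free coefficients) only “pays for itself” if it reduces error by more than the penalty it incurs, which is exactly the gray-Occam trade-off described above.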
An observation tangential to your question which does not really answer it but is worth keeping in mind is this: If you do believe complexity is a variable to evaluate based on, you should think hard about how to evaluate ‘simplicity’/complexity, because this may be less simple than one should think. Here’s part of what I wrote a while back on my blog on related topics:
“Models always have a lot of assumptions. A perhaps surprising observation is that, from a certain point of view, models which might be categorized as more ‘simple’ (few explicit assumptions) can be said to make as many assumptions as do more ‘complex’ models (many explicit assumptions); it’s just that the underlying assumptions are different. To illustrate this, let’s have a look at two different models, model 1 and model 2. Model 1 is a model which states that ‘Y = aX’. Model 2 is a model which states that ‘Y = aX + bZ’.
Model 1 assumes b is equal to 0 so that Z is not a relevant variable to include, whereas model 2 assumes b is not zero – but both models make assumptions about this variable ‘Z’ (and the parameter ‘b’). Models will often differ along such lines, making different assumptions about variables and how they interact (incidentally, here we’re implicitly assuming in both models that X and Z are independent). A ‘simple’ model does make fewer (explicit) assumptions about the world than does a ‘complex’ model – but that question is different from the question of which restrictions the two models impose on the data. And if we think in binary terms when we ask ourselves the question, ‘Are we making an assumption about this variable or this relationship?’, then the answer will always be ‘yes’ either way. Does the variable Z contribute information relevant to Y? Does it interact with other variables in the model? Both the simple model and the complex model include assumptions about this stuff. At every branching point where the complex model departs from the simple one, you have one assumption in one model (‘the distinction between f and g matters’, ‘alpha is non-zero’) and another assumption in the other (‘the distinction between f and g doesn’t matter’, ‘alpha is zero’). You always make assumptions; it’s just that the assumptions are different. In simple models the assumptions are often not spelled out, which is presumably part of why some of the assumptions made in such models are easy to overlook (it makes sense that they’re not spelled out, incidentally, because there’s an infinite number of ways to make adjustments to a model).
It’s true that branching out does take place in some complex models in ways that do not occur in simple models, and once you’re more than one branching point away from the departure point where the two models first differ then the behaviour of the complex model may start to be determined by additional new assumptions where on the other hand the behaviour of the simple model might still rely on the same assumption that determined the behaviour at the first departure point – so the number of explicit assumptions will be different, but an assumption is made in either case at every junction.
As might be inferred from the comments above usually ‘the simple model’ will be the one with the more restrictive assumptions, in terms of what the data is ‘allowed’ to do. Fewer assumptions usually means stronger assumptions. It’s a much stronger assumption to assume that e.g. males and females are identical than is the alternative that they are not; there are many ways they could be not identical but only one way in which they can be. The restrictiveness of a model does not equal the number of assumptions (explicitly) made. No, on a general note it is rather the case that more assumptions mean that your model becomes less restrictive, because additional assumptions allow for more stuff to vary – this is indeed a big part of why model-builders generally don’t just stick to very simple models; if you do that, you don’t get the details right. […] The problem is that not making assumptions is not really an option; you’ll basically assume something no matter what you do. ‘That variable/distinction/connection is irrelevant’, which is often the default assumption, is also just that – an assumption. If you do modelling you don’t ever get to not make assumptions, they’re always there lurking in the background whether you like it or not.”
…
The quotes above are from a blog post I wrote a while back, and the link above my name has a bit more on these topics. When you engage in modelling, you’ll sometimes overfit the data. Other times you’ll overlook important variables. I don’t think much can be said ‘in general’ about which of the two problems is likely to be the more severe – that seems to me to be highly context-dependent.
Are you familiar with statistical information criteria? Some of the people working on those topics have given a lot of thought to related questions – I highly recommend Burnham & Anderson’s text on model selection if you’re curious to learn more about these things (if you just want (some of) the highlights I posted a couple of blog-posts about the book here and here).
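To make the comparison concrete, here is a toy sketch of what an information-criterion comparison of a ‘simple’ and a ‘complex’ model looks like. The simulated data, the coefficient values, and the particular AIC formula (valid up to an additive constant for OLS) are all illustrative choices of mine, not anything taken from Burnham & Anderson specifically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
z = rng.normal(size=n)                              # independent of x, as in the quote
y = 1.0 + 2.0 * x + 0.8 * z + rng.normal(size=n)    # true b = 0.8 (made up)

def aic(y, X):
    """AIC of an OLS fit: n*log(RSS/n) + 2k, up to an additive constant."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

ones = np.ones(n)
aic_simple = aic(y, np.column_stack([ones, x]))      # model 1: assumes b = 0
aic_full = aic(y, np.column_stack([ones, x, z]))     # model 2: estimates b
print(aic_full < aic_simple)  # here the extra parameter pays for itself
```

If the true b were 0, the penalty term 2k would tend to favour the simple model instead; the criterion trades fit against restrictiveness rather than counting assumptions.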
In other words, simplicity is complex.
In Bayesian terms, we can make sense of this as a constraint on which prior probabilities are reasonable: if two scientific theories have the same experimental consequences, it’s rationally impermissible to assign a higher prior to the more complicated theory.
This is a very complicated issue (no pun intended), but let me try to shed a little light.
By Bayes’ Theorem, the posterior probability of a hypothesis H given evidence E and background knowledge K is a function of the prior probability of H just given K and the Bayes’ factor for the evidence: that is, P(H|E&K) is a function of P(H|K) and P(E|H&K)/P(E|~H&K).
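A minimal numeric instance of that decomposition, in odds form; all the numbers are invented for illustration:

```python
prior_h = 0.2          # P(H|K), assumed
bayes_factor = 5.0     # P(E|H&K) / P(E|~H&K), assumed: E is 5x likelier under H
prior_odds = prior_h / (1 - prior_h)
posterior_odds = prior_odds * bayes_factor
posterior_h = posterior_odds / (1 + posterior_odds)
print(round(posterior_h, 3))  # 0.556: the evidence moves H from 0.2 to ~0.56
```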
Your suggestion is to take Occam’s razor as telling us to assign a higher value to P(H|K) if H is simpler. This is intuitively plausible, and many philosophers of science (e.g., Richard Swinburne) have suggested something similar.
The problem, it seems to me, is in the background knowledge K. Whether or not H is more probable if simple depends entirely on your background knowledge: does it make it likely that the true hypothesis in this domain would be simple?
It’s true, as others have said, that there will (in most cases, at least) be an infinite number of possible alternative hypotheses, and that at some point, by mathematical necessity, they will have to decrease in probability so that the probability of the set of all possible alternative hypotheses sums to 1. But, if we number our H-i’s in order of increasing complexity, that at some point H-(j+1) has a lower prior than H-j doesn’t imply that, say, H-1 has a higher prior than H-2, or that H-2 has a higher prior than H-3. And sometimes our background knowledge ought to lead us to assign the highest priors to, say, H-40 and H-41.
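A toy illustration of that point: the following prior over H-1, H-2, … sums to one and eventually decays, yet peaks around H-40 rather than H-1. The bell shape and the truncation at 200 hypotheses are arbitrary choices of mine:

```python
import math

K = 200  # truncation point, purely for illustration
weights = [math.exp(-((i - 40) ** 2) / 200.0) for i in range(1, K + 1)]
total = sum(weights)
prior = [w / total for w in weights]        # a proper prior: sums to 1

modal = prior.index(max(prior)) + 1         # which H-i gets the highest prior
print(modal)  # 40, not the simplest hypothesis
```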
It may also be true that we have often good inductive evidence that simpler hypotheses are more probably true – in other words, it’s often true that K itself tells us to prefer simpler theories. But this is not always true, and it is domain specific – it is much more true in physics than economics, say.
So I do not think that simpler theories should always be assigned higher priors – it depends on what your background knowledge tells you about the domain in question.
I suppose this is the thing with Bayesian techniques – you can get absurd results by failing to include relevant information. This goes all the way back to Laplace; he presents this neat little scheme for getting the probability of the sun coming up, finds that it is absurdly low, and speculates that if you took all of the relevant information into consideration, it might come out higher.
Note also that you can still expect to see a complex hypothesis be the true one even if the highest-prior hypothesis is the simplest. If you group hypotheses by size on a sort-of log scale, as in hypotheses with complexity 1, 2-3, 4-7, 8-15, 16-31 etc. then maybe a high-complexity group is the modal group even if you have a nice smooth exponentially decaying prior. Also, the number of individual possible hypotheses of complexity 1 may be a lot lower than the number of possible hypotheses of complexity 7, so even if the expected complexity is 7, the individual hypotheses may have much lower priors than those of complexity 1.
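Here is a quick sketch of the grouping point, with an assumed decay rate of r = 1.05 for the prior (any rate close enough to 1 behaves similarly):

```python
r = 1.05  # assumed decay rate: p(c) proportional to r**(-c)
bins = [(2 ** k, 2 ** (k + 1) - 1) for k in range(7)]   # 1, 2-3, 4-7, ..., 64-127
mass = [sum(r ** -c for c in range(lo, hi + 1)) for lo, hi in bins]
modal_bin = bins[max(range(len(bins)), key=lambda k: mass[k])]
print(modal_bin)  # (16, 31): a high-complexity group is modal
```

Even though p(c) itself is highest at c = 1, the wider bins contain enough hypotheses that a high-complexity group carries the most total mass.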
I sometimes think that to “do” “objective” Bayesianism “properly” you need to take all the evidence in one great big scoop and produce a prior probability distribution over hypotheses of everything and see which of those agree with the observation. I like to call it “Grand Induction”. This of course pretty much entails all of the problems of Solomonoff Induction, and probably a few others besides, hence all the scare quotes.
Note also that you can still expect to see a complex hypothesis be the true one even if the highest-prior hypothesis is the simplest.
Yes, I think Michael Huemer makes a similar point in one of his papers. “Simpler theories are more probable” does not imply “A simple theory is probably true,” for any precisification of ‘simple.’ The two are very easy to confuse, though, and I myself slipped between them in my last post.
I sometimes think that to “do” “objective” Bayesianism “properly” you need to take all the evidence in one great big scoop and produce a prior probability distribution over hypotheses of everything and see which of those agree with the observation.
Yes, I completely agree! This is actually one of my main research projects right now. Basically, and in line with what I said above, I think that to figure out the prior of any hypothesis H you need to look at the different competing “higher-order” theories that would predict it to varying degrees, and take a weighted average of the probabilities they assign H — weighted, that is, by P(theory|K) — via the Theorem of Total Probability. Then you need to do this for those theories as well, and so on. You can only stop when either K entails that some particular higher-order theory is right — and I think it’s implausible that this often happens — or when you reach explanatorily ultimate theories such that you can’t go back any further. At that point you assign priors to those theories purely on the basis of a priori facts, or some such — though how exactly you do that, I don’t know. (Simplicity? Principle of Indifference?)
If this framework is right, it has rather startling philosophical implications. First, it gives us a novel transcendental argument for a first cause/ultimate explanation. (A regress here, it seems to me, would be vicious, and keep us from being able to assign probabilities at all.) Second, it suggests that the true prior probabilities of ordinary hypotheses about the world — those we come up with in science and everyday life — are derivative of and epistemically parasitic on the probabilities of different ultimate explanations and how likely they make those hypotheses. If I may be a bit grandiose about it, theology is shown to indeed be the queen of the sciences.
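The single weighted-averaging step in the scheme above might be sketched like this; the theories, their weights, and the likelihoods they assign H are all invented placeholders:

```python
p_theory = {"T1": 0.5, "T2": 0.3, "T3": 0.2}      # P(theory|K), assumed to partition
p_h_given = {"T1": 0.9, "T2": 0.1, "T3": 0.4}     # P(H|theory&K), assumed
# Theorem of Total Probability: average the likelihoods, weighted by P(theory|K)
p_h = sum(p_theory[t] * p_h_given[t] for t in p_theory)
print(round(p_h, 2))  # 0.56
```

The regress worry is visible here: the weights p_theory are themselves priors in need of the same treatment, one level up.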
The thing about coming up with priors: I was noodling with these ideas about hypotheses expressed as strings in ASCII – and allowing comments starting with #. One consequence: if you arbitrarily fix some length, e.g. 140 characters, then it’s easy to apply Insufficient Reason, and in practice you naturally get length effects. If one hypothesis has only 70 characters doing the actual work, you can round it off with a # and then produce huge numbers of functionally equivalent variants, such that those variants grossly outnumber a hypothesis where all 140 characters are actually doing work – so your prior greatly favours something functionally equivalent to the 70-character hypothesis. Anyway, I think this is an argument for having your prior probability diminish exponentially with the number of symbols. Possibly you might even find a way to make it tell you what the parameters of that decay are.
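A rough sketch of the padding argument, using a 95-character printable-ASCII alphabet and the 70/140 split from the comment (the exact counts are illustrative):

```python
import math

alphabet = 95          # printable ASCII characters (an assumption about the formalism)
total_len, working = 140, 70
free = total_len - working - 1               # one '#' then 69 functionally inert characters
variants = alphabet ** free                  # size of the functional-equivalence class
log2_edge = free * math.log2(alphabet)       # prior advantage, in bits, under Insufficient Reason
print(free, round(log2_edge))
```

Under a uniform prior over length-140 strings, the 70-character hypothesis inherits the combined mass of all its padded variants, which is the length effect that motivates an exponentially decaying prior in the number of working symbols.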
Problem 2: what formalism to express your hypotheses in. The danger here is in getting bitten by Nelson Goodmanesque “grue” problems. ISTR Scott Aaronson has something to say about this, and I vaguely recall he ended up appealing to something other than Bayesianism (PAC learning?) to get around this – and I’m still left with worries about formalisms that differ hugely from each other – i.e. where a “translation key” would be huge or even infinite (does that even make sense?). I’m going into seriously speculative mode here, but I think some formalisms come more naturally to us than others. Yes, there are huge layers of cruft from learning, culture and evolution but I think that even when you scrape all that away some formalisms are just plain easier to physically instantiate than others. “If materialism is true, you can learn something about matter just by thinking”??? Except you don’t even need to be a materialist? “if substance monism is true, you can learn something about the one substance just by thinking”??? So we’ve started with an idea about how to do empiricism properly, and ended up with a weird sort-of form of rationalism (as in a priori knowledge, rather than any of the other senses of that word).
Anyway, this has got seriously speculative but I thought that if anyone would appreciate this, the people on SSC would.
[1] Yes yes, some languages don’t distinguish green and blue, and some people say these languages have “grue” terms.
I definitely have Problem 2 in mind with respect to this project. I very much think that you can’t have empirical knowledge without a priori knowledge; I take it that this whole project is a priori.
I suspect that the “ideal language” is a kind of language of thought. Take a proposition P (propositions here understood to be the meaning of sentences/content of thoughts). Think of P as being built up out of various concepts combined in a certain way (in the same way that sentences are built up out of words combined in a certain way). Now take every concept in P that is not primitive. Reduce it to primitive concepts, and re-express P using only those.
Now we do something like (though this may be too simplistic) count the number of primitive concepts in P. We can then assign a simplicity score to each proposition = number of primitive concepts in the proposition, and make the prior of the propositions in our partition a function of the simplicity score of each proposition.
This presupposes a whole bunch of stuff, including (a) the existence of a privileged partition, (b) the basic correctness of the traditional project of conceptual analysis (a la Russell), and (c) the concept relativity of probability (which is not quite the same as language relativity, on my view). It’s also somewhat opposed to the general trend among objective Bayesians of holding up machine learning as the paradigm of probabilistic reasoning. To get probabilities we need propositions; to get propositions we need concepts; and to get concepts we need thought. If machines can think, fine; but mere formal manipulation will not suffice. (This is just another way of making the point that probability cannot be purely syntactic, and perhaps of affirming your point that some formalisms are better than others.) I’m comfortable with all these commitments, but many others – including those attached to the “empiricist” label – will not be.
I think Occam’s makes more sense if you think of “complicatedness” not as a number but as a sort of partial ordering. If one theory is strictly more complicated than another, then it should definitely have a lower prior, because it is claiming more. If I claim that there is a teapot orbiting the sun, and he claims that there is a blue teapot orbiting the sun, then my claim is more likely because it is less complicated.
In logical terms, “A and B” is a priori less likely than “A.”
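That inequality holds mechanically: for any probability weighting over the four truth assignments of A and B, P(A and B) never exceeds P(A). A brute-force toy check, with an arbitrary random weighting:

```python
import itertools, random

random.seed(1)
worlds = list(itertools.product([False, True], repeat=2))  # all (A, B) truth assignments
weights = [random.random() for _ in worlds]                # arbitrary positive weights
total = sum(weights)
p = [w / total for w in weights]                           # a valid probability distribution

p_a = sum(pi for pi, (a, b) in zip(p, worlds) if a)
p_ab = sum(pi for pi, (a, b) in zip(p, worlds) if a and b)
print(p_ab <= p_a)  # True, whatever the weighting
```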
>If I claim that there is a teapot orbiting the sun, and he claims that there is a blue teapot orbiting the sun, then my claim is more likely because it is less complicated.
The trouble is, readers not used to that kind of language may go away convinced that a colorless teapot is more likely than a blue one.
I really like the redesign but you should really move the search bar to somewhere on the top-right of the website since that’s the standard place for it to be
When I do, it looks like one sidebar starts below the other sidebar and angers the part of me that makes sure picture frames are hung up straight.
If you put it above the “Recent Comments’ title I think it will work.
You could also put it in the top-right next to the RSS/Comments feed buttons, although I don’t know enough about how the HTML/CSS is structured to say you definitely could do that. It would be a good place for it, though.
As long as the SSC search bar is both near the top where it’s easy to see and in a place where it looks like it does what it’s supposed to do (search the whole site), you’re fine.
I’m a software and web usability professional, FWIW.
I don’t know how serious Scott’s bio-determinist guide to parenting was, but someone took it very very seriously.
While I agree with many of the measures (getting a pet for instance or eating fish), I’m disturbed by the level of detail, and the overestimation of the benefits of most of the interventions.
That wasn’t nearly as extreme as I was expecting from your description
Apologies to Scott if he’d rather not see this retread, in which case he can just delete it.
Over on the tumblr side, there was a dust up over Yudkowski having said essentially that you should judge people’s theories on X (economists in this case) based on whether or not they accepted the Many Worlds Interpretation. Is this actually a claim he has made, or a gross misinterpretation?
The original articles which advanced that particular thesis were “The Correct Contrarian Cluster” and, to a lesser extent, “Undiscriminating Skepticism”. Feel free to read them and judge for yourself if the Tumblr discussion was accurate or misrepresentative.
Thanks for that. It looks like they were indeed accurate.
(The core idea is, of course, slam-dunkingly wrong, and ironically so given what the tumblrs who actually know quantum physics thought of EY’s quantum sequences.)
I think Eliezer is correct about AI risk, very smart, and I enjoy his writings, but that post always seemed dangerous to me, and at least one of his contrarian views seems obviously wrong, namely his opinions on nutrition. I once read him claim that were he to stop eating totally he would starve to death before he lost weight, as his “lack of metabolic privilege” makes it so his body does not consume stored fat even when he has a calorie deficit. I find it strange that the person who came up with Algernon’s law could believe he has such a mutation.
As for his opinion of physics, I don’t have the expertise or IQ to judge. I find many worlds appealing, but mostly because I enjoy science-fictiony ideas and anthropic arguments. If he advocates that laymen should have a higher prior for Many Worlds than whatever you get when you poll the experts, I’d be very skeptical of that claim.
Does good taste in one field correlate with good taste in others? I would say it probably does, but probably not as much as Eliezer thinks it does. Even conditional on the Everett interpretation being wrong, it may still be a good proxy for IQ. Say economist X was able to learn QM to the point where they are confident enough in their ability to engage with contrarian ideas – this may indicate they are likely more intelligent than the average economist. This could make the “contrarian cluster” strategy seem effective even if it is not.
Linus Pauling was wrong about the vitamin C, and he wasn’t too far out of his field.
The thing I find strange about Eliezer is that though he seems to be a very smart guy and writes a lot about rationality, he seems to be pretty terrible at applying rationality himself. And not just terrible for a guy who’s supposed to be a rationality guru, but pretty bad in comparison to the average person (though I might have an overly high opinion of the average person’s rationality level).
I think we are seeing the “smart people are more capable at engaging in rationalization” effect.
In what sense is he terrible at applying rationality? He’s world famous in his field, hangs out with billionaires, has as many girlfriends as he has time for, he has a devoted following both of his fiction and non fiction, and gets paid to work on what he professes to think is the most important problem in the world.
Compared to all that, fatness seems rather trivial.
That would be a good defence of his application of rationality in his own interest but it says nothing about his application of rationality to his beliefs.
(Unless, of course, his belief is rational egoism…)
@drethelin,
The same criteria of success apply to many televangelists; I don’t think applied rationality is their secret to success.
When people say this about EY, it’s generally because they see him making some grand, sweeping, confident claims about a subject familiar to them, and recognize it as ridiculously wrong or woefully under-informed, and then aren’t willing to Murray Gell-Mann Amnesia their way through the rest.
I have no opinion on Eliezer, but I am reminded of a statistic I read recently stating that books on ethics are *more* likely to be stolen from libraries than your average book. This tends to indicate that ethicists do not behave more ethically than average, but they are better at coming up with ethical justifications to do what they wanted to do in the first place.
Who needs cookbooks? People who don’t already know how to cook but want to learn. Who needs books on ethics? …
@ScottAlexander What a delightful observation!
Haha, or it could be that people who don’t “get” ethics on an intuitive level are the ones most inspired to study it.
Related, I have a subjective impression that psychiatrists and therapists are themselves more likely than average to have struggled with mental health problems. If true, this might actually be a good thing in the sense that a formerly fat person who is now thin makes a better diet guru than someone who was always thin. Is this at all your experience, Scott?
+1
Thus my theory (which I’ve mentioned before): LW people believe in crazy ideas and want you to become more rational because they think it will lead you to believe in them. Unfortunately for them, it works and you actually end up becoming more rational….
Eliezer has a history of being unusually bad at losing weight. While it seems unlikely to me that if he didn’t eat, he’d starve before he lost any weight at all, it’s quite possible that he’s got a rare mutation. Taking fat out of storage is a complex physical process (I’m assuming this because biology is like that), and complex physical processes can go wrong.
Are there documented examples of people who won’t lose much weight if they literally stop eating? It seems like something that could possibly occur, but I haven’t encountered it before.
Taken to an extreme, I doubt it happens very much. Certainly I’ve never heard of fat famine victim corpses. I can believe that there are people (or maybe just infants) out there who have totally broken gluconeogenesis pathways, but I suspect it manifests in a lot worse ways than just trouble dieting. A partial malfunction seems far more likely to cause such limited symptoms.
Haha… he must have a rare mutation. He couldn’t just be one of the millions of people who try many times to lose weight and fail.
Don’t mean to sound flippant. This just sounds like one of those elaborate justifications smart people give, like someone mentioned in the CBT thread: when asked why they don’t quit smoking, people of average IQ just say “I tried, but I couldn’t,” whereas smart people all have some elaborate excuse.
Living in our current society, with our current lifestyle and diet, it’s incredibly hard for most people to lose weight and keep it off. Doesn’t take a rare mutation.
Almost no one goes from being obese to having a normal weight.
http://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.2015.302773
There’s something similar in “The Importance of Self-Doubt”:
That said, as far as I can tell, the world currently occupies a ridiculous state of practically nobody working on problems like “develop a reflective decision theory that lets you talk about self-modification”. I agree that this is ridiculous, but seriously, blame the world, not me. Multi’s principle would be reasonable only if the world occupied a much higher level of competence than it in fact does, a point which you can further appreciate by, e.g., reading the QM sequence, or counting cryonics signups, showing massive failure on simpler issues.
The ‘low rate of cryonics sign up as a marker for irrationality’ thing really bugs me. If you sit down and do some back of the envelope maths, it really isn’t obvious that cryonics has a net positive expected value. It certainly isn’t so obvious that you can use the low rate of cryonics sign-up as an argument for a low rationality water-line generally. Yet nobody seems to actually do this back of the envelope maths, or – better – do a proper calculation of the cost benefit.
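For what it’s worth, here is the shape such a back-of-the-envelope calculation might take. Every number below is a placeholder assumption to be replaced with your own, not a claim about actual costs or probabilities:

```python
# All inputs are placeholder assumptions, not claims.
annual_cost = 500.0        # membership dues plus life-insurance premium, per year (assumed)
years_paying = 40          # (assumed)
p_revival = 0.02           # all-things-considered probability it works (assumed)
value_if_revived = 5e6     # dollar-equivalent value placed on revival (assumed)

expected_value = p_revival * value_if_revived - annual_cost * years_paying
print(expected_value)  # 80000.0 here, but the sign flips easily as the assumptions move
```

The point is that the answer is dominated by p_revival and value_if_revived, both of which are deeply uncertain, so the sign of the result is not obvious either way.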
The cryonics thing does make me roll my eyes and go “And you say I’m irrational for believing in the Resurrection of the body?”
“If you don’t believe that we can take someone dying because their brain is riddled with cancer, chop their head off right at the point of legal death, freeze it and then wait ????? years until the technology exists to (a) unfreeze tissue without it all thawing into mush (b) cure brain cancer (c) take a reading from the engrams encoded in the brain and transfer a copy of the consciousness into a new vat-grown body so Mr Smith lives again, then you’re irrational!”
That’s one hell of a step from point A to point Z in one sentence, it really does remind me of “Step 4: Profit!”
You correctly assess? Okay, given that you-the-openminded-enquirer believe Many Worlds is correct, then obviously you think those who do not believe it are incorrect, so in that sense you “correctly” assess them as inaccurate (I’m going for the weakest version of “not to be trusted” because the physicist might be wrong about Many Worlds but that does not mean they are wrong about the other parts of their field or that their knowledge is incorrect).
But if I’m starting from a position of “I know damn-all about the whole thing”, I’d take it very cautiously that my amateur researches qualified me to say with perfect confidence that people in the field were talking out of their hat where they disagreed with me.
Sure, maybe the lone genius has really discovered a working and reproducible-by-others ~~perpetual motion machine~~ cold fusion method in his shed – or maybe he hasn’t.

I plan to write a post on this eventually.
Please do! You are the only hope left that he softens his position on the issue.
Yay! That means I can get in a preemptive rebuttal!
Sir Isaac Newton to that entire post!
If there exists such a thing as C, a general “correctness” factor, then Newton is surely the incarnation of C taken on human form. He made huge steps forward in physics, to the point that we forget his additional contributions to optics, which on their own would have earned any other man a place in the history books. Is that too empirical for you? He also invented friggin’ calculus. Too abstract? Did I mention that he usually forged his own scientific instruments because he couldn’t trust anybody else to get them down to the precision he needed?
The man was provably good at picking correct ideas out of idea space that nobody around him could. And yet, none of you are signing up to convert to Arianism. If we believe in C, we really should.
Indeed, Yudkowski’s pre-commitment to atheism would, at nearly every point in the past, have put him in opposition to just about every one of the scientific fathers. Even more recently, it would have put him in the Lysenkoism cluster, and against the Lemaitre cluster. You basically need to posit that C only started working in the past 20 years or so, not unlike the color grue.
Don’t forget his contribution to Alchemy.
I often have fun referring to Sir Isaac as history’s most successful alchemist, and then pointing out that his development of laws of motion and gravitation that explained the motions of both terrestrial objects and celestial bodies is perhaps the most successful application of the Hermetic principle of the unity of macrocosm and microcosm, generally expressed through the aphorism “as above, so below”.
I don’t think the Arianism example is very good. Arianism isn’t any worse than the major belief systems that it was competing with at the time. The main reason people aren’t converting to it–and certainly the main reason your average guy on the street isn’t a member–is that its opponents were better at killing its members than its members were at killing its opponents.
Why wouldn’t Arianism be a good example? It was a very Contrarian theory in Newton’s time. And I don’t see any reason to dismiss it out of hand as a viable theory now. If we think there is a general “being correct” skill, and Newton demonstrated his proficiency at that in every area we can check, why shouldn’t we trust him in one we can’t, like Arianism?
(As an aside, I don’t think your history is really accurate there. Arians seemed to hold the upper hand politically for most of the time the debate was raging (Nicaea was anomalous and not enforced by the subsequent emperors), and even after most of the church hierarchy had gone Trinitarian, Arians reappeared in the form of the conquering barbarian hordes, who were pretty good at killing people.)
Arianism had the backing of a good lump of the church of the time and secular powers including emperors. It got to the stage where pretty much the only major voice opposed to Arianism was St Athanasius; indeed, Alexandrian politics being what they were, the part of the populace which was still pagan took advantage of the accession of Emperor Julian the Apostate (who tried to revive Classical paganism with a reformed twist) to fling the deeply unpopular Arian bishop into jail which, ironically, led to his murder and the return of Athanasius to Alexandria to take up the see which the Arian bishop had taken from him.
Indeed, after the Council of Ariminum in 359 AD where the orthodox position seemed to be triumphant, when the dissenting bishops went back to their sees they began to change their minds and St Jerome could say “The whole world groaned, and was astonished to find itself Arian.”
Arianism very nearly won. It wasn’t the popular notion of “And then the Emperor imposed his brand of orthodoxy on everyone and crushed the alternate Christianities”. As Jaskologist points out, the very successful invading Vandals were Arians and treated the North African Catholic communities harshly.
Just an FYI, his name is spelled “Yudkowsky”. Eliezer Yudkowski is his evil twin.
Spelling it as Yudkowski seems to be a common thing among his critics. I don’t know why.
I’d guess because most Slavic names in the US are Polish and those end in -ski.
Our esteemed host speculates that it is a case of mistaken identity.
More seriously, I assume that one or more of his prominent critics must have made a mistake, and that other people who have never read his actual work but do read the critiques have picked up the misspelling. At this point it’s probably self-perpetuating; anybody who moves in anti-LW circles absorbs the incorrect spelling of the name by osmosis.
Seems more likely it’s the reverse – “Yudkowski” is the more intuitive spelling, and people who aren’t big fans of his would be less likely to remember that his name’s spelled counterintuitively. (Also, it’s more likely that fans have read a lot of things by him than that critics have gone to the trouble of reading a lot of criticisms of him).
I’m an atheist, but for what it’s worth, I think Arianism makes more sense than Trinitarianism. It’s a simpler theory, and, as far as I can tell with my limited knowledge, fits better with scripture, so I would assign it a higher probability than orthodox Christianity. (But then both of these probabilities are close enough to zero to not make any practical difference.)
He also invented quantitative finance.
I feel like a lot more people are doing the equivalent of rejecting Newton’s physics because of his Arianism than the opposite.
The stated premises of EY’s original article would indeed mean that a man of Newton’s time should ignore his theories of physics due to his Arianism, since Newton fails on the very first slam-dunk criterion (Atheism: Yes). Many-Worlds wouldn’t apply yet; I don’t know the status of p-zombies as a philosophical question back then.
If we go with the more defensible underlying logic instead, and ignore EY’s specific litmus tests, we really should join up with the Arians, given how Newton manages to be right about everything that we can check, items which are much more slam-dunks than anything EY lists.
Why can’t we believe that their opinions were the most likely to be correct given the information they had, but we have additional information which changes things?
He’s long had contempt for people who don’t endorse Many Worlds Interpretation, inclusive of physicists and non-physicists. It’s long been his view that failure to endorse MWI is a sign of diseased thinking.
I got to LW because of the sequences, which I enjoyed reading and which were mostly great. But the MWI stuff seems wildly excessively confident. Even if he is right, though, that it’s obvious to anyone with a proper understanding of quantum mechanics… not all of us have that, and economists should not be supremely confident of such a thing.
It’s really unseemly for Eliezer to draw this particular line. I have my theories as to why he does this, but they are not nice and probably not necessary.
Sort of as a counterpoint to this, I read it after I had taken basic quantum mechanics in lower division undergraduate physics, was immensely confused by the way that the copenhagen interpretation was explained, read the sequences, then did upper division + graduate quantum in view of MWI and subsequently found it much easier to intuitively appreciate all the previously “impenetrable and unintuitive” parts.
However, this comes with many caveats:
There’s independent evidence to suggest that my initial exposure to quantum was subpar; the professors phoned in their teaching duties. Obviously I could have gained the appreciation just as easily from just being exposed more to quantum.
Obviously it could have just been a “placebo effect”, I don’t think my ability to solve problems got better, but I did proceed through the material much faster than before. And obviously the effect of the sequences as a pedagogical framing is not a perfect correlation with truth value (see useful lies like Newtonian mechanics or verbal analogies to mathematical constructs), but I feel like it was a useful consequence and I feel bad whenever I see a bunch of people criticize this part of my mental framing.
I don’t know – a florist could be sound on the Many Worlds question and still produce god-awful flower arrangements (in which case they’d probably soon be an unsuccessful florist and perhaps should decide to stick with theoretical physics instead).
It does seem much too easy to confuse whatever Yudkowsky is saying with the impression that he’s saying “If they agree with my pet theory, then they’re smart people and should be listened to”. Why on earth should an economist have any opinion one way or the other on the possibility of many worlds?
Okay, I see the point that “being willing to think outside the box and entertain heterodox opinions” can be useful, and that economists who are open-minded may come to better theories that can be applied to make financial and economic systems work better (for whatever value of “better” you want to define).
But at this stage, I think I’d prefer a stodgy, conservative economist who was too cautious to try out crazy new ideas and just concentrated on getting the budget balanced. We and Greece both suffered a bit from all the people who wanted to believe the good times would never end and that markets would always go up and money would fall out of the air for everyone forever and ever, world without end, amen!
MWI isn’t obvious to people who understand physics, at least not under non-True-Scotsman definitions of “properly understand”. EY has struggled to answer informed critique.
Given that the sum total of the evidence for MWH, or any other interpretation of QM for that matter, is aesthetic, I see no reason at all for such overconfidence. In the face of so little evidence, the humble answer is to shut up and calculate.
Isn’t epistemological humility supposed to be the hallmark of the “rationalist” movement? Yet, time and time again on issue after issue we get predictions about the future, including the far future, that are presented in quite strong terms.
You’re not the first to make such an observation.
I am totally on board with the statement: “If I agree with someone on X topic which I have carefully researched, that gives me a higher opinion of their intelligence globally.”
I am not at all on board with the statement: “if X other person agrees with Y other person about Z topic which they have researched but I haven’t, that should change my opinion of X or Y.”
I wonder if a lot of the people arguing are not opposed to the general notion of intelligence correlations, but instead are just complaining because they don’t think that an economist’s beliefs about Many Worlds are relevant to them.
AWWWW YEAH. Referenced in the comment of the week. That’s like… one step removed from being the comment of the week. Some day I’ll be Scott’s favourite. Some day.
Since having weird asynchronous conversations is totally the best way of doing this:
Mirzhan Irkegulov: I spoke strongly in my original comment, and thanks for seeing through my frustration to what I actually meant. I am completely on board with CBT. I believe it to be the most effective intervention. However, therapists themselves haven’t worked for me because, as you identified in your comment o’ the week, there’s a ton of snake oil out there. I’ve seen about 8 therapists over the past 3 years. None of them took insurance, and none of them helped.
On the other hand, over the years I have come to administer a form of CBT to myself, and it corresponds almost exactly with what you described. That’s an excellent, concise, accurate, and actionable description by the way. I’ve paid several thousand dollars to therapists over the years and none of them were able to communicate the core idea of what they’re doing as well as you just did.
The thing I’m working on right now, the thing that therapists do the worst, is the behaviour part. The cognitive part, reframing things in more positive ways, this I understand and can do (at least when I’m not in the thick of depression). But the behaviour part, that’s harder. By way of example:
What if the circumstances of your life are such that you feel (reasonably and rationally) near-constantly in danger? What if the circumstances of your life are such that you feel (reasonably and rationally) unloveable? Without being able to overcome these circumstances, even if you can reframe your irrational beliefs, you can’t intuit them, and sooner or later you’ll slide right back into depression. A therapist can help you practice deep breathing exercises and the like, which have helped with anxiety by taking away the physical triggers. A therapist can’t fix the fact that a scary drunk homeless guy sleeps in your backyard five feet from your bedroom window and the cops refuse to do anything about it. A therapist can help you get over (for lack of a better term) perfectionism, so you’re not so down on yourself. But a therapist can’t go out with you, wingman for you, introduce you to people, and teach you how to be more sociable and sexy.
In fact, the relative uselessness (to me) of therapists was very well illustrated by this last point. A few months ago I met a woman online who was starting a business, basically wingmanning as a service. Life coaching, I guess. In the span of a few weeks she was able to trigger a dramatic improvement in my mental health. How? She just… taught me and coached me in all these interpersonal things that therapists can’t or won’t. Analyzed my posture and body language and pointed out what I was doing wrong. Helped me upgrade my wardrobe. Helped me find meetups and communities that I might fit in with. Taught me how to make small talk, how to interact with strangers, meet them, and make sure they like you. All it took was a few hours of focus on these things, things that would completely fly under the radar of “real” therapists. Because, when you walk into the CBT therapist’s office, the assumption is that your thought processes are askew. They’ll not bother looking into whether or not your (negative) thoughts are simply accurate reflections of reality, and they’re not equipped to help you get over these if they are.
It’s actually a shame; she dropped off the radar almost two months ago, and I have no idea what happened beyond “I really hope her chronic health problems didn’t take a turn for the worse”. Disappeared the day before she was going to write up my first invoice though (~1/3 the cost of therapists) so silver linings, I guess.
Anyways, not sure what my point is beyond “mental health is hard, therapists are hit and miss (mostly miss), and fuck they’re expensive”. ¯\_(ツ)_/¯
I wonder if there have been any studies that take a look at life coaches vs therapists. I know that many believe depression doesn’t depend on circumstances and that it’s all in the head but I have a hard time believing that.
“…depression doesn’t depend on circumstances and that it’s all in the head but I have a hard time believing that.”
I have two views on that.
(1) It’s like eqdw says: it’s the behavioural part that’s tough, not the cognitive part. It’s easy enough to sit down and go through a list of “You are not as useless as you think you are because look at this when your boss said you did a good job or that when you got high results in new skills”, but then the little voice goes “Oh yeah, and if you’re so great, how come you can’t talk to a stranger in a lift for five minutes? Or that you prefer email because you hate talking on the phone, when most of the communication in your job is done by phone?” and what do you do then? You need someone who can show you what the hell to do. So a life coach might definitely be better in a practical, concrete way there.
(2) I also firmly believe that some of my trouble is that my brain chemicals are out of whack so medication would be nice, please, but I’m out of luck on that (unless I become actively self-harming or suicidal, i.e. I’ve got the scars or police report about me jumping into the harbour to show for it, in which case then they’ll give me drugs) so counselling it will have to be.
So it’s a combination: there are definitely “life skills” that I lack or really, really suck at, but at the same time until I get my head sorted out, I won’t be able to work on getting those skills.
Why are you out of luck? Is your health care system in (iirc) Ireland more strict about drugs than the US? I can get pretty much whatever drug I want if I ask my doctor nicely.
Incidentally: I’ve tried four different SSRIs (sertraline, citalopram, fluoxetine, paroxetine) and none helped. Then I tried bupropion (not SSRI) and it appeared to help a little bit. I started experimenting with massive amounts of caffeine (350-500mg/day) and this seemed to help more. So now I’m looking into ADD meds.
I swear, while bupropion didn’t solve most of the problems, it made a dramatic improvement more than any other intervention I’ve tried. Drugs are the best.
Apparently we are now operating on a spiffy new community mental health initiative.
In the Bad Old Days, you pitched up to your doctor and they wrote you a prescription. But Drugs Bad! (if you don’t need them). And if you’re only Stage 1, you don’t need them – Stage 2 is the “Yes, I do cut myself/I tried throwing myself off the bridge” okay you really do need drugs as well as counselling.
The irony here is (a) a couple of years back when I was having panic attacks and anxiety attacks, my GP was perfectly happy to prescribe me Xanax and I was the one going “No, I’d like to stay off drugs if possible” (the Xanax did help short-term); (b) had I broken down and gone to my GP earlier, I’d be on anti-depressants now like my sister, who went while I was doing the stoic “I don’t want to rely on drugs, drugs are a crutch!” bit; and (c) mental healthcare reform groups wanted an alternative to the “traditional, medicalised version” and “alternatives to medication”. No, honestly, I’d be happy with some medication!
By the time I broke down and asked for anti-depressants, I got the “No, now we do counselling instead!” bit 🙂 Given that on my first visit I asked for medication, was refused and got the referral to counselling, bottled out of that, got a bad dose of suicidal ideation thereafter and had to crawl back with my tail between my legs and ask for a second bite at the cherry for referral to counselling, and still didn’t get offered medication, and given that this being a national initiative any other doctor I’ll go to probably will do the same “We don’t do drugs any more, now it’s counselling”, I’m probably about as well off sticking with what I have as trying to change.
I’m probably a little cynical here, but since I haven’t made any active attempts to self-harm or kill myself, and since I don’t have a suicide plan*, I imagine I’m considered low-risk so – counselling!
*Yes, my GP did ask me that re: the latest bout. I am consciously trying very hard not to think about that and divert myself as soon as I start going “Well, if I was going to do it…” because I know once I start down that path, it’s not going to go well. Again, the irony is if I said “Actually, yes, I’m working out a way to do it**”, then I probably would get the “Okay, you need something while waiting for your assessment and maybe even need to be seen by an in-patient clinic”.
** Which I sorta kinda have a thought about; I know the ways I can’t do it, and I have sorta kinda a way I might try it if it ever got to that point – and this is where I try and force my brain off that train of thought by switching the tracks.
[Trigger warning: frank discussion of suicide and related psychiatry experiences]
FWIW this comment section would be worse off without you.
That sounds super byzantine and frustrating. :( It hits on a thing that has been bothering me for a while now too.
I really, really, REALLY wish that I had the right to waive my rights. Because sometimes I get to a dark place too. I consider myself extremely low risk. I haven’t ever planned anything in detail (except for one really scary, impulsive, long-time-ago situation). Like you, I’ve enumerated a few ways that are definite no-gos, but beyond that, nothing. I’m still choked up at the “things I need to finish up before I can even think of that”. Which is probably a good thing to be choked up over, all in all; it’s quite literally a list of reasons to live. But I wish I could talk frankly about this. I wish I could go to a therapist or psychiatrist and say “look, I’ve analyzed the evidence, I can’t see any path from here to a solution to my unhappiness, and I want to quit playing this game”. I wish I could just frankly discuss this in a dispassionate, emotionless context. I wish I could talk to a therapist and just say “these are the challenges I think are insurmountable, these are why I think they’re insurmountable, this is why they’re important, and I think it’s reasonable to not want to live if these aren’t solvable. Help convince me that it’s not that bad”.
But I can’t do that. The split second that I start saying this sentence, they’ll send me for inpatient hospitalization. And as Scott has mentioned several times before, inpatient care doesn’t have the greatest success record, and it’s highly disruptive to one’s life in the short term. Given that I’ll get deported if I stop showing up for work, triggering this cascade of events that ends up with my life being (worst case) utterly destroyed or (best case) set back several years from my goals is a Bad Thing.
It makes me feel that in order to get the help I need, when I need it, I need to game the system. I can’t actually talk about my problems, I have to talk about the lies that I think will most effectively lead to the solutions I want. Which of course may or may not be the right ones; I’m not a doctor. But this in-built risk aversion forces us to be dishonest and hinders communication between the people who can help and the people who need help.
Honestly, drugs have been by far the most effective treatment for me. Not anti-depressants; SSRIs did nothing but accelerate the death of my relationship (yay side effects!). But stimulants have helped, well, lift my mood. Glorious California Proposition 215 ensures me an only-slightly-self-destructive way of going into maintenance mode and putting the rest of the world on pause. And some drugs, done with suitable precautions, have functioned as therapy, only self-directed and much more effectively. Drugs work, for some situations, and fuck the pharmacological Calvinism informing these “drugs as last resort” policies.
Deiseach, since we had the topic of paradoxical interventions recently: are you sure that getting serious about planning your suicide would put you that much closer to doing it? I find it immensely comforting to know how I would kill myself if I wanted/needed to, and it may be a coincidence, but I have experienced fewer suicidal thoughts since I figured that out.
I suppose it’s a fine line you need to walk: make things seem bad enough that you get medication, but not bad enough that they intern you, right? I wish I could help with determining where that line is, but I live in a less crazy country…
Creutzer, the last bout of this which scared me back to the doctor asking for a second chance was because of what eqdw says about “I can’t kill myself unless I take care of these things first”. Generally that’s been enough.
This time round, I was “Feck it, I don’t care about taking care of those things. I’ll be dead. They won’t matter.” That’s what worried me.
I seem to be hitting a bad patch recently; it’s just a matter of gritting my teeth and getting through it. If it gets really bad, I will go back to my doctor and she might be willing to give me something in the interval of waiting for the counselling.
But yeah, I really do feel it’s the view that “unless there’s blood involved, it’s not real”. If I had cut marks on my arms, sure, that’s evidence I’m as bad as I say I am. If I turn up having never even tried anything, well, how bad can it be?
Re: alternate forms of medication, I was coping by drinking, which is Not A Good Idea, Kids and which I’ve stopped doing. Binge drinking bad, boys and girls. Going “I think consuming this 75cl bottle of rum over the course of the day is a helpful idea” will not benefit your liver enzymes, even if it does make your head stop at you.
What would happen if you turned up at the doctor’s saying that you are not currently suicidal and only mildly depressed, but you have been suicidal, and in case you get into that state again, you have a fully worked-out and operational plan that is unlikely to fail? They shouldn’t be able to intern you for that, but can they take the risk of not giving you meds?
Deiseach, You need to get back in there and be a little more manipulative. Don’t list your symptoms. List the symptoms of the patient that gets the drugs. The way to handle the suicide issue is simple: you feel okay today, but regularly you experience suicidal ideation, have for years, you’re worried it’s chronic, and feel sure that you can no longer wait for things to get worse.
Millions of Britons are getting prescribed anti-depressants, some that probably don’t need it, and if that’s the trendy media narrative right now, as it appears to be, then you’re going to have to try that much harder on your own behalf. Frontline bureaucracy with a mandate to cutback starts by shooing the meek away.
I read all of your thread and the comment of the week. Very interesting stuff.
Yes, in my experience, there are a lot of therapists who are just really bad at their job. I’ve generally had better experience with psychiatrists, who seem not to go in for as much hippie-dippy bullshit (this might be harder to select for in Berkeley, which is well known for such).
Honestly what I’m doing right now is fixing the shit parts of my life. The problem is that some of them are hard to fix, some of them will take a long time to fix, and some of them I still don’t even know where to start. In the short term, I’m gonna be pretty fuckin’ sad and anxious and lonely and depressed. So, trying new drugs. Investigating certain theories. I suspect part of my problem is misdiagnosed ADD, so looking into that right now.
In the mean time, I wish there was a way to get hugs on demand
BTW, unrelatedly, I read your dissertation the other night, and I wanted to let you know it was an enormous pleasure. It’s a first-class piece of scholarship, and it’s a rare delight to discover passages that needed translating from Latin in a computer science dissertation. Thank you.
Why thank you! I’m glad you enjoyed it. It was really satisfying to do work on those chapters.
Thanks a lot for writing this. This is exactly how I feel about Therapy.
My life is a clusterfuck of psychological problems and bad circumstances, therefore therapy cannot fix my life.
At the same time, real-life solutions that work for other people often don’t work for me because of my screwed up mind.
About CBT: There are things that CBT (as described by Mirzhan Irkegulov) neglects, or cannot do: 1) If I am in a state where I cannot imagine anymore what having fun feels like (or success, or sexual arousal, …), no amount of logical reasoning will get me to feel these emotions, because the concepts of Fun and Success and Sex have lost all meaning. What actually does work: make sure to experience these feelings on a regular basis.
2) The mind contains subsystems that do not speak “logic” and cannot be convinced. Instead, you will have to identify the specific buttons/triggers for each desired reaction.
What actually did work for me is PJ Eby’s stuff on how motivation works. That guy is a genius.
Would you like to chat sometime in more detail? You are the first person I have ever met who I think I could have a decent conversation about psychological/life problems with.
On the subject of Patreon, Gwern of gwern.net has one too: https://www.patreon.com/gwern?ty=h. The servers that run his brain emulations aren’t free, you know.
Thanks. Added in. Last I saw Gwern was using some kind of Patreon competitor that never took off and not making as much as he wanted, so I’m glad he’s got some decent cash flow now.
Thanks for giving in to the screaming masses and putting up a Patreon account.
Just in case somehow someone didn’t say this already, and because your “I don’t need the money” spiel sort of makes it seem that you don’t get it:
The reason people want to give money to you is not just because they want you to have that money, or because they think you need it. It’s because giving money to you on Patreon is publicly signaling that what you do is worthwhile and should be done. It is, in a very small way, a method of fighting against Moloch.
Also, I would much prefer it if you billed monthly instead of per blog post. This would limit the influence the Patreon has on your posting habits, which probably doesn’t matter yet, but if the numbers ever get very big, the risk of you having a neat small idea and thinking “is this really worth $x?” is something most of us would probably rather avoid.
A quick note on the problem of utility aggregation — this is in response to some stuff I think some people were discussing on Tumblr and I don’t want to dig up.
Basically the argument went (I forget who was claiming this exactly) that utility aggregation (we will ignore for now the question of to what extent this makes sense at all) has to be by adding things up because otherwise the Δutility resulting from a given act depends on the size of the rest of the universe, and how much good something does doesn’t depend on the size of the rest of the universe.
I claim this argument is in error. Let’s simplify the problem — we imagine that the universe consists of two causally-disconnected “bubbles”, so you can only affect things in your bubble. Then the goings-on in the other bubble, in particular its size, should play no role in your decision making. This much is certainly correct.
But it’s a mistake to reify changes in the utility function as “how much good something does”. The point of a utility function is to describe preferences over gambles. Suppose that you have one utility function U_1 for your bubble B_1 and another U_2 for the other bubble B_2. You want to make a global utility function U for the whole universe. Then certainly if changes in U_1 or U_2 must be reflected by a corresponding change in U, then you must have U=U_1+U_2+const. But as I’ve said above, there’s no reason to insist on this. Suppose we weight U_1 and U_2 by the “sizes” (s_1 and s_2) of the corresponding bubbles, so U = U_1*s_1 + U_2*s_2. (Or to make it more like the original “absurd” scenario, suppose U_1 and U_2 both take values in [0,1] and we are taking a weighted average, U’ = U/(s_1+s_2).) Since E(U_2) is the same regardless of your action, for any actual decision you might make this is simply U_1*k+c, for some constants k and c, i.e., equivalent to U_1. You are making the same decisions as you would under U_1, which is all it makes sense to demand.
In the real world, of course, we do not care about actual causal disconnection, but about connections so tenuous that it makes little sense to estimate a nonzero value for E(Δutility). Still, I think I’ve shown the basic problem here. Sure, ΔU’ of your actions may decrease as the other bubble increases, but if you don’t make the mistake of looking at the number ΔU’ as meaningful, this just isn’t relevant. You still come to the same conclusions about what actions you should take, which is what a utility function is for.
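A tiny numerical sketch of the invariance claim (hypothetical action names and utility numbers, in Python): the action maximizing the weighted global utility U = s_1*U_1 + s_2*E[U_2] doesn’t move when s_2 changes, because s_2 only scales a term that is constant across your choices.

```python
# Sketch with made-up numbers: since E[U_2] is the same no matter what
# we do in our own bubble, the s_2 * E[U_2] term is a constant offset
# and cannot change which action comes out on top.

def best_action(s1, s2, u1, expected_u2):
    """Pick the action maximizing s1*U1(a) + s2*E[U2] over actions a."""
    return max(u1, key=lambda a: s1 * u1[a] + s2 * expected_u2)

u1 = {"plant trees": 0.9, "do nothing": 0.2, "pave it over": 0.4}
e_u2 = 0.5  # whatever goes on in the other bubble; we can't affect it

# The same action wins no matter how large the other bubble is,
# even though the raw delta-U numbers shrink relative to the total.
choices = {best_action(1.0, s2, u1, e_u2) for s2 in (0.0, 1.0, 1e6)}
print(choices)  # {'plant trees'}
```

The ΔU′ of an action does shrink as s_2 grows, but since the ranking of actions is untouched, the decisions are identical, which is the only thing the utility function is for.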
This analysis seems to assume that s_1 and s_2 are constants. But they are not always, and this is why you can get weird conclusions from averaging utilities (such as killing the least happy half of the population being highly moral).
No, it’s actually not. Part of my point is that, contrary to what other people claimed, s_2 can vary without actually affecting anything (assuming that there’s not some stupid TOCTTOU thing going on or something 😛 ).
Now s_1 varying, when you get into the question of what that actually means, may indeed present the sort of problems that you talk about, and that sort of thing is certainly a hard problem. But I’m deliberately avoiding that sort of problem, and not worrying about just what is meant by “size”, or what happens if s_1 changes, or how we determine U_1 and U_2 in the first place. My point is that, contrary to what some other people claimed, it doesn’t actually change anything to account for the size of the part of the universe that you can’t affect.
What you’re saying makes sense to me.
But, without digging up the original Tumblr posts, what do you mean by “utility aggregation… has to be by adding things up”? I am just curious.
Now I’m faintly worried because the blog looks more or less identical to me, barring the addition of the Patreon link in the side…
So either the changes aren’t as obvious as Scott implied, or my observational skills are much worse than I thought. *shrug* Well, I always try to improve my skills anyway.
No, the changes are extremely obvious. The most likely explanation is that your browser is failing to render them.
blogroll is on the left side, is funny, is bigger
The grey is new, the page renders slightly more slowly.
The left bar does collapse below a certain window width, which is nice.
It collapses in neither of my two browsers (which might be my fault since I have JavaScript disabled by default in one browser, and the other browser is very rare). I also zoom the page in, so the left sidebar gets in the way. So, for people like me, here is a workaround that seems to do all right on my machine: use a custom CSS stylesheet (your browser should either support custom stylesheets out of the box like old versions of Opera, or support them with extensions like Stylish for Google Chrome and Firefox):
@media only screen and (max-width: 1200px) {
.widget-area {
display: none;
}
#pjgm-main {
padding: 15px !important;
background: #f0f0f0 !important;
}
.pjgm-postcontent {
font-size: 14px;
line-height: 180%;
}
}
Adjust the max-width and font-size values if necessary. This should hide both the sidebars and increase font size beyond a certain zoom level. The page ends up looking like this.
I get rid of the sidebars by increasing font size (by Ctrl+ several times) till the page is big enough that only the middle section fits on the screen. When I want to consult a sidebar, I use the left or right arrow to move the page over.
ETA – On my XP, this works in Opera 12.16; then I use their Fit to Width button if necessary. Iirc, it works in Chrome as well.
If you’re not seeing the horned tentacles coming up from the bottom and reaching out for you, it’s definitely a browser render glitch. It’ll probably be fixed in time.
I had the same problem on my phone. Look for a blue double arrow in the upper left corner. That will take you to the new sidebar with the blogroll.
Dorothy Thompson’s 1941 article, “Who Goes Nazi?” came up this week. The description of Mr. G seems quite relevant to the local psychology.
Fair point.
If you substitute Communism for Nazism, the fact is that 1930’s!me does not fit the *non*-Communist profile, but the *ex*-Communist profile. (e.g. Koestler, Wright, Breton, etc).
The description of G may look specific at first glance, but I’m calling Forer effect. All I can get out of it that seems meaningful is “clever and has complex opinions about political ideologies” (plus, of course, some content-free negative sentiment, but that’s how you write about hypothetical nazis). I don’t think those features are particularly useful to identify.
Have I missed something instructive?
The article is a bit vague about the specifics of Mr. G’s motivations. The bits in section III I think are helpful, and bearing those in mind, I get a picture of Mr. G as a partisan-for-hire – neither a true disinterested seeker after truth, nor a true loyalist to anyone or anything.
I think Mr. G is a contrarian who uses ideas as weapons and doesn’t consider whether they might be true.
What I found amusing in that article was the class assumptions embedded in the piece; at the same time as exhibiting “America is a classless society, Jack is as good as his master!” attitude, we get the “But some are servants and some are nature’s gentlemen and some are bred gentlemen”.
The notion that Mr So-and-So and Mrs Such-and-Such would never “go Nazi” because, well my dear, they have breeding, he’s a gentleman and she’s a lady, they may not have money but they do have class – I had thought that attitude had been killed after the Great War but obviously it lingered on.
This reinforces my secret conviction that everything I like was invented either by Borges or Von Neumann.
I suppose you could just take all your Patreon money and give it to charity anyway.
But don’t tell anybody.
I was very sure I was going to see https://thezvi.wordpress.com/ under “Those That Have Just Broken The Flower Vase”.
I didn’t know that existed but hey, added.
Scott, if you ever feel like taking advantage of the money-per-post incentive… 50 individual posts, each containing one Tom Swiftie. Just a suggestion. Of course you’d need 50 titles too, which could make it a bit more challenging.
Patreon has a feature where you set a maximum monthly donation. I assume that would kick in after a while. And then no one would ever sign up again.
At least then people would stop bothering you about letting them pay you?
O psychiatrists, psychologists, and the psychology-adjacent: how good is the evidence for biphasic sleep, really?
I just want to second that I am very curious about this too. It always seemed very counterintuitive and weird to me, though, in fairness, I’ve never gone for an extended period without electric lighting.
It’s what I have observed to happen if you go to bed before 11, don’t have work until 9, and don’t have the internet. After a bit in college, it was the sleep schedule I and my roommate fell into so that we could function during the day while also being on schedule with house mates who worked second shift.
I have had all those circumstances obtain at various points in my life (except the part about having a housemate with a night shift), and my sleep pattern never changed. Are you sure you weren’t just adjusting to be able to see/accommodate him/her?
Well, the pattern on nights when they went out after work was:
Go to bed at 9.
Wake up at 1.
Go to the bathroom.
Eat a sandwich.
Go back to bed.
Because it was rather dull to be awake at 2 am by myself without internet.
But if my roomate was home, after eating a sandwich I’d spend the next two hours catching up with my roomate, then he’d go to bed so he could go to work the next day, and then I’d go to bed.
I’d always assumed creative professions to be more resistant to elimination by ai. On that note, I think this link, while frivolous, will intersect a few diverse interests of many here, and I busted a gut:
http://www.escapistmagazine.com/articles/view/scienceandtech/14276-Magic-The-Gathering-Cards-Made-by-Artificial-Intelligence
I think this guy is assigning way too much of his own meaning into the output. (Also, the Legendary spell was never defined as something that could only be cast once per game).
Oh clearly, and bear in mind that the author there isn’t the one who is running the program.
He definitely was viewing the output with rose-colored glasses, but his descriptions of what the cards were doing are correct. The Legendary Instant, to use your example, would only be allowed one copy in a deck like all Legendary cards, and thus (under most circumstances) could only be cast once.
What I found really interesting, though, was how fond the AI was of Green. It may have been selection bias of the author, but of the really good cards, like half a dozen were green and another half-dozen were green with another attribute.
Legendary cards are not restricted to one per deck, although this was the original rule about them, way back in 199x. Rather, only one legendary permanent with a given name may be controlled by a given player. It’s not clear what a “Legendary Instant” would mean.
I doubt it’s defined at the moment, but the most straightforward extension of the rules would be that only one per player could be on the stack at any time, which would be a really mild restriction in the vast majority of situations. I would not be surprised if at some point an analogous keyword was devised which restricted playing spells that had a copy in your graveyard, which could be rather interesting as it would allow a modest increase in power level.
I’ve been following @RoboRosewater on Twitter for weeks, and it’s given me incredible amusement.
Still not going to replace humans any time soon.
This was hilarious. It makes me think that “programmer” will be the sole remaining job on Earth. Then programs will figure that part out too and, well, that’s that.
At the same time, what constitutes “programming” has been steadily expanding to include “people who can think through what they actually want clearly enough to explain it to a computer”.
People who in the early days of computing wouldn’t have been mentally capable of doing anything useful with a computer can now spend their professional lives creating interactive web pages, animating sprites, constructing spreadsheets or programming in high level languages.
I started learning iOS App/Swift design before I got bo… squirrel!
What was I saying?
Oh, yeah, iOS App design. The design interface (Interface Builder, inside Xcode) is amazing. You could, theoretically, create a novel and useful app without actually entering any “code” whatsoever. And if you’re willing to learn just a little bit, your app can be extremely sophisticated.
The last programming platform I learned – PHP/mySQL – was and is more powerful… but unless you need to manipulate large relational databases or other similar activity, I’m not sure you’re going to have to learn things like that any more to get computers to do what you want.
Maybe an open thread is a place to leave a comment on a very old post (whose comments are closed)?
At the end of section VI of “In Favor of Niceness, Community and Civilization”, Scott notes that even though liberals proverbially never take their own side in an argument, and always feel like they’re losing, they somehow seem to be winning long-term. His final two sentences in that section are: A liberal is a man too broad-minded to take his own side in a quarrel. And yet when liberals enter quarrels, they always win. Isn’t that interesting?
This combines with his description of liberalism a few sections down:
Liberalism does not conquer by fire and sword. Liberalism conquers by communities of people who agree to play by the rules, slowly growing until eventually an equilibrium is disturbed. Its battle cry is not “Death to the unbelievers!” but “If you’re nice, you can join our cuddle pile!”
…to make me think of TIT FOR TAT.
As I presume most readers of this blog will know, TIT FOR TAT is famously the winner in iterated Prisoner’s Dilemma games, made famous in experiments by Robert Axelrod in the early 80s. (If you don’t know, see here: https://en.wikipedia.org/wiki/Tit_for_tat#In_game_theory) It beat out everything. But the funny thing was (as noted by Douglas Hofstadter in METAMAGICAL THEMAS, where I, at least, first heard about it) that TIT FOR TAT can’t ever beat any given opponent. At best they’ll help each other. It looks wimpier than strategies which occasionally try to defect when others cooperate. But in the long run, it wins. It’s not too forgiving — it does retaliate — but it will cooperate with anyone who will cooperate with it, even if that means it can’t ever come out on top; at best it can tie. But enough mutually beneficial ties & it beats everyone else.
Maybe liberalism is winning in the long run because it’s the TIT FOR TAT of politics? As long as you’ll play nicely, you can play with us; we’ll fight back if we have to, but not otherwise. A win-win will do.
Just a thought.
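The head-to-head-loser-but-tournament-winner dynamic described in this thread is easy to check with a small sketch. The payoff values, 100-round matches, strategy set, and five-agent population below are my own illustrative choices, not Axelrod’s actual tournament:

```python
# Round-robin iterated Prisoner's Dilemma. TIT FOR TAT never beats any single
# opponent head-to-head, yet tops the tournament once enough cooperators exist.

# Standard illustrative payoffs: (my score, their score) for each move pair.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else 'C'   # copy their last move

def always_defect(my_hist, their_hist):
    return 'D'

def always_cooperate(my_hist, their_hist):
    return 'C'

def match(a, b, rounds=100):
    """Play two strategies against each other; return their total scores."""
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb

# Three TIT FOR TATs, one defector, one unconditional cooperator.
population = [('TFT-1', tit_for_tat), ('TFT-2', tit_for_tat),
              ('TFT-3', tit_for_tat), ('ALL-D', always_defect),
              ('ALL-C', always_cooperate)]

totals = {name: 0 for name, _ in population}
for i, (na, fa) in enumerate(population):
    for nb, fb in population[i + 1:]:
        sa, sb = match(fa, fb)
        totals[na] += sa
        totals[nb] += sb

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

In each pairing, TIT FOR TAT either ties (against cooperators and other TFTs) or narrowly loses (against the defector, which exploits it once before mutual defection sets in), yet it finishes with the highest total. Shrink the TFT contingent enough and ALWAYS DEFECT wins instead, which is the “needs a cluster” point made below.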
I’ll add that tit-for-tat actually “loses” to forgiving tit-for-tat — the version that lets you defect twice against it before it starts retaliating tit-for-tat style — but that forgiving tit-for-tat in turn loses harder than plain tit-for-tat in environments where there aren’t a lot of “nice” bots or tit-for-tat-form bots.
There’s a nice symmetry there with the thrive/survive model.
Here’s my model of what we mean by civilisation(/liberalism/niceness/Elua/utopia/etc):
The universe is a collection of evolving agents all playing IPD with the agents near them and being rewarded with more/fewer offspring according to how they play. There are two equilibria: one camp is tit for tat, the other is always defect. Ignore the differences between tit-for-two-tats etc etc and assume these are the only strategies.
At t=0 everyone plays always defect. By fluke some mutants start playing tit-for-tat every now and again and, being surrounded by always defectors, they die out. Until one day a small cluster all decides to play tit-for-tat at once. They gain so much from cooperating with each other that they grow and have many descendants.
So we see a cluster called civilisation where one set of rules applies, and one set called barbarism where another set applies. Because civilisation is so much better at acquiring wealth it always grows and displaces barbarism. So we see that civilisation always wins eventually. But when you look at the border, which is the only place where civilised agents play against barbarians, you see that the barbarians always win.
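The “lone mutants die out, a cluster takes over” claim above can be sanity-checked with a back-of-the-envelope replicator calculation. The payoff matrix and 100-round match length are standard illustrative IPD assumptions, not anything from the original comment:

```python
# In a population mixing TIT FOR TAT with ALWAYS DEFECT, a lone TFT mutant
# earns less than the defectors around it, but once the TFT fraction passes
# a small critical threshold, TFT out-earns ALWAYS DEFECT and spreads.

ROUNDS = 100
TFT_VS_TFT   = 3 * ROUNDS            # mutual cooperation every round
TFT_VS_ALLD  = 0 + 1 * (ROUNDS - 1)  # suckered once, then mutual defection
ALLD_VS_TFT  = 5 + 1 * (ROUNDS - 1)  # one exploitation, then mutual defection
ALLD_VS_ALLD = 1 * ROUNDS            # mutual defection every round

def fitness(x):
    """Expected match score for each strategy when a fraction x of
    randomly-paired opponents plays TFT and the rest play ALWAYS DEFECT."""
    tft  = x * TFT_VS_TFT  + (1 - x) * TFT_VS_ALLD
    alld = x * ALLD_VS_TFT + (1 - x) * ALLD_VS_ALLD
    return tft, alld

lone_tft, lone_alld       = fitness(0.001)  # a lone mutant: dies out
cluster_tft, cluster_alld = fitness(0.01)   # a 1% cluster: takes over

print(lone_tft < lone_alld, cluster_tft > cluster_alld)  # prints: True True
```

With these numbers the break-even fraction is tiny (about half a percent), which is why a single mutant is hopeless but even a small cluster snowballs: almost all of TFT’s advantage comes from meeting other cooperators, so its fitness rises steeply with its own frequency.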
This seems pretty accurate. Though it’s perhaps ironic that, today, it’s the “conservatives” in the US whose primary enemy is “barbarism” (while “injustice” is the enemy of the people we now call “liberals”).
So the model here is far better for explaining things like the Europeans wiping out the Native Americans, or why we will lose every single battle against ISIS and still win the war. Within civilised countries (which here means basically any functioning nation state: all of Europe, almost all of the Americas, most of Asia, most of Africa, etc.), there’s a second-order extension to the model.
When you run this game for long enough (and allow an actual range of strategies not just 2) you notice that the civilisation cluster is unstable in the following way: Eventually everyone inside the cluster is cooperating all the time. So you may as well play always cooperate, it’s not selected against, so eventually it becomes a large fraction of the population by chance. And suddenly you’re fantastically weak against defectors if any can invade your territory.
I feel like American liberals care about moving towards tit-for-two-tats and similar strategies, while American conservatives are for strategies that look more like two-tits-for-a-tat. The liberals win the war after losing every battle in the same way as civilisation does. But! If they win too fast there will still be bad guys that can get in when they’ve dismantled all the defenses.
Are there any examples of this second order effect?
Examples of it in game theory? The theory goes back to the original Axelrod experiments and is expressed nicely here, e.g.: http://www.researchgate.net/profile/Piotr_Swistak/publication/228591765_The_evolutionary_stability_of_cooperation/links/0fcfd512ccb7ea838a000000.pdf
Examples of it in reality? Harder to find a convincing one. You’d need a long established place that became pacifist and died when barbarians attacked it. The Moriori certainly fit this. They seem very strongly to have opted for always cooperate, and then got wiped out. But they’re such an extreme case I feel bad citing them as an example.
That’s because whatever wins is defined, after the fact if necessary, as liberal.
The Prisoner’s Dilemma doesn’t map to left/right politics in the slightest. The whole point of the dilemma is that both parties could cooperate to get exactly what they both wanted most, but due to lack of information were put into circumstances where a sub-optimal option rationally appeared to be the best.
That is not the case in Left/Right political fighting at all. Where the Left and Right are fighting, it is because what one side wants is at odds with what the other wants, and cooperation necessarily results in neither side getting what they wanted; sometimes, the goals are completely exclusive, and “cooperation” just means “we lost.”
In terms of the Prisoner’s Dilemma, we would no longer assume that both parties wanted to get the lowest possible prison sentence, but rather that both parties wanted to do the least amount of time while making sure the other guy did the most, and then it’s not a dilemma any more, they’re both doing exactly what they rationally should be.
I was rereading the Motte-Bailey posts the other day, and I was thinking that Jon Stewart might be one of the most high profile users of the tactic. I think he uses both in his comedy AND at the meta level, though it’s different enough from Scott’s examples that I’m not positive it’s Motte-Bailey and not something else.
In his comedy, he’ll often mock people (usually Republicans or people in the media) for being opposed to a bill or policy or court decision that is often complicated or controversial (the bailey) but frame the issue in the most sympathetic and over-simplified way (the motte) when the bill/policy/court decision is way more than that. To use a hypothetical example, say Republicans oppose an environmental bill because they believe it violates property rights. In that case, Stewart will advocate for the policy (the bailey) by mocking Republicans who oppose it, presenting the policy *only* in terms of its environmental impact (the motte) and questioning why Republicans literally want to destroy the environment for no reason while making a funny face in disbelief.
As a public figure, he will often make very serious and damning critiques of media figures, politicians, and the government. Sometimes he doesn’t even try to do it in a funny way. These are influential to the point of having a real impact. He wants to use his public image to make serious critiques and advocate for serious policies (the bailey) but when pushed to justify himself or to debate it, retreats to the “I’m just a comedian don’t take me seriously” schtick (the motte) so he doesn’t have to defend himself. I’ve heard this referred to as “clown nose on, clown nose off”.
Are these Motte-Bailey, or something else?
(If you can’t tell, I’m not a Stewart fan)
I agree that he does that, but I think that (most of the time), he’s pretty good about recognizing it – he’ll wind up saying something like “okay, our side isn’t perfect, but your extreme is ridiculous.” Sometimes he skips this (particularly when he talks about social justice), and I agree that it is pretty damn annoying when he does that. The things to remember about him are that a) he isn’t kidding when he says he’s mostly a comedian – he really is providing more entertainment (and suggestions for issues that might warrant a further, more balanced look) than actual news; and b) unfortunately, quite a lot of major actual newspeople seem to be even worse (or at least, less subtle) about it than him, without even claiming to be comedians.
update: I didn’t read your argument properly, I assumed you were talking about his method of argument (calling out the extreme republicans and claiming they stand for all republicans) rather than the clown-nose-on/off thing. I still think he’s pretty good about that most of the time though.
I agree. There are many not-ironclad arguments made there, and not all of them are motte-and-bailey. There are plenty of other issues there.
As a young Stewart fan turned person pretty annoyed by him, as I have changed personally, I agree about his use of motte-bailey in the sense of his role as a public figure (comedian vs political analyst). However, it seems like his standard comedic tactic/political analysis/logical fallacy is simply just a straw man.
For instance, his argument in favor of any liberal policy or against a conservative policy is simply that some obviously ridiculous far right political character, like Donald Trump or Sarah Palin, says something obviously incorrect or offensive.
It’s like: absurd outspoken far right extremist exists = this liberal policy is amazing and anyone who disagrees is evil or stupid and none of these issues are complex at all.
Unfortunately these tactics bleed into fairly common political discourse among even moderates. Stewart is guilty of exactly what he would claim is wrong with American politics.
And of course these are generalities. He has his moments like calling out Pelosi when she says no Democratic politicians are owned financially by corporate donors.
“Unfortunately these tactics bleed into fairly common political discourse among even moderates.”
I doubt Stewart is responsible. I sometimes read and post to a Facebook climate group, and it’s pretty depressing. People on both sides are confident that they are right and the other side idiots, and post accordingly. It doesn’t even occur to commenters, when someone posts something absurd that someone on the other side is supposed to have said, to do a quick web search to see if he really said it.
I think this is true of comedic commentary in general. If we were all smarter we wouldn’t allow such things to influence our views on serious matters, but alas.
Political satire is the art of helping people to enjoy their biases.
(At least somewhat serious.)
Although I’m not a fan, I can’t blame Jon Stewart. He’s just whatever you call the live action version of a political cartoonist. It’s everyone else’s fault: People that take him too seriously, online media that signal boosts him to ridiculous degree, etc.
Except that Stewart deliberately takes advantage of people taking him too seriously in his “clown nose off” persona, then puts the nose back on when he’s challenged.
Honestly I’ve found him mostly unwatchable for years. He went all in for Obama and hasn’t been an honest satirist since then. His “comedy” basically consists of “show right wing yokel say something stupid in a deceptively edited interview”, “look incredulous”, “spout boilerplate left wing pablum”, “mug at camera”, “repeat until time for fawning interview with popular leftist du jour”
>Except that Stewart deliberately takes advantage of people taking him too seriously in his “clown nose off” persona, then puts the nose back on when he’s challenged.
What do you mean with this? Does he leverage his position as a “Very Important Political Commentator” for some purpose or another?
I think this is the heart of it: Stewart’s motte-bailey takes the form of “clown nose on/clown nose off.” He is obviously a genuinely influential political commentator with serious opinions on issues, but he constantly disarms would-be criticism with humor.
What I don’t particularly care for is how the transition from “fake news comedy show” to “real news show that happens to be funny” was very gradual and kind of surreptitious. One could argue that Bill Maher is guilty of similarly disarming with humor, but I think he never presented his show as “fake news” to begin with. Bill Maher’s was always a “news show that happens to be funny.”
For this reason, my personal preference is Maher>Stewart>Colbert, though I am a bit ambivalent about the cultural effect of “joke news,” even as I must admit to enjoying it. Colbert makes the biggest pretense of being “fake,” while at the same time offering actual political commentary in a way I find to be a bit mean-spirited at times.
I agree that Stewart and his team are guilty of mocking people and policies they don’t like (usually Republican ones) by focusing on some aspects at the expense of others, often in an oversimplifying way. I’m not sure I agree that these are typically instances of motte-and-bailey, at least not a lot of the time. I don’t think Stewart’s arguments are usually of the form “Republicans clearly want to destroy the environment for no reason! *disbelieving face*”. I think they tend to be along the lines of “Republicans are against our particular environmental proposal purportedly because of X, Y, and Z [presented to look obviously absurd] *disbelieving face*” or, if he’s in the mood to analyze what he thinks are their true motives, “Republicans seem to be against our general environmentalist stance because of X, Y, and Z [presented to look like warped logic]”.
If Stewart were espousing his views in blog posts on this online rationalist-sphere, then I probably wouldn’t like him very much, as his arguments clearly don’t meet a particularly high standard of rationality (although they could be a lot worse). On the other hand, as a big time comedian / political commentator who, compared to most political commentators, does seem to at least make some conscious effort to be intellectually honest, I think he is doing way more good than harm for the world and like him a lot.
No he isn’t. He turned into the Sean Hannity of the left, repeating lies in order to make his audience upset, afraid, and convinced that Shadowy Malicious Forces Who Hate Everything Good are out to harm them.
He uncritically regurgitates the Social Justice Lie of the Now over and over, plays clips of people disagreeing by saying things that are true and accurate, and does his “can you believe how awful these people are?” shtick all over it. He’s a void of intellectual honesty.
After the Dunn trial in Florida, where the guy who shot the black kid at the gas station was convicted on four counts of attempted murder and there was a HUNG JURY on the one count of murder, Stewart ran a sketch where the explicitly stated, completely unironic message was that “It is now open season on black people”.
I don’t think Scott would appreciate this conversation entering into the arena of race, so I’m not going to comment on your example (admittedly, I don’t remember seeing that particular sketch and Google is not helping me find it anywhere).
I will stand by my claim that Stewart clearly makes an effort to be intellectually honest and to abide by rational standards of discussion compared to most political commentators. He promotes the idea of considering proposals on their concrete merits rather than on the basis of which political ideology they seem most associated with; he criticizes liberals for claiming things like “Bush is a war criminal” (because it’s a “conversation stopper”) and for using reductio ad Hitlerum to argue against Republican claims relating to Obamacare; he criticizes MSNBC for fearmongering as well as Fox (okay, the actual quote I directly remember on that issue is by Stephen Colbert). He ran a million-participant event to Restore Sanity which emphasized the destructiveness of the hysteria on both sides of the political aisle. Sad to say, these and many more times that he has called for intellectual honesty, in addition to his effort in many cases to employ a let’s-calm-down tone (his SJ-related rants are somewhat of an exception here), give him a more reasonable voice than I have generally come to expect from political pundits.
Also, I doubt that many of even his strongest critics would agree with your assertion that his clips generally show people saying things that “are true and accurate”, rather than the cherry-picked most outrageous Republican quotes of the day.
” He promotes the idea of considering proposals on their concrete merits rather than on the basis of which political ideology they seem most associated with;”
And the objectivists support reason and the Jesuits support doubt. Is there a situation where he departs from the majority of liberals or is this simply applause lights?
” He ran a million-participant event to Restore Sanity which emphasized the destructiveness of the hysteria on both sides of the political aisle.”
And Orson Scott Card did something similar with his book Empire (admittedly more on the loopy side) and another series. Praising moderation and bipartisanship doesn’t make you unbiased. It isn’t something you build up, it is a credit that is eaten away with each screw up.
But he is basically a comedian/satirist, so of course he’s going to pick Stupidest/Most Outrageous Things politicians and commentators on the opposite side of his views said in order to poke fun at them and create sketches.
He’s not primarily a political commentator or pundit, he’s an entertainer.
At this point I’m so sick of *disbelieving face* that I don’t even care what sort of argument Stewart is using it with.
He is very dishonest. They interview subjects for hours on end in order to get the most incriminating sound-bites, often lying to them about what the subject/contents of the interview will be.
To take another example, they told another interviewee that he was there to talk about media bias against Christians as it relates to homosexuality. With the help of a number of leading questions by Samantha, they recut his words so that he was claiming that homosexuals go around looking for straights to beat up. So in a way they lie to you the audience as well.
I last watched Jon Stewart something like 11 years ago, so I guess he’s changed, but most of what I remember was him playing a clip of somebody saying something, followed by a clip of them saying roughly the opposite a decade earlier, and it was more often a CNN host than a politician, and I usually found it pretty funny. His take-down of Crossfire was terrific.
I’m very pleased to see GetStungByMillionsOfWasps made it on.
Some interesting experiences I’ve had with garage-dragony thinking:
1) a while ago I saw an interview with Jon Stewart where he was talking with someone (I think it was some republican politician/military guy) about the VA administrative healthcare issues. And Stewart proposed the idea that, since the military is universally recognized as an efficient arm of the government, they should let the military run the VA. It was interesting for me because until then I’d always thought that “the military is an efficient and functional system” was one of those things politicians said for PR but didn’t actually believe. It led to an interesting conversation, since Stewart evidently really did believe it, but the guy he was talking to was obviously garage-dragony about it – he couldn’t just come out and say “listen man, the military would do a terrible job at this”, but he just as clearly believed it.
2) A few years ago I talked to my therapist about the first time I had my heart broken, back when I was fifteen, and halfway through I realised two things: first, that it was really hard for me to talk about. And second, that I was really surprised that it was hard for me to talk about – for years I’d thought it still bothered me, but apparently what I actually believed was that I just thought that because I was being dramatic. It was like telling everyone about the dragon in your garage and then going home and meeting an actual dragon there.
1: The US military is actually borderline criminally inefficient in terms of being an actual war machine, which kind of makes sense when you consider incentives. If you’re a moderately intelligent person, the military is intolerable because no matter which branch you go into, you will spend years getting yelled at and taking orders from people who are much less intelligent than you are, simply by virtue of them having joined earlier and kissed the right amount of ass. Innovation and clever thinking are not merely unrewarded but actively punished. This makes some sense, as officers don’t have time to relay the entire big picture to grunts; they just need to give them orders and make sure they are followed. Unfortunately, the officers all start out as grunts, so anyone who is smart enough that “get yelled at by idiots for two years” doesn’t sound like a good option is a priori excluded, which sets an upper limit on how smart your officers can be.
Add to that all the political bullshit, and you begin to see how badly our military is actually run. The only reason we aren’t under threat is because our technology is decades ahead of everyone else’s, and our runaway military budget makes it so much bigger than everyone else’s.
2: That’s actually a really interesting way of putting it.
Your explanation of the military’s inefficiency is inaccurate. In the US, most higher-ranking officers were commissioned directly as lieutenants– ie: recruited straight out of university into positions of responsibility. Promotion by seniority is still a problem, but less of one than you think.
The real reason why the US military is an inefficient war machine, though, is that it’s optimized to fight huge World War II-like total wars with other industrialized nation-states, not to fight asymmetric conflicts with nonconventional enemies. And this problem has less to do with the stupidity of officers (some high-rankers, like David Petraeus, seem to understand or have understood the problem well), but with the military’s Cold War-era institutional precommitments.
I would argue that a lot of the US military being inefficient as a war machine also (and arguably primarily) has to do with public perception/support. There hasn’t been public support behind a war in the way there was in WW2 since then. As Nathaniel mentioned, the politics are a major hindrance to the effectiveness of the “visible” military institutions (as well as probably to the less visible, more indirectly). To extend this, I wonder how much drone technology being more and more prominent has to do with the public thinking that it is safe (in terms of collateral damage) and precise, over the actual practical in-battle concerns.
Unfortunately, the officers all start out as grunts
This is not actually true. Most officers in the U.S. military came through ROTC (with a smattering of Academy officers), and even those who come up from enlisted ranks often have gotten a degree (or are working on one) before making officer rank.
So while there is an aspect of “getting yelled at to do not-very-comprehensible tasks” involved there, the “by idiots” is not generally the case. Officers, and officer trainers, are somewhat smarter than average.
From what I gather, the American military is probably less well modeled as an engine for killing people and breaking stuff than as an engine for getting stupendous amounts of tan and/or gray-painted stuff into the right part of the world on short notice. Some of that stuff does have guns on it, but that’s almost an afterthought.
This really does do a pretty good job of winning conventional wars. It’s less good at asymmetric ones.
It’s first and foremost a jobs program for people with too much pride to participate in a regular jobs program. Unfortunately, it is also the proverbial hammer that causes its holder to go looking for a nail.
You want to play this game, it’s gonna be short and ugly.
Short and ugly is better than our imperialist wars of choice, which tend to be long and ugly.
We are all truth-telling rationalists that scoff at taboos and tribalist peacocking, except of course when it comes to our heroes in uniform who nobly serve to defend our freedom. God bless ‘merica, the greatest peace-loving nation that ever was and ever will be. When they see us coming they get the flowers and trumpets ready, because they can’t wait to be bombed for FREEDOM!!!!1! *diving eagle*
I like how you are responding to things that happened entirely in your imagination, as if that made you a courageous truth-teller.
Science, I don’t like military solutions. But (especially as my father was a non-com in the Irish Army; served on U.N. peacekeeping missions in Cyprus and the Congo; probably came back with a touch of undiagnosed PTSD or the like which contributed to his later mental health problems but in those days in Ireland psychiatry was for the rich), don’t shit on the squaddies. They may be dumb grunts from the backend of nowhere too stupid to get Real Civilian Jobs, but they’re the ones who end up getting maimed and killed when the politicians manage to talk themselves into wars.
The problem of jingoism is a real one, but it’s not the problem of the man and woman in uniform, generally.
Blighters
By Siegfried Sassoon
The House is crammed: tier beyond tier they grin
And cackle at the Show, while prancing ranks
Of harlots shrill the chorus, drunk with din;
“We’re sure the Kaiser loves our dear old Tanks!”
I’d like to see a Tank come down the stalls,
Lurching to rag-time tunes, or “Home, sweet Home,”
And there’d be no more jokes in Music-halls
To mock the riddled corpses round Bapaume.
There are options between blaming the grunts and parroting the prevailing nonsense about how they are heroes defending our freedom.
In general, I’d say it is somewhat disreputable to sign up for a jobs program that will likely require you to participate in unjustified killings, but not especially more disreputable than any citizen supporting the program and its destructive uses.
This is accurate. The US military is first and foremost the world’s largest delivery company. Second, it’s a social club with an unusually intense devotion to its many obscure traditions. Last and least, it is a fighting organization. It has been this way since at least the Civil War and likely always will be. The US wins its wars by drowning its opponents in an endless stream of matériel.
American military is probably less well modeled as an engine for killing people and breaking stuff than as an engine for getting stupendous amounts of tan and/or gray-painted stuff into the right part of the world on short notice
This has been codified in several quippy pre-internet memes – and other nations are recognized as having their own strengths (German generals, Brit NCOs, Russian soldiers, French rations, Swiss mountains, etc).
Some academics of my acquaintance have worked with the military and with industry on multiple research projects, and they collectively agreed that 1) the military actively encourages pre-decision discussion – particularly the airing of unpopular or unconventional opinions – much better than the university or business 2) the military makes decisions at a blinding pace, compared to the university, but somewhat slower than business/industry, and 3) once a decision has been made, military culture demands that the people who previously disagreed drop their opinions and work wholeheartedly on the chosen path. (In contrast, academia specializes in not making decisions and tolerating feet-dragging, back-biting and active sabotage, while industry makes decisions fast, but allows dissenters to exit gracefully and become successes elsewhere.)
All the researchers on this team said that the military culture was not the rigid hidebound structure they had expected, and one voiced the opinion that the difference is getting engaged prior to decisions being made. People who expected one outside voice to sway the mass of the Army – once inertia had kicked in – were bound to be disappointed.
Again – what several people said, speaking in generalities.
“once a decision has been made, military culture demands that the people who previously disagreed drop their opinions and work wholeheartedly on the chosen path. ”
This really can’t be overstated and was one of the things that really surprised me about the military. The Chiefs Mess could get together and have a very hotly contested debate – to the extent that if you were sitting in the next room, you could hear them literally screaming at each other for hours on end.
And yet, when the meeting was over, they always presented a 100% unified front. The message was always “this is what the group decided and we all support it.” Unless you were in the meeting yourself, it was virtually impossible to even figure out who it was that might have opposed the decision – everyone is very careful about showing support and not making any potential doubts known to the general public.
While I generally oppose the “you can’t talk about the military unless you were in it!” argument, I’m going to guess you weren’t in it… my experience was very very different. I enlisted out of high school at the lowest rank possible. I scored a 99 on my ASVAB (if you’ll accept that as proof of being an intelligent person who would routinely be supervised by people less intelligent than me). I did it for economic reasons, and it’s paying off in spades (the government has paid for two of my degrees, I’m on track to make six figures by the time I’m 30, I’ve never been in debt, I’ve never had to work particularly long hours or do anything particularly difficult). While I was in, I deliberately chose an easy/low-demand job to ensure I never got deployed or placed in physical danger. Usually I worked less than 40 hours a week. I was promoted to a supervisor level within two years (turns out the biggest component of advancement, in the Navy at least, isn’t “kissing ass” or “playing politics,” it’s taking a test on the requirements of your job). When it came to tests, I was a big fish in a small pond and it was easy as hell to out-score the competition. I think someone who looks at the potential rewards I just described and says “Well I refuse to do that because it means that someone dumber than me might be in charge of me for a few years” might not quite be as intelligent as you assume. I’d also point out that my “less intelligent” supervisors usually recognized that fact and gave me a wide berth and encouraged me to share my thoughts and ideas. And once I promoted, I was regularly placed in positions of authority over people 10+ years older than me.
As far as officers go, Lemminkainen is exactly right. MOST of them are college kids who walk right in and are given authority over 20+ year grunt veterans. Whether this is more or less effective than “keep promoting the grunts to lead other grunts” is probably up for debate, but the grunts definitely don’t care for it. All of the most beloved officers I served with were prior enlisted. They’re much better at understanding the politics and empathizing with the lower-ranking troops, and they have the credibility of having previously done it themselves. The college kids who come in and think everything is as simple as “I’m smarter than you so I’m in charge so I say something and then you do it” usually fail hard until they figure out it isn’t nearly that easy…
Anyone who thinks the US military is efficient has not spent 5 minutes studying or interacting with it.
I’m really happy to see that you’ve signed up for Patreon! Your writing is excellent, and you’ve taken the time to personally respond to some questions I’ve asked before (under a pseudonym), which I really appreciated. It’s nice to have the opportunity to give a little something back.
Here’s a question that I’ve been thinking about for a while, but I’m not really sure how to get an answer for:
What are the contributing factors to cities being hotter, and to what degree does each contribute?
Possible causes I can think of:
1. It’s an illusion, the cities I am thinking of are just in hotter places than the rural locations I’m thinking of.
2. Cities are built in locations that are naturally hotter
3. Cities absorb sunlight better
4. Cities prevent cooling processes (different materials, preventing airflow, etc)
5. Transportation heat
6. Residential/commercial heat (is air conditioning an especially big contributor?)
7. Human bodies (not likely)
You will want to search for urban heat island.
No, Jeremy wants to delay thinking about this question and think about why he never went to google and typed “why are cities hotter.”
Aside from the discussion about it below, I have a general problem with the “don’t ask people what you can google” statement. Beyond people generally being more helpful than Google, asking people also has the benefit of giving you human interaction, which people generally enjoy.
Then it’s a good thing I didn’t do that, asshole.
Wow. You may be the most unnecessarily abrasive person I’ve ever seen on the internet. Which TBH is not so much offensive as it is amazing, that’s a pretty tough competition.
>Wow. You may be the most unnecessarily abrasive person I’ve ever seen on the internet.
Honestly, what this tells me is that you’ve had a pretty sheltered internet experience (I agree 100% that anonymous is being unnecessarily abrasive).
I was in such disbelief at the wanton, gratuitous level of abrasiveness in Anonymous’ comment there that I actually cracked up laughing. Shocking.
Short answer: concrete vs grass.
Perhaps I didn’t phrase my comment well, but the emphasis was meant to be on the relative proportion of the effect, rather than the list of causes. I didn’t mean to imply that I couldn’t find out what caused the heating, but I couldn’t think of a good way to measure the size of the different effects.
If you read the wikipedia or google results, you will find contradictory information, and studies focused on individual effects, but I couldn’t find a reasonable breakdown of the proportion caused by different effects and the techniques used to measure it.
(The snarky anonymous comment irrationally annoys me)
It is very easy to determine the relative proportion of (1), which makes your story rather implausible.
How would you measure that? I could not find a list of cities and the size of their UHI effect that I could use to compare the cities I used as reference points against the average.
Please consider the possibility that you misinterpreted my original comment, and so your responses are not only unkind and unnecessary, but also untrue.
Finding that it is a real thing means that it is not an illusion.
It is possible that your initial measurement was largely an illusion, if you made poor comparisons, like Miami vs Vermont. But, when I google it, I find discussion of cities vs suburbs, which tells you how to choose good comparisons.
It doesn’t seem that irrational to be annoyed by it if you have been researching the issue, actually.
Is there a particular difficulty in assembling multiple focused studies to compare?
A lot of that comes from green plants. The plants take water from the ground and make it evaporate, cooling both themselves and the air around them.
Related to this, parks that have a lot of trees and whatnot are usually cooler than the rest of the city, and have cold air blowing out of them to the surrounding neighborhoods. Big parking lots and other open spaces without plants are instead warmer than the rest of the city and have cooler air blowing into them.
While I agree that conducting research would seem (to me) to be more efficient than asking random people who might not be truthful or factual with you…
1. It’s an illusion, the cities I am thinking of are just in hotter places than the rural locations I’m thinking of.
Nah, it’s real, and cities in very hot regions are still hotter than the surrounding deserts. (Specific buildings are different – people built large tall buildings for a reason.)
2. Cities are built in locations that are naturally hotter
Nope. Cities are disproportionately located on seacoasts and at river intersections, where the water has a moderating effect and the extreme heat of summer is not as strong.
3. Cities absorb sunlight better
Mostly so. Concrete & stone hold heat.
4. Cities prevent cooling processes (different materials, preventing airflow, etc)
The largest impact, due to the lack of vegetation and related evaporative cooling.
5. Transportation heat
I’m assuming you mean heat associated with powering autos and trains? Is that right? Contributes, but not overwhelming.
6. Residential/commercial heat (is air conditioning an especially big contributor?)
Contributes. The largest part of AC is the generation of power to run the AC, followed by the heat-absorbing concrete of the building being cooled.
7. Human bodies (not likely)
At densities tolerated by modern westerners, not at all significant.
Other things to consider:
Heat retention by built environment.
Lack of shade due to lack of overhangs.
Reduced evaporation (because not a lot of free water).
Reduced air movement.
I was under the impression that the large heat capacity of concrete was the principal component (which is why the effect is larger at night), but that’s a quibble and I otherwise agree, definitely lack of evapotranspiration is a thing.
Regarding the effect being larger at night, that may be the case for average temperature but I will note that lower-atmosphere turbulence over cities peaks in mid-afternoon (and at a level substantially higher than the surrounding country).
I’m guessing that differential absorption and transpiration result in more, and irregular, heating during the day, and the high heat capacity of concrete smooths this out into a higher average temperature at night.
Anyone got any advice on how to play the status games present in the world? I’m hopelessly honest and straightforward and it feels limiting. I feel constrained by my ethics and don’t feel capable of dealing with people acting in bad faith. I contrast it to board games where I seem capable of acting perfectly fine since I won’t hurt anyone.
Your ethics seem non-consequentialist if they’re making you bad at dealing with reality. You should add exceptions to your ethical rules like “Be honest, unless I’m talking to someone I suspect will use any information I tell them against me.” You can have a consistent system of ethics that still keeps you from being screwed by people as long as you don’t over-simplify them, Kant-style.
You can also get less screwed by your aversion to lying by saying less.
I am a consequentialist, I just have been conditioned to not play rough and really have no real idea how to do it in the real world, and so unless there is a clear causal path from action to outcome I tend to play nice. I also have a pretty strong aversion to dealing with fucked-up-things so I tend to try to avoid these situations and conflict in general unless I know exactly what I am doing. To stretch the board game analogy further, a bunch of backstabbing and lying and conflicts of interest are funny/hilarious; in the real world it is sad/terrifying.
Have someone hypnotise you into thinking real life is a board game. What could possibly go wrong?
Try to take a two-tier consequentialist approach to lying. Falsely accusing someone of a crime causes harm, and is therefore wrong. Lying about what you had for breakfast does not cause harm, and is therefore generally not wrong. At the meta level, try asking “do I want to promote a norm of people telling the truth in this sort of situation?” and weigh that against any object level benefit you see. Lying is not in and of itself wrong. Specifically, if you know someone is acting in bad faith, and you know that lying is the optimal strategy, you don’t need any further justification.
Inversely, though, if one asks themselves “do I want to promote a norm of people being okay to lie in this sort of situation?”, they should be aware that the “in this situation” part might not be communicated properly.
My point is: Me lying about my breakfast might prompt my child nephew to think it’s okay to lie in general. Obvious example is obvious, of course, but I want to argue against lying in the neutral case. In the “lying is better than telling the truth” case I agree: Lie.
By that reasoning, you taking things from the store after you pay for them might prompt your nephew to think that it’s okay to take things from the store in general. And you making a child go to his room as punishment may make him think it is okay to confine other people to rooms in general.
Why, yes, of course. Is that not how it works? Of course you tell them they’ll have to pay and you let them “give the uncle the money” and they learn to pay that way. Next time around they’ll know, but until you tell them, you’re just as likely to reinforce wrong beliefs by leaving the shop without them seeing you pay.
Back to my original point, though, I feel like it being “okay to lie” is generally tied to conditions that are generally not very well communicated. And I feel that failure to communicate the great, big scope of why it’s okay to lie is much too likely to happen to carelessly make it okay to lie in the neutral case.
TL;DR: I’d rather default to “lie if necessary” than “tell the truth if necessary”.
@ Godzillarissa
> Back to my original point, though, I feel like it being “okay to lie” is generally tied to conditions that are generally not very well communicated.
It’s pretty easy to communicate “It’s okay to lie to outsiders, but not to Mom and Dad” or “but not to me”.
@houseboatonstyx:
I feel like my example got us stuck in a discussion about children’s education when I intended to talk about grown-ups lying.
Kids are different to grown-ups in that I will try my absolute best to teach a child what is right, what is wrong, why I did make an exception to the rule and what the reasons are. Once you’re above a certain age, though, nothing is communicated anymore, you’re just supposed to use “common sense”, “work it out yourself” and that’s that.
If we suspect that our grown-up actions influence other grown-ups insofar as a “standard for (not) lying in certain situations” can be organically established*, that non-communication of surrounding conditions is troubling.
*I assumed this was the point. If I have to write them a book on why I lied, with all reasons to do it and not do it etc. I’m out.
It’s pretty easy to communicate “It’s okay to lie to outsiders, but not to Mom and Dad” or “but not to me”.
But why, Daddy?
Is this like the websites that must have a “you must be over 18 to read it” qualifier? I don’t think anyone has ever been deterred by such idiocy.
I don’t like these other comments, which ask you to compromise your ethics. You should either totally abandon/change them, or stick by them resolutely. (Disclaimer: I’m a generally extreme-prone person.)
My advice is to find role models who have both managed to achieve high status while also maintaining strong ethical codes like your own, and emulate their behavior.
Not enough information.
What kind of status games? In what context? Are you in college? High school? Early 20s? 40s?
Is this related to your social life or your status at your job?
In general though, think of these kinds of things like fluid dynamics. Unless you can adopt a proper shape, trying to move through fluid quickly is counterproductive.
I’m concerned about the whole culture war thing possibly getting worse, and maybe even turning violent in my lifetime (or the medium-term future for that matter). I’m not sure if this is a rational concern or if I’m just spending too much time on the internet.
A few open threads ago I saw two of the opposing sides promising to fight/kill each other when it happens, apparently in deadly seriousness. There seems to be an increasing sense that there are irreconcilable differences between left and right that can only be solved through violence. There’s also apparently evidence of increasing political polarization in recent decades. Personally I’ve been feeling extreme rage/bloodlust over these issues from time to time (I feel OK right now).
I don’t know if this is a serious problem or just me. I’ve personally tried to cut down on reading conflicting political views (or even overly inflammatory agreeable ones) to avoid stirring myself up too much but occasionally it can’t be helped. And I’m not sure if staying ignorant while bad things might be happening is a good idea either.
For some perspective, compare with the events of the late 60s: intense polarization (spearheaded by the anti-Vietnam War protests, but involving many other issues), protests and takeovers by students in many colleges, the Kent State shootings. Ten years later, few remembered and fewer cared, and the polarization somehow quieted down. Possibly the same will happen this time around, too. Possibly these things are vaguely cyclical for sociopolitical reasons nobody understands well.
Protests against the conflict in Vietnam pitted citizens against the state. Conflict between citizens was less significant (as, I suppose, supporters of the war didn’t care that much.) I would argue that polarisation is the norm but the scale of polarisation, as with the scale of almost all features of the world, is growing: there is more immigration to argue about; there is more government to oppose or defend; there is vastly more media to stir the pot…
On the other hand, one should remember that 98% of people who talk about fighting on the Internet are keyboard warriors and pacific in real life.
Conflict between citizens was less significant
Not really – it’s just that citizens on “the right” taking action against those on “the left” have mostly been either ignored or completely villainized in histories of the time. There were numerous incidents of blue-collar workers beating up dirty hippies protesting the war. Most of the sometimes violent reaction to forced integration of schools across community boundaries (busing) was quite rational – the destruction of near-suburban school systems did in fact occur just as its opponents predicted, and there was no measurable benefit to the supposed beneficiaries. But people who opposed busing are demonized as racists.
The violent reactions were rational? What did they accomplish?
Rational as in not motivated by prejudice. Also forced busing was canceled so they might have succeeded at their goal (I don’t know enough about its repeal to make a confident estimate).
Wow. Hadn’t heard of that. Cheers.
It might settle down, but modern social media seems to make it particularly easy for mob mentality to rage out of control.
Not clear whether that’s more or less likely to result in physical violence – overall violence is generally down from what I’ve seen, but things like the Arab Spring have been credited to mobile phones.
“Not clear whether that’s more or less likely to result in physical violence – overall violence is generally down from what I’ve seen, but things like the Arab Spring have been credited to mobile phones.”
Did you mean for this to sound like putting Arab Spring in a category of violence? Mob action, yes, but the gatherings in Egypt were famously nice.
>the gatherings in Egypt were famously nice.
Yeah, the militias were just generously donating those bullets to each other as much as they possibly could!
@ Other Anon
Google [ Arab Spring timeline ]
The cell phone arranged civilian protests were famously nice. The military who came later were not using cell phones.
Those had win conditions like “stop the war”. How will a Tumblr fat activist win? They are offended when people find them unattractive (FPH) and they are offended when people find them attractive (chubby chaser fetish). Having other people tie themselves in pretzels entirely around the feelings of other people is not a political win condition, it is narcissistic fantasy. The only thing that would satisfy them is a kind of aristocratic politeness like in a Monte Cristo type novel that is not really likely in this age, and they are not going to promote that by cussing on Tumblr.
Putting it differently, today people WILL be assholes, perhaps not sexist or racist or sizist ones, but assholes in some ways, because we don’t live in an aristocratic Monte Cristo age.
This seems in rather bad faith.
The causal factors are important. If it was the war in Vietnam that polarized people, take that war away and the polarization is likely to go away. If the internet as a discussion medium polarizes people, then the polarization may not go away unless the internet as a medium is either removed or changed.
Your concerns are partially justified imo, though as others point out it’s a pretty variable phenomenon. You’ll probably feel better about the problem if you turn the concern into action of some kind. You can’t solve it alone, but you can do your part, and that will make you feel better.
The obvious starting point is learning about and promoting rationality and ideological pluralism, and generally working to expand that subculture. You could also find a specific project to work on. Take this cluster of ideas, for example.
Remember, you’re not the only one – lots of SSC readers (myself included) are concerned about it.
Focusing on getting facts and reasoning right is possibly helpful, though difficult. However it seems like differences in values (combined with some uncertainty over relative power level) can potentially lead to escalating conflict even if there is no major disagreement on facts.
I think that’s an insightful comment, but I think a solution can in some cases be found in mutually committing to a system that limits the escalation in some way. For example democracy does this by letting people argue but not physically fight the other party. Rationality can help people better design, stick to and enforce such agreements imho. In contrast irrational people may not even understand they are damaging those agreements and won’t care even if you try to explain it to them.
One of the more interesting metaphors I have heard about democracy is that it is a literal substitute for mob violence: you get your mob together, and I get my mob together. But all we do is count noses and then assume the person with the biggest mob would have won: that way we don’t have to actually rumble.
The writer also extended this to large non-violent protests. “Here is the mob I could use to wreak havoc. All I want you to do is count them. This time. Then imagine what will happen if you don’t give me what I want and next time they’re not non-violent.”
“Nice mob you have there. Did I mention that I’m rich; I have a Reaper in my garage and a team of Blackwater’s finest on staff?”
… which is exactly why there are plenty of things that the majority of the public supports, and yet, we still don’t have.
It’s a metaphor, not meant to be taken 100% literally. Obviously the mob with rifles can stand to have slightly fewer people in it than the mob with pitchforks and still win the hypothetical contest of “think about what happens if we decide to actually fight here”
>“Nice mob you have there. Did I mention that I’m rich; I have a Reaper in my garage and a team of Blackwater’s finest on staff?”
Is this an argument for qualified voting or for voting based on number of guns owned?
Only Tharks get credit for more than one weapon per voter. Human voters, however, can claim noncitizen armed retainers 🙂
Somewhat more seriously, several Swiss cantons at least until the 1970s limited the franchise to adult males who physically appeared to vote while bearing swords. Mostly a moot point in Switzerland, where every adult male has a decent rifle, and the contemporary US, where anyone who wants can get one in a few days. But tellingly symbolic of Marc Whipple’s take on democracy.
If there’s a distinct armed class, or if there’s a technological shift that limits effective fighting ability to the exceptionally rich (paging Tony Stark), it’s worth considering what happens when voting power and fighting power become decoupled.
If you want to know what happens as fighting de-democratizes, you can just run time backwards. Most of Europe introduced universal (ie, male) suffrage after WWI to reward the masses for their service (quite explicitly: see war widows in some countries). In other words, to acknowledge the new, democratic nature of war.
(This theory does not explain why women’s suffrage happened at about the same time. I think that the answer is that it had become sufficiently popular to be bundled with other constitutional change, but not enough to prompt change on its own. That explains why France and Switzerland, which already had universal male suffrage, did not extend female suffrage until rather later. America is harder to explain.)
But the symmetry is broken by loss aversion. Extending the franchise in response to democratizing changes in warfare is relatively easy and uncontroversial. Rescinding the franchise in the reverse case is likely to trigger a civil war. Not rescinding the franchise may just postpone the civil war.
I am skeptical of the idea that rationality is likely to lead to an improvement on this front. It seems just as likely to lead people to taking political values and outcomes incredibly seriously as a matter of life and death. Perhaps caring about rationality can lead one to assign an extremely high degree of certainty to one’s political beliefs, or something like that. And I think if you look at the people who comment here, and the broader group of similar Internet-y type people, there’s a lot of people who seem to have the kind of non-pluralist attitude we’re talking about. But this group also (I think) has a much greater than average segment of people who care a lot about rationality.
Of course this is not true about ideological pluralism, but that’s distinct from rationality & ideological pluralism is itself an ideological position.
I think rationality implies investigating a little epistemology and understanding how it is you know or come to know things. In my experience, thinking about epistemology makes you waaay less certain of ideology, and I think the result will often be ideological pluralism arising from a realisation that political topics are insanely complex – leading to a desire to thoroughly investigate before feeling certain.
I don’t disagree with your empirical observation, I just put it down to a bunch of people who wish to signal rationality more than achieve it. Perhaps I’m wrong. I do feel disappointed when I see that dismissive hostile political tone appear so often in rationalist communities (though no more often than almost everywhere else). LW, though really interesting, sometimes suffers from this, for example.
I also think you’re right about rationality leading to people taking politics more seriously, but if it was authentic rationality as opposed to signalling, I’m actually pretty comfortable with that. So a fierce effort to have a calm, careful, peaceful, logical, orderly, pluralistic political process where all non-fallacious arguments are heard and fairly considered because epistemological uncertainty demands it? Sign me up!
It’s great to hear that people are working on this. But I suspect a much harder and more important problem is to figure out how to improve internet discourse at the medium/low end. Even if the SSCsphere goes from above-average reasonableness to extremely high reasonableness or something like that, that doesn’t do anything to solve outragism in the remaining 99.99% of the population.
I don’t think that will happen. Both sides have a minority of assholes and these people love posting on the internet. But most people don’t mind hanging around with people whose political views they think are awful. I mean, I’m liberal as all hell (and not the classical kind!) and I wouldn’t stop hanging out with someone for being a Donald Trump-level racist or thinking gay people are sinners as long as they weren’t a dick about it. I don’t think I’m atypical in that regard.
You are extremely atypical in that regard; cutting off personal ties for not passing ideological purity tests is The New Big Thing To Do on the left.
Really? How very Ayn Rand of them.
Yeah, but the number of people who actually do it is pretty small.
My sister is as SJW as they come, but she has yet to actually cut ties with any of her patriarchal, conservative Christian extended family, nor even with me, the ultra-cynical reactionary.
Someone once compared some database technology or something with teenage sex: “Everyone says they’re doing it, everyone thinks everyone else is doing it, hardly anyone is actually doing it, those that are aren’t getting a lot of satisfaction”. I never thought I’d apply that to ostracism but there you go.
Big data.
I’ve derived two possibilities here:
1. The “ideological purity tests” you’re talking about are passed by not being a far-right white male supremacist. I can imagine your average liberal cutting ties with someone they’re not especially close with for being a neoreactionary, but this is very different from refusing to associate with your average Republican.
2. You’ve never actually interacted with a liberal in your life and your conception of the left is based entirely on what neoreactionary bloggers and Scott Alexander write about it.
For complicated reasons I’ve ended up hanging out with a lot of (left-) anarchists and other radical activists over the last few years. I am not a neoreactionary or even particularly right-wing, but I am a cynical ex-libertarian with a tech job, a distrust of totalizing ideologies, and an appreciation for incentive-centric policy, which by the standards of the kinds of people I’m talking about is about one step away from putting “Exalted Cyclops” on my letterhead and spending every Saturday at American Nazi Party rallies, with the ticket sales benefiting the Westboro Baptist Church.
There hasn’t been any drama to speak of, but I have seen a lot of… selective perception? It’s as if, having decided that I’m One Of The Good Ones, they assume I must therefore agree with any substantive policy goals regardless of appearances. I can say point-blank that I’m not an anarchist, and the response will be something like “that’s cool, you don’t need to label yourself” — then ten minutes later the conversation will be back to the tech scum destroying the city. Bring up an economic point and I’d be lucky to get that much.
The response might be different if I started spouting talking points, though. I don’t know, I’m not into that.
Mark Atwood:
It sounds like most people you know who are liberal, blue tribe, what have you, were born into red tribe families and converted later in life. Or perhaps as converts they want to be more catholic than the pope and stick out in your mind for wearing their blueness on their sleeves?
But whatever the reason, I’d think you’d agree that such “converts” don’t numerically make up the bulk of either the red or blue tribe.
Yeah, for every 1 person raised religious who becomes a hardcore atheist as a reaction against that, there are probably 10 or 100 people who grow up to be religious adults.
Red/Blue tribe affiliation is, of course, exactly the same.
Option 3: Every social group I have interacted with online, encompassing far more people than my immediate physical acquaintances, uses “liberal” as synonymous with “good” and “conservative” as synonymous with “evil”, brags about cutting off all conservatives from their lives, ascribes conservative views to everyone they dislike, fantasizes about telling off conservatives, and threatens to shun and ostracize people when they catch the faintest whiff of something they can squint at, choose to lie about, and call a conservative view.
I am not even a conservative. I just want people to shut up for two fucking seconds about how courageous they are for having political views that everyone is afraid to ever speak out against.
It’s a language thing. If you ever want to go to a liberal environment and get people to try and sidestep points, accuse their view of really being the conservative one (with a real argument to back it up). I do it plenty mainly because despite being a card carrying member of the Blues, I’m a social liberal in so much that I don’t think the government should be outlawing soda, peeking in on our data, or trying to move the Schelling Fence of speech. The first and last are definitely not settled issues in the Bluesphere.
>1. The “ideological purity tests” you’re talking about are passed by not being a far-right white male supremacist.
>2. You’ve never actually interacted with a liberal in your life
Disclaimer: This is exclusively an account of my experiences and personal observations. I am also not American, so not all things coincide perfectly.
First of all, I think we’d do good in separating “real life interaction” and “internet communities” (with the obvious caveat that these are fuzzy sets and blah blah etc etc).
In regards to real life, I’ve definitely seen it happen. Often enough that I’m pretty sure it’s not noise, not often enough to think it’s a majority. But it seems to be limited to very politically active people, which are overrepresented in my social groups due to reasons, and particularly prevalent in far left groups (again, overrepresented, reasons).
On the internet, I definitely see it far more often and over much more trivial differences. Some of my favourite (read: least favourite) phrases common in this kind of behaviour are “[being on] the right/wrong side of history” and “[person X, who holds a different opinion than me in at least one of several views including, but not limited to, the following: gay marriage, homosexuality in general, transsexuality, a variety of welfare programs, feminism, men’s rights activism] is objectively a bad/terrible person”. I’ve seen cases where moderation/administration not making it a bannable/reprimandable offense to speak in favour of conservative views, or enforcing standards of civility for people criticizing said views, triggered mass exoduses of liberals/leftists/blue tribers/whatever we’re calling them right now.
>are also very out and public about having broken ties with their family of origin for political differences
I don’t like to make the assumption that perhaps what’s happening is that you have some affinity for people who have really bad relationships with their families. But the idea that there are so many such people makes me kind of sad.
I’m not American, this may colour things.
I’m very left-of-centre, enough that I think the major left-wing party in the country I live is awful and populist and in most elections I first-preference an environmentalist/left-wing party because they’re the closest match to my values available (although I wish they were more sensible about nuclear power). I hang out with a lot of left-wing people, including some who are social-justicey.
I don’t recall it ever coming up in conversation that someone had shunned someone else for their right-wing political views. The only shunnings that I can think of right now that I’m aware of in my set of acquaintances were over someone regularly cheating while playing board games, and it was a “yeah you’re not welcome at games night” thing, not a “and we’ll tell everyone you’re awful” thing.
Are you sure this isn’t the same thing that leads left-wing commentators to model all libertarians as Objectivists?
>I don’t recall it ever coming up in conversation that someone had shunned someone else for their right-wing political views.
Just to be clear. What I mean isn’t active shunning, but rather cutting off all contact with people not in the party, up to and including breaking up with a long term partner, followed shortly by acquiring a new, ideologically aligned one.
The closest – and only – example I can come up with that matches that is a friend of mine breaking off contact after I suggested some of his opinions (congruent with a recent popular anti-social-justice movement) were conspiratorial. But in that scenario, I’m the one on the left.
It’s entirely possible I’m just blinded to the ills of my in-group, or that I don’t talk much to people about why they stopped talking to person X, but I really haven’t observed this pattern.
Hell, I’m not at home with social-justice positions in general and have argued about feminism with people in my circle closer to the SJ orthodoxy without being shunned, as far as I can tell. (For example: a local convention was petitioned to not have Adam Baldwin as a guest because politics, and I argued that that wasn’t okay).
I’m kind of with James here. Apparently I’m unique as an American or something, but almost everybody I know, either in real life or on the Internet, seems not to care or talk much about politics at all. They certainly don’t shun each other and the only person I’ve ever known who was openly broken from her family was an ex-Scientologist because they apparently require you to shun anyone who leaves the church.
Most people aren’t that involved in politics. New York City’s latest election had about 1.5 million people vote… out of a population of 8.5 million.
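For what it’s worth, the arithmetic behind that claim works out to under a fifth (using the round figures as stated in the comment; note the 8.5 million includes residents who aren’t eligible to vote, so turnout among eligible voters would be somewhat higher):

```python
# Quick sanity check of the NYC turnout figure quoted above.
# Figures are the comment's own rough numbers, not official statistics.
voters = 1.5e6       # approximate ballots cast
population = 8.5e6   # approximate total population (not eligible voters)

turnout = voters / population
print(f"{turnout:.1%}")  # about 17.6%
```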
I have the same problem with trying to figure out if “Gosh the world seems to be getting worse really fast” is a real effect or just a result of internet reading (or getting older). Talking to friends who spend less time on the internet is a good moderator / sanity check.
It’s hard to see what might be a catalyst for real organized civil war in any event. General social worsening until the barbarians sack us seems more likely.
Maybe it’s just the internet that’s getting worse really fast.
Check out this 2007 blog post where gender was discussed for the first time on Overcoming Bias. Astonishingly civil by modern standards, isn’t it? And that was just 8 years ago. Wonder what the next 8 years will bring the internet?
Internet media arguably had polarizing effects as early as the second Bush election, but Twitter, Tumblr, and the whole clickbait ecosystem are polarizing in ways that the most punditous of 2004-era pundits could never have imagined. In the former two cases, this is mostly due to some questionable architectural choices. The latter is a hellish concoction of malice, tribalism, condescension, ignorance, and plain old greed, but it too wouldn’t be able to survive if social media (Twitter again, and especially Facebook) wasn’t architected the way it is.
Fortunately, I suspect its days are numbered.
>Fortunately, I suspect its days are numbered.
Whose? Twitter’s, Facebook’s, Gawker’s?
Gawker’s.
Maybe Twitter’s, too, but that’s a lot more speculative — it’s not going to run out of users anytime soon, but I have my doubts about the commercial side of things, and its user experience is fiddly enough that if Twitter per se fails we can’t necessarily expect a competitor to pick up the slack. The mode of interaction that Facebook represents isn’t going anywhere, although Facebook itself may or may not.
The recent events at Gawker show that despite all the nastiness of late, most people really aren’t willing to sacrifice their principles to score points against the opposing tribe – and that was just one blog post. I don’t see this culture war turning violent anytime soon.
Software is getting worse, too. Not sure if that’s related.
Adobe Acrobat has been downhill since v8. Windows 7 was not a real improvement on Windows XP, and by all reports, Windows 8 was a step backwards. Flash has been getting worse since Adobe bought Macromedia, and is now basically useless. Office hasn’t had any real improvements since about 2003. iTunes broke at version 11. Java isn’t really worth installing anymore.
Linux seems to have been getting more fussy and less user-friendly since about 2010 or 2012. Perl 5 is still better than Perl 6, which still isn’t finished.
Aside from the architecture and the way people use it, Facebook is *terrible* – every time it changes, something works less well.
Win7 has some benefits over windows XP, mostly in terms of support for various things that XP didn’t support. Low-level stuff. They probably could have updated XP with those things, but *shrug*.
For example: eSATA hotplugging isn’t supported in XP.
Win8 is mostly horrible because it’s designed for phones.
Java was never worth installing. 😛
I spend some time arguing with gun nuts on the internet, and it can be really scary sometimes. A lot of these guys sign up for programs where they get to play commando on the weekend, have huge arsenals, and spend a lot of time hinting darkly about 2nd amendment remedies.
But at the end of the day it seems to be just talk. Middle aged exurban internet tough guys just aren’t a realistic kernel for a civil war. The real danger are the disturbed younger men who take them more seriously than they take themselves. They can’t start a civil war, but they can go out and kill 4-5 people.
What, in your opinion, can start a civil war? Clearly middle-aged internet tough guys can’t do it (no cannon fodder, no war), but civil wars do seem to start from time to time, and as you say, there are younger men who take the middle-aged gun nuts seriously. What are the factors that are present in the cases where civil wars have broken out that you think are missing in your country?
I’m not enough of a student of history to give a confident answer to that, but off the cuff I’d say the two biggest pathways are: 1) a military coup of some kind and 2) a potent combination of some sort of worsening conditions for large numbers of lower class people and a core group of committed elites to organize things.
That’s for a true civil war / revolution, separatism is its own thing.
Having little to lose. Wreck the economy, make them unemployed and halfway starving, and blame the other tribe for it.
>Having little to lose. Wreck the economy, make them unemployed and halfway starving, and blame the other tribe for it.
Might you unpack? If the whole comment refers to the 1%, then you’ve left out “foreclose their mortgages, buy their house, rent it back to them”.
Western societies are depressingly peaceful. The only threat of open mass violence comes from antagonistic groups, the underclass and the fifth columns of the caliphate.
Even the hardened keyboard commandos of the “right” are incapable of any coordinated action. They have too much to lose, for now.
“Depressingly peaceful” is a pretty sociopathic term.
If it’s meant as a value judgment, yes. But it could also be intended as descriptive, an extremely cynical hypothesis about human psychology.
Extremely cynical? Saying that when people cannot exercise major biological routines like raiding and looting some tribe, they get weird? That is just to be expected, really.
You’re just saying it’s true, which wouldn’t detract from its cynicism.
I don’t know if it is any sort of consolation for you, but the whole culture war is based on the lack of violence, and when violence goes up, the culture war will go down. I don’t want to write a long essay, but high violence essentially reinstalls the traditional, masculine man in his leadership role, because he is the best candidate to dish it out or to protect people from it. In other words, when violence goes up, survival values go up, and liberal ones go down.
The whole culture war started because everybody from women to gays said “I feel really safe now, so why should I let these masculine, dominant, fighter-type men still play a dominant role?” They no longer felt that submission and service to a high-powered patriarch was the only thing keeping them safe.
Yes, I did notice the new layout. Looks okay (and if that sounds like damning with faint praise, it’s not; it means I can’t find anything to complain about in the first ten minutes, unlike my usual “They changed it, now it’s terrible” attitude to new things).
There is a new movie coming out about the (in)famous Stanford Prison Experiment, and if the article is any way accurate (and bearing in mind that it’s a newspaper article, it may be getting everything drastically wrong), the experiment is perhaps not as reliable as it’s been made out. If it’s true or anywhere near it, it looks like the experiment was set up from the start to encourage the ‘guards’ to be controlling and abusive, and one at least of the participants deliberately set out to produce a particular outcome:
So some of the participants, at least, may have behaved less like “This is what happens when you put ordinary people in a position to abuse their authority” and more like “This is my idea of a prison guard from movies and pop culture”.
Opinions? Yes? No? The results are still worthwhile?
There have been at least two other recentish movies on this that I can think of. I’m surprised they’re doing another so soon.
Regarding the experiment’s validity: in normal circumstances, my response would be “more experimentation needed”, but obviously there’s a reason ethics committees don’t approve these kinds of experiments anymore O_o
People are influenced a lot by context, but people are also habitual and like to fall into familiar patterns of thought. If that means a movie character you’re familiar with, I guess that sounds plausible, though I wonder why this person would publicly say that? Anyway, IMO, it’s more about the circumstance enabling some aspect of your thought or personality that is normally inhibited, rather than instilling something that wasn’t there at all. So the experiment’s “message” always seemed to me to be kinda true but very overstated in that way.
If you get the chance to see him in video or audio, there’s something a little strange about Zimbardo’s own discussion about the experiment that I can’t quite put my finger on.
People are influenced a lot by context, but people are also habitual and like to fall into familiar patterns of thought. If that means a movie character you’re familiar with [….]
That makes sense to me. When set adrift, round off to being the nearest cliche.
I don’t know how accurate that article is (it’s a newspaper blurb for a new movie and giving a potted rundown of the events behind it, and if I measure it by how accurate or otherwise media get things I do know about, that is not very confidence-inspiring) or if the writer has a particular slant, but they do make it sound (at least to me) as if Zimbardo was less about “set up a situation where group A has power and group B does not and see what happens” and more “select people out into group A and group B based on particular traits, give group A power, give them handy suggestions how to (ab)use that power, set it up so the situation is abusive from the start, ignite blue touchpaper and retire”.
Eshelman sounds like (a) he’s trying to defend himself – ‘oh no I’m not really a sadist, I was acting a part!’ but also (b) like he was playing up to the kind of thing Zimbardo wanted in his guards.
I have no idea what the truth of the matter is, and it would be fascinating to re-create the experiment on better parameters with less built-in bias, but as you say, it’ll never pass any ethics committee.
I read Zimbardo’s book The Lucifer Effect… well, the part of it dealing with the SPE anyway. There was an odd contrast between the bits where he described what actually happened and the bits where he tried to draw conclusions from it; the latter bits didn’t seem to follow from the former. In particular there seems to have been quite a bit of variation in how the guards behaved, and “the situation” didn’t really explain why one guard in particular went in for lots of creative sadism whereas the others (except maybe for one or two) didn’t do nearly so much.
My psychologist said that the main use of the SPE these days is as an example of how to really mess up your ethics in as many different ways as possible. It would make a really good lesson in the sort of classroom format where the teacher asks, “Now, can anyone tell me another unethical thing about that experiment?” and you could get a good discussion going.
The SPE was a terrible experiment. In fact, it wasn’t really an “experiment” at all, more like performance art. Zimbardo was actually giving orders to the “guards” and significantly influencing events. Given his admitted political motives in doing the SPE in the first place, that makes the whole endeavor useless as science.
I’d call it a LARP session with an abusive GM.
I find the main column of the blog too wide — over 110 characters per line (I find more than 80 uncomfortable to read). OTOH, replies to replies to replies to comments are no longer unreasonably narrow. I guess a possible way to eat your cake and have it too is using a larger font in the posts than in the comments.
Does Patreon have an option to send money that does not involve a credit card, like the European Union’s direct debit, or PayPal? Their FAQ, confusingly, doesn’t say, and I’d rather not make an account just to find out.
Yes; my Patreon subscriptions are collected via PayPal.
I’m in love with the blogroll categorization. Was it via that language log article a couple of weeks ago?
It’s originally much older than that. I ran across it in Women, Fire, and Dangerous Things in the early 90s.
It’s from an essay by Borges.
https://en.wikipedia.org/wiki/Celestial_Emporium_of_Benevolent_Knowledge
Oh, I know. (Borges is my favourite writer and that’s probably my favourite essay of his.) I just wondered whether it was fresh in Scott’s mind from being mentioned in a Language Log post (to which he linked) a couple of weeks or so ago.
Akrasia is (like?) a demon. I’d like to hire someone to send it back to Hell.
Unfortunately, I am poor, and anyone who could manage this should value their time at at least an order of magnitude more than I could pay if I gave them all my money.
Yes, obvious advice is obvious: Food! Exercise! Therapy! Sleep! Pomodoros! Find Your Passion™! Socialize! Beeminder! Schedules! To-do lists! Meditation! SSRIs! Ritalin! Modafinil! Level-grind until you have something resembling agency!
Ok, but let’s say I’ve tried all that, and either they don’t work, Akrasia and/or crippling anxiety interfere, or something else gets in the way, and The Demon Lord of Sloth and Akrasia grows stronger by the day, as we approach the End of All Hope.
So, yeah, the only options I’m sure I have are whining on the internet, and spending an absolute maximum of $282.39 (being all my Paypal money, I am understandably reluctant to go that high on anything less than a guaranteed miracle).
If anyone can fix me hard enough that I eventually have a net positive cash flow, sure, I’d be willing to owe them $x000 (where x<=10) long term.
I’m posting this here because I can’t think of anywhere better, and there is something of a “put your money where your mouth is” culture in these parts. I am skeptical that anyone reading this can succeed at this task – I have read all the Akrasia-related articles on LessWrong, naturally, and they seem to require a much higher base level of conscientiousness to be the least bit usable. But if anyone can do this, I’d expect at least one person here to be able to find them.
Sorry for the mess.
“let’s say I’ve tried all that, and either they don’t work, Akrasia and/or crippling anxiety interfere, or something else gets in the way”
Are you certain you want what you think you want? Your list is pretty thorough.
Probably? To make matters more frustrating, I notice my preferences changing in ways I don’t like; if I am wrong about anything I want, then I want to self-modify accordingly.
Yes, but I feel I should reiterate that there are some that have not succeeded on the grounds that I can’t really try them, certainly not in any sustainable way. E.g., leaving my house is barrier enough, never mind all the other obstacles that make putting anything involving other people into practice difficult, and most things I could do on my own (diet/exercise/pomodoros/lists/etc) fall prey to Akrasia themselves.
Actually, the closest I’ve come to quantifying this mess suggests that pomodoros I can’t Akrasia out of, and some sort of decent social contact probably have the best chance of helping. But the former requires finding some way to make it too costly to cheat, and the latter requires* divine intervention.
* “Requires” might be a strong word. But, realistically, I am terrible at people, I can’t travel all that easily, meetup.com gives me nothing more interesting than a board game group that’s not even in the same state, and seeking out people solely as medication rather than because I actually like them turns it into a chore, on top of everything else.
Preferably, I would have the power to do things by choice – and if not all things, then the things in the list that supposedly help. Absent that luxury, I’m reliant on things being convenient, and that’s just terrible.
What is it that you want to do that akrasia is stopping you from doing?
What I currently do: pace, sit, lie down, read SSC/LW/Facebook/Twitter/ToT/related subreddits/the audiogames.net forum, and occasionally play video games/watch videos on youtube/read fanfiction/try to write or code or learn something and fail / occasionally go to my parents’ on Saturdays or Sundays / watch my parents’ pets when they’re out of town. Add in survival things like eat and sleep, and that is literally all I do.
What I want to do: make things (physical things, software, fiction, music, whatever), learn things (math, written/spoken languages, sciency-type things, etc), apply the skills I’m inexplicably unable to apply (But that more or less is covered in the other bits), figure out how to do extremely useful things like leave/clean/improve my house… do something useful… I could go on but then I’d have to get more specific or mention things I’m less confident about sharing.
If you want to work on a project and continually find yourself distracted by other tasks, and if the project is one you could work on outside your home (e.g., reading, learning math), you can try to find some place outside of your home where you do not have the distractions of Internet, etc. For example: a university library, a cafe.
I’m an academic and I like to work from home for convenience, but often I need to go into school and work in the library to keep myself from being distracted.
A more drastic measure is to get rid of the things in your house that are sucking up your time. For example: disconnect your Internet, get rid of your video game systems. Go to a cafe or library when you need to use the Internet.
A less extreme version of this is to compartmentalize different places in your house for different tasks (works better if you have a big house). Just being in a different room from my computer makes me more productive at reading/working through a text, if I have an established work-space there.
To me, your “What I currently do” list reads as “What I want to do” and your “What I want to do” list reads as “What I want to want to do”. Perhaps you should focus on why exactly you want to want those things, instead of assuming you want to do them. All of the things you claim you want to do seem to be abstract things intended to solve the problem of your being unsatisfied with what you currently do. If you could break down why exactly you want to want to do “productive” things, then perhaps you could decide if you really want to do those specific things, or if having an abstract, unsatisfiable goal is even worthwhile.
The problem is that there are long term and short term wants.
I’ll use myself as an example, since I don’t know a lot about CAE_Jones. I want (as in, that is what I do, absent someone pushing me around to do something else) to laze off Reading SSC and playing videogames all day while eating good food. While that might be possible for me, it’s highly unlikely, and straight up impossible for a lot of people. So, in order to laze off and play videogames and eat good food, I need to perform the job (that I tricked myself into going through college to get qualifications for) competently, for which I need to not laze off for an average of 3 hours a day, which is very hard, because I really want to laze off.
Another problem is that, while I find that what I do is pretty satisfying, I find that when I’m pushed to do something else, I usually find a higher level of satisfaction in it.
What I sometimes call this is having a really, really high time preference. But I don’t like to call it that, because it would be essentially conceding that it is immutable. I suspect CAE_Jones might be in a similar situation.
I’ve gotten a little improvement through observation– what’s it like when I’m stuck? What’s it like when I’m doing things? Also, I think it helps to notice that I’ve done something, and I haven’t been struck by lightning.
It’s conceivable that I’m dealing with anxiety-flavored depression, and just plain depression would need different solutions.
I’d like to help! I’m dreeves@beeminder.com if that’s easier. Of course I have an ulterior motive which is to understand what went wrong when you tried Beeminder.
My two suggestions:
Use Cold Turkey or similar to block distracting websites and applications.
Ensure that the things you are trying to do are things you want to do. There is an important difference between really enjoying learning maths (for instance) whilst doing it, but being unable to get started, and not particularly enjoying it at all. Most people who are really good at things get that way not by supreme amounts of willpower that lets them practice a lot, but by enjoying the thing so much that practice isn’t a grind at all.
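On the first suggestion: tools like Cold Turkey automate something you can approximate by hand with the operating system’s hosts file, which redirects chosen domains to your own machine so they won’t load. A minimal sketch of the idea follows; the domain list is just an illustration, and actually appending the lines to `/etc/hosts` (or `C:\Windows\System32\drivers\etc\hosts`) requires admin rights, so this only builds the entries:

```python
# Sketch of the hosts-file trick dedicated blockers automate:
# map each distracting domain to 127.0.0.1 so the browser can't reach it.
# The list below is illustrative, not a recommendation.

DISTRACTIONS = [
    "twitter.com", "www.twitter.com",
    "facebook.com", "www.facebook.com",
]

def hosts_entries(domains):
    """Return hosts-file lines that send each domain to localhost."""
    return [f"127.0.0.1 {d}" for d in domains]

for line in hosts_entries(DISTRACTIONS):
    print(line)
```

The obvious weakness, which the commenter anticipates, is that anything you can set up yourself you can also undo yourself; dedicated blockers mostly add friction to that undoing step.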
Moving to a new place can be a valuable context change.
Part of the issue here is that most of the competent people who are best positioned to fix you are busy doing other stuff like running companies, doing cutting edge research, or winning scholarships. Your best bet might be competent people who decided to specialize in developing others, e.g. Malcolm Ocean. Offer yourself as a hands-on case study for them.
What does Angels-And-Clockwork mean?! You used it in a blog post as if you expected the reader to understand it, but I didn’t, and google picked up way too much noise. Also, a blog post describing what the phrase Object-level means in the context of Object-Level errors would probably help a lot of people who aren’t as used to the vocabulary of the LW community as you are.
The blog post in question is The Virtue Of Silence.
Suggested Dungeons and Discourse artifact: armor that will protect your life and liberty against those whose notions of morality would restrict private conduct they find strange. Called the Fursuit of Happiness.
Dr Scott, do you know anything about Trazodone?
Just prescribed some. Good sleeping medication. Not so good as an antidepressant. Occasionally causes extremely prolonged erection which is MUCH less fun than it sounds.
Priapism probably not a problem for my wife. Is it helpful for anxiety? Prescribed to help sleep in the face of anxiety.
To be clear, I mean that my wife probably won’t get a priapism. The meds aren’t for me.
I did read the other implication for a moment there, but figured it was just me with a dirty mind.
@onyomi:
Me too.
I hate to distress you, Thomas, but women can get priapism of the clitoris (which, looking up Wikipedia, is a side-effect of Trazodone in women).
And verging on too much information – let’s just say that even without medication, due to hormonal changes in their life-stages, women too can get persistent and uncomfortable constant sexual arousal. Scott is right: this is much less fun than it sounds.
Does it result in clitoral necrosis leading to occasional gangrene, as penile priapism does? I had the impression that clitoral erections were not as rigid as penile erections, and so were not likely to result in permanent damage.
And yeah, iku iku byo is already internet-famous, although I hadn’t heard it was caused by menopause.
PS thank you for answering. Is it bad I trust you over my actual physicians?
My favorite priapism video of all time. No actual priapism footage. I’m sure you can find that if you look.
Have been reading Jonathan Haidt’s book on the “Righteous Mind” recently (has this already been discussed here?), and thinking about the sources of morality.
Haidt seems to be a species of intuitionist, but not a rational intuitionist like Huemer, but more of a feelings-based intuitionist: that is, morality starts with biology based feelings like disgust (evolved to keep us from eating rotten food, etc.) and caring (evolved to make sure we take care of kids), and then we add a layer of reasoning on top of that to justify what we already feel.
This view is persuasive and seems to me to present problems for both utilitarianism and rational intuitionism.
Re. utilitarianism, Haidt points out that the only people who don’t have some of the basic moral feelings like caring and empathy are psychopaths, and they are notoriously bad ethical reasoners. Yet strict, rational utilitarianism seems as if it would predict (behind a veil of ignorance) that psychopaths would be the *best* ethical reasoners, because their judgment is least clouded by emotion.
Re. rational intuitionism, I still find the pure evolutionary explanation of morality to be weak, because there are cases when we may clearly override our feelings of disgust, etc. in order to make a more rational ethical judgment: maybe homosexual sex feels a little “icky” to me, but I can override that to reason that that’s no good reason for discrimination (though maybe that also comes from a separate feeling about kindness and fairness). This being the case, maybe reason is the “higher order” ethical calculator which may balance and weigh the raw emotional data provided by the gut-level intuition?
But given that books on ethics are, apparently, more likely to be stolen from libraries than other books, this seems to point to the idea that learning to reason about ethics may not make you more ethical, but may only make you better at rationalizing what you already want to do. In which case, trying to be more rational about ethics may be a dangerous endeavor, since it just makes us more psychopath-like?
I haven’t read the book, just absorbed some of it through (sub)cultural osmosis, so I’d like to ask: Are these connections between moral psychology and ethical theorising yours, or does he venture in that direction in the book?
About the psychopaths, that may not be a very strong argument. It may be very cognitively effortful to account for other people’s utility, which is unnatural to them, and maybe they also just can’t be bothered to reason about this because it’s profoundly uninteresting to them. (Disclaimer: I don’t know what data there are about moral reasoning of psychopaths.)
To add onto this; why would a psychopath care about the welfare of others? They’re generally not going to see the long term effects of constant defects if they are smart about it, so why not use the caring nature of everyone else (which is what makes us more likely to work together over defect) against us?
In effect, when one side isn’t purely rational the system we know as the classic PD breaks down.
He does go that way in the book, though he seems to take a somewhat dimmer view of the importance of reasoning than I do. I’m not sure whether or not he’d classify himself as a “moral realist,” as I am inclined to do myself.
It’s been a while since I read “The Righteous Mind”, but as I remember, Haidt (somewhat annoyingly) uses the term “morality” to mean “people’s beliefs about morality”.
I think his studies are just documenting how people act morally, not how they should act. It’s still relevant, though, as it shows places of friction. Of the six foundations:
Care/harm: cherishing and protecting others.
Fairness/cheating: rendering justice according to shared rules. (Alternate name: Proportionality)
Liberty/oppression: the loathing of tyranny.
Loyalty/betrayal: standing with your group, family, nation. (Alternate name: Ingroup)
Authority/subversion: obeying tradition and legitimate authority. (Alternate name: Respect.)
Sanctity/degradation: abhorrence for disgusting things, foods, actions. (Alternate name: Purity.)
The only one consequentialists generally care about is care/harm; they only occasionally use the others as second-order concerns and heuristics. I really struggle with people who use the others as an actual moral root.
Thanks for listing these here for me.
I think one of his big meta points is that the post-enlightenment West has focused very heavily on care/harm and liberty/oppression, while increasingly ignoring the other parameters, but that if one travels to say, India, one will find that parameters like authority and sanctity still hold very strong sway.
One of his other points is that conservatives in the US are better at understanding liberals than the reverse (and this is revealed by actual surveys), probably because conservatives tend to draw on a wider range of these foundations (conservatives have stronger disgust reactions I’ve read here, for example), but liberals mistakenly believe that conservatives do not care about what they care about, when in fact, they simply value other things as much or more.
Generally, I’d say US liberals care most about care/harm: they want to help the poor, etc. and don’t care that much about whether or not the poor “deserve” to be poor, or, in some cases, whether it will require curtailing the liberty or undermining the authority of more responsible people less in need of care.
Conservatives DO actually care about harm, but they also care a lot about fairness (people should have to work for their benefits), liberty (keep government out of x), loyalty (nationalism, support our troops), authority (pro-military, pro-police, pro-traditional institutions), and sanctity (anti-abortion, anti-gay marriage, etc.)
Libertarians like me are apparently a hybrid of the two: we care about liberty to a degree even greater than conservatives, but, unlike conservatives, we usually don’t care much about loyalty, authority, and sanctity. Haidt also claims libertarians score low on measures of empathy, which may be true, but I would not say we don’t care about care/harm. In fact, for many libertarians, harm is the only justifiable reason to pass a law. Or to put it more accurately, we are super live-and-let-live. We may not feel much responsibility to help you, but we’ll fight hard to make sure no one *harms* you.
The thing that most obviously marks someone as a libertarian to me is an unwillingness to solve coordination problems with government – that is, situations where many decisions with very small harms aggregate to a large harm are not something that most libertarians fight against.
CFCs and leaded petrol seem like moderately nonpolitical examples of that type of problem to me. As far as I’m aware, most libertarians aren’t in favour of the Montreal Protocol or government regulations that forbid leaded petrol.
Usually the argument I see is that these situations will resolve themselves via people making a principled decision not to buy leaded petrol, or in extreme cases that the problem will be resolved via people who get skin cancer suing emitters of CFCs. Or, to be uncharitable, claims that it’s not real and CFCs can’t get into the stratosphere / we don’t need ozone anyway / sunscreen technology will compensate.
I think it’s simpler than that. Libertarians just don’t like government, period. Some view it as a necessary evil, others as an unnecessary evil.
There’s the consequentialist version of “don’t like government,” the argument that the decision making mechanism of the political system is more prone to produce wrong decisions than the mechanism of the market, hence that shifting decisions from the latter to the former will, on average, produce worse decisions.
That doesn’t require believing that the market doesn’t produce bad outcomes, only that the alternative does.
Progressives apparently not caring about loyalty, authority, and purity is an artifact of the questions asked. The dominant narrative is that progressives only care about harm and fairness, but that’s only because the cases that trigger the other foundations for them are non-archetypal. For example, it’s sometimes said that the right moralizes sex and the left moralizes food (see all the purity about organic/non-GMO/vegetarian/gluten-free/etc) – though the SJ left moralizes sex too.
If anything, not caring about loyalty/authority/purity is a contrarian trait rather than a progressive one. When I lived in the South, the locally dominant conservatives used all of the foundations and the progressive minority used only harm/fairness, as Haidt would predict. But when I went to study at a liberal arts college, it was the locally dominant progressives who used all of the foundations and the conservatives who only used harm/fairness. Libertarians are disproportionately likely to be contrarians, so they care less about loyalty/authority/purity.
Locally dominant cultures exhibit a sort of unconscious “morality creep”, expanding their core moral concerns into ever-wider spheres, e.g. the SJ left moralizing sex without seeming to realize they’re so doing. The end result, in the absence of significant internal or external challenge, is the establishment of a morally-correct opinion about nearly everything. This famously happens in cults and other ideologically closed movements.
Another, unrelated issue in the Haidt book: he makes what I consider an interesting observation that countries like Sweden with big welfare states are not truly “group focused” in the way communism or true socialism is, but rather very focused on providing a certain standard of living for the individual. This tends to divide governments into individual-focused (all liberal democracies) and group-focused (including communist, fascist, and hyper-nationalist and/or theocratic states).
I’ve long thought that it was weird to put communism and fascism on opposite ends of the spectrum, and this seems to support that. On the other hand, it may also mean that we libertarians need not worry that Sweden-like welfarism is a slippery slope to North Korea, because, while we may think it’s a bad idea, it is fundamentally a different sort of philosophy than that which produced North Korea.
Allow me to be the first to congratulate you on the Celestial Blogroll of Benevolent Knowledge, which is the best blogroll categorization that I’ve ever seen. I literally LOL’d reading which blogs had been assigned to which categories.
I have a question about Effective Altruism and its interaction with finance/currency.
On the one hand, if I tip my barber $20, that is grossly inefficient because African Children need the money more. But, on the other hand, the money is just an accounting entry and exchanging it doesn’t create or destroy any real resources. Overpaying for my haircut simply reallocates some of my economic power to the barber. Is it inefficient because he is maybe less likely to use that power to help African Children? In a world where we all cared equally about doing good, there would appear to be no harm in this kind of transfer.
I guess the point is that what appears to be wasteful spending doesn’t necessarily lead to wasteful allocation of society’s resources because dollars are not “used up”, just exchanged. Is there a subtle point that “wasteful” spending patterns will lead to societal investment decisions that really do reallocate real economic resources in bad ways?
Even if the barber is as likely as you to use the economic power to help African Children, giving it to him delays it being used in that way. You would prefer $10 now to $10 later (barring a large amount of deflation), and so would the African Children.
Disclaimer: I’m not quite EA, as my charity budget isn’t fully aligned with EA principles. Also, this is off the top of my head, so I’m not a spokesperson.
In a world where everyone is a perfect utilitarian saint, then giving $20 to the barber or to African Children will have the same effect. You still have to get the money to the African Children somehow; in the hypothetical, if you’re spending all your spare cash on tips, then so’s the barber and everyone the barber is interacting with economically and none of it ends up going to the African Children. Also, if everyone’s giving all their spare cash to the maximum marginal utility cause, then ex hypothesi you aren’t tipping the barber.
Except that a) there are no (or hardly any) perfect utilitarian saints, and b) the EA movement doesn’t aim at sainthood. Various bits of the movement use the 10% figure, and I’ve heard lower figures kicking around, but let’s say 10% and suppose everyone adheres to it. Then the immediate EA value of donating the $20 is $20, whereas if you tip the barber the value is only $2.
Now the barber might spend the remaining $18 on tips to waitresses etc. and so $2 + $1.80 + $1.62 + … adds up to $20. What gives? Maybe the barber spends the money on something else, but it all ends up as income for people anyway. Eventually. Delays mean losses to inflation, also you can consider interest.
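The recirculation arithmetic above is just a geometric series with ratio 0.9. A quick sketch (hypothetical simplification: everyone donates 10% of each dollar they receive and passes the remaining 90% along as someone else’s income, ignoring delay, inflation, and interest):

```python
# A $20 tip generates donations of 2 + 1.80 + 1.62 + ... as it
# recirculates; the series converges to the original $20.

def total_donated(initial, donation_rate=0.10, rounds=200):
    """Sum the donations generated as the money recirculates."""
    total = 0.0
    remaining = initial
    for _ in range(rounds):
        total += remaining * donation_rate   # this recipient's donation
        remaining *= (1 - donation_rate)     # what gets passed along
    return total

print(round(total_donated(20.0), 2))  # 20.0
```

The closed form is initial × r / (1 − (1 − r)) = initial, which is why the stream of $2, $1.80, $1.62… eventually adds up to the full $20 — eventually being the operative word.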
I don’t think this is the whole story though. Money sent to African Children also ends up as income for people, so by that logic the African Children get $20 directly plus another $20 from the recirculation effects. Except if they’re getting another $20 then they’re going to get another $20 from that, and another, until people decide that other causes are more efficient now. So – I’m out of my depth here – it looks like the African Children are getting a stream of money, the key thing is the rate at which they get money (remember you can use interest rates to convert back-and-forth between stocks and streams). Also, if you’re considering people’s charity habits, then you’re thinking about streams rather than stocks anyway.
Now if I understood economics better I could say more here, it seems like you could get more money – that is, a faster-flowing stream of money – to African Children by getting money to circulate faster. However I seem to recall something about a higher velocity of money being (or leading to) inflation, so maybe that just gets more zeroes to African Children without any corresponding increase in the provision of goods and services.
My inner uncharitable person says: “you know, this sort-of answers the question but you do realise you’ve fallen into a trap. The question is basically a filibuster; people can ask endless questions about theoretical economics more and more remote from the actual matter at hand, with the answers getting more and more tenuous, and not only can you never fully satisfy them with each attempt at explanation you run a greater and greater risk of making a mistake and looking foolish”.
I don’t think this is really an angels-on-a-pin theoretical issue. I think it’s a basic issue relating to the misunderstandings we get when we use the household budget as an analogy for macroeconomics.
Like… African Children are not aided by dollars per se, they are aided by flows of goods and services from outside, which they are able to command (sort of) with dollars. Circulating the dollars within “Africa” is not going to double and triple the economic capacity of Africa as your assumptions maybe imply.
I don’t have any great conclusions, which is why my comment was phrased as a question. But one obvious danger is that you will get “pushing on a string” effects where there is a relatively inelastic supply of tradeable goods flowing into “Africa” and more aid flows serve mostly to just increase their prices. Then… the best use of charitable resources would be to invest to build tradeable good capacity rather than trying to subsidize consumption.
I guess the meta-question is “does EA think about these issues in a serious way?”
PS – I totally get your uncharitable “this is a filibuster” response, and I 100% acknowledge that there is a nonrational temptation to filibuster oneself when considering applying EA in one’s life. BUT: I think this discussion is important enough that it defeats the safeguards against that bias. Consider that at the end of the day investments in Western innovation really really have done orders of magnitude more for the world’s poor than investments in aid. It is nuts to be too confident about how to do good on a global basis without being able to account for that.
Sure, circulating more money in Africa is not going to do anything, but you are outside Africa. You send the money in and it comes out in return for more goods. Sending money into Africa really results in more goods entering Africa.
To put it another way, by giving less money to the barber, you discourage people from being barbers and encourage them to work in the sectors you do spend money on, like making insecticides or bednets. Or whatever Africans buy, if you directly send money.
Do you doubt this? When you talk of “tradeable good capacity” it sounds like you think that the cash gets trapped in Africa and just inflates prices. If there were a bottleneck that the ports were too small, then expanding them would be valuable, but would probably also be a pretty direct effect of sending money.
Ultimately, it would be good to make Africans more productive. (Is this what you mean by “tradeable good capacity”?) But this is a really hard problem. Maybe Givewell doesn’t spend enough time analyzing the effects of building schools, compared to distributing bednets. But preventing malaria results in smarter and more energetic people than not preventing malaria, and directly contributes to productivity.
The very stylized model that illustrates my point is:
Option 1: There is a single bednet factory in the world, operated for profit. The factory is already outputting at full capacity of C nets/year. The outside world sends $X to Africa, and the factory charges Africa X/C per net. If X becomes large, the factory will earn excess profits in the short run, but new entrants will be attracted to the net business, and the resulting extra capacity and lower prices will benefit Africa.
Option 2: Instead of raising X, spend that money directly on constructing a net factory. You are likely to achieve the same result with much less leakage to the “excess profits” account.
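The two options can be sketched numerically (all figures hypothetical, chosen only to illustrate the mechanism; in the model, supply is perfectly inelastic in the short run, so the market-clearing price is X/C):

```python
# Stylized model: a single factory with fixed capacity C sells its whole
# output to aid-funded buyers with budget X, so the price per net is X/C.

def nets_delivered(aid_dollars, capacity):
    """With inelastic supply, extra aid raises the price, not the quantity."""
    price = aid_dollars / capacity
    return capacity, price

# Option 1: double the aid budget; the same number of nets arrive,
# just at twice the price (the difference is "excess profits").
nets1, price1 = nets_delivered(2_000_000, capacity=100_000)

# Option 2: keep aid at $1M and spend the other $1M expanding capacity.
nets2, price2 = nets_delivered(1_000_000, capacity=150_000)

print(nets1, round(price1, 2))  # 100000 20.0
print(nets2, round(price2, 2))  # 150000 6.67
```

The short-run contrast is the point: until new entrants show up, Option 1 leaks the marginal dollars into the excess-profits account, while Option 2 puts them directly into capacity.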
” Is there a subtle point that “wasteful” spending patterns will lead to societal investment decisions that really do reallocate real economic resources in bad ways?”
Yes.
Payments are both transfers and signals. Suppose the normal price for a haircut is $10. At that price, there are just enough people willing to cut hair to cut the amount that customers, at that price, want cut. The social value of your barber cutting your hair is only $10, since if he didn’t, someone else would be happy to do it for you at that price.
You pay him $30, the $10 price plus a $20 tip. He was considering switching to a different job because, at $10/haircut, the alternative job paid him a little more for his time. He decides not to, because he assumes that you will give him similar tips in the future, and that makes cutting hair more attractive than the alternative job.
You can work the same argument out on a variety of other margins, such as whether someone decides to cut hair, how many hours the barber works, and the like. You are putting in a false signal, hence (very slightly) distorting the incentives that, undistorted, produce (in some sense) an optimal outcome.
The only thing I would say to this is that if people are actually paying their barbers $30 when the published price is $10, then the consumer clearly feels he received $30 of value and is sending a more accurate price signal than he would be by paying the published price. He’s effectively just transferring his consumer surplus. In a world without tips, assuming people still valued having their hair cut by others and not having to do it themselves, something between the high and low current payments would become the published price, which wouldn’t necessarily change production levels. It would just mean everyone would pay $20 instead of half the people paying $10 and half paying $30, and it would become much easier for a barber to tell in advance if becoming a barber would be worth it.
I was considering a case where there were alternative suppliers with an opportunity cost of $10. Switching the job from someone whose opportunity cost is $10 to someone whose opportunity cost is $20 is a net loss of $10.
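The argument above reduces to a small arithmetic comparison (figures taken from the thread’s own example; the labels are my own):

```python
# The haircut's social value is the $10 an alternative supplier would
# accept. The tip makes the job look worth $30 to the current barber,
# so he stays even though his next-best job would pay him $20.

social_value = 10             # price an alternative barber would accept
barber_opportunity_cost = 20  # what the current barber could earn elsewhere
perceived_pay = 30            # $10 price + $20 tip

# The barber stays because perceived pay beats his alternative...
assert perceived_pay > barber_opportunity_cost
# ...but society forgoes $20 of his time to produce $10 of value:
net_loss = barber_opportunity_cost - social_value
print(net_loss)  # 10
```

That $10 is the deadweight loss from the distorted signal: the work stays with someone whose time is worth $20 elsewhere when someone with a $10 opportunity cost would have done it.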
I take it you have never worked a job where you received tips as a substantial portion of your income.