Guys, I think Thomas Schelling might be alive and working in a Kentucky police department: Police offer anonymous form for drug dealers to snitch on their competitors.
Journalists admitting they’re wrong is always to be celebrated, so here’s Chris Cilizza: Oh Boy Was I Wrong About Donald Trump. He says he thought Trump could never sustain high poll numbers because his favorability/unfavorability ratings were too low, but now his favorability/unfavorability ratings have gone way up. But remember that favorability might not matter much.
Speaking of Trump – Why Securing The Border Might Mean More Undocumented Immigrants (h/t Alas, A Blog). Related: A Richer Africa Will Mean More, Not Fewer, Immigrants To Europe. So, if I’m reading this right, the best way to minimize illegal immigration is to have long, totally unsecured borders with desperately poor countries. Sounds like a plan! 😛
No, conservatives don’t like the Iran deal, but before you get bogged down in the debate note that they have been against pretty much every deal with hostile foreign countries regardless of the terms.
Study The Long Run Impact of Bombing Vietnam investigates whether areas in Vietnam that suffered “the most intense episode of bombing in human history” during the war are still poorer today. They find that no, areas heavily bombed by the US are at least as rich and maybe even richer than areas that escaped attack. They try to adjust for the possibility that the US predominantly bombed richer areas, but that doesn’t seem to be what caused the effect. Their theory is that maybe the Vietnamese government invested more heavily in more thoroughly destroyed areas. More evidence that compound interest is the least powerful force in the universe?
Luke Muehlhauser, working with GiveWell, has come to a preliminary conclusion that low-carb diets probably aren’t that helpful. Given that Luke, Romeo Stevens, and I have all said we’re not too impressed by low-carb, can this be declared Official Rationalist Consensus?
A lot of people on my Facebook have asked why Black Lives Matter protesters are disrupting Bernie Sanders but not Hillary Clinton. Answer is: they tried to disrupt Hillary, but she has security. I feel like this is an Important Metaphor For Something.
There’s a lot of heartbreak and emotion in this New York Times piece, but the part that really stands out for me is that Oliver Sacks and Robert Aumann are cousins. This sort of thing seems to happen way more often than chance, and I shouldn’t really be able to blame genetics either since cousins only share 12.5% of genes.
In 2000, the medical community increased their standards for large trials, requiring preregistration and data transparency. Now a review looks at the effects of the change. They find that prior to the changes, 57% of published results were positive; afterwards, only 8% were. Keep this in mind when you’re reading findings from fields that haven’t done this yet.
The FDA rejected flibanserin, a drug to increase female libido, as ineffective and unsafe. The pharmaceutical company involved got feminists to call the FDA sexist for rejecting a drug that might help women (NYT, Slate) and the FDA agreed to reconsider. But now asexuals are mobilizing against the drug, saying that it pathologizes asexuality. I look forward to a glorious future when all drug approval decisions are made through fights between competing identity groups.
Stuart Ritchie finds that we have reached Peak Social Priming. A new psychology paper suggests that there was an increase in divorce after the Sichuan earthquake because the shaking primed people’s ideas of instability and breakdown, then goes on to show the same effect in the lab. Even the name is bizarre: Relational Consequences of Experiencing Physical Instability. Despite the total lack of earthquakes in Michigan to prime me, I still feel like this finding is on shaky ground.
The most important Twitter hashtag of our lifetimes: #AddLasersToPaleoArt.
I’d like to hear more people’s opinions on this: Jayman links me to a post of his where he argues against the third law of behavior genetics (most traits are 50-50 genetic/environmental), saying they are often more like 75% genetic, 25% environmental. He argues that the 50-50 formulation ignores measurement error, which shows up as “environmental” on twin studies. As support for his hypothesis, he shows that the Big Five Personality Traits, usually considered about 30-40% genetic in studies where personality is measured by self-report, shoot up to 85% or so genetic in studies where personality is an average of self-report and other-report. Very curious what commenters make of this.
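The measurement-error argument can be sketched with a toy calculation (all the numbers below are mine, purely for illustration, not from Jayman’s post): in the classical model, unreliable measurement variance gets counted as “environment,” so the heritability a twin study sees is roughly the true heritability times the measure’s reliability, and averaging multiple reports raises reliability per the Spearman–Brown formula.

```python
def spearman_brown(r, k):
    """Reliability of the average of k parallel measures, each with reliability r
    (Spearman-Brown prophecy formula)."""
    return k * r / (1 + (k - 1) * r)

def observed_h2(true_h2, reliability):
    """Classical attenuation: unreliable variance is counted as 'environment',
    so observed heritability is roughly true_h2 * reliability."""
    return true_h2 * reliability

# Hypothetical numbers: suppose a trait is truly 80% heritable but a
# single self-report scale has reliability 0.5.
print(observed_h2(0.80, 0.5))                      # 0.4 -- reads as "40% genetic"
print(observed_h2(0.80, spearman_brown(0.5, 2)))   # ~0.53 -- averaging two reports helps
```

Caveat: self-report and other-report aren’t really parallel measures of the same thing, so Spearman–Brown is only a rough guide here; the point is just the direction of the effect.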
Brainwashing children can sometimes persist long-term, as long as you’ve got the whole society working on it. A new study finds that Germans who grew up in the 1930s are much more likely to hold anti-Semitic views even today than Germans who are older or younger, suggesting that Nazi anti-Semitic indoctrination could be effective and lasting. A contradictory, more optimistic interpretation: in no generation were more than about 10% of Germans anti-Semitic, so the indoctrination couldn’t have worked that well.
The Catholic blogosphere is talking about how fetal microchimeralism justifies the Assumption of the Virgin Mary or something.
A new meta-analysis finds that the paleo diet is beneficial in metabolic syndrome and helps with blood pressure, lipids, waist circumference, etc. It seems to have outperformed “guideline-based control diets”, although I can’t get the full text and so can’t be sure exactly what these were – and one of the easiest ways to get a positive nutrition study is to use a crappy control diet. But if that pans out, all the people talking about how the paleo diet has no evidence will have egg on their faces (YES I JUST USED AN EGG PUN AND A PAN PUN IN A SENTENCE ABOUT THE PALEO DIET). And here’s an interview with the authors.
A subreddit of words that are hard to translate. For example, the Polish “załatwię to” means roughly “it will be done, but don’t ask how.”
Study discovers dramatic cross-cultural differences in babies’ sitting abilities; African infants seem to be able to sit much earlier and much longer than Western ones. Possible reasonable explanation: we coddle our babies and keep supporting them when they could perfectly well learn to sit on their own if we let them.
A while back I made an extended joke comparing gravitational weight and moral weight. Well, surprise, surprise, somebody did a social priming study showing that they were in fact related. Now the inevitable negative replication is in.
Archaic Disease of The Week: Eel Thing
Since we’ve been discussing coming up with numbers to estimate AI risk lately, try the Global Priorities Project’s AI Safety Tool. It asks you for your probabilities of a couple of related things, then estimates the chance that adding an extra researcher to AI risk will prevent an existential catastrophe.
Reason article on how a chain of New York charter schools catering to poor minority students manages to vastly outperform public schools, including the ones in ritzy majority-white areas. Wikipedia appears to confirm. My usual suspicion in these cases is that it’s selection bias; the “poor minorities” thing sort of throws a spanner in that, but here is a blogger suggesting they use attrition rather than selection per se, and here is someone else arguing against that blogger. And here is a charter school opponent saying this chain is mean and violates our liberal values, which I am totally prepared to believe.
The latest in this blog’s continuing coverage of weird Amazon erotica which totally really exists: I Don’t Care if My Best Friend’s Mom is a Sasquatch, She’s Hot and I’m Taking a Shower With Her
Cognitive behavioral therapy can cut criminal offending in half – this study should be read beside Chris Blattman’s work showing similar effects in Africa. I am usually skeptical of large effects from social interventions, but after thinking about it, CBT is at least more credible than poster campaigns or something – it’s the sort of thing that in theory can genuinely have a long-term effect on people’s thought processes. If this is even slightly true then of course we should teach CBT in elementary schools. Maybe those New York charter schools will go for it.
I should probably link to this study “showing” “that” a “low-fat” “diet” is “better” than a “low-carb” “diet”, but lest anyone get too excited it really doesn’t show that at all. It shows that in a metabolic ward where everyone’s food is carefully dispensed by researchers and monitored for compliance, people lose a tiny amount more weight on low-fat than on low-carb over six days. This sweeps under the rug all of the real-world issues of dieting like “sometimes diets are hard to stick to” or “sometimes diets last longer than six days” – in their defense, the researchers freely admit this and say the experiment was just to figure out how human metabolism reacts to different things and we shouldn’t worry too much about it on the broader scale. Some additional criticisms regarding ketosis, etc. are on the Reddit thread.
Some countries have problems with annexing neighboring lands that later agitate for independence. Switzerland has a problem with neighboring lands agitating to join them even though it really doesn’t want any more territory.
“If this is even slightly true then of course we should teach CBT in elementary schools.”
Oh dear.
One of the things that the study of anti-Semitism cannot account for is that anti-Semitism would naturally increase under an authoritarian government due to the takeover of the media and the displacement of (Jewish) liberal writers, even when no additional propaganda takes place. The current level of anti-Semitism in Germany is artificially low.
Ah, so the reason Germany doesn’t have a problem with people spouting nonsense like “The Jews control the media; Something Must Be Done!”, is that the Jews control the media.
Germany specifically is very soft on Jews due to its historic past. A less apologetic government would almost certainly increase perceived anti-Semitism even when there is little change in the media landscape.
On Success Academy:
While most charter results are completely consistent with attrition and expulsion, as discussed above, Success Academy’s results are well beyond that. They have low-income black and Hispanic kids outperforming white and Asian kids. You can’t select or expel your way to those results. It’s not the curriculum, which is touchy-feely nothing. It’s not the teachers. The test prep is inhumane, but even that’s not enough.
As a result, no one is really celebrating Success Academy, or doing so very carefully, and if you are knowledgeable in these circles you can see it’s pretty obvious that a lot of reformers suspect nefarious means. No one knows how they are cheating, but lots of people suspect it. Few people will speculate aloud.
https://educationrealist.wordpress.com/2015/04/07/ian-malcolm-on-eva-moskowitz/
As I write here, if they *aren’t* cheating or gaming in some way, then all hail the closing of the achievement gap.
But they almost certainly are.
On a different point, regarding the This American Life story, has anyone noticed that there’s been a big media push to try and sell integration? That story, plus the recent one on Pinellas County, plus Dana Goldstein pushing it hard, and a few other people. It’s the new liberal push. It will fail, of course.
Some of the commenters above were blaming schools for things like lax discipline and failure to expel or remove, but schools get sued quickly if they have numbers that are racially imbalanced (cf LAUSD). Of course kids who are disruptive screw up classes, as do kids of low ability, as do kids with low engagement. But high schools aren’t allowed to separate by ability, and aren’t even allowed to teach kids at their level. High schools get penalized if they aren’t teaching high school level classes, leading to the odd situation in which colleges can offer remedial level classes (middle school) but high schools can’t.
I’ve been writing a series of “education policy proposals” for the presidential campaign to demonstrate how far off base reform rhetoric is from what the public wants. So, for example, we never talk about repealing IDEA so that states can decide their own special education plans. Instead, parents with SPED kids can come armed with a lawyer and get millions, even though no research shows that the interventions help.
https://educationrealist.wordpress.com/2015/07/31/five-education-policy-proposals-for-2016-presidential-politics/
1. Ban College Remediation
2. Allow High Schools to Remediate
3. Repeal IDEA
4. Make K-12 Ed Citizen Only
5. Haven’t decided yet–probably going to be ELL.
The point isn’t to agree or disagree with the proposals, but rather to understand that all of them would be extremely popular, yet all are off limits. And it’s in that environment that charters operate. Because we aren’t allowed to separate kids and teach to ability, charters take advantage of skimming to pretend to do a better job of actually teaching.
Of course, not that much better. Charters aren’t making even a minor dent in the achievement gap. They’re just getting marginally better test scores than the control group. Except Success Academy. Which, to bring it round in a circle, is why people are suspicious.
Some of the commenters above were blaming schools for things like lax discipline and failure to expel or remove, but schools get sued quickly if they have numbers that are racially imbalanced (cf LAUSD).
In my experience, it has not been the black students who are the disrupters. It is mainly white, male students and, of these, children of small business owners (who do not feel they will need formal education to be successful, and are probably right).
Of course kids who are disruptive screw up classes, as do kids of low ability, as do kids with low engagement.
It is disruptive kids who disrupt classes. Low-ability, unengaged kids may become disruptive kids, or they may just sit there not bothering anybody else.
But high schools aren’t allowed to separate by ability, and aren’t even allowed to teach kids at their level.
Not to the same extent as before, but there is still separation by ability. In my district, that translates to having two levels instead of three. And the problem isn’t so much that teachers aren’t allowed to teach (though there is that), it’s that after elementary and middle school curricula consisting of fluff, the students aren’t at the level they would have been.
Also, I know that Success uses Reader’s/Writer’s Workshop (ugh) but my understanding is that they soup it up enough so that it’s actually effective. I think their success probably does lie with their curriculum.
Other than that, I suspect you and I agree more than we disagree.
On the average, yes, first cousins would share 12.5% of genes, but: (1) first cousins with only a single pair of grandparents in common could have anything from 0% to 50% identical genome, and (2) Sacks doesn’t actually say whether they were first cousins.
Indeed, very few people are careful enough to accurately specify degrees of cousinship or number of generations different. Lindy Boggs’ obituaries (and her Wikipedia article) all say she and Mayor/Governor/Ambassador Chep Morrison were second cousins; in fact, they were first cousins once removed.
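The point about the wide range around the 12.5% average can be illustrated with a deliberately crude simulation (this is my toy model, not from any of the linked material): one shared grandparent couple, whole chromosomes passed down without recombination. Real inheritance recombines within chromosomes, so the true spread is narrower than this, but the average comes out the same.

```python
import random

def cousin_sharing(n_chrom=22):
    """Toy model of genome sharing between two first cousins.

    For each chromosome, each cousin inherits (via the linking parent) one
    of the four grandparental chromosome copies, uniformly at random. The
    other parental side is unrelated, so at most half the genome is in
    play; on that half, the cousins match 1/4 of the time on average.
    """
    matches = 0
    for _ in range(n_chrom):
        c1 = random.randrange(4)   # which grandparental copy cousin 1 got
        c2 = random.randrange(4)   # which copy cousin 2 got (independent)
        if c1 == c2:
            matches += 1
    return 0.5 * matches / n_chrom  # only the shared-grandparent half counts

random.seed(0)
sims = [cousin_sharing() for _ in range(100_000)]
print(f"mean sharing: {sum(sims) / len(sims):.3f}")    # ~0.125
print(f"min {min(sims):.3f}, max {max(sims):.3f}")     # wide spread around the mean
```

Averaged over many simulated cousin pairs the sharing comes out to the textbook 12.5%, but individual pairs land all over the place, which is the commenter’s point about the 0%–50% range.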
From “this sort of thing,” I assumed you meant that the two of them came down with the same kind of cancer at the same time. I was so convinced of this that I Googled Aumann, looking for bad news about his health.
Apparently you actually meant, “strange coincidence: brilliant physician Oliver Sacks turns out to have a brilliant cousin who won the Nobel Prize.” Growth mindset?
The much bigger surprise in that essay: Sacks is gay.
Both my parents are lawyers and so my siblings and I grew up knowing about different types of cousins (first, second, once removed, etc.). It was a bit of a culture shock to me the first time I came across a friend’s family where everyone, related or not, was either a cousin or an aunt/uncle.
Yes, in Ireland everyone gets called a second or third cousin when they’re really first cousins once removed. People seem to go by ‘if I’m your parent’s first cousin, that makes you my second cousin’.
What’s even worse is that people are apparently now unable to tell the difference between half-siblings and step-siblings. It’s not just our clients (I’ve seen one application for social housing where the applicant backed up her case by saying that since the various children were step-siblings, she didn’t think it was fitting that they’d have to share a bedroom; these were both her children, but by different fathers, so they were half-siblings, not step-siblings) but also some of my work colleagues referring to children in these cases as step-children or step-siblings.
The old SSC post linked as “compound interest is the least powerful force in the world” discussed the Georgia land lottery. A question about that: maybe I missed it, but is there any mention of the Georgia land lottery winners having hit some kind of Malthusian frontier because their new wealth led them to have more kids? Like, they would have had more kids, and so divided up their greater wealth among more kids, hence the lack of greater wealth in subsequent generations. In other words could one of the mechanisms in “Farewell to Alms” disprove this bit of supporting evidence for “Son Also Rises”? I strongly assume somebody thought of this, but I couldn’t find any note of it.
The first study says that 18 years after the lottery, the winners had 0.1 more children than the losers. The study is restricted to men who had a child in the three years before the lottery. “In the lower half of the wealth distribution,” which I think means wealth before the lottery, the increase is larger, but it cannot be more than 0.2 without driving the upper half negative.
That 0.1 extra children is against a baseline of 5 children.
The second study is of the grandchildren of the lottery winners. All the tracking is by surname, so they only track grandchildren born to sons. The lottery winners had the same number of grandchildren as the losers, marginally fewer grandchildren per son.
A new study finds that Germans who grew up in the 1930s are much more likely to hold anti-Semitic views even today than Germans who are older or younger
Rather reminds me of this study, which also studied the effects of deNazification: http://www.voxeu.org/article/hatred-transformed-how-germans-changed-their-minds-about-jews-1890-2006
“The American authorities ran a highly ambitious and punitive programme which resulted in many incarcerations and convictions, with numerous, low-ranking officials banned and punished. Citizens were confronted with German crimes, forced to visit concentration camps, and attend education films about the Holocaust. There was a considerable backlash, and perceived fairness was low. The Jewish Advisor to the American Military Government concluded in 1948 that “… if the United States Army were to withdraw tomorrow, there would be pogroms on the following day.” In contrast, the British authorities pursued a limited and pragmatic approach that focused on major perpetrators. Public support was substantial, perceived fairness was higher, and intelligence reports concluded that the population even wanted more done to pursue and punish Nazi officials.”
It seems that telling Germans they were anti-Semitic made them more anti-Semitic than telling them the Nazis were anti-Semitic and to blame for everything bad that had happened to Germany.
So remember to keep telling southern whites how racist they are. Otherwise, we might run out of racism.
Interesting that the public wanted more punishment for officials in the British case. It sounds like the British approach let the population project the nation’s sins entirely onto those high-level officials. Very Girardian.
Well, the positive response to the British way of doing things probably has a lot to do with human nature. If you were Private Smith, or even Sergeant Smith, you feel you had a lot less choice and freedom and responsibility than General Smith, so you perceive it as unfair that you get punished in the same way and to the same degree as General Smith.
And if you’re plain Citizen Smith, you are going to resent being blamed for things going on that the politicians imposed – how many Americans would like being forced to watch videos about Abu Ghraib and told “This is your fault, you colluded in it, we are holding you personally responsible”?
They find that no, areas heavily bombed by the US are at least as rich and maybe even richer than areas that escaped attack.
Astonishing. Apparently, bombing paddy fields and uninhabited jungle has little long-term economic effect. Who would have thought it?
Regarding the low-carb vs. low-fat study, there’s a very obvious (IMO) explanation for the extra fat loss in the LF group that almost nobody seems to have picked up on: glycogen depletion. The human body can hold 2000-3000 calories worth of glycogen. A person on a high-carbohydrate diet will tend to keep glycogen stores relatively full, whereas a person on a low-carbohydrate diet will tend to keep them relatively empty.
So when you reduce the carbohydrate content of your diet, you consume glycogen from your stores and don’t replace it. You get the energy in that glycogen for “free,” in the sense that you don’t have to burn fat to get it. So it has a fat-sparing effect. You burn glycogen instead of fat, and you lose that much less fat (a pound at the absolute most, if you’re big and go from having your glycogen stores always full to always empty).
Quick example: Suppose at baseline everyone has 20kg of fat (180,000 calories) and 500g of glycogen (2,000 calories), for a total of 182,000 stored calories. During the six days of the study, everyone burns 2,500 calories per day, and eats 1,800, for a loss of 4,200 calories. Furthermore, the LF group has no change in glycogen stores, and the LC group completely burns through their glycogen stores. So now the LC group has 177,800 calories of stored fat (~19.76kg) and no glycogen, and the LF group has 175,800 calories of stored fat (19.53kg) and 2,000 calories of glycogen.
They lost exactly the same number of calories, but the LF group lost an extra 230g of fat because they didn’t consume any stored glycogen on net.
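As a sanity check, here is that back-of-envelope arithmetic spelled out (all inputs are the illustrative assumptions above, using roughly 9 kcal per gram of fat; as noted, the original numbers were fudged slightly, which is why this lands near but not exactly on 230 g):

```python
CAL_PER_KG_FAT = 9_000         # ~9 kcal per gram of fat

fat_start_cal      = 180_000   # 20 kg of fat
glycogen_start_cal = 2_000     # 500 g of glycogen
deficit = 6 * (2_500 - 1_800)  # six days at a 700 kcal/day deficit

# Low-fat group: glycogen stores unchanged, so the entire deficit comes from fat.
lf_fat_cal = fat_start_cal - deficit

# Low-carb group: burns through all 2,000 kcal of glycogen first,
# so only the remainder of the deficit comes from fat.
lc_fat_cal = fat_start_cal - (deficit - glycogen_start_cal)

extra_lf_fat_loss_g = (lc_fat_cal - lf_fat_cal) / CAL_PER_KG_FAT * 1000
print(lf_fat_cal, lc_fat_cal)      # 175800 177800
print(round(extra_lf_fat_loss_g))  # 222 -- extra grams of fat lost by the LF group
```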
Note that this has essentially no long-term implications, because the LC group has burned through their glycogen buffer. If this study continues on with the same energy deficit, both groups will continue to lose fat at the same rate. The LF group will maintain a 230g gap in fat loss, and the LC group will maintain a 500g gap in glycogen loss (plus another 1-2kg in water).
I fudged the numbers a bit to simplify the math, but this story is basically consistent with the data they actually got in the study. They did find that people on the LF diet had slightly higher energy expenditure (~50 calories per day), but that had a p value of 0.099, so that may just be noise.
Which is to say, looking at fat storage plus glycogen storage, there was no statistically significant difference in energy loss. The only difference is that the LF group lost more of that energy as fat and the LC group lost more as glycogen.
This is consistent with the weight loss pattern of LC diets (quick drop followed by more normal rates). And it is fairly dramatic because each gram of glycogen can store 3 grams of water IIRC. Bodybuilders have been manipulating this variable before competition in order to look the way they do for a long long time. One of the psychological reasons LC can work is that people find the initial quick drop very satisfying.
Re: Report your drug dealing competition. The police say it works, and I don’t doubt that it works for their statistics of how many drug dealers they busted. The real question is, does it work for the underlying problem of society, illegal hard drugs and the crime surrounding all of it?
At best, it does nothing but pretty up statistics and fill up jails, because for every dealer you bust, another will fill the void, either because a new dealer starts working, or an established one expands his business.
At worst, it’s the best-case scenario plus rising drug prices and therefore rising crime rates, because the cost of doing business is going up: new guns have to be bought, new people brought in, etc.
But as you acknowledge, getting more dealers caught increases the risk of being a dealer and probably the price of drugs. That means fewer people willing to deal and fewer people willing to consume.
People don’t consume highly addictive drugs because their price is low; they consume them because they got hooked on them. New customers can be created by “free” shots, the cost thereof transferred to existing customers. Once created, the demand for highly addictive drugs is inelastic, almost like you can’t stop eating or drinking just because the price of food and drink is currently higher than usual. Dealers will transfer their higher risk to the consumers via higher prices. The highly addicted consumers transfer their increasing costs to their environment, breaking social ties and committing crime to fund their addiction. The highly addicted consumers have neither the choice not to consume (because they’re addicted) nor the choice of dealer (all dealers are under equal pressure unless there is significant corruption in law enforcement).
I’ll agree that increasing prices probably has a positive effect on illegal drugs that are not highly addictive, but then again, those drugs are not the biggest problem either.
“Once created, the demand for highly addicting drugs is inelastic, almost like you can’t stop eating or drinking just because the price of food and drink is currently higher than usual.”
What is the evidence for that claim? My rather casual impression of polling results is that there are lots of people who report having used an illegal and supposedly addictive drug in the past year but not in the past week.
Speaking anecdotally as someone who grew up in areas where highly addictive drugs were purveyed and used, I would say you’re looking at this all wrong. The random suburban kid in the nice car and the soiled shorts who found his way over to the hood by getting a number from his cousin isn’t the primary source of income for your average dealer; the regulars are. So yes, I’m sure there are far more people who’ve tried a drug once or twice because they had the opportunity and the cash than there are junkies, but for the junkies, the real source of income, demand is absolutely inelastic.
It increases the risk for dealers but also the reward as the prices are driven up. Generally that means new dealers aren’t in short supply.
I’m a bit confused by this. As far as I’m aware, asexuality is not the same as a low libido? I mean, I know that this is what asexuality means colloquially, in the sense that it’s what people who aren’t actually asexual almost always mean when they say ‘asexual’, but that’s not actually what it means.
I grant that “feeling no attraction toward any gender or sex,” is in many scenarios only subtly different, but in cases like these it seems like a really important distinction to me.
One of my boyfriends is an asexual with a perfectly normal libido. His libido basically just has nowhere to go. This is about as frustrating as it sounds. Compare this with, if you’re heterosexual, the idea of the world being made up only of people of the same sex; or if you’re homosexual, the idea of the world being made up only of people of the opposite sex. (With apologies especially to pansexuals, who lose out on this thought experiment on a visceral level pretty much entirely.)
That being said, of course if you have a non-existent libido, it’s difficult to tell what your sexuality is. If you think you’re asexual because you have a non-existent libido, and you’ve invested much into that identity so far, then I suppose you might feel threatened by something that increases libido.
The backlash still just confuses me. If women with a low libido don’t feel threatened by this*, why asexuals? Why the fuss about flibanserin and not about viagra?
I guess this is just evidence that I’m easily confused.
* (can’t speak for them, mine is unfortunately high and I’m tempted to look for ways other than my naturally occurring “be depressed” to reduce it, since I keep ending up in relationships with people who don’t have a comparable one)
“One of my boyfriends is an asexual with a perfectly normal libido. His libido basically just has nowhere to go.”
As in nothing/no one arouses him?
“One of my boyfriends is an asexual with a perfectly normal libido. His libido basically just has nowhere to go.”
I’m confused as to what, exactly, this means. If he has a libido then presumably he can experience sexual arousal. If he experiences arousal then presumably some things arouse him more than others, unless he’s just as apt to get an erection reading about rationality as looking at porn?
I don’t mean to be judgmental; I’m just genuinely curious.
Regarding men with insufficient libido to keep up with yours, you might try getting one of your partners to try tantric sex (basically don’t ejaculate every time you have sex).
I hope this isn’t getting too personal for him:
If he has a libido then presumably he can experience sexual arousal.
Accurate, and he does (both very visibly, and self-reportedly). It’s a specific manner of human interaction that does it for him (I guess formally one would class that as a fetish? Not entirely sure), but he can’t at all stomach the idea of getting genitals involved.
He’s tried to get around it for me, but it doesn’t work, it makes him deeply uncomfortable. He’s hetero-, bi- or panromantic (we’re not sure; leaning toward panromantic), and he loves snuggling with me (in ways I jokingly call ‘virulent, not platonic’, because it doesn’t usefully map to either platonic or romantic), but anything beyond that is a no-go.
With the caveat that everyone is different, I can only speak for my own experience, etc. – think of libido here as another appetite, like hunger.
Everyone experiences hunger, yes? But not everyone experiences it the same way.
Think of someone sticking their head into the fridge and going “There’s nothing there to eat” (even though the fridge may be full); what appeals to one person’s appetite is not necessarily the same in quality or quantity as that of another. Sometimes you fancy a pizza and you don’t want to make do with a salad, even though that would fill you up the same way, right? Or sometimes a bowl of soup is plenty, even though everyone else is tucking into a full dinner and urging you to do the same because you must be starving, you haven’t eaten anything else all day!
For example, I can eat prawns and shrimps and shellfish. No allergies there, so my choice not to eat them is simply based on “don’t like them, don’t fancy eating them”. Other people may have different preferences on what they would rather go hungry than eat (obviously, we’re not talking starving to death levels here).
So it goes somewhat similarly with libido for asexuals: all the way from a spectrum of “completely turned off by sex” (like vegans faced with a meat meal would be repulsed on a visceral level) to “never or hardly ever feel sexual arousal” to “can and does become sexually aroused but not interested in having sex” to “can have sex but it’s a matter of choice in particular circumstances and is not an over-riding urge” (the starving to death example, where it never gets to the same level of desperation as, apparently, it does for the sexual amongst you).
Some asexuals deal with the build-up of libidic tension by masturbation (you know, just like the rest of the population) and that’s it, that’s as much as we want. Some of us can admire people in an aesthetic sense, and can objectively recognise that they are sexually desirable. Some of us can even feel sexually aroused by others, but we don’t want to translate that into actually having sex with someone (we can enjoy sexual fantasies as much as the next person, but we don’t want or need to go the next step). Some asexuals will have sexual relationships, but it’s because it’s a pleasurable experience that satisfies their partner and makes them happy, not because they want the sex as sex.
It does NOT mean asexuals aren’t interested in people or don’t feel affection and caring or don’t want contact or don’t want to engage in varying levels of intimacy (some may want romantic relationships and be happy with kissing, cuddling and anything else up to stopping short of intercourse). Asexual is not the same as aromantic and aromantic is not the same as asexual, and they don’t necessarily go together, although they can do.
There’s a plethora of sub-divisions but I’m not going to bother parsing out what the difference is between a greysexual and a demisexual and an asexual and Uncle Tom Cobley and all; the basics are that sexuality is a spectrum, desire is a spectrum, and asexuals like everyone else aren’t neatly clustered into one grouping.
Your fridge / food analogy is fantastic. Thanks for sharing your insights! 🙂
I’m glad it could help. I mean, you know, I have my favourite actors in TV shows and I go “Whooo, I’d love to throw you up against a wall and ruin you, baby!” but that’s all pure fantasy. The very notion of doing that with a real person has me running a mile in the opposite direction. Enjoyable as fantasy fodder for, um, private moments but you don’t want it in real life with people you can actually interact with.
And that definitely is falling under Too Much Information 🙂
The backlash still just confuses me. If women with a low libido don’t feel threatened by this*, why asexuals? Why the fuss about flibanserin and not about viagra?
From what I’ve seen, the examples used in the backlash are of women being pressured into treatment because they want or need to please a partner (husbands threatening to leave them because of lack of sex) and not because they have a low libido and want to increase it to normal levels.
I think Viagra should also be treated the same way, but there’s a whole raft of assumptions about male sexuality all tied up with that. It was intended for erectile dysfunction, that is, a physical cause preventing men who wanted to have sex from being able to successfully have sex. I imagine men who also had low libido were interested in trying it, and perhaps asexual men were pressured or felt pressured into “Well, now there’s something that can get you an erection, you have no excuse for not being normal!” because men are supposed to be studs and mad for sex, and if you’re not, there’s something wrong with you.
Naturally, humans being stupid, people then tried it as a sex aid (more or longer-lasting erections) even where they didn’t need it. It’s also apparently true that some version of “a Viagra for women” was considered a desirable line of enquiry because of profitability, if it could be marketed and sold to women in the same volume as Viagra is to men.
I think the pharma companies pushing this new drug, as well as Viagra, is a dodgy idea, and I think feminist activists who fell for the pitch need to examine their assumptions. Part of the problem is that there is so little knowledge about asexuality (we can laugh about Tumblr, and I often want to smack the people who make particular posts on there, but they gave me a name and a description and a tool for a part of my being that I had gone decades without an adequate way of identifying.)
So it’s easy to conflate “low libido” with “asexuality” and assume that there is a problem that needs fixing, and now here’s a drug that will fix you right up! Add to that the pressure from some corners of feminist political ideology that women should be seizing the right to be sexually active in the same manner and the same quantity as men; that it’s not sex-positive to say you don’t want sex; that you are suffering from internalised misogyny and repression and prudery and religious brainwashing and slut-shaming if you don’t want to be sexually active. That’s why I don’t like the cynicism of the pharma companies manipulating mouthpieces (if that’s what they did).
I think people of whatever gender and sexual orientation who have low libido and want to do something about it should have the help to do so. But I also think there’s really a lack of knowledge about asexuality/aromanticism/different sexualities, even within the medical profession, and that it would be easy for people to be pushed by well-meaning family and partners to “do something”, to “get themselves fixed”, instead of understanding that they’re not broken and don’t need to be mended or healed.
I think this is probably the most important observation – or at least, the one that dispelled my confusion. Basically, to reiterate with my own words: Just because ‘low libido’ and ‘asexuality’ are not the same thing, many people think they are (heck, I so said, myself), and that’s actually the reason why this sort of drug is troublesome, because even though you can’t fix asexuality with a drug meant to increase your libido, people will assume so, paving the way for potential abuse.
I grok that. I admit I don’t share the fear, simply because I reckon people who have this misunderstanding now, and might be potentially abusive/pushy partners over it, already are abusive or pushy by other means (though I recognise this is a mere assumption on my part and can easily be wrong; after all, I currently only have intimate contact with one person), but I feel like I have a much better grasp of why this is potentially scary now.
Thanks for your explanation and patience; I really appreciate it. Makes much more sense now.
I think the problem would be with the lack of awareness within the medical community; how many doctors in general practice have heard of asexuality? But if the pharma reps are pushing a new drug that’s “like Viagra for women” and a woman comes in with “Doctor, I’ve very little interest in sex”, the impulse – I would be afraid – would be to go “Aha, I have a sample here for you to try”. All meant in the best way, of course.
And then she goes home to her partner, and they say “See, the doctor says it’s simply a matter of needing medication” and the partner doesn’t feel like they’re being pushy or abusive (because they have no idea asexuality exists), and the woman feels like well, it must be something wrong with her after all, particularly since the doctor has given her something to take for it, so really she should just stop being so selfish insisting she doesn’t want sex.
And everybody has good intentions and is acting out of ignorance and people end up being pressured into doing something they don’t want and they’re not happy, their partner is not happy, nobody’s happy.
Except that, before the drug’s existence, I would think the lady would go to a doctor, say she doesn’t have much interest in sex, and the doctor’s first impulse still wouldn’t have been “well, maybe you’re asexual*”; it would be to talk about nutrition, exercise, the dynamics in the relationship, psychology.
(* “or have any other sexuality otherwise not meshing with your romantic interests, e.g. you’re homoromantic but heterosexual”)
Those things don’t have the same ‘magic bullet’ characteristic, granted, and that might make all the difference – but the general class of problem seems (unfortunately) the same to me.
I don’t know what the solution for that would be. Maybe the first question out of anyone’s mouths should be “Have you thought about your sexuality; is it really what you were raised to assume it is?” That’s a big maybe, though. I can’t claim I can imagine the effects.
—
Sidenote:
To be fair, does anyone ever consider themselves pushy or abusive? I suspect that’s one of the things that needs to be ‘diagnosed’ from the outside, unlikely to come from introspection.
@ Neike Taika-Tessaro
Maybe the first question out of anyone’s mouths should be “Have you thought about your sexuality; is it really what you were raised to assume it is?” That’s a big maybe, though. I can’t claim I can imagine the effects.
I can imagine expensive and … disruptive effects on a traditional relationship, when a short trial of the medicine might answer the question.
you can’t fix asexuality with a drug meant to increase your libido
I don’t believe this has been established. It may well be that some forms of asexuality are primarily caused by abnormally low libido. Perhaps others, as I’ve read in the above comments, may be something else entirely–libido without an outlet. It would seem that all that one needs to qualify as an asexual is “don’t want to have sex”, so it’s really unclear to me how you can make a blanket statement like the above one.
In addition, I really don’t get what the argument here is for asexuals to oppose drugs like this. Some people want to treat their low libido, but the “asexual community” is opposed to such a treatment being made available because…? Is it fear that people are going to force or pressure their asexual friends/family members/partners into taking these drugs? If this is the case, then I’m afraid I have to say I don’t find it very compelling. If leaving other people to suffer from a treatable sexual condition is the price of permitting asexuals to avoid having to grow a backbone and refuse to take medications they don’t want to take, then it’s too high.
I have no problem with people who aren’t interested in sex, but forming a community that feels entitled to attempt to restrict access to medications seems rather audacious.
It sort of sounds like the reaction of part of the deaf community against cochlear implants.
@Saal:
I don’t think we’re in much disagreement. As far as I’m aware, asexuality has a very particular meaning, which is the one I already mentioned – but it’s absolutely true that people with a low libido are probably more likely to assume they’re asexual, whether or not they actually are. And I’d agree that it makes sense for people to find out what they are, and not shy away from it.
If you read Deiseach’s remarks in this thread, though, you might get a better grasp why the drug is potentially scary for some aces (which is what I started this thread in hopes of understanding).
I definitely strongly agree with you that the drug should not be locked away on that basis, because I also feel that the good it can do probably outweighs the negatives, but that’s very much a gut feeling, since I don’t have numbers. (It’s probably also in large part because my fundamental views are generally heavily suspicious of any regulation, so please be wary of my aforementioned gut feeling, it’s definitely biased.)
Eh, wanting/needing to please a partner is a very good reason to seek treatment. Presumably you’re in that relationship for a reason and want to contribute to your partner’s satisfaction and happiness. And for many men and presumably some women, having a partner who is completely uninterested in sex is damn near an emergency. So why not take the pill?
Eh, why do gay men and women get married to opposite sex partners? Why not just do the therapy that will enable them to have sex with their spouse? Why insist they’re not oriented that way?
It’s down to lack of knowledge: if you know asexuality is something that exists, you can explore the idea “Do I have low libido, do I want sex, what is it that explains how I feel and what I want?”
Some people will decide they’re willing to pay the price of sex for having intimacy with a partner and a romantic relationship, so yes, they’ll take the pill. Some people will decide they’re not willing to compromise on this.
But if they know, then they can make a decision and a choice, not under the influence of something that appears to have a complicated mix of effects: boosting serotonin at some receptors (so it uplifts mood and gets muscles constricting), blocking other serotonin receptors (so it acts as a sedative), and a weak dopamine effect (which may or may not boost the reward centre and increase/decrease activity).
I’m wildly over-simplifying here, but it seems to act on the same principle as “candy is dandy, but liquor is quicker”: gets you relaxed and loosened up and more impulsive and less likely to be resistant.
I have nothing against people who aren’t married, aren’t interested in sex, and don’t want to have a pill forced on them in order to “fix” something that they don’t regard as broken. As far as I understand, that’s your position, and I’m on your side.
But the original scenario was of two people who are already married, who have very mismatched libidos, and who might try to help the situation with a medical aid. For those people, begging off because “I’ve realized that I’m asexual” should not be on the table. If you got married (without some explicit agreement to the contrary), then you signed up for a sexual relationship and you have a moral and ethical obligation to meet your partner’s sexual needs. (As a corollary, if you have very low libido or might be asexual, then you should think twice before getting married, to avoid causing tons of pain to yourself and your spouse.)
The notion of asexuality as a distinct orientation does not legitimize bailing out on your existing responsibilities. The guy who realizes at 43 that he’s actually gay and wants to ditch his wife and kids to run off with his boyfriend is scum; so is anyone here who uses an asexual orientation to recuse themselves from half of their marriage vows.
Does this argument presuppose a no sex before marriage subculture? In my world no one gets married before living together, and if you weren’t having much sex during courtship you shouldn’t be at all surprised if you don’t have much sex during marriage.
I could see how it would be different if one partner’s libido changed dramatically during the marriage. That’d be the equivalent to the coming out at 40 scenario. I’m not sure I’d be quite so condemnatory, but I can see the argument.
@Mai La Dreapta:
I understand marriage is a little different from a long-term relationship, so my horror is likely not an altogether appropriate reaction, but I’ll admit I recoiled quite a bit on reading this.
The idea that someone would owe me sex is very foreign to me. That’s so even though the lack of it feeds into my depressive phases, which is to say it’s helping to make me sick; but I strongly feel it’s on me to do something about it.
If I were monogamous, it’d be my responsibility to pack up and leave if it’s unbearable for me. Since I’m polyamorous, I don’t need to break up with anyone, but it’s my responsibility to actually look for someone sexually compatible with me.
(Heck, ultimately, I might just be repulsive, who knows? I’m the common denominator here.)
Sex is supposed to be about intimacy and love, at least for me. If it’s a chore at best or something prompting visceral unease at worst, then I think that’s not okay, and it makes me think of unpleasant things like rape. “Don’t want it? Shut up, it’s your obligation.”
tl;dr: I think implying sex to be an obligation is a pretty scary statement.
—
To be fair, I don’t think you’re trying to make a statement quite as strong as how it came across? I’m not sure I know how to read what you said differently, but it seems so foreign to me that I’m fairly sure I must have misunderstood you.
Sex is supposed to be about intimacy and love, at least for me.
But that’s only half the equation. The other half is, what is marriage supposed to be about?
As Deiseach so delightfully pointed out in one of the recent gay marriage discussions, the answer seems to increasingly be “nothing at all”. Marriage is just like shacking up but with a big party at the beginning and maybe a tax cut for the duration, until it becomes inconvenient to one party or the other and then it ends. No actual obligations involved.
In which case, obviously, no obligatory sex. But also no point in marriage. Except for hospital visitations, of course.
But for as long as people have been keeping track, some people have seen real value in being able to make real, binding commitments on their future selves to their future partners. For these people, marriage still has a point, is more than a name and a ceremony.
Almost always – and Mai did call out “explicit agreements to the contrary” – one of the commitments that partners want and expect is a commitment to not have sex with anyone other than one’s spouse, ever, until death do they part. It may baffle you that anyone would want, expect, or even accept such a thing, but empirically about five billion people do. This maybe isn’t the place to explain why. If you are one of the dissenting minority, fortunately we are past the time when anyone is likely to force you to marry against your wishes.
But if you love someone intimately, and if you do choose to secure from them a commitment to never have sex with anyone else ever again, there is a fair, reasonable, and obvious reciprocal commitment for you to offer. Again, about five billion people do.
Most real, meaningful marriages do involve an obligation for each partner to make a good-faith effort at collaborating on a mutually satisfactory sex life for both partners within the marriage. And that does mean, if one partner has a high sex drive and the other a low sex drive, they are probably going to wind up having a moderate amount of sex. This obligation is no longer legally enforceable save by divorce, and properly so, but it is morally significant.
And it is reason for a person with a low sex drive, who is going to be having a moderate amount of sex, to look for a way to dial their sex drive up to “moderate” so they can enjoy it.
Also, as brad notes, a reason to figure this out before marriage. That doesn’t always happen, because people aren’t always scrupulously honest during courtship – even prolonged courtship. And when it does happen, turns out most people don’t get to marry their perfect ideal partner. If the closest match you can find has a mismatched sex drive, but the other aspects of marriage still appeal, you may want to find a way to live with that.
@John Schilling:
Yeah, that’s sort of why I put in the footnote (headnote?) that my horror might not apply. I do mean that. It is indeed inconceivable to me that people would think this way, but as long as it’s a consensual decision (and the option of divorce exists), it’s really none of my business. On this subject, I can only take other people’s word for this (and express my confusion, as I have been doing).
I should probably say that I’m very fond of long-term relationships where I pour in sweat, tears and commitment. I just can’t stomach the idea of expecting the same from my partner(s). If they from their own incentives would like to do the same thing, that’s great, but we can do that without making it a contract or obligation.
I guess that’s sort of inconsistent, but I’d rather expect more of myself than anyone else.
I’m already miffed that just by being together with me for this long, my primary’s apparently legally obliged to support me if we do part ways, much like a married couple would be. I’m not going to make use of that if I can help it, but I find it kind of disconcerting that I could. (I’ve been together with him for 12 years now; hoping for at least another 12, of course!)
And all of that conspires to make the whole notion of marriage really odd to me, and to make it quite difficult to get into the mindset of people who do marry. (Though, honestly, more power to them; if it makes them happy, that’s awesome.)
But we are already in the phase of wild overcorrection: now everybody seems to think their lack of desire to have sex equals asexuality, when in the vast majority of cases it is simply about their partner, i.e. their brain likes Nice Guy Good Father Bob and they marry him, but their vagina wants Dominant Psycho Criminal Ned; or if they are men, their brain likes Smart Funny Awesome Mother But Fat Old Fanny and they marry her, but their penis wants Young Thin Stupid Bimbo Betty.
Shenpen, I’m going to bop you on the nose for that. Bad puppy!
There’s a difference between “I don’t want sex with you” and “I don’t want sex with anyone“. Now, maybe someone will try to let a partner down easily with the “It’s not you, it’s me, I just don’t have any interest in sex”, but I really, honestly do not believe asexuality is known enough to be seized on as an excuse by the general public.
@ Deiseach
The backlash still just confuses me. If women with a low libido don’t feel threatened by this*, why asexuals? Why the fuss about flibanserin and not about viagra?
From what I’ve seen, the examples used in the backlash are of women being pressured into treatment because they want or need to please a partner (husbands threatening to leave them because of lack of sex) and not because they have a low libido and want to increase it to normal levels.
This is an argument used (more justly) against legalizing physician-assisted suicide — that a patient who does not really want it might be pressured into it by family. That makes more sense, as the patient is not in condition to resist pressure.
But a wife in normal health can say No, and if she tries the drug and doesn’t like the results, she can drop it.
(It’s also the same argument against ‘excessive’ lab testing — that some results force doctors into unnecessary surgery. “Just say No” may be legally risky now, but that’s a flaw in hospital policy, not a good reason for refusing to get test results.)
houseboat, unfortunately men do walk away from marriage when things get rocky healthwise.
A woman unwilling to stick with a drug whose side effects she finds unpleasant, but which makes her sexually available to her husband, is likely to end up divorced. If she wants to maintain the relationship (and personally I’d hand the jerk the door keys and tell him goodnight, good luck and goodbye, but that’s me), she may well give in to pressure to keep taking the tablets.
That’s only a short-term fix, of course, because resentment is going to build up on both sides (he’s forcing me to be nothing more than a blow-up doll/she’s making me feel like a rapist) and that’s probably going to kill the marriage anyway.
Again, if more people knew the options before getting married, then they could identify and discuss things like “is this asexuality, is it low libido, is it just that we have different sexual needs” before it all gets out of hand.
Unfair generalization detected: only 1 in 5 men do that, according to the study.
What do you mean by that? I have trouble parsing this, in context of female sexual physiology – it’s not like she needs to do much besides giving permission. There’s, AFAIK, no physical mechanism preventing intercourse like impotence does, in men.
@ AngryDrake
Many people find even the idea of sex unpleasant when they are not aroused, to the point of being disgusted by the content of their own pre-orgasm thoughts immediately after orgasm. Someone whose sexuality works that way may be “unavailable” for psychological reasons.
For many other people, their partner’s arousal and pleasure is an important source of sexual satisfaction. Someone whose sexuality works that way would feel that their partner is “unavailable” even if the partner lay very still and let them do whatever.
no physical mechanism preventing intercourse
AngryDrake, without wanting to go into physiology and indeed the physiology of female arousal, simply “lie back and think of England” is not enough.
Think of trying to force a square peg into a round hole. It can certainly be done, but it’s not likely to be pleasant for either party.
Vaginismus is also an existent condition; if you’re psychologically tensed up about having sex, that will affect the tension of your muscles and I’m not going to get any more direct now.
@AngryDrake: I don’t know how universal this is, but as a man I don’t want to have sex with people who only second-order want it. I find it much more awkward and much less fun.
That’s prior to any physiological effects, and I’m pretty sure vaginal lubrication is a relevant factor.
@ Deiseach
Is one of these things not like the others …
1. Ban assisted suicide so no one can be pressured into it.
2. Ban marginal lab tests so doctors can’t be pressured into unnecessary surgery.
3. Ban any female libido drug so women can’t be pressured into taking it (and continuing to take it even when it’s not really working).
… or can 3 be addressed by short term prescriptions, some good follow-up questions, and if indicated, referral to counseling before giving refills?
A woman unwilling to stick with a drug that she finds the side effects unpleasant, but which makes her sexually available to her husband, is likely to end up divorced.
If the marriage is that shaky, the sooner she goes to counseling the better, and the doctor can make that happen (and in the US, can get Medicare to pay the counselor).
Again, if more people knew the options before getting married, then they could identify and discuss things like “is this asexuality, is it low libido, is it just that we have different sexual needs” before it all gets out of hand.
Oh, definitely. I just hope that before marriage, as well as having plenty of sex with each other, both of them have also had experience with different partners, all using contraceptives.
Why the fuss about flibanserin and not about viagra?
Because viagra, to a first order, doesn’t affect libido, sex drive, or arousal. It is purely mechanical in effect, allowing erections to occur normally in light of physiological defects that would otherwise prevent this. Inability to translate arousal into erection is the most common male version of a “libido with noplace to go”, and it is fairly uncontroversial that removing the mechanical roadblock to good sex and the satisfaction of desire is a good thing.
There’s a second-order effect that foreknowledge that one isn’t going to get an erection tends to inhibit arousal in the first place; viagra can help with that as well. So can a placebo. Viagra is not fundamentally an aphrodisiac, and people wanting “viagra for women” are missing the point.
Flibanserin, if it promotes actual sexual desire where none was present, is fundamentally different. And pharmaceutically modifying people’s desires, I can see why that might be controversial.
My concern w/re flibanserin is, are we inadvertently developing the missing ingredient to the perfect date-rape cocktail? Clinical trials seem to show only a weak, long-term effect, but I’d guess more than a few amateur pharmacologists will nonetheless try to develop the perfect mix of it, GHB, and alcohol, to be tested at bars and college campuses as convenient to the experimenter.
Aha! Thanks for filling in some blanks for me… and for the horrific footnote. Plenty food for thought. Much appreciated, thank you. 🙂
Journalists admitting they’re wrong is always to be celebrated, so here’s Chris Cilizza: Oh Boy Was I Wrong About Donald Trump. He says he thought Trump could never sustain high poll numbers because his favorability/unfavorability ratings were too low, but now his favorability/unfavorability ratings have gone way up. But remember that favorability might not matter much.
idk… early on, Howard Dean had momentum and then poof, gone. Same for Herman Cain, Newt. And in 1984, Gary Hart and Jesse Jackson also had momentum early on but lost it. Too soon to know.
“Stuart Ritchie finds that we have reached Peak Social Priming. A new psychology paper suggests that there was an increase in divorce after the Sichuan earthquake because the shaking primed people’s ideas of instability and breakdown”
A fellow I used to know reacted to the 1994 earthquake in Los Angeles by driving to Santa Barbara and checking into a hotel for a week, leaving his wife and kids to take their chances with aftershocks. His wife didn’t immediately divorce him, but this revelation of his inability to be brave when the ground shakes didn’t help his marriage. It was like that recent Swedish movie Force Majeure about a husband at an Alpine ski resort who dashes off when it appears an avalanche is going to kill his wife and kids. We’re supposed to be beyond all that sexist stuff about men being brave, but women don’t respond well sexually to displays of male cowardice.
Apparently minotaur porn is a whole class of Amazon porn. I guess I’m not surprised.
http://www.amazon.com/Menaced-Minotaur-An-Erotic-Novelette-ebook/dp/B00I1HVU58/
Re: Chris Blattman’s work on CBT and cash transfers in Liberia:
http://www.poverty-action.org/project/0166
Does this mean we can look forward to seeing Scott in Doctors Without Borders after he finishes two years post residency? 🙂
Small reference for color: http://www.msf.org.uk/article/haiti-interview-msf-psychiatrist
2010: “There are only 8 or 10 psychiatrists today in all of Haiti.”
Re diet, I have a very disgusting question. CW: dieting and also parasitism.
It’s a standing joke in our culture that people deliberately infect themselves with tapeworms in order to lose weight. Apparently this is quite effective; Maria Callas supposedly lost 80+ lbs doing it.
People are absolutely desperate for a pill that will let them eat as much while not gaining weight, or even losing it. In the past they took amphetamines for this purpose, but that’s illegal now. People get gastric bypass, but that means you have to eat less. Since people are so desperate to lose weight but can’t eat less, why don’t doctors prescribe tapeworms?

Is it that it’s disgusting? Apparently most tapeworm infestations are asymptomatic except for the weight loss. You can eat a lot with a worm and still lose weight, and we know how to remove the worms. Nutritional deficiency is a worry, but that’s true of bypass as well. Is it that the infestation can spread to muscle, other abdominal organs, and the brain? Cysticercosis results from eating the eggs of the worm, not from having an adult already living inside, and it seems like it could be caught early if doctors knew to look. Presumably modern sewage treatment is effective at killing worm eggs. Am I seriously underestimating how dangerous/unpleasant it is to have a tapeworm? People find being overweight very unpleasant because of how they’re treated, and apparently it’s pretty dangerous too.
Wouldn’t that release massive amounts of tapeworms into the sewage system, etc.? You would basically be reintroducing an infectious disease that could spread. Unless you genetically engineered them not to spread or something.
Ok, but we draw water from rivers and lakes that are full of such infectious parasites anyway – animals have worms that they can transmit to us, and they poop and die in the waterways all the time. Most people get tapeworms from undercooking pork. The infectious disease is present right now. People in North America don’t drink a lot of reclaimed water, and if they keep cooking their meat hot enough it seems like not many would get infected. Municipal drinking water is also highly filtered. Maybe if the eggs got into well-water supplies? But if sewage is seeping into the water table you’re drinking out of, it seems like you’d have bigger problems than tapeworms.
This is not a problem with hookworms, at least, assuming you poop into a toilet and not an outhouse. Plumbing is the reason we no longer have most parasites in the US. The life cycle of the hookworm requires that they mature in the dirt for a few days at least after someone has pooped out their (tiny, invisible) eggs. The mature larvae then enter people’s ankles by crawling up dewy grass, etc. When everyone flushes their poop almost immediately and not that many people walk around outside without shoes, their life cycle is broken.
People definitely use hookworm for allergies, but hookworms drink blood rather than eating food and are pretty small, so not likely to cause significant weight loss except with a severe infection. The smaller necator americanus also cannot migrate to muscles and other tissues in humans, so there is no worry there. I’m not sure about tapeworms.
I think the tapeworm is problematic mostly because it’s grosser than the hookworm: it can get a lot bigger, and its larvae actually crawl down your leg. How nasty would it be to see tiny worms emerging from your butt and crawling down your leg? That said, given the choice between removing a piece of my stomach and infecting myself with a tapeworm, I’d take the latter. Given how desperate many people are to lose weight, you’d think a little grossness would be worth enduring (it’s not grosser than bulimia, either, imo).
People do, actually. It’s called helminthic therapy, usually prescribed for allergies rather than weight loss, but either way. I know people who care deeply about their worms; at thousands of dollars a year, I would hope they help.
Rushton observed that African babies sit up earlier than European babies ages ago.
The idea that the difference is due to whites coddling their babies is absurd, especially given the “never hold or kiss your baby unless absolutely necessary” advice that was completely dominant in American childrearing texts until recently.
The specific study linked to in the Jayman discussion does note, towards the beginning, the problem that personality traits might be ascribed by peers in part based on someone’s appearance. It then addresses that concern by saying “However, as quality and quantity of personality-relevant information should reduce stereotypes (Funder, Kolar, & Blackman, 1995; Letzring et al., 2006), well-informed acquaintances should not provide personality assessments affected by stereotypes.” Read the quoted study for yourself, but I interpret the results as saying that acquaintance improves correlation with reports from professional evaluators, close friends, and, interestingly, self-reports (which in the linking study apparently diverge), not that it eliminates stereotyping. And, of course, it’s a study done on college students.
Bag on my priors as you like, but I would want to see more evidence that people aren’t attributing personality traits based on appearance to these identical twins before shifting my own view from 50% to 75% on this. Some, but not all, of that skepticism extends to statistics drawn from criminal records.
“low-carb diets probably aren’t that helpful”
This statement could be misleading if one confuses “low-carb” with “slow carb” (low glycemic index). From the linked article:
“Nor did I attempt to evaluate the effects of any property of a diet besides its mere macronutrient proportions (between carbs, fats, and proteins) — for example, I did not try to evaluate arguments about ‘good carbs’ vs. ‘bad carbs.'”
Anecdotal evidence, but my own experience with “slow carb” was pretty good. On my doctor’s recommendation I went on a low-glycemic index diet where most of my carbs were things with a low to moderate glycemic index: things like kale, broccoli, cabbage, berries, and moderate amounts of beans and sweet potatoes, but very little in the way of grains or regular potatoes, and basically no heavily-processed carbs. I dropped about 30 pounds without even trying.
Actress Suzanne Somers was promoting “slow carb” (eating vegetables instead of starches and sugars, and you can melt butter over your vegetables) a couple of decades ago. The idea is that slow carb keeps you from feeling too hungry. If hunger sensations are driving your overeating (rather than, say, a love of flavors), it’s a diet worth experimenting with. If it doesn’t work for you, you can stop.
Different diets work for different people.
Reading the Oliver Sacks article, he had a lot of cousins. His cousins might comprise a proportion on the order of 10^-5 of all the Ashkenazi Jews. Given base rates plus assortative mating, I don’t think this sort of thing is especially improbable.
Agreed. I’m not even entirely sure what Scott means by “this sort of thing.” The only connection between Sacks and Aumann, as far as I can tell, is that they’re intelligent and somewhat famous, both of which are not rare enough to be that surprising in a community of many children per family and high expectations.
I think for that last link about Switzerland you should link back to your Infinite Jest post as well. ‘Experialism’ in action, folks!
On the Global Priorities Project page, entering 90%, 10, 1000, 30% causes it to estimate a probability of 1 in 0.04 for the aversion of AI risk. This leads me to question whether the page’s authors knew what they were doing.
Haha, good point.
On the other hand, anybody who says that each person who joins now will encourage a thousand more AI researchers later, yet there will never be more than ten AI researchers, deserves what they get.
On the gripping hand, anybody who puts code into production which will only ever receive inputs from a finite (and small!) space of possibilities, without writing a simple automated test to make sure that the probabilities displayed are always between 0 and 1, similarly deserves what they get.
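A minimal sketch of the exhaustive check being described, in Python. The calculator’s actual formula isn’t given in the thread, so `estimated_probability` below is a made-up stand-in that happens to exhibit the same failure mode (outputs above 1):

```python
# Made-up stand-in for the calculator's model; the real formula isn't
# shown in the thread. This naive version scales the risk-reduction
# estimate by the growth in research effort, which (like the real tool)
# can emit "probabilities" greater than 1.
def estimated_probability(p_risk, researchers_now, researchers_later, effect):
    growth = researchers_later / max(researchers_now, 1)
    return p_risk * effect * growth / 100

# The simple automated test the comment describes: sweep the whole
# (small, finite) input grid and record any output outside [0, 1].
failures = []
for p in (0.1, 0.5, 0.9):
    for now in (1, 10, 100):
        for later in (10, 1000):
            for eff in (0.1, 0.3):
                out = estimated_probability(p, now, later, eff)
                if not (0.0 <= out <= 1.0):
                    failures.append((p, now, later, eff, out))

# `failures` is non-empty for this toy model, which is exactly how such
# a test would have caught the bug before the tool shipped.
```

The point is not the particular formula, only that with a small finite input space a brute-force sweep plus a range assertion is a few lines of code.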
To answer your actual point, those inputs are entirely consistent with the prior that the foom is so close to happening that we only have time to get any serious people to put in another minute or so per day. Then someone publishes a paper on how to get fooming AI without value alignment, and those people start working really hard all day, every day, but too late! Some schmuck turns the math into working code and we get the x-risk outcome. There’s your factor of 1000, with only a handful of new researchers. Stupid, but not so inconsistent as to imply that something has a probability of 25.
Higher production values on the tool would have caught this, but it’s not clear it would have been worth the extra time given expected use. Thanks for pointing it out anyway.
I hope it’s clear that the whole thing is giving pretty crude probability outputs, and getting a probability like this means the model is breaking down. In this case, adding one extra person multiplies the total expected amount of work by a factor of 100, which I don’t think anyone would consider correct, even if you can tell contrived stories where it’s true; it means the assumption that success increases linearly over a doubling of the total research effort is no longer appropriate.
I am pleased to report that Thomas Schelling is indeed alive, although not, so far as I know, working in a Kentucky police department.
Looking into Success Academy (god the cheesiness of that name…), their methods do seem rather unpleasant (e.g. bordering on public shaming for bad students). But those results seem absurdly good, easily enough to justify it if the higher achievement lasts into adulthood. They are particularly good if people believe, as many do here, that a significant portion of intelligence is genetic… and these schools are not taking those students likely to have such good genes. And yet they are still on par with, or beating wealthy schools that, in addition to genetic, funding, and lack-of-disruption-in-class advantages, also have access to learning/tutoring outside of school and parents who care/can afford to splurge on learning.
Maybe it’s necessary to be harsh. From the perspective scientists sometimes take when evaluating cognitive biases (that we have some mental behaviors that are just evolutionary hangovers, but are now very harmful), it might make sense that we need to suppress kids’ natural inclinations if we want them to learn in a way that is very different from how they are designed to learn in the ancestral environment.
Very interested to see how these kids turn out in the future.
Isn’t high-pressure and shaming poor performance how East Asian schools produce high test scores?
Well, outside of handpicked rich people schools in Shanghai, the East Asians who live in America do better than the ones in East Asia on international achievement tests… so it probably has more to do with the students being East Asians.
Which is nevertheless consistent with a model in which high-pressure and shaming is how East Asian parents produce high test scores.
I think the Occam’s Razor advantage has to go to EducationRealist’s “they’re cheating” hypothesis. It covers both Success Academy and Asian immigrant high test scores with the same simple explanation.
It’s a lot more plausible that a single school is cheating than an entire population.
@Jiro
Isn’t that just basic probability? It has to be more likely that Linda is a bank teller than that Linda is a bank teller and active in the feminist movement; likewise, it has to be more likely that Bob is cheating than that Dan, Dave, and Chris are all cheating.
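The conjunction rule at work here is just P(A and B) ≤ P(A). A quick simulation (with a made-up 5% per-actor cheating probability and an independence assumption, both purely illustrative) shows the pattern:

```python
import random

random.seed(0)
p_cheat = 0.05     # invented per-actor cheating probability
trials = 100_000

# Estimated count of worlds where Bob alone cheats.
bob = sum(random.random() < p_cheat for _ in range(trials))

# Estimated count of worlds where Dan, Dave, AND Chris all cheat
# (assuming independence for the sake of the toy example).
all_three = sum(all(random.random() < p_cheat for _ in range(3))
                for _ in range(trials))

# The conjunction comes out far rarer than the single event, as the
# conjunction rule guarantees (with or without independence).
```

Note the inequality itself holds even without independence; independence just makes the gap dramatic.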
Isn’t high-pressure and shaming poor performance how pretty much any objective-defined group achieves anything?
Seems to me that they’ve simply rediscovered the “boot camp” model.
@HlynkaCG:
“Isn’t high-pressure and shaming poor performance how pretty much any objective-defined group achieves anything?”
I’m not sure what you actually mean here, but no, there are many models for eliciting desirable behavior. Wasn’t it Skinner who mapped those onto reward/punishment and positive/negative reinforcement? Shaming is punishment, and could be seen as positive or negative (does the shaming start when you fail, or stop when you succeed?)
High pressure, I think, would map onto frequency of reinforcement. I think Skinner (again) showed that less frequent (and uncertain) schedules were actually more lasting in producing desired behavior.
This is all from memory, so I will be happy to be corrected on the particulars, but I think the broad map is fairly correct.
Yes, a variable-ratio schedule gets a rat to press the bar for the food pellet the most. Aside from getting people to play slot machines, it’s not clear how this maps on to producing other outcomes.
@urstoff:
If you punish a child every single time they engage in any poor behavior, this produces (I think) worse compliance than other strategies. If you reward them every single time they engage in good behavior, it definitely produces worse results than more occasional rewards. That’s not even some huge insight, that’s just common sense.
Right; any parent knows to taper off reward frequency after a behavior has been established. So there is some applicability to fairly simple behaviors. Does this work for more complex behaviors or outcomes that are related to complex sets of skills and behaviors? What would a variable-ratio reinforcement schedule look like in a classroom?
@Urstoff
I’m not sure, nor do I know how one would go about testing it. But in regards to more complex behaviors my own experience would seem to indicate yes.
I played sports at a reasonably “high level” in high school and college, and this model held true. Likewise when I entered the military, and when I went through flight and EMT training.
I see a selection bias in that poor-performing kids quickly drop out, creating the impression the program is more effective than it is.
I don’t know all the ins-and-outs of NYC schools as I have no kids, but just today on my Facebook feed a picture popped up of a child of someone I went to high school with going for his first day of school to a Success Academy in a nice area of Brooklyn. The family is neither poor nor a racial minority, so apparently the brand is not limited to those groups.
> Possible reasonable explanation: we coddle our babies and keep supporting them when they could perfectly well learn to sit on their own if we let them.
A potential problem with this explanation is that the differences show up much earlier than at five months. Black babies are often able to hold their heads up just hours after birth, a feat white babies aren’t expected to manage for a month or more. This is perhaps the most stunning example of a general pattern termed “African infant precocity”, which has been well-known to developmental psychologists for more than fifty years. (Geber 1958, J. Soc. Psych.)
Any modern papers still supporting this pattern of precocious African infants? I remember Rushton bringing it up, but well, it was Rushton.
The most recent I was able to find dated to the 1970s.
I found an anti-racist book at the local children’s museum giftshop, “Multiplication is for White People,” which details a number of such observations about African children. I suspect that anyone who had spent much time in an African community and paid attention to the babies could confirm or deny claims for their community, at least.
Using pubmed and google scholar, I was able to find papers and dev psych textbooks from the 90’s and 00’s. Measuring babies is hard and two different papers will typically have two different operationalizations of motor development, but my quick impression is that they’re mostly gesturing in the same direction? IANADP.
If you live in a multiracial neighborhood, it’s really obvious at the playground that, on average, black preschoolers are faster runners and better jumpers than white, Asian, and Hispanic preschoolers.
All 64 of the last 64 finalists in the men’s 100 meter dash at the Olympics going back to 1984 have been of substantial West African descent. That’s a remarkable statistic, but it’s slightly less astonishing if you used to take your kids to the playground in the extremely diverse Uptown neighborhood of Chicago.
The generally proposed biological mechanism behind that is that African Americans have a lower body-fat percentage relative to body weight, which means they carry less dead weight. This is as true of kids as of college athletes (though interestingly not of the general adult population).
You can see this play out in the racial breakdown by position within the NFL as well (also convenient for disputing the social theory: more black sprinters than black lacrosse players is plausibly entirely social, but more black defensive linemen than offensive linemen?)
http://www.citypages.com/news/nfl-a-study-in-racial-separation-by-position-6533654 Faster coordinative development doesn’t really fit this pattern.
There are actually quite a few racial differences that have important sports implications. The recent book by David Epstein of Sports Illustrated that President Obama bought for himself for a Christmas present lists several of them.
Lower body quickness and agility is an average racial difference that emerges early in life and continues on to the black domination of cornerback and point guard in the pro ranks.
Sports are where human biodiversity’s average differences are perhaps most obvious of all, yet not much sophisticated thinking has been done about them over the decades.
Here’s my review of Epstein’s book:
http://takimag.com/article/white_men_cant_reach_steve_sailer/print#axzz3j3KsolWr
@Steve
Conservative creationists posit a scientifically meaningless distinction between microevolution and macroevolution. They believe in the first and don’t believe in the second. Liberal creationists posit a scientifically meaningless distinction between evolution of physical traits and behavioral/neurological traits. They believe in the first but don’t believe in the second.
Your “crime of noticing” doesn’t include noticing different physical traits, because that’s consistent with the belief. But if you were to, say, point out that lower-body quickness and agility are partly a neurological trait, because the brain had to have evolved the ability to send signals that coordinate the muscle movements more effectively, you might have strayed into thoughtcrime territory.
My general point here is that it might be easy to mistake differences you are allowed to notice for differences that are the most obvious.
And liberal economists posit a meaningless distinction between macroeconomics and microeconomics.
I’m always a bit surprised by economists not seeming to indict each others’ position as unscientific. Maybe it happens and I just haven’t noticed.
>to the black domination of cornerback and point guard in the pro ranks.
Wait, you mean point guard as in nba point guard? Because that seems to me like one of the least black dominated positions in recent times.
It’s pretty apparent, as Steve noted, that blacks tend to excel at athletics at a rate higher than whites
There is https://www.youtube.com/watch?v=vtBeA6qXU2E http://pzacad.pitzer.edu/~dmoore/1994_Kagan%20et%20al_Reactivity%20in%20infants_DP.pdf
The argument given is that the immigrants only cross the border once and stay, rather than commuting from Mexico and staying in Mexico, if crossing the border is difficult. Even if that’s true, it would actually mean that there would be a level of enforcement that results in peak immigration, and that increasing the enforcement past that would still reduce immigration (since with perfect enforcement the rate would be zero, so the curve has to go down at some point).
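One way to see how the curve can rise before it falls: a toy steady-state model (all numbers invented) in which enforcement cuts crossings linearly but cuts return trips even faster, so the resident population peaks at intermediate enforcement and only reaches zero at perfect enforcement:

```python
# Toy model, entirely made up, of the "securing the border increases
# the resident undocumented population" argument.
def resident_stock(e):
    """Steady-state resident population at enforcement level e in [0, 1]."""
    crossings = 1.0 - e                       # inflow falls linearly
    return_rate = 0.1 + 0.9 * (1 - e) ** 3    # outflow falls much faster
    return crossings / return_rate            # stock = inflow / outflow

# Sweeping e from 0 to 1 shows the stock rising from its unenforced
# level, peaking at intermediate enforcement, then falling to zero.
curve = [resident_stock(e / 10) for e in range(11)]
```

The specific functional forms are arbitrary; the sketch only illustrates that a non-monotonic curve is consistent with both "more enforcement, more residents" locally and "perfect enforcement, zero residents" at the limit.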
And on the subject of immigration… Seriously, wanting to be annexed is just a demand for wholesale immigration and of course would be desired for the same reasons.
“(since with perfect enforcement the rate would be zero, so the curve has to go down at some point)”
Perfect enforcement seems highly implausible, though, doesn’t it? A theoretical model has that as a lower bound, but in actuality?
I think the more cogent counter-argument is that the pressure to stay in the U.S. has probably reached the flat part of the curve in terms of its effect on repatriating, so tighter borders probably do reduce migrant populations. The counter-counter-argument is that you still have to get illegal crossings below repatriation to have any net reduction, and this is going to be very, very small compared to the total population.
The problem with perfect border enforcement is that ~40% of the people not here legally are in fact documented: they documented a short trip (or even a multi-year one) and then didn’t leave.
Making voluntary repatriation easier might still help, though.
Oliver Sacks was also a cousin of Abba Eban!
Intellectuals and artists tend to be related to other intellectuals and artists. This is especially true for Brits and for German/Austrian Jews.
In Britain, the same surnames keep popping up over many generations. Hamiltons, for example, are twice as likely to attend Oxford/Cambridge as Smiths or Jones.
At the start of the essay Sacks notes that his mother had 17 brothers and sisters.
It’s a lot easier to have notable cousins when your family fertility is sky-high.
Switzerland’s situation reminds me of your concept of a walled garden. As long as you keep the place tidy enough, more people will want to hang out there.
Unfortunately, Switzerland does not appear to be interested in taking over the world.
> Unfortunately, Switzerland does not appear to be interested in taking over the world.
Borges hoped it would, eventually. (Link to poem in Spanish; couldn’t find an English version).
I like Borges’ sentiment there; it seems like a very functional example of multiculturalism as far as I can tell.
Maybe the other regions who want to join should just together create another country and call it “Switzerland 2”. Possibly copy the same laws Switzerland has (with reasonable modifications to the laws containing words “Switzerland” and “Switzerland 2” to avoid possible exploits).
That would be pretty amazing, especially if their borders formed a ring around Switzerland so that it would be totally surrounded.
The blog post about the charter school starts with the context of the switch to Common Core. This suggests two possibilities: (1) that the charter chain had always been great, but the new higher standard made that easier to see; (2) that the charter chain was better able to turn on a dime and implement Common Core. Case (2) would be a temporary and probably not meaningful phenomenon.
> Case (2) would be a temporary and probably not meaningful phenomenon.
…assuming that you believe that the Common Core is the platonic form of education, and that the ability to rapidly iterate through teaching standards in an evidence-based way is a non-benefit.
If you mean something, which I doubt, you probably left out some words and reversed it. I’m not going to put any effort into guessing what you meant.
High confidence that rossry meant that, if educational curricula can be improved, the ability to do a rapid change-over to a better curriculum is meaningful, and not a temporary benefit.
One of the ways of thinking about school systems is that the errors are errors of sclerosis, more than technique, ideology, or inputs. It’s not the whole truth, but it does seem true… and does imply that the ability to adapt is valuable.
(Still requires judgment about how to adapt, but that’s what rossry’s ‘evidence-based’ was likely assuming.)
notes is entirely correct about what I meant, though I’ll concede that I was perhaps overly telegraphic last night.
How would one, in principle, distinguish errors of sclerosis from errors of technique, &c.? It seems like similar questions arise in many different policy fields, but it’s not obvious to me what factors one should be looking at here.
How?
Look for perseveration.
If one thinks that schools are not focused on education (howsoever defined), that’s a criticism of ideology; if one thinks that schools do not educate competently, that’s a criticism of technique; if one thinks that schools cannot make bricks without straw, that’s a criticism of inputs.
If you think that many agree that schools are dysfunctional (including many or most teachers/administrators), and yet notice schools just go on in the same old way, and fail in the same old way… that’s a criticism of sclerosis. The link between error and correction, by way of acknowledged consequences, has been broken.
Perseveration.
> If you think that many agree that schools are dysfunctional (including many or most teachers/administrators), and yet notice schools just go on in the same old way, and fail in the same old way…
These facts also describe a failure of inputs, with the schools doing the best they can (which, since the inputs don’t change, doesn’t change).
This coupled with the civil asset forfeiture scam seems like a recipe for disaster.
How big a sample pool is “Germans older than those who grew up in the 1930s and are still around to respond to polls today?”
> How big a sample pool is “Germans older than those who grew up in the 1930s and are still around to respond to polls today?”
Now I’m curious whether you could use this to make an argument for some form of correlation between ‘being anti-Semitic’ and ‘living a long life’.
Especially in Germany.
Of course, there also might be other common factors in the group of “Germans who were alive during World War II and are willing to answer poll questions in 2015 about their opinions on Jews.”
> here is a blogger suggesting they use attrition rather than selection per se
This is not the point that said blogger is making; in fact, with the 47% overall attrition rate they arrive at, even if every student who left had failed, Success still would have outperformed the base rate.
Instead, their point seems to be that high attrition rates are a bad thing in themselves.
If other-report of CANOE is better than self-report, that seems like it has many consequences that can be tested apart from heritability. Actually, I don’t see many direct consequences, because it depends on the way in which self-report is inferior. (Is it noisy? Is it biased towards desirable answers? Is it biased towards 0?)
The meta-analysis on the paleo diet includes only 4 studies. The control diets are “Consensus Mediterranean-like diet,” “Diabetes diet in accordance with current guidelines,” “Nordic Nutrition Recommendations diet,” and “Dutch Health Council guidelines for a healthy diet.”
Those sound pretty legitimate to me, although I don’t know enough to make sure they did a good job with them.
The problem to me is that I seriously doubt those are macro-equivalent diets. One of the really common criticisms of paleo in fitness communities (where paleo is incredibly popular) is that its benefits come from producing a high-protein, mid-carb diet more or less by accident. A very superficial glance at the Dutch Council recs confirms that it’s basically “eat unprocessed foods and whole grains with less salt.”
On the flip side, telling people to stop eating grains/sugar/whatever else paleo advocates claim our ancestors didn’t eat (but really did) may be a far more effective intervention than macro counting and protein supplementation, because it’s mentally easy. I don’t want to count it out even if I’m right; I’m just not easily convinced by comparing a diet I think is suboptimal to one I think is incredibly suboptimal.
As always, Libgen is your friend: https://www.dropbox.com/s/rlmyrjwnw609wmw/2015-manheimer.pdf / http://sci-hub.org/downloads/8e82/10.3945@ajcn.115.113613.pdf
Besides the “metabolic winter” study someone posted a little while back but which I can’t easily find right now (basically that we’re designed to be active and put on fat in summer and be inactive and burn fat during winter, but now we stay all warm and well fed and active throughout winter thanks to refrigerators, heaters, and artificial lighting), I have been thinking about a slightly more meta reason for the increase in fatness, especially over the past 30 years (which wouldn’t seem to have varied so much in the above regard anyway):
Basically it’s a vicious cycle created by the diet and food industries:
Step 1: Nutritionist/Diet Book Authors/Science Journalists
Create a new diet amounting to: “eat whatever you want except x” [fat, carbs, grains, foods with the letter “p” in them]
What actually causes you to lose weight (I am assuming here) is to take in fewer calories total. What actually causes you take in fewer calories total (I am assuming here) is to not eat heavily processed, enriched, calorie dense foods which fool you into thinking you have eaten less than you really have. But that takes like, one sentence. Can’t write a diet book or sell a proprietary blend of protein powder or get invited to Dr. Oz for that. Plus, it basically amounts to “stop eating the most delicious foods.” That is not popular advice. So instead you tell people “just do this and it’s your holy grail.” People literally eat it up, and for a while, they really do lose weight! But here’s the key: the real reason they lose weight is that the new restriction forces them to cook more of their own food, read labels, and generally avoid a lot of processed, calorically dense foods.
Step 2: Food Industry
Create food catering to the new dieters, but which taste good.
Food industry notices “hey, suddenly everyone is into [low fat, low carb, gluten free, grain free, no corn syrup, no foods with letter “p”]. Let’s go to the lab and find a way to make pizza, cookies, cake, doughnuts, lasagna and all the things people love, but without [fat, carbs, gluten…]! Okay, it seems the only way to make lasagna taste good with no fat is to add a bunch of sugar… Okay, it seems the only way to make cookies taste good with no grain is to make them out of high-fat powdered nuts…
Step 3: Consumers
Yay! Now there are [low fat, gluten free, corn syrup free] cookies! I was getting tired of cooking everything myself and everything being kind of bland. Wait, why am I getting fat again??
Step 4: Diet Industry
Create a new diet amounting to “eat whatever you want except y…”
I find the winter hypothesis implausible. Why is a species that evolved in Africa going to have these adaptations? And if your theory is that the ancestors of Europeans picked them up as they moved to the temperate regions, then how come African-Americans have just as much of an obesity problem as anyone else?
People living at very equatorial latitudes this wouldn’t apply to, but I think most of humanity’s ancestors came from places where there was at least some seasonal change. And even if not that, then change in availability of food. If food had been consistent, there would be no need for fat at all.
Also, it’s not as if Africa has no seasonal changes, though perhaps in the places where humans first evolved they were not very drastic. Looking at the averages for Ethiopia, where the Lucy skeleton was found, for example, the average daily sunshine decreases dramatically in July and August (monsoon), and the temperature can get pretty cool, if never very cold by most Anglo-American standards. I certainly imagine that any ancestors of ours living in Ethiopia 50,000 years ago would likely have slept more and eaten less during those months. In fact, I’m pretty sure people sleep more and eat less during monsoon season in places where it happens even now.
Also, I think seasonal change is part of the reason so many animals use vitamin D from sunshine (though I think nocturnal animals get it more from food?). And not coincidentally, we store it in our fat, and higher levels of vitamin D seem to be associated with higher fertility. So you eat and be active and have sex all summer and by say, September (in the northern hemisphere), when your vitamin D levels are at the highest, hopefully you are a bit fat, and, if female, pregnant. This gets you through the winter and ensures your baby won’t be born *in* winter, when chances of survival would be lower.
But the “we’re tropical animals” (at least for a significant part of our evolution) hypothesis is also plausible in many ways: after all, where but in the tropics can we be naked all year? The proponents of this view also tend to promote high-carb, since the tropics have fruit. The people eating high-fat tend to be living in very inhospitable climes like Alaska, which would tend to indicate that carbs are the “preferred fuel” and fat the alternative fuel (along with our brain’s use of glucose first, ketones second).
What is your source for daylight hours in Ethiopia? This says that at 15N (into Eritrea) daylight ranges from 11 to 13 hours.
Wikipedia. Daily sunshine hours, not daylight hours. Presumably it is much cloudier during the rainy season. Only 3 hours of sunshine, on average, during July and August, as compared to 6 to 9 for most of the rest of the year.
Oh, yeah, you did say that the minimum was in August.
But do we care about hours of sunshine? This claims that a solar panel pointed south in Addis Ababa will receive 30% more light in February than in July. Except for the “pointed south” part, this is what matters for plants and vitamin D. That doesn’t sound like a big difference to me. But there are other seasonal differences. Maybe water is the limited resource and February is down time.
Dry season / wet season is a big deal in the tropical parts of the world, but when they occur tends to differ from place to place.
They don’t.
They have the same BMI effect from modern America as anybody else, but body-fat percentage is lower.
Apparently not true of AA/EA adults in general, though true of kids in general and college athletes. Hmm.
Given the subject matter of recent discussions around these parts, I initially unpacked that as “Apparently not true of Affirmative Action/Effective Altruism adults in general…”
It’s a theory more about mammals than humans in particular. The theory is that mammals have occasionally had to survive periods in which they are cold and cannot find adequate food, hence have code in their genome that produces a capability to hunker down and spend fat reserves to stay alive through the winter. If it works for bears and squirrels it might work for us too.
The animals that live near humans got fat at around the same time we did. Dogs and cats and mice and even laboratory rats are all a lot fatter today than they were in the 1970s. Their food has not been engineered for maximum tastiness like ours has and they don’t follow human diet fads. So what changed since the 1970s for us and our pets? One thing that changed is we got rid of winter – via better heating and insulation and clothing. People and animals used to “feel cold” in the winter or at night at least some of the time. In the 1970s the president encouraged Americans to lower the thermostat, let their houses be cold and wear more sweaters and blankets. To quote President Carter:
Ray Cronise’s Metabolic Winter paper.
The Wired article.
Ray’s later work suggests even pretty small seasonal temperature changes can produce this effect.
Hah, maybe the reason I lost weight when I lived in Japan was because I had no central heat and the walls were paper thin…
Have an explanation for some but not all of weight gain: Dieting!
A fair number of people (anyone have information about the proportion?) can lose weight, but regain it plus more. I see 25 pounds frequently cited for the rise in set point.
There are people who have gone through this cycle 3 or 4 times, and being 75 or 100 pounds heavier is a big deal.
Also, I’m not sure whether it’s still as common, but women used to be strongly encouraged to diet while pregnant, which might cause famine adaptation in their children.
For different angle, what proportion of the population is on meds which have a side effect of weight gain?
I, on the other hand, can only maintain a healthy weight by means of periodic dieting.
I know they tell you you need to cultivate good habits, change your lifestyle instead of going on a diet, etc. etc. And I do have some reasonably good habits: I exercise, I don’t drink soda, I don’t usually indulge too heavily in fast food and the like.
But the problem is, if I eat food I like until I feel satisfied every day my weight inevitably starts to creep up. The only way to compensate for that is to periodically put myself through some mild deprivation, be it actual fasting, intermittent fasting, eating strictly vegan for a few days, or what have you.
I’m sure I *could* devise a diet which I could eat every day and maintain a healthy weight, but I would rather suffer some hunger pangs every once in a while than swear off cheese and chocolate for the rest of my life.
Doctors and nutritionists complain about so-called “yoyo dieting,” but honestly, a very mild yoyo is the only thing that works for me.
>women used to be strongly encouraged to diet while pregnant
I saw the complete opposite i.e. “Eat up, clean your plate, as now you must eat for two!”
> stop eating the most delicious foods.
I’m thinking of doing a new diet for myself that’s based upon this idea. Anecdotally it seems to make a lot of sense.
How would one make this work? Where do you draw the line?
Well, you could do the diet recommended by McDougall. Basically, just avoid artificially “concentrated” foods. How do we concentrate calories? Oil in a bottle, drying, deep frying, refining sugar (out of sugar cane or beet), etc. etc. That is, you might say, “it’s okay to eat the natural fats in avocado, nuts, etc. but not to add a bunch of olive oil,” or “it’s okay to eat as much fructose as I want in the form of fresh fruit, but not high fructose corn syrup, dried fruit, or fruit juice (dried fruit takes out the water, fruit juice takes out the chewing).” Similarly, try to avoid things with fiber removed: brown rice instead of white rice, etc.
If you think about it, much of what makes food extra delicious is, in effect, pre-digestion and/or concentration: break down the protein, take out the water, fiber, etc. and reduce it down to just pure, yummy calories. Trying to avoid foods which have been subjected to too much of such processing seems a good idea.
McDougall is basically: eat a lot of starch and beans and fruit and veg, but no added fat or sugar. He’s usually okay with salt if you don’t have high blood pressure.
> it’s okay to eat as much fructose as I want in the form of fresh fruit,
I can eat a lot of delicious strawberries and bananas and pineapples…
Modern fruit are already “artificially concentrated” through selective breeding for taste.
BTW, what is this “McDougall”? This https://www.drmcdougall.com/health/education/free-mcdougall-program/ ?
The McDougall stuff looks incredibly time-consuming. I think I will buy pre-made versions of that. Tinned beef stew, add extra lentils and mayo perhaps.
I’m skeptical of the metabolic winter theory because if it were true, one would expect obesity rates to be higher in places with colder climates, since those would have seen the biggest changes as a result of heating technologies and so on. But that isn’t the case. Venezuela, nearly all of which is less than 12 degrees from the Equator, has obesity rates matching America’s, while Colombia has rates that are half that despite being at pretty much the exact same latitude.
It looks like colder places do, on average, have higher obesity rates, though that is very heavily confounded by them also being, on average, more developed:
http://aswwu.com/collegian/wp-content/uploads/2014/12/Global_Obesity_BothSexes_2008.png
Honestly, this hypothesis is also interesting, but there’s really only one factor I can point to which nearly all cultures without an obesity problem share: they get most of their calories from grains, legumes, and vegetables.
You can point to particular individuals who have lost weight eating low carb, but it’s very hard to point to any large group or traditional society which eats a lot of animal products and also has low incidence of obesity and cardiovascular disease. The Inuit, for example, supposedly able to eat huge amounts of fat while suffering low rates of cardiovascular disease, actually seem to have the same rate of heart attack as, and a higher rate of stroke than, people eating a more modern diet: https://www.minnpost.com/second-opinion/2014/08/fish-oil-and-eskimo-diet-another-medical-myth-debunked Heart disease has even been found in Inuit mummies, proving it’s not the introduction of the Western diet corrupting them somehow.
Compare to the 7th Day Adventists, Okinawans, South Indians, etc. etc.
As for the bombing of Vietnam making areas richer, a possible cause for those areas being better off is infrastructure reworks. The idea is that infrastructure was set up ages ago and was optimized for the resources and concerns of the time and as time passed was only modified incrementally. Heavy bombing would force the locals to rebuild everything from scratch presumably better organized than before.
I remember a link to a study comparing cities in Roman empire vs non-roman empire and their locations relative to natural resources but can’t find it now.
That sounds plausible – Japanese cities are amazingly easy to get around because their infrastructure grid was all made post-WWII when they knew they were going to need big roads for cars, etc. The less-bombed cities of Europe are a big mess.
I wonder how long it takes before bombing has a net positive effect on GDP, and how cheap it is compared to other interventions that raise GDP by the same amount…tell you what, let’s not let the effective altruists know about this one.
I suppose the natural comparison would be with the rebuilding of San Francisco after the earthquake of 1906 where 80% of the city was destroyed; new San Francisco better off than old San Francisco?
I think about 80% of the city of New York was torn down and rebuilt over a roughly similar span, so doing it all at once catastrophically might just expedite what would have already been done anyway. Of course, similar or better long-run outcomes don’t mean much to all the people who died in fires and they count, too.
There is a massive difference between rebuilding an entire city all at once and doing it piecemeal. When you replace things one place at a time, you are constrained by the demands of all the places that you are not currently wrecking. No such thing when the whole place has been flattened.
I find Tokyo a nightmare to get around, and it was bombed heavily. I find Kyoto super easy to get around, and it was one of the few places not bombed. Kyoto’s good design is thanks to Tang Dynasty fengshui rules which were followed in its design (in imitation of Chang’an). Tokyo, by contrast, is a massively overgrown fishing village.
Not that I don’t sympathize: when I was working in Boston, I often felt the place could use a good fire or two. I hate destruction of culture, but the roads there are just a total nightmare. Of course, it also has a lot to do with where you build: Chicago is pretty flat and open and so follows a nice, logical grid. Boston, like Tokyo, is on the water and more mountainous, and so is a similar spaghetti-string mess. (Kyoto is surrounded by mountains but not too hilly within, and so may thank that also; but again, we have to thank the court geomancers for picking the location.)
Re. infrastructure in general: being late to the game does seem to come with advantages: compare the New York subway system to the Taipei subway system (the latter is much nicer).
What about Tokyo is so troublesome? (If you intend driving in Tokyo, I withdraw the question: that is nightmarish. On the other hand, there are usually better ways to get around Tokyo.)
It just takes an hour to get door-to-door between any two locations of any distance: first walk to the nearest subway station, then walk THROUGH that no-doubt vast subway station. Then wait for train. Then walk through another vast subway station to transfer. Wait for second train. When you finally arrive at the subway station nearest your actual destination, go try to find your destination on the local map since half the streets have no names and you have to navigate by landmark.
Notes’ comment above is the point where the artificial avatar generation program has gone from “things that sort of remind me of swastikas” to “actual swastika”. Are there any other good such programs other than the ugly monster one?
@onyomi
You aren’t kidding on the map issue. Tokyo’s address system is nightmarish. The block numbers are rarely in any sort of order, and the building numbers… they’re in order of construction, not anything to do with physical location. But are things better in other Japanese cities? I believe Kyoto has some alternate address system, because the standard one worked even less well than usual.
And, sure, that hour door-to-door is about right. Could be shorter, if you or your destination are near a nodal subway station, but that’s by no means guaranteed. Still think that, for a major metropolis, not even making allowances for the density, that’s pretty good performance. Had much worse times trying to get around other large cities (outside Japan).
@Scott Alexander
No idea. Quick look says that WP standardized on gravatars, and with the winner picked, there’s been less work on the other options.
Kyoto is wonderfully logical, with the major east-west streets being named ichijou, nijou, sanjou… (Street One, Street Two, Street Three…), and the north-south streets having easy-to-remember landmark-based names. I haven’t spent enough time in other cities recently to remember how it works, exactly, but I seem to recall Tokyo’s lack of street-based addresses and number of unnamed streets being especially egregious.
The Japanese standard address system, to the best of my knowledge, is better described as working your way through a set of numbered Venn diagrams until you’re at a door. Works acceptably for unique identification on the tax rolls or similar; does not translate easily at all into a set of directions. Street addresses generally do.
Pretty sure Kyoto’s use of street-based addresses is the exception, not the rule, and driven by the failure of the standard address system in the face of Kyoto’s many historical tiny non-unique neighborhood names.
That is to say, Tokyo’s address system is indeed egregious, and indeed especially egregious… but not in comparison to most Japanese cities.
“and the building numbers… they’re in order of construction, not anything to do with physical location.”
Thanks. I remember my late cousin Jerry telling me that in 1968 when he came back from Army duty in Japan.
I spent most of my time in Nagoya, which was heavily bombed and then rebuilt on a principle of large streets in a grid network. I can’t speak much to anywhere else, though I seem to remember Tokyo being pretty okay given its absurd density.
I also lived in Nagoya for a few months. I do recall it being very pleasant and easy to get around. Kobe also has weird, everything-is-brand-new vibe (also an unusually Western vibe), but that is more because of the earthquake.
This is a long shot, but did you live in Freebell Mansion?
Hmm, nope. I was living with a host family back then. This was a pretty long time ago as part of a summer language study thing I did.
Washington DC has a nice logical grid that’s a nightmare to navigate for reasons having nothing to do with natural geography.
I can’t edit my post but I found the paper I mentioned above.
Abstract:
“Do locational fundamentals such as coastlines and rivers determine town locations, or can historical events trap towns in unfavorable locations for centuries? We examine the effects on town locations of the collapse of the Western Roman Empire, which temporarily ended urbanization in Britain, but not in France. As urbanization recovered, medieval towns were more often found in Roman-era town locations in France than in Britain, and this difference still persists today. The resetting of Britain’s urban network gave it better access to naturally navigable waterways when this was important, while many French towns remained without such access.”
Interesting paper; so far I’ve just read the introduction. Do they explain why the Roman towns were located in suboptimal locations in the first place? The introduction mentions that the Romans placed their towns along the roads, but of course the Romans built the roads and I would expect them to have laid out the roads to connect either existing towns or what they perceived as good sites for future towns. What were the Romans looking for in their urban sites that the French didn’t find similarly valuable, or vice versa?
In Roman times roads were important; in post-Roman times coastal access was more important. In France, the Roman collapse wasn’t as bad, so the cities didn’t move and were stuck with bad coastal access. In England, the greater upheaval did let cities move, and they generally moved to more coastal (better) locations.
Relevant bit:
“During the Roman era roads connected major towns, and other towns emerged alongside these roads, since the Roman army (which used the roads to move quickly in all weather conditions) played major economic and administrative roles. But during the Middle Ages, the deterioration of road quality and technical improvements to water transport increased the importance of coastal access. In our empirical analysis, we find that during the Middle Ages towns in Britain were roughly two and a half times more likely to have coastal access – either directly or via a navigable river – than during the Roman era. In contrast, in France there was little change in the urban network’s coastal access over the same period.”
Right, but that treats the roads as arbitrary. I can certainly believe that the post-apocalyptic Kingdom of Georgia will be ruled from its capital in Atlanta because that is where the Six Eternal Roads were laid down by the Great and Mighty Eisenhower.
The Romans built their own roads. If there was any reason at all to build towns at rivers, coasts, and harbors – and I cannot believe that water transport was wholly unimportant in the Roman Empire – why did they not build the roads to those sites rather than to the otherwise unexceptional sites they actually chose? Which came first, the city or the road? Was that in fact the road crossed by the chicken that may have come first? Wait, I had a point here somewhere…
Oh, yes, the difference between Britain and France. In France, all roads lead to Rome and waterways lead to the barbaric frontier. If the Romans can build towns in the middle of nowhere and make them economically viable, that might drive them to avoid navigable waterways. But in Britain, roads could at best lead to harbors from which boats would lead (directly or indirectly) to Rome. The Romans might have had more reason to build their towns on harbors and navigable rivers in Britain, which would be a confounding variable with the more complete abandonment of Britain compared to France.
I’m going to have to read the whole paper now, aren’t I?
Because they were optimized towards being military camps, not towards being economic towns. Castrum -> Manchester, Doncaster, Lancaster, etc.
That raises the question of whether the lack of massive rebuilding in the absence of bombing is a market failure, or whether the finding that cities are improved by bombing fails to properly account for the cost of the rebuilding, including the time discounting for the delay between spending resources on rebuilding and the benefits accruing.
Language Log occasionally discusses these ostensibly untranslatable words. They tend to find that the claimed meanings of said words are much more whimsical and poetic than their actual usage in the language. For example, I wouldn’t be surprised if “sehnsucht” from that subreddit, “the inconsolable longing in the human heart for we know not what”, turned out to be roughly equivalent to “melancholy” in terms of its usage. For example: http://languagelog.ldc.upenn.edu/nll/?p=19128
German speaker here. Melancholy and Sehnsucht are two distinct concepts in German.
How does “the inconsolable longing in the human heart for we know not what” strike you as a translation?
Overly dramatic but not entirely incorrect. Wikipedia more reasonably suggests “longing”, “yearning”, “craving” or a type of “intensely missing”, but admits that it’s hard to translate adequately.
The claim that “Sehnsucht” is difficult to translate strikes me as very odd. The words in that list, together with “wistfulness” (in the case where the “Sehnsucht” has no determinate object), would seem to cover every possible use quite adequately. Of course there is no English word whose uses correspond one-to-one to that of the German word, but if that is your standard for translatability, then the set of translatable words in English may be exhausted by about a hundred lexemes like “table”.
Also German here. I would say the “inconsolable longing in the human heart” part is alright. But you can know what you have Sehnsucht for. For example, a person (source: the cheap love song “Sehnsucht nach dir” / “Sehnsucht for you” http://www.songtexte.com/songtext/wolfgang-petry/sehnsucht-nach-dir-63de528f.html).
If we’re linking songs, allow me to do the obvious and state that all I know about sehnsucht is that it’s so grausam 😛
C.S. Lewis in his novel “The Pilgrim’s Regress” invokes sehnsucht, and it’s not melancholy. It’s more a combination of nostalgia and longing for we know not quite what; something strikes us as reminiscent of some happy day gone by, some golden moment of the past, but it’s not the thing itself we desire, rather the intangible sensation of contentment or fulfilment.
Possibly elsewhere, but Lewis wrote about the longing itself being desired, and I think he took it as evidence that we aren’t entirely native to this world.
I remember most of that being in his autobiography, _Surprised by Joy_, 1955 — ‘Joy’ being his technical name for that feeling. He mastheaded the book with a quote from Wordsworth — the whole poem is at
http://www.poetryfoundation.org/poem/180628 — but I’d suspect that was signaling, as the poem uses ‘joy’ in its usual sense.
@ Deiseach
How about Lewis’s translation of the word Tao? He sounded like he was introducing an un-heard-of word to his peers in the 1940s.
My take is that Lewis says we have this yearning apparently triggered by some experience, yet when we try repeating the experience or the event, it does not satisfy us, and it is that piercing bittersweet yearning we then desire to have, if we cannot have what would satisfy it, as a second-best.
Because the desire itself is so plangent, even to have it is better than the satisfaction the things we can have provide; the things we can lay our hands on do not satisfy the longing, and since we cannot have the actual whatever-it-is that would satisfy it, the longing itself stands in for the unattainable.
To quote from Tolkien’s “Athrabeth”:
Yes, Lewis says that about his ‘Joy’. But I was asking about his ‘Tao’ in AoM, which is quite a different thing.
I often disagree with Jayman, but I agree that measurement error can drive down h2 estimates (there are other designs than aggregating self- and other-report available to study that, e.g. modelling the retest reliability). Also importantly, E may not always be things that people consider “environment” but also stuff like stochastic processes during embryonic development, not modifiable environment a la “peers”.
However, there are some things in that blog post which could serve to confuse, depending on how you understand the terms used.
He cites the Hanscombe study as showing that heritability isn’t lower on the low SES end. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0030320
That depends on what you mean by heritability. Jayman prefers to talk about percentages of the total variability (relative variation) and does so in that post. If absolute variability (a bigger SD regarding the number of correct answers on an IQ test) is higher among low SES twins, heritability as a percentage will go down. Figure 2 shows this very nicely, the variation stack is higher at the low end.
But that is only because shared experiences explain some absolute variation at the low end, but not at the high end (i.e. maybe the presence/absence of malnutrition and beatings matter, but making the babies listen to Mozart vs Beethoven doesn’t).
So genes don’t explain less absolute variation, but they do explain less relative variation (which is what Jayman and most people usually report), because other factors (E & C) matter more.
This is useful to keep in mind when commentators want to extrapolate such coefficients to populations where absolute variability and variability of the environment may be much larger (i.e. including general severe malnutrition, specific pathogens, prenatal Ramadan exposure and all sorts of things the populations studied so far don’t have).
I think this also shows that absolute variation is usually the more sensible perspective (but harder to grok). It should have been used in all the meta-analyses, but wasn’t.
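The absolute-vs-relative distinction is easy to see with made-up numbers. Here is a minimal sketch; the variance components below are hypothetical, chosen only to illustrate the point, not taken from any study:

```python
# Hypothetical ACE variance components in arbitrary units of test-score
# variance. Additive genetic variance (A) is identical in both groups;
# only the environmental components (C, E) are larger at the low-SES end.
high_ses = {"A": 60, "C": 5, "E": 15}
low_ses = {"A": 60, "C": 40, "E": 20}

def h2(components):
    # Heritability expressed as a proportion of total phenotypic variance.
    return components["A"] / sum(components.values())

print(f"high SES: h2 = {h2(high_ses):.2f}")  # 60/80  -> 0.75
print(f"low SES:  h2 = {h2(low_ses):.2f}")   # 60/120 -> 0.50
```

Genes explain the same 60 units of absolute variation in both groups, yet the heritability coefficient halves at the low-SES end simply because the environmental terms grew.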
In addition, Figure 2 shows that even eminent behaviour geneticists are wont to overfit by cluelessly using polynomials and not dividing their data into test & training sets.
What do you think of the specific claim that Big Five traits should be considered closer to 80% genetic than 40%?
I think it’s not entirely implausible, but probably an overestimate. Basically our methods for assessing intelligence are very good tests with great psychometric properties, our methods for assessing personality are self-report surveys with not so great psychometric properties (it’s not like we’ve carved nature at its joints with the Big 5 and self-report is of course inferior to testing). It is definitely a mainstream opinion that aggregating with observer reports will increase validity. They first aggregated two peer reports, then averaged this with self-report. There’s also a school that says you should weight each peer as much as the self report, so maybe there’s potential to increase validity even more.
That said, this is based on 919 twin pairs, it’s two samples in one country, the standard errors on those proportions are still high, the models didn’t fit snugly and even a well-intentioned researcher can inflate or deflate some values depending on the analytic choices he makes. Also this is just twins MZ/DZ, the full set would include adoptions and MZ reared apart. We tend to ignore this now because it’s hardly ever the pattern, but just with MZ/DZ you can’t actually say whether a high common environment, making DZ more similar, is canceled out by high nonadditive genetic variation, making DZ less similar.
So, further replication and especially pre-registered analyses would be nice since this sort of structural equation modelling lends itself to tinkering with the model after you’ve seen the results.
But even then, all you’ve got is a proportion. I think heritability as a proportion has been the source of a lot of misunderstanding. People tend to think “80% heritable”, genes don’t change quickly, so environmental influence is capped at 20%. But that’s not true. Just increase the total environmental variation (or go to a less WEIRD country than Germany) and soon it may occupy a bigger proportion. But yeah the graph in Hanscombe makes that point better than I can.
The measurement error issue (which is necessarily a given in virtually any measurement) doesn’t just emerge from one or two studies on personality, but from several.
In young children, there is little E (unshared environment) stability even when measurements are taken over seven consecutive minutes! See Burt, Klahr, and Klump, 2015. E measured at those ages is entirely measurement error.
Another twin study (Hopwood et al 2012), looking at personality at three points from age 17 to age 29, finds half of the E measurement is unstable while the other half remains. This indicates about half of the 50% of the variance ascribed to E is measurement error, while the other half reflects real phenotypic differences, putting the results in line with the 75% A+D / 0% C / 25% E split I’ve noted.
As well, there was Hatemi et al 2010, the large extended twin study on political attitudes that had a retest sample that gave a pretty good measure of measurement error:
https://twitter.com/JayMan471/statuses/553198598293577729
I’ve also explained in my post how teacher and peer ratings produce higher heritability estimates. Teacher ratings of child self-control (Coyne & Wright 2014) produced a heritability of 76%.
What about http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.0020132 ?
Heritability is a technical term. It means proportion. I agree that it is the wrong thing to look at, but if you look at something else, you have to use another word.
I know, and that’s why Jayman is technically wrong when he writes “The heritability was constant across SES”: it’s not; the absolute amount of variability explained by additive genetics is. But I take pity on him here, because this proportion business is not the way most people understand the term at first, so an incorrect way of saying it may actually lead to the right intuition (genes don’t matter less, the environment simply matters more).
Or maybe we could just be clearer in our language.
Which (along with the proportion of the variance explained by non-additive genetics) is the heritability. That’s the definition of the word.
I almost always agree with JayMan on most issues of behavior genetics. I have a query for him here, though.
GCTAs thus far haven’t really supported your hypothesis here, and not a single gene has been found that has replicable, environment-independent effects on any complex behavioral trait yet. How come?
And considering that you’re suggesting that the third law of beh. gen. should be amended based solely off of twin studies on personality traits conducted within the WEIRD world, well, that’s pretty weak evidence.
Interjection: folks have to learn the difference between a hypothesis and a fact. I’m reporting facts.
The ones I know about have, I don’t know which ones you’re talking about.
Yeah? And? So?
Primarily because behavior is the result of many genes of small effect. If you were following GCTAs, you’d know that.
After all, at least 84% of all genes are expressed primarily or exclusively in the brain.
Scott is saying that. Let’s review what the laws of behavioral genetics actually say:
These laws don’t have numbers attached to them. That the left over variance (which falls under the Third Law) is smaller than commonly claimed (10-30% instead of 50%) is still quite consistent with the Third Law.
Another way of interpreting the Third Law is this: “identical” twins (even when raised together) aren’t actually identical.
> The ones I know about have, I don’t know which ones you’re talking about.
Linking to a list isn’t really helping, so you’ll have to specify which GCTAs support your claims.
It seems to me that common variants are less likely to explain the heritability now. I’m still looking out for genes with statistically significant, reproducible, and environment independent effects.
(Unrelated note: none have found an allele w/ IQ raising effects)
Fundamentally, you can’t really do this type of analysis. This is because SES is also heritable. It’s hardly a random variable. Hence stratifying your sample by SES is attenuating some of the A in it, which is bound to give you unreliable/nonsensical answers.
What people are really interested in is heritability by race, which is the same for all races in the U.S.
This is really rubbish, too. We have behavioral genetic studies based on large, nationally representative samples. These give high heritability (for example, like 82% for the heritability of military service in the U.S.) and zero shared environment. Now if local environment mattered, it’d turn up in the shared environment in these samples. It absolutely does not do that.
You make a lot of claims here, can you source them? I’ve seen either evidence to the contrary or no evidence for basically all you say here, so I’d like to see where you’re coming from exactly.
> We have behavioral genetic studies based on large, nationally representative samples. These give high heritability (for example, like 82% for the heritability of military service in the U.S.) and zero shared environment.
But that’s what Ruben seems to be saying, you have your range of environments restricted to cohorts from the U.S., and not even from all possible environments in the U.S.
E.g. http://www.ncbi.nlm.nih.gov/pubmed/23725549
“…greater genetic variance was found in school environments in which student populations experienced less poverty. In general, ‘higher’ school-level SES allowed genetic and probably shared environmental variance to contribute as sources of individual differences in reading comprehension outcomes. Poverty suppresses these influences.”
I am not disagreeing with you, but what you’re saying here doesn’t debunk what Ruben says.
For the record, IQ measurements (and related measurements of cognitive ability) are unreliable in children. All these studies looking at young people and claiming to find this or that interactions are wasting everyone’s time.
You’d have to look at adults. The bottom line is simple: if there’s no shared environment influence in the combined sample, there can be no modulation by SES. This would mean IQs are more similar for a certain subset, which would lead to a non-zero shared environment overall (unless it’s matched by that much greater variability in the rest of the sample).
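The “no shared environment overall constrains the subsets” logic can be checked with a toy Falconer-style calculation. Everything below is hypothetical (the subgroup variance components and the equal-weight pooling of correlations are illustrative simplifications, not a real analysis): a shared-environment effect confined to one SES subgroup still surfaces as nonzero C when the subgroups are pooled.

```python
def twin_corrs(A, C, E):
    # Expected MZ and DZ twin correlations implied by ACE variance components.
    total = A + C + E
    return (A + C) / total, (0.5 * A + C) / total

def falconer(r_mz, r_dz):
    # Falconer's classic estimates: h2 = 2*(rMZ - rDZ), c2 = 2*rDZ - rMZ.
    return 2 * (r_mz - r_dz), 2 * r_dz - r_mz

# Hypothetical subgroups: shared environment (C) matters only at low SES.
low_mz, low_dz = twin_corrs(A=50, C=30, E=20)   # rMZ = 0.80, rDZ = 0.55
high_mz, high_dz = twin_corrs(A=50, C=0, E=20)  # rMZ ~ 0.71, rDZ ~ 0.36

# Naive pooling: equal-weight average of the subgroup correlations.
pooled_h2, pooled_c2 = falconer((low_mz + high_mz) / 2, (low_dz + high_dz) / 2)
print(round(pooled_c2, 2))  # 0.15 -- the subgroup C effect survives pooling
```

So a genuinely zero C in the combined sample is hard to reconcile with a substantial C hiding in one SES stratum, unless it is cancelled elsewhere, which is the caveat in the comment above.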
> For the record, IQ measurements (and related measurements of cognitive ability) are unreliable in children. All these studies looking at young people and claiming to find this or that interactions are wasting everyone’s time.
> You’d have to look at adults.
But based on what, JayMan? IQ measurement unreliability, assuming it’s a replicated problem, seems like an ad hoc explanation. Also, as a result of the P = G + E assumption, much of the non-shared environment is only the different norm of reaction of DZ twins to the shared environment. Only the shared environment of MZ twins is truly shared, in the sense that they have mostly identical genomes. On top of this is the evidence that G itself decreases once environmental variance increases. But aside from all that, Hanscombe et al. 2012 found that shared environmental influence appears to be greater in their low-SES samples, which are always undersampled.
Besides, the cited paper was about reading comprehension.
The existing research on SES x h2 interaction is based on a model where the main effect of SES is partialed out from the variable of interest, and the moderator analysis is done on the residual variance. The results may be incomplete because of this, but they are not biased by gene-environment correlation.
There are also methods where the SES variable is first biometrically decomposed, allowing for unbiased moderation analysis using unresidualized variables even when there’s a gene-environment correlation. See Fig. 9 in this paper, for example. Models like this haven’t been used in the IQ literature, though.
That’s fascinating about volunteering for the military and heritability. I recall Tom Wolfe talking in The Right Stuff in 1979 about how the officer corps was kind of a hereditary caste distinct from civilians.
It would be interesting whether environmental exposure to the military in your region would incline random youths to be pro or con toward the military. I suspect it would largely increase how strongly you hold your opinion, rather than which opinion you hold.
For example, consider Oceanside, CA (the furthest north suburb of San Diego) next to Camp Pendleton, the West Coast equivalent of the Parris Island boot camp for Marines. I can recall in 1975 a kid from Oceanside at debate camp complaining about how awful it was to live next to a bunch of violent, lowbrow jarheads.
But he was a policy wonk. I can imagine his lower-IQ, more jockish brother thinking the Marines were pretty awesome. For example, if you like fast, loud live guitar rock, Oceanside has, relative to other exurbs, a pretty happening live music scene because of all the 19-year-old Marines in the area.
Anyway, my point is that kids in Oceanside probably have relatively strong opinions, whether pro or con, on joining the Marines, while kids in Chicago probably have relatively weak opinions on joining the Marines because there isn’t a lot of Marine oriented local environmental influence.
But a study of heredity might not register much difference between Oceanside and Chicago because the influence of the different environments is, perhaps, more in variance than in mean.
That’s a good example, especially because it moves away from the things that people have strong preconceived notions about. Better to get the principle right while talking about something you care little about, then apply it consistently.
Another good example might be myopia. It’s probably completely a gene-environment interaction. Depending on the amount of whatever the environmental factor is (outdoor play, near work; both sometimes rejected, but maybe just because they’re hard to measure right), there is a high or low incidence of, and variability in, myopia. Assuming it’s outdoor play, it’s always the same ~50% of those who don’t play outdoors who have the genes that allow the eyeball to adapt by elongating. But if nobody plays outdoors (Singapore now), the incidence of myopia and the variability in eyeball length are higher than if almost everybody plays outdoors a lot (as before the spread of books and computers).
This may explain the secular increase in myopia even though it’s heritable.
@Ruben:
Throughout your discussion on the matter here, you’ve been confusing the issue by muddying your terms. Let’s clear some things up:
First, all human traits result from “gene-environment interactions”. The genes depend on environment for expression.
But that’s not what behavioral geneticists mean when they say “gene-environment interaction.” There, they mean genetic effects modulated by (typically some measured) environmental variation within the sample (typically within a cohort). That’s quite different from environmental variation that affects everyone at once. In other words, the “unshared environment” in behavioral genetic studies refers to environmental variation within a cohort; variation between cohorts is typically not detected. To use the recent rise in obesity as an example, the incredibly high heritability (>80%) and low “unshared environment” have nothing to do with the environmental variation that causes change between time periods.
You can use the zero shared environment and the high heritability to learn something about the between-cohort environment, but that’s an “advanced” topic…
@JayMan You seem to have understood this just fine. It’s an atypical example, but usually it drives the point home.
Sorry to trouble you with advanced topics 😉
This would still turn up in the shared environment. Twins raised in places with “weaker” opinions would be then more similar than those who had stronger opinions. That would show up as excess DZ twin similarity, unless we had a perfect cancelling effect (unlikely).
He was making an example where you have one sample in Oceanside and one in Chicago.
If you collapsed those two samples, then yes. But just from a twin study in Chicago, you won’t find out about the effect of the environment.
Now can you apply what you’ve understood to the problem of generalising the h2 in Germany to the h2 in Zimbabwe?
Or hell, what about the h2 in areas formerly within East Germany to the h2 within West Germany, or vice versa.
Moving the goalposts much? First you rely on these analyses in your blog post. Now, you say they cannot be done.
All this, when once in a hundred years, I am actually agreeing with you and just allowed that Douglas’ nitpick is correct.
You use the technical term heritability incorrectly: heritability goes down at low SES, but that’s because the total variability went up due to the environment. It’s entirely correct to say that genes still matter just as much / explain as many differences in the number of correctly answered questions as at the high end, but it’s not technically correct to say h2 didn’t change, because h2 is a proportion.
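The “h2 is a proportion” point can be made concrete with a toy variance decomposition (the numbers below are made up purely for illustration, not from any study):

```python
# Toy illustration: heritability h2 = V_G / (V_G + V_E) is a proportion
# of total phenotypic variance. Hold genetic variance fixed and increase
# environmental variance, as might happen at low SES.

def h2(v_g, v_e):
    """Heritability as the genetic share of total phenotypic variance."""
    return v_g / (v_g + v_e)

v_g = 60.0                        # genetic variance (hypothetical units)
high_ses = h2(v_g, v_e=40.0)      # 60/100 = 0.60
low_ses = h2(v_g, v_e=90.0)       # 60/150 = 0.40

# Genes "explain as many differences" in absolute terms (v_g unchanged),
# yet h2 drops because the total variability rose.
print(high_ses, low_ses)
```

The genetic variance never moved; only the denominator did, which is exactly the distinction being argued about above.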
Regarding h2 by race: listen to yourself! Because SES is heritable it cannot moderate IQ h2? And ancestry/race isn’t heritable? At least be consistent with your incorrect ideas.
See the other comment below on why representative twin studies in the US aren’t the whole picture.
This pretty much gets at the heart of the problem: people so often misunderstand what heritability is, what it can tell you, and what it can’t.
It’s a snapshot of variance in a particular place and time, not outright “genetic effects.” Just because in some range of environments you have higher environmental variance doesn’t mean genes don’t express (ignoring possible GxE interactions); it’s just that, as you say, the total variability increases.
Finding the genes is another matter entirely, but the heritability estimate by itself is not that.
We have more evidence than that. A whole lot more.
Ruben,
Somehow, I find the idea of people here lecturing me of all people on the meaning of heritability rather laughable.
You clearly read a lot about heritability. You still used the technical term incorrectly in that sentence.
Admitting minor mistakes like that won’t make people laugh at you; the ability to admit mistakes will make readers respect you. And actually, your larger point in that post stands. So just do it.
Now, you consistently try to brush off the problems related to variability that hasn’t been seen enough in twin studies (e.g. severe malnutrition, tropical diseases), and you interpret heritability as if the variability that can exist were capped at the levels seen in the WEIRD world. That’s a bigger mistake, and admitting it jeopardises a lot of the ideas that you hold dear.
I can understand why you get testy when this mistake is pointed out to you.
All in all, maybe this goes to show that even people beyond the event horizon have a reputation to protect (in a certain corner of the internet).
People keep talking a big game about gene-environment interaction, but they never seem to produce good evidence that it figures into cases people care about.
Turkheimer did indeed produce a study that seemed to show that h2 for IQ went down considerably at lower SES levels in the US. But, among other problems, the data was quite old (from about 50 years ago), and the sample quite small.
A more recent study based on current data with much larger numbers (8 times larger) showed minimal differences between SES levels, with the lowest level of SES having h2 of .55, and the highest level .63. That’s not a difference likely to have any important societal impact.
http://link.springer.com/article/10.1007/s10519-014-9698-y
So, if anything, these more recent results would indeed suggest that, yes, h2 may well be affected by the particular era in which it is measured, but that today’s SES environments have relatively little impact on heritability of IQ.
Thus, while in general it is of course almost trivially true that the gene-environment interaction can be very important — no one thinks Kaspar Hauser just out of his cell was not going to have his IQ affected by his experience — the evidence that gene-environment interaction has a significant impact in contemporary society is mostly non-existent.
“Gene-environment interaction” as generally thrown around in these contexts is just a blast of sand into the eyes of science, like “epigenetics”, or the “X factor” in racial IQ differences. There’s no real evidence that any of these factors play any real part in the phenomena of interest. But they are escape hatches from the unwelcome conclusions that drive so many who write about these issues.
This is useful to keep in mind when commentators want to extrapolate such coefficients to populations where absolute variability and variability of the environment may be much larger (i.e. including general severe malnutrition, specific pathogens, prenatal Ramadan exposure and all sorts of things the populations studied so far don’t have).
What about undernutrition, or poor diets even within the developed world (if the U.S. still counts as the developed world), which may amount to undernutrition? Why rush to “general severe malnutrition” right away?
Different groups have differing cognitive capacities, so improved nutrition would allow individuals to meet this capacity, but not exceed it. Maybe that’s why the Flynn Effect has actually gotten weaker in Western countries.
I don’t think undernutrition has been studied particularly well. Of course, there’s differential recruitment into and attrition from twin studies in the WEIRD world, so we haven’t necessarily seen all that exists here.
Potentially some studies could have tapped into it, though of course if only one or a few families in your sample suffer from undernutrition, it won’t explain a lot of variance (even though the effect size could be big).
But I wanted to make a clear-cut example: severe malnutrition, prenatal Ramadan exposure, and tropical diseases have not been studied well at all, and even less in twin studies, because we hardly have any twin studies in e.g. Africa (I think there’s one in the Gambia on early developmental outcomes).
@grey enlightenment: That’s just talk. We don’t know the cognitive capacities of different groups if you allow for understudied environmental aspects to permanently affect IQ. Or how did we measure cognitive capacity independent of IQ and nutrition? Must have missed that.
Some recent figures tell us that 46 million Americans, including some 15 million children, live in homes without enough food, relying on donations and such. I would agree with you that the effect size can be pretty big, but twin studies aren’t built to pick that up well, so the true effects can be hard to pin down.
There’s no modulation of heritability by race in U.S. samples. This is largely a non-issue for cognitive ability in the U.S.
What sources do you refer to for that, JayMan?
“Since we’ve been discussing coming up with numbers to estimate AI risk lately, try Global Priority Project’s AI Safety Tool. ”
The idea of breaking down a probability estimate (or any estimate) into smaller pieces is that if each part is a roughly equally relevant contribution to the final estimate, then (hopefully random, independent) errors in estimating each part will tend to cancel out, leading to better estimates of the desired quantity. It also allows for clarifying the thought process that leads to an estimate.
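The cancellation claim can be sketched with a toy simulation (entirely my own construction, not part of the tool being discussed): decompose an estimate into k multiplicative factors. If the log-errors on the factors are independent, they partially cancel (total spread grows like sqrt(k)); a single holistic guess whose error infects every factor compounds linearly (like k).

```python
# Toy model: total log-error of a k-factor product estimate, comparing
# independent per-factor errors against one fully correlated error.
import math
import random

random.seed(0)
k, sigma, trials = 4, 0.5, 20_000

def log_error(correlated):
    if correlated:
        e = random.gauss(0, sigma)
        return e * k                                       # same error in every factor
    return sum(random.gauss(0, sigma) for _ in range(k))   # independent errors

def spread(correlated):
    errs = [log_error(correlated) for _ in range(trials)]
    mean = sum(errs) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in errs) / trials)

# Independent errors grow ~ sigma*sqrt(k) (~1.0 here);
# correlated errors grow ~ sigma*k (~2.0 here).
assert spread(correlated=False) < spread(correlated=True)
```

This is exactly why the decomposition only helps when the parts are estimated roughly independently; it does nothing if one part carries all the uncertainty, which is the complaint that follows.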
This “tool” does neither of those things. It breaks the estimate down into one relevant/controversial number (the probability of eventual AI risk) and three useless details (how much adding AI safety researchers will help). This is completely question-begging. If I assume that AI risk is real, then of course adding researchers is a good idea. So in this process the only relevant number is the probability of eventual AI risk, and the “breakdown” of probabilities has added nothing to the analysis. Which brings me to the best (worst) part: the probability of AI risk is selected from a drop down list where the lowest probability available is 0.01%!! Are you kidding me??
I actually know a lot of people who think AI risk is real but there’s no point in researching it right now – that’s why I wrote https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/ .
That’s a reasonable discussion to be having, though I think the tool is also structured in a way that discourages the user from thinking clearly about those issues. There is no pull-down box for the question “what is the probability that current AI friendliness researchers will produce any results of value in our lifetimes?”
The point is, though, that you can’t count the tool as an argument against Matthews’ article when it’s based on a multiple choice question that doesn’t include any of the choices that Matthews would have picked. His answer to the first question, like mine, would surely have been far less than 0.01%. It’s a completely silly exercise.
I didn’t present it as a refutation of Matthews; I wrote one of those myself. I presented it as an intuition pump hopefully useful for the majority of people who don’t have as extreme skepticism as you do.
Actually, I want to push at that – in my opinion your confidence is absurd. Suppose we follow Bostrom’s survey and represent AI risk as a conjunction of three probabilities: probability of human level AI this century, probability of superintelligence within 30 years conditional on that, and probability of very bad outcomes for humans conditional on superintelligence (please excuse vagueness of these definitions and feel free to clarify if necessary). What would your estimates for each of those be?
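The decomposition being proposed is just a product of conditional probabilities; as a minimal sketch (the numbers below are placeholders for illustration, not anyone’s actual estimates):

```python
# Conjunctive risk estimate, Bostrom-survey style:
#   P(bad outcome) = P(human-level AI this century)
#                  * P(superintelligence within 30 years | human-level AI)
#                  * P(very bad outcome | superintelligence)
# Placeholder numbers only.

p_hlai = 0.50   # human-level AI this century
p_si = 0.50     # superintelligence within 30 years, given the above
p_doom = 0.10   # very bad outcome, given superintelligence

p_risk = p_hlai * p_si * p_doom
print(f"P(AI catastrophe) = {p_risk:.3f}")  # 0.025
```

Because the estimate is a pure conjunction, driving any one factor toward zero (as the 1-in-a-million figure below does for the first one) dominates the whole product.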
Fair enough. I should say I think I came off as more confident above than I really am. If forced to defend probabilities related to your three propositions, I will pick numbers much lower than you picked in your post on the subject, but my true uncertainty is much more “Knightian” in nature (in this respect I think my views are similar to su3su2u1, for example).
That said, my answers are as follows:
Prop 2 I regard as highly probable. If we create true AI, it should be able to improve itself.
Prop 3 I really have no idea, and it’s hard (for me) to see how anyone could have any idea. I can say that my intuition is that future AI programs are likely to be more like current programs in that they have certain defined inputs and outputs, and aren’t able to do harmful things unless we specifically give them the ability to do harmful things. Let’s say 10% for this one. As an aside, I’m totally onboard with concrete research into goal alignment for things like autonomous drones, self-driving cars, etc.
Prop 1 is where my views differ most from yours/Bostrom’s and where I have some confidence in my intuition. I would assign our chances of creating human-level general AI in the next century or so as very, very low, maybe 1 in a million or less. I have a few reasons for this.

One, the Moore’s law estimates of the time needed to develop human computing power are off by many, many orders of magnitude. This is due to poor assumptions about the computing power of single neurons, which is much greater than most people doing these estimates think (Larry Abbott is a prominent theoretical neuroscientist who has influenced my views on this; you can see some of his review papers on the subject). Bostrom’s estimates tend to assume a computing power of around 100-1000 MIPS for the human brain; I think the right answer is more like 10^9-10^10 MIPS.

More importantly, I think our understanding of general intelligence is extremely poor, and that even developing this kind of computing power (which we will do in the next 50-odd years) will not lead to true AGI. Consider the example of the nematode C. elegans. The C. elegans brain has exactly 302 neurons (vs. ~10^11 for humans). We’ve had a full wiring diagram for the nematode brain since the mid-80s. And yet, we have pretty much no idea how to simulate or understand its behavior in complete detail (see a recent review by Bargmann and Marder for some reasons why). And this is for an animal with a very simple repertoire: basically chemo/thermo/photo-taxis, eating, pooping, and squeezing out clonal offspring. The circuitry involved in even one human behavior, like natural language, is likely to be mind-bogglingly complex. I’d guess that understanding it well enough to truly simulate it will take hundreds of years. In this case friendly AI research really does seem premature, and not unlike Isaac Newton deciding to spend his days thinking about preventing pandemic disease risk (in an era when no one believed in germs).
Anyway, that’s my two cents. I should reiterate that I view assigning numerical probabilities to these things as slightly ridiculous, but I have no suggestion as to how else we should think about them. Maybe we should all be working on Dempster-Shafer theory or something.
My problem is how do we get an estimate of how likely AI is and in what timeframe? What do current AI researchers think? What estimates are they floating about “We expect to have animal-level intelligence in so many years”, never mind human-level?
I tend to ignore Bostrom et al. because they’re so convinced of the risk. I want sceptics, I want guys working on the real problem right now saying “yeah, well, we’re about thirty years away from having anything as smart as a cockroach” or some number I can grab hold of, not that tool which, as Phil points out, is “pull a number out of the air”. I did pure guessing and ended up, quite coincidentally, with the “1 in 10 billion” figure they referenced later down the page.
Basically, what I’m saying is of course people who think AI risk is really a real threat are going to put high probability on it happening relatively soon, and high estimates of bad outcomes if we don’t make sure to start working on it right now, and high estimates of “gazillions of future happy human interstellar colonists” if we do.
And I’m sorry, but that’s Cinderella’s Fairy Godmother stuff. “If I get a pumpkin changed into a carriage, then I can work on having mice turned into coach horses, else I’ll never get to the ball to meet the prince and never live happily ever after”. There’s too much “human level AI will turn itself into god-level AI and if we get it right it will be our Fairy Godmother solving all problems and giving us unimaginable riches and opportunities” without any real “Here’s some hard estimations about how close we are to getting human level AI from people who aren’t all in our fanclub”.
So I tend to agree with you that human-level intelligence is really hard, but I think it’s almost inconceivably overconfident to put “less than one in a million” probability on anything at all like this.
Remember, that means “I could make one million predictions approximately this difficult and controversial, and only be wrong once” (as an intuition pump, if it took you no time at all to choose these predictions, and you just had to say them aloud, and each prediction took only five seconds to say, this would take over a month). Can you imagine making one million predictions on what technological changes will vs. won’t happen this century, against the consensus belief of most people in the field, and not being wrong more than once?
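For what it’s worth, the parenthetical arithmetic checks out: a million five-second statements come to nearly two months of nonstop talking.

```python
# One million predictions, five seconds each, spoken back to back.
seconds = 1_000_000 * 5
days = seconds / 86_400   # 86,400 seconds per day
print(round(days, 1))     # ~57.9 days, i.e. comfortably "over a month"
```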
I can’t find the webpage with all the quotes from people who are like “No one will ever invent flying machines, I am certain of it!” ten minutes before the Wright Brothers took off and stuff, but I’m sure you know which one I’m talking about. Given all that, do you really think you can predict “this won’t happen soon” with one in a million accuracy?
I don’t even have that level of confidence that we won’t discover something crazy like time travel this century, even though that’s probably impossible and I don’t think there are any research programs even close to having any idea where we would start.
@Scott:
Aren’t you privileging the “will not happen” wrong guesses over the “will happen” wrong guesses?
The obvious example here is, of course, previous predictions about AI. But I think there are several tech leaps assumed for a)AI x-risk and b)future human population that pattern match to “cold fusion” for me.
Put in Hitchens’ favorite line here: extraordinary claims require extraordinary evidence.
HeelBearCub: I would only be privileging something if I were to now say there’s only a one in a million chance that AI won’t happen, which of course is not my position.
Given a long history of people confidently predicting things won’t happen, that later do AND a long history of people confidently predicting things will happen, that later don’t – the proper attitude seems to be accepting that technological prediction is very difficult, and not having 999,999/million confidence levels in anything in that area.
Deiseach, what kind of data are you looking for? Do the surveys conducted at conferences not count? Because if you’re rejecting them as all in the ‘fanclub’, then you’ve basically defined-in that everyone who looks into the subject well enough to plausibly qualify as a domain expert has joined the fanclub. Which would, if it was true, be pretty strong evidence that the ‘fanclub’ is right.
@Deiseach “My problem is how do we get an estimate of how likely AI is and in what timeframe? What do current AI researchers think? What estimates are they floating about “We expect to have animal-level intelligence in so many years”, never mind human-level?
I tend to ignore Bostrom et al because they’re so convinced of the risk.”
http://www.nickbostrom.com/papers/survey.pdf
You say you want estimates of AI timelines and risk from people not working on it, but that’s why Bostrom leads front and center with survey data (link above), which lets you distinguish a) the highest-cited 100 people in AI identified using a citation search engine (TOP100) b) an AI professional society (EETN) c) the conferences of self-selected people interested in AGI or in AI impact issues.
I personally pushed to have those surveys of independent AI people conducted, and to foreground them at the start of Bostrom’s book, because of my concern about selection biases.
So it’s a bit disheartening to then hear you saying you don’t trust Bostrom and want estimates from independent academics, when if you had read the first couple chapters of his book you would have found he provided exactly that.
The independent TOP100 respondents gave median dates of 2024, 2050, and 2070 for 10%, 50%, and 90% chance of human-level AI conditional on no catastrophes stopping technological progress.
The median TOP100 estimate of superintelligence within 30 years of human-level AI was 50% probability, and 5% for within 2 years.
The TOP100 gave mean probabilities of 20% for extremely good outcomes, 40% to ‘on balance good’, 19% to more or less neutral, 13% to on balance bad, and 8% to extremely bad (existential catastrophe). Adjusting for an outlier, or going with the median (unfortunately the medians don’t add up to 100%), would give something more like 5% for that last one.
Now there might have been some selection bias in which AI people responded to the survey, and most did not respond, but it is still very directly responsive to your complaint.
And while I agree that looking at people interested in these issues introduces important selection bias (thus my pushing to have the above surveys conducted), there are also object level arguments and analysis and I would not cavalierly say there is nothing to be learned. E.g. bringing in basic economic analysis of AI strongly predicts a major boost to progress:
http://datascienceassn.org/sites/default/files/Economic%20Growth%20Given%20Machine%20Intelligence%202009%20Paper_0.pdf
@PDV, that is an issue, but sometimes it happens. E.g. philosophers of religion are mostly theists, philosophers are mostly atheist. And in fact, even in philosophy of religion the moves tend to be from theism to atheism rather than the other way around: the high religiosity is driven by entry of people who want to do apologetics, not because the philosophy provides good supportive evidence for religion.
http://philpapers.org/surveys/results.pl
In the case of AI risk I think a) exposure to the arguments increases credence, unlike theism b) there is still self-selection to worry about.
@Scott:
But that need for 1 in a million accuracy depends almost entirely on the estimates of future population, and not any intrinsic need to predict AI risk with 1 in a million accuracy.
In other words, that 1 in a million accuracy now applies to every single x-risk you can imagine.
How is this NOT Pascal’s mugging? What makes AI risk specifically worthy of requiring 1 in a million accuracy when no other x-risk does?
Do you feel 1 in a million confident that AGW isn’t an x-risk? Do you feel 1 in a million confident that asteroids aren’t an x-risk? Do you feel 1 in a million confident that bio-terror isn’t an x-risk?
And on and on….
The proper answer to my objection to all these things is that we should study these risks and try to mitigate them. I agree! But the 1 in a million number doesn’t say anything special about AI risk vs. any other x-risk, does it?
@HeelBearCub:
>> Do you feel 1 in a million confident that AGW isn’t an x-risk? Do you feel 1 in a million confident that asteroids aren’t an x-risk? Do you feel 1 in a million confident that bio-terror isn’t an x-risk?
I agree that if you said pandemics were a one in a million risk, you would be insane. I agree if you said global warming was a one in a million risk, you would be insane. I feel like I’m being pretty consistent here.
(asteroids are a special case, because we know from the geological record that extinction-level asteroids only happen like once every couple dozen million years, so we have high confidence and small error bars on this. That doesn’t apply to anything else.)
>> “The proper answer to my objection to all these things is that we should study these risks and try to mitigate them. I agree! But the 1 in a million number doesn’t say anything special about AI risk vs. any other x-risk, does it?”
I’m having a lot of trouble seeing where we disagree. No, AI risk is probably not super unique among x-risks – I would argue you can maybe make a case for it being one order of magnitude more likely than some things, but there’s no giant qualitative difference.
But we spend $20 million a year looking for asteroids, NASA is petitioning the government for a few hundred million more, and everyone agrees it’s super-important. We spend some number of billions of dollars per year funding the CDC and WHO to prepare for global pandemics. All of this is happening and none of it is controversial. No one is doing anything about AI except Elon Musk and a tiny handful of people many of whom read this blog.
My side is the one saying “Let’s treat AI risk the same way we do these other x-risks”. The people I’m arguing against say “Let’s dismiss it totally” – as far as I can tell.
@Scott: ‘it’s almost inconceivably overconfident to put “less than one in a million” probability on anything at all like this’
Three things I would say:
(1) Yes, I could name many other scientific breakthroughs that (I think) are equally or more unlikely, but not necessarily physically impossible. Examples would include teleportation, time travel, faster-than-light communication, Dyson spheres, etc. Can I name a million such things? I don’t know, with enough time and creativity, maybe I could. But so what? As the Bayesians would remind you, my 1 in 10^6 probability is a degree of credence, not a frequency.
(2) Yes, everyone agrees that true AGI is hard, and the reason we have to worry about it is that the potential risk/payoff is so large. Unfortunately, the proponents of x-risk, Bostrom in particular, use such stupidly large numbers (10^57 !?) in their claims, that if you don’t believe in very small probabilities (or degrees of credence), then arguments about any type of x-risk will go through. Is there really a less than one-in-a-million chance that aliens will vaporize the planet with deadly lasers? Then maybe we should be developing laser shield technology. Is there really a less than one-in-a-million chance that God will smite us all for our wicked behavior? Maybe we should become fundamentalist activists. Is there really a less than one-in-a-million chance that Cthulhu will rise from the deep…you get the idea.
(3) Given this, what we should do instead of trying to put probabilities on everything (a practice that I’ve already said is mostly BS) is try to decide what are the most plausible sources of x-risk. AGI is a clear loser here, for all the reasons I already articulated. Sophisticated terrorists could create deadly viruses right now. Russia or India or (god forbid) Israel could mislay a critical mass of uranium and we could have a rogue a-bomb right now. A planet-killer asteroid could appear in the next few decades. None of these causes are receiving enough attention right now, and in the meantime we’re arguing about whether or not we’ll have AGI in the next century??
Edit: I see that while I was typing HeelBearCub made a similar point, and Scott responded. I would emphasize that I think the risk from AGI is far far less than from pandemics/asteroids/a-bombs, and that we should be spending many orders of magnitude more money on those causes.
@Scott @Phil:
Yeah, the problem I see with the AI risk arguments I have seen is that they don’t do the hard work to justify quantifying AI risk vs. other potential x-risk. And they definitely haven’t justified the “this is the most important, most probable x-risk out there” talk.
It seems to me that in order for any sort of AI to be an x-risk in the next 100 years, it would have to do to us what we can already do to ourselves, or close enough. You don’t just get “super-intelligent AI therefore can end our species in completely brand new and novel ways” all at once. (And that assumes a whole bunch of stuff just to get to super-intelligent).
And even if one is convinced that this is possible, the fact that people fall back on explaining how important it is to resolve x-risks (with pretty outside-the-box numbers to boot, assuming all sorts of other tech leaps, like what seems to be assumed FTL travel) doesn’t say phenomenal things about how probable they think they can show it is compared to other x-risks.
Worth studying != the most effective charity dollar. Worth studying != underfunded right now and definitely != severely underfunded.
I’m not saying AI will never happen. I’d be happy enough if somebody predicted “something that can genuinely be called intelligence within fifty years, even if it’s only on the level of a cockroach or maybe as high as a lizard”.
Human level? Yeah, maybe eventually. Not fast, because we’re still trying to figure it out ourselves, and so far lightning only seems to have struck once on this planet with “human level intelligence”, and evolution has been running this experiment for a loooooong time.
God level smarter than smart Fairy Godmother AI? Order your cryonics capsule now, because I think you’ll be a damn long time waiting.
Ahhhh, ahhhh, we’re all gonna die AI? Well, humans are stupid is my motto, and there’s no reason we can’t develop AI, put all our eggs in one basket, and do something remarkably stupid that leads to all the buttons being pushed at once and boom, there goes industrial civilisation. But I still maintain the biggest existential risk comes from humanity itself screwing up in even bigger and better ways by using our shiny new toys to fuck ourselves over, rather than the toys themselves deciding to do away with us. Or do things that incidentally and as a side-effect do away with us.
PDV, Carl has answered what I’d say in reply to your “all the domain experts agree”. At a conference of theologians, I’d expect (well, okay, theologians can get very wacky but in general) not to find a whole heap of “Yeah, definitely God does not exist”. Same way at a conference about “AI risk” I would not expect a whole heap of “Nah, not a problem at all”. If I think AI risk is plausible, I’m going to attend that conference. If I think it’s not a problem, I’m not going to attend (unless I want to be contrary and push the “You’re all idiots and here’s why” approach).
Carl, that’s the kind of predictions I do want to hear. I’m very glad to hear you pushing for the less optimistic (or do I mean less pessimistic here, if we’re talking about risk?) view. Pushing “human level intelligence” out to 2070 for the 90% confidence seems to be taking a more reasonable look at it (though it still rather falls into the “within fifty years we’ll have – ” prediction cycles).
I’m not trying to say AI will never happen. I’m not trying to say that there isn’t a risk. What I am trying to say is that given how little we actually know about what constitutes intelligence, given the arguments about does such a thing as consciousness exist, given that we’re debating about do humans have a genuine core identity that can be called “I” or just a convenient cover-story overlaid on independent processes running in parallel, the model of “Wham! we’ll crack intelligence Bam! we’ll crack human-level intelligence Ka-blooey! We’ll have super-intelligence! Really really soon!” is not convincing to me.
When we achieve human-level machine intelligence, how will we know? What tests will we run? What evidence will we accept as proof that the machine is thinking independently, and not just a really sophisticated simulation?
We’re putting the cart before the horse, is what I’m saying. Yes, worry about risk of making massive mistakes. But “Give me all your money now so we can research how to make the Fairy Godmother not the Wicked Stepmother” is an argument that, to me, presumes too much urgency on too little concrete showing. Throwing around estimates of how many potential quadrillions of future uplifted humans will exist if we get it right isn’t helping there, either.
@Carl Shulman
Regarding surveys of AI researchers, one bit of cultural context that may be missing for some readers that aren’t in the computer science world is that there’s been 50 years or so of predictions that AGI is 10-20 years out. The only comparable thing I can think of is commercially viable fusion energy.
For those of us who are on the somewhat older side, claims in these two areas tend to be treated with a little more skepticism (perhaps unwarranted, but born of experience.)
@brad
http://aiimpacts.org/ai-timeline-surveys/
http://aiimpacts.org/accuracy-of-ai-predictions/
@ brad
For those of us who are on the somewhat older side, claims in these two areas tend to be treated with a little more skepticism (perhaps unwarranted, but born of experience.)
My Inner Ma Kettle agrees. We can’t even make a room full of gadgets on the same USB hub run right without human troubleshooting. Who is going to check the AI + all the factories’ robots + etc etc, looking for corrosion on some plug, or a literal bug? Even if the AI had deceived every human into believing the system deserved support … there’s still human error.
I’d reply to Ma: “Good point about a Paperclip AI. But a Dr. Strangelove AI that really wanted to destroy civilization, could just tweak some transmissions and get the nukes launched.”
Hm, there’s an area that could use some more precautions already, in case of accidental false alarms, or hacking by Bin Laden’s grandsons.
The Red Cellphone?
Hm, there’s an area that could use some more precautions already
Do you have any idea how many precautions are in place already, and what they are?
The details are of course highly classified, but this is a case where
a bunch of very professionally paranoid people have been working on the problem for more than fifty years, the problem space is narrowly defined, and the prospect of a malevolent combination of human and artificial intelligence subverting communications is explicitly the threat they have been working against.
Arguing about probabilities of AI risk is beyond WAG (Wild Ass Guess), it’s by definition absurd. “Intelligence” isn’t well-defined in a rigorous sense, and the gap between the specialized bits and pieces of contemporary AI theory and an enormous system running amok, out of control of its operators, is so murky as to be unpredictable.
Estimating a percentage for AI risk is fucking stupid right now, because there’s no good model to work from. You can point to the enormous, unforeseen technological advances of the past 100 years as evidence of the possibility of the problem, but possibility is meaningless without some estimate of necessity.
With so much unknown, a probabilistic model serves no purpose. This is why AI risk research is insipid, and I don’t give a fuck who says otherwise. You cannot make an accurate estimate of risk without an accurate estimate of the processes leading to problems. It’s like,
Step 1: Collect Underpants
Step 2: ??????????
Step 3: Sacrifice Human Population to Maximize Paperclips
@Anonymous – You might be right. I really have no idea. I’m just a spectator, the whole field is beyond me. As a complete spectator, though, I do get pretty tired of this argument. It could be true, but it doesn’t seem any more provable than FAI itself.
I’m interested in addressing that exact same attitude but for all other technology as well. Only unlike AI, I don’t see anyone else in my field (human factors engineering) echoing the sentiment.
Random, independent errors don’t cancel out, they add together. It’s just that the errors in the estimates add up more slowly than the estimates themselves, so they become smaller percentage-wise while growing in absolute size. So, for instance, the average die roll is 3.5, and a single die roll will have an error of at most 2.5. The sum of a thousand die rolls will have an expected value of 3500, and the error is going to be in the neighborhood of 100.
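The die-roll arithmetic above is easy to check by simulation. A minimal sketch (the helper name and trial count are mine, not from the comment) showing that the absolute error of the sum grows with the number of dice while the relative error shrinks:

```python
import random

def dice_sum_stats(n_dice, trials=2000, seed=0):
    """Return (expected sum, mean absolute error) for sums of n_dice fair d6 rolls."""
    rng = random.Random(seed)
    expected = 3.5 * n_dice
    errors = []
    for _ in range(trials):
        total = sum(rng.randint(1, 6) for _ in range(n_dice))
        errors.append(abs(total - expected))
    return expected, sum(errors) / trials

for n in (1, 100, 1000):
    expected, err = dice_sum_stats(n)
    # absolute error grows roughly like sqrt(n); relative error (err/expected) shrinks
    print(n, expected, round(err, 1), round(err / expected, 4))
```

For a single die the mean absolute error comes out near 1.5; for a thousand dice it is a few tens (the standard deviation is about 54, so "in the neighborhood of 100" is roughly a two-sigma error), yet tiny relative to the expected 3500.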
So, uh, I guess I’ll come out and admit that I’ve been (reluctantly) on a mostly high fats diet, with carbs mostly in the form of rice and have been avoiding “gluten” (I suspect FODMAPS or whatever the name is from earlier to be a lot more responsible).
So, before I went on this diet, I was slightly overweight according to BMI, had weekly/biweekly digestive episodes and felt a type of drowsy-but-not-sleepy type of food coma, right after lunch, which was usually the “healthy” option of subway whole grain sandwiches.
After six months of a high-fat, close-to-zero-carb diet with about 4-6 eggs per day and essentially what amounts to half a stick of butter, I lost forty pounds, essentially stopped having digestive episodes, and the incidence of food comas dropped to near zero. I also had some blood work done and my triglyceride and good/bad cholesterol numbers still looked fairly normal.
This was about… three years ago? And I haven’t had bloodwork done since then. I’ve regained half the weight I lost on the diet, but the other benefits remain. I don’t obey the diet *quite* as slavishly but…
The gluten-free part is very annoying. I like pizza. Pie is great. So is cake, and sandwiches are convenient, and I hate that I am doing this to myself despite the meta-analyses from Scott’s earlier posts and the stuff from Luke. The inspiration for the diet comes from a self-promotional, kinda crank person.
However, I also don’t particularly like how I felt back when I didn’t have these restrictions. The food coma feeling in and of itself feels like losing two hours of my day to nothing, the episodes were frequently painful and I think I look a lot better now than I did back then.
Does anyone have any thoughts/advice? I’m probably going to get some bloodwork done to see that my numbers aren’t too crazy, and if they are I’ll just do what the mainstream says, but I don’t know how to settle this discrepancy between what the studies say and my personal experience.
If you think it’s FODMAPS, then you should do the standard FODMAPS procedure and figure out which FODMAPS cause you problems, because most people have can eat some but not all. I think the procedure is to eliminate all of them for a month and then add them back one per week.
But the really important step is to get a list of FODMAPS and seriously consider each one as a hypothesis. For example, I was surprised to see garlic on the list because I thought of garlic as an herb, not a source of nutrients. It won’t give you many calories, but it is enough to cause digestive issues.
I… I am ashamed to say that this never occurred to me and that I should have been doing this all along. Thanks!
[I] felt a type of drowsy-but-not-sleepy type of food coma, right after lunch
Here is a look into the alternate universe Internet that (hm, like HPMOR) has continued its golden age (at least to 2012).
https://blogginginparis.wordpress.com/2006/03/29/getting-sleepy-after-eating
It has 200+ comments from a variety of people with various states of health, weight, and diet, who all report that same problem with a variety of theories about what caused/relieved it for them.
My own report is: I’ve had this problem since high school (otherwise I’m extremely healthy and just above underweight, always eating anything). Gave low carb a good try, but that didn’t help much. Adrenaline situations right after lunch would keep me awake, but were unhealthy in other ways. Tea before lunch usually helps, though chocolate and coffee have erratic effects related to how much sugar is in them.
Probably a stupid question, but have you been tested to see if you’re coeliac? I know “gluten-free” is the new trendy thing (apparently) but Irish people have a high incidence of coeliac disease and you never know who might be far back in your ancestry!
I’m Han Chinese, which according to wikipedia means I’m most likely immune to it.
I had testing but it was way after I stopped consuming gluten, which means that it’s always going to return negative :/ (doctor said aw what the heck test anyway ???)
If it’s working, why go back? Nutritional studies aren’t reliable enough to apply to everyone equally, and if the diet you’re on makes you feel better and a medical checkup shows you’re doing better (or, at worst, the same as before) then I’d keep doing it instead of worrying about population averages. Another checkup since it’s been awhile sounds like a good idea, just to be on the safe side.
As for missing wheat, it’s unlikely that gluten itself is your problem, though it’s possible. You could try adding back a small amount of white flour and see what happens; if you don’t have celiac disease and an occasional piece of cake doesn’t make you feel awful, you can probably eat wheat once in awhile. IIRC sourdough bread is the easiest on digestion, though I do have celiac so I can’t even have that.
FWIW, you CAN still have cake and sandwiches on a gluten-free diet, though those foods aren’t low in carbs. I found some mock rye bread at Target, gluten-free, about a month ago and it was very good. Some cake mixes are all right too. They’re just more expensive and you have to go out of your way to get them, they won’t be stuff your co-workers bring in on a whim – and that’s probably better, really, since you can still have your treats but they’re not a constant temptation.
Re the rebuilding of Vietnam and the American south, what do you think of the theory that it was air conditioning? Krugman sums it up here: http://krugman.blogs.nytimes.com/2015/03/23/charlatans-cranks-and-cooling/. The idea being, basically, that’s too freaking hot to do much serious work in hot regions during the day, so it wasn’t until air conditioning became widespread that they could have a regular work schedule.
That…seems pretty plausible, actually.
This coming from a resident of an area in Texas that’s on an 8-10 year drought/1-2 year rain cycle :'( , so there’s your bias.
You could give that a genetic twist. All of the people who lived in the south originally could have been unproductive but then air conditioning gave the productive people an incentive to move down there.
Eel Thing (Erysipelas) seems pretty metal, at least judging by its other names. Wikipedia gives Ignis sacer and the English translation of holy fire, plus “St Anthony’s Fire.” Not sure which St Anthony because there are approximately 84 trillion saints and I’m not a Catholic. Someone should tap in Deiseach.
As for the sitting thing, hasn’t early maturation of African children been a known factor for a while? I don’t have numbers right here but pretty sure I’ve seen Jayman and co bat that sort of thing around before.
That would be St Anthony of Egypt, Ever An Anon. The two most famous St Anthonys are St Anthony of Egypt (3rd/4th century Coptic saint, attributes in art: tau-cross, swine) and St Anthony of Padua (13th century Portuguese saint, attributes in art: holding the Child Jesus, lily, book) – ironically for someone who was a noted theologian and preacher of his day, now the “finder of lost things” go-to saint.
There’s apparently an American-tradition verse going something like “Tony, Tony, come around/Something’s lost and can’t be found” for invoking him – tsk, you colonials and your breezy ways of address! 🙂
Anyway, why the first St Anthony is associated with skin diseases (from Wikipedia):
Yes, early maturation of Africans & African-Americans (and late maturation of Caucasians, and even later maturation of east Asians) is a major plank of Rushton’s r/k lifecycle theory of between-race differences.
Even in infants?
Check lambdaphagy’s comment below: differences showing apparently as soon as an hour after birth.
At first glance, it doesn’t look like a generalized precocity: the sitting issue has previously been attributed to training.
Especially in infants. You might want to read Rushton’s book if you’re interested in all the details.
I’ve read RI&B a long time ago, and Rushton’s use of r/K wasn’t too great, but I’ve wondered ever since if some prenatal environment variables were involved in some way. But from what I’ve seen thus far, I don’t think the literature in general has dug deep enough into the issue for us to know too much about that.
Environmental explanation: Fast Life History Strategy?
https://www.psychologytoday.com/blog/beautiful-minds/201008/life-in-the-fast-lane-part-ii-developing-fast-life-history-strategy
The Black Lives Matter / Hillary link is broken. You want
Editing to add: curse you, Douglas Knight!
Diets? diets.
Drink lots and lots of water. Don’t add any sugar to anything.
Eat a healthy breakfast. Something that will keep you fed for a lot of hours. Like porridge and boiled eggs or something.
Then during the day, eat fruits. Which? Doesn’t matter, as long as it’s varied. Peaches and oranges and apples and bananas and berries and cherries and and and more. Keep yourself not-hungry with a variety of them.
Then for dinner, eat a steak, or tofu or whatever, with a good salad (dressing? sure: lemon juice and mustard, for a salad made of sliced cucumber. Or red wine vinegar and herbs, for a diced tomatoes salad. Or balsamic vinegar and herbs, with lettuce salad. No oil, never pre-made dressing.) and a few potatoes or a little other starch. Like rice or pasta. Grill the meat on salt or something instead of oil or butter or the like.
There. Avoid fat and (most) carbs, keep yourself not hungry.
Wow, avoiding both fat AND carbs! Now all you have to do is avoid protein, and your diet is sure to be a success!
(sorry, I’m being mean, it sounds like a fine diet)
It made me hungry, so I ate a nearby bag of chips.
You’re an enabler, Corwin. I hope you’re proud of yourself >:(
It’s not actually bad advice. I lost about 20-25 lbs in the last year (still dropping) by doing something similar. I call it a relative diet, because it wasn’t that I was following a specific eating plan, I just ate less of what I used to and substituted different, lower-calorie, lower glycemic load foods.
The things I avoided (not eliminated) were wheat, dairy, and added sugar.
I’ve varied all the way from 235 to 155, usually on purpose for one crazy reason or another (was naturally skinny most of my life, then decided I wanted to be a weightlifter, then a rock climber, etc.), and the highest payoff thing I’ve ever found to do is removing liquid calories. Conversely, the highest payoff thing to do if you’re looking to gain weight is add liquid calories.
My weight is astoundingly sensitive to changes in diet (not true for all people).
I also cut down a lot on my liquid calories.
Successful dieting really shouldn’t be measured by months or even a year or two, 3-5 year outcomes matter, and they tend to suck with a lot of really successful short term diets.
Does one diet have to fit all? Is it possible that for people who have metabolic syndrome and insulin resistance low-carb can be effective, and for others it may be unnecessary?
I think, for the most part, one diet could fit all in terms of what will work metabolically. But psychologically, some diets probably work better for some people than others. I, for one, am skeptical that there are many people for whom a high-fat diet is genuinely better (apparently it can help epileptics, weirdly, though); however, if a high-fat diet is psychologically much easier for someone to handle than a high-carb diet, and if it does in fact result in successful and maintainable weight loss, then it is probably worth it. (Though I do think you will end up with higher cholesterol, etc., in most cases at the same weight on a high-fat diet as compared to a low-fat diet, just losing weight at all seems to improve one’s numbers no matter how it’s lost.)
But I also think it’s incorrect to think that “well, I’m prone to type 2 diabetes, so I need to avoid carbs.” Meat also causes insulin spikes. Skinny Asians who eat a ton of rice and noodles don’t get it. The best way to avoid type 2 diabetes is to not be fat. If the only way you can not be fat is to eat a lot of fat, then I’d say it probably beats the alternative.
Maybe “skinny asians” don’t get Type 2 diabetes (some skinny people do), but only about 10% of obese people get it as well.
Maybe meat causes an insulin spike (although I’d ask to what degree vs. refined carbs) but it doesn’t cause a blood glucose spike. So no, it’s not incorrect to avoid carbs if you have metabolic syndrome or pre-diabetes.
Also, high-fat is not the opposite of low-carb. You can also replace carbs with protein, just saying…
I feel jittery and unhappy if I run my fat intake too low. How can you tell whether this is a psychological issue or my being metabolically different from people who tolerate low fat better?
Why does it make sense to you to think that the same diet is the best for everyone, or almost everyone?
Well, I think our digestive systems are a palimpsest of all our evolutionary history, and I do think there are going to be some differences among groups, since, as it turns out, evolution does happen over a shorter span than we might imagine. People with few or no farmers in their genetic history (many Native Americans) may be worse at processing grains, for example, and people with little history of dairy consumption may not do well with dairy.
That said, from what I’ve read, our GI tract is still like 98% the same as a chimpanzee GI tract. And chimpanzees are omnivores like us, but they get most of their nutrition from fruit, and a little from bugs and other forms of meat. We do, apparently, have more enzymes for digesting grain than the chimpanzee, however, and this may be related to a history of farming. And most other primates are even more vegan than the chimp.
Therefore, it seems to me that a diet high in fruits and vegetables with some occasional meat is probably better for us than a diet high in meat with some occasional fruit. This is also supported by the China Study, etc. etc. And there is also the fact that eating higher up the food chain concentrates potentially toxic stuff.
Sure, some people in Alaska may be able to survive on bear fat, but do we really view Alaska as the climate for which we are evolved? Doesn’t matter if your ancestors came from Africa more recently, or, like mine, have become pasty to deal with the lack of vitamin D in Northern Europe, for most of our evolution we were living in warm places where fruit would be a better option than bear fat. Otherwise, we would have fur.
So I’m not saying there are no individual and/or group variations with regard to which diet is appropriate, and I think everyone has to, to some extent, experiment with his or her own diet and see what works for them personally, but I think it can also be something of a red herring to imagine that their are really big differences. Take a fat white person, move them to Okinawa and have them eat a traditional Okinawan diet and pretty soon they’ll be thin like the Okinawans. Take an Okinawan and feed them McDonalds and cookies every day and pretty soon they’ll be fat like an American.
The Japanese seem pretty healthy to me and a big bulk of their diet consists of rice. A lot of Asian cultures and countries seem to eat rice with every meal or almost every meal (if following a traditional diet). They do eat smaller portions though.
Those Japanese sure do eat small portions!
https://www.youtube.com/watch?v=Y4cM9jyOfzM
(This is, of course, a joke, as she is some kind of competitive eater, but I’m not sure I actually think they do eat smaller portions. I routinely saw skinny salarymen wolfing down gigantic bowls of noodles. I do think they eat less fat, more fish and vegetables, and maybe fewer snacks).
I’ve also heard that it is considered impolite to finish all of your rice, it signals that your host didn’t give you enough to satisfy you.
This is technically true in China (though not enforced in casual situations), but not Japan. The Japanese actually tend to strongly discourage food waste, especially of rice. They have some kind of saying equivalent to “each grain of rice is a drop of sweat off the farmer’s back (so eat every last one).”
I think China still has enough of a connection to real hunger that providing guests enough to eat is a genuine concern. Like at tapas restaurants I’ve been to, for fancy meals the Chinese try to save the rice and noodles for the end, basically just to “finish off” anyone who might not already be full after eating all the main dishes (like a giant paella you get at the end of a fancy Spanish meal). But I personally prefer to eat rice with my meal, so I tend to like casual Chinese meals more than formal.
The Okinawans do have a conventional health notion called “hara hachi-bu” or “8-10ths of your stomach,” basically meaning you should only eat until you’re 80% full. Weirdly, the samurai of Edo Japan, who didn’t usually have much to do, also developed a culture of eating very small portions, I guess as a form of self-discipline.
It baffles me that when giving dietary advice, the advisor’s first tidbit isn’t “Where are your last thousand years of ancestors from? What did people in that part of the world mostly eat when times were good? Try to eat mainly that.”
Okay, counterpoints:
“Water”
I. Water may be correlated with positive health outcomes, but you aren’t recommending any tangible amount of water a day; you’re seemingly just recommending people drink “the correct amount,” which isn’t exactly dietary advice.
“Eat a healthful breakfast?”
I. The discussion seeks to frame exactly what it means for food to be healthful. You can’t just say “make sure all your meals are healthful!” and send it off as realistic diet advice, as we can’t even agree on what that means exactly.
II. Intermittent fasting is a thing–while it may sound obvious to you that eating a “good breakfast” is best, there are alternative (and, quite valid!) viewpoints on the matter that we should explore, rather than dismiss in favour of the obvious approach. Typically when we discuss nutrition, we’re not discussing what’s necessarily adequate–we’re discussing what’s optimal. Sure, breakfast may be adequate and comforting to most but is it optimal? If I don’t like eating breakfast should I force myself to eat breakfast?
III. *If* fullness vs. health is a tradeoff, how do we balance and optimize the fullness/health equation? Do we skew full, or do we skew health? If fullness is so important, why don’t we just not eat breakfast?
“Fruitarianism”
I. While you stated earlier one should “avoid adding sugar to one’s diet,” your chosen mid-day diet is almost entirely sugar. A fruitarian diet is a lot more “out there” than the Mediterranean or keto folks’ and, as such, I would hold it to a higher level of proof.
II. Does form matter? If I blend a bunch of fruit together and drink it all day long does it affect this rule? Can I have anything else with the fruit?
III. Are you certain this diet won’t cause pancreatic problems or any other generalized health problems? While I don’t expect a diverse diet to cause many problems, the high endorsement of fruit is a bit odd to me.
“Dinner”
I. Even if you’ve somehow picked the best diet (a point widely contested), it sounds very boring. Porridge. Fruit. Meat and potatoes. Rinse and repeat. Oh boy! Not exactly, in my mind, a convenient diet to stick with for an extended duration.
II. Are you sure that steak is healthy? *inserts one of many studies linking red meat consumption with something bad* (It’s worth noting that, while I personally don’t think red meat is unhealthy, there’s certainly a valid counter viewpoint here)
III. Are you sure that tofu is healthy? http://www.ncbi.nlm.nih.gov/pubmed/10763906 (see above point)
IV. If tofu and steak can both be dismissed (which, obviously is not realistic), are you sure that you can survive off of salad with lemon juice and mustard? If steak and/or tofu are to be included–why include them over other things you’ve explicitly disallowed in this diet?
“Avoid fat and most carbs”
I. So, avoid almost all food? Why even bother eating breakfast or lunch?
II. You say avoid fat, but a steak has more calories from fat than protein. If fat were bad enough to warrant the statement “avoid fat”, why include steak? Seems easy enough to swap out a lower fat alternative–or even an alternative that contains more omega-3 fats which are pretty much universally adored at this point.
“My response”
I. I disagree with the way you have laid out your diet in that I don’t believe it’s a workable solution. You specify how one should eat in a day, but you don’t mention anything about what’s not allowed. Are we to assume anything that isn’t listed isn’t allowed? No nuts? Milk? Really no butter or oil at all? What about diet soda? Really just water? How much fat is okay (obviously, some must be–you suggested steak!). How much protein do we need? Just the tofu’s worth? Is everything else really just carbs? Is the fructose in all the fruits we’re now eating somehow better for me than the sugars of other carbs? What about fats? Omega 3s? Polyunsaturated fats? Saturated fats? Can I eat fruits at night? You say during the day–does sunlight somehow change the metabolism of fruit? If I work graveyard shifts, can I only eat fruit in off-hours? Is there a biological impetus for us to metabolize fruits over vegetables or grains? Can I have honey? This may sound obstinate, but one of the awesome things about low fat and low carb diets alike is that they specify exactly what’s included or excluded.
II. The elephant in the room: diet advice centers on weight loss. While this is typically a good indicator of health, it isn’t always. I believe the typical avenue of dietary success is “lower weight means typically lower chances of health risks”, when we should be looking at auxiliary health markers in addition to weight: LDL, HDL, triglycerides, blood pressure, heart rate, CRP, TNFα, etc. Sure, it’s downright dandy that both low-fat and low-carb diets work, but is there a better distinction for us to use? Losing weight is easy, actually improving your health is hard. By narrowing in on weight loss, we’re doing nutritional science a severe disservice. Low-fat dietary goals don’t exist to reduce weight, they exist to improve health markers w.r.t. atherosclerosis and CHD and thus, this should be the marker upon which they are compared.
III. Dietary science isn’t simple and, at its crux, isn’t about what’s adequate–it’s about what’s optimal and why. If we distilled all our knowledge and found that, for whatever reason, health outcomes are simply based on how much eggplant you eat then we can take this information and design diets that maximize eggplant consumption while also maintaining enjoyment. No one wants to sit down and eat 1500 calories of raw eggplant but we might be able to convince people you eat a nice eggplant omelette for breakfast and an eggplant and capicola sandwich on rye for lunch with eggplant fries and eggplant chicken parmesan with an eggplant bisque for dinner, especially if you told them this would reduce their chances of CHD and colorectal cancer by 99%.
Personal bias: I am a proponent of ketogenic dieting. Let this colour the discussion as it may. For what it’s worth, reading “just eat a normal diet!” or “balanced diet!” or “healthy/healthful diet!” tends to throw me into a fit of rage (perhaps “eat a balanced diet” is just an applause sign–a way for people to say “haha but I’m kidding I’m not like one of those alternate diet quacks!”). I’d imagine it’s not dissimilar to telling a person suffering from mental distress to just “not be crazy”, or telling a rationalist the way they think is silly and they should just “think normally!”. What we may consider normal or obvious may be quite far from the truth and we all benefit from actually exploring it.
EDIT: further little annoyances on the topic of dietary science: could we all agree how much caloric dietary fat exists in a low fat diet and how many grams carbs exists in a low carb diet? Can we also differentiate keto-stable low carb diets and the rest? A diet with 100g carbs a day is not what I would personally consider “low-carb”, nor is a diet with 30% calories from fat exactly what I’d consider “low-fat”.
I came up with some dietary advice which I thought was generally applicable, but I’ve gradually found more and more limits to it.
This is “first do no harm” advice, not optimization. I think optimization is somewhat risky. If you have a problem, you can tell whether you’re solving it. If you’re basically healthy, an effort to optimize could make your life worse.
So, what I came up with was to eat in a way which leaves you feeling good three or four hours after a meal. In my case, when I follow it, I’m doing a mild low carb, high fat and protein, fairly high fiber diet.
What could possibly go wrong? For one thing, finding out what leaves you feeling good may take some flexibility about what you eat. Not everyone has that. You also need an ability to track how you’re feeling, and I seem to do more of that than a lot of people.
I did throw in that some allergies/food sensitivities can take 3 or 4 days to kick in. (I used to know someone who’d get migraines a few days after eating seafood.) These are going to be harder to identify.
It’s also going to be hard to notice bad reactions to small amounts of common foods. Now that I think about it, celiac is weak evidence that there might be other similar disorders which haven’t yet been identified.
This might not be good advice for someone who has bipolar– hypomania feels good, but it could be a lead-in to mania, which is real trouble.
The advantage of this advice is that it can help you pry yourself loose from advertising– you keep getting told that some food will make you feel great, but maybe it doesn’t. You’re unlikely to give yourself an eating disorder. I suppose that someone with OCD could drive themselves crazy by trying to follow it, but fortunately, I don’t think anyone has taken this advice that seriously.
Excuse me, but this is ridiculous. People don’t get fat through cooking with wrong ingredients. People get fat through much deeper issues like:
– alcoholism (hello, me! 3 liters of beer a day)
– comfort eating as a coping mechanism, usually lots of sugary desserts and chocolate
– fast food addiction for various reasons, from comfort eating to lack of time, laziness, addiction, low IQ, low income, you name it
I mean, if your only sources of calories were things you cook or food you prepare yourself, you would already be 80% of the way there and would hardly even need a diet, because then of course you’d steer it in a “cleaner” direction and you’d have it.
But the problems usually come from the calories in the food or drink you did not actually prepare, for the reasons above.
How does one maintain satiety on such a diet? (Edit: badly phrased, should have said “variety + satiety” or something else here to differentiate the raw sense of hunger from the feeling you get when the menu is, say, Soylent for the rest of your life.)
I have been on a ketogenic diet for over three years. It is not a universal solution (many find it hard to do for even a few weeks, and the greatest diet on earth is worth nothing if you can’t stand eating it), and there is no particular reason to think that other diets don’t work (cf. Asia), but there seem to me to be several advantages:
1. Ketosis is an appetite suppressant. Probably a minor contributor unless you’re doing intermittent fasting.
2. Carbs are not necessarily fattening (again, cf. Asia), but Western diets use them as vehicles for other things – by not eating bread, I’m also not eating the butter; not eating chips means less salsa or sour-cream-and-onion dip; banning cake means I don’t eat frosting.
3. It’s very hard to game. A very rigorous carb count will force people to eat almost no processed foods – foods that would be better described as “engineered”, because they are deliberately produced in such a way as to make them have the highest amount of taste reward and the lowest amount of satiety (they don’t want people to feel bloated after eating them, after all). 6 oz of Doritos has about as many calories as a stick of butter, but I would wager that most people find the prospect of eating the former a great deal more appealing than that of eating the latter.
So Taubes et al may be utterly wrong on the science, but as a practical matter they have found a very useful heuristic for forcing people to read nutrition labels and excluding the lowest-quality foodstuffs in a way that “eat a low-fat diet” does not.
I’m repeating this comment from two posts ago, re: the values of SSC readers:
Jonathan Haidt’s moral foundations site lets users create groups at YourMorals.org, and you can see how your group compares to other users. I created a group you can access at the following link. You do need to register as well.
http://www.yourmorals.org/?grp=7b11562808693d236a40d958c11c2aec
The Moral Foundations Questionnaire is particularly interesting. It’s not exactly about politics, but I think if users take it, the results will be more informative than some political tests. (The Pew Political Typology Quiz, for instance, has very imprecisely-worded questions that are maddening to me and I suspect would be to a lot of other readers.)
What an interesting site. No idea if any of this has scientific validity, but at least my implicit self-esteem is off the charts!
I thought I’d score higher on Narcissism, but apparently my Neuroticism is holding it in check 🙂
As a non-cognitivist that survey completely failed to capture my views. It makes some pretty big assumptions about the responder’s meta-ethical views.
The survey is a non-cognitivist?
Neat. Apparently if Slate Star Codex were a political party, I would totally be that one. (Except I’m with the liberals on harm.)
For those who identify as LessWrong members, there’s a LW group at YourMorals too. It was created in 2011, which means it has accumulated a reasonable sample size at this point, but might not be representative of the group as it currently is because the user base has probably changed a lot since then.
Apparently it isn’t possible to belong to multiple groups simultaneously (and creating two separate accounts to be able to compare yourself to both groups would be very slightly disruptive since the site is used for somewhat serious research purposes), but maybe you can juggle between the two groups or something. It would be super interesting to see whether there are any major differences between the communities of LW and SSC, but unfortunately the site lacks advanced features.
That might be the type of functionality someone would be willing to add. If anyone is interested in asking, contact info is here: http://www.yourmorals.org/aboutus.php
I followed the link but I only see the welcome page. I’m logged into my account and have completed the survey. Anyone else have this issue?
Worth repeating: Haidt’s studies are flawed because he focuses on the kinds of Authority, Tribalism and Purity specifically conservatives care about, and forgets to test those kinds liberals could conceivably care about:
Authority: catastrophic global warming, “there is a consensus, 90% of climatologists agree” types of arguments.
Tribalism: “you are not a progressive if you are not a feminist” etc. these kinds of ingroup policing
Purity: feeling horrified by the idea of burning books or using racist slurs BEYOND what such sentiments can be justified on consequentialist grounds – we are looking for the pure disgust beyond that
“Authority” in this context refers to people you obey, not to people you believe. Of course everyone considers some sources of information trustworthy. The difference is in how people feel about questions like this: if you are given an order you disagree with, do you have a moral duty to obey it?
“Tribalism” refers to moral approval of tribal loyalty as a value, implying that it can sometimes outweigh your personal sense of justice, not to tribal behavior as such.
If everyone valued authority and tribalism, then statements like “we have an ethical duty to question authority and fight against our tribal instincts” would look simply incoherent, and no one would ever write them.
However, I have seen such statements, so apparently these values are not universal.
Because you took those statements too literally. In the liberal mind, authority and tribalism never mean their own authorities (intellectuals) and their own tribes (political movements), merely the authorities they don’t accept (priests, parents) and the tribes they don’t want to belong to (nation, ethnicity, race).
But to be fair, this is not even a liberal problem but a general human problem: when we hear nice-sounding _general_ statements, we tend not to notice that our _specifics_ can be part of them. It is always about other people!
Thus sure we agree that we don’t want to eat unhealthy food in general – but subconsciously filter out our favorites. That is not unhealthy food, that is mom’s chocolate fudge cake!
This is more of a general human problem, and the liberal problem explained above is just a subset.
>>In the liberal mind, authority and tribalism never means their own authorities (intellectuals) and their own tribes (political movements), merely the authorities they don’t accept (priests, parents) and tribes they don’t want belong to (nation, ethny, race).
I think what Nita is saying is that *believing* what your priest says is true is not “authority” by Haidt’s definition. Obeying orders from your priest and acting a certain way just “because he said so,” even if your own conscience says otherwise, is the only thing that would count as valuing “authority.”
This conflation of authority (trusting that what someone says is true) and authority (obeying someone because they have power) is dishonest.
It’s not just that we coddle children that makes it harder to sit; it’s also that we shape their bodies to be worse at sitting. In particular, strollers typically place children in a position with a rounded back and weight on their tailbone, which conflicts with ideas of good posture. You should not just look at whether they’re sitting, but how they’re sitting.
Do you have any more info on that?
As someone who wishes he had formed better posture habits for himself but now has a toddler on whom to impress the importance of good posture, I wonder if there are simple modifications I can make to the stroller/car seat that will avoid the pitfalls you mention.
The book “8 Steps To a Pain-Free Back” by Esther Gokhale (great book with a bad title) talks about this. It shows lots of pictures of how babies naturally have good posture, but learn bad habits from imitating their peers and sitting in bad furniture. It contrasts this with how people in other cultures handle their infants — it shows pictures of African women holding babies in a way that stretches the spine, and talks about the Burkina practice of giving infants a certain kind of massage to help improve their body alignment (actually, they may have just mentioned that in the accompanying class).
Their recommendation for modifying seats is to place objects (pillows, towels, etc) on the back and seat so that you can sit in a more ideal manner: knees below hips, weight on sitz bones, pelvis behind back. I’d expect the principles for children’s furniture to be the same: basically, just “fill in the holes.”
Presence of bad students hurts good students more than presence of good students helps bad students, so a charter school that was completely equivalent to control in its instructional practices but was able to attrition out the bottom half of its population would still be expected to deliver a positive educational outcome for the total population.
“Presence of bad students hurts good students more than presence of good students helps bad students”
Is there data on this? The two most recent episodes of This American Life on school integration made the exact opposite claim, but I didn’t check their sources or anything.
from anecdotal evidence from my wife as a teacher, behaviorally bad students ruin things for everyone in the classroom. but i have no idea if this would hold in an actual study.
So if you have a daughter, you want to send her to an all-girls school that expels at the drop of a hat?
Object-level assertion =/= normative assertion.
Okay, presuming you want to have her have the best educational outcome?
i would probably prefer to send my kid to a school that has the ability to expel, as opposed to my wife’s public school that won’t expel students for outrageous misbehavior.
i also don’t think gender segregated schools for children is a bad idea, personally, but thats another topic.
If you expect she will not be expelled, yes.
It’s similar to a more common situation when you have to choose whether to send your son to school for the first time this year or the next one. Kids who start school later usually perform better than their peers, at the risk that if he drops out, he will have received one less year of education.
@Saint_Fiasco, that may be true for a year or two, but the effect _reverses_ in later school years. Moral: Don’t redshirt your kids!
I went to just such a school and if you can homeschool or just send her to university earlier either is a vastly better option.
Anonymous, that sounds interesting! Do you have a link for that?
All-girls, yes; expels, no – there are other, less final discipline methods. No reason to break a career for one wrong move. I endorse limited and humane corporal punishment, such as palm caning, instead.
(It is one of the characteristics of the idiocy of our times that we don’t use the most basic thing Nature gave us to keep us from doing things we are not supposed to do: physical pain, because it is “abuse” and “cruel”. What we actually do is far more cruel: instead of TEMPORARY pain, after which the punished could easily integrate back into a normal education, career or life and have a great life, we use “humane” but far more FINAL methods of punishment, such as expulsion or prison, which break careers forever. This is not ethics; on any utilitarian scale this is squeamish immorality, which cares more about not seeing visible hurt than about actually not hurting, dressed up as fake humane ethics.)
@ Shenpen
”
—Lazy idle little loafer! cried the prefect of studies. Broke my glasses! An old schoolboy trick! Out with your hand this moment!
Stephen closed his eyes and held out in the air his trembling hand with the palm upwards. He felt the prefect of studies touch it for a moment at the fingers to straighten it and then the swish of the sleeve of the soutane as the pandybat was lifted to strike. A hot burning stinging tingling blow like the loud crack of a broken stick made his trembling hand crumple together like a leaf in the fire: and at the sound and the pain scalding tears were driven into his eyes. His whole body was shaking with fright, his arm was shaking and his crumpled burning livid hand shook like a loose leaf in the air. A cry sprang to his lips, a prayer to be let off. But though the tears scalded his eyes and his limbs quivered with pain and fright he held back the hot tears and the cry that scalded his throat.
—Other hand! shouted the prefect of studies.
Stephen drew back his maimed and quivering right arm and held out his left hand. The soutane sleeve swished again as the pandybat was lifted and a loud crashing sound and a fierce maddening tingling burning pain made his hand shrink together with the palms and fingers in a livid quivering mass. The scalding water burst forth from his eyes and, burning with shame and agony and fear, he drew back his shaking arm in terror and burst out into a whine of pain. His body shook with a palsy of fright and in shame and rage he felt the scalding cry come from his throat and the scalding tears falling out of his eyes and down his flaming cheeks.
—Kneel down, cried the prefect of studies.
Stephen knelt down quickly pressing his beaten hands to his sides. To think of them beaten and swollen with pain all in a moment made him feel so sorry for them as if they were not his own but someone else’s that he felt sorry for. And as he knelt, calming the last sobs in his throat and feeling the burning tingling pain pressed into his sides, he thought of the hands which he had held out in the air with the palms up and of the firm touch of the prefect of studies when he had steadied the shaking fingers and of the beaten swollen reddened mass of palm and fingers that shook helplessly in the air.
—Get at your work, all of you, cried the prefect of studies from the door. Father Dolan will be in every day to see if any boy, any lazy idle little loafer wants flogging. Every day. Every day.
”
— A Portrait of the Artist as a Young Man, by James Joyce
Shenpen:
— Human beings have visceral reactions to inflicting pain. This means that there are human beings who enjoy inflicting pain in a way that there are not human beings who enjoy imprisoning. This creates a conflict of interest.
— Human beings have visceral reactions to inflicting pain. This means that inflicting pain has bad psychological effects on most punishers that do not apply to prison.
— Because pain is inflicted in a short time period, it’s easy for punishments to be ratcheted up almost infinitely. Prison cannot be ratcheted up past a human lifespan.
— Prison also has the effect of keeping the prisoner from committing crimes on the public simply because he can’t get out of prison to commit crimes. Inflicting pain doesn’t do this.
Note that all these reasons are also why we don’t punish prisoners by raping them, something else which your reasoning would allow.
@Nita,
If you have a real argument, present it. Dueling emotive appeals about the horror of corporal punishment v imprisonment of juveniles will only serve to muddy the waters.
@Jiro,
In point of fact, we do punish prisoners by raping them. In fact that’s the big way prison terms are justified as fair in the public consciousness. Listen to people’s comments when a sentence for a high-profile criminal is given out; they’re quite explicit about it.
More on point, prison has its own perverse incentives. The offenders are whisked away “somewhere else” so ordinary people don’t much care what happens to them, no particular person has to foot the bill for a long sentence or secure facility so costs can easily spiral out of control, and the length of time spent in prison means that ex-cons have both a years-long hole in their resumes and years of criminal training.
Anyway, obviously sometimes incarceration works. We should have it as an option when circumstances warrant it. But the same is true of corporal punishment and even capital punishment (although presumably not in schools). Locking ourselves out of any of those options seems like a poor choice.
@ Ever An Anon
Well, I wasn’t presenting an argument, real or otherwise. That was just an informative illustration — raw evidence, if you will.
Unfortunately, psychologically realistic literature (including fictionalized autobiographies and some genres of journalism) is the only medium we currently have for conveying human experience.
All good points, but my whole point is that we should, ethically, WANT to NOT destroy the lives of people, even when they have committed serious crimes. So the ratcheting up does not count – we should NOT WANT to do that at all. Lifetime is the most limited resource, and taking it away should be seen as the cruelest thing.
Therefore, the primary ethical requirement of punishment is that it should be SHORT. Return to normalcy as quickly as possible.
Therefore, to have any sort of effect, it must be INTENSE.
We should also try to minimize lasting effects.
What is short, intense, and has less lasting effects than comparable alternatives? Answer: physical pain that is only skin-deep.
Another important thing. Time discounting / time preference. Which is especially important for stupid people. Stupid people do drugs now and don’t care what will happen in 10 years.
Do you think they therefore care that they will be really bored in prison in 8, 9, 10 years? The answer is no – and thus anything that lasts long but is not intense has no deterrence value at all for the stupid.
As the stupid only care about short-term consequences, optimal deterrence is short-term intense pain.
To add: if you had a choice between taking a beating and getting expelled, which would you prefer? Between taking a beating and a prison term?
@ Nita
Unfortunately, psychologically realistic literature (including fictionalized autobiographies and some genres of journalism) is the only medium we currently have for conveying human experience.
Briefly, I’ll see your Joyce and raise you Kipling. (Action level: if I were king of the administrative system, I’d ban any punishment of the high nerve areas [hands, feet, head, etc]; otherwise listen to both sides.)
Imo popular literature in general provides extremely good evidence for many things — about its target audience. What they found acceptably realistic as background, what they found realistic and controversial (foreground of the story), and (less easy to distinguish) what they accepted as fantasy or metaphor or exaggerating to make a point within an otherwise realistic work.
@Evan, you can find links to primary sources in this article from Slate.
(And on the advantages of being younger than one’s peers.)
@ houseboatonstyx
So you’d ban exactly what Shenpen was proposing? OK.
Well, the passage I quoted doesn’t seem fantastical, metaphorical or exaggerated to me. Does it seem that way to you?
@ Nita
Well, the passage I quoted doesn’t seem fantastical, metaphorical or exaggerated to me. Does it seem that way to you?
Different readerships would read it different ways. My Martian archeologist researching punishment in schools would begin by looking for cane/caning in other popular works (eg Kipling’s) and contemporary reviews of them, and find that
1) canes did exist and were often used for punishment
2) a majority of readers and reviewers (Kipling’s camp) accepted caning as common and not permanently damaging, in moderate use on average students (though of course exceptions would exist). They might see Joyce using excessive wordage to make a point (likely about his school or Catholic schools in general), so also probably exaggerating Stephen’s reaction to Dolan’s real abuse.
2.a) If a writer is making a point about a nation-wide practice, choosing an out-lying example for poster child may use literally true description, but imo would be exaggeration in effect.
3) Joyce’s target audience (already condemning all Catholic schools and/or all physical punishment) … might also see it as elaborate wordage to make a point they agreed with. Warmer Joyce fans might see it as literal, realistic about himself, unusual as he may have been (as he said on the tin).
(Oh, dear, this is starting to pattern-match with something Bayesian.)
As for Shenpen’s palm caning, I think he was using a convenient example, and even if the king forbade hurting high-nerve areas, it wouldn’t affect Shenpen’s point, which imo is worth considering (cautiously and case by case).
@ houseboatonstyx
1. Innocent people mistakenly condemned to death are outliers, but their existence is still an argument against capital punishment.
2. There are basically three ways schoolkids can react to routine disciplinary violence:
– recoil in horror, condemn the authorities and become alienated from mainstream society (Joyce);
– accept it as normal and even character-building, go on to glorify violence in other spheres of life (Kipling);
– develop a cynical attitude, abandon ethics and concentrate on avoiding pain at all costs (probably most of them).
None of those strike me as beneficial to niceness, community and civilization.
Anecdotal evidence from my time working in a school (hasten to add as clerical staff, not teaching): one kid can disrupt an entire class and the teacher’s time is taken up with dealing with them, rather than teaching, so they do drag the class down, e.g. strolling in late, don’t have their books, “please can I go to the locker to get them”, coming back late, being noisy when sitting down, needing to be rebuked to get them to settle down and stop distracting others, “please can I go to the bathroom but I need to go why won’t you let me go” etc. etc. etc.
And that’s not the ones who literally start fights, throw chairs, swear and yell, and so forth in the classroom. Out of a forty-minute class period, anywhere from ten to twenty minutes could go merely on coping with distractions from pupils who didn’t want to work, didn’t want to be there, or had behavioural/psychological problems.
I have taught in numerous schools; it is certainly true that it’s easier for a student to negatively affect another’s learning experience than to positively affect it.
I listened to the TAL episodes with interest also (I live right in between the “bad” and “good” districts they talked about). I kept waiting for them to describe a mechanism, but they never quite got around to it.
The first TAL show had a mechanism– a claim that schools with a high proportion of black students don’t get comparable resources with white-majority schools. Students are unlikely to learn material they aren’t taught.
Also a former teacher, also can provide anecdotal report that adding one bad kid to a class took up a pretty high percent of my time and dragged everyone else down. I also vaguely remember hearing someone give the “good kids pull bad kids up, but bad kids don’t pull good kids down” study and thinking it probably came from about the same place as the “earthquakes cause divorce” study, but that’s just my own bias.
I think the “good kids pull bad kids up” thing works in specific situations, e.g. schools where there is an expulsion policy, classroom discipline (and things like behavioural programmes where there’s a teacher trained in this and a room separate from the classroom where disruptive pupils can be sent and monitored and helped with work), where the parents are supportive (and not confrontational about “why are you always picking on my Johnny?” when Johnny is starting fights or being openly disruptive in class) and there are things like paired-reading tuition, better students coach weaker students/lower years, etc.
But in your ordinary “you can’t expel them unless they commit murder because they have a right to an education and boy will their parents come raging in with their lawyers in tow to force you to take them into class because they don’t want the little brats at home all day” school, that kind of mentoring and monitoring will probably not be in place.
One of my friends talked about going from a school where the culture of the students opposed doing well in classes to a school where everyone agreed that schoolwork was important. He became a good student very quickly.
In one school where I worked, it was more like “we can’t force this kid to behave or punish him for bad behavior, which is totally due to his SPED status, because we would be violating the law/ADA/IDEA.” AKA, full inclusion.
My wife’s experience with public school, even in a relatively nice area, involved several children that would literally storm around the classroom throwing desks and screaming. These kids could not be expelled according to the administration.
It seems like the obvious problem with expulsion is that the law dictates every child needs to be educated somewhere, so you’re just pawning off your problem on some other school that is then going to get worse, until the kid you expel gets expelled from enough different schools that they finally end up in the district continuation school (or whatever they call it in districts other than where I went to school).
@Adam
One could take the view of expulsion as sort of like ACT, in that it should mandate services for the student. If a student is that disruptive one might reason they have medical/psychological/family problems that require intensive intervention and monitoring (like that old saw about the kid who starts fights all the time because he’s getting hit at home). They could be placed in a special track in their school, if it’s big enough, or in another school focused on kids with serious external stressors (I have no idea how you’d avoid making this stigmatizing). So the funding for that student would go up but since they are removed from the classroom you can spend less on the other students, and intervening earlier probably has good knock-on effects for the rest of that kid’s life. Someone mentioned The Wire above and one of the things that show talked about is that if you try to intervene when the kids are 15-18 it’s way too late to help them avoid dropping out or entering the prison pipeline. You have to catch them earlier, in middle school.
@Bartlebyshop, Can’t be done. That would be segregating the students (I don’t actually recall if that’s the verb that was used) and would violate IDEA/cause a lawsuit (at least in the U.S.).
Adding my data point as a former teacher:
Yes, one bad (disruptive) kid can pull down the whole class. Sometimes you see almost a miraculous change in the class when that one kid stays at home sick for a week. But then the kid comes back and everything reverts to “normal”. If you have two or more disruptive kids in the same classroom, meaningful education becomes almost impossible, because they will keep encouraging each other (while having just one kid means they will sometimes get tired or bored, so you can teach for a while).
On the other hand, one good kid in a class… will probably be bullied, or learn to become invisible.
It works in specific situations: the time honored Eton tradition of collective punishment.
Put one bad kid into a dorm room with five good kids, punish all six when the bad one misbehaves, and they will effectively police him. Of course, do turn a blind eye to black eyes and swollen mouths and other obvious signs of effective peer-to-peer policing.
I think that this is highly dependent on what we mean by “bad”.
If the “bad” students are disruptive, have poor discipline, etc., then it’s pretty plausible that their presence drives the whole class down. If the “bad” students have a learning disability but aren’t otherwise causing problems in the classroom, then it’s quite plausible that they would do better by being mainstreamed, without causing harm to other students.
The link downthread about “full inclusion” seems to support this dichotomy.
If ONLY the administration would allow students to be distinguished in this way (bad = disruptive vs. bad = not learning). Instead, there is huge pressure on teachers to tolerate/include disruptive students due to a putative fear of being sued otherwise (even though this fear is largely unfounded, according to a reputable source that I cannot recall and therefore cannot provide a link to).
I have this book which doesn’t give a direct answer to your question, but it gives “classroom behaviour” an effect size of 0.8, “decreasing disruptive behaviour” an effect size of 0.34 and “peer influences” 0.53. It also gives “acceleration for gifted students” an effect size of 0.8 (acceleration = moving them ahead one or more years), but “ability grouping” an effect size of only 0.12.
0.4 is about the average of interventions studied in the book, and 0.8 is among the top few studied. The author suggests 0.4 as a benchmark for effectiveness; I’m not sure if this is the optimal way to look at things, but it’s a reasonably simple way to discount inflated effect sizes due to publication bias, Hawthorne effect etc.
Clear as mud?
Anecdotal evidence here, but Scenes From The Battleground should be a mandatory reading before any debates on current state of education.
Specifically, see these articles: The Naughty Boy, The Disruptive Girl, The Top Five Lies About Behaviour, The Driving Lesson.
It came to my mind, The Wire (which is, granted, a fictional TV show) made a similar claim, that separating bad kids (“corner kids”) and good kids (“stoop kids”) is beneficial for both groups. Ed Burns, one of The Wire creators, worked as a public school teacher after being a Baltimore police detective, claimed they did something similar in their school and it worked. Much of The Wire‘s 4th season is based on Ed Burns experience being a public school teacher in a really bad neighborhood.
We need a distinction between disruptive students vs. students who aren’t learning much. I’m assuming that the former drag a class down, while the latter don’t have that much effect.
My mom taught class for a few years, then went to grad school for diagnosis/child psychology. I’m relaying what she told me one time. Consider a math class with 30 students. Each of them has a “mathematical ability” (some idealized combination of innate cognitive ability and prior knowledge). The “pace” of the classroom, the rate at which material is taught, will pretty much correspond to the average ability of the kids. She called it teaching to the mean.
The upper half of the kids are bored and wish things would go faster. The interesting thing is that the lower half tends to actually trudge through and learn the material as instructed. It’s harder than they’d like, but they make do.
Consider 60 students rank 1-60 in math ability. Classroom A contains students 1-30, classroom B contains students 31-60. If you swap student 30 and 31, you’ve not appreciably changed the mean in either classroom, and the change would have no effect on the pace. But if you swap 21-30 with 31-40, the mean in B has gone up and the mean in A has gone down. Thus the pace of teaching will change appreciably.
If you’re a student in Classroom A you’re going to be upset about the influx of lower ability kids who are slowing the pace of the class, making you even more bored and learn less in the end. If you’re a student in Classroom B you’re going to be begrudgingly happy about the influx of higher ability students. You’ll have to try harder but will end up learning more.
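The arithmetic in this example can be sketched in a few lines (treating rank as a stand-in for an ability score, a simplifying assumption; the scores here are invented for illustration):

```python
# Students ranked 1-60 by math ability; use an inverted rank as a
# toy ability score, so rank 1 scores 60 and rank 60 scores 1.
scores = {rank: 61 - rank for rank in range(1, 61)}

def mean_score(ranks):
    """Average ability score of a classroom given its student ranks."""
    return sum(scores[r] for r in ranks) / len(ranks)

a = list(range(1, 31))    # Classroom A: students 1-30
b = list(range(31, 61))   # Classroom B: students 31-60

# Swapping only students 30 and 31 barely moves either mean.
a_small_swap = [r for r in a if r != 30] + [31]

# Swapping students 21-30 with 31-40 shifts both means noticeably.
a_big_swap = list(range(1, 21)) + list(range(31, 41))
b_big_swap = list(range(21, 31)) + list(range(41, 61))

print(mean_score(a))           # 45.5
print(mean_score(a_small_swap))  # ~45.47 (almost unchanged)
print(mean_score(a_big_swap))    # ~42.17 (appreciably lower)
print(mean_score(b), mean_score(b_big_swap))  # 15.5 vs ~18.83
```

So, consistent with the comment, the one-student swap leaves the “pace” (mean) of each classroom essentially unchanged, while the ten-student swap moves both means by more than three points on this toy scale.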
Students that are simply slow will take up more time, especially as teachers are measured by the percent of students that are meeting the basic requirements.
Now, having excelling students in with them so that they can peer-tutor the others can help, so a diligent but struggling student can benefit from having advanced but caring classmates; however, as ryan says, the pace will be affected.
Of course not as much as by a disruptive student who hurls tennis balls at the teacher’s head the first day of class while he tries to write on the board… gah, the repressed memories are coming back! Where was the trigger warning?!?!
Some of those disruptive kids may be the ones bored by the glacial pace.
Academically poor students aren’t as much of a disruption as students with discipline problems, but they can be disruptive, especially if they’re doing poorly due to lack of effort. Not to lapse too much into woo (and obviously I intend this metaphorically, not literally), but students who aren’t even trying to learn really suck the energy out of the classroom’s aura.
I think that your jump from an ordinal concept to a cardinal one borders on the fallacious.
The link in “here is someone else arguing against that blogger” currently goes to a Google cache page rather than directly to the page. Is this deliberate? The page itself is still up.
Site wasn’t working so well before, is now. Fixed.
OK, thanks!
Well, Thomas Schelling IS alive.
Really? He has to be like a hundred years old by now!
Did Death do that thing where he challenges him to some kind of game for his life? Death should really know better than that.
Thomas Schelling’s Bogus Journey
94.
Presumably the game was some variant of “guess what number I’m thinking of”.
“You and I shall meet somewhere in Hades, sometime tomorrow. If you miss the meeting, I’m going back to Earth”.
I have vague recollections, which likely have significant inaccuracies, of David Friedman saying that he discussed the term “Schelling Fence” with him.
Compound interest is not the least powerful force in the universe. It’s just a really shitty metaphor for the intergenerational transfer of skills and resources, which is apparently not a very powerful force.
The obvious wrench in the slavery thing is I’m pretty sure it didn’t benefit very many white southerners. The tiny number that actually owned large plantations did well. Everyone else just had their labor value severely undercut.
According to this source (http://www.civil-war.net/pages/1860_census.html), the percentage wasn’t that tiny. Something like 1/5 to 1/2 of households in the Confederacy owned slaves, depending on the state.
So…I’m gonna try and be delicate about this.
How many of those households were a case of “we own a nanny” or “we own a cook” or “we own a maid”, or some combination of those, rather than “we own several slaves who are integral to running this profitable, exploitative business we have”?
Now, I AM NOT trying to trivialize ANY slavery. But it seems to me that if you’re a maid, or a cook, or a nanny for a small family, and you are receiving food, board etc. as part of the conditions of your slavery, you are being exploited, in purely financial terms, far less than the other slaves pulling cotton all day. I would not expect this variety of slavery to have nearly as much of an effect as cut-and-dried cotton pulling on transfers of “potential wealth”.
Granted this doesn’t take into account lost opportunities to do other things because you’re cleaning up, cooking, and looking after your owner’s children, which may have been significantly profitable.
IDK, I feel like I’ve phrased what I’m trying to say poorly.
Data for North Carolina from 1790-1860 (in very big chunks of time): http://www.learnnc.org/lp/editions/nchist-antebellum/5347 More than one slave was more common than just one slave by a considerable margin.
Additionally I’ll bet most of the single slave households were *still* agribusiness, though I have no data.
That’s really quite interesting. I was imagining something more similar to Victorian style households with a big house and a number of staff but with only a fairly small portion of the population able to afford such a lifestyle and the vast vast majority of “families” being poor peasant or serving class.
The links provided would tend to imply otherwise: that a very large portion of the population were able to maintain households rich enough to have one or more servants and/or slaves.
Is it a matter of households being rich enough, or were slaves just that cheap? Or maybe even in non-plantation households, they were necessary for the productivity of the owner. Kind of like owning a car today; it’s hard to have a steady job without a car. Maybe slaves offered some kind of benefit of that kind.
Slaves really are that cheap.
Even now in the third world, there are more households with maids than households with cars.
When my family moved from a rural area to a real city, we couldn’t afford a full time maid anymore, but we still have a maid that helps in the house on Tuesdays and Thursdays. The other days she works as a cleaner in an office building, I believe.
In the first world, human labor is more expensive than technological time-savers such as dishwashers, roombas, laundry machines and the like.
There’s a substantial difference between the statements “slaves really were that cheap” and “slaves really are that cheap”. The standard figure I’ve heard (though the original figures were in a book so I can’t check how they came up with them) is that an antebellum slave in 1850 cost ~$40,000 in modern dollars, where a slave nowadays costs ~$90. (Source: http://www.freetheslaves.net/about-slavery/slavery-today/ )
That’s a pretty hefty expense back in the day, and the fact that a substantial portion of the southern population would be financially able to own multiple slaves definitely runs counter to my intuitions. I wonder if it was a social signal strong enough to command the majority of a family’s resources: “You’re not one of those families that can only afford one slave, are you?”
I don’t see how you’re being exploited less – after all the farmhands get fed and housed as well.
And given what I pay for my kids childcare, the financial loss to the slave strikes me as considerable.
Not to mention that back before electricity running a house was a lot harder work than it is now. I’ve washed clothes by hand, once you get to things like adult-sized trousers and tops that’s very hard physical work.
Well, if the farmhand slave and the household slave are receiving roughly the same basic sustenance and housing (not exactly the same, but pretty close), and the farmhand is producing more, then in Marxist terms, the farmhand is suffering more total economic exploitation. He/she is producing more, but receiving the same in return. So if we’re trying to measure just how much economic exploitation was added to the wealth of American whites, and then potentially increased by compound interest, it’s important to know whether most slaves produced a lot of value (farmhands/cotton-pickers) or a little less value (domestic chores), so we can make a decent estimate of principal.
I agree that being enslaved to perform domestic chores would indeed be a significant financial loss, but it seems basically correct to say that it’s less of a loss than more purely productive agricultural activities.
Farmhands get paid as well. Household slaves in a small holding probably didn’t, does anyone know if (like Classical world slaves) they could avail of earning opportunities?
I think I’ve read anecdotal accounts of slaves saving up to buy family members out of slavery when they were free themselves; first buy their own freedom, then work and save to buy out parents/spouses/children.
From “Martin Chuzzlewit”, published 1843:
“if the farmhand slave and the household slave are receiving roughly the same basic sustenance and housing (not exactly the same, but pretty close), and the farmhand is producing more, then in Marxist terms, the farmhand is suffering more total economic exploitation.”
That’s funny, because isn’t the ideal “From each according to ability, to each according to need?” Or is that a misquote or out of context?
@Randy
I think Marx believed that if the slaves owned the means of production, they wouldn’t mind working as much as they could. But since they work for someone else and are not compensated for the full worth of their labor, then they are being exploited.
Sure, and that makes sense, I was just struck by the observation that “to each according to his need” and “according to his contribution” are radically different.
Sort of what Saal said, but for the most part, those weren’t people running major operations on the backs of slave labor. It was just household labor. I’d expect whatever compounding, lasting advantage there was to accrue to the bigger landowners, not the people getting free maid service.
As I understand it, the big counterargument to the first two rebuttals here is that, functionally, every southern household that could have owned a slave would have been a small business producing surplus household goods and services like soap and candles and clothing and every other thing that had to be made in someone’s home, because the south wasn’t an industrialized region. Over a third (and, depending on individual circumstances, more than half) of the household’s material wealth would come from the home economic activities that you would use your single house slave for.
Compound interest can be that powerful when a country’s economy is limited primarily by the scarcity of capital. It’s just that that’s a very rare circumstance to be in, usually countries are running up against technological or institutional constraints. You can tell you’re capital constrained because your economy is growing 10+% every year.
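For a sense of scale, here is a toy calculation (with illustrative, assumed growth rates): at the 10% annual growth the comment mentions, compounding is genuinely dramatic, whereas at typical mature-economy rates it is much tamer.

```python
# Toy comparison of compound growth over 30 years.
# 10% stands in for capital-constrained catch-up growth,
# 2% for a typical mature rich-country rate (both assumed for illustration).
def growth_factor(rate, years):
    return (1 + rate) ** years

fast = growth_factor(0.10, 30)  # ~17x the starting size
slow = growth_factor(0.02, 30)  # ~1.8x the starting size
print(round(fast, 2), round(slow, 2))
```

The roughly tenfold gap between those two outcomes is why compound interest looks like the most powerful force in the universe in a capital-scarce economy and like a fairly weak one everywhere else.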
There were psychological benefits – both aspirational (“Someday I could own a plantation”) and, well, ugly (“At least I’m not a filthy slave”).
I went back to Scott’s article there and was surprised that ctrl-f did not find the phrase “resource curse” as a theory of Southern U.S. poverty post-slavery. Although some commenters did make similar points, mostly on the theme of the South wanting an agrarian economy and getting what it wanted.
The idea would be that the presence of a valuable and easily exploitable resource – usually oil but in this case slave agricultural labor – led to the development of a political/economic structure focused almost exclusively on exploitation of that resource. And consequently inhibited the normal development of (more) productive industry.
I have no idea how that would apply to heavily-bombed areas of Vietnam. It probably wouldn’t.
Can tourists be a resource curse?
No because far fewer will come if the government mismanages things.
Even if the specific mismanagement is to skew the local environment unfairly in favor of tourists?
Speaking as a resident of a city where 40% of the economy is supported by tourism, there are plenty of ways that a government can mismanage things without driving tourists away. For example, the road quality is so bad in my neighborhood that several streets near my house are virtually undrivable due to potholes, but the parts of the city that tourists go to don’t have that problem.
So, tourism isn’t a resource curse for the tourists, but it is for the natives.
What does government have to do with it? Usually the resource curse is thought to be due to underinvestment in industries with positive spillovers, which are often manufacturing and high-skill/tech sectors. I would think that e.g. a small island nation with amazing beaches would be a prime candidate.
The dominant meaning of the term “resource curse” is a situation where an economy has an abundance of natural, as opposed to human, resources. Using the term to refer to an abundance of human labor would be a deviation from, if not a direct contradiction of, the standard meaning. Just because someone isn’t a person in the eyes of the government, that doesn’t mean that they aren’t a person in the eyes of economists. Slave labor may be cheaper than non-slave labor, but it’s not free. Runaways, malingering, and revolts are all costs that increase with mismanagement.
Just because someone isn’t a person in the eyes of the government, that doesn’t mean that they aren’t a person in the eyes of economists
If a person cannot trade in the market to their own advantage, that kind of does mean that they aren’t a person in the eyes of economists[*]. I think the underlying logic of the “resource curse” does apply when the resource is slave labor.
[*] Just to clarify, this does not mean economists would endorse the practice of slavery because the victims are unpersons. Rather, they would condemn the ultimate dehumanization of denying someone free market access, the highest right of homo economicus.
But they can trade. That’s my whole point. They trade their labor for food and lodging. That it’s not a “fair” trade does not mean it isn’t a trade. Slaves are economic actors, and it’s a deep error to think otherwise.
Me taking a thing from you against your will is not trade, in any sense an economist would acknowledge. Not even if I chose to give you something in return. The place where this happens, is not a market. You are using these terms outside their normal economic definitions and claiming the results apply or ought to apply to mainstream economics.
The slave owners didn’t literally take the labor; the slave gave the labor in exchange for things of value, including not getting whipped or killed. Morally, that’s taking, not trade, but economically, it is trade. It’s two people each offering something the other wants. That it’s “against their will” is a moral, not economic, judgment. Slaves have the power to deny their labor to owners, and it is a valid economic question to ask what motivates them not to do so.
A bigger economic problem was that slaves aren’t very strongly motivated to produce, and are strongly motivated to run away, so slave owners have to spend resources on slave drivers and slave catchers, and slaves still manage to work less efficiently than someone who gets paid.
So the rest of the economy loses the extra potential production of the slaves and of the slave drivers and slave catchers, if they were doing something that actually added to human happiness. This is offset a bit by higher earnings to the slave owners, but, you know, if I steal $100 from you the economy as a whole isn’t better off, the immediate effect is just a transfer from you to me.
There are some industries where people’s output can be directly measured well enough that the flogging approach can still be profitable for the slave owners. Last time I checked the relevant historical economists there was a bunch of debate about to what extent Southern slaveowners managed to shift many of the costs of their slave-owning into society in general.
I thought that the obvious wrench in the slavery thing is that in 1860 slaves represented the majority of the wealth of southern whites. Then all that wealth got transferred back to the slaves themselves and went to 0 for the whites. It’s hard to compound 0. I thought that was why Coates focused on post-slavery stuff like redlining, because there hasn’t been a civil war to undo most of those benefits.
In a competitive market, slaveowners derive no benefits from owning slaves. So if you wish to assert that slaveowners did benefit, it is incumbent on you to explain how you think the market was uncompetitive, and how that resulted in slaveowners profiting.
My mother has been interested in the Antebellum South since my parents moved to Mississippi before I was born. She’s read lots of interviews with former slaves, things like that. She told me that many households might have one slave doing what you would think of as “hired help” work, the same stuff a farm hand would do. They’d probably be working alongside the men of the house.
My sister once dated a boy who was the direct descendant of a plantation-owning family. He and his mother lived on a house on a street named after their family. The other families on his street were all black and all had the same last name as him–they were all descendants of his ancestors’ slaves.
The family had had to sell the big old plantation house decades before–it was too expensive to keep up with their dwindling resources. They moved into a small, cheap house like all the others on the street. They *did* still own some land which they used to grow hay. My sister’s boyfriend spent every summer baling hay with his uncle (fairly grueling work, by the way).
They also owned a barn crammed with all the fancy old furniture from the house, most of it dating back to the Civil War and most of it probably worth something even though it hasn’t been kept up very well.
So yes, they did have some resources left over after all that time. I’m not sure what the land and furniture would be worth, but the family certainly seemed less wealthy and comfortable than my family, which was raised into the middle class by my grandfather.
The other families on his street were all black and all had the same last name as him–they were all descendants of his ancestors’ slaves.
The family had had to sell the big old plantation house decades before–it was too expensive to keep up with their dwindling resources. They moved into a small, cheap house like all the others on the street.
Now that could make a novel. Why didn’t the owners’ family get an equally cheap house in a different neighborhood, instead of choosing to live among people who might have a grudge against them? And how did they all get along?
Oh, look, I got the first post!
So, the observation about fetal microchimerism is that every child leaves a few cells behind within his mother’s body. Therefore, they argue, since Mary’s body had some of Jesus’ cells within it, and Jesus’ body could not be allowed to undergo corruption (Psalm 16:10, “You will not… allow Your Holy One to see corruption”), Mary’s body could not undergo corruption either. Therefore, Mary’s body was received up into Heaven.
However, this proves far too much. Skin cells are constantly decaying and being washed off the human body. Does all the dirt of Galilee and Judea thus become incorruptible? If Psalm 16:10 applies to all Jesus’ cells, then the much more sensible solution would be for God to assume just those cells into Heaven, leaving behind the land of Israel – and Mary’s body as well.
(Though, I suppose that would be one solution to the Israeli-Palestinian problem…)
Eh. The foetal microchimerism is an interesting fact, but not the basis for the doctrine of the Assumption (yesterday the 15th was the Feast of the Assumption, which is why this is all over the Catholic blogosphere).
Mary is the New Eve, the Ark of the Covenant – quoting from the Litany of Loreto here:
I have to make a tiny, evil, atheistic chuckle every time I see one of these “Science Backs Up Dogma” posts. If they really believed what they say they believe, they wouldn’t care whether science backed it up or not. Even the invention of a chronoscope proving that all the miracles of the Bible happened exactly as written should be yawn-worthy news, the equivalent of “Scientists Prove Sky Is, In Fact, Blue”.
At least for Catholics, this is not the case. At least as far back as St. Augustine of Hippo, Catholic doctrine has been that reason and faith are parallel paths to the same truth, that faith is properly reinforced by e.g. scientific discoveries confirming things previously believed on faith alone, and at least at the margins that matters of faith may be reinterpreted in the light of scientific evidence.
First, it’s interesting to see science provide an explanation from a totally different point of view. Science backing up a miracle is something like “Scientists Detect Blue Sky’s Effects from Bottom of Ocean” – it doesn’t prove anything new, but it’s still interesting to read about just how they managed to do it.
Second, there’re all these people who don’t believe in miracles but do believe in science, so it’s fun to see science coming in on our side for a change. And, hey, it just might get some of you atheists to follow science and trust in God!
I’m going to indulge in something I try very, very hard to avoid, and make a false-consciousness argument here. I think you are simply wrong about the reasons why religious people in general, and yourself in particular, seize on scientific discoveries that appear to confirm dogma. The pattern is too easily recognized by anyone who’s ever tenaciously held a controversial and poorly-supported belief: You thirst for evidence because evidence would mean you were right after all.
I do this, too. We all do it. My tiny, evil chuckle is not one of smug superiority. It’s the ironic acknowledgment of one of those moments when the curtain gets pulled back, and there sits the Wizard of Oz.
It is kind of puzzling. If faith and reason aren’t supposed to conflict, then why the need for validation? Wouldn’t the stock response simply be that wherever they seem to conflict, then the proper reasoning/evidence hasn’t yet been discovered/developed?
Ultimately, what’s your (the royal you) epistemological priority? You can, of course, just pick and choose willy-nilly areas in which faith takes epistemological precedence (I know it’s a stretch to call faith an epistemology at all, but I’ll allow it) and other areas where reason should dominate, but that’s not going to satisfy most reason-inclined individuals; after all, they might not reject faith as a possible epistemology, but at least would want some sort of reasoned argument why faith should take precedence in some areas.
In other words, in a world where *everyone* believed in the assumption, no one would really care about scientific proof for it, except as a curiosity.
Even if we are sure ourselves, we want proof because those other guys are WRONG
“I boldly affirm that an infidel who, destitute of all supernatural grace, and plainly ignoring all that we Christians believe to have been revealed by God, embraces the faith to him obscure, impelled thereto by certain fallacious reasonings, will not be a true believer, but will the rather commit a sin in not using his reason properly.” – Rene Descartes
Can we back up a minute here? This is not “Aha, a scientific explanation of how the miracle was performed!” For non-Catholics and non-Orthodox Christians, for instance, this will have no impact on their ideas about the Assumption. In fact, for certain parties, this will be even worse: horrible confirmation that the Papists are indeed trying to make a goddess out of Mary and put her into the Trinity! 🙂
Whether or not the Shroud of Turin really is the actual burial cloth of Christ does not affect my belief in the Resurrection one straw. I don’t base my belief on “this is the physical relic that proves it” and if it was unilaterally declared real in the morning it wouldn’t make a difference, anymore than the Discovery Channel-style documentaries about how Leonardo faked it up make me go “Well, I guess that’s disproved everything!”
It’s not so much “science backs up dogma” as “here is something even more fascinating to add to contemplation of this event”. It’s like getting older and having more life experiences and that enriches your reading of Shakespeare and lets you appreciate things you did not notice before.
Jesus takes all His humanity from Mary, so the idea that this is reciprocated, and that part of His human nature remained in the body of His mother is fruitful for contemplation. The idea of the Incarnation, after all, is that by God becoming Man, justice and mercy can both be satisfied. And Mary is the first-fruits of those who are saved.
They are not actually that kind of idiots. In Catholicism the term “faith” simply means “to hold probable”. Not “blind 100% trust”. As a Catholic friend of mine has put it: Gnostics claim to know, Agnostics claim to not know, but what Caths have is “hope” and “faith”, which roughly means “these ideas are probably right and really nice too”.
The whole Scholastic system is highly probabilistic.