You probably know Alan Turing was pretty good with computers. But did you know he was also an Olympic-class marathoner?
Study predicts that US average IQ will rise 2-3 points in the next fifty years, driven by finding that minorities are catching up very quickly. Haven’t investigated their numbers yet, but excellent if true. [EDIT: more discussion here]
Rabbi, what are the Jewish laws regarding Halloween?
Not mentioned on the above, but I’m going to come out and guess this dog costume is probably treyf.
Omnilibrium is a social site that tries to improve online filtering. Instead of a big pot of Reddit-style karma that shows everyone the most upvoted posts, it tries to show everyone posts upvoted by people whose opinions have previously been correlated with theirs, with various customizable options to decide how much you want to be exposed to differing opinions. It needs more users for a good trial run, so check out their FAQ and then join in.
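The correlation-weighted filtering idea can be sketched in a few lines. This is a toy illustration of the general approach, not Omnilibrium's actual algorithm; every name and number below is made up.

```python
# Toy sketch of vote-correlation filtering: score a new post for one user
# by weighting other users' votes by their past agreement with that user.
# (Illustrative only -- not Omnilibrium's actual algorithm.)
past_votes = {                 # +1 = upvote, -1 = downvote on three old posts
    "you":   [1, -1, 1],
    "alice": [1, -1, 1],       # has always agreed with you
    "bob":   [-1, 1, -1],      # has always disagreed with you
}

def agreement(a, b):
    # Mean product of votes: +1 for perfect agreement, -1 for perfect disagreement.
    return sum(x * y for x, y in zip(a, b)) / len(a)

def personalized_score(new_post_votes, me="you"):
    # Each voter's vote counts in proportion to their past agreement with `me`.
    return sum(agreement(past_votes[me], past_votes[u]) * v
               for u, v in new_post_votes.items() if u != me)

# Alice and Bob both upvote a new post, but Bob's upvote counts *against*
# the post for you, since he has historically disagreed with you.
print(personalized_score({"alice": 1, "bob": 1}))  # 0.0
```

The "customizable options" presumably amount to adjusting how much weight low- or negative-agreement voters get.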
If you want to learn something important, compare the graph in Vox’s The Budget Deficit Is Way Too Low with anyone else’s graph of the budget deficit.
From the Department of Goodhart’s Law: economists sometimes use the Big Mac index to estimate the value of a country’s currency; knowing this, Argentina strong-armed their local McDonald’s into changing the price of Big Macs.
This is the only proper response to a teacher taking off points because you didn’t “show your work”.
The Cuban government (not Castro, the one before) built the Panopticon as a real prison.
Pseudo-Erasmus gives a good introduction to the emerging study of cultural institutions.
Disney’s 101 Dalmatians was loosely based on the original novel. Disney’s sequel was not based on the original sequel. The original sequel involved an alien dog coming to Earth to save the Dalmatians from nuclear war.
10/21/15. No hoverboards, but the government did grudgingly allow us to use the genetic tests we were using just fine years ago until they took them away. Well, some of the genetic tests. At double the price.
Female education decreases teenage fertility, but does not have a more general lifetime effect on fertility.
A couple weeks ago I wrote about how restrictive regulations let Turing Pharmaceuticals raise the price of Daraprim to $750/pill. Now one group has found a way around that – use compounding pharmacies, which are less bound by the regulations, to sell the same medicine for $1.
Evidence shows that evidence-based literary instruction doesn’t work. Which, if you think about it, means that it does work. Or something.
My new favorite thing – extraordinary overwrought Bollywoodesque portrayals of the climactic archery battle in the Mahabharata. Here’s how one movie handles it, here’s how a TV series handles the same thing, and then and only then are you prepared to appreciate the consummate genius of this lentil cake ad.
Michigan leads the nation in routing around democracy. The pessimistic view is that democracy shouldn’t be routed around. The optimistic view is that democracy should rarely be routed around, the catastrophes in Detroit and Flint are among those rare times, and this proves that people are good at limiting this extreme remedy to the times when it’s needed.
A history of people being excessively geeky and emotional about literature, from the Iliad to the present day.
Nostalgebraist, a frequent commenter on Rationalist Tumblr whose previous story Floornight got advertised here, has finished his latest work of online fiction, The Northern Caves. It’s the chronicle of “an online message board devoted to a cult fantasy author wrestling with his baffling final book” in a way that quickly turns weird. I have some commentary on it here and a review here.
Japan’s Yakuza are famous for…well, many things, but one of them is having great trick-or-treating on Halloween. To the dismay of children everywhere, this year they have announced suspension of their operations due to an especially big turf war.
Wikipedia: List of bizarre buildings. For example, Dr. Evermor’s Forevertron is a three hundred ton, fifty foot high steampunk device in rural Wisconsin.
Why we are really far away from successfully simulating the brain.
Chanda Chisala at Unz Report is on a roll. He’s been talking a lot about how the success of African immigrants to the US confounds a lot of simplistic explanations of the black-white achievement gap. Everybody figured this was just a result of those immigrants being heavily selected, but now he’s back with two posts arguing that it’s not selection effects. First, he points out that the immigrants’ children don’t seem to be regressing to the mean in a very specific Jensenian way that we would expect if they had been selected – this is more complex than I originally thought and not answered simply by “the offspring of two equally extreme parents will not regress”. Second, he points out that even the children of Somali refugees – an unselected population if ever there was one – seem to be doing better than native-born African-Americans. Some attempts at counterargument from James Thompson and Human Varieties. It’s good to see these issues being debated by such civil and mathematically sophisticated people, and also a sign of change how many people on both sides are black. Also by Chisala: book about Barack Obama debating Ayn Rand
Closely related to the recently linked Berniebro article: A Portrait Of The Person-Guy. “The Person-Guy is the cause of every evil and frustration in your life. The Person-Guy only wears odd socks, because he thinks that wasting our limited lifespan sorting them into matching pairs is indicative of a potentially authoritarian neurosis. The Person-Guy has a minor vocal tic, and it sends you into strange daylight fantasies”
Was World War I really a uniquely depressing and terrible war? Or were World War I soldiers just big wimps?
Designer makes a flag for Earth. Finally, something to burn when you just want to protest everything.
Franz Ferdinand is mostly famous for getting assassinated, but if that hadn’t happened maybe we’d know him for his plan to modernize the Austro-Hungarian Empire into a United States of Austria. Meanwhile, the modern heir to the Habsburg name used to host a game show.
Do extra police reduce crime? Many studies have converged upon the finding that they do. Here’s a recent one where a private university’s increased police patrols cut crime in nearby areas by 40-70%.
Genetic engineering has now reached the point where we can give dogs super-strength. And by “we”, I mean Chinese people. We can hold conferences about how we should form a commission to determine a framework for investigating the potential implications.
Joe Biden has decided not to run for the Presidency, likely ending his career. That makes it a good time to go beyond the stereotype of the gaffe-prone crazy uncle and look back on the highlights of his forty years in politics.
Very large randomized controlled preschool study finds abysmal results for pre-K education; recipients of pre-K do better in kindergarten, but by third grade the trend reverses and they have worse behavioral and academic outcomes (hey, remember that study from last month about how entering school too early may exacerbate ADHD?). This echoes results from the last few large randomized well-conducted preschool studies, so even the educational establishment is vaguely starting to notice a trend. James Heckman leads the pro-pre-K response, saying that past studies have found the same lack of advantage in later school performance but then later found better adult employment and prosociality outcomes; his explanation is that preschool doesn’t necessarily increase IQ but does increase “social and emotional skills that are greater determinants of late-life success”. But then how come the pre-K children had worse behavior in school in the new study? Possible explanation: the old studies Heckman cites were preschool plus a bunch of early life care and parenting help; maybe the latter helped and the former was at best neutral? Vox has a really excellent summary.
Facebook: Classical art memes.
David Burdeny (click on the link, then on the “Russia: A Bright Future” tab on the bottom of the page) photographs Moscow’s astonishingly opulent subway stations. Also from Russia: Putin introduces an inelegant hack bill to prevent religious scriptures from being banned as extremist hate speech. When you’ve got to officially declare “right, but this doesn’t apply to things we like”, it’s probably a good sign you’re insufficiently meta.
American? Vipul Naik explains how to test your filter bubble, e.g. how much your friends differ from the general population. Go to his list of candidates’ Facebook pages and see how many of your friends support each.
Speaking of filter bubbles, if you’re surrounded by terrible people, consider that you might inadvertently be selecting for them.
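For what it’s worth, the “how much your friends differ from the general population” comparison can be made quantitative. Here’s a minimal sketch using total variation distance; all the support numbers are invented purely for illustration.

```python
# Compare your friends' candidate-support distribution to a national one.
# All numbers below are made up for illustration.
national = {"Clinton": 0.45, "Sanders": 0.30, "Trump": 0.25}
friends  = {"Clinton": 0.20, "Sanders": 0.70, "Trump": 0.10}

# Total variation distance: 0 means identical distributions,
# 1 means completely disjoint support.
tvd = 0.5 * sum(abs(friends[c] - national[c]) for c in national)
print(f"filter-bubble score: {tvd:.2f}")  # 0.40 for these numbers
```

A score near zero would mean your friends look like a random sample of the country; anything large means you’re in a bubble.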
Before: obesity set point is just speculation. Now: obesity set point is an effect of melanocortin sensitivity in the hypothalamus.
Type 2 poliovirus has been declared eradicated (“declared” means it hasn’t been spotted in years and people are finally confident it’s gone). Type 3 is probably gone but not yet official; type 1 is still around but rapidly declining. The end is in sight.
Disrupting urban violence with culture, including that time all the gangs in the Bronx held a truce meeting, agreed that killing each other was unpleasant, and decided to invent hip-hop instead. This is maybe the most Hobbesian thing I’ve ever read, not in the “absolute monarchy” sense but in the “eventually people get tired of the state of nature and explicitly decide on civilization as an alternative” sense. Although better not to think too much about what it means that people have to pull a Hobbesian invent-civilization maneuver in the middle of the most densely populated areas of the United States, which some people would have considered already a civilization.
Adopting good curricula – not even inventing them, just using the ones that are already out there and well-tested – is probably the cheapest way to make education more “effective”. Why are so few people doing it?
Sarah C has been writing a lot about structural issues in modern cancer research. See for example Is Cancer Progress Stagnating?, Chemotherapy Then And Now, A Note On Protocols, and much more on her blog.
People imitating Lovecraft’s short stories are a dime a dozen, but at least one person is trying to imitate Lovecraft’s sonnets, and they’re not bad, though they go way too heavy on name-dropping various mythos references in a way Lovecraft himself never did.
Some effective altruists are doing their own analyses of the offset costs of various kinds of vegetarianism (see mine here) and get broadly similar results. Jeff Kaufman finds that giving an extra $2/year offsets any harms of eating dairy. Gregory Lewis thinks that being vegan is equivalent to a 46 cent donation to the Humane League.
More real maps of fictional landscapes: the Greater Ribbonfarm Cultural Region. Less Wrong gets a prairie, for some reason. I guess we should be grateful – all they got was a white box on our map.
We know about Stanislav Petrov, Vasili Arkhipov, and other Soviet close-calls with nuclear war. We know about them in part because the Soviet Union collapsed so we got lots of records. There are probably a lot of American close-calls with nuclear war we still haven’t heard about, but it’s starting to look like one of them was during the 60s in Okinawa.
An argument against affirmative action: suppose a student with mid-tier grades ends up in a high-tier school. That student may be surrounded by students with high-tier grades who are faster learners; ie she may be the slowest person in her class. First, this can be frustrating and dispiriting. Second, it means she’s in trouble on anything graded on a curve. Third, the professor might aim their teaching at the median student and end up going too fast or over the heads of students below the median. All of these mean a mid-tier student might do worse at a high-tier school than she would at a mid-tier school which she could get into on her grades alone. If this negative effect is larger than the positive effect from going to a more prestigious school, the overall effect of affirmative action could be negative. A long essay by Gail Heriot argues this is what’s happening and gives various statistics in support. Especially worrying is the possibility that affirmative action drives minorities out of STEM classes – which are very hard and might exacerbate the dynamic above – and into easy-A classes where they don’t have to worry so much; could this be why black students in all-black universities succeed at STEM at a much higher rate than black students at mostly-white universities where affirmative action is commonplace? Maybe – but I kind of want to see this argument made by someone other than the Heritage Foundation before I sign on.
Did you know: our modern word “dunce” comes from Renaissance critics who mocked the followers of the theologian Duns Scotus as incapable of learning.
New study suggests that good ventilation in buildings increases cognitive test scores by as much as 100% – a figure so high that it seems impossible no one noticed this before. Possible mechanisms include decreasing CO2 concentrations – which is being played up for the global warming angle. If this replicates, everyone should stop everything they’re doing and improve their ventilation however possible – “if” being the key word.
For only $76, you can have a cube made of 62 different elements. That’s only $1.23 per element!
No, but I knew from reading Cryptonomicon that Alan Turing could take long bike rides.
I knew that he had rowed for his college at Cambridge reasonably successfully, but again, didn’t know anything about running.
Yes, from a campaign my local MP ran a few years ago when trying (ultimately successfully) to get Turing pardoned.
Also Moscow, instead of Msocow.
I am assuming this is helpful.
You said in your post about overconfidence a while back that the probability of superhuman AI within the next 100 years was only about 25%. Is this how you always thought, or did you change your mind recently?
I think I’ve always figured somewhere around that range.
So the above-linked post on the difficulty of simulating the brain doesn’t change your mind? Direct brain simulation is not the only potential path to human-level AI, but it does seem to be the most concrete of the available possibilities.
If the 40 or so years of research into the C. elegans brain haven’t been sufficient to produce a simulation, then 100 years seems like way too little for scientists to produce a simulation of the billions-of-times more complex human brain.
Up until the iPhone, 40+ years of research into computers/radios/televisions/microchips etc. had failed to produce one.
And up until the Wrights, thousands of years of tinkering had failed to produce (or understand) powered flight. Etc.
Right, which is why I’m not claiming that it will take 100 years to simulate the C. elegans brain. We might solve that problem soon. But given that taking enough data to produce a model of a brain with 302 neurons and a few thousand synapses took 40+ years, one could imagine that taking enough data to simulate a brain with 100 billion neurons and 100 trillion synapses might take a little more than twice that long??
There was plenty of incremental progress towards iPhones. It was a revolution of usability, not functionality.
Flight might be a better model. But if so, then all of the working computers we have will be useless for assimilation into the AI overlord.
IOW, pick your poison. Is it incremental progress on existing platforms? Or a radical new platform for which there is nothing in existence that is similar?
Lots of programming problems scale in weird ways.
Once you’ve written the code to sort a small list, for example, it can be applied unchanged to quite large lists, until you run out of memory.
There might be a similar pattern here – the algorithms for extremely good simulation of 300 neurons might work just as well for simulating n neurons.
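To make the sorting example concrete: a merge sort written and tested with three elements in mind runs unchanged on a hundred thousand. A quick Python sketch:

```python
import random

def merge_sort(xs):
    # Written as if for tiny lists, but nothing in it assumes a size.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    i = j = 0
    # Merge the two sorted halves.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([3, 1, 2]))                     # [1, 2, 3]
big = [random.random() for _ in range(100_000)]  # same code, 100,000 elements
assert merge_sort(big) == sorted(big)
```

Whether neuron-simulation algorithms have this property is exactly the open question; sorting just shows that some problems do.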
We have had powered flight for a hundred years, but are still nowhere near producing a bird.
IMO, the obstacle to simulating brains is in measuring/characterizing brains, not in scaling out computational resources. Simulating a billion might be only a few Moore’s law iterations away from simulating a hundred, but acquiring enough data on what’s going on all over a three pound blob of tissue is quite a different proposition from a 300 cell network that can be all seen under one microscope, and AFAICT there has been no Moore’s-law like behavior in the technology of physiological measurement.
For my part, I always assigned much higher probability to both de novo approaches (non-simulationist AI) and machine-augmented human intelligence (e.g. brain-machine interfaces, and eventually brain-machine-brain communications).
Can you expand on why you think so? Until recently, I was actually lucky enough to be working in one of the best AI research groups in the world. I can tell you from experience that the amount of effort put towards modeling the brain directly is pretty tiny. This is even why many have started referring to neural networks as deep learning, as an attempt to combat the oft-heard misconception that neural networks are anything more than very loosely inspired by the brain’s structure.
If you think about it, it’s pretty intuitive that we wouldn’t model the brain directly. Computers (particularly networked ones) have enormously different strengths and weaknesses than human brains do and attempting to adhere to the same path that the brain took to intelligence would be driven by nothing more than sentiment.
So as not to overstate my credentials, it was at a well-known company and I was involved in a team within the lab that was intended to straddle the line between engineering and pure research. Here I’m mostly referring to the exposure that I got to the work and ideas of the teams that were doing the cutting-edge stuff.
This seems quite logical, but I would be more comfortable with it if I didn’t keep reading the word “sentiment” in Loki’s voice.
Sorry, I’m not really familiar enough with the script of the Avengers to get the reference. Is there a particular well known bit of dialogue that involves Loki using that word?
My fault. Most people don’t have kids who want to watch the same Marvel scenes over and over until the dialogue has been memorized; and regardless, normal people on this site try to speak “English” rather than “meme”.
I was recalling the scene discussed halfway through: http://ssfrostiron.tumblr.com/post/34173228483/sentiment-and-sentimentality-in-the-avengers
Thor tries to appeal to Loki, and “Loki takes the opportunity to stab Thor in the side with a small knife. As Thor falls to the ground, Loki mutters “Sentiment,” clearly criticizing Thor’s emotion-driven actions as not only useless, but also as a cause of his own injury.”
If “The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”” then it’s not reassuring when their designers sound, however faintly, like comic book supervillains. I’d like to hope that 22nd century computers feel *loads* of sentiment for their squishy biped pets.
Haha, I see, thanks for explaining the reference.
I was actually using “sentiment” to make a rather different point than the one it seems like you took from my comment. Correct me if I’m wrong, but it sounds like you thought I was saying roughly: “AI won’t be like NI (natural intelligence), because human brains are irrational and sentiment-driven, and AI will and should avoid that pitfall.”
What I was in fact saying was that, even if we define the goal of building an intelligence as being a 100% faithful reproduction of NI (including emotion etc), it’s “sentiment” to assume that its development must follow the same path. I admit it probably wasn’t the clearest way to express this. The implication was that many laymen have an intuition that human intelligence is so untouchably and spiritually unique that we can’t possibly arrive at the same destination without hewing closely to its path.
 FWIW, I do feel the same way in the emotional sense: the human brain truly is a marvel, and its wonder is only amplified by the fact that we don’t fully understand it. I just don’t think that that’s strong enough evidence that we therefore can never create something with substantially similar capabilities other than by aping the already-existing design explicitly.
>Direct brain simulation is not the only potential path to human-level AI, but it does seem to be the most concrete of the available possibilities.
This seems completely counterintuitive to me, could you elaborate why you think this? IMO intelligent design should beat evolution very easily.
Do we even have a good definition of intelligence that’s different from “do what smart humans do?”
“IMO intelligent design should beat evolution very easily.”
Why? Especially when we don’t understand what evolution has done.
If we understand exactly what evolution has done, then we could see ways to improve it fairly easily I would imagine, but that isn’t what you are postulating.
Why would we need to understand what evolution has done in the first place? We’re not talking about improving humans, we’re talking about human-level+ AI.
When people say “intelligence”, they tend to mean “That thing humans do”.
Why reinvent the wheel when you have an example which you can copy?
When we copied birds, we got airplanes, which are distinctly not birds, and share almost no features beyond having a pair of airfoils.
Human level AI may likely have a few features that are analogous to, or inspired by, or “copied” from how brains work, but I think there will be a long gulf between the first human-IQ-level AI and a human brain simulation. If Startup A pitched a project to simulate a human brain cell-for-cell and connection-for-connection, and Startup B pitched a project to try capturing the essential computational properties of intelligence in a simplified model, I would bet on B.
Airplanes make remarkably cruddy birds, though. They can do two things better than birds, go fast and carry a load. They can’t even do most of the other things birds can do, let alone do them well.
And intelligence, at least AGI type, is the ability to do lots of things well.
What do you mean by superhuman AI? Watson is superhuman AI (the ability to store and recall information used to be considered an integral part of human intelligence). Deep Blue is superhuman AI. What is this computer doing that’s intelligent that’s categorically unique from what computers are doing today? Why isn’t AI just slowly getting better at solving more and more problems?
Superhuman is usually interpreted as a lot better at a range of cognitive tasks.
My first thought upon seeing that Earth flag is “you’re gonna get sued by the IOC.”
It seems to me that either of these choices would have been incomparably better in any case.
These all seem like awful choices. Earth’s flag should display something that an alien civilization would find particularly notable in comparison with their own world or most other life-bearing terrestrial planets. Our distinguishing characteristic is that our satellite has almost exactly the same apparent diameter as our star.
Good point, although I wasn’t going to go even that far, I would just go with “a circle with blue bits and green bits”. Surely that’s what’s exciting to us?
Wait, now I’m suddenly unsure. Do the continents actually look green? Well, if not, so much the better, we won’t clash with any alien flags.
In fact, there are lots of possibilities: blue circle, green circle, circle with city lights, etc. And only some of those might apply to an alien planet.
On the lower half of the flag, blue & green land with a curving horizon and a couple of simplified human figures like those on the Pioneer spacecraft. Black sky with sun and moon in ‘diamond ring’ phase of a total eclipse on the upper half.
Meh. Our flag should be about our planet, not our solar system. The planet is the interesting bit where we live, after all. Plus, a solar eclipse is mainly remarkable for its alien appearance – it doesn’t look anything like the sun and moon normally do. It might be distinctive to aliens, but a human isn’t going to see “white ring on a black background” and say “Ah, what a wonderful depiction of the celestial orbs above our planet!”
Also, based on our current observations, “7 continents and tons of water” is at least as unique as “can have solar eclipses.” Planets with life are rare enough that I don’t think we have to worry about ripping someone off.
Also also, I’ve noticed that vexillologists tend to like simple designs that can be easily recognized at a distance and don’t take a lot of work to draw, hence the huge number of nations that have tricolors for their flag.
Earth’s flag isn’t for the benefit of alien civilizations, because none of them are ever going to see it. Earth’s flag is to symbolize, to the human people of Earth, the essential unity of the people of Earth.
[Brief pause while we wait for everyone to stop laughing]
Eventually, the other purpose of Earth’s flag will be to symbolize the divide between the human people of Earth and the human(ish) people of the other worlds of the Solar System. That’s a ways off, but maybe not so far off that we shouldn’t be thinking about it at least a little bit.
For these purposes, the UN flag seems fine for now. It represents the organizational structure of United Earth Humanity to the limited extent that Earth humans are actually united, and if something comes along to Unite the Earth even more effectively it will probably assimilate the UN en passant. The UN’s member states are pretty much all committed to not claiming sovereignty over anything extraterrestrial, so any future offworld polities that the Earth flag needs to point to and say “We’re not those guys!” will likely be outside the UN as well.
Also, until we terraform Mars, we need roundels more than we need flags for this purpose.
But, regardless of format, what’s the emblem that signifies, to humans, the essential concept of Humans of Earth and Not Any Other World of the Solar System?
I agree that Earth would need a flag that is distinctive to alien civilizations. But I think everyone is overthinking this. We should just scrawl a dick on it. It’s very unlikely that aliens will have a sex organ that closely resembles an erect cock, so I consider it an excellent symbol for humanity.
Prefacing the apocalyptic war between the Mushroom and the Anteater People.
And the proportion of humanity which does not possess sex organs which resemble cocks either erect or flaccid will decide to throw in our lot with the aliens and have the dick-flag designers all shipped off to a nice quarantine planet where they can spend all their days measuring their flag emblems with rulers and measuring tapes.
When the aliens ask us “But why do you want us to take over your world?”, we’ll just point at the dick-flag and I’m sure they’ll understand immediately 🙂
also, this “consumer telecommunications company”
[insert conspiracy theory]
So does the obesity set-point paper lead to any means for individuals to easily alter their set point?
Hell, I don’t even need easily, I’d settle for anything that’s in any way practical.
Me too! And I’d pay a lot of money for it, if I had to.
I’m afraid the answer may still be “diet and exercise.”
Three years ago I dropped 100 pounds with calorie tracking and circuit training.
In the time since then my diet and exercise have both regressed to their previous (non-existent) state, but my weight now hovers at its new 100 pounds lower point without effort.
My experience, with a much smaller change, is less positive. A few years ago I dropped about fifteen pounds. I have kept it off, but only by sticking to a “mostly one meal a day” pattern. Every time I significantly relax and let myself indulge, it starts up again.
I agree that, in my experience, constant vigilance is the only way to maintain a weight I’m at all happy with. Whenever I eat as much as I want for any period of time my weight gradually creeps up, though I know it wouldn’t do so indefinitely. I think something in me somewhere wants me to be about 180-190 pounds, whereas for aesthetic and health reasons I want to be about 150 pounds. In reality, I am usually somewhere around 160 pounds, though that can start to creep toward 170 alarmingly fast. Then I basically have to go on a diet for a little while.
I’ve heard many diet experts say you need to find a way to eat every day that will make you happy and healthy, but thus far, with many years of tinkering, that has been impossible for me: if I relax and eat what I feel like eating I will be fatter than I want to be, even if I exercise a lot (exercise, in fact, tends to increase my appetite, though it adds a bit of muscle, which is more aesthetically pleasing and functional as weight goes).
Thus, the only thing that works for me is repeated cycles of mild indulgence and restriction. Not extreme yo-yo dieting, but mild yo-yo dieting for sure. I wish there were a way to just put my set point lower, but for the time being it ain’t happening.
That said, I read an interesting study recently which claimed that weight lost rapidly was more likely to be kept off than weight lost gradually–the opposite of conventional wisdom, but sort of plausible to me. Maybe a drastic change is more likely to prompt a set point adjustment than a very gradual one.
So you basically drink a tablespoon of flavorless oil once or twice a day between meals? And that worked well for you?
It’s an interesting theory he has about the increased prevalence of foods which taste the same each time you eat them: maybe you don’t feel satisfied as quickly eating a very familiar taste as compared to a different taste (though this might be contradicted by the much simpler and less varied diets of people in times past and my sense that part of why we are fat now is that we have so many interesting and delicious options, where, if all you have is potatoes, you get tired of eating sooner).
But why would eating flavorless food in between meals reduce the appetite (beyond what eating any food would)? Or does he just say, “just do it. It works”?
Anecdata: I asked my parents, who grew up in the fifties, when people were thin, what seemed to be different then as opposed to now, and they identified a couple major factors:
1. Food was significantly more expensive, and, as a result, usually came served to you in smaller portions. It wasn’t that it was so expensive people were going hungry, but that it was expensive enough to give you more pause. Not much fast food. People ate out less in general.
2. People were outside walking and doing stuff a lot more, even if they weren’t actually going to the gym and working out with personal trainers as much.
Sometimes the simplest answers are probably the best.
@The Original CC:
But why would eating flavorless food in between meals reduce the appetite (beyond what eating any food would)? Or does he just say, “just do it. It works”?
I read Roberts’s book. He has a just-so story that he admits is just a guess. In times of plenty, it makes sense to want to eat a lot and pile on the fat; in times of scarcity it makes sense to want to eat less because there is less to eat and you can live off the fat you have piled on. The mechanism that evolution cobbled together is the “set-point”, the weight your body tries to consume just enough calories to maintain. The set-point is affected by what you eat: when you eat something with a big, fast calorie load, it thinks times are good and raises your set-point; when you eat something with a small, slow calorie load, it doesn’t raise the set-point as much; and when you aren’t eating, it slowly lowers the set-point. If you are at your set-point and eat normally, the raising and lowering tend to balance out.
If you just eat less, your set-point will drop, but you’ll be hungry all the time. What Roberts claims is that the feedback mechanism depends on conditioning based on the association of a taste with the calories. This means that calories associated with no taste will satisfy the set-point, so you’ll eat less of everything else, but will not trigger the conditioning — a two-hour period with a shot of flavorless oil in the middle of it will be just like a two-hour period with no food at all. So at the end of the day you aren’t hungry because you’ve had a normal calorie intake, but your set-point will be slightly lower, so the next day you’ll want slightly less to eat. And so on, until you reach a weight consistent with the set-point imposed by the tasty food you do eat.
Because it’s a learning/conditioning mechanism, it takes a week or two to get started.
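The feedback loop described above can be sketched as a toy simulation. All the numbers and the function below are invented for illustration; Roberts gives no quantitative model:

```python
# Toy sketch of the set-point story: the set-point drifts down a
# little each day (the fasting periods) and is pushed back up in
# proportion to how strongly each meal's flavor is associated with
# its calories. Every parameter here is made up.
def simulate_set_point(days, flavor_signal, meals_per_day=3,
                       set_point=80.0, decay=0.6, gain=0.2):
    for _ in range(days):
        set_point += gain * flavor_signal * meals_per_day - decay
    return set_point

simulate_set_point(30, flavor_signal=1.0)  # tasty meals: stays at 80.0
simulate_set_point(30, flavor_signal=0.7)  # flavorless calories mixed in: drifts to ~74.6
```

The point of the sketch is only that a weaker flavor-calorie signal per meal lets the daily downward drift win, so weight ratchets down without any felt hunger.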
Roberts reported simply amazing results from his own self-experimentation, and I’ve read many reports from people who swear by it. I found the just-so story reasonably plausible in broad outline, and the whole thing seems to explain some things widely observed. Eating food with low glycemic index is good, because the taste is more separated from the calories, so you don’t get as profound a learning experience. Fast food is bad, because it is engineered to taste exactly the same every time, which makes the learning happen more readily. People who get fed through a tube seem to lose their appetite, because no matter how many calories they get, there is no taste associated with them.
I could stand to lose a few pounds, and it seemed to me I would do myself no harm by trying it. Alas, I spent seven weeks on the regimen that is supposed to let you drop ten or twenty pounds, with no effect on my weight. (I should note that I didn’t do anything else — no additional exercise and no attempt to watch what I ate except as needed to isolate the doses of flavorless oil; the diet was supposed to take care of that.)
The extra 317 calories a day did not add to my weight, which suggests that it might indeed have been reducing how much normal food I ate, so it might be a useful trick to make a traditional diet tolerable.
But it didn’t seem to do anything to my set-point. I was disappointed.
The theory says part of the mechanism that generates hunger is the regular strengthening of a mental flavor/calorie association. The first time you taste a new flavor you might not want much of it. A little while later, if you experience a nice kick in the blood sugar (and don’t get sick), your brain says “Oh! We like that flavor – that flavor represents FOOD!” Next time you’re exposed to that exact flavor (and again get calories from it), this reinforces the connection, eventually reaching the point where hunger prompts you to seek out those specific calorie-dense foods and exposure to those foods prompts you to eat more than you should.
This connection is constantly being reinforced every time you eat.
Shangri-La theory says that calories which have essentially NO flavor alleviate hunger without also stimulating this addictive feedback loop. If you can get a hefty chunk of your daily calories in a “flavorless” form, that kind of breaks the cycle – the addictive connection between flavor and food is weakened a bit through disuse.
Some kinds of oil (extra-light olive oil) are calorie dense but (in the absence of other flavorings) too mild to count as a flavor. A weak sugar water solution (sugar, water, no other flavors) also works – possibly “sweet” alone doesn’t count as a flavor, or perhaps it just doesn’t count when sufficiently dilute.
Another option consistent with the theory is to just eat normal food but wear a noseclip to reduce how much you TASTE what you are eating.
Yet another option consistent with the theory is called “crazy spicing” – add weird spices to your foods so they don’t seem familiar.
(Shangri-La didn’t really work for me, but does seem to work for many people.)
On the one hand, the Shangri-La diet seems to directly contradict one piece of common knowledge, which is that caloric beverages are diet poison, because they “feel” like fewer calories than they are (few people drink as much sugar water as Americans do, in the form of Big Gulps and the like, and we are obese; I always assumed this was because 100 calories of liquid Coca-Cola is not as satiating as 100 calories of potato or, indeed, butter). But these soft drinks are not usually flavorless, of course.
On the pro side: similar, perhaps, to the author’s experience in Europe, I always lose weight when living in Japan, where I usually eat a large amount of relatively flavorless rice (though not every time I have lived in a foreign country, so I don’t think it’s just the difference in flavors). Certainly eating bulky, relatively low-calorie foods like rice and oatmeal seems to have a favorable effect on appetite versus caloric intake, and I’ve seen studies claiming that people asked just to eat oatmeal once a day (or even just to eat a handful of nuts a day), and otherwise eat however they feel like, lose weight.
It also makes sense that foods with intense flavors are addictive, and that, like any addiction, you may need more to get the same “hit” after a while. And while I agree with Scott about the impossibility of “conditioning” oneself (to associate happy feelings with not eating, for example), there also seems to be an undeniable “learning” component to the omnivore’s eating strategy: we probably eat new foods with more care and attention due to the greater chance of a bad reaction. Once eating foods we have learned to like, we can go at it with greater abandon.
Also, his idea that the set-point is lowered by not eating is interesting, because it seems to support the study showing that rapid weight loss may actually be easier to maintain than gradual weight loss. That is the opposite of the conventional wisdom, which assumes that not eating lowers the metabolism and so increases your potential for future weight gain, while eating raises the metabolism and does the opposite.
If the reality is that the body wants to pack on pounds when the calorie-dense food is available (which makes sense), and wants to shed pounds and run on fewer calories when they are not, then this idea of eating flavorless calories may actually make sense.
Also points to something many have probably noticed: a kind of disconnect between “stomach full” and “brain full.” If I have been fasting or exercising a lot, for example, and then I eat a meal, I can very easily feel “stomach full, brain hungry” – that is, my stomach feels very full, but something in my brain is still shouting “keep eating! we’re still in a serious deficit from all that exercise and/or period spent not eating!”
By satiating the stomach without stimulating the brain’s appetite, maybe one can gradually ratchet down “brain hunger”?
Shangri-La didn’t really work for me, but does seem to work for many people.
The anecdotal evidence is way more enthusiastic than I’ve seen for other diet fads. But I have not seen anything like a proper study.
caloric beverages are diet poison, because they “feel” like fewer calories than they are
A Shangri-La theorist would probably explain this analogously to fast food: Your favorite soft drink is engineered to taste exactly the same from can to can, which enhances the learning potential — especially since you probably have the experience a few times a day, more frequently than you eat a Big Mac. Roberts started tinkering with this idea when he went overseas and drank a lot of soda with unfamiliar flavorings, and lost weight.
Have you actually used Shangri-La to good effect? Or were you just passing on something relevant that you’d heard of?
Yeah, I tried it and I’m still on it. I lost about 15 lbs. I could probably stand to lose another 5-10, but those last few pounds are tough. On the bright side, it takes me almost no effort to maintain this new weight, and it’s been about 18 months since I started. Few dieters can claim that.
For people who found it didn’t work, I think the effect might be more subtle for them. Since it seems to give some people “instant willpower”, maybe the people for whom it didn’t work so well should check whether their willpower is stronger, even if it’s not dramatically better. IOW, try eating less and see if you care.
I think it’s tough for normal-weight people to understand how food- and weight-obsessed overweight people often are. They can lose a few pounds if they’re *constantly* thinking about restricting calories but they can’t wait to stop dieting and pig out. On the SLD, at first I “automatically” ate less and didn’t even have to obsess over it. After I lost maybe 10 lbs., I had to make a small effort to not pig out all the time.
I think a normal weight person wouldn’t see this as a big deal, but for me this was miraculous.
As for it being a just-so story and needing further research, I basically agree. But two points about that:
1. Seth Roberts did point out that, consistent with his theory, people who go on bland diets “automatically” eat less and lose weight. I believe this has been clinically studied, and it really suggests that if you could just reduce the calorie-weighted average flavor level of your diet (say, by getting a bunch of calories from flavorless oil), you would see a similar effect.
2. I believe that Seth Roberts did try to get funding to study this and he was shot down. People either thought it sounded dumb or didn’t actually understand the diet (“you just eat disgusting food all the time?” or “you eat nothing but oil!?”).
@The Original CC:
Thanks for the report. It accords with my guess that maybe it works more profoundly for people who are more seriously overweight than I am; some of the success stories I have read sound like the diet relieved them of some really unhappy symptoms, but they are not symptoms I’ve ever particularly experienced.
I didn’t mean to dump on the diet for lack of controlled trials, just that I wish there had been some, because they might give me better guidance about how to benefit. There are enough enthusiastic anecdotes that I take it for granted that there is something there. And, as I said, the just-so story does seem to tie together a lot of observations that would not otherwise have struck me as particularly related.
The first time I tried SLD it had no effect – I actually gained weight while doing it. I talked to Seth and he convinced me to try a larger dose. When I did, at first there was some appetite suppression, but the effect size diminished over time. I could only lose about 10 pounds (I wanted to lose more than this at the time) and could only keep it off at the cost of doing this weird thing for the rest of my life. When I stopped, I quickly gained back all the weight I had lost and then some.
Reasons I was disappointed in SLD:
(1) the explicit claim made was “do this One Weird Trick and you can eat as much as you want!” – which for me was clearly false. I think if some of your eating is out of habit or out of boredom, reducing hunger helps but is not actually sufficient – you ALSO have to make a conscious effort to eat less…whereupon it’s not all that different from other diets. (If I have the willpower to eat less on SLD, I probably have the willpower to eat less without it!)
(2) The effect size wasn’t that large and wasn’t enough to, say, reach one’s ideal weight. If you read the boards there were many many people who lost 10-20 pounds and then got stuck, baffled at what ELSE to do to lose the rest of the weight they wanted to lose. (a good answer to that question might render SLD moot.)
(3) The sugar-water version seems like a health risk; the oil version is (still) nauseating. It requires the expenditure of willpower to keep doing SLD; eventually that flagged.
I had a few theories as to why it didn’t work for me. One is that I have acid reflux issues, so perhaps that counts as “a taste”. Another is that I’m seriously addicted to Diet Coke which I drink both with and without meals – perhaps having one constant background taste changes the relevant dynamic.
I dug through the paper looking for anything obvious, and didn’t find anything.
Both of the neurotransmitters involved (melanocortin and neuropeptide Y) are peptide neurotransmitters. That means you can’t just eat a bunch of the neurotransmitter or a precursor for it. It looks like there is a down-regulation of the melanocortin-4 receptor (MC4R) that you want to avoid, but possibly only in a particular set of neurons, and it looks like agonists for that receptor would have interesting side effects (they’re erectile dysfunction drug candidates).
However, the trigger condition for increasing the set point in the study was feeding the rats a high-energy diet. Avoiding whatever part of the diet is triggering the set point increase should at least prevent your weight from getting worse. The HED was higher in fat and lower in carbs, but that’s the opposite of what I would expect to cause weight gain in humans, so I’m not really sure what to recommend based on that.
I remember coming across something that seemed appropriate to link here, sometime between the last Open Thread and now. Since I have no idea what it was, but feel oddly compelled to link something anyway, have this blogger’s response to a woman who chose to go blind (the prevailing narrative is that she has (had?) BIID).
If designer babies come into existence in my reproductive window, I will probably have to go to China to get it done.
America, if you are indeed an American, needs lots of people like you if we are going to maintain our military and economic dominance.
If you ask random people (as I have) most of them generally agree it’s okay for a friend of mine to screen their fertilized eggs to make sure she doesn’t pass on her father’s ALS. I can’t recall getting any pushback on that.
I often tell them, you want your kid to be happy and healthy and kind and smart. So does my friend. And it turns out we might be able to make a kid much healthier, smarter, happier and kinder (in ascending order of current difficulty) by fiddling just a little bit more with an already extremely unnatural process. You don’t want to get rid of the weird Frankenstein process when it helps my friend not pass on the terrible burden of ALS, so the process isn’t the problem. What is?
American culture is… scared. I don’t know any other way to say it. Your phrasing may be the most effective because it can make Americans more scared. [While I am a big proponent of voluntary genetic modification, I tried to phrase this in a way that makes it clear what my concern is with this persuasion method.]
For an improved version, consider Heinlein’s idea (in _Beyond This Horizon_) for what I think of as libertarian genetics. You separately select egg and sperm before combining them. That gives you much more control than selecting at the fertilized egg stage–and you are still producing your own children, just more nearly the best children you two could produce.
The obvious puzzle is how to determine what genes are in an egg or sperm without damaging it. The solution is that each is produced by a process that throws off a body with the other half of the genes (I oversimplify a little). Do that process in vitro, destructively analyze the extra body, destructively analyze a full cell, subtract the former from the latter, and you know what genes are in egg or sperm without ever having looked at it.
Not doable yet, but probably within the lifetime of most here.
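The subtract-the-polar-body trick can be illustrated with a toy sketch (the loci and alleles below are made up for clarity; real genotyping is vastly messier and involves recombination the comment deliberately glosses over):

```python
# A diploid cell carries two alleles per locus; meiosis sends one
# into the gamete and (roughly) the other into a discarded body
# that can be destructively sequenced.
diploid = {'locus1': {'A', 'a'}, 'locus2': {'B', 'b'}}
polar_body = {'locus1': {'a'}, 'locus2': {'B'}}  # destructively analyzed

# Subtract the former from the latter to infer the gamete's genes
# without ever touching the gamete itself.
gamete = {locus: alleles - polar_body[locus]
          for locus, alleles in diploid.items()}
# gamete == {'locus1': {'A'}, 'locus2': {'b'}}
```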
The individual’s expectations (and maybe even the experts’?) of genetic engineering are vastly overstated. Just figuring out the “best” combination of egg and sperm at the chromosomal level is extraordinarily difficult. There are a handful of traits with simple X=Y or P+Q=Y genotype-to-phenotype translations (like hair color or Tay-Sachs), but we can already increase/decrease our chances of getting children with/without these traits by selecting mates.
For more complex traits (cancer resistance, intelligence) it becomes mathematically difficult just to maximize one trait through sperm/egg selection, let alone the ideal of many traits together (without even discussing epigenetics, which is not well understood yet).
In the near term (next 50-100 years) genetic engineering will be like life expectancy. Big gains in a few discrete areas will pull averages way up, but median experiences will only move a little. If you visit a geneticist in 20 years and ask how you can make your children smarter, it is likely the best answer will be “don’t smoke and drink less during pregnancy; then we will make sure that these 10-12 basic problems haven’t happened, and then for a few hundred dollars more we can screen you and your husband and find a basic combination of genes that you have which will on average boost IQ by 1-2 points.”
In that book, one character explains to another why he has no children. He and his wife were controls — no genetic engineering, and a stipend for helping preserve the genetic diversity — but they decided they wanted it for their children. They went in. Basically it came back that they could only get some trivial tweaks, which put them off children entirely.
> for a few hundred dollars more we can screen you and your husband and find a basic combination of genes that you have which will on average boost IQ by 1-2 points.
But isn’t SD[IQ] = 15 (naturally)? If you can screen 1M+ sperm, couldn’t you get really far out onto the tail? Like 5 SDs.
I’d be sort of surprised if the intelligence distribution of one’s offspring was still Gaussian out to that many SDs.
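For a rough sanity check on the “5 SDs” figure: the expected best of n independent draws from a standard normal is roughly its 1 − 1/(n+1) quantile. A minimal sketch (this ignores that a sperm carries only half the genome and that genes are far from the whole story, so any realized IQ gain would be much smaller than the selected genetic score):

```python
import statistics

# Approximate the expected maximum of n standard-normal draws by
# the quantile at probability 1 - 1/(n+1). This slightly
# underestimates the true expected maximum but gets the ballpark.
def approx_best_of(n):
    return statistics.NormalDist().inv_cdf(1 - 1 / (n + 1))

approx_best_of(1_000_000)  # roughly 4.75, so "like 5 SDs" is the right order
```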
You get nothing in life for free and no, it’s not likely you could do a small tweak or two and get a happier, healthier, smarter and kinder kid. The IVF evidence, such as it is, is pretty obviously not showing that even broadly. People think the ALS screen is more powerful than it is, that’s all.
Nothing is free, but quite a lot is paid for by someone else, with the benefits passed on to you. How much did the people of Broad Street pay John Snow for his investigation of cholera? How much have you paid, given the improvements that knowledge of water-borne diseases has given you?
Now, I don’t disagree that IVF and even direct genetic tweaking is not likely to be a quick and easy magic bullet. But on the other hand, every major technology we use barely worked at first, and had significant side effects when we started investigating it on a wide scale. And if you have one side which is willing to suffer through the transitional period of terrible exploding early-cannons while the other is unwilling to sacrifice their lives tinkering with gunpowder mixes, what happens if the two sides fight six generations later?
That’s survivorship bias. If it never did work well and the side effects never did go away, it wouldn’t still be a major technology. Looking at existing major technologies gives you a false impression of how well proposed major technologies are likely to work.
Wait, what? My argument is not that the current state of genetic engineering tells us its potential benefits as a mature technology. I’m saying that mature technologies start out as immature, and you can’t judge a technology as useless just because it looks immature in your lifetime. I’m not saying that all technologies that look immature will grow into beautiful butterflies, just that all butterflies start as ugly caterpillars.
We now know the deleterious effects of breeding dogs for certain traits deemed desirable in pedigree breeds. We have evidence of how breeds have been changed from their origins in old drawings and photographs of animals from two centuries ago compared with modern-day breeds.
If China fucks up with genetically engineering its super-strength dogs, we have some miserable and/or prone to die early and in pain dogs. If they fuck up with genetically engineering your baby to be “healthier, smarter, happier, kinder”, then you have a possibly mentally and/or physically damaged child who may die early or develop disabling physical or mental conditions (e.g. “Congratulations, your child will have 10 extra IQ points above what they would naturally! But also epilepsy of the kind uncontrollable by medication and necessitating surgical removal of the brain tissue responsible.”)
Sure, probably eventually they’ll get it right, but do you want to volunteer bringing a human being into existence as one of the early generations of test subjects where they cross off their list “Okay, don’t do that particular tweak again”?
And how do you measure “kinder”, anyway? Your kid literally gives the shoes off their feet to other kids? You’re going to run into, at best, scrupulosity problems. If by “kinder” you mean “can recite flawlessly the shibboleths of contemporary progressivism, such that they will lecture you for three hours straight about using the correct terms and how it means you are a bad awful terrible racist sexist homophobe if you add/leave out an asterisk in Preferred Term of the Week”, then possibly they could learn that anyway from you and their peers without the necessity of being genetically engineered?
Dang it, we had a “Star Trek: Deep Space Nine” episode about this very thing! 🙂
I think pro-sociality, agreeableness (in the Big Five sense), scrupulosity, low transgressiveness would be the kind of things you’d look for, which raises some concerns but I think we could deal with a lot more cooperation before we are all too low on people willing to defect/transgress.
While your latter kind of “kindness” is behaviour by people who are affiliated with the “be pro-social and kind to others” position, I don’t think they actually are individually agreeable, pro-social, scrupulous, or kind, and I suspect efforts to improve those things would produce fewer of them rather than more.
That said I think practically selecting for any of these things is a really weak target; you’d be much better off going for high IQ, or just trying to improve the bottom tail of IQs where both suffering due to an inability to compete and criminal behaviour are concentrated.
I also prefer CooperateBot to DefectBot. But I greatly prefer TitForTat, TitForTwoTats, etc, since a population of those helps protect me from other DefectBots too. If we’re going to create CooperateBot out of kids who would otherwise have been TitForTat then there’s a clear mechanism by which that could backfire horribly.
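For anyone unfamiliar with the bot names, a minimal iterated-prisoner’s-dilemma sketch (standard payoffs; the code is illustrative, not from any source in this thread) shows why a TitForTat population is safer to be in than a CooperateBot one:

```python
# Standard prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def cooperate_bot(opp_history): return 'C'          # always cooperates
def defect_bot(opp_history): return 'D'             # always defects
def tit_for_tat(opp_history):                       # copies opponent's last move
    return opp_history[-1] if opp_history else 'C'

def play(a, b, rounds=10):
    """Run an iterated game; each strategy sees the opponent's past moves."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        ma, mb = a(moves_b), b(moves_a)
        pa, pb = PAYOFF[(ma, mb)]
        score_a += pa; score_b += pb
        moves_a.append(ma); moves_b.append(mb)
    return score_a, score_b

play(cooperate_bot, defect_bot)  # (0, 50): CooperateBot is fully exploited
play(tit_for_tat, defect_bot)    # (9, 14): TitForTat only loses the first round
```

This is the mechanism the comment points at: converting would-be TitForTats into CooperateBots makes the whole population a richer target for any remaining DefectBots.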
If one social class engineers their kids to be “kinder”, and another selects for “aggressive, dominating”, that might be an interesting experiment in large-scale social stratification. Obviously we do it now with socializing. But making it somehow “genetic” is going to a whole different level. Alpha, Beta, Delta, Gamma anyone? (from “Brave New World”)
Your comments would be relevant to genetic engineering but not to the approach I described, which is simply selecting among the children the couple could have which one they do have. I can’t see any reason for that to produce the sort of negative results you describe, and as long as we have any information at all about the relation of specific genes to either positive or negative characteristics it should produce at least some improvement.
This also shows up in Neal Stephenson’s “Seveneves”; somewhat of a spoiler but the book has been out for six months and it’s kind of there in the title. After an astronomical catastrophe, and some catastrophically foolish decisions during the response to same, seven fertile women have to repopulate the human race. They do have a genetic engineering lab and they know how to use it; they don’t know better than to use it to “improve” on human nature and none of them can agree on how to do that.
The seven resulting races of humanity, and that is I think the right term, include a very large race of kind, pacifist cooperators (Camites*) and a much smaller race of paranoid political schemers (Julians**). You can guess where those two fall on the social strata.
For completeness, the rest (not descended from fictional analogs of real people) are:
Dianans: Omnicompetent Heroes, stalwart and true. Kimball Kinnison and James T. Kirk would be right at home with these guys. Well, girls at first, but they do eventually reintroduce the Y chromosome.
Ivyans: Supergenius nerds and eggheads. If the Dianans are Kirks, these are the Spocks. Science and technology are the path to a brighter future, and they are its masters.
Teklans: Klingons if you want to keep the Star Trek theme, otherwise Spartans. Literally, Russians. Will carry out their assigned mission or die trying, and they won’t die easily.
Moirans: Epigenetic adapters with lots of redundant DNA that doesn’t express until adulthood and depends on what stresses they undergo on the way. Human Swiss Army Knives, and rare because it’s a tricky bit of genetic engineering with a ~5% survival rate in the first generation.
Aidans: Evil-twin counterparts of all the above, because Aida was a seriously disturbed individual, and numerous because she was the youngest and healthiest of the “Eves”. One of their subraces is explicitly called “Betas” and seems to correspond to the Camites.
Yes, there’s a literal race war going on 5000 years later. Well, race cold war, with the factions being “Red” and “Blue”.
* Descended from a thinly-disguised Malala Yousafzai, who is really really sick of violence by this point
** From a US president who probably isn’t a disguised Hillary Clinton
Dianans: Omnicompetent Heroes, stalwart and true.
If you describe Teklans as Russians, then Dinans are, simply, Americans.
Seveneves has a weird structure — not the usual Stephenson weirdness, either. The first two-thirds of it is essentially backstory to the “race war” part, which really does feel like the main plot, but gets terribly shortchanged. It should have been two novels, and ideally the chronological second part should have been published first, IMO.
It’s also possible he wasn’t quite sure what to do with the far-future setting, once he had it. Its narrative revolves around the (ludicrously implausible in context) rediscovery of some ancestral human types, which is terribly significant to the space dwellers for underspecified reasons.
If you describe Teklans as Russians, then Dinans are, simply, Americans.
Except that Tekla literally was Russian and figuratively represented the core Russian-as-viewed-from-the-Anglosphere stereotype, whereas Dina, Ivy, Moira, and Julia were I think all Americans and encompass different American cultural stereotypes.
If you have to pick one cohesive stereotype for “This is how Americans like to imagine the Ideal American”, the Dinans are the best fit, yes.
The distinct races didn’t seem plausible to me. Are there any examples in humans or non-humans of subspecies in close contact for substantial periods of time (hundreds of generations) without extensive interbreeding?
It looks to me like you get subspecies with geographical isolation, and if the geographical isolation ends you lose distinct subspecies pretty quickly.
The book even mentioned that sexual attraction was common between two of the blue races, yet we are supposed to believe that hybrids are rare to the point of virtual non-existence?
Also the pingers. And I wasn’t thrilled about the giant space architecture stuff (not so much that I didn’t buy it as I didn’t care). I guess for me the book would have been better if it ended at the council of the seven eves.
Are there any examples in humans or non-humans of subspecies in close contact for substantial periods of time (hundreds of generations) without extensive interbreeding?
Dogs, wolves, and coyotes are interfertile, have been in close contact for millennia in the American west, and remain sufficiently distinct that zoologists considered them to be different species until fairly recently. And the canids weren’t literally genetically engineered to carry forth their creators’ vision of a better world through racial superiority.
I share your gut instinct on this one, but it’s not entirely clear-cut. My suspicion of disbelief didn’t entirely break until we met the pingers, but that gets us solidly into spoiler territory.
King, did you know that Stephenson considered publishing Reamde in 2 volumes? Does that affect your evaluation of the option to divide Seveneves?
1) Engineering a child to not have a specific disease has pretty much no adverse consequences. This doesn’t extend to engineering a child to have other types of changes; if you want to convince me that making a child smarter is absolutely risk-free, it would take some convincing.
2) You cheated by not listing things that many parents might want to engineer in their kids, but which are in part positional, such as beauty or height. It is more obvious why engineering those is a bad idea, for the same reason that letting athletes take steroids is a bad idea–since everyone has to take them to compete with the others who take them, you get a race to the bottom and everyone does the same as before except with lots of risks added.
3) Regardless, I think the parents you describe should be kept far away from genetic engineering, for the sake of our species. You put “kind” in there. I absolutely do not want people engineering their children to be more kind. Imagine that 200 years in the future the result is that humans are genetically programmed to accept dictatorships rather than rebelling, or to impose their will on others in the name of kindness.
Wealthy Americans going to China for cutting-edge medical treatments that aren’t available at any price in the USA is pretty high on my list of “ways I will know if the US is completely broken”.
If they’re designer babies anyway, one might as well (eventually) use donated sperm and/or eggs too. Sperm and egg banks will eventually begin advertising specially selected donors and “designed” engineered qualities. Then everyone will be able to adopt such a baby, contracting with a surrogate mother (also selected for good genes and health), regardless of their own reproductive windows.
In the longer term, some enhancement techniques may be applied to the pregnant mother instead of the fetus, which would be another reason to use professional, pre-selected surrogates. When designer babies are socially accepted, it would also make sense for older or less healthy women, while still fertile, to want a healthier child borne by a young surrogate mother and avoid being pregnant themselves.
30 years from now…
“Mom, I’d like to meet my real mother someday.”
“I am your real mother! Except for the DNA, which is better than mine was by 7 potential IQ points and 15 lbs set-point. And the womb, my hormone levels were 12% too high. And you were mostly raised by your nanny until pre-pre-k. But I am the one who is legally responsible if you do anything wrong, so you’d better at least get me a card on Parent-2 day!”
Parent-2 Day made the joke for me.
It is better or worse if parent-1 is a prokaryote? How much gene editing is allowed before the “passing on my genes is what gives my life meaning” crowd decides that those “parents” are beta cuckold orbiters?
Note that your comment does not apply to Heinlein’s version. I suspect that people have a strong bias in favor of producing their own biological children if they can.
The ones with that bias should tend to grow as a proportion of the population anyway.
The best evolutionary place is probably sperm donor though.
I imagine that in the theoretical end game of designer babies, there’s quite a bit of genetic code to muck around with while still keeping a large chunk of my own in there. I don’t feel a particular attachment to the genes that give me depression or make my feet pronate, but I would want my children to share a non-trivial portion of my DNA so that they’ll be similar to me.
*Strong* dogs – who cares?
How about “lifetime” dogs, with a 70-year life span though?
If we don’t self-modify, how weird will it be to be outlived by your pet?
>*Strong* dogs – who cares?
Dog sled racers. K9 units. Fighting dog owners.
All a K-9 dog needs to do is be able to read its handler well enough to bark when he wants to search a car. Don’t need strength for that.
The dogs are usually used for attack as well.
“Read its handler well enough to bark when he wants to search a car”? Seriously? Seriously?
Is this the “wow, just wow” I’ve heard so much about?
Link. Drug dogs are probable cause on a leash, and can be trained to give it any time you ask.
It’s actually a problem, and not usually intentional. Don’t forget, when you see a dog doing a scent search, you’re watching a dog do a trick for a reward. It wants to get the reward as easily as possible, and it’ll cheat if it can.
I was working in an organization training dogs, and you had to take a *lot* of care to prevent issues. The biggest one was that a handler couldn’t set his own training problems, because the dog would learn to keep its nose to the ground, but pay attention to the handler for cues rather than do the hard work of trying to sniff for things. Even this could have problems if not implemented properly. One that stuck with me was that we had a handler working with a trainer. The trainer set an explosive scent (so the handler wouldn’t know where it was and cue his dog). However, the handler would usually stand about even with the scent so he could see if the dog was close enough to the training aid when the dog indicated. A few times, when the handler moved away from his customary position, we noticed the dog would indicate when he got about even with the trainer, even though it wasn’t near the scent. The dog learned to look for visual cues (being even with the trainer) rather than sniffing. Another was that sometimes the dog would start searching until he found the scent of the trainer, then would follow the trainer’s scent trail to the aid. “Well, that’s very clever, Rex, but it’s not really going to be effective in a real search.”
Dogs doing a scent search are *very* effective. (I’d stake my life on it–and have.) However, if the trainers and handlers aren’t very careful the dogs will lie to you.
So you’re saying it was a mistake to buy a pet Galapagos turtle?
The WWI link caught my attention. I can think of at least one unique aspect to this war. It was the first time in history that massed soldiers marched forward into machine gun fire. That doesn’t seem to comport with the definition of wimp.
It was also the first, and only, time in history that gaseous chemical warfare agents were used on a large scale. Although they were of limited offensive potential, they were said to have been extremely effective psychological weapons.
No it wasn’t, even if you were to argue that the gatling gun isn’t a “true” machine gun. Here’s an example of recoil-operated machine guns being used more than 20 years before WWI. Machine guns were also extensively used in the Russo-Japanese war from 1904-1905.
The majority of combat casualties were due to artillery fire, though. Something like 60% of deaths and 75-85% of injuries were from shelling.
I think WW1 has gotten its reputation in part because it dragged on so long. Casualties of 20,000 a day have been possible for thousands of years (something like 60,000 were killed in one day at Cannae in 216 B.C.), but that sort of battle would be decisive. In WW1 that sort of thing happened again and again.
The psychological impact of the first large-scale use of chemical warfare also has a part to play in the perception of WW1. Who can forget poems like “Dulce et Decorum Est”?
Though, in my opinion, the Eastern Front in WW2 takes the cake for the most apocalyptic fighting in modern history.
The Iran-Iraq war was also horrifying. Like WW1, but transplanted to the 1980s, and with addition of religious fanaticism and child soldiers.
WW1 can only be considered long if you count the time frame from the end of the Napoleonic Wars to WW2, and it wasn’t uniquely long even for that period (the US Civil War lasted a comparable length of time, for instance, and if you want to count non-European wars, the Taiping Rebellion was over three times longer). The redditor’s theory that it was a war that sent a high proportion of the literary enthusiasts in the upper middle and upper classes of Europe to the front makes more sense to me.
It’s not that the war lasted particularly long, but that the “battles” lasted so long. People spent months being shelled. That’s probably optimal for PTSD. Trench warfare is pretty much unique to WWI. There were a couple battles on the WWII eastern front that were comparable, probably even worse, but the whole western front in WWI was much larger.
You can think of WW2 as what happens to a continent run by men with PTSD.
Everyone knows that Hitler served in the trenches, but I did not know that Mussolini did, too. But Chamberlain, Churchill, Lebrun, and Pétain were all way too old to do so. Churchill, Chamberlain, and Lebrun were in civilian government during the war. Pétain, FWIW, was a high-ranking officer during the war, probably not spending a lot of time in the trenches. Stalin was rejected by his draft board; I am surprised that he was considered, over the age of 35. Of course, he fought a different war. There are other countries, and there are a lot more people running a country than the premier, usually younger people, but I’m not sure about that timing.
Churchill did, in fact, serve in the trenches during WWI, after resigning his post in the civilian government.
Artillery, trenches, gas, yes – but WW1 got its reputation primarily because it was such a shock for the British officer class to be thrust from a relatively pampered life into demands even more lethal and traumatizing than what was expected of a foot soldier. Infantry officers had by far the most dangerous job, and they also came much less adapted to trench life than the lower classes.
For soldiers, the war was still highly traumatic, but it was a certain generation of the British upper class that gave it the reputation of an unprecedented hell on earth.
Edit: Oh, that’s pretty much the link’s content as well. I hadn’t clicked on it!
Most casualties were caused by artillery, which makes it fairly terrible because you cannot fight back; you just sit there passively in a big frigging death lottery. A “fair” war is one where you have a chance of shooting the guy before he shoots you. How demoralizing must it be that you can do nothing with your rifle against the arty that kills your best friends? It is basically not even fighting in any sense. Also, being injured by pieces of your comrades’ bones when they blew up was a thing.
From the link:
“a mere ~25% of the year in positions ‘under fire’ and only 2-3 days in actual direct combat”
The point is, the 25% under arty fire that you are helpless to fight back against is WORSE than direct combat when you can.
Generally speaking, soldiers have far less fear in situations in which they “cannot fight back” than when they are forced into intimate contact with the enemy. See here for more on that.
Bad link; try this.
Concentrated rifle fire can be as lethal as machine guns, because the slower rate of fire is compensated for by greater accuracy; the real advantage is how much more quickly you can train a machine gunner.
So the American Civil War saw massed soldiers marching into something as deadly.
Soldiers in the US Civil War (on both sides) tended to get thrown into battle with a minimum of training. Their musketry wasn’t very effective by contemporary standards (200 yards was generally the outer limit for firefights in the US Civil War; European armies were usually trained to hit targets 400 yards away; British infantry were expected to be able to break up a cavalry charge from half a mile away), and certainly not by comparison with WW1 machine-gun fire.
200 yards was generally the outer limit for firefights in the US Civil War; European armies were usually trained to hit targets 400 yards away
Apples and oranges. How far away did European armies actually hit their targets with rifle-muskets in their wars of the 1860s?
Well, at the famous “Thin Red Line” engagement at the Battle of Balaclava (1854), the Sutherland Highlanders fired several volleys at advancing Russian cavalry, beginning at 600 yards and ending at 150 (after which the Russians wheeled about and retired).
At the Battle of the Alma the previous year, the British opened fire on the Russians at 1200 paces. Not sure what proportion hit at that distance, but they were able to drive the musket-armed Russians off the hill before the latter could get range for a firefight.
Rifles, Mr. X, rifles. There were a few smooth-bores at the very beginning, armies being what they were, but by the end, the Union armies had breech-loading repeating rifles and matched the longbowmen for rate and range of fire.
Well, the distinction’s a little fuzzy at that point in history. The Springfield 1861, the most common gun of the war, is often called a “rifled musket”.
As Nornagest said, the two terms weren’t clearly distinguished during this period. The term “musket” was often used to refer to rifled guns designed to be used in the same way as the old muskets had been (i.e., by massed infantry formations firing volleys).
6 out of 10 French men between 18 and 25 died or were maimed. More than a third of the soldiers aged 19 to 22 were killed. Look at population pyramids (here) in relevant countries to get an idea of the effects 50 years later.
No European war before affected demographics to that level.
Does that shed any light on what to expect from China’s 1 child policy?
This simply isn’t true. See here:
He was writing about it as if non-wimpy lower-class men were all cool with the experience, or at least expected to be cool with it. Not being cool with the realities of trench warfare is not exactly a sign of being “pampered”. I would grant the point that lower-class experiences weren’t cared about before, and that those men weren’t listened to when they talked about their experiences, but he writes about them as if they were unaffected by the war. Lower-class young men had psychological problems coming back from the war too; it was not something unique to the British upper class. And it was not exactly as if they were all eager to go, or as if their families didn’t care about them going away to die.
The other thing is, WWI having a bad reputation at the time was something the French and Germans experienced too, so if your explanation centers on British society only, it might miss part of the picture.
“Study predicts that US average IQ will rise 2-3 points in the next fifty years, driven by finding that minorities are catching up very quickly. Haven’t investigated their numbers yet, but excellent if true.”
Doesn’t the Flynn effect predict that IQ will rise ~3 points per decade?
Yes, but it seems to have stopped, at least for first world whites.
There is strong evidence against the test score convergence of any large ethnic minority except “Asians”. They alone might drive the 2-3 point increase (assuming no mass chain migration of lower-IQ family members):
Fourth image. The B-W test score gap has widened since the late 1990s, as Steve Sailer has recently pointed out. I have a theory national acceleration is correlated with personal acceleration, but I’m not sure how that works for the Irish.
How do you explain the different data in the linked paper?
Well, it’s NAEP data for one, and NAEP data gives Black Americans an IQ of 90.1.
And, as everyone knows, that’s much closer to Hispanic IQ, not Black IQ. The NAEP is not an IQ test, it’s an assessment of educational progress.
Even the B-W NAEP score gap, BTW, has not budged since the mid-1990s:
Okay, that serves me right for just skimming the top and the bottom value and not checking what happens in between.
The Hispanic-White NAEP gap used to be roughly 23 points until 2004, when it shrank to roughly 17 points for some reason:
More evidence B-W IQ (or, at least, ACT and SAT score) convergence ended in the early 1990s:
I have a theory national acceleration is correlated with personal acceleration, but I’m not sure how that works for the Irish.
How do you mean? Celtic Tiger boom times saw the nation getting wealthier and more in line with other Western nations re: social attitudes, but there was no perceptible rise in IQ, or test scores on international tests were improving even in times when the country wasn’t as economically successful, or what?
If you’re going by the “Irish IQ is measured in the 90s”, I think the guy who did that had a lot of political motivation behind demonstrating the benighted Free Staters split off from the United Kingdom because they were too stupid to realise the benefits of higher civilisation, but that’s only my personal view 🙂
Irish in the U.S. had caught up with average Americans (on average) by the 1950s (or so it seems), before the Irish boom in both IQ and GDP/capita.
It seems to have stopped for upper-class Africans too. And their IQ in the reliably g-loaded WAIS-IV is still about a decile less than American Whites. Twin Studies give us the same 50%-80% heritability in African populations.
Chisala has a very poor argument.
Or even reversed.
We here report data from a population, namely young adult males in Denmark, showing that whereas there were modest increases between 1988 and 1998 in scores on a battery of four cognitive tests […] scores on all four tests declined between 1998 and 2003/2004. For two of the tests, levels fell to below those of 1988. Across all tests, the decrease in the 5/6 year period corresponds to approximately 1.5 IQ points, very close to the net gain between 1988 and 1998. The declines between 1998 and 2003/4 appeared amongst both men pursuing higher academic education and those not doing so.
Does this study control for immigration?
Welp, so much for the singularity.
The 2015 NAEP math scores released last week were mostly down from 2013.
Don’t know why.
BTW, on the Federal deficit: it has an R² of .6582 with the unemployment rate. And when adjusted for the unemployment rate, it’s almost back to Bush-era highs, especially as a percentage of GDP.
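The R² statistic quoted above can be reproduced mechanically. Here’s a minimal sketch in pure Python; the yearly unemployment and deficit figures are illustrative placeholders, not the commenter’s actual series:

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple least-squares fit of ys on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope and intercept of the ordinary least-squares line.
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # R^2 = 1 - SS_residual / SS_total
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Placeholder annual figures (not real data): unemployment rate (%) and
# federal deficit (% of GDP) for the same hypothetical years.
unemployment = [4.6, 5.8, 9.3, 9.6, 8.9, 8.1, 7.4]
deficit = [1.1, 3.1, 9.8, 8.6, 8.3, 6.7, 4.1]
print(round(r_squared(unemployment, deficit), 4))
```

For simple one-variable regression, this equals the squared Pearson correlation between the two series, which is presumably how the .6582 figure was obtained.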
Also check 14Kr, please.
Scott, those are pretty much the only opulent spaces in Moscow a typical visitor will see. The rest of the city largely looks like crap. Last time I was in Moscow was as a stopover point via rail in January 2013. The outside all smelled like smoke (cigarette, industrial, and automobile) and was full of dirty black snow and smog and the sky was grey and the environment was generally disgusting. But as you get into the corridors that lead down into the Metro, first, you get a covering over your head with a nice yellow-lit ceiling, then, the smoke vanishes, then, soon enough, you feel like you’ve arrived at an astonishingly opulent area, with no smoke and lots of artificial beauty.
The pictures are pretty accurate, but imagine the areas being filled with people and more yellow-tinted.
The St. Petersburg airport (where I stopped over in February 2015) is generally decent, though, and the air feels cleaner there.
That’s about as bullshit as it gets. The rail terminals are notoriously dirty and there is a lot of smoke there, although the recent wave of reconstruction along with stricter smoking regulations slowly takes care of even that. The city is far from the disgusting smog-filled dirty dystopia that E.Harding paints based on his “rail stopover”; you can walk in the central part of Moscow for hours enjoying architecture and urban landscapes both old and new without any “smog” or “disgusting environment” or “black snow” (dirty snow optional depending on the season, the time since the last snowfall and the amount of people walking in a particular area).
By the way, Burdeny seems to be concentrating on the older metro stations, the newer ones are also very cool:
Slavic boulevard (2008) – a “mineral gemstone” look
Troparevo (2014) – “aluminum trees”
The old design, with the rail platforms separated from the central “hall” by the heavily decorated “colonnade” with the arched “passages” that you can see in Burdeny’s photographs, is unfortunately simply inefficient for the amount of people the Moscow subway has to support nowadays – the new stations have to make up with “ambience” what they cannot provide in glitter.
No offense meant mate, and this may be out of date, but I lived in Moscow back in the mid-90s for a year. It was the most depressing place I’ve ever lived except for the smaller towns in Russia. Architecture outside a few major government buildings was all identical, the buildings are tall enough to block a lot of light, everything was expensive, there were no green spaces at all, weather was crap, smog was endemic, and the crush of people was endless.
On the bright side, their metro made the entire city accessible by foot within 45 minutes, which is pretty good for a city that large.
To be honest, the mid-90s were a pretty grim time for Russia (one of many, I guess).
And the biggest change in urban environment since then has been that there are now way more cars.
Well, February 2015 was, I think, just after they renovated the airport in St. Petersburg. So that kind of doesn’t count.
I’ve been to central Moscow in summer 2014 and I can say you’re full of misleading information. Kapkov and his people have really done a great job, if on a limited scale.
Well, I wasn’t there in summer! Maybe it looks better that time of year.
Off topic, Scott: Have you considered writing a post on borderline personalities? I ask because I have a (sort of) family member who was just diagnosed. Given your excellent past posts on depression, I pointed google at SSC to see if you’d written about it. I didn’t see anything extensive.
(sorry for abusing the links post for this query but I seem to recall you prefer that to email)
DBT is very helpful. There’s no great pharmaceutical cure, but a lot of good studies have shown Seroquel provides some symptomatic relief, and I heard a vague totally unconfirmed rumor from a guy who’s usually right about things that very very high dose SSRI can have an effect, also some very anecdotal success with inositol.
Ozy has a lot of really good posts about borderline up at their blog, mostly under the tag DBT.
Interestingly, Seroquel is exactly what they put her on, but it seems to knock her flat so it probably won’t work out for long.
I’ll check out Ozy’s posts on the subject. Thanks.
Mismatch theory in education has been around a while. Sander & Taylor (of UCLA, very unlike the Heritage foundation) had the key study & book (http://www.theatlantic.com/national/archive/2012/10/the-painful-truth-about-affirmative-action/263122/) but that’s a few years ago and I wouldn’t be surprised if more evidence is out there now.
This is an excellent article. At the same time it worries me to jump on this data because of what it says about the University of Texas.
There are two admission tracks to the University of Texas. One is race-blind and takes into account where you fell within your high school class (the top 10% gets in). The second track, for people who miss that cut, uses classic affirmative action. The second track can’t account for such a massive mismatch as they describe, though; fewer than 50 students a year get in that way. So while they’re mismatching, to abandon the primary-track entry policy is to basically condemn people who were born into a bad situation.
Not that continuing it is necessarily a solution either, perhaps there is no solution except to address the K-12 system/poverty/whatever else.
The whole point of the mismatch theory is that it’s actually letting people into a college whose coursework they can’t handle that’s “condemning” them. The whole problem comes from people treating college admissions like a moral judgment.
I TAed physics for engineers at UT, the weeder class where people were convinced that, no, they were not able to hack it as an engineer. I never did a poll, but from what the grad students said to each other I think the top 10% program convinced most of us of mismatch theory.
The top-10%-of-your-high-school route into U Texas is also a form of affirmative action. A school can have an entirely low-performing student body, and yet 10% of it can still get into U Texas.
SAT or ACT is the rational way to filter applicants if the goal is to choose students with highest odds of performing well. Obviously the school administrators want to achieve other conflicting goals.
I remember that the economist also had an article which mentioned this point (in 2008), I found it here: http://www.economist.com/node/11326407
It looks like they refer to the same study (Richard Sander at UCLA) that HH linked to and that Heriot referred to in the original article.
A slightly different observation about affirmative action and African American representation in law school is made by Freada Kapor here:
“If you look at the law firm world, I remember in the early 1990s, every African-American associate that I interviewed at a top-tier law firm told me that affirmative action was the worst thing that happened to their careers. The automatic assumption is that you don’t ‘really’ belong here. You’re not ‘really’ qualified to be here. You’re only here because of affirmative action. I worry about seeing the same reactions today.”
Here’s a link to the article: http://techcrunch.com/2015/04/02/kapors-2/
Clarence Thomas has said this too.
“Thomas argued that what he called the stigmatizing effects of affirmative action put him at a huge disadvantage when he was trying to find work as a lawyer.
Thomas said he went on interviews with one “high-priced lawyer” after another who didn’t take him seriously because they thought he got special treatment.
“Many asked pointed questions, unsubtly suggesting they doubted I was as smart as my grades indicated,” Thomas told ABC News.”
I think that has implications beyond just affirmative action, extending to UMC culture of trying to get into the best Ivy League school possible
if all the SAT prep and resume-enhancing extracurriculars just land you in a place where you’re now over your head, what was the point again?
It depends on the relative importance of school vs major. Are you better off with a Political Science degree from Yale or a Physics degree from UConn?
sure, depends on whether you want to be a politician or a physicist too
Tumblr user worldoptimization had a post about how this accords with people she knows who likely or definitely “benefited” from affirmative action at her elite university. If you want some anecdotal evidence.
Speaking of Sander, this study seems to show that your choice of law school is basically irrelevant to career outcomes, even though law school rankings are obsessed over by most law school students. As someone considering law school, I’m concluding this represents a great opportunity to skim off scholarship dollars from a school where I am at something like the 75-90th percentile.
This is a good time to do it, given that law schools have suffered sharply reduced enrollments and are desperate for students. If they think you will raise their bar passage rate, at least in California where that is a big issue, they should be willing to give you a good deal.
My cousin did something more or less like this—he probably could have gone to Harvard (I’m not sure, but I vaguely recall it being on his law-school visit-list). Instead, he went to a mediocre (currently on page 6 of law school rankings) school that was local, where he graduated second in his class (he may very well have been first in another graduating class; apparently there was some sort of savant who beat him out) and now has a job he doesn’t totally hate with the state where he’s being held back by lack of seniority*, rather than lack of school-prestige.
On the other hand, he’s a freakishly good student, so he may have also have done well at Big-Name school and now have a job that pays several times as much at Big Firm (or become IRL!Sam Seaborn). The only data point I can give you is that a good student went to a law school beneath him and things turned out fine. Ish.
Also, I’m told “lawyer” is a miserable profession. Apparently there’s too many lawyers for too few positions, the hours suck, and they have some of the highest drug use and suicide rates. (My cousin’s moving towards being Toby Ziegler, which can either be seen as evidence that being a lawyer is miserable, in that it pessifies your worldview, or as a character trait biasing his view.)
*His boss quit, so he’s been doing what was his boss’s job, but without his boss’s job title or salary.
If you would like to get into the “prestige” areas of law – federal court clerkships, biglaw, etc. – where you go to school is SUPER important. Want a Supreme Court/circuit court clerkship? Even at the top of your class, if you’re not T14 you prolly aren’t getting an interview. The competition is too intense.
But if you don’t care about biglaw, and just want to work in some small private law firm or in gov’t law, not going to a T14 is no big deal. Esp for gov’t work, having a high rank from a local school is plenty good.
That’s the conventional wisdom. For federal clerkships, that may be on target, but I am not interested in federal clerkships. For Big Law, this study says the conventional wisdom is wrong. I trust a study more than I trust, say, the opinions of random journalists or law students who push the “school is everything” line–it seems very easy for folks to be misled by confusing correlation with causation. You need to drill down a bit to figure things out. I haven’t seen any other study do that. More
I can only speak to my own personal experiences going to law school and getting a job. So if you trust studies saying otherwise, feel free.
I think the conventional wisdom is mostly right, though. Want to be in the ultra-prestigious and competitive areas, in the best jobs, in the real power part of law? Go to a top school. Just want to make decent money as a tradesman? Doesn’t matter; you’ll make it wherever.
I’m at a good but not really top law school. I know at least four federal appeals court judges, two or three of them pretty well. I’m reasonably sure that on the one occasion when I recommended a student for a clerkship to one of them he got considered, although I think not eventually chosen, and I think that would be typically true. I expect some of my colleagues have similar contacts.
So impressing your professors at a good but not absolutely top law school could give you at least a shot at an appeals court clerkship.
@The_Dancing_Judge it’s also worth considering the outcomes of students who take on a lot of debt to go to top schools, don’t do well, and then don’t get a good job
which is a real outcome, and not a fun place to be (from more personal experience than I really care to recall)
given that law schools are graded on a curve, the relatively constant state of you, and the variability of the law school pools of talent you compete against, I think there’s a lot to be said for taking money to go somewhere lower ranked
I think A’s thinking pretty soundly about this; debt is more likely to be a future constraint on him than his lack of ultra-prestige options (which going to a top law school hardly guarantees)
thanks for that link, I had seen that original study a while back, and then tried to find it again, and for the life of me couldn’t
the search terms that seemed relevant –> “law school” “rankings” “career outcomes” etc are pretty busy terms
I’m glad i finally found it again
I think the Reddit WWI link could use some context. Fortunately, Reddit has this feature! If you attach the option “context=n” to a comment link, it will also show the n comments above. Here’s the link with context 1.
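The `context=n` convention described above is just a query parameter on the comment permalink. As a sketch, here’s how one could add it programmatically; the permalink below is hypothetical, and only the `context` parameter name comes from the comment:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_context(comment_url: str, n: int) -> str:
    """Append Reddit's context=n query parameter, which makes the comment
    permalink also display the n parent comments above the linked one."""
    parts = urlparse(comment_url)
    query = dict(parse_qsl(parts.query))
    query["context"] = str(n)  # overwrite any existing context value
    return urlunparse(parts._replace(query=urlencode(query)))

# Hypothetical permalink, for illustration only.
url = "https://www.reddit.com/r/AskHistorians/comments/abc123/thread/def456"
print(with_context(url, 1))  # same URL with ?context=1 appended
```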
Are Somali refugees unselected? I assume the selection of refugees to bring to the US was not random, and it would take some guile/agentiness/english-ability to be selected.
I’m not very familiar with this subject, but the little information that I’ve gained from a quick Google search indicates that refugees are generally prioritized for resettlement based on humanitarian need. If anything, you would expect this to select for people with worse life outcomes.
I’m sure the requirements are more stringent than that. Dealing with bureaucracy is a vital part of coming to America for anyone but illegal immigrants.
I’m pretty sure the Somalis in America are strongly selected somehow. The ones in Europe (who are presumably much less selected) do pretty poorly by any metric. This article claims that in Britain 73% live on benefits and only 10% are in official work (though more surely work on the black market), so they are hardly a model minority. I think the story is similar in the rest of Europe.
If autism rates are anything to go by, the refugee settlement program selects for worse traits.
Agreed. It’s expensive to travel to another continent and navigate the bureaucracy needed to stay here. I’m not sure how much of that selects for IQ versus wealth versus social resources, but definitely not a random group. NN’s comment goes to rich countries choosing among those who apply to them in the rich country for refugee status–rich countries don’t go to Somalia and swoop up those with most need.
Yeah, this seems to be a key point in the back-and-forth between Chisala and everyone in the Unz comments. Chisala argues that a selection effect would have to be ridiculously strong to reconcile his observations with standard estimates of sub-Saharan IQ–that it would have to be harder to get into a refugee camp than to become a surgeon in the same African country, or something like that. I’m not sure I buy it, but it’s definitely under discussion.
Edit: wonder what Steve Sailer has to say about all this?
Which is why the proper problem to have with his argument is that he doesn’t actually have any observations. He just makes up hypothetical IQs for children of African immigrants and their parents, then says he’s proven regression to the mean is less than what one would expect given model X, thereby disproving model X.
It’s really just godawful.
There’s some question of whether the daraprim alternative is legal, since it is illegal to mass produce compound drugs, if I understand correctly. The CEO of the company planning to compound it said: “To be clear, we’re not mass manufacturing single drugs … That’s not our business, that’s illegal. Every drug has to be customized. There’s not just one version that we plan to make.”
But that makes no sense to me. My colleague wrote about it here; it seems to be a pretty grey area at the very least.
edit: ugh, formatting.
It is my understanding that compounding pharmacies can only fulfill individual orders, so on a per-person basis. So the doctor prescribes the drug and places an order with the compounding pharmacy, this avoids regulation since there is no mass production.
I think this works in the Daraprim case, since there are a relatively low number of people who need access to the drug.
Right. One patient calls the pharmacy for a specific drug, which is different from any mass-produced drug on the market (Pyrimethamine+Leucovorin, rather than Pyrimethamine+inerts). Has a prescription from a doctor saying we want this stuff specifically on account of the Leucovorin cutting down on some side effects, which seems legit. Compounding pharmacy makes a batch of P+L at the doctor-specified dosages, specifically for that one patient, and delivers to same.
This is exactly what compounding pharmacies are supposed to do. It would be fishy to do it for fifty thousand people, one right after the other and the same way each time, but it would also be logistically impractical. And unnecessary, because there aren’t fifty thousand people who need this stuff and if there were the market would easily pay for someone else to get their $10 million mass-production license.
I don’t know the evidence on African immigrants, but Thomas Sowell, in _Ethnic America_, reported that immigrants from the West Indies reached the median U.S. family income in one generation. He argued that that fact contradicted both popular explanations for the poor outcomes of Afro-Americans, genetic and due to prejudice, since the West Indians are blacker both genetically and in appearance. His explanation was the difference between the culture that came out of West Indian slavery and the culture that came out of U.S. Southern slavery.
Sowell also, in another book, made the point about the negative effect of affirmative action on the education of blacks. His example was a black student who was better at engineering than most students and ended up at MIT—where he was worse than most MIT students. He would have done much better at a slightly less elite engineering school.
That’s what Chisala concludes as well. I can think of two potential counterarguments if the theory that African immigrants aren’t selected holds up. Number one, maybe African-Americans were negatively selected, e.g. only certain kinds of people got sold into slavery. Number two, maybe white people are using some characteristic other than skin color to discriminate. I suspect there’s a lot more discrimination than anybody thinks based on African-American accent – we know that accent plays a big role in early childhood concepts of ingroup/outgroup, and this would match a lot of white people’s (mine included) intuition that Obama doesn’t really register as black. African immigrants would have different accents that white people wouldn’t necessarily have such a strong prejudice against.
Considering how often real life bigots do things like shoot up Sikh temples to get back at Muslims, I’m going to need to see some actual evidence before I consider the Remarkably Precise Racism theory.
While you may have a point about Obama not really registering as black because of his accent, Obama has an accent that sounds, for lack of a better word, American. With African immigrants, I suspect that a lot of people would associate their accents with Scary Foreigners due to decades of news stories about Rwanda, Somali pirates, Boko Haram, etc. When you toss in the fact that a lot of African immigrants, especially Somalis, are Muslim, I find it hard to believe that they would face far less discrimination than native blacks.
“to get back at Muslims,”
-You have evidence for that, right?
Wikipedia doesn’t mention the motive for that attack. However, it seems plausible that it was prejudice against Muslims, since the perpetrator was a white supremacist, and white supremacists are quite outspoken about their dislike of Islam but rarely seem to mention Sikhism.
In any case here is an example of a racially motivated murder where the murderer thought the Sikh victim was a Muslim.
Not being able to tell the difference between two cultures from the other side of the world vs. not being able to tell the difference between people who have lived in your country for hundreds of years and some similar-looking other-side-of-the-world people.
On the other hand, I think West Indian accents sound somewhat British, which is if anything high status to Americans. As do the accents of many Africans.
Not every racist has to be precise for precise racism to have effects. You can find anecdotes, anecdotes, anecdotes about the, say, 20% of bigots who can’t tell a Sikh from a Mussulman or a Nigerian immigrant from a slave descendant, and those 20% are gonna be unpleasant for the Nigerians and Sikhs, but the other 80% can lead to major differences in the actual statistics.
And how often *is* that?
Has it happened more than this once?
Both sound true. If you read classics like Aristotle they all thought their – white – slaves are natural slaves who are incapable of being free men and so on. This sounds really weird, as slavery usually began with being captured in war and one would imagine that is a lottery. As far as I can tell, roughly this is how it happened in West Africa too: tribal wars, capture. Perhaps what Aristotle and guys like him thought was that being captured in war, as opposed to getting killed, wasn’t a lottery but selected for some traits. I have no idea what it could select for. As far as I can tell it was not dramatically cowardly personal surrender, it was just getting knocked out by a glancing blow and waking up enslaved.
As for the second, thinking only skin color drives discrimination would imply discriminators are really stupid. My experiences mostly relate to Europe, but at least here much of race-and-ethnicity reduces to class. That is, people are far more interested in discriminating for class and use race or ethnicity as a proxy. They want people “acting white” and one easy way is to choose whites. So the average German cussing out dirty foreigners does not mean you, and does not even mean the Turkish businessman in a suit. He has a very specific type of lower-class Turk in mind: some sort of Anatolian peasant who would look just as off in a posh part of Istanbul too.
Then again, maybe it is because here actual skin color differences are smaller. Still, I think many Americans would rather hire a Thomas Sowell or Condi Rice type than some white trash, as they are “acting whiter”, i.e. higher class.
(Obviously, the term “acting white” is inherently stupid, color is not culture. But class is culture, hence color is used as a proxy for class.)
I should add that discriminating for class seems inherently acceptable to me, assuming we understand class isn’t just money. Broadly speaking, higher classes have less violence and unpredictability and behave better, and not only because these behaviors tend to select people into upper classes and keep them there, but also because having shit for upbringing gives you all kinds of traumas and issues to deal with, and behaving well is difficult after that.
Hence there is a certain heartless rationality in keeping your distance from everybody who probably had a traumatizing upbringing. In liberal lingo: hating on the poors. They probably are more unpredictable, more prone to violence, have higher time preferences and a faster life history strategy.
If you read classics like Aristotle they all thought their – white – slaves are natural slaves who are incapable of being free men and so on. This sounds really weird, as slavery usually began with being captured in war and one would imagine that is a lottery.
The free-spirited man would fight to the death, or commit suicide, rather than let himself get captured and enslaved.
It is easy to get knocked out by a glancing blow and wake up in chains. When men wear helmets, this should be a regular occurrence, as an iron hat merely spreads the force but is not flexible enough to absorb it, giving the brain a good shake. Suicide could follow, of course, if possible.
Also, how spirited is a man who agrees to be enslaved and then escapes? That sounds smart, although whether it is honorable or dishonorable smartness depends on whether there was something like an honor code preventing it. Easy to imagine both ways – there are cultures where such agreements are absolutely honor-bound (medieval Europe, although there it was about ransom, not slavery) and there are cultures where outsmarting the enemy is okay.
I expect they had enough sense to pad their helmets. Certainly medievals did.
I’ve fought (SCA combat) in a metal helmet padded with horsehair. I was being hit with weapons similar in weight to real ones, although rattan rather than steel. Never got close to being knocked out.
[just barely resists snark about Prof. Friedman]
“Zeus takes away half of a man’s virtue when the day of slavery comes upon him”
-Homer, Odyssey 17.322
Of course, the situation in which this is said is interesting. The speaker, Eumaeus the swineherd, is himself a slave (kidnapped by pirates as a child and bought by Odysseus’ father), though a slave who has done quite well for himself, owning property including a house and a slave of his own. He’s also a character who is portrayed extremely sympathetically, at one point even being referred to as “godlike”. He is talking about why he believes that slaves (not including himself!) do not work as hard when their master is away, although he doesn’t state that this justifies their enslavement.
In the real world, Diogenes spent an unclear amount of time as a slave tutor to the children of a Corinthian named Xeniades after being captured by pirates. Plato was also captured and put up for sale at one point, but his students ransomed him.
“I should add that discriminating for class seems inherently acceptable to me, assuming we understand class isn’t just money”
Britain destruction-tested that idea.
If you take discrimination by class to the extent that you neglect individual competence, then you’ll be outcompeted by more meritocratic societies, because everything that isn’t meritocracy is less effective than meritocracy.
Oh, and discriminating *against* lower classes kind of disincentivises them from improving.
Britain was outcompeted? By whom? Well, America. But the immense amounts of untapped land and natural resources helped, and Britain seems to have committed progressive suicide earlier than America, which also helped; besides, the upper classes died disproportionately in WWI, and this obviously lessened their grip and created mobility. And yet the slow decline of Britain began after that event, when the upper classes died in the trenches. This is not talked about much today, but despite Churchill, Britain really sucked in the early parts of WW2 – Norway, Crete… It was non-trivial to get things together after that.
Obviously it is important to have upward mobility for exceptional individuals, no question about that. Does knighting industrialists count? It happened a lot.
But merit is MADE. How do you make a good racehorse? Breeding and training. And that is how human merit is made. Breed and train/educate an upper crust well and on the average they will be far better than others.
You’re telescoping two claims. There is a purely theoretical claim about the superiority of meritocracy. And there is an empirical claim that the shortcomings of the British class system were widely recognised when it was at its height.
“Progressive collapse” isn’t a thing.
“If you take discrimination by class to the extent that you neglect individual competence, then you’ll be outcompeted by more meritocratic societies, because everything that isn’t meritocracy is less effective than meritocracy.”
One also notes that in some cases, the non-meritocratic system will be more efficient because it can sort more quickly. Non-trivial investment in testing merit means decreasing returns as you get more rigorous. As long as you get someone adequate, you may benefit in the long run.
Yes, you need modern-style levels of wealth to implement meritocracy, and modern societies need it — it is something that is both allowed and required by industrialisation and post-industrialisation.
And there is very strong evidence of the education/meritocracy/technology bundle being able to spread by conquest, colonisation or imitation.
Class/caste systems are a good enough way of assigning people to jobs when no one does anything very complicated, but passing skills down from father to son plus slow technological advance just isn’t as good as that bundle.
Britain destruction-tested that idea.
If you take discrimination by class to the extent that you neglect individual competence, then you’ll be outcompeted by more meritocratic societies, because everything that isn’t meritocracy is less effective than meritocracy.
A counter-argument: Britain was astonishingly successful in the 18th and 19th centuries (I think kick-starting the Industrial Revolution and conquering the largest empire the world has ever seen would count as “astonishingly successful” by any reasonable metric), when British society was quite class-bound, and less successful (relatively at least) in the 20th and 21st, when society became much more mobile and meritocratic. Of course, it’s possible that Britain succeeded despite being class-bound, and would have declined even more if it had stayed that way, but it does suggest that there was something else going on.
Also, a potential confounder: all empires, no matter what their social system and government, tend to go through cycles of expansion and decline. How far was Britain’s being out-competed by America due to the inherent superiority of meritocracy over aristocracy, and how far was it simply due to Britain having peaked sooner than the US?
Is that actually true, beyond a few specific cases that don’t apply to Britain (i.e. empires held together by one person)?
(Note that “every empire in existence has declined” doesn’t prove that statement. If empires merely underwent random walks in expansion/decline and expansion did not increase the probability of decline, you would still see pretty much every empire in history having expanded and then declined anyway.)
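The random-walk point can be checked with a quick simulation (a toy model with arbitrary parameters, not a claim about any actual empire):

```python
import random

random.seed(0)

def run_empire(max_steps=10_000):
    """Symmetric random walk starting at size 1; the 'empire' ends if its
    size ever hits 0. Returns (peak size, whether it ended)."""
    size, peak = 1, 1
    for _ in range(max_steps):
        size += random.choice([-1, 1])
        if size == 0:
            return peak, True
        peak = max(peak, size)
    return peak, False  # still around at the cutoff

results = [run_empire() for _ in range(200)]
ended = [peak for peak, done in results if done]

# Even though every step was a fair coin flip -- expansion never made
# decline more likely -- nearly every empire ends, and every ended empire
# shows the classic shape: it rose to some peak and fell back to 0.
print(f"{len(ended)} of {len(results)} empires ended, each rising then falling")
```

So observing that historical empires expanded and then declined is, by itself, consistent with expansion not raising the probability of decline at all.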
“As for the second, thinking only skin color drives discrimination would imply discriminators are really stupid. My experiences mostly relate to Europe, but at least here much of race-and-ethnicity reduces to class. That is, people are far more interested in discriminating for class and use race or ethnicity as a proxy. They want people “acting white” and one easy way is to choose whites. So the average German cussing out dirty foreigners does not mean you, and does not even mean the Turkish businessman in a suit. He has a very specific type of lower-class Turk in mind: some sort of Anatolian peasant who would look just as off in a posh part of Istanbul too.”
To me it seems clear that most racists adapted to the unacceptability of open racism implemented in the 70s by opening up their cultures to anyone, of any skin color, willing to take on their values, styles, and norms, while remaining prejudiced (now much less unjustifiably, and at least purportedly based on the merits of the culture, though imo still largely through inertia and for non-rational reasons of taste) against the cultures of the races they used to dislike. As I wanted to say in the Whiter Shade post but unfortunately never had the time to really expand/defend, I think this is the basic story of the Red Tribe – White Nationalism stripped of racism – and it explains both why people like Trump ping people’s racist-detectors while the statistically-similar Sanders doesn’t, and our endless, exhausting circle of “you sound racist!” “but I’m not racist!” “but you smell like a racist!” “but look at how much I like this black guy that thinks and talks like me!” “well he’s just a token!” “now you sound racist!” ad nauseam.
But you are still failing the political Turing test – you rest on the assumption that discriminators are just evil or stupid or whatever. That they are somehow fundamentally different human beings from you.
Employ charity, which is just diligence. Imagine they are like you. Suppose something X could happen which would convince you that some kind of discrimination is good. What could that thing be?
In other words, try to find some normal human motive: selfish but not lacking conscience and decency, something that makes sense.
For example, there are good reasons to prefer your own culture. The story could be the other way around – it was not them opening up their culture to other races, but a natural process of some folks of those races finally getting in. And thus race no longer predicted culture, and the prejudice eased.
But culture is too vague a word. It is class. You see high-class, educated Russians, e.g. mathematicians, fitting in the West excellently, and that is a different enough culture. But plain simple people know that a mathematician is not going to suddenly get violent over a misunderstood joke and stab them.
The problem is people dislike admitting it because hating on the poor sounds like an asshole thing. In fact it is the most reasonable kind of caution. Class is even more important than religion or anything else, and I am speaking of practical experience, for example, rich Jews, Muslims and Hindus get along perfectly on the golf range. Meanwhile…
I mean, my theory is perfectly predictive. If being high class is its own race, ethnicity and religion, it predicts why they get along globally (the global jet-setter class) and why they hate racism, etc.
Meanwhile the lower classes don’t accept it because they know they are truly different.
“If you read classics like Aristotle they all thought their – white – slaves are natural slaves who are incapable of being free men and so on. This sounds really weird, as slavery usually began with being captured in war and one would imagine that is a lottery. ”
Not true. Aristotle thought there were men only fit to be slaves. They were distinguished by their lack of the deliberative faculty — as opposed to, say, children, who only had an immature one. These men were natural slaves — it was their very nature to be slaves.
And the reason you needed to add the adjective is that he also noticed that the process of enslavement drew no such distinction – that it was perfectly easy to see natural slaves with the legal status of freemen, and natural masters with the legal status of slaves.
“They were distinguished by their lack of the deliberative faculty — as opposed to, say, children, who only had an immature one.”
What did the Scholastics say about natural slaves?
If in antiquity, these people tended to be enslaved, and today they’re left to navigate the welfare state and private charity, what was going on in-between? Were they all farmers who could be directed by members of the community with full faculties?
It may have just been rationalization. On the other hand, it may have been the belief that slavery makes you slavish, which I find sort of credible.
I think Aristotle knew that he was rationalizing, since if I remember correctly he freed his slaves in his will.
@TheDividualist – “As far as I can tell it was not dramatically cowardly personal surrender, it was just getting knocked out by a glancing blow and waking up enslaved.”
I am… pretty highly confident that you are wrong about this.
First, I’m pretty sure that cowardly surrender was the source of the vast majority of prisoners, both on pitched battlefields and especially in besieged towns and cities. If I’m remembering correctly, it’s attested in sources as diverse as the Bible, Assyrian reliefs, the Anabasis and Roman histories. Thermopylae, like the Alamo and Camerone, is remembered because the sort of bravery displayed there is exceedingly rare. Usually when the tactical situation is hopeless, people surrender.
As for the “knocked on the head” idea, given that the typical weapons of the ancient world were the bow and the spear, it seems pretty unlikely to me that “knocked out by a blow to the head” was any large percentage of injuries. It also seems likely, given the dense-packed nature of the phalanx and other ancient fighting lines, that the normal outcome for someone knocked out was to be trampled to death in the scrum.
There’s also the fact that, unlike in the movies, in real life getting knocked unconscious by a blow to the head has a high risk of brain damage, and it also increases the risk of strokes and other nasty and potentially lethal things. So it probably wouldn’t be a reliable method of obtaining good slaves.
The Aztecs would capture enemies by knocking them unconscious, but they only needed the captives to survive long enough to get to a sacrificial altar back in Tenochtitlan, and they usually captured so many people that it didn’t matter if a few of them died from brain hemorrhages.
I don’t think the “negatively selected” argument makes a lot of sense on the initial import of slaves. The ones who made it here survived very difficult circumstances, which one would expect to select positively. I suppose it’s possible that smart slaves were seen as dangerous and eliminated, but my impression is that they were more likely to be seen as a valuable asset–the ones who ended up working as carpenters or, in one case, steamship captain, and paying a share of their income to their owners.
One could probably tell a story in which some characteristics, such as initiative, were both negative for slaves, hence got selected out, and positive for free descendants of slaves. That would get you back to Sowell’s argument about the difference between West Indian slavery and southern U.S. slavery.
Apparently I’m the only Westerner with any internalized racism here*, so I’ll venture my experience: my stereotype of an African is roughly “polite, humble, hard-working immigrant”. My stereotype of an African-American is very different.
*I can’t judge people too hard; I’d sure as hell never post this under my real name.
I lived in Harlem (African-American) and Flatbush (Black Caribbean) and preferred my neighbors in Harlem. YMMV.
Walter Williams made similar points concerning welfare.
I want to give my personal experience as a caution that it is a Hard Problem to determine whether someone is truly ‘better’ or ‘worse’ and how this may change over time in various environments. I went to a small rural school that was not set up in a way that would prepare me for an elite college. I ran out of things to do, left a year early, and went to a decent state school. Even so, I was not appropriately prepared for it. My first year was rough. It could have gone either way. I saw plenty of talented people nosedive while I clawed my way up. By the time my junior year rolled around, I was clearly at the top of my class.
After undergrad, I went to an elite school for my PhD. My honest assessment was that the undergraduate students had similar enough variance in quality compared to my undergraduate institution, but the professors were much worse at teaching (I don’t want to imply that this holds across a wide range of elite schools; I just want to point out some complicating factors). I have no idea how things would have gone if I had entered the elite school immediately after leaving high school.
I don’t mean this to be an argument for/against any political position. I just want to caution that it’s probably pretty difficult to make counterfactual statements like, “He would have done much better….”
His explanation was the difference between the culture that came out of West Indian slavery and the culture that came out of U.S. Southern slavery.
Insofar as West Indian slavery mostly involved expendable labor units and almost-literal mounds of skulls, it’s difficult to imagine any sort of culture at all coming out of that, much less one with better outcomes than the relatively benign US plantation slavery culture. Did Sowell elaborate on how this improved-outcome culture “came out of” West Indian Slavery?
One hypothesis would be that the few West Indian slaves who didn’t get thrown into the expendable-labor-unit meatgrinder were strongly selected for aptitude and getting-along-with-people-ness, and against pointless-violent-rebelliousness, and this small minority is the one that got to build the post-slavery culture. But that’s me handwaving and hypothesizing; Sowell hopefully studied this in some depth.
I’m going by memory of a book I read many years ago.
I think his claim was that West Indian slavery was more like serfdom. The slave owed a lot of labor to the owner, but otherwise ran his own life–grew crops to feed his family and such. Southern slavery was more big plantation, slave spends his time doing things owner orders him to do, owner provides food, housing, etc. He thought the former developed a more self-reliant sort of culture than the latter.
There were plenty of Slave Rebellions in the West Indies islands and these rebellions occurred periodically until emancipation was achieved in one fashion or another.
Just for the lolz, and combining two of the topics from above: here’s Thomas Sowell discussing this topic with a rather incredulous Joe Biden: https://youtu.be/pEOlK4y8AUo
Freakonomics ran a story on the flu vaccine, in which it was said that the polio virus (which type?) might have been eradicated by now were it not for the vaccination taboo in the Middle East.
Also, if you thought Vox’s deficit graph showed dishonesty (and it does, but not to a particularly shocking extent), look at what Kevin Drum did:
Now that’s shocking dishonesty.
Interestingly, Michigan is also the U.S. state with the fewest single-candidate elections for state legislatures. The state with the most is Georgia.
“Except Snyder wasn’t planning to enter the presidential rat race. Instead, he was attempting to mainstream Michigan’s form of austerity politics and its signature emergency management legislation, which stripped more than half of the state’s African-American residents of their local voting rights in 2013 and 2014.”
-I think that’s precisely what most proponents of “charter cities” have in mind. A more-or-less enlightened elite running cities whose residents keep smashing their public finances into the ground (but, then again, how did those residents get there? Something must have repelled the more responsible residents from those cities earlier.).
“Something must have repelled the more responsible residents from those cities earlier.”
Lakes, Freeways, and hilarious crime.
Gorgeous lakes about 30 miles north of town up in Oakland County providing cheap (until about 5-10 years ago. WTF, guys) lakefront property.
Freeways that let you live 30 miles north of town (and not just “next to the train station”, but anywhere across Oakland County) and commute to work downtown, which was sort of “new” to the 1950’s.
Hilarious crime that meant that you no longer wanted to live or work in the downtown 30 miles away. So the jobs followed.
At which point, faced with a rapidly crumbling tax base, and a continuous exodus of anyone who could scrape together $600/month for rent in the suburbs driving them into reduced density, the city turned to the state for help. Which was fair. They had problems, they had a lot of the cultural infrastructure that the suburbanites used but did not pay for directly…
At which point after 20 or 30 years of watching their tax dollars get poured into the hole that was Detroit, the people in the suburbs said that Something Must Be Done. (Oh and the entire concept of 13th checks just actively pissed people off). So city manager, charter city, billionaires who loved the city pouring money into revitalizing the Green Zone downtown, the revitalization of the Green Zone downtown…
Where it’ll end up? Who knows.
But it’s a gorgeous growing Metro Area surrounding a bombed out city. Like “I have been in a car that was literally on fire because we weren’t getting off in Downtown Detroit” bombed out.
Can I get a synopsis of why it is so shocking? The Fed’s graph isn’t playing nicely on my phone. There is obviously a history being omitted, but that is not shocking. AFAICT, he’s not showing the index itself, but the index under some operation to make it look bad. Without getting to a computer I don’t think I’ll see what he’s actually doing to it, though. And if it’s that shocking, I don’t know if I want to figure it out with something small, fragile, and expensive in my hand, anyway.
-Is super-duper misleading, but not dishonest.
-Is pretty clearly an outright (and probably deliberate) lie.
Kansas’s relative decline under Brownback can’t be attributed to Brownback because Kansas was relatively declining (in relation to the Coincident Indicators of both the U.S. and Nebraska) decades before Brownback! Omitting that broader context and attributing Kansas’s relative decline to Brownback makes Kevin Drum a liar.
Okay. I was looking more specifically at his graph.
Your link compares Kansas to Nebraska. It seems like that exaggerates Kansas’s decline relative to the U.S., because the low numbers in the 1987-2001 part of the graph are because Nebraska did well, not because Kansas did badly. Here’s the graph comparing Kansas to the U.S. as a whole. You can see a downward trend before Brownback came in, but you also see lots of apparent upward or downward trends changing direction and not a ton of reason to think “momentum” in this metric means a whole lot. He came in with Kansas’s indicator at 92% of the U.S.’s and now it’s 88%. They were declining before, but I think it’s fair to say that the results of the past few years have been disappointing for Kansans, even considering low expectations going in, no?
I think using Nebraska is more defensible than using the U.S. as a whole, as it’s a Great Plains state and thus more likely to be affected by the same economic forces that drive the Kansan economy. What does Kansas have in common with California?
Thanks for pointing out that the slide relative to the U.S. as a whole really started in the late 1990s, not in the early 1980s as happened with Kansas relative to Nebraska. There still doesn’t seem to be much of an acceleration of the slide under Brownback until around a year ago.
Yeah, both comparisons seem useful for slightly different things. Purpose of my graph was more as a complement than a replacement, since you cited both of them.
Why is Kansas declining in recent years, especially compared to a seemingly very similar state like Nebraska? Are there ongoing trends that would lead us to believe not just that Kansas’s performance would be lower, but that it would continue to decline as time went on? If this can all be explained by a collapse in the (Quickly googles…) aviation industry (apparently), and that collapse was driven by forces that we can expect to continue over time, that seems like really important context that counters Drum’s argument. If these numbers go up and down largely unpredictably, then it seems less important to point out the previous decline. How surprising would it have been if the trend had reversed and Kansas had started improving its metrics relative to other comparable states? I don’t know much about Kansas’s economy, but that’s what I’d want to see established before I called Drum dishonest here. I’m not convinced the second derivative is the right metric to look at here at all, so saying it appears to be only slightly negative doesn’t convince me that Brownback isn’t performing poorly.
I think you might be overstating Drum’s argument. He says:
“Brownback instituted an aggressive program of tax cuts and budget reductions, promising that this supply-side intervention would supercharge the state’s economy. [“The Kansas Experiment.”] But the reality has been rather different. Kansas has underperformed the US economy ever since Brownback was elected.”
I read the post as more a balloon-puncturing of Brownback’s promised supply-side miracle (this has been a hobbyhorse of Drum’s for a while) and less a blaming of Brownback for 100% of the state’s relative underperformance.
Obviously Drum, and the writer he quotes on “self-inflicted” damage, believe that austerity is generally bad for struggling economies, and many here will disagree with that. But I don’t think the assessment “much of the downturn especially post January 2013 is self-inflicted” is terribly crazy if you’re a Keynesian, and it shouldn’t be called a lie.
If the deficit expanded, it’s not austerity! Rather, it’s reducing the size of government (which, in Kansas, is on the larger side in relation to those of other states). If revenue is cut by a lot and spending is cut by less, is the net effect AD-expansionary*?
Sure, that limited argument refuting the claim the Brownback tax cuts would supercharge the economy works, and I have no problem with it. But ascribing Kansas’s present poor economic performance to tax cuts is just as bad as the prediction those tax cuts would strongly boost the Kansan economy.
*As the tax cuts and spending cuts are meant to be permanent, I presume no- there should be neither a contractionary nor expansionary effect. When such stuff is permanent, it’s the supply-side, not demand-side effects we should be talking about. But what if this was temporary? Would it be AD-expansionary then? Dunno.
And the graph totally doesn’t work for showing “much of the downturn especially post January 2013 is self-inflicted”. I do think it should be considered dishonest due to this.
“If the deficit expanded, it’s not austerity!”
That is a little like saying that if you did not hit them then you weren’t shooting at them. You are assuming the conclusion.
Austerity does generally mean attempting to reduce the deficit, by spending decreases and/or tax increases. So we can say it is not an austerity tax plan because it decreases taxes, regardless of which way the budget deficit goes. If spending is decreasing, then we can say it is an austerity budget.
“Rather, it’s reducing the size of government”.
The size of the government and the size of the deficit are not directly related. Indeed, the growth of the federal deficit post 2008 crash was not due to an attempt to decrease the size of government.
The New Keynesian model can actually imply anything.
“The size of the government and the size of the deficit are not directly related.”
-Yeah, I know.
Human brain project rant seems overstated. Many of the things we “don’t fucking know”, according to it, the project explicitly aims to work out. You can’t just not work on things because you don’t already know how to do them…
Plus, none of the problems seems particularly insurmountable. Sure there are many of them, but you’ve got to start somewhere.
Right, the problem with the Human Brain project is that it is, like its predecessor Blue Brain project, a fraud.
Naw, all science proposals are at least a little bit exaggerated. Unfortunately we’re in a situation where anybody who doesn’t overstate the potential outcomes of their research program just won’t get any funding.
I wish we could agree to a ceasefire and start representing expected scientific results accurately in our grant applications, but it’s a prisoner’s-dilemma type of coordination problem.
It’s a real problem though, in Australia the ARC has been complaining that the quantum computing people of the last decade “over-promised and under-delivered”, and they’re right. And it means less funding for the quantum computing people even though the program is more important than ever.
I don’t know if the human brain project was *particularly* bad in how much of a fraud it was, maybe it was really, really bad. But the rant didn’t convince me of it, it is consistent with business as usual as far as I can tell.
It’s not just a matter of “overstated claims.” If you have ambitions that you might not achieve, fine, that’s what ambition *means*. I’ve said “I’m gonna do [awesome thing]!” and then utterly failed to do awesome thing, and I think that’s not fraud. If you say you’re gonna do [awesome thing] and then have exactly no plan for doing it and are experienced enough that you could reasonably be expected to know that? Then it’s a little shady.
My research is in neuromorphic computing, specifically with applications to neurorobotics. My beef with HBP is that those particular areas of ignorance are precisely why their approach to neuromorphic computing, brain simulation, and neurorobotics will fail. They’re living in a world that I call ‘neuron soup’. We have no idea what the important features are in neuron soup. We don’t know what parameters do what, what circuits do what, what an output signal even is or how it encodes anything. To the extent that you can put structures on a chip that are vaguely, statistically ‘brain-like’, you can possibly get output that is also vaguely, statistically ‘brain-like’, but it’s not going to do anything interesting.
Here, the reference to C. elegans is important. We have much more information than vague, statistical ‘brain-like’ arrangements for the neurons of C. elegans. Nevertheless, when we simulate it, it doesn’t behave a bit like how the real creatures behave. If small-batch neuron soup is broken, does it make any sense to make industrial-scale neuron soup?
I think some of the thrusts of HBP are really good, and there will be solid science published. But right now is not the moment at which some of their more expansive goals can even be talked about coherently. We’re almost certainly throwing money away at bad approaches which try to ‘do something’. They will surely ‘do something’ (read: do whatever can be done because it can be done, and then try to find a way to sell it as being good), and we might even get a surprising/interesting result or two… but I really can’t see it being an optimal use of funds at this moment in time.
Do you have someone in mind who tried to simulate C. elegans and got bad results? My understanding is that we don’t even have enough data to even try. I think that there are a lot of people who said “I’m going to simulate the connectome,” but then realized that’s nonsense and gave up.
“Speaking of filter bubbles, if you’re surrounded by terrible people, consider that you might inadvertently be selecting for them.” I thought this was the wisdom from the streets since, well, forever.
Not quite. If you follow the link, a novel selection mechanism is proposed.
I was quite amused to find that I only had two friends who had “liked” any of them: one for Biden and one for Paul.
Apparently I’m selecting for the apolitical. I consider that an inadvertent victory.
I’ve tended to think another mechanism for this is as a side effect of seeking out people who are contrarians, or take unreasonably strongly held interesting positions, or spend a substantial amount of time theorising about the world, or spend a lot of time thinking about politics.
I’m fairly sure doing that will select for people who are atypically confrontational on the personal level and transgressive against conversational norms, either because they are transgressive-by-default for coolness sake and have not been sufficiently convinced that civility is a place they should make an exception, or because they’re freaked out by something (thus all their focus on an area they can barely influence) and think that fighting is more important than being kind.
I thought the article made some good points about taking selective pressure for assholes into account as a cost of a strategy, without essentially ruling out that strategy, and about deliberately filtering to mitigate the effects. I think it’s generally worth making a deliberate effort to ignore the unkind and only interact with the kind for that reason.
I’m pretty happy with my social circle, I certainly don’t think they rank lower on agreeableness than most and probably rank higher, and they’re mostly culled from rationalist online people, libertarian activists, and people from a small liberal arts college that emphasized discussion-based education (read: arguing). I think if you signal “in-group member that likes to discuss ideas peacefully” this group is no more disagreeable than any other.
> Go to his list of candidates’ Facebook pages and see how many of your friends support each.
Zero for all of the above, except 14 for Obama? My friends aren’t advocating for political candidates a year from the election, I guess.
I thought I was doing good for apolitical Facebook friends, only 6% liked a candidate’s page. But your 0% trumps it.
Going from 2% to 17% adding Obama and Romney to the list.
Amusing looking at the types who are on the list. For Obama: PhD, teacher, hippy-type pothead, PhD, teacher, single mother on welfare, college student, Republican veteran private sector BA education level straight male(!?), and two straight white male businessmen.
For Romney: 4 straight white male and 1 straight male Indian. 2 business owners and 3 private sector employees. No drug users AFAIK
“Adopting good curricula – not even inventing it, just using the ones that are already out there and well-tested – is probably the cheapest way to make education more “effective”. Why are so few people doing it?”
Because “saving money” is not in any academic’s job description.
Moreover, there are a ton of people out there who stand to make money off expensive solutions, and so argue loudly for them, while no one stands to make money off not spending money, so these nobodies tend to be much quieter.
Re: selecting good curricula, and evidence-based literary instruction, I have no idea how you’d do that. I have no idea what a good way to teach children to read is, because I literally don’t remember learning to read and I could read before I went to school (no kindergarten etc. in my day, you started at age four, I didn’t start until I was four and a half because the nearest primary school to us was a Pit of Horrors in my parents’ day and they were adamant no child of theirs was going there).
I like to joke I learned to read on a mixture of “The Cat in the Hat” and the Epistles of St Paul, but I have no idea what or how I learned and if phonics or whole word recognition or whatever the most recent trend is would be the best way.
Also, governments love messing about with curricula because it looks like They Are Doing Something about declining educational standards and how will we make sure we have an educated workforce to compete with the Chinese in twenty years’ time etc. That means that programmes get implemented, switched around, yanked out after a year, let run along but underfunded, etc. depending on who is in power, what the current flavour of the month is in educational trends, are businesses demanding all children learn German/Chinese/Spanish/Martian so they’ll have workers for their call centres and we should drop useless subjects like art and music (and history, and…) so we can concentrate on Readin’, Ritin’ and ‘Rithmetic.
That does not make for a consistent and successful approach.
Here in the U.S., it isn’t government that’s the problem, it’s the faddish ideas spawned in schools of education. The research is pretty clear about how to best teach reading. However, this is ignored in favor of trendier methods.
(If you find this at all interesting, you might want to read Greg Ashman’s blog.)
That is terrible and doesn’t even address phonics vs whole word
> so we can concentrate on Readin’, Ritin’, and Ritalin.
Concentrate on ritalin? sounds like you’re headed for a feedback loop
I think it’s possible in principle to select good curricula, but I was a bit disappointed that the article didn’t go into more detail about how, because I also think it’s difficult.
I’ve picked up some good teaching tricks by copying other people and I think that there are probably many teachers who’d be well served by doing the same. As it happens, this belief appears to be almost universal in education and you really are never short of people telling you the right way to do things.
There’s a disconnect between academia and practice. Academia wants to set up their pet curricular intervention with perfect implementation in a small sample, score a big effect size, and sit back while the world beats a path to their door. Practice wants to do what they know how to do, not be told what works by people with no experience in their context, and not be starting over from scratch (with associated overhead) at every administrative whim.
Result: they stay in their bubbles, academia gets to do their hothouse interventions to their hearts’ content, no paths are beaten to their door.
One promising idea to bridge the disconnect is to develop evidence-based rubrics to score what practitioners are already doing based on how well it fits with known effective practice. You can then tell them to do more of thing X they’re already doing, and less of thing Y. Or find a teacher in the same context who scores well and use them as an example of good practice.
No. The game is rigged in that what is considered “effective practice” reflects the zeitgeist rather than reality.
What exactly are we supposed to learn from comparing the graph in the Vox article with another graph? It seems like the not-very-sub-text is something like “ha ha, look at those dishonest Blue Tribe guys” but I’m not seeing either what’s so bad about the graph or why it tells us anything about Vox (or whatever political group Vox may be representative of).
The graph in the Vox article isn’t Vox’s, it’s from the White House, and the Vox article is disagreeing with the tweet that contains it. So that’s not what you might call a ringing endorsement of the graph.
The graph in the Vox article starts at the start of Obama’s presidency (also: just after the big financial crash of 2008) and therefore doesn’t let you compare the deficits with ones before then. That doesn’t seem terribly dishonest (of the White House or of Vox); the message it’s trying to convey is “Obama has reduced the deficit a lot”, and so in fact he has.
The graph in the Vox article might give you the impression that the deficit must be low now since it’s reduced a lot, whereas looking further back in history shows that deficits have very commonly been lower (or even negative). OK, but while the article is arguing that the deficit could stand to be higher it isn’t basing that assertion on a comparison with historical deficits; its claim (on which I offer no judgement) is that because interest rates right now are exceptionally low it makes sense for governments to borrow a lot. Whatever may be said for or against that, the fact that the article contains a graph that doesn’t go far back in history seems entirely irrelevant.
I’m probably being dim. What am I missing? What does the presence of that graph in the Vox article tell us that’s so enlightening?
That graph is dishonest because
1. It does not look at a broader timeframe (see also a much more egregious example of this from a different context linked to by me in the comments above).
2. It does not provide context.
This statement by Matt Yglesias: “The sad truth is that the deficit has fallen so rapidly not because of any brilliant scheme on the part of the White House, but because the political system is operating in a dysfunctional way.”
is two solid years out of date, and is today totally, utterly, 110% untrue. The truth is that the deficit hasn’t fallen all that rapidly and its fall wasn’t due to any fiscal austerity by Congress (except for revenue hikes in 2013, which ceased to exist by mid-2014), but simply due to an improving economy. See my links in an above comment.
“the message it’s trying to convey is “Obama has reduced the deficit a lot”, and so in fact he has.”
-No, he hasn’t. Obama did nothing to reduce the deficit (indeed, the only substantial reductions of the deficit that happened under Obama that could be attributed to spending cuts or tax increases occurred as a result of the 2011 Budget Control Act, which was caused by House Republicans forcing a showdown over the debt ceiling- Obama couldn’t have loosened fiscal policy any more even if he tried his hardest). The deficit merely fell under him because of an improving economy and the 2013 sequester (which was quite temporary).
I already acknowledged that and explained why I don’t think that makes the graph dishonest, the article dishonest, or the article indicative of something wrong with Vox or the Blue Tribe or whoever exactly Scott was nudge-nudge-wink-wink-ing about.
It looks to me like it has in fact fallen very rapidly; it’s a small fraction of what it was when Obama became president. (I would find it easier to believe “hasn’t fallen surprisingly rapidly”; e.g., maybe this is just what inevitably happens when there’s a big financial crash, or something.)
I don’t quite see how you can simultaneously claim in this paragraph that the deficit reduction had nothing to do with “any fiscal austerity by Congress”, and then say in the next that there were substantial reductions in the deficit because of the Budget Control Act which was the result of action by … Republicans in Congress.
I’m sorry, I should have said the message is “The deficit has fallen a lot under Obama”, which is true. Of course the graph can’t say anything about what caused its decrease. (Whether it’s true that “Obama did nothing to reduce the deficit”, I’m not so sure. E.g., if it’s because of an improving economy, and if the sitting president has much influence on the performance of the economy, then he probably gets some credit for that improvement and therefore for some of the deficit reduction, even if he did nothing else.)
It sounds as if you’re agreeing with Matt Yglesias’s article: the deficit has fallen by nearly a factor of 4, but that isn’t because Obama did anything to fix it but because Congress is unable to agree either to spend more (which the Democrats might like) or to cut taxes (which the Republicans might like).
The Congressional fiscal austerity actions are over and have been for over a year. Taxes have been cut. This is not 2013. This is why I said the deficit’s fall was not due to any action by Congress: because it’s the current year, not two years ago. Look at
Yeah, I should have said “fallen surprisingly rapidly”; you’re right. I make mistakes.
I’ve seen no evidence Presidents have much of an impact on economic performance except in a few exceptional cases (e.g., FDR).
It looks to me like it has in fact fallen very rapidly; it’s a small fraction of what it was when Obama became president
You are wrong and look silly.
The sequester produced a temporary hiatus in spending. Together with the tax increases, fiscal consolidation definitely moved the needle in 2013; deficit reduction wasn’t just economic growth. Spending as a % of GDP has increased since 2013, well into a recovery. Hmmm.
What both of these charts miss, however, is the demographic supercycle. The fat “baby boomers in peak earnings” years are now largely behind us. In order to understand our position, you need to look ahead to the explosion in entitlement costs around the next corner.
The Pharaoh has no Joseph.
It claims the deficit is really low by showing that the deficit is much lower than it has been the past four years. But in the broader context, all four of the past four years have been remarkably high, and this is still very high by historical standards.
Compare to people who “prove” global warming doesn’t exist by starting their graph at 1998 (a very very hot El Nino year) and showing that the next few years were cooler than that, without including how everything on the graph is way hotter than anything from the 70s and 80s.
Well, it’s moderate by historical standards (as a percentage of GDP, as using absolute dollar amounts would be a classic case of “the money illusion”), certainly not “very high”, but that’s only because the economy has improved to such a degree that unemployment is a bit below the historical average. Vox says the deficit as a percentage of GDP is below the historical average, and it may be, but it’s not far from it. See the relevant graphs I’ve linked to above (the revenue-expenses ratio is close enough to the deficit as a percentage of GDP for present purposes, and is in *some* ways superior to it).
But, yeah, it would have been best to look at a longer timeframe.
Here’s a graph of the deficit as a percent of GDP over a longer timeframe. The money illusion is a big deal here, and I’d say that makes using the second graph Scott linked even more misleading than the first one if you’re trying to represent the effective size of the deficit over time. The graph the White House quoted is of deficit as a percent of GDP, and I don’t know why you’d instead cite raw numbers if all you’re doing is trying to find a fair comparison with further endpoints. GDP has more than sextupled since 1980, and if you don’t account for that, your graph is going to imply some incorrect things.
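The scale effect here is just arithmetic. A minimal sketch, using made-up round numbers (roughly 1980-scale and 2015-scale economies, not official budget figures):

```python
def deficit_pct_gdp(deficit, gdp):
    """Express a nominal deficit as a percentage of GDP."""
    return 100.0 * deficit / gdp

# Made-up round numbers in billions of dollars, for illustration only.
early = deficit_pct_gdp(deficit=74, gdp=2857)     # a 1980-scale economy
late = deficit_pct_gdp(deficit=439, gdp=17500)    # a 2015-scale economy

# A nominal deficit nearly six times larger is about the same
# share of an economy that has grown sixfold.
print(round(early, 2), round(late, 2))  # both land near 2.5-2.6%
```

A raw-dollar graph would show the second deficit towering over the first, even though they burden their economies about equally.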
As Yglesias mentions, 2.5% is below the historical average. It’s lower than the deficit was throughout the Reagan and Bush 1 years, in particular (but higher than it was under Clinton or before the mid-70s). It’s roughly middle-of-the-road for the size of the deficit across time, and for someone like Matt Yglesias who believes in deficit spending (at least in times like today when he’d say the conditions are favorable for it), I don’t think there’s anything objectionable about him saying it’s too low when it’s below the historical average.
I also think everything g says holds up: Vox makes this case without referring to past deficit rates at all, except by mentioning that “The White House is crowing about it”. I don’t think you can fault them too much for quoting (disapprovingly) this graph when it’s not essential to their point at all; they’re arguing that the deficit should be higher because interest rates are low and this is a good time for deficit spending, not just because it used to be higher. The White House is being a little misleading by cutting the tweet off at ’09, but no worse than when it looks at how unemployment has gone down under Obama’s presidency, which seems like a pretty fair point to me. Yeah, he’d get some of both of these gains from regression to the mean anyway, but he took office when the economy was at aberrant levels and it is now at fairly standard levels, and that holds up even if you zoom out and expand your endpoints. People can argue whether Congress or the presidency had more to do with that, but “Under @POTUS, we’ve seen the fastest deficit decline over a sustained period since WWII.” is a clearly correct statement, even if he was given more deficit to reduce in the first place than anyone else.
“I don’t think you can fault them too much for quoting (disapprovingly) this graph when it’s not essential to their point at all; they’re arguing that the deficit should be higher because interest rates are low and this is a good time for deficit spending, not just because it used to be higher.”
-That argument was made by many a government that went into default.
Yeah, it’s a batshit insane argument without some serious crisis that needs tons of spending.
The people I have seen doing what Scott describes were not proving that global warming did not exist, they were claiming to prove that it had stopped about 1998, with temperatures constant since then.
Eyeballing the temperature graph, the claim is defensible if you start at 2002. I did a straight line fit to the NASA data from 2002 to 2013 (the 2014 data were not out at the time), and the slope was just about zero. My guess is that including 2014 would make the slope positive, but with zero still within the error range.
I wouldn’t be surprised if it is statistically defensible starting at 1998—if a straight line fit from then to now has a slope consistent with zero. But it’s pretty clear looking at the graph that that’s cherry picking the starting point. Also, of course, “consistent with zero” doesn’t mean the slope is zero, only that it might be.
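For anyone who wants to replicate the “slope consistent with zero” check, it’s a standard least-squares test: fit a line and ask whether the slope is within about two standard errors of zero. A sketch with an invented anomaly series (deliberately not the real NASA numbers):

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope and its standard error."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(resid / (n - 2) / sxx)
    return slope, se

# Hypothetical anomaly series in deg C, 2002-2013 (illustrative only):
years = list(range(2002, 2014))
anoms = [0.54, 0.61, 0.53, 0.62, 0.59, 0.62,
         0.51, 0.63, 0.66, 0.55, 0.58, 0.61]
slope, se = ols_slope(years, anoms)

# "Consistent with zero" here means |slope| < 2 standard errors.
print(abs(slope) < 2 * se)  # True for this series
```

As the comment says, “consistent with zero” only means the data can’t rule zero out; the fitted slope itself can still be positive.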
What is the justification for claiming it stopped in 2002 (or 1998, or 2003, or whatever)? If there’s been a clearly established trend that nobody disputes, you can’t just say “well, if we look at an arbitrary segment of the line, zero slope is within the error range”. At the very least, you’d need an explanation for what changed in 2002 to stop this trend. Ideally, you’d have a prediction from beforehand rather than an after-the-fact hypothesis. But without that, 2002 is cherry-picking the starting point just as badly as 1998 would be.
If we were discussing a deeply held red-tribe belief (say, that tax cuts spur economic growth), and the correlation behind that belief was indisputable over 100 years of data, I don’t think the people who are now making this argument would be pleased if blue-tribers started going “well, if we look at just the past 13 years of data, this correlation looks pretty much like a straight line. I’m going to assume the trend’s stopped working now”, especially if they didn’t even offer an explanation as to why it would have stopped.
Solar activity is a common justification, but I don’t know if it is a good one.
I got 2002 by eyeballing the graph. The line appeared to be rising up to that point, with considerable random variation, and flat thereafter.
So far as an explanation, my own guess is that there is some cyclic process, possibly involving heat moving between ocean and atmosphere, with a period of about sixty years, superimposed on the rising trend from AGW. That fits the 20th century pattern, including the long mid-century pause. The IPCC models explained that as due to aerosols, an explanation I have been suspicious of for some time.
There is serious scientific work, not by me, supporting the idea of such a cyclic effect and offering a mechanism and evidence from a very long local temperature series. I discuss it, and link to it, at:
The justification for saying the upward warming trend looks to have paused was simply that the trend did, in fact, look to have paused.
It might be hard to remember now, but warmists had previously been claiming that warming was on an accelerating trend. The short-term prediction in the early 2000s was that world temperatures would increase by about .3C (give or take .1C) over the next 20 years. Given that premise, a trend that persists in remaining merely flat (combined with increases in CO2 production consistent with their “business as usual” scenario) should be surprising and interesting, more so the longer it persists. No?
Some possible explanations include:
– “natural” variability factors had been underestimated
– “sensitivity” had been overestimated
– feedback factors have changed
Or maybe we’re just completely confused and don’t know what the heck is going on.
Trends that “can’t go on forever” eventually stop. We’re in the middle of an interglacial; eventually SOMETHING will cause it to get cold again. There must eventually come a day when the warming trend slows down, stops, even reverses. Should we not be on the lookout for that possibility? How many decades of flat-to-cooling do we need to see before it becomes okay to notice that yeah, the most recent trend seems to have become flat-to-cooling?
As with e.g. continental drift, you don’t need a mechanism to note that a trend or correlation is statistically significant and it’s time to start looking at why it might be so rather than insisting you can’t think of a reason so it must be wrong.
If I plot satellite data for lower troposphere temperature from 1979-2003, throwing out 1998 as an outlier, and apply Statistics 101, there’s a warming trend of 1.48±0.34 deg/century. For 2004-2014, the warming trend is 0.10±1.07 deg/century. Fewer data points means bigger error bars, but even with the huge error bar for the hiatus, this is statistically significant at about the one-sigma level.
Not conclusive proof, but a very strong indication that we should be looking for why that is so rather than demanding that it can’t be so.
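The “fewer data points means bigger error bars” point is mechanical: for n equally spaced annual points, the sum of squared x-deviations is n(n²−1)/12, and the slope’s standard error is the residual noise divided by its square root. A sketch with an assumed residual noise of 0.1 deg C (the sigma is my assumption, not taken from the satellite data):

```python
import math

def slope_se(sigma, n):
    """Standard error of an OLS trend through n equally spaced annual
    points with residual noise sigma. sxx for years 1..n is n(n^2-1)/12."""
    sxx = n * (n * n - 1) / 12
    return sigma / math.sqrt(sxx)

# Same year-to-year noise, different window lengths.
# Multiply by 100 to read deg/year as deg/century.
long_run = 100 * slope_se(sigma=0.1, n=25)   # 1979-2003: 25 points
short_run = 100 * slope_se(sigma=0.1, n=11)  # 2004-2014: 11 points
print(round(long_run, 2), round(short_run, 2))  # 0.28 0.95
```

Halving the window roughly triples the error bar, which is why a short “hiatus” window can be statistically compatible with almost any trend.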
@David: A short segment of the line being flat does not in any way argue against the general trend of a line “rising up to that point, with considerable random variation”. There have been lots of flat segments in the graph, because that’s what random variation does to a graph with a consistent upward trend; it makes some bits of it look flat and some bits look really steep. Someone writing in 1976 would say temperature had been mostly flat since 1966. Someone writing in 1986 would say temperature had been mostly flat since 1977. Someone writing in 1996 would say temperatures had been mostly flat since 1987. They’d all be wrong, because they’re only looking at a 10-year sample size and ignoring the greater century-long trend. (And of course, they never seem to do that for the majority of 10-year segments, the ones that have the same or greater slope as the rest of the data.)
“If I cut off the huge increases before and after this segment starts, it looks largely flat within this segment” is a really weak statistical trick that shouldn’t fool anyone. If the true underlying expectation without the variance truly is flat, then those arguing against global warming’s (continued) existence should be able to find not just endpoints to select where the graph appears flat, but they should be able to find endpoints from which the graph appears to be declining, and those segments should be as common and as steep as the ones in the other direction. If the best they can do is arbitrarily choose 2002 specifically because it is the endpoint that best supports their conclusion, and even then, with cherry-picked endpoints, the slope is positive, that is actually pretty strong evidence that the underlying trend is still increasing, not that it’s flattened out.
This illustration is definitely partisan, but it shows pretty clearly how there have been lots of segments in the past that looked flat, how the model expects that to happen since the year-to-year variability drowns out the underlying trend in small samples, and how the appearance of another short segment where the graph appears flat should not in any way argue against the prevailing trend.
That’s why this argument should never have been believable, but there’s an even better reason to dismiss that claim now, and it’s that more data has come in, and it strongly argues against the idea that warming has stopped. 2014 was the warmest year on record, and through 9 months, 2015 is coming in well above even that. I plotted a graph of the average temperature from 2002 to 2015 here; the best-fit line has slope .98 (in hundredths of degree Celsius increase per year) and r^2 of .33. That slope almost exactly matches the overall trend that’s brought us up 1 degree Celsius in the past century, and it’s from an endpoint cherry-picked to make the graph look flat. If we use a different endpoint, like 2001 or 2003, or if we randomly choose a year in the rough vicinity of 2002 to start looking, the line is even steeper than that, and in many cases it shows an increase faster than the overall trend.
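The claim that flat-looking decades are expected inside a noisy upward trend is easy to check by simulation. The trend and noise levels below are assumptions picked to be loosely climate-like, not fitted to any real series:

```python
import random

random.seed(0)
TREND = 0.01   # assumed underlying warming, deg C per year
NOISE = 0.1    # assumed year-to-year variability, deg C

# A century of synthetic anomalies: steady trend plus noise.
temps = [TREND * yr + random.gauss(0, NOISE) for yr in range(100)]

def window_slope(series, start, length=10):
    """Least-squares slope of a short window of the series."""
    ys = series[start:start + length]
    xs = range(length)
    mx = sum(xs) / length
    my = sum(ys) / length
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Count 10-year windows whose fitted slope is flat or negative,
# even though the true trend is positive everywhere.
flat = sum(1 for s in range(91) if window_slope(temps, s) <= 0)
print(flat)  # typically a decent fraction of the 91 windows
```

With these settings the decade-scale noise in the slope is about the same size as the true trend, so “pauses” appear regularly without the underlying trend ever changing.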
@Glen: I’m not sure why you’re bringing that prediction up as a point against the “warmists”, because it looks like it’s on track to be true. 2005 was an abnormally hot year, measured at .69 degrees C over the 1951-80 global mean. (The three years before it averaged .60 and the three after averaged .61, so it’s not exactly an ideal baseline for most things, but I’ll run with it for now.) 2015, another abnormally hot year, has averaged .81 degrees C through nine months. An increase of .12 over 10 years puts us on pace for .24 over 20 years, and fits squarely within the .3 +/- .1 prediction.
Of course, comparing just one year to another isn’t ideal, since each year has so much year-to-year variation. The three-year average ending in 2005 is 0.62, compared to 0.74 for the three-year average ending in 2015. The increase in the five-year rolling average across a 10-year gap is .08. However you parse the data, with pretty much any number of years in the rolling average, and pretty much any endpoint, we’re on pace for that prediction to be right, and for the increase over 20 years to be greater than 0.15 degrees C.
That illustration also shows pretty clearly that the segment people actually CALLED a pause was longer than all the earlier candidates. One might even go so far as to call it unprecedented. 🙂
Sorry, it’s not kosher to guess that the current year is going to make your theory look good; you have to actually wait for the results to be in. And as James Picone mentions elsewhere: if a single data point changes your conclusion, it’s probably not very robust. Patience! (Feel free to gloat after the data is in if it still helps your case.)
(also, it’d be nice if we could stop CHANGING THE METHODS by which temperature is computed. If you guys get to do stuff like switch from HadCRUT3 to HadCRUT4 and call the new, higher value the anomaly you’re pretty much guaranteed to be able to find new record years whether they happen or not.)
In my post above, I mentioned that 66-76, 77-86, 87-96 all appeared pretty flat, and may have been declining. Those stretches are 11, 10, and 10 years long, respectively, and look like some of them might even have a slight negative slope (eyeballing it right now, sorry). Are you saying those are normal and to be expected from random variation, but a 12-year span which *still* has a clear upward trend is clearly not random variation?
Agreed. I was responding to you gloating 10 years into a 20-year prediction, a prediction which the warmists are on pace to be right about. And “if a single data point changes your conclusion, it’s probably not very robust” is the exact reason the “pause” in 2002 is not robust at all; the line is still trending upward once you include 2014, even more so once you include 2015, and the upward trend is even stronger if you start counting at 2001 or 2003 or 2000 or 2004.
That’s fair, and that’s the biggest issue I have with the long-bets link. 2025 is going to roll around, and both sides are going to point to different measurements and claim victory. It would be useful to establish a fair standard for measurement now for these predictions, and it would be more useful to have done it 10 years ago. David specifically cited the NASA data and claimed that it supported the idea of a pause in 2002, so that’s why I’m referring to the NASA dataset throughout.
What is the justification for claiming it stopped in 2002?
Assuming that’s what the data shows, the fact that it stopped in 2002 would presumably be very good justification for claiming it stopped in 2002
edit: and you appear to be arguing that theory trumps facts.
Someone writing in 1976 would say temperature had been mostly flat since 1966. Someone writing in 1986 would say temperature had been mostly flat since 1977. Someone writing in 1996 would say temperatures had been mostly flat since 1987. They’d all be wrong, because they’re only looking at a 10-year sample size and ignoring the greater century-long trend.
Oh my god. /facepalm
Ah, I see the problem. You’ve been looking specifically at NOAA, which shows by far the steepest trend in the relevant period, whereas I had been focused on a few others (mainly HadCRUT). When playing around with the skepticalscience trend tool you should be sure to look at more than one series. Or use something like Woodfortrees that averages a few of them. In particular, try comparing with HadCRUT4, UAH, and RSS.
Whether Brian is on track to win his proposed longbet strongly depends on which series we look at; he’d lose big on several of them. Whether “the pause” “has a clear upward trend” also strongly depends on which series you look at, which is kind of the point – it SHOULDN’T. Using the SS tool:
With Land/Ocean:HadCRUT4 the central trend from 2002-2014 is -0.018/decade (negative)
(and would have been MORE negative had we stuck with HadCRUT3; that switch was made specifically to pull in more influence from the part of the globe which we now know warmed the most)
Using Satellite:RSS, the trend from 2002-2014 is -0.082/decade (so, STILL showing negative, as of the last complete year on record)
Regarding that longbet, using Land:Berkeley, even the trend from 2005-2015 is -0.073/decade (negative)
The Land:NOAA trend for 2005-2015 is also still negative (albeit very very slightly so).
So when I look at HadCRUT3 or HadCRUT4, I see the pause as an actually flat or declining trend that (a) has been so for a bit longer than the other ones you mention
(b) might not even have ended yet – it’s still timely!
(c) comes after the period in which people started predicting (and caring about!) lots of warming
(d) comes in a period for which some predictions had been that warming would accelerate. If warming is accelerating, then a long flat trend should be less likely than it was in the past.
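For what it’s worth, figures like “-0.018/decade” come out of an ordinary-least-squares fit over the chosen window, scaled up to a per-decade rate. A minimal sketch of the mechanics (the anomaly values below are invented flat-ish noise, purely to show the calculation, not any real series):

```python
def ols_slope(xs, ys):
    """Ordinary-least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(2002, 2015))   # 2002-2014 inclusive, 13 annual points
anoms = [0.45, 0.46, 0.44, 0.47, 0.43, 0.40, 0.39,
         0.44, 0.47, 0.41, 0.45, 0.48, 0.56]   # invented values

trend_per_decade = ols_slope(years, anoms) * 10
```

Switching datasets just means swapping in a different `anoms` list, which is exactly why the sign of a short trend can flip between HadCRUT4, RSS, and NOAA.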
At a tangent to most of this, I note that there was a pause from about 1940 to 1975, during which temperatures if anything trended a little down. That’s eyeballing the Global Land-Ocean Temperature Index. The 5 year running mean is about .1° C lower at the end of that period than at the beginning. Looking at the graph, it looks as though the upward trend from AGW is superimposed on a cyclic term with a period of about sixty years. For more on that, and a link to a published article that offers evidence of such a pattern, see:
But the claim in the article isn’t that “the deficit is really low”, and Yglesias nowhere relies on the fact that “the deficit is much lower than it has been the past four years”. That historical comparison simply has nothing to do with the point he’s making; it’s just there to provide context. (The White House is crowing about the low deficit, as per this tweet; well, I say the deficit should be higher and here’s why. — And the “why” has nothing to do with the deficit’s having been higher in the past few years.)
> Compare to people who “prove” global warming doesn’t exist by starting their graph at 1998
That might be a reasonable analogy for what the White House was doing with their tweet containing the graph but it has nothing to do with what Vox was doing in this article.
(I’m not quite sure how reasonable an analogy it actually is, because I’m not sure how the inter-year dynamics compare in the two cases. The White House’s graph shows a steadily reducing deficit rather than a single monster year and then back to business as usual; it’s hard to tell, because the temperature graph is very noisy, whether that shows an abnormally hot year in 1998 and then business as usual (which would be relevantly unlike the White House deficit graph) or an abnormally hot year in 1998 and then a slower-than-historical increase for several years (which would be more like the White House deficit graph). And I don’t know (this is my ignorance rather than noisiness in any graph) exactly what we should expect each graph to look like if there’s a single aberrant year and then no temperature-increase forcing / no measures to get the deficit back under control. The point of all this is just to explain why I said “might be a reasonable analogy” rather than “would be a reasonable analogy”.)
I feel like the real lesson here is that one can’t expect to post a couple graphs and think that the conclusions that should be drawn are self-evident.
OK, but while the article is arguing that the deficit could stand to be higher it isn’t basing that assertion on a comparison with historical deficits; its claim (on which I offer no judgement) is that because interest rates right now are exceptionally low it makes sense for governments to borrow a lot.
I’ve seen Krugman make that argument, but even though Krugman is smarter than Yglesias, it’s still a stupid argument. Government debt has a wide range of maturities, and some significant part of it rolls over every year. Increasing the deficit, and thus more rapidly increasing the debt, when some part of it rolls over every year, is pretty irresponsible unless you’re certain the markets will continue to buy government debt at ridiculously low interest rates.
I’ve read, but don’t have a reference, that over the past decade, the average maturity of government debt has gotten significantly shorter – there are a lot fewer 30-year bonds and a lot more 1-year bills being sold than in the past. If that’s true, then following Krugman’s advice is setting up a potential fiscal disaster. Also, if it’s true, the governors of the Federal Reserve know it, and that may explain their reluctance to raise interest rates despite the improving economy, to avoid triggering that fiscal disaster.
I think Krugman is pretty confident that the US will not face high interest rates in the near future; why would it? (Higher than now – quite possible, but dangerously high is quite unlikely – keep in mind that higher rates would come with a hotter economy, which generally goes with higher tax receipts and lower expenditures.)
And that implies that if you consider a government action that has a short-term cost and long-term payoff, this is a good time to do it. And that covers a lot of things the government does – certainly the classic stimulus construction projects are covered, or education funding or job training programs or admitting new immigrants and the like. Even tax cuts have that property – short term hit to government finances partially made up for by higher future economic growth.
keep in mind that higher rates would come with a hotter economy
I mean, I don’t remember the 70s, but I am aware that they happened
It seems that the argument from the Heritage Foundation can be generalized: people in general, not just minorities, will benefit more from being at the 50th percentile in competency in an institution where they’re approximately average than from being at the Xth percentile (X<50) in a somewhat better institution. Is this borne out by the evidence?
Anecdotally I always felt I did well in school because I was surrounded by a strong peer group that meant the class was fast-paced and even someone doing the bare minimum left with high learning outcomes, relative to other schools. I have a friend who swears the opposite, that the continued successes of outperforming others encouraged him and validated ambition and effort.
I’d love to see case study research in a single discipline to work out exactly what’s going on.
Ah. Now I remember how angry I was that I could have had much better grades if I hadn’t been put into a class with 6-8 ultra high achievers by pure statistical chance. As most of our testing was verbal, this influenced grading: it is human nature that teachers compared everybody to the best students, so if you were not one of the 6-8 superboys/girls you could not get the equivalent of an A on a verbal test. In teachers’ minds there should be a normal distribution of grades in a class, so you get a C for being average in your class, a B for being slightly better, and so on. So if your class is good, your grades may not be very good. But this is only valid for these backwards verbal-testing schools, not the US-style standardized written tests.
However, our actual performance, if not our grades, was genuinely good. Doing well at a city-level Latin competition wasn’t even really news.
I don’t know. Certainly for the first seven years of primary education, I was reading at a much higher level than everyone else in class, so the teacher let me get my choice of a book from the bookshelves/classroom library and sit quietly at my desk reading while the rest of the class went through the textbook (anyone remember the “Dick and Jane” books?)
I can’t say this translated out into better test results or accelerated learning or anything like that 🙂
The first couple of years I adamantly refused to read anything I didn’t want to, which was almost exclusively behind grade level. I really liked my favorite baby books, damn it. This of course worried my teachers, but I always banged out 99th percentile in everything on the annual aptitude tests. They didn’t know what to do with me.
At some point I skipped over the whole youth/young adult genre and went straight into serious literature. I have a distinct memory of a teacher flipping out because I was sitting quietly at my desk reading instead of paying attention to the lesson. I’m pretty sure “Plato” was not the expected answer to her petulant question. They still had no idea what to do with me after that, but at least the teachers left me alone.
Judging by your comments, though, you’ve done a much better job of picking a wide selection of redeeming literature.
I have a distinct memory of a teacher flipping out because I was sitting quietly at my desk reading instead of paying attention to the lesson.
ha, I spent all of junior high reading (including walking between classes and at gym). I probably got that 6 times
Good question! The best evidence I know on this is the stuff showing that people who are born after the school year cutoff date do better in sports because they’re the oldest/biggest kid in their class, that gives them confidence, and they keep that confidence even when they grow up.
But if you ask me “Would I recommend my child not go to their ‘reach’ school, even if that school accepts them?”, well, I’d have a hard time making that recommendation, which I guess means I don’t really believe this argument?
What if it’s a choice between “Reach school with over $100K in debt” (My mental model of student loans makes that $1100/month for 10 years) or “State school 30 minutes down the road with a nearly-free ride between financial aid and parents and actual free time because you’re not struggling to keep up”?
Keep in mind that me and my sister and my cousin and about um… 6 of my friends all ended up doing the state school because that was the exact choice we had to make.
Or in other terms “The Bay Area is cool, but if you’re making $60K/year, have you considered Cleveland? Because Cleveland with discretionary income is way cooler than the Bay Area without.”
There’s probably limits in that being a whale in a fish tank is constraining, but as long as you’re in the same general tier, unless there’s very specific reasons you want to be at THAT SCHOOL…
I made a similar decision. I was accepted to an Ivy, but decided to go to a state school (albeit not in my state), because… it was cooler? And cheaper, and I liked the climate better, and I never regretted it for a moment. Plus, I got out of school with negligible debt.
I have considered Cleveland.
I went to the (admittedly highly ranked) state school instead of Chicago or Emory. I’m super thankful for it as I was not ready for college in the least and it took me some extra time and therapy (yay mental disorders) before I could graduate. I’ll do the same thing my father did to me; show them how much I’d be in the hole going to private school vs a public school. Either they’re smart enough to realize that the public school is the right choice or they’re not, in which case they’re not smart enough to succeed at the private school.
Exceptions being Ivy League where the hardest part tends to be getting in of course.
The school sports thing is complicated by the fact that winning in bracketed sports in school is very much a tournament good (practically by definition).
In some sense I suppose grades are also a tournament good (what with curves and so forth), but I’m not sure how much predictive value GPAs/class rankings have for future career earnings or life satisfaction, relative to, say, the name brand of your school or network effects.
Whenever I see news about elite colleges vs. state ones, it’s either “after controlling for X Y and Z, earnings are approximately the same, *except possibly for underprivileged/minority students, who might still have an earnings premium from attending an elite institution*” or “Nope, actually the conventional wisdom is right and elite colleges do have an earnings premium.” I’ve yet to see news along the lines of “After controlling for these factors, elite college attendance decrease earnings for this subgroup.”
Admittedly there might be some unconscious bias there, considering who’s likely to conduct those studies. Still.
I was one of the oldest/biggest in my class and I was hopeless at sports (no hand-eye co-ordination at all) 🙂
My sister had to stay back and repeat a year due to ill-health, so that meant she was one of the older (but not bigger, she was always small and delicate) in her class; average at sports.
Brother was average at sports, but an uncle taught him to play golf and this actually helped him out when going for a job interview, as the interviewer was a member of the local golf club and they spent most of the interview having a nice chat about golf 🙂
Youngest brother absolutely hated sports; to the point where he was allowed skip P.E. classes (which would have been Gaelic football and hurling mostly) in his all-boys’ school (Irish schools tended to be sex-segregated after age seven) and go play basketball with the girls’ school. He’s also the best-educated of us all (re: qualifications) and is now science teacher at a secondary school.
What this says about age of starting school, sporting prowess, and later life success, I have no idea 🙂
Are you sure this is because he hated sports, or because he liked girls?
Are you sure this is because he hated sports, or because he liked girls?
Oh God, is this Too Much Information?
Ah well, you’re not all going to run out and yell it in the streets, right?
Youngest Brother is gay 🙂
He’s also the Evangelical Vegan and Animal Rights Activist that I complain about on here and more or less atheist. Considering I’m the traditional Catholic socially conservative BLOODMOUTH CARNIST, birth order is one hell of a determinant! 🙂
My favorite affirmative action policy proposal is as follows:
With every admit letter, the school must inform the student of their expected GPA distribution and degree completion chance. (This should be easy for schools to calculate, since they model this stuff anyway, and easy to judge the correctness of in aggregate after the fact.)
If your child got a letter from their reach school that said “We’d be happy to admit you! We think there’s a 10% chance you’ll graduate with a physics major, a 20% chance you’ll graduate with a ‘physical sciences’ major, and a 70% chance you won’t graduate,” and a letter from the state school where the numbers were 70%, 10%, and 20% respectively, I imagine you would recommend they attend the state school.
I would like this to happen everywhere regardless of any affirmative action policy, come to think of it. More information for more informed choices is good.
Not sure how you’d go about making it happen; maybe a lobbying group could nudge it into being without needing any actual regulatory pressure, or maybe if it’s too disadvantageous for universities to release it you could mandate it.
The version I have been pushing for years, for law schools, is to report bar passage rate not as an overall figure but as a function of entering LSAT. That eliminates the current incentive to admit (and buy with lots of financial aid) the best possible students in order to raise your bar passage rate.
One would think that law professors, who are mostly left of center, would jump at a way of subverting a system which subsidizes the smart and rich at the expense of the less smart and poorer, but I haven’t gotten a lot of enthusiastic responses.
But… law schools don’t teach you to pass the bar. Law school bar passage rates are just a function of IQ
I can’t offer you data, but our faculty meetings have discussed competitors that have a lower average LSAT and a higher bar passage rate (and why), which suggests that it is not just IQ.
That’s in California. I cannot speak to other states.
When I started at the University of Delaware (not when I got admitted) I was given a piece of paper which had information on how students were likely to do, based on high school grades and SAT scores. I had mediocre grades and high scores, which put me in the expected to do badly group. I gave up at that point. Admittedly (see grades and score combination), I wasn’t a gung ho student, but that information really didn’t help.
To be fair, I think I stopped reading then. Maybe there was information on how to get help, but if so, I didn’t see it.
Scott, look at Herbert Marsh’s work in psychology, and the “Big Fish Little Pond Effect” on academic self-concept. There’s an idea that people norm their self-concepts based on their performance, so a high-performing school leads to lower concept (and thus presumably reduced motivation and subsequent performance).
HOWEVER, while this is the claim, it’s not been demonstrated directly, just inferred from other work about academic self-concept in general (also his work). So, grain of salt. I certainly think it needs to be considered against whether a higher performing school might also offer *aspirational* standards that you’d otherwise be unaware of, so if you’re lowering your self-image because longitudinally you know you have to shoot higher, that might not work out so well.
Psychological impact of something on seven years old kids might not be the same as impact on college aged almost adults.
I’d guess because you’re not thinking of your kid going to Harvard instead of MIT (the schools in this post probably should be adjusted slightly in level)
But I suppose I’m biased here, because I was always in worse schools than my IQ would have predicted, because I was terrible at completing homework, so of course I was used to being smarter than my peers in college.
edit: not something my daughter is likely to experience. I’ll have to think about it more.
I think you’re selling yourself/this argument short.
I can see three major ways to differentiate “got into reach school” from “affirmative action selects less skilled students”.
1. Level of under-qualification.
‘Reach school’ is a broad concept. If you’re going to go be 15th or 20th percentile, I might recommend attending. If you’re going to be 1st percentile, I’d say “maybe rethink that”.
At several high-profile colleges I’ve checked, black men are far more likely to drop out than any other ethnic-gender group. Like, 20 points lower graduation rates. There’s a difference between someone who will struggle and face adversity at a reach school, and someone who’s fundamentally unqualified for the introductory classes at that school.
If admissions people are abandoning their normal standards to improve their demographics, that’s more extreme than most ‘reach’ cases.
2. Motivated admissions.
Assuming college admissions boards are somewhat competent (citation needed), they presumably have some reason for letting in students with below-average applications. Some will imply success, some won’t. If you get in because you have a crap high school GPA but a good record in your chosen field, you can probably do fine. If you get in because you’re a great running back, you’re less likely to succeed.
Affirmative action admissions are more like athlete admissions than random ‘reach’ acceptances. If there’s a prominent non-academic reason to take you, it’s less likely that you’re academically qualified.
I would be far less likely to say “Go to your reach, it’ll be fine!” if it was clear that the student in question had gotten in based on something like athletic skill.
3. College vs life success.
We’re inclined to say “go to your reach school” because much of the post-college impact of a degree is down to university name. Studies showing that affirmative action diminishes college success for POC don’t clarify whether their 2.1 from Harvard is more valuable than a 4.0 from NoName College.
This doesn’t make the studies bad – they’re looking at a different metric. It’s entirely possible that even if affirmative action harms minorities by lowering how many people ‘succeed’ in college and driving POC out of STEM, it’s still a good idea for those minorities to accept their admissions.
So I don’t know if it would generalize to life, but I think it generalizes to college because of what college is trying to do, and how they go about doing it.
You’re trying to stand out in order to get the “best” for some optimal value system job and career path possible.
1) It helps to be exceptional.
The sophomore coming up to you with the same 3 classes as everyone else is average, the sophomore coming up to you with leadership in multiple activities (because all the Juniors quit when they couldn’t bear the course load and the extracurriculars. Heh) is pretty exceptional and gets an interview at least.
There’s this very large positive reinforcement to being slightly above average. The guy who’s a little bit ahead gets the research position so now he has the resume to get the internship and now he’s the guy with the internship so he gets the better internship and now he gets the job…
There’s probably a point where that stops, but “Big fish in a small pond” is probably a good place to be. Whale in a fish tank not so much, but…
2) Weedout courses.
So at least at my college:
* The curve is a B-/C+.
* Anything less than a C is a failing grade.
* Any GPA below a 3.0 is going to have a hard time finding work. See #1.
So semester 1, a quarter of the class flunks out and gets held back. Semester 2, a quarter of the class flunks out and gets held back. Semester 3, a quarter of the class flunks out and…
There’s going to be some catchup people who did it on the 2nd try, and in practice, they steadily shifted the curve upwards after the first couple of years (to the point where if you couldn’t 3.7 your senior year, you weren’t trying).
But that’s probably a third of the class flunking out or switching majors.
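As a sanity check on that figure (taking the commenter’s roughly 25%-per-semester washout rate at face value, which is an assumption, not real data):

```python
# If ~25% of the on-schedule cohort washes out each of the first
# three semesters, compound the survival rate:
remaining = 1.0
for _ in range(3):
    remaining *= 0.75   # lose about a quarter each semester

# Fraction of the original cohort held back at least once:
washed_out = 1 - remaining   # ~0.58
```

Naive compounding gives roughly 58%, worse than “a third”; the gap is plausibly the held-back students who pass on the second try, plus the curve drifting upward in later years.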
And yep, I knew some people in that bottom third, and they ended up transferring out or dropping out. And the ones who transferred out went to lower tier schools and flourished.
Are you a fellow yuro, or are weedout courses a thing in merika too?
I remember how that was the sole purpose of mandatory calculus at my business school, because we never actually used it afterwards.
I mean, I figure weedout courses are a feature of “free” education. Why would a for-profit school weed out paying customers? Well, okay, to maintain its reputation. But still.
Certainly here in Ireland, I know universities accept as many first-years as possible, because they know a good quarter (at least) of those will drop out during or after first year; they get the revenue from first-year fees but don’t have to spend extra on facilities for the increased numbers in second and later years.
“Charges money for education” is not the same as “for-profit school”. All the high-status American universities are run as non-profits that charge tuition fees in order to finance their academic goals, rather than for-profits that charge tuition to enrich shareholders. So they are allowed to care about the quality of their upper-level classes, even if that means turning away the fees of the people who would degrade that quality.
Although, I have a friend who used to teach at a for-profit university, and he says they have weed-out classes there too. It turns out that selling the lowest quality product you possibly can is not always the profit-maximizing strategy. “But still” nothing, reputation matters a lot in terms of getting more paying customers.
All “non profit” means is that there are no shareholders to receive profit after expenses. Non-profits have all the other motives to maximize intake and minimize outgo of money.
The closest US equivalent I can offer at first hand would be my History of Economic Thought class at UCLA. I deliberately spent the first few classes making it clear that it was about the thought, not the history, in order to drive out students who figured all they would have to do was memorize dates and names and stuff, not learn any economics.
But I wasn’t trying to drive them out of UCLA, just out of my course.
An economics course that wasn’t pure math? *gasp*
As best I can tell, the introductory economics course at Chicago, as of pretty recently (when my younger son took it), was economics not math. Unfortunately, that does not seem to be true of later courses–one reason my daughter decided not to major in economics.
She had enjoyed economics at Oberlin (she transferred to Chicago after her second year), which I presume means those courses were economics not math as well.
I keep meaning to ask my father what economics courses at Oberlin were like circa 1970.
Anyway, I was ambushed by economics being math at Northwestern, but that happened just after my successful psychiatric treatment, which also changed me to liking math, so no complaints.
From a psych perspective, there’s a zone of maximum learning. This is a place where a student is challenged beyond what is comfortable for him, but not so much that it moves achievement beyond his reach. I don’t know exactly how this maps onto class percentiles, and it may be confounded by students obsessing over grades. There is a difference between “maximum learning” and “may cause students to drop out because their performance is discouraging”. This difference will furthermore be different for every student.
As a high-performing student, I am used to blowing the low standards set for the median student out of the water and being hated for skewing the curve. I learn best in advanced classes with a lot of high-performing students that can chew through more material and at a higher level. Then I drop from top student to top-5 pretty quickly. However, if I were dropping from top-5 to below average, that might cause me to rethink.
A whole other issue is the massive grade inflation at many levels of college.
Overall, I’d say there is a balance between teaching the maximum number of students the maximum amount of knowledge and giving them good enough grades to encourage them to continue. This balance is probably being messed with by students’ expectations as well.
Don’t know what other studies there are, but couldn’t that crime change be drastically overstating impact? Increasing university police may decrease local crime, but how much of that effect is just moving crime to adjacent areas and not eliminating it?
As I understand it, there is considerable evidence of crime being pretty opportunistic; most criminals don’t make a career of it, they seize opportunities they see. So if they fail to see an opportunity (because there are too many cops around), they will not necessarily try to make up for it later somewhere else; more likely that’s just one less crime that will happen. Not that further research wouldn’t be good, but I do think this study is very encouraging.
Maybe you’ve seen this already, but if not it seems up your alley.
“Averting Catastrophes: The Strange Economics of Scylla and Charybdis”
“How should we evaluate public policies or projects to avert, or reduce the likelihood of, a catastrophic event? Examples might include inspection and surveillance programs to avert nuclear terrorism, investments in vaccine technologies to help respond to a “mega-virus,” or the construction of levees to avert major flooding. A policy to avert a particular catastrophe considered in isolation might be evaluated in a cost-benefit framework. But because society faces multiple potential catastrophes, simple cost-benefit analysis breaks down: Even if the benefit of averting each one exceeds the cost, we should not necessarily avert all of them. We explore the policy interdependence of catastrophic events, and show that considering these events in isolation can lead to policies that are far from optimal. We develop a rule for determining which events should be averted and which should not.”
The Bronx gang truce credited with launching hip hop was in 1971. I don’t know how violence trends in the Bronx compare to city or national trends, but I doubt that is when “gang violence reached peak levels.”
“Presaged”? And then it jumps ahead to 1990. It links to a Rolling Stone article that does claim that it stopped the violence overnight, but I’m skeptical.
Yeah, they had a truce, then twenty years of massively increasing violence just to get it out of their system, then they got to work on reducing the violence.
I was very interested in the article about people deciding to do hip-hop instead of gang warfare, but the actual text of the article sort of betrays the premise. To wit:
“The 1971 peace summit didn’t stop the violence, but it did presage a decades-long trend of violent-crime reduction in the Bronx—a trend that continues today…”
“New Orleans has more than its fair share of gun violence today, but things would likely be much more dangerous in certain communities if the Mardi Gras Indian gangs never transformed from brawlers to beaders.”
The cynical reading of both of these statements is that the hip-hop-instead-of-violence move never actually worked, and instead free-rode on the nationwide decrease in violence over the past few decades. It’s possible that the peaceful alternatives played a part, but it’s far from obvious.
Omnilibrium is a social site that tries to improve online filtering. Instead of a big pot of Reddit-style karma that shows everyone the most upvoted posts, it tries to show everyone posts upvoted by people whose opinions have previously been correlated with theirs, with various customizable options to decide how much you want to be exposed to differing opinions. It needs more users for a good trial run, so check out their FAQ and then join in.
2015 – Omnilibrium founded. People gain the ability to filter their information to include only what they already agree with, or what their circle of friends agree with.
2025 – War of the Circlejerks begins
2085 – With the sun’s rays now penetrating the upper atmosphere once more, the tiny handful of survivors were once again able to plant crops from the Antarctic seedbank in equatorial regions, beginning the attempt to repopulate the Earth.
Let me just state for the record – THIS SEEMS LIKE A REALLY BAD IDEA.
It’s not like people don’t already strive very hard, with substantial success, to avoid being exposed to things they dislike.
Sure, but it’s never been so precise, so easy, and so automated, and that matters a lot. New ideas happen all the time, but it’s when we automate them that people start predicting the singularity. I think people underestimate how much effect this trend is likely to have.
In private conversations it’s been suggested that the site might be looking at ways to combat or even reverse this effect. It’s too early to say if that’s possible, but either way, I think it’s an interesting one to watch. Scott, I especially hope you keep an eye on how the site develops and share your thoughts!
I wouldn’t be too worried, Omnilibrium’s site is … unimpressive … thus far.
Still, it does concern me that Scott thinks “everyone should embed themselves in EVEN DEEPER bubbles” is a good idea, to the extent that he’s written posts implying that it’ll somehow lead to a utopia.
I’m in two minds about it. On the one hand I found the Archipelago idea to be really interesting and appealing, on the other hand I feel applied to information flow it could lead to total disaster.
From the WWI link, The Great War: Myth and Memory sounds like the book I wanted The Great War and Modern Memory to be.
Pseudo-Erasmus gives a good introduction to the emerging study of cultural institutions.
Some might even call it “sociology”. I get that there are innovations from other fields (game theory) that represent interesting opportunities to improve how we look at culture, but if they’re going to entirely ignore the main field designed to examine this topic, shouldn’t these people demonstrate awareness of the field and include fairly thorough reasoning as to why they don’t think it’s relevant? I mean, sociology is far (far) from perfect, but the concept of cultural evolution has been around in sociology since Herbert Spencer, and while I’m personally no expert on it, there are a gazillion papers written on that sort of thing…
Pseudoerasmus’s take on the topic seems to be more or less anti-sociological, as the modes of explanation in evolutionary and differential psychology and behavioral economics are rather different from those of sociology. In other words, one of the implicit points of his post is that traditional sociological explanations are lacking.
Neglecting to even mention the field is a pretty extreme statement of contempt, though. And it’s unclear to me why economics and differential psychology get a free pass. It’s not like there aren’t comparable methodological difficulties in those fields.
Pseudoerasmus replied to your comments in his blog, go check it.
Thanks. I think I’ll have a good think before I reply. His response seems fairly reasonable.
World War I: Just because it wasn’t as bad as an uninformed civilian in 2015 vaguely imagines it was doesn’t mean it wasn’t really, really terrible. That guy on reddit has a point, but he’s handwaving a lot of things. I mean, maybe you aren’t as terrified of the malice of the cosmos as a nervous dweeb like H.P. Lovecraft, but that doesn’t mean you’d want to get into a tussle with Cthulhu, even if you only had to volunteer in the Miskatonic Eldritch Horrors Response Team for a couple of days per year.
Also, even if there were arguably worse times to be a soldier, a lot more people were soldiers in WWI than ever before, and often not by choice either.
(I’m away from my personal library, most of this is from memory. Sorry I can’t source some of my assertions.)
The popular memory of World War I is a combination of the Somme and Verdun, in about that order. The Somme is particularly instructive. Why? Because the Somme was the first major battle the British fought with a non-professional army. You see, from 1914-1916, casualties in the British Expeditionary Force on the western front were so high that most of these units–which represented the British Imperial Ground Forces in capital letters going back at least to the time of the Civil War–basically ceased to exist. The British instituted first a volunteer drive, then general conscription. Yes, this netted up a bunch of men who were probably unsuited to be soldiers, including the bookish literary country gentlemen who later wrote about how much the war hurt their feelings. But everybody ran out of hardcore dudes who face the storm of steel and doesn’t afraid of anything, so they had to start drafting Siegfried Sassoon.
(If you want to read a “non-crybaby” account of WWI, read The Storm of Steel by Ernst Junger, or Infantry Attacks if you want something more technical and less literary. Feel free to tell me if you’d be OK with going through that.)
Right now the United States has a large volunteer army. If we restarted the draft because we were losing so many soldiers in Afghanistan would you say that the war was going really badly? Or would you call the conscripts a bunch of crybabies because they weren’t as enthusiastic about their lot as the pre-screened volunteers they replaced? Even if it was the biggest draft in the history of humankind?
Not everybody is a sensitive soul who feels completely traumatized just because they had to live in terrible conditions under the constant threat of grisly death or dismemberment. Some, maybe most, people could grind through it and maybe look back on it all fondly after it was all over, and some people actually actively enjoyed the sense of purpose, the adrenaline, the life-or-death stakes, maybe even just the personal power. People like that really exist, they do! You probably want these men fighting your wars for you. When you run out and have to start forcing people from your state college’s English faculty into the trenches, you might be having a bad time.
True, most soldiers didn’t spend all that much time fighting, which is good, because otherwise they would be dead. Neither the quantity nor the quality of combat was evenly distributed. It never is and never was. Rape victims also spend the vast majority of their lives not being raped. And some rapes are worse than others. Does this sound like a defense? Especially considering, you know, the non-consensual nature of military service for millions of men in the war.
There were real technical innovations in WWI as well. Machine guns are well known, but the big one was artillery. Breech loading howitzers firing high explosive shells were at least an order of magnitude more lethal than ye olde cannons, whose effects were heavily psychological. Large units could simply walk up across open ground to blackpowder cannons firing at maximum rate into their formation as long as they were disciplined enough to accept the casualties. Walking up to quick-firing howitzers like this was/is physically impossible–no amount of human flesh could survive the bombardment advancing across the open any more than they could have willed themselves to fly.
Also, just look at the big picture! Germany collapsed, Russia collapsed, Austria-Hungary collapsed, the Ottomans collapsed. There was total anarchy on the streets of Berlin, worse than anything that happened in the US because of Vietnam. (Luigi Barzini’s The Europeans is a great first-hand account of the complete shitshow that was post-WWI Germany, and Europe in general.) Is that consistent with “well, it wasn’t that bad, people just need to harden the fuck up”? Or did something really bad happen to cause all that?
A couple of technical points, I think.
Breech loading artillery had been around for a while, for example, the Whitworth Breechloading Rifle saw action (albeit limited) in the Civil War.
The important developments in artillery that differentiated WWI (plus or minus a couple dust-ups I didn’t have any relatives in) from previous wars were hydro-pneumatic recoil mechanisms, and gun cotton (and high explosives). The first modern artillery piece was the French 75mm modèle 1897. That gun could easily sustain a rate of fire of 15 rounds a minute, or 30 during short bursts with a skilled crew, because the recoil mechanism could return the gun to battery in two seconds and stayed on target after firing. Prior to the development of the M1897 (which, IIRC, involved some industrial espionage against Germany, who were having trouble getting the mechanism to work reliably), the limiting factor in the rate of fire was the need to reposition and aim the gun after every shot; consequently, achieving two rounds a minute was a feat of considerable skill.
Guncotton, in its various forms, is the base for smokeless powder. When producing the same projectile power as black powder, it produces less smoke, barrel fouling, and waste heat, all three of which are bad things in a gun. While it was technically invented before the Civil War, the first factory blowing up was a major damper on development. Practical smokeless powders were not really produced until the 1880’s. Again, in France. Oh, and high explosive shells, like picric acid (which the French started using in the 1880’s) or TNT (which the Germans started using in the 1900’s). Even though it was a field gun (long barreled, direct fire with a flat trajectory), the M1897 could achieve a range of over 10km with a boat-tailed round. The barrel could not be elevated above, I think, 18 degrees. For comparison, the Whitworth Rifle was a bigger gun and had a range of about 10km when elevated to 35 degrees, which was an impressive range for the era.
The M1897 was, AFAICT, the most popular antipersonnel artillery piece in the war, used by everyone that could get their hands on it. However, it did not have the power to destroy earthen works like its ~155mm howitzer (generally a medium length barrel, arching trajectory for indirect fire) competitors. Because the French didn’t have any significant large artillery, they ended up at a major tactical disadvantage against trenches.
But anyway, the combination of ten times the rate of fire, more accurate fires, longer range, and powerful high explosive shells (and, I get the impression, lower maintenance) made WWI a really, really, really shitty place to be taking artillery fire in an infantry formation. By way of example, the French fired 16 million rounds from their M1897’s at Verdun alone.
This is the kind of thing that makes you really glad to be a gen X American
Siegfried Sassoon volunteered for the army before the war started. I think most of the other war poets were also volunteers.
I think what the post was trying to say is that the First World War was not uniquely horrible, because all modern war is horrible; otherwise, by saying one war was uniquely awful, you would be implying some wars are good, and that’s wrong because War Is Hell.
Which is not a bad proposition, but it does come off as sounding like “We only think the First World War was bad because whiny soft upper-class types got involved and wrote poetry about it,” which does sound like “All they needed to do was toughen up.”
Which also ignores that the officer class in previous wars was drawn from those same whiny soft upper-class types and yet, while Tennyson may have written about the charge of the Light Brigade, there was not the same reaction of horror by those involved.
Possibly the Edwardians were softer and whinier than the Victorians – or possibly the First World War was uniquely horrible as warfare up till then had been done.
Hang on a second.
The officer class in previous wars was also drawn from the upper class, but they weren’t whiny and soft. They were hard, tough, outdoors types, obsessed with honour, who thrilled in the traditional aristocratic pursuits of hunting and war. So a group of men charging into overwhelming gunfire, and being ripped into horrible pieces, makes Tennyson write “Honour the charge they made, Honour the Light Brigade, Noble six hundred.”
But this had completely changed by 1914 (it was in its last gasps in the Crimea). The long agricultural depression meant that farmland was just not very important as a generator of wealth, particularly when combined with heavy taxes on land and estate duties. Indeed, the noble families were pulling down their grand houses because their landed estates couldn’t even pay to run them, let alone provide them with the huge wealth to dominate the rest of society. Instead, the way to get rich and powerful was to work for a living (despised by the previous aristocrats), whether in finance, industry, or the civil service. Intelligence had replaced bravery as the cardinal virtue. So the English officer class in 1914 haven’t all grown up hunting, and fencing, and generally preparing for war; they’ve been studying engineering, and theology, and preparing for desk jobs. In other words, they are whiny, soft, sensitive types, and so seeing people gassed makes Owen write “The old Lie; Dulce et Decorum est
Pro patria mori.”
Compare too the conduct of Cardigan at Balaclava to Haig at the Somme. Cardigan was a brave idiot. Haig put himself in no danger whatever.
Even if WWI wasn’t conclusively worse in degree (e.g. Verdun vs Napoleon’s Russia campaign) there were clearly differences in kind. The volume and lethality of artillery was totally unprecedented. Ditto with automatic small arms. Probably more night fighting occurred than armies were previously accustomed to, but I’m not quite certain about that. A lot of the war’s nocturnal character was driven by the novel need to hide from air observation. That’s a whole new kind of suck right there.
Gas wasn’t objectively that dangerous but clearly left a serious psychological impression and was again unprecedented.
Really the only reason that being a soldier wasn’t subjectively markedly worse in WWI is hedonic adaptation, which I can believe based on my own experience.
The bottom line is that the implements of war were just much more lethal than they had been. I’ve got some other comments on that but they’re too long to type out where I am now.
Regarding Haig: everybody understood that the days when an officer could stand tall and unafraid on the battlefield were over. What was he supposed to do anyway? Everyone loved how Wellesley rode around on a horse at Waterloo sitting tall like an alpha male primate big man giving orders while his staff officers ate stray bullets but anybody doing that in 1914 would’ve been lucky to last a day. If he made bad decisions that’s one thing, and maybe he should have taken more personal risk than he did. “Didn’t commit suicide” isn’t a valid criticism though.
WWI also came at the end of the Belle Époque, one of the greatest flowerings of civilization and culture Europe had seen up to that point. It seemed to be pointing at a seriously different kind of society than anything that had ever existed before, where war, disease, poverty, and every human suffering and failing could be addressed by science and, somehow, solved. No one could imagine the shitshow the war ended up being, even if wars had always been shitshows, because people didn’t want to believe that could happen again and few of them would have remembered it ever happening before, anyway.
All the promise, the hope, the confidence of 19th century European civilization came crashing down in a torrential rain of blood and shells and death that broke the spirit or backs of millions. Even down to today the self-assuredness that characterized European society at the time is only just beginning to come back. Whether this is good or bad is an interesting question, but it explains why people felt the way they did about it. It seemed like the bad old days had come back, and on a scale and with a butcher’s bill that would be terrifying to the most hardened of their ancestors.
Just looking over that reddit post, the general thrust of “WWI was not uniquely awful and being a combat soldier is generally quite dangerous and awful” is correct, but I can’t find an answer to this question – is the 25%-of-time-under-fire, 2-3-days-a-year-of-front-line-combat figure an average for all troops, or only for combat troops? If it’s for all troops, regardless of where they served or in what capacity, that would affect the numbers – and the % of troops serving in non-combat positions has grown significantly over the last couple centuries. If, say, the % of time under fire and the average number of days in combat is brought down by large numbers of supply or administrative personnel outside of artillery range, I can imagine that would not be much of a consolation to the ones who spend a higher than average amount of time getting shot at.
Additionally, I find the idea that WWI was the first time that the “literate upper class” found itself exposed to combat and wrote about it kind of curious. I mean, as I understand it, most nations got their officers from the middle to upper classes, and still do (more the middle class today, I’d wager). Were they just not writing about it beforehand?
WWI also seems to have been the first war with more casualties due to combat than disease, which would affect perceptions of it.
I’m almost certain that 2-3 days must be an average for all troops because the typical front line gig for an infantryman was 5-7 days and they usually had to do it at least once every few months.
And by “front line” that meant in the furthest forward position possible. Being in the second or third line was no cakewalk; often you wouldn’t even be out of rifle range.
That makes sense. The reddit post only says “direct combat”, however that’s being defined.
It cites, but does not provide a link to, the calculations of Charles Carrington, who as far as I can tell from Google was a British army officer in the war (1) and who seems to have had, personally, a fairly positive view of the war, as far as such things go.
It is unclear whether these calculations were only for the British army, or for all the combatants.
(1) https://en.wikipedia.org/wiki/Charles_Carrington_(British_Army_officer) – help help I don’t know how to embed links
What makes everyone so sure that the popular memory of WW1 has anything to do with the people who fought in it, rather than the way the chattering classes chose to write about it afterwards?
None of us experienced the war, so our memory of it is dominated by what people happened to write afterwards.
It is as if the popular memory of Soviet Communism was dominated by the works of Sidney and Beatrice Webb.
IIRC J. B. S. Haldane had a great time fighting in WWI, saying something like “I rather enjoyed killing Germans.”
Re: Somali refugees vs. native black Americans
I have a theory. This is not a nice theory, so if you are the kind of person who gets “triggered” you might want to avert your eyes.
First, a few assumptions: black Americans are descended from 15th–18th century slaves; social mores on “race mixing” mean that only a minority of black Americans’ ancestors were European; and success is biologically heritable, possibly through IQ and other factors.
How did the slave trade work? British and Portuguese slavers would go to West Africa and buy slaves from local tribes. It seems logical that a West African village chief would try to sell the British and Portuguese slavers the worst people he could get away with; slavers could select against physical impairment, but had no way (and no desire) to select against low IQ or an inclination to crime.
The result would be that black Americans are descended not from a random sample of 15th-to-18th-century Africans but from the stupidest, most criminal people that could be found at the time. I’ll move to 4chan’s /pol/ now, I clearly belong there.
Patting of yourself on the back for being subversive is obnoxious.
The whole point is that they were buying people who were already enslaved, not free tribe members enslaved for the traders’ purposes! Slavery was morally objected to from day 1 (see Burke: Sketch Of A Negro Code), but one thing that made it tolerated was the argument that “they are already slaves, working for brutal African masters; we are not making them any worse off than they are; maybe Christian masters will even be better,” etc. This helped ease the conscience. Without this, Western public opinion would never have accepted it. But it could be cast as an almost humanitarian measure, saving poor slaves from brutal African masters and giving them to good Christian masters who would care for them like a father – it is horseshit, of course, but this horseshit was necessary, because people back then were far better than today’s people tend to give them credit for, and actually cared about not doing evil and maybe doing some good.
So, they were already slaves. Would the chieftain sell off the stupidest slaves? I doubt it. Stupid slaves are bad at escaping and there are enough primitive menial tasks to do. Selling off slaves with violent inclinations? That is more likely.
I mean, today, a military officer, who cannot simply just fire people, will of course want to transfer the troublemakers somewhere else, but the goody-goody stupid soldiers? No, they are just ideal for unloading trucks and cleaning everything. They don’t complain that it is boring.
Also, a huge selective factor was surviving the ocean voyage. Beyond a good constitution, it could select for aggressiveness (robbing each other’s food, or something).
If this theory is correct, then slave traders were systematically removing from the West African gene pool people who were heritably stupid and criminal. We would then expect present-day West Africa to be unusually not-stupid and not-criminal. I’m not sure that fits particularly well with actual observations.
Hmm. Estimation time. Looks like Africa’s population was maybe ~100 million before population growth took off and a total of 12.5 million slaves were taken to the New World over about three hundred years. Call that 10 generations and say that two generations are alive at once (so ~50 million in each generation) and we have ~2% of each generation being lost, on average. I’ve done this quickly- maybe I got something badly wrong.
I’m not sure if that’s enough to create enough selective pressure over only ten generations to be a strongly noticeable effect, so it could be that it’s just overwhelmed by other things.
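The arithmetic above can be checked in a few lines (all inputs are the comment’s own rough estimates, not independent data); with these numbers the loss comes out closer to 2.5% of each generation:

```python
# Rough inputs from the comment above (estimates, not independent data).
total_taken = 12.5e6     # slaves taken to the New World over the whole trade
years = 300              # duration of the trade
gen_length = 30          # assumed years per generation
pop = 100e6              # Africa's population before growth took off
alive_generations = 2    # generations assumed alive at any one time

generations = years / gen_length             # 10 generations
per_generation = total_taken / generations   # 1.25 million taken per generation
cohort = pop / alive_generations             # ~50 million per generation
fraction = per_generation / cohort

print(f"{fraction:.1%} of each generation")  # prints "2.5% of each generation"
```

Whether a ~2.5%-per-generation removal could produce a measurable shift in ten generations depends entirely on how strongly selective the removal was, which is exactly the point in dispute.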
How did slavers and slave-owners select which slaves to pass on to Europeans – well, we have an account of one 19th century slave who was born in Darfur and ended up in Italy.
Potted biography lifted from Wikipedia follows:
– Her family was “well-regarded and prosperous”, her father was brother of the village chief. Sometime between the age of seven to nine, probably in February 1877, she was kidnapped by Arab slave traders, who already had kidnapped her elder sister two years earlier.
– Already sold and bought twice before she arrived in El Obeid.
– Over the course of twelve years (1877–1889) she was resold again three more times and then given away.
– Her fourth owner was a Turkish general. By the end of 1882, El Obeid came under the threat of an attack of Mahdist revolutionaries; the general sold all his slaves but selected ten of them to be sold later, on his way through Khartoum.
– There in 1883 Bakhita was bought by the Italian Vice Consul; in March 1885 he returned to Italy and gave Bakhita to new masters, who acquired property in the Sudan and decided to move there.
– Bakhita and the daughter of the family were left in the custody of the Canossian Sisters in Venice while the family were making arrangements in the Sudan, but when her mistress returned to take them both to the Sudan, Bakhita refused to leave. The superior of the institute for baptismal candidates (Catechumenate) that Bakhita attended complained to the Italian authorities over the mistress’ insistence on taking back her servant/property (depends on how she regarded Bakhita, which we don’t know). On 29 November 1889 an Italian court ruled that, because Sudan had outlawed slavery before Bakhita’s birth and because Italian law did not recognize slavery, Bakhita had never legally been a slave.
– Eventually due to her popular cultus in Italy, in October 2000 she was canonized and became Saint Josephine Bakhita.
I don’t know if that’s particularly evidence of stupidity and criminality (some might say ending up a Catholic saint is proof positive of being a bad hat), but I don’t think you can extrapolate that every slave who ended up in Europe/USA was one of the bad ‘uns pawned off on the white guys by the original owners/slavers.
Of course, that’s an account of slavery other than the transatlantic trade. Perhaps a better example would be Olaudah Equiano:
-His father was a chief or elder of some description, a status Equiano expected to inherit.
-He lived in a society which owned slaves, and sometimes punished criminals with enslavement. However, slaves in that society were treated relatively well.
-Slave traders (from another African culture) routinely passed through his country. They were only allowed to pass once they had been asked how they had obtained any slaves they had with them. They did buy slaves there, who were prisoners of war or criminals. They would also kidnap slaves (including Equiano himself) on the way through and smuggle them out in large sacks.
-Sometimes tribes started wars in order to obtain prisoners to sell as slaves. If the chief of a tribe that started such a war was captured, he was executed.
-Participating in slave-raiding was one of the crimes punishable by enslavement. The other one Equiano lists was adultery.
-Equiano was kidnapped from his parents’ house at the age of 11 along with his sister. Slave raids while the adults were away working in the fields were a fairly regular occurrence- he recalls an unsuccessful one.
Doesn’t work, since West Indian blacks do better than African Americans.
To remind you, ALL the West Indian blacks were slaves, and most African Americans’ ancestors were imported from that group, not directly from Africa. So they have the same genetic lines, albeit mixed differently over a few hundred years. You can’t make the case that in-Africa selection somehow selected poorly for the ones that wound up in America, but selected better for the ones that wound up in Cuba.
This is exacerbated by the fact that all those economies and countries are pretty terrible in terms of economic and political performance. Growing up in a country as destitute as Haiti should make them worse performing than American blacks, if poverty had anything to do with it.
But this isn’t what we find. What we find is that skin color is no bar to achievement. We find poverty is no bar to achievement. We do find that a disproportionate part of American blacks are less achieving. If it’s not poverty, and not race, that leaves culture. And that has always been the explanation that makes sense.
You can explain that several ways, that the unique experiences of slavery and Jim Crow created a culture that was bad, or that constantly blaming all ills on slavery and Jim Crow created a culture that was bad, or some combination. But the data doesn’t hold up much of a genetic argument.
Maybe we are looking at the wrong genetic contribution.
Perhaps the rapist slaveholders were disproportionately dumb and antisocial.
I doubt that’s the answer either, although I could make my standard cultural argument here as well. It is an old sentiment that slavery not only debased the slave, it brutalized (as in made brutal) the slave owner. Think the Zimbardo experiment. Playing the role of slave owners likely skewed the culture of the South in deep and long-lasting ways.
One could argue epigenetics, but mass slavery didn’t last that long in the US, so I doubt there’s enough time for the culture to impact back on genetics.
Or maybe civil rights and all that created a bad culture. This is kicking around on some right-wing blogs, and I have no way of checking it from this side of the pond (googling turns up nothing), but there are claims that before the 1960’s there were more black judges, successful black businessmen, even black business districts in the US than now – despite discrimination and Jim Crow and all that. Moldbug claims that in place of the current burnt-out ghettoes there were actually functional black business districts. If it is true, then basically most of the blame falls on those who tried to help blacks by liberal means.
Of course, the alternative is that the black business districts later dispersed into individual black businessmen all over the map, who are still there to this day.
Again, from here I cannot really tell which one; I am just saying the most important fact-check would be comparing current black achievement to pre-civil-rights black achievement, to see if the Great Society could actually have harmed them.
The “it’s all the damn liberal do-gooders’ fault” theory runs into a major problem when trying to explain the successes of 1865-76 and the precipitous decline thereafter.
OTOH, things such as delinquency and family breakdown were lower pre-60s, although the welfare state seems like a more obvious culprit in this regard.
Not so much a fault as a flashing neon sign saying, “What was different that time?”
I suppose the black business districts disappearing, if true, would be because the demise of Jim Crow meant that black customers were now free to patronise white businesses, which were generally better (able to sell better-quality stuff) than black businesses. Hence black businesses would now get out-competed by their white counterparts.
Jim Crow didn’t generally apply to retail stores, or more generally to businesses that didn’t involve extended on-premise services (most obviously restaurants). A department store might offer a welcoming suite of segregated amenities for its white customers, e.g. a lunch counter and rest rooms, while encouraging its black customers to complete their purchase and get out, but they did want the black customers to make those purchases and both the law and the white customers accepted this.
And a quick google suggests that business districts of the era weren’t generally segregated.
According to Harry Golden, segregation only occurred when people sat down.
One can think of segregation as having a sort of protectionism side effect. If blacks aren’t allowed to patronize white businesses and whites don’t patronize black businesses, they have a captive market.
One sees this effect with the entrance of women into much of the workforce driving down the quality of schoolteachers. The artificial constraints on women before that channeled many high-achieving women into one of the only careers they could undertake. Without those constraints, they found that being lawyers and doctors paid better. But it did harm teacher quality.
With the demise of segregation, the cream of black society had little trouble making their way in the larger world. But this had the adverse effect of removing the most stable, intellectual, educated and successful people from the black community. Now they live where all the other rich, successful people live, and the black community is continually siphoned of its talent pool. This has the effect of impoverishing the community more than it might otherwise be, and robbing them of effective leadership.
Now, I think the civil rights movement as a whole was very necessary, and the system of segregation absolutely had to go. But not all the effects were positive for blacks, as we can see around us every day.
> “If it is true, then basically most of the blame falls on those who tried to help blacks by liberal means.”
The #1 entry, as well as some of the comments, implies that the blame is on deliberate oppression.
I would trust Cracked slightly less than Buzzfeed and slightly more than the Onion, and for reasons similar to both.
Doesn’t work, since West Indian blacks do better than African Americans.
Interesting observation. I don’t know much about Caribbean history, but reading the Wikipedia pages about it gives me the impression that there was considerable “mixing” between African slaves and enslaved natives.
I’m not sure how much, since in many instances all the natives died from imported diseases, hence the necessity of importing slaves from far away. But certainly more than in the US. Unfortunately for your theory, Native Americans don’t do all that well either, certainly not well enough to counteract the purported genetic deficit of adverse African selection.
For a long time, blacks in America were stereotyped as being meek and subservient. Hence their good fit for slavery. Doesn’t really fit into the “selected for high criminality” idea.
And fortunately for you, there is no way to go back in time and test the IQ of Africans back then, so this hypothesis can’t be falsified.
Have another theory: the word racism is a distraction from what’s going on. Race is a large artificial division of the human race, but what mostly affects human behavior is something I’m going to call ethnicity: a combination of physical and cultural traits which produces an impression of predictability.
In the modern world, where the theory that races are important has been around for a long time, race serves as a sort of ethnicity, but accent still matters.
Another premise: Bigotry tends to be most in play between groups which have had long association with each other.
What I believe is that racism by white Americans is against African Americans, and they are somewhat distinguishable from Africans by both appearance and accent.
I also believe that, while there’s such a thing as dysfunctional African American culture (which seems to have gathered strength in the ’60s or ’70s; it isn’t genetic), it is beyond me to believe that there’s no white American racism. My current take is that white racism and the small proportion of African Americans who are screwing up their own and other people’s lives are amplifying each other, with the reputation of African Americans being damaged by the dysfunctional minority, and the dysfunctional minority being glad not to cooperate with people who obviously hate them.
Plausible. My girlfriend in college had a lot of old friends who were the children of East African immigrants, a bunch of whom had settled in her childhood neighborhood for some reason. In terms of skin tone these girls were darker than most black Americans, but they had the accent and mannerisms of upper-middle-class West Coast white girls — in other words, the people they’d grown up around. I got to know some of them well enough to stay in touch after my girlfriend and I broke up, and those all ended up marrying white or Asian guys.
So the racism of whites isn’t based on race?
Shouldn’t we then call it something else? Cultural bigotry? Ethno-skepticism?
It’s pretty hard to make the case that someone who is cool with all races, so long as they conform to his cultural norms, is a racist just because he dislikes people of any race who don’t. It then becomes a question of how useful the heuristic of discriminating against a culture is, combined with the difficulty of detecting it.
I’d venture that modern “whites” (i.e. the modern West) are the most prominent example in history of this “cultural bigotry” actually being considered bigotry. Everywhere else it has simply been the norm.
Some see this kind of progressive thinking (multiculturalism, etc) to be more indicative of cultural supremacy (“our way should be better than the norm”) than actual cultural bigotry (“we don’t like other cultures any more or less than they like ours”).
I agree. People in the US are astonished when I tell them there is less bigotry here than anywhere else on earth. It’s just that no one else cares about that; it’s normal for everyone to be a bit tribal, make stereotypical jokes about outgroups, and consider themselves a bit better than all the other savages.
My explanation is that in the US, our culture and civilization is so successful that for most of us, our outgroups are not immigrants or neighbors, but other Americans.
The racial obsession is just inter-white competition; it has almost nothing to do with blacks.
On the particular case of Somalis…
We tend to lump Africans together by skin color. But Somalis (and I think also Ethiopians) don’t look like what I, at least, think of as the typical sub-Saharan African. They might well be a population with substantially different innate characteristics than the West Africans who ended up as New World slaves.
>I have a theory, this is not a nice theory
So what you’re saying is, your comment is neither “kind” nor “true”?
“It’s no secret, or it shouldn’t be, that a majority of African Americans have European ancestry – on average between 20 and 25 percent. It’s one of those vestiges of America’s history of slavery.
“For anyone still naïve enough to believe in the myth of racial purity, it is one more corroboration that the social categories of ‘white’ and ‘black’ are and always have been more porous than can be imagined,” wrote Harvard Professor Henry Louis “Skip” Gates Jr. in an article in The Root about Michelle Obama’s [white] ancestry.”
If Afro-Americans average 20–25% white ancestry, then only a minority of the ancestors of black Americans is European, as Nero said.
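The two claims are actually compatible, since one is about individuals and the other about average ancestry fractions. A minimal numerical sketch (with an invented admixture distribution, chosen only so the mean lands near 22%, not real genetic data) shows how a large majority of individuals can have some European ancestry while European ancestry remains a minority of the total:

```python
import random

# Hypothetical illustration: draw each person's European-admixture fraction
# from Beta(2, 7), whose mean is 2/9 ≈ 0.22. The distribution is invented;
# only the mean is pinned to the 20-25% figure quoted above.
random.seed(0)
population = [random.betavariate(2, 7) for _ in range(100_000)]

mean_admixture = sum(population) / len(population)
share_with_any = sum(x > 0.01 for x in population) / len(population)

print(f"mean European admixture: {mean_admixture:.1%}")        # ≈ 22%
print(f"share with >1% European ancestry: {share_with_any:.1%}")  # well over 95%
```

So “a majority of African Americans have European ancestry” and “only a minority of their ancestry is European” describe the same population.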
Turing: somehow, highly intelligent people tend to be better at endurance/cardio sports than strength sports.
Maybe they just dislike becoming The Hulk, as it is associated (wrongly) with stupidity. Or other reasons?
Men who are outliers on academic achievement tend to have low T (otherwise they wouldn’t be sitting still reading). So they’re better at cardio than weights.
It’s weird seeing “low T” being used outside the context of spam or late night television commercials.
To me it is daily bread and butter so I liked Salem’s answer. Precisely because intellectuals are low T, there is not a properly intellectual vocabulary to talk about high T and low T behaviors. To everybody speaking the intellectual language, low T behavior equals normal and high T behavior equals a violent idiot townie out there. Just as fish cannot speak about water, discussing the difference between these behaviors is hard in intellectual circles. High T is for the intellectual an alien culture so he lacks terms to describe it.
This actually predicts political differences and Haidt axes and similar stuff as well.
E.g. the thrive-and-survive theory is excellent, fighting the zombie apocalypse is textbook high-T culture.
So high and low T are even the best words out there; the others are worse. If someone used terms like “real manly man with balls” and “wimp girlie boy”, you would probably think he was an idiot. And yet, better words just don’t exist.
Well, or perhaps you could call the muscular wrestler high-D (dominance) and the marathon runner high-P (prestige).
True. See also http://www.psmag.com/health-and-behavior/half-lifts-workout-says-social-class-85221
This… is not only surprising to me, it also shows how incredibly out of touch the American liberal urban upper classes can be with everybody else, basically the whole planet. There are seriously hardly any people on this planet, at least in the more Western-oriented nations, and especially in the wealthier classes, who don’t consider the whole Arnold – California fitness – muscle beaches – Gold’s Gym thing seriously cool. Amongst young people, Zyzz was THE definition of infinite coolness. Of course the author is talking about 40+ people, but still: if you were 10 years old 35 years ago, your Spiderman or Superman comic books were already far more muscular than 60 years ago, and this was basically the ideal every boy strived for. To imagine an elite who basically dislike the whole thing is… really, seriously different from the rest of us. And given how good a predictor T levels or upper body strength are of a lot of views…
Edit: “especially those like my friends and myself, richer in fancy degrees than in actual dollars”
This also says a lot. I would say most of the planet assigns higher status to actual money than to fancy degrees, and the most respected money-making degrees are usually not “fancy”, just law, business, medicine.
However, most of the planet takes its political ideology from this US “Brahmin” class; this is basically what worries me.
Also it’s a lot easier to run 100 miles a week when you’re not making a living doing manual labor.
Could there be a link to do with the effort/patience/willpower involved in developing your ability in maths etc. and the similar thing in running marathons?
If by strength sports you mean weightlifting, it wasn’t nearly as popular then as it is today.
Javelin throwing and suchlike.
Anecdotally, all the smart people in my high school were on the cross country team if they did sports at all. Running is solitary, can be picked up later in life even if you were unathletic in childhood, and rewards uncomplaining endurance rather than dexterity. It’s an easier skill for nerds to learn. (I am not built like a distance runner and not naturally good at it — super duper fast-twitch. I was still on the cross-country team because it was manageable for my *temperament.*)
Uncomplaining endurance is easy for nerds? First of all, smart kids tend to be lazy because they don’t need to work hard to do well in the first two tiers of education, so they don’t really learn work ethic early. Second, nerd culture totally tolerates bitching, moaning and complaining because it is a bit of a “low-T” culture, i.e. a culture where you don’t have to be very “manly” and thus don’t have to suffer in silence.
If you aren’t at least a little masochistic, you won’t get far in math.
It’s hard and requires a lot of discipline. (context: I have a math PhD.)
It’s fun. You can tuck it all into your head and look at it with your eyes closed.
(Context: I have a PhD in theoretical physics.)
Mine too – we didn’t have a track team, but the people who were good at math all did long-distance running (and were really good at it).
TheDividualist == Shenpen?
I’ve been wondering this myself.
Was that a stereotype in the Forties?
Honest question, I don’t actually know — but the literature of the era that I’ve read suggests that the stereotype was, at least, weaker.
Nassim Taleb is big into serious weightlifting, for a counterexample.
For another, Oliver Sacks was quite a good powerlifter in his day (squatted over 600lbs).
Oh, I did that!
Or did you mean LIFT 600 lb?
Just a squat, but 50 years ago, that set the California record.
Michigan: admittedly, democracy has many meanings and I am not sure which ones are the most popular in the USA. But the article seriously doesn’t seem to be related to the violation of any kind of democracy I know of.
Not informing citizens is simply a screwup. Reforming the public schools toward a more market-oriented thingy sounds like a good idea that happened to fail. The state running public schools sounds like another Tuesday, because what even makes them public if they are not state-run one way or another, and why not have a single bureaucrat instead of many; again, it does not sound like anything related to democracy. Removing voting rights, now that indeed sounds like routing around democracy in a major way.
What I think the article means is probably that municipal democracy in the USA is perhaps meant to be _participative_: not simply electing your overlords every N years, but also having a more direct say in how schools are run and suchlike. So a form of crossover between pure representational political democracy (vote every N years, and in the meantime shut up and let the elected work) and something like civil society: a committee of parents sitting on school boards and whatnot?
I don’t have a very high view of participative democracy or democracy in general (the market proved that exit > voice). The way I would rectify such a situation is that the removal of voice should be balanced out by more rights to exit/secession, i.e. parts of a city could vote to secede, de-incorporate from the city and form their own municipal government. So you don’t get much say in how this city is run, but you can choose not to be part of it without having to move somewhere else. That sounds like a decent imitation of how the market works.
Parts of the article feel to me like the local “Brahmins” complaining, the ones who used to have an above-average voice without the responsibility of holding a clear office, elected or not. As with public schools: I assume not all parents used to have an equal voice in running them, but more of an elite did.
My view is that while democratic voice is inherently problematic, the true problem is having above-average voice without the clear responsibility of an office, elected or not. This is a major part of our critique of the “Cathedral”, I think. It doesn’t matter much if you are elected Mayor or the King appoints you to be the First Lord of the Admiralty; the important thing is that if you are influential, you should be influential in a very visible way, so that we can throw tomatoes at you if your ideas fail. Skin in the game, as Taleb would put it. The problem with the “Brahmin” class is influence without that kind of visibility and thus responsibility. E.g. if this migrant-wave thing crashes something here in Europe, I have no idea who to throw tomatoes at, beyond Angie. Every newspaper editor ever? Crap. Who crashed Detroit? Part of the story I heard is pensions paid to retired union workers. Who created the atmosphere in which demands for high pensions were accepted? Every newspaper editor ever. Crap. And that’s why you don’t see the tomatoes flying.
When it comes to adopting good curricula for math, of course you’re going to find better curricula from Russia than here in the US. But as I hear it, if you try to just straight-up port Russian math curricula to the States, you quickly run into a problem: lots of math teachers in the US don’t actually understand math. They often don’t understand their own curricula well enough to teach them properly; if you introduce Russian curricula, which require more actual understanding, it’s going to be a disaster without substantial retraining.
(On that note, here’s Andrei Toom on the differences between the use of “word problems” in Russia and in the US. Long. H/T Reddit user Hrjdc.)
Now, there is a company, Reasoning Mind, which basically goes to schools and introduces their own (Russian-derived) curriculum through a computer program the kids use, which becomes much of the source of instruction. The teachers go and answer particular kids’ questions rather than lecturing. Retraining is needed, but they provide that too, and they do things in such a way that it can be done gradually, rather than having to get all the teachers to learn actual math suddenly in one summer. They’ve been around a while now, I don’t know what the results have been. But, interesting thing to look into.
Damn it, reading that linked paper makes me think I would have done a lot better under the Russian system of teaching maths than the “New Maths” which was probably getting into its stride when I was old enough to suffer under it.
This is why I only learned how to do long division from my Victorian-educated grandmother, not whatever method the New Maths was inflicting on us in class! 🙁
Russians do as well on Math PISA as Americans or Spaniards, but do really well on TIMSS Math.
Also, Art of Problem Solving is American and probably the best curriculum for elite math students (99th-percentile mathematical ability). Also, I’m pretty impressed by Khan Academy (also American), although a lot of its benefit comes from “let’s make sure you understand stuff before you move on.” (You won’t have a good time understanding algebra if your arithmetic is weak!) In his book, Sal Khan describes a summer math program he ran for students in grades 6–8; he started one group on grade 5 material, the other at “1+1=2”. Something like 6 weeks in, the latter group had overtaken the former.
I came pretty close to a perfect bubble, but one of my friends likes Joe Biden. Can I get a mulligan on that since he’s not technically a candidate anymore (or indeed ever)?
Re showing your working
Regarding the link between good ventilation and cognitive scores.
Personally I think this is entirely believable. I remember back in uni I sometimes had terrible problems keeping my focus during lectures, to the point of literally struggling to stay awake. It took me quite a while before I realized that this only happened in two specific lecture rooms. After that it was obvious: those rooms were poorly lit and very poorly ventilated. Talking to other students confirmed that they too had these issues, though mostly to a lesser extent than I did.
Where I work right now we have one meeting room in the middle of the building. If all doors are closed there is basically no ventilation. Again I find myself losing focus during long meetings in this room, and literally struggling to stay awake. I recognize this feeling now, and I simply open one of the room’s doors. I literally go from “sleepy, barely able to focus” to “fully alert” in seconds after opening the door. And this isn’t an effect of moving, because it also happens if someone else opens the door.
Adding on to the ventilation and cognitive performance studies:
Lawrence Berkeley National Laboratory did a study on this a few years ago:
so consider it replicated. We’ve known that really high CO2 concentrations (>5000 ppm) impair cognitive function, but only recently that moderate levels (>800 ppm) do as well. These levels are way above what will ever happen in our atmosphere, so please don’t associate indoor CO2 levels with climate change.
Ventilation standards are set by ASHRAE 62.1, and are based on the sum of a required ventilation per floor area and a required ventilation per person. Ventilation rates are designed for the max occupancy of the space. The study referenced in the post doubled ventilation rates per person, whereas a real building would already have a high base ventilation rate and typically lower occupancy than the design occupancy. So the difference in CO2 levels and resulting cognitive function shown in this study would not be as extreme in a real building. For example, the CO2 sensor in our office right now shows a CO2 level of 430 ppm, and it’s a really old office building designed back when 15 cfm/person was the norm, plus a per-floor-area component. However, it’s really common in buildings for dampers to get stuck, fans to run amok, the system to be horribly balanced, etc. This happens a lot in older buildings, especially cheap public buildings like schools that don’t have the funds to keep the systems in good repair. (Schools have one of the lowest $/ft² build costs of any buildings. Probably the lowest maintenance funds too.) And we expect kids to stay awake in them…
The ventilation rate standards are set by a big committee that gets together twice a year and decides how much they want to balance indoor air quality concerns with energy concerns. It takes a LOT of energy to heat, cool, and move ventilation air. The standards in recent years have gone towards a bit more ventilation, as building enclosures have gotten more air tight, and it’s common for newer buildings (green building or otherwise) to separate the heating and cooling from the ventilation systems to have dedicated, continuous ventilation, and/or have sensors that control ventilation rate on the CO2 levels in a room.
One interesting and somewhat horrifying thing to think about is that ASHRAE also sets these ventilation rates for vehicles, but they are rarely followed. How often do you drive with the windows up and no fan running? So the CO2 levels can easily go over the 1000ppm mark in cars, trucks, etc. Ever wondered why you got so sleepy on long road trips? I suggest keeping a window cracked or running the fan without recirculation while driving.
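The relationship between ventilation rate and indoor CO2 described above can be sketched with a simple steady-state mass balance. The generation rate and conversion constant below are assumed typical values for a seated adult, not figures taken from ASHRAE 62.1 itself:

```python
# Steady-state indoor CO2 from a simple mass balance:
#   C_indoor = C_outdoor + G / Q
# where G is the occupants' total CO2 generation rate and Q the total
# outdoor-air ventilation rate.
CFM_TO_LPS = 0.4719  # 1 cfm in litres per second

def steady_state_co2(occupants, cfm_per_person,
                     c_outdoor_ppm=420, gen_lps_per_person=0.0052):
    """Return steady-state indoor CO2 concentration in ppm."""
    q_lps = occupants * cfm_per_person * CFM_TO_LPS  # total outdoor air
    g_lps = occupants * gen_lps_per_person           # total CO2 generated
    return c_outdoor_ppm + (g_lps / q_lps) * 1e6

# At the old 15 cfm/person office norm mentioned above:
print(round(steady_state_co2(10, 15)))  # → 1155
# Doubling the per-person ventilation roughly halves the indoor excess:
print(round(steady_state_co2(10, 30)))  # → 787
```

Note that with purely per-person ventilation the occupant count cancels, which is why real standards add the per-floor-area component: it keeps sparsely occupied rooms from going stale.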
Vox reports almost exactly the same number as the Federal reserve. See here:
Also if you take the WSJ numbers and divide it by current GDP, i.e. 500 billion/16.77trillion, then you get a deficit of 2.9%.
These numbers above are 2014 and the deficit has dropped further.
Edit: I just read the other discussion of the deficit. Wow! I think that was a very misleading comment of Scott’s. It sounded very much like a claim that the graph contained inaccuracies. The graph was not inaccurate. It was extra misleading because the other graph linked to was not in terms of percentage of GDP. This could easily leave some readers of SSC thinking that the numbers in the Vox article were actually wrong. Why would you not use a graph that measured it on the same scale?
The graph, tweeted by the White House, was not misleading either. It showed what had happened to the deficit since the government took power, not what had happened previously. Of course, if they had shown what had happened previously under Bush, it would have made them look even better.
Vox criticised the rapid deficit reduction on grounds unrelated to the historical deficit, and in response to a tweet of the government. If they had said, “the deficit is too low, look at this graph, it’s the lowest it’s ever been”, then you would have a case. But they have clearly not done that. They have argued based on interest rates being low and the fact that the US could be investing in infrastructure instead of bringing down a deficit when it is borrowing at close to nothing. I often have cause to criticise Vox’s reporting, but not for this.
I think this deserves an edit of the link noting that Vox didn’t misrepresent the facts. Because right now I think it looks misleading and prejudiced. I am very surprised by this because generally SSC strikes me as the very opposite of misleading or prejudiced.
“Borrow when interest rates are low” didn’t work too well for Greece.
Moreover, I think it’s disingenuous for us to act like this low-interest rate environment is just a fortuitous happenstance we need to take advantage of. Interest rates are low because we’re making them low by printing money to buy bonds, essentially. So what people are really saying when they say we should borrow more while interest rates are low is: “we should continue to push the confidence people have in the strength of the US dollar so long as it holds out.”
The low interest rate environment reflects a decision based on inflation rate targeting. The inflation rate is low despite low interest rates. Put differently, the 2% inflation consistent interest rate is low.
This implies that the natural interest rate (I don’t like the term natural, but there you have it) is low. That is, risk adjusted returns to investment are very low and hence people are happy to use US, UK and other treasury bonds as saving vehicles despite the fact that the real (inflation adjusted) return is close to zero.
According to standard economic analysis at least, it is fair to say interest rates are low.
Also, it is not the “strength of the dollar” but the US economy in which they need to have faith. Given the role that good infrastructure plays in any modern national economy, borrowing to invest semi-competently (a big ask, I know!) should increase the risk-adjusted returns to US treasuries at a given interest rate, not harm them. Of course, if the borrowed money is either invested poorly or spent on consumption*, this should increase the interest rate.
*Unless that consumption expenditure, welfare payments, somehow reduces other costs such as political instability or crime.
“Also, it is not the “strength of the dollar” but the US economy in which they need to have faith. Given the role that good infrastructure plays in any modern national economy borrowing to invest semi-competently ( a big ask I know!) should increase the risk adjusted returns to US treasuries for a given interest rate, not harm it.”
Okay, but “the strength of the dollar,” given that it is a fiat currency, is, as you say, just a measure of people’s confidence in the ability of the American economy to keep producing stuff relative to the rate at which the Fed will dilute the value of each claim on that production through quantitative easing, etc.
The fact that people have not lost confidence in the US economy and the restraint of the Fed yet doesn’t mean they never will.
Re. infrastructure, I know it’s a common argument, but I just flat out don’t buy it. First of all, we’ve been hearing about all the wonderful infrastructure we would get out of the kazillions of dollars borrowed for TARP and the stimulus. To say nothing of TARP, which was an obvious and explicit bailout, even the subsequent stimulus turned out mostly to be a bailout: for bankrupt pension funds and the like. And one of the main purposes of “quantitative easing,” of course, is to give the Fed more power to bail out: not just the US government via bond purchases, but the housing market and so on.
And the vast majority of federal spending continues to be spent not on high speed rail, but on keeping our ridiculously massive military, social security, and medicare going. If all that had really been spent on infrastructure I would expect we could at least have some high-speed trains by now–or that Amtrak would at least not be literally going off the rails.
So really what we’re saying is “while the rest of the world hasn’t yet lost faith in the US economy, let’s borrow huge amounts of money from them and our own citizens so we can keep avoiding unpleasant political consequences for the time being.”
I wouldn’t have faith in the government to make better investments than the private sector even if the only goal were to increase US productivity through infrastructure, etc., but it’s obviously not.
re: Fiat – Of course people could have a crisis of confidence if it looked like the US couldn’t pay back its debt or if the Fed started creating high levels of inflation. The Fed has not created high levels of inflation, so there is no reason to think the Fed has been unrestrained. The probability of default or very high inflation can be inferred from the market. The market is willing to lend to the US government at close to 0%, which is a far lower rate than it will lend to nearly any corporation. This suggests a vanishingly low probability of default or high inflation. It could be a danger, but there are many potential dangers in the world; let’s focus on those that seem likely.
Re: Infrastructure and other things. The current deficit is less than annual spending on infrastructure and research and development. If we include education then way more than the current deficit is being spent on infrastructure. Of course there is a lot of non-infrastructure spending but I am not claiming that infrastructure spending is the only form of worthwhile spending.
Re: Bailout, the Fed is not bailing out the government with QE. QE was a semi-effective attempt to deal with the problem of not being able to get interest rates below zero in an economy with slack demand, evidenced at the time by unusually high unemployment.
Finally, there is a whole other debate to be had about the size of the state. This is separate from whether the natural interest rate is low and whether there is confidence in the US economy. There is huge confidence in the US economy, evidenced by the fact that markets are willing to lend at such low rates. The US does not need to be worrying about the deficit or the debt-to-GDP ratio right now. Of course, you might want to shrink the state for other reasons, but those are other reasons. I think the deficit/inflation mania is a smokescreen to shrink the state without having to have the argument about the size of the state. Among economists it is only taken seriously by a small minority; across the political spectrum, economists do not think that the US has or had serious cause for concern about its deficit or a major Fed-generated crisis of confidence.
“Bail out, the FED is not bailing out the government with QE.”
But most of QE consists of buying government bonds…
The Fed is a part of the federal government. It’s amazing to me that they ever managed to convince anyone otherwise.
“The Fed is a part of the federal government. It’s amazing to me that they ever managed to convince anyone otherwise.”
I think probably the greatest error of judgment made when designing the US federal government was in the notion that simply by being separate, different branches and institutions within the government would compete with, and check each others’ powers, rather than simply cooperating to allow all branches to gain more power.
Yes, of course they buy government bonds. This is the mechanism for conducting monetary policy. It is not a sign that the government is failing. Only in a very weird and non-conventional sense is it a bailout. If they didn’t buy up US bonds it wouldn’t be the debt that was the problem. The problem would be the high unemployment and low private sector investment.
You’re begging the question by assuming that low inflation=high unemployment. And sure, it’s not presented as a bailout, but it is a way for the government to spend more than it takes in year after year without explicitly raising taxes, which would piss people off. I see that as a political bailout. A government with a sovereign fiat currency can never need a financial bailout in the traditional sense.
I’m not saying they are intentionally or explicitly funding the government through sneaky means; rather, I’m pointing out that it seems rather convenient that politicians and largely pro-government intellectuals happen to have arrived at an interpretation of monetary policy which requires the government to spend huge amounts of money to prevent economic disaster.
Keep in mind, money is also technically government debt. Under 12 USC 411-412, Federal Reserve Notes are “obligations of the United States” and they must be collateralized by pledged assets of the Federal Reserve Banks, most of which are Treasury bonds.
So the US government issues IOUs backed up by… interest-bearing IOUs, the value of which is maintained by… creating more non-interest-bearing IOUs to buy up the interest-bearing IOUs… This is why the social security “trust fund” has always struck me as such a silly fiction:
A: Give me some of your money every year and I’ll give it back to you when you retire.
B: How do I know I can trust you with my retirement money?
A: Because I’ll save it in the safest asset there is… IOUs from me!
Though, as mentioned before, of course, what actually backs up our fiat currency is confidence in the productive capacity of the US economy.
I’m not particularly interested in whether to call it a bail out or not. I don’t accept bail out as a good description. However, it is not a substantive issue for me. The issue is whether the US deficit is or was a very serious issue, whether we should be worrying about very high inflation as a result of the Fed, and whether it is meaningful to talk of a low interest rate. I covered those points earlier.
I didn’t and don’t assume low inflation=high unemployment.
Economists’ beliefs about monetary policy do not require the government to spend large amounts of money to prevent disaster. Some economists’ beliefs about the limits to monetary policy require the use of fiscal policy in the event of an economic downturn. That issue is highly contested, whereas the monetary policy, as I have described it, is pretty much agreed upon.
Well, part of it’s the Fed holdings (this was shown in 2013), but the Fed in the 1970s also held a large portion of treasuries, larger than today. The dollar today is really strong, and might even weather a selective default.
It would have been more accurate to say that real rates are low and the evidence for the Fed channel is weak at best. I offered a particular alternative explanation, poor risk adjusted outside investment opportunities. That might not be the best explanation either. Here are some possible explanations for low rates:
That was because Greece’s low interest rates reflected a correct belief that the ECB would backstop Greek debts if it reneged. The US is in a very different position, in part because it is lending in its own currency, which it can print. Higher levels of debt do make a country more vulnerable to shocks, but so does a failure to invest in key parts of a nation’s infrastructure. Given that many of these investments need to occur at some point, it is better to make them when you can borrow cheaply.
How was that a correct belief? Greek debt holders lost ~80% in the 2012 default, which was engineered by the ECB to maximally fuck over private holders. The ECB backstopped Greece, not Greek debt.
I am happy to remove “correct” from the above statement; it changes little.
This said, I thought that private lending to Greece prior to the crisis got pretty well compensated in 2010 with the money lent to Greece by the Troika.
However, those who held on to the debt, or kept lending at higher interest rates, did get burned in 2012. If I am wrong on this I would not be massively surprised, Greece is not my area of expertise.
Where do you get your numbers from?
It’s always struck me as a little weird to focus on “deficit reduction” as a metric of goodness. Debt is the value which reflects how much shit we’re in; it’s what you should consider when you want to figure out how much to criticize past budgets. The first derivative of debt, deficit, is the value which reflects how fast we’re digging ourselves deeper into the shit; it’s what you should consider when you want to figure out how much to criticize current budgets. But the second derivative of debt? Why go there, and if so why stop there? The only mathematical heuristic that tells you u” is critical but u’ and u”’ are unimportant is “keep adding derivatives until you find one that doesn’t look bad”.
Debt is not necessarily, or usually, the amount of “shit” we’re in. Debt levels often reflect the confidence we and other countries have in our seeing continued economic growth and reliably paying our debt back. People don’t blink an eye at households borrowing 400–500% of their annual income in order to buy a house. I’m not arguing that countries should lever up as much as new homeowners (countries are not households), but there is nothing wrong with borrowing against your future income.
We’re not borrowing against our future income. We’re borrowing against our future borrowing–hence the howling that not raising the debt ceiling on schedule will bring calamity. The typical homeowner will actually have paid off that mortgage in 20, 30 years; there is no timeframe in which the US govt debt is plausibly paid off.
The national debt is not some amorphous lump sum. Most of it consists of bonds sold to various entities, each with specified interest and duration. If you have a managed retirement account in the US, you probably own some of them.
These bonds will actually get paid off in full, barring total atomic annihilation or something analogously severe. And the government will continue to issue more of them, but that’s no trouble as long as lenders’ trust in USG’s ability to pay stays the same.
Although it goes against the whole American debt-is-a-moral-failure thing, there would be nothing so terrible about a mortgage that was interest-only (IO), with the principal only coming due on sale of the collateral. A lot of people end up with that in effect through refinancing.
Anyway, a fiat issuing nation-state isn’t just a giant household. Trying to use the simplistic Puritan view of household finance (which isn’t even necessarily optimal for households) for a nation will lead you terribly astray.
The most important thing about a debtor is their ability to service the debt on an ongoing basis. For governments, bonds mature and interest payments come due on an ongoing basis. When operating a budget surplus, this debt service is easily cleared by tax revenue. However, when operating in a state of (constant) deficit, the ability to pay the debt due at any given time is fundamentally predicated on the ability to borrow at that time. Because revenue is insufficient to both pay to keep the lights on and service the debt, the options are to borrow, turn the lights off, print money, or default. If the ability to borrow evaporates, the options are to turn the lights off, print money like it is going out of style, or default.
This might be an interesting read.
Also, what I want to see is a graph of debt service cost as a percentage of revenue. I think it would be illuminating.
You seem to have missed 4) sell assets and 5) print money.
Oh, you beat me to it. I edited in printing money. I actually didn’t think of selling assets, but it is just as bad an idea as the rest. Virtually everything governments own is either essential, or generates income. If it is essential (and you don’t want to cut the services that depend on it), you’ll need to rent it back from whoever you sold it to, so it is just a backhanded loan. Probably worse, over generational time scales. Besides, who wants to own a military base if they can’t rent it to the military? If it is income-generating, you’re forfeiting the future income. Which really isn’t good for your ability to pay future liabilities.
The state, unlike the household, does not have a finite life span. It doesn’t have to ever be in surplus, as long as debt grows slower than nominal GDP.
If you have a debt level of 50% of GDP, then to keep the debt-to-GDP ratio stable the deficit (in percent-of-GDP terms) must be half the rate of nominal growth. If you have a debt level of 100%, you can keep debt-to-GDP stable if the deficit is equal to the growth rate.
You pay back more and more but can promise to do this so long as you are getting richer and richer. If we hit a point where growth permanently halts then we will need to move back into surplus and make cuts. But when that happens we will be much richer than we are now. So the cuts in the future would be cutting back to where we are now.
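The arithmetic in the comments above can be checked with a toy simulation. This is my own minimal sketch, not anything from the original commenters: it just evolves debt and GDP forward under a constant deficit and constant nominal growth, showing that the debt-to-GDP ratio converges to deficit/growth rather than exploding.

```python
# Toy model (illustrative assumptions, not anyone's official figures):
# debt grows by the deficit each year, GDP grows at nominal rate g,
# so the ratio r follows r' = (r + deficit) / (1 + growth).

def debt_ratio_path(start_ratio, deficit, growth, years):
    """Evolve debt/GDP forward; deficit and growth are fractions of GDP."""
    ratio = start_ratio
    path = []
    for _ in range(years):
        ratio = (ratio + deficit) / (1 + growth)
        path.append(ratio)
    return path

# With 4% nominal growth, a permanent 2% deficit pulls debt toward
# 0.02 / 0.04 = 50% of GDP, even starting from 100%:
path = debt_ratio_path(start_ratio=1.0, deficit=0.02, growth=0.04, years=300)
print(round(path[-1], 3))  # approaches 0.5
```

The fixed point falls out of the update rule: r = (r + d)/(1 + g) implies r = d/g, which is why a state that never runs a surplus can still keep its debt ratio stable so long as debt grows slower than nominal GDP.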
None of this is to say that government currently spends money well. Nor is it a defence of big government. This is just a description of how debt works and why the shouting about the dangers of American debt is mostly unwarranted.
The Forbes article doesn’t add much on these issues. All he did was point out that the rankings change when you rank debt against current government revenues rather than GDP. Oh, and that when you divide something by a number smaller than one, it gets bigger.
Current tax revenues are just that, current; we are nowhere near the top of the Laffer curve. It is unclear what the correct measure to use for debt is. The debt-to-GDP ratio is a reasonable summary statistic, though with drawbacks to be sure. However, debt-to-revenue is problematic because revenue is very responsive to policy, i.e. we can cut or hike taxes.
(see http://ftalphaville.ft.com/tag/national-debt/ )
The ranking changes when we use this other measure. We are supposed to be scared because the debt to revenue in the US is higher than Portugal and Iceland. There is even the claim we are in the ‘financial danger zone’, without evidence or explaining what exactly happens in the house of Kenny Loggins. What it really tells us is that debt to government revenues is not a reliable way to rank the probability of default as evidenced by the rates at which people are willing to lend to the US government.
Regarding original online fiction, one I’m in the process of reading at the moment is Worm, which is one of the best takes on the whole superhero/villain concept I’ve ever seen. Has anybody else here read it? (Warning: It’s very very long.)
I’ve read it; it seems to be somewhat popular in rationalist circles. I sometimes found it oppressive how relentlessly awful everything is in the wormverse, but the complex characters and their extremely creative use of their powers was impressive; I’d also recommend it.
I found it excellent, particularly in just the variety of characters and applications of powers. I’d recommend it to people too.
At times, the storyline poses “Meditations on Moloch”-style coordination issues; when the city or world is faced with a crisis of an existential level, superpowered individuals (hero and villain alike) put aside their separate agendas and struggles to band together against the menace… but, always lurking under the surface (and sometimes coming explicitly above-ground) is the jockeying for position that everybody must keep on thinking about, because anybody who actually goes so wholeheartedly into a truce to save the world will get screwed over by somebody else who does a little bit of underhanded plotting.
…but, if the gangs in the Bronx could have a truce to invent hip-hop, maybe the parahumans can have a truce to save the world.
They do something like that.
They settle on a weird game of cops and robbers where villains are generally treated leniently unless they break some “unwritten rules”.
Which, of course, incentivizes catching others violating those “unwritten rules” (or framing them as doing so) in order to gain the upper hand over them, with such gamesmanship sometimes taking precedence over actually putting all your efforts into saving the world.
I enjoyed Worm. It bears the marks of an inexperienced writer — overlong digressions, clunky dialogue, takes itself too seriously — but its sheer creativity makes up for a lot. It also has a wonderful, well-realized, deeply-flawed-yet-sympathetic protagonist. Definitely recommended to anyone looking for an interesting take on superhero mythology, especially fans of its spiritual forebears, the Wild Cards books.
Edit: I should point out that the story gets seriously grim later on, with explicit depictions of torture, sexual violence, and insanity. Even I found it disturbing, and I’m very, very hard to disturb. Readers with delicate sensibilities might want to look elsewhere.
I’ve read it. I think it drags on a bit in the middle, but it’s certainly very engaging; I remember reading several chapters a day, even on vacation.
I was introduced to it through LW, so I think it’s probably somewhat popular in this community.
It’s a lot of fun. By no means perfect, but the worst of the problems are only visible in retrospect.
>Omnilibrium is a social site that tries to improve online filtering. Instead of a big pot of Reddit-style karma that shows everyone the most upvoted posts, it tries to show everyone posts upvoted by people whose opinions have previously been correlated with theirs, with various customizable options to decide how much you want to be exposed to differing opinions. It needs more users for a good trial run, so check out their FAQ and then join in.
I don’t think this’ll work out, because we really like believing that we’re open-minded, so you can’t have “the world’s safest hugbox” as a main selling point. That being said, if it proves to be actually effective at its stated purpose I can see established social networks implementing the system, either stealthily or as an optional feature.
>we really like believing that we’re open-minded
Are you sure it’s not a projection? Maybe not everyone is like you; maybe they think that an open mind is like a fortress with its gates unbarred and unguarded.
I think that historically people (or, Americans, at least, can’t speak outside of that) really liked believing that they were open-minded, but that the internet is slowly changing that. It’s given all of us an opportunity to choose between truly open and anarchic forums and much more curated ones, and many people find that they prefer the latter–I certainly do. Do that long enough and you might start to question whether 18th century philosophers really did have the best and final word on the merits of the marketplace of ideas. Thus you have fumbling college kids raised on the internet intuitively, if not always wisely, calling for a certain level of RL moderation while generating endless clickbait for aghast Gen-X’ers and Boomers who just can’t seem to comprehend why keedz deez dayz don’t value free expression sufficiently.
>you can’t have “the world safest hugbox” as a main selling point
But you can have “Shows you stuff you’ll probably like” as a selling point. Every aggregator wants to sell itself that way.
If you use upvote to say “I agree” that’s a big problem. If you use it to say “interesting” or “well argued” then less so.
I don’t think “interesting” gets you far enough; the typical “interesting” comment is akin to the typical “interesting” article: clickbait making troll arguments, or zingers based on any party-line response the original writer didn’t remember to explicitly counter, regardless of how obvious those counters would be given actual thought. “Agreement” doesn’t work, but a simple standard of “interesting” encourages not putting effort into being correct.
“An honest and valuable attempt to genuinely engage” would probably be how I’d describe a good rule for voting, and I’m doubtful there will be much of a population interested in that; the bulk of the effect, I think, would be encouraging inclined-towards-zingers people who are mostly okay at the moment to go off the deep end.
You only have to upvote based on criteria that match your values; if you are really open-minded, and upvote a variety of thought-provoking people from diverse perspectives, you would (presumably) see the recommendations of others who did likewise.
Of course, if you liked to *think* of yourself as open-minded but were biased, you might not like seeing your hypocrisy revealed.
If sufficient numbers of people exist who are interested in valuable, thoughtful contributions made with genuine effort, and the algorithm is actually smart enough, it’d work for an individual user.
The problem that everyone else who is even mildly inclined to upvote political zingers and strong “takedowns” for their side is going to end up seeing nothing but, with decreasing actual comprehension of what it is they’re zinging and increasing tendency to view the enemy as a caricature of evil is still there, though.
It wouldn’t reveal hypocrisy, because if you were hypocritical, you’d just get bubbled with other people who were hypocritical in more or less the same way and become more certain that you were right in the way you acted.
I hope you’re right.
I’m a big fan of civility and an expectation of cooperation, and have little interest in dealing with people who either can’t or choose not to maintain it, so in general I’m in favour of the kind of moderated environments to which the term “hugbox” is generally applied, but filtering by *opinion* isn’t something I want and is something I’d think would be an awful idea.
Filtering out civil-to-borderline-uncivil disagreement would increase the ease with which everyone ends up strongly convinced of the absolute truth of convenient narratives, policies, and dubious assertions that agree with them; and when you only hear about your enemies and their reasoning through people who already agree with you, you end up assuming that the only reason anyone disagrees with you is that they’re an evil, power-hungry hater of all that is good.
I don’t want to deal with any more of this leaking out of toxic communities than I already have to.
Shouldn’t Omnilibrium be *inverting* their recommendation scores & encouraging people to engage with posts they disagree with, if their goal is rational discussion of controversial issues? After all, groups with very similar ideas tend to fragment and become split-partisan themselves (“bigotry is fractal”); if a group continues to engage with its opposite number, it’s less likely to spiral out into a kind of disconnected navel-gazing fringe group.
Are you trying to describe some sort of self-powering outrage generator?
Isn’t the Internet already one?
I think this might work out better than the reverse strategy of automatically assembling extremely partisan echo chambers who only know of their enemies through caricatures shared internally.
Although the way in which it encourages the uncivil people by making downvotes cause them to be more visible means that it probably wouldn’t work out great either.
Maybe a more complex strategy of favouring posts which are not controversial by people aligned with other people who make controversial posts you are on the disagreeing side of would work? Where controversial means “attracts lots of upvotes and downvotes”.
Let people who get unanimously downvoted sink through the floor as usual.
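The “controversial means attracts lots of upvotes and downvotes” idea above can be made concrete. This is purely my own illustrative sketch of one plausible scoring rule, not anything Omnilibrium actually implements; the function name and the min/max-based formula are my invention.

```python
# Hypothetical "controversiality" score for a post: high only when a post
# attracts MANY votes on BOTH sides; unanimous posts score zero, and tiny
# contested posts score low. (Illustrative formula, not Omnilibrium's.)

def controversy(upvotes, downvotes):
    """Scale total engagement by how evenly split the votes are."""
    if upvotes == 0 or downvotes == 0:
        return 0.0  # unanimous posts are not controversial
    total = upvotes + downvotes
    balance = min(upvotes, downvotes) / max(upvotes, downvotes)  # 1.0 = 50/50 split
    return total * balance

print(controversy(50, 45))  # hotly contested: high score
print(controversy(95, 0))   # unanimously upvoted: 0.0
print(controversy(3, 2))    # contested but tiny: low score
```

Under a rule like this, unanimously downvoted posts sink as usual, while the algorithm could deliberately surface posts that are well-regarded by people you usually disagree with.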
Not only automatically assembling echo chambers, but providing a pseudo-objective score that allows people to confidently assert that their echo chamber is in fact the best possible place for them to be.
It’s great in theory, but in practice mixed partisan spaces on the internet tend to be awful. There are a few narrow exceptions (including this place, only some of the time), but by and large internet-induced sociopathy destroys any chance for thoughtful discussion of contested issues. Offline is much better. People are far less likely to completely demonize their co-workers or brother-in-law than Random Internet Commentator #54.
I leave it as an exercise for the reader what implications it has that an entire generation is spending more and more of its formative years interacting in sociopathy inducing spaces.
One partial solution to the problem you point out is to have online spaces defined by a common interest. People in those spaces esteem each other for what they say and do with regard to that interest, which makes it harder to decide that someone is evil or stupid when he turns out to have religious or political views different from yours. It’s an online equivalent of your co-workers.
The Rialto, the SCA Usenet group, worked pretty well that way, and I expect there are other examples.
If you want that, you could just downvote the things you like and upvote the things you hated. It’s not as if there’s any special significance to an arrow pointing up vs. down.
The Facebook political bubble test seems screwy. For example, when I checked on my computer it said zero of my friends liked Carly Fiorina. But when I used the mobile app it said 9 friends liked her, and when I clicked to see the list there were 15 names on it. The same sort of thing happens with the other candidates. Not sure what’s up with that.
Wikipedia has a number of Earth flags and I like this one better.
The Death Star Flag is pretty badass.
I prefer Old Freebie.
*Earth contacts aliens, is invited to a meeting of the Galactic United Nations.*
*Every species there has exactly the same flag, a picture of a small blue planet against a big yellow star against the black backdrop of space.*
“Now, see, this one’s the flag for [Local word for dirt], and that one over there is the flag for [Local word for dirt.]”
“the government did grudgingly allow us to use the genetic tests we were using just fine years ago until they took them away.”
You’d be surprised how much FDA regulation of advanced medical technology is based on What The Reviewer Considers Icky.
“You wanna do what with human brains? Ew. No, no, no, I don’t care if you’ve found a way to stop strokes without heparin, you are not doing that to brains! EW.”
Oh, compounding pharmacies? You mean the ones that the FDA plans to regulate just like any other facility that produces pharmaceuticals?
Compounding pharmacies “solve” the problem of high drug prices the same way that Uber “solves” the problem of restrictive regulations on taxis; they solve it by redefining themselves into a different category of operation that isn’t regulated the same way. And part of the reason it wasn’t regulated the same way was that nobody was trying to make it a large-scale business operation before. And now that people are doing that, regulation will follow.
Uber has yet to be crushed; I doubt that the government is going to be too interested in crushing compounding pharmacies for keeping AIDS patients from getting ripped off.
The way bureaucracy works, the fact that being bureaucratic is embarrassing to a different part of the organization doesn’t stop the bureaucracy. Some guy whose job description says that he has to enforce the law will enforce the law. The part of the government that would actually be harmed by the embarrassment of enforcing the law is probably not his direct superior and it will be hard for their influence to reach him.
The ventilation study had a sample size of only 24.
But a sufficiently high effect size that p was still < .0001
Number of participants means a totally different thing for within-subject comparisons (same subjects in each arm of the test, taking multiple “samples” per subject) than for between-subject comparisons (each arm gets a different set of participants, as in typical drug trials.)
In perceptual psychology you typically have 3 subjects, including the experimenter self-experimenting, and the Reproducibility Project tells us that it produces results more reliably than social psychology, which tends to use between-subject comparisons on ~50-100 subjects.
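A quick toy example of why the within-subject design needs so few participants: measuring each subject under both conditions lets you subtract out each subject’s baseline, so the huge between-person variability drops out of the comparison. The numbers below are mine, invented for illustration.

```python
# Illustration with made-up data: six subjects, each measured in both
# conditions. Individuals differ a lot from each other, but condition B
# consistently adds about +2 for every subject.

import statistics

cond_a = [10.1, 15.3, 8.7, 20.2, 12.5, 17.8]   # baseline measurements
cond_b = [12.0, 17.5, 10.6, 22.3, 14.4, 19.9]  # same subjects, condition B

# Between-subject noise: spread across different people is large.
between_spread = statistics.stdev(cond_a)

# Within-subject noise: spread of each person's OWN change is tiny,
# because subtracting the baseline removes individual differences.
paired_diffs = [b - a for a, b in zip(cond_a, cond_b)]
within_spread = statistics.stdev(paired_diffs)

print(round(between_spread, 2))  # large
print(round(within_spread, 2))   # small
```

With the effect noise an order of magnitude smaller than the between-person noise, a handful of subjects in a paired design can detect an effect that would take dozens of participants in a between-subject trial — which is consistent with a 24-person ventilation study reaching p < .0001.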
The Okinawa nuclear war thing is not true at all. There were missiles there, a fact not officially revealed until 2007, but early accounts said that it was really boring, and nothing at all happened. As time went on, the source behind the story (who was one of the people who said it was boring) started to tell of incidents that he originally hadn’t mentioned, which got more and more interesting.
Also, there are dozens of factual errors. For instance, the author was a missile tech, who shouldn’t have been in the control center at all. Why was the person trying to stop WWIII telling a missile tech what he was up to? The missiles on Okinawa were Maces, which don’t go in silos. And that’s not how launch codes were handled. Ever.
This comes from several people who served in various portions of our nuclear apparatus. See here for more details.
Thank you ben, that was an interesting comment!
Yes, many of the technical details set off my BS detector as well. Mace was well before my time, of course, but there are veterans commenting on the original article saying the same thing.
And step back one level. The entire story is sourced to John Bordne, resident of Blakeslee, PA. There is absolutely no supporting information or evidence presented. We are told that “the US Air Force [has] given him permission to tell the tale”. There is no supporting evidence for that.
And there damn well should be, starting with the USAF’s press release with its version of the story. Really, there’s some guy with a story that, if true, is profoundly embarrassing to the USAF. If true, the USAF has the ability to keep the story Top Secret essentially forever, because it involves details of the command & control system for nuclear missiles (which have probably changed, but the USAF can say “they haven’t changed enough to make it safe to tell the story, and you’re not cleared to know any more than that”).
And the USAF is going to tell this one guy that he can go ahead and tell his version of the profoundly embarrassing no-longer-secret story, without having its own press corps tell the USAF-friendly version first? I can maybe, at least for entertainment purposes, believe a story that violates the laws of thermodynamics, but this one violates the laws of bureaucracy.
People keep saying “oh, I’d go back in time and assassinate Hitler, that would fix ALL THE PROBLEMS” Garbage. You want to fix all the problems by assassination, you go back in time and wax Kaiser Bill. Wilhelm II’s military fantasies were the reason for the military buildup in Europe, and his desire to be a big-league player was why the death of a minor government official from a third-rate country resulted in the First World War. And if Germany hadn’t got its butt kicked in that war, then the economic and social conditions that allowed Hitler to grab power would never have come to pass.
There is so much wrong with this comment that it’s hard to know where to begin, but the person you describe as a “minor government official from a third-rate country” was in reality the heir to the throne, in a country where the monarch ruled, not just reigned; said country was probably the sixth-most powerful country in the world, and Germany’s closest ally.
Prince Charles is an infinitely more inconsequential figure than Franz Ferdinand, but if the Irish Government were to assassinate the Prince of Wales on a trip to Belfast and Russia were to back them up, then a major war would very much be looming.
“so much wrong”
ah-heh. The point of the comment was that without Kaiser Wilhelm II, World War One would never have happened, and there wouldn’t have been a reason for World War Two. You aren’t actually saying anything to refute that.
“if the Irish Government were to assassinate the Prince of Wales on a trip to Belfast and Russia were to back them up…”
Right, so, what if Russia weren’t to back them up?
We’d have the English government send in the SAS to waste a bunch of revolutionaries, and there’d be some Very Serious News Stories about it, and then everyone else would get on with their lives.
But Russia did back up Serbia in the July Crisis. And not just diplomatically. When Austria-Hungary mobilised against Serbia, Russia mobilised against Austria-Hungary.
Serbian terrorism, and Russia’s support of Serbia, made a war between Germany and Austria-Hungary, on the one hand, and Russia and Serbia on the other, inevitable (cf Irish hypothetical). It had nothing to do with the Emperor – indeed, he tried his best to keep things under control, both to restrain the Austrians, and to limit the war to the East. It was the General Staff, and von Moltke in particular, who made sure that the war included France.
So no, you’re quite wrong on this score. You’re also obviously wrong that he was responsible for the military buildup in Europe (how he can be blamed for Britain’s adoption of the two-power standard is beyond me). And it’s very possible that a populist nationalist would have taken power in Germany even absent WW1. It’s not like Germany was on an incrementalist path to Westminster-style democracy; a socialist/communist revolution similar to what happened in 1918 was very likely sooner or later.
“But Russia did back up Serbia in the July Crisis.”
Without a huge German military and a Kaiser itchin’ to use it, there would not have been a July Crisis.
Which has been my point all along.
“how [Wilhelm II] can be blamed for Britain’s adoption of the two-power standard is beyond me”
Their buildup was a reaction to the German buildup, which was fueled by Wilhelm II’s military ambitions. And–as I keep saying–no Wilhelm II, no buildup.
“it’s very possible that a populist nationalist would have taken power in Germany even absent WW1.”
A society that isn’t experiencing the kind of economic and political deprivations of 1930s Germany might elect a popular nationalist as its leader, but it’s a lot less likely to invade Poland.
So you’re saying that without a huge German military, if Serbia assassinates the heir to the Austrian throne, nothing happens? As you seem to admit in your response to my Irish example, if no outside parties interfere, it’s inevitable that Austria will invade and crush Serbia. Possibly after some ultimata – so there is a July Crisis, but just a small one, the precursor to the Serbian War of 1914-16 in which Austria conquered Serbia and annexed some land; another minor page in the Balkan Wars of the early 20th century.
But in the real world, the Russians made it known that they would back up Serbia militarily. It was this Russian aggression (and desire for pan-Slavic influence) that made a wider war inevitable. Austria couldn’t allow Serbia’s actions, and Germany couldn’t abandon Austria to be crushed by Russia. Now, the Austrians also played the Germans somewhat, by getting the famous “blank cheque,” but I don’t think this changed the ultimate outcome. At any rate, the Emperor personally was trying to restrain the Austrians and get them to come to terms with the Serbians right up to the last moment.
As for the British military buildup being a “response” to the German buildup… yes and no. Yes, in the sense that Britain was responding. But no in the sense that they didn’t have to respond. British policy (the “two-power standard”) was that the Royal Navy had to be larger than the second and third navies in the world put together. It was this policy that led to the constant military escalation. Germany wasn’t trying to surpass the RN, they just wanted a rough defensive parity, but British policy was not to allow this. And indeed, this British policy was upsetting and scaring plenty of other countries, including France prior to the entente.
As for your claim that absent the unique economic and political deprivations of the 1930s caused by WW1, Germany would never have had an aggressive militaristic ruler… this sits ill with your claim that Wilhelm II was exactly that kind of aggressive militaristic ruler, and indeed that he caused WW1. They can’t both be true.
“Gentlemen, gentlemen! Surely we can agree that you’re both wrong.”
In addition to what Salem said, Kaiser Bill was hardly the sole source of militarism in Germany.
It was my impression that people were actively working on that one (OpenWorm etc.) I’d be curious what is holding them up.
I just happened to read this “rant” earlier today. I am no expert with respect to brain simulation, but I know a thing or two about biology, and I wholeheartedly believe that all of the points it makes are accurate… which ought to answer your question.
Perhaps you ran into that because Scott linked to it in this post? Indeed, FC’s quote is from that rant, so reading it again is probably not going to answer the question.
[Maybe it’s worth saying that in that rant (4) and (5) are elaborations on (1), while (2) and (3) are about how human brains are even more difficult. My answer, below, is that a version of (2) applies even to C. elegans.]
Oh, I saw it here & didn’t see Scott link to it. My mistake. I thought that the original comment referred to another discussion above.
In any case, I think that the rant does answer the question. The way I see it, it’s rather simple: At the present time, we don’t know enough about the underlying biology to simulate the brain of a nematode. The brain is almost a black box. Glial cells make up about half of the brain and CNS by volume, and yet their role in cognition is still a subject of much debate… so, for that reason among others, the cellular mechanisms we use to describe learning and cognition are incomplete. When our knowledge of the underlying biology improves, so will the models. To try and model a brain without first understanding its biology might lead to serendipitous discoveries, but it might accomplish nothing at all.
By “the question” I meant FC’s question – why is C elegans not done. The rant doesn’t really address this question. Points (4) and (5) are potentially answers to that question, but I don’t think that they are good answers.
I agree. The way I see it, it must be a combination of factors.
It seems likely that (4) and (5) have something to do with it.
The fact that we don’t fully understand the brain’s biology also plays an enormous role. C. elegans has something like 50 glial cells, some of which are presumably involved in movement (GLR cells) and others in modulating neuron maintenance, development, and activity — but these cells are, in general, not very well understood. As such, they must necessarily be difficult to accurately model. As far as I can tell, the OpenWorm project’s documentation page does not make mention of any attempt to model the activity and function of glial cells.
There are other unknowns — the behavioural and cognitive importance of neuropeptides, C. elegans-specific hormones like daumone, etc. Most of the identified C. elegans neuropeptides have neither a known role nor a known receptor. (At the same time, C. elegans has lots of orphan receptors!) And it’s known that daumone and other ascarosides are extremely potent modulators of C. elegans behavior — but small-molecule signalling in C. elegans is also not well understood — and whether the known effects of the ascarosides are modeled, and how they might be modeled, I don’t know.
Just connecting neurons in a sort of wiring diagram doesn’t seem to be doing the trick. I’m reminded of this passage from a (rather interesting) Chomsky interview:
“So if we take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don’t stop to ask if the wiring diagram is the right level of abstraction — maybe it’s not, so what is your view on that?”
Chomsky: “Well, there are much simpler questions. Like here at MIT, there’s been an interdisciplinary program on the nematode C. elegans for decades, and as far as I understand, even with this miniscule animal, where you know the wiring diagram, I think there’s 800 neurons or something …”
“I think 300..”
Chomsky: “…Still, you can’t predict what the thing [C. elegans nematode] is going to do. Maybe because you’re looking in the wrong place.”
Does anyone else here think “Connectomics” sounds like something a rival firm to Meccano would create?
Just me? 🙂
I don’t know specifically about OpenWorm, but a lot of previous projects failed because they were more interested in computers than in biology. Or rather, they gave up, in contrast to Blue Brain, which ran into this problem and ignored it and produced bullshit. A lot more biological input is necessary, like whether a particular connection is excitatory or inhibitory.
Let me rephrase that. The project isn’t done because no one is working on it. No one is working on it because there is no money in it, and because computer people are more enthusiastic about the project than biologists; but they don’t want to get their hands dirty. The reason I gave above is just to demonstrate that more biology work is necessary.
Maybe the project is hard and it wouldn’t be done even if people were working on it. I’ve heard from people who have very specific approaches, but they’re waiting for better microscopes. If people were really enthusiastic about this project, they wouldn’t restrict themselves to a particular path, but would utilize microscopes available today. Maybe they’d fail, but they’d produce partial results. Maybe those partial results will be subsumed by better techniques, but at least they would give us some calibration about what works and what doesn’t and how much detail is needed, so that we can determine the limiting factors and predict whether a new technique will be adequate.
RE: preschool. I think that the last fifty years have been one big long struggle against the idea that women should stay home and raise the kids.
“ah ah AAAHHHA AAAAAAH AAAAAAAAAHHHHH” jesus bro I can hear you screaming from here, calm the hell down. Okay, SOME MEMBER OF THE FAMILY should stay home and raise the kids. Maybe it’s the father, maybe it’s a grandparent who lives in the home (instead of A Home).
But think about it. Two-Income Trap. Latchkey kids (remember how this was a thing in the early Eighties?) All these studies showing that massive government programs don’t work as well as having the kids stay home while Mom watches YnR.
You are forgetting the key bit! [dons conspiracy tinfoil]
If one member of every family stays home, yes, all the outcomes are better for everyone except…..the IRS.
If you have a stay-at-home parent, they earn no income, no tax. The kids don’t have to go to daycare, no tax. They won’t go to pre-school, no tax.
Cui Bono…..feminism is a plot* by the taxman (Taxperson? GAHH, THEY’VE GOTTEN TO ME!) to convince people to hurt their children in order to move more of the nation’s unpaid but productive work into remunerative fields, which can then be skimmed.
Actually, Scott did a column on the Two Income Trap. And of course he completely missed all the criticism of that book which pointed out that the “two income trap” is really caused by taxes.
It’s a little counterintuitive as to why an apparently small tax increase has such a large effect, but think of it this way: if having two incomes multiplies both income and expenses by a fixed amount, and then increases the taxes by a small amount on top of that, the tax increase is small *as a percentage of total income* but is large *as a percentage of the discretionary portion of the income.*
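The arithmetic can be sketched with made-up numbers (none of these figures come from the thread or from the book; they are purely illustrative):

```python
# Hypothetical numbers illustrating why a tax increase that is small
# relative to total income can be large relative to discretionary income.

def discretionary_hit(income, expenses, extra_tax):
    """Return the extra tax as a share of total income and as a share
    of discretionary (income minus fixed expenses) income."""
    discretionary = income - expenses
    return extra_tax / income, extra_tax / discretionary

# One earner: $60k income, $50k fixed expenses, $10k discretionary.
# Two earners: income and expenses both double, plus (say) $6k more tax
# from landing in a higher bracket under progressive taxation.
share_of_total, share_of_discretionary = discretionary_hit(120_000, 100_000, 6_000)
print(f"{share_of_total:.0%} of total income")           # 5% of total income
print(f"{share_of_discretionary:.0%} of discretionary")  # 30% of discretionary
```

Under these invented figures, the same $6k reads as a trivial 5% of gross income but swallows 30% of what the family can actually choose to spend.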
Would this mean the two-income trap could be broken by changing tax policy? If so, how?
Yes, either don’t have progressive taxation, or reduce the taxes of two income married couples by enough to make up for the fact that they are now in a higher tax bracket.
(Also, you would have to avoid taxing income that is used to buy services that would be provided tax free by a stay at home parent. If you compare taking care of your own children to earning income and paying for daycare, the facts that daycare is paid for in post-tax dollars and that the people who receive the money for the daycare have to pay income tax make it more expensive.)
How does reducing the tax burden on two income families incentivize a stay at home parent? You should expect it to do the exact opposite.
You seem to be arguing at cross purposes to Sastan and DensityDuck.
“Avoiding the two income trap” means “incentivising two income families”, not “incentivising stay at home parents”.
The Two Income Trap is the idea that having two working parents isn’t as good as it seems. It turned out (using the author’s own figures and contrary to the author’s own conclusions) that that is because of the increased tax burden on two income families. In order to avoid it, reduce the tax burden.
I guess I got lost in the weeds between the larger point and the smaller point.
— I agree with you that the current tax system disfavors two income families
— But IF DensityDuck and Sastan are correct that it is a social positive to have a stay-at-home parent, THEN the “income trap”, or in other words a tax system that disincentivizes the second income, is a good policy response.
IF despite the tax system we still have a socially suboptimal level of one earner households THEN that argues that we need more intervention, not less.
Of course you might not agree that having a stay at home parent is a social positive. But if that’s what you were arguing it wasn’t clear at least to me.
Finally, with respect to the argument about the other post, I fail to see how your explanation (taxes) is better than Scott’s (housing). Even if two-earner families are paying a higher percentage of their income or disposable income in taxes than their 1970s one-earner forebears (which isn’t obvious to me given the overall changes in the tax code since then), it doesn’t seem like the effect size is nearly as large as the change in housing costs. Also, AFAIK the ‘we don’t feel twice as rich’ feeling being described is also felt by one-earner households today.
The post isn’t big on solutions (and describes Warren as all over the map) but reducing the cost of housing both by eliminating government programs designed to increase the cost of housing and by decoupling school assignment from location to the extent possible, sounds like a good start to me.
If housing costs were reduced households could decide to spend that new wealth on having a stay at home parent (and reap the additional tax benefits) or other things.
I actually don’t think the trap is caused by taxes. I think it is caused by social pressure.
“Oh what do you do?”
“I’m a mother”
“Oh, the poor dear, she has no job”.
The denigration of motherhood and homemaking is what causes it. Women decided that the freedom to do what they wanted to do in terms of career meant that they needed to do what men had been doing forever, and hated. Now female happiness is down, male happiness is up, and the kids are basically feral. GG!
If low performers do worse at highly competitive colleges and have worse outcomes because of it, isn’t that an argument for value-add at college and against signaling?
I think so yes. But one could explain it in terms of signaling. One just has to think that doing well at a “good” school is better than doing poorly at a “great” school in terms of signaling. It is not obvious why this would be true ahead of time however. So I do think the result supports value added theories of education.
Maybe signalling applies more to having a college degree in general than going to a highly competitive school.
Not quite; if failing at college is value-subtracting, then we would see similar things. (And how could it not be, given that you pay them and you don’t spend your time doing anything else?)
I’ve seen statistics that say going to college without obtaining some sort of degree is effectively a *complete* waste of time – your outcomes (statistically) in the workforce are identical to those of people who stopped at a high school diploma.
What statistics are those? I would not be surprised at the claim that going to college is not worth it for those people, given the opportunity cost (time in school, money spent), but I would be surprised at the claim that these are identical.
“If this negative effect is larger than the positive effect from going to a more prestigious school, the overall effect of affirmative action could be negative.”
The fundamental assumption of Affirmative Action is that everyone is exactly identical, and the any differences in outcome are the result of racist bigotry in the authorities. i.e. if black people score lower on IQ tests, it’s because the tests make a bunch of racist assumptions about what a typical person would know. If black people get jobs that aren’t as good, it’s because racist employers won’t hire them. If black people show up in jail more, it’s because racist cops arrest them more than they ought to do.
And, so, if black students pushed into schools by affirmative-action acceptances do poorly, it’s because the racist teachers and administrators intentionally screw them over, and therefore we need to be even more thorough in our quest to Eliminate Racism.
The counterargument to the undermatching hypothesis for law schools is that, actually, black students with the same entering credentials *still* do worse than white students.
One hypothesis for why that is that the culture of law schools is less comfortable for black people in ways that negatively impact their performance. (You could use the word “racist”, even, if you wanted.) I find that entirely plausible. Being in a minority group of whatever kind that people make negative assumptions about is probably not going to be great for your performance, and there may be things about the culture of law school (for instance, students self-organizing into small study groups, the specific professor-student class interactions) that make peer and professor judgement more important.
As per the Heritage article, white and black students entering similar-tier law schools with similar credentials pass the bar at similar rates, which wouldn’t happen if law schools had a culture that negatively impacted minority students’ performance.
Check out footnote 117 for some discussion of the back-and-forth that’s gone on regarding this. The author implies that discussion has been hypothetical, but some of Sander’s critics actually have analyzed the data in a way that’s different from him and determined that there is under-performance on grades that is not accounted for by entering credentials. Sander says it is “fatally compromised by methodological errors” and such. I can’t speak to whether it is.
Alternative hypothesis: quality of law school instruction does exactly nothing to impact bar passage rates; the differences between schools are entirely caused by selection effects. Having been to law school, this sounds plausible to me. It even fits nicely into the above racist theory, in that bar prep is generally done solo and away from the teachers of the school, so there wouldn’t be much of networking disadvantage.
I think that’s rather accepted wisdom at this point. But the bar is irrelevant to practice. Law school mostly is too.
I don’t think law school is a good candidate for mismatch theory.
First because all but the lowest tier law schools have undergone so much grade inflation that it is impossible to do that poorly. They have some ways of identifying the cream of the crop, but beyond that you are still an e.g. NYU grad. There aren’t any C- Stanford grads that are worse off than A- Pepperdine grads.
Second, because the legal profession itself is the most prestige biased one I’ve ever seen. There are avenues that are simply not open to even the very top student at a lower ranked school.
Third, there’s no real specialization within law school (though sometimes law schools like to pretend there is) so there’s no effect like the mooted STEM drop out mentioned in the linked article.
The only good reason to go to a lower ranked school (outside of geographic factors or similar) is scholarship money. But then again, not going to law school at all saves even more money (and time).
“Alternative hypothesis: quality of law school instruction does exactly nothing to impact bar passage rates; the differences between schools are entirely caused by selection effects.”
I don’t think that is consistent with the data. I don’t have figures ready to hand, but I have sat through a number of faculty meetings where one issue was whether we should copy policies of schools that produced better bar passage rates than we do with a lower LSAT distribution of students.
Selection effects are surely important, but I don’t think they are all that matters.
“Being in a minority group of whatever kind that people make negative assumptions about is probably not going to be great for your performance,”
Comments along these lines, along with arguments which assume that racial prejudice against blacks has a large effect on outcomes, have to deal with the counterexample of American Jews in the 20th century. There was widespread prejudice against them and elite universities were deliberately discriminating against them.
My guess is that private prejudice against a reasonably small minority doesn’t have a very large effect in a market system, because the members of the minority can manage almost as well interacting with the (say) quarter of the majority who are not prejudiced as they could interacting with all of the majority.
There are/were far fewer Jews than blacks in America – 2% to 10%. Jews also have a much easier time pretending to be white than blacks. There were also many discriminatory laws passed against blacks, restricting them in their choice of university. All these factors suggest that discrimination against Jews would have had less of an impact than discrimination against blacks.
But isn’t the point that advocates of this model of downwards discrimination have to account for the fact that Jewish performance substantially exceeded that of white students? According to your account they should have been worse off than whites, but nowhere near as badly off as black students.
I can well believe that prejudice implemented through law could impose sizable costs on the minority. I said “in a market system.” Discriminatory laws passed against blacks are not part of the market system.
I don’t think Jews succeeded by passing as gentiles.
The 23andMe tests that the FDA approved are, of course, tests that 23andMe had been carrying out for years, but they are of a particular class, carrier tests, that has been available without a prescription for decades.
“New study suggests that good ventilation in buildings increases cognitive test scores as much as 100% ”
It could also be just removing bad smells from the area. If someone farts and it hangs around forever, we’re going to spend at least a little bit of thoughtspace being annoyed by the fart smell.
Ah, the good ol’ starving family/trolley/violinist dilemma.
Was confused by “evidence-based literary instruction”. A/B tests on whether Shakespeare is better than Chaucer?
Turns out it’s literacy instruction.
I’d be very interested in “evidence-based literary instruction.”
What quality of literature can we teach children, and how soon, at what risk of the sort of breakdown John Stuart Mill suffered?
Omnilibrium is a social site that tries to improve online filtering
It’d be easier to just only look at One’s Own Side’s media in the first place, no? The last thing I want is to never be exposed to outgroup ideas at all.
Me, I want filtering that only shows me thoughtful, well reasoned ideas from all sides, and hides all stupid political/issue memes and clickbait.
Pity that’s orders of magnitude harder…
(I constantly block political sites on Facebook … not for disagreeing with me, but for disagreeing with me stupidly. Or agreeing with me stupidly. Ain’t nobody got time for that, not if they care about ideas.)
Maybe I should just stick to cat pictures.
“Me, I want filtering that only shows me thoughtful, well reasoned ideas from all sides, and hides all stupid political/issue memes and clickbait.”
And, as you have just demonstrated, you found it.
Not perfect, but enormously better than Facebook is or Usenet was.
Actually, Omnilibrium is supposed to identify stuff that is appealing to both sides. First it finds the sides, then it can find stuff with uniform support.
The trouble is that everyone’s idea of thoughtful and well-reasoned is relative to their own side’s assumptions. To a certain flavor of leftist, for example, everything that implies skepticism of anthropogenic climate change betrays hopeless ideological bias and is better off being tarred and feathered and run out of town on a rail. Meanwhile, to a certain flavor of rightist, everything that doesn’t imply skepticism of anthropogenic climate change betrays hopeless ideological bias and is better off being tarred and feathered and run out of town on a rail.
(I’m not trying to derail this into a global warming thread; it’s just one example. There are lots of others, many even more mindkilling.)
On the ventilation study: a glaring issue is that the air quality conditions were not counterbalanced by time of week.
They had people come in 3 days a week (Tuesday, Wednesday, Thursday) for 2 weeks. Like this:
Day 1: Tuesday, Green + (positive)
Day 2: Wednesday, Moderate CO2 (moderately negative)
Day 3: Thursday, High CO2 (negative)
Day 4: Tuesday, Green (positive)
Day 5: Wednesday, Conventional (negative)
Day 6: Thursday, Green + (positive)
Notice how two of the positive green conditions are the first day of the week (when people are most refreshed). And how the negative and moderately negative conditions are in the middle of the week.
They get 81% explained variance in cognitive scores by condition! That’s so extremely high that I can’t imagine that the only thing coded in condition was the air quality they were trying to manipulate.
A reasonable size effect is maybe 10-30% of the variance.
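To see how an uncounterbalanced schedule can inflate “variance explained by condition,” here is a deliberately extreme toy example (numbers invented): even if air quality had zero effect and scores were driven only by day-of-week freshness, the condition labels would still soak up a large share of the variance.

```python
# Toy illustration: a confounded schedule lets "condition" explain
# variance that is really a day-of-week effect. All numbers invented.

def eta_squared(groups):
    """Between-group sum of squares divided by total sum of squares."""
    all_scores = [s for g in groups.values() for s in g]
    grand = sum(all_scores) / len(all_scores)
    ss_total = sum((s - grand) ** 2 for s in all_scores)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2
                     for g in groups.values())
    return ss_between / ss_total

# Scores driven ONLY by freshness: Tuesday +2, Wednesday 0, Thursday -2.
day_effect = {"Tue": 2, "Wed": 0, "Thu": -2}
schedule = [("Tue", "Green+"), ("Wed", "Moderate CO2"), ("Thu", "High CO2"),
            ("Tue", "Green"), ("Wed", "Conventional"), ("Thu", "Green+")]

by_condition = {}
for day, cond in schedule:
    by_condition.setdefault(cond, []).append(day_effect[day])

print(eta_squared(by_condition))  # 0.5: "condition" explains half the variance
```

With a real day-of-week effect much smaller than ±2 this toy overstates things, but it shows the mechanism: any variance source aligned with the schedule gets credited to the conditions.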
That’s actually a decently well-designed flag. I’m surprised.
I loved The Northern Caves — it is possibly the first time an author has nailed the voice of the “crazy, pretentious, garbled, early-2000s forum oracle” in a 100% convincing way (so convincing I have to wonder if Nostalgebraist actually was one) — but it had one enormous, glaring flaw. Without spoiling anything too badly, the flaw is the one Scott identified in his review, the central, “creepy” hook, which makes little sense and adds nothing to the story. Still an excellent read, though, a sort of House of Leaves for the internet generation. Fantastic job on the character voices, right up to the podcast at the end. You can practically hear those guys.
I’m just going to leave this here and bemoan the fact that politicization has yet again made it extra difficult to figure out what is true (or, perhaps, motivated a bunch of people to find flaws in research, making it harder to wade through the morass of claims and counterclaims, but overall giving someone determined to figure out the truth better raw materials for a map of a complicated territory).
I think this is telling:
Admitting in your conclusion that the purpose of your paper is to kill a heresy with fire is not really conducive to actually killing it with fire or, failing that, a mature discussion about it.
So I will join you in bemoaning the politicization of interesting questions.
Edit: Also, reading the footnotes, he seems to reference his own unpublished work an awful lot. Which, okay, whatever. But using your own unpublished work as the foundation of your methodology does not seem like a model of sound practice. It seems like you would want the paper where you propose your novel procedure to be published and at least notionally vetted before diving headlong into using it in politically charged fields of research. And aside from a brief description, the only thing Wikipedia says about the methodology in general is that it is frequently criticized, and how to get it even more wrong than usual; much less does it say anything about his particular procedure for doing it.
So, not being a statistician, my question is: is matching actually a well-accepted method, and is he doing it right?
You ought to say that this is about mismatch theory, rather than just dropping it with no context. This link might last longer.
No, politicized social science research is better. Run of the mill stuff makes the same errors, but it’s not worth it for anyone to rock the boat and challenge it.
1. Am I allowed to use different “names” on a whim here, or is it like Econlog where a webmaster hunts you down and demands you use the one you originally commented with?
Cool! what are some of those conferences? I’d like to submit talks to them.
3. Does anyone here do a lot of public speaking? How did you improve your craft? I’ve been videotaping my talks and writing down things I should do differently next time and trying to remember to do them, but is there a more methodical way to get better at public speaking?
4. In that Joe Biden article, not one mention of the war on drugs, the creation of the office of Drug Czar, or civil asset forfeiture. Well, it’s the NYT, go figure.
5. Thomas Sowell already made that anti-affirmative action argument. (Oh wait I see that other people pointed that out. Good.)
You can use the names you want, but the Gravatar will make the link obvious unless you also change your email address. I wouldn’t recommend leaning heavily on socks, though.
I don’t think people pay enough attention to most Gravatars for the link to become obvious, unless one is actively looking for sockpuppets.
Yeah, I definitely wouldn’t remember someone just by their gravatar.
I don’t even recognize my gravatar.
Me too, other than whoever it is – another Anonymous, I think – who has had the misfortune of Gravatar turning their email address into a swastika.
I always remember my Gravatar, and make sure to post it on all my social media profiles to make sure people know it’s me.
I remember the swastika avatar.
Of all the things to be remembered for, this is the one that sticks.
OTOH, the channers have a point with Anonymous posting. Other posters should engage the content of your post, not the person who made the post. Not having a consistent identity between threads helps with that. If the Anon is abusive, then the webmaster checks the IP logs and bans them.
Not having a consistent identity WITHIN threads makes discussion much harder. (Although I do find it helpful when I switch between trolling and posting seriously in the same thread.)
I am a big fan of content devoid of identity.
I am also uncomfortable keeping a cohesive identity but that is more a psychological problem than how I think others should behave.
I have had a decent amount of success with public speaking. If I have ample time to prepare and the talk is important, then I will write the “ideal” form of my talk, record myself reading it, and listen to that recording over and over again. This serves several functions. First, it cements the flow of the talk in my head. Second, it makes good phrasings available to me. Third, it suggests improvements and highlights weaknesses in my planned talk. When I finally give the talk, it is more like a riff on the script that I had recorded and listened to, benefiting from the comfort and familiarity that I obtained through repetition.
I honestly believe this approach is better than simply practicing over and over. Practicing doesn’t actually involve any feedback; there’s no circuit of improvement.
My usual approach to public speaking, for a very long time, has been to write an outline to tell me what I am supposed to say in what order, not try to write out and memorize actual wording. In my experience, people who are reading or reciting from memory a text sound as though that is what they are doing unless they are much better actors than most of us.
I used to use a single 3×5 card for my outline. Nowadays it’s about a page produced with a word processor.
If I gave a talk just off an outline I’d be “um”ing and “uh”ing and using the same idiosyncratic phrases over and over again while I searched for examples to put next to the bullet points. And I’d have no rhythm to my speech whatsoever. (I know because I’ve tried this method.)
I’ve seen videos of you giving talks and I don’t remember you ever having those problems. What’s your secret? Is it just an extremely detailed outline? How do you remember to use or not use certain words or turns of phrase? How do you lay out your “cues” to look back down at the outline so that you’re not just glued to it?
I don’t have an extremely detailed outline. But I’m usually talking about ideas I have talked about many times before, whether in public talks or conversation.
That’s exactly what I’ve been doing to prepare for my talks, and it’s worked extremely well.
But then what about when you actually give the talk? How do you train yourself to be less nervous when addressing large groups, insert pauses appropriately, use a more natural voice and not talk too fast, etc.? That sort of thing requires practice, but what’s the right way to practice it if you don’t have much control over how often you’re giving talks, and are only giving them once every few months for now?
Also, do you use a system for analyzing your talks after you’ve given them or do you just kinda know what to look for?
PS. @Moridinamael: Nice to happen across someone else who read Seveneves. I thought the coolest thing in it was “Amistics” which he sort of drops like a $100 bill and then moves on without looking back.
Take control. Join a club that has talks. Grabs some friends and demand that they listen to your talk.
Yeah, Stephenson is definitely one of my favorite authors.
As for actually improving in your style and delivery, I have some thoughts, but I’m not really an expert so YMMV.
On confidence, I choose deliberately to put myself into a frame of mind that goes like “this is important, this is going to be an opportunity to test my abilities, this is my opportunity to be magnetic, I am excited about this.” This helps me cultivate an attitude which effectively neutralizes nervousness, worry, or the kind of torpor that you sometimes see where a speaker has convinced themselves that it *isnt* important in order to avoid stressing about it. Basically I cultivate powerful eustress so that distress doesn’t have a chance to take hold.
As for improving technique, I just try to get objective feedback. I would like to become part of Toastmasters or something to improve my abilities, but I haven’t had the time.
Thanks. I’ll keep videotaping my talks and analyzing them afterwards–can’t get much more objective than that. I’ve done that enough times that I’m capable of picking apart my “performance” pretty objectively.
I see your point about confidence, too. The talks I’m giving lately are all basically about the same thing–an original idea I’d like to be a thought leader with–and though intellectually I feel confident about it, I also am just not the personality type to say “Yes, this is important dammit and I’m the one who’s gotta go out there and let them know how it is and screw all the naysayers”, so I could use a sort of ritualized way of doing that (pumping myself up), to get me over the rationalizing hump of “Well, c’mon, there’s more important things in the world and there’s a lot about this you haven’t figured out yet, and you’re really a nobody at this point.”
I’ll look into Toastmasters…I’ve always kinda wondered what kind of commitment it requires.
You can change names as long as it isn’t to escape a ban or something.
It’s too easy to escape a ban here. I first thought my attempt at getting unblocked by the spam filter here was a method of getting around a ban.
Nah, just decided I want my online anonymity to be a tad more secure.
At least you’re keeping a constant Gravatar. I look at those things.
Well you gotta draw the line somewhere!
“Anonymous” with frequently changed emails is probably a superior solution, though.
>Female education decreases teenage fertility, but does not have a more general lifetime effect on fertility.
I spent five minutes trying to work this out – people are making up the difference later? How? Fertility treatments? – before I realized that it meant “if you ignore teenage pregnancies, education doesn’t significantly decrease fertility”.
I just assumed it was causation backwards through time. If you don’t get a college degree, then you are more likely to find that you have had a teenage pregnancy. Surprise! Finish that college degree, ladies!
I was wondering was this backwards – teenage girls who get pregnant tend to drop out of school and not go on to college/drop out of college if they got pregnant while there, so their level of education is lower.
Conversely, if you’re motivated to stay in school and go on to do that degree, you are more careful about getting pregnant, or likely to get an abortion/give up the baby for adoption.
So young women who get pregnant don’t complete their education, and those young women who do complete their education then go on to have babies later in life (I was going to say, “as is normal”, but I really think ‘having babies’ is beginning to be abnormal, in the sense that “you can’t have a baby now, think of your education/career!” and places like Apple offering to pay for female employees to freeze their eggs so they can have babies later – you know, after the important stuff gets done).
There’s a term for that – the false life-plan.
I understood it to mean that a teenager with a lower level of education will have more children *in her teens* than one with a higher level of education, but if you compare the number of children the two have by (e.g.) age 50, they will be the same. It’s the difference between having your 2.3 children in your teens and early 20’s, vs late 20’s and thirties.
That’s what I thought too, at first; but I think I was mistaken, reading the link.
What definition of “pre-K” do all of these studies use? In my district, most or all incoming 5-year-olds were tested for “kindergarten readiness”: those who were deemed to have the appropriate levels of intellectual/emotional/social maturity started immediately in kindergarten, while those who were not were put in a year of pre-K first.
Under this model of pre-K, the results you mention are utterly unremarkable. The pre-K students do better in kindergarten because they’ve had a year’s head start on the whole going-to-school deal, but after a few years that advantage is no longer very relevant and the students who performed better on the initial readiness test have retaken the lead.
Obviously I realize that this is not what every school district means by “pre-K”, but are the studies all nuanced enough to exclude students from districts like mine? Or are they just assembling data about students who were and were not in a class with the word “pre-K” on the door?
I heard that Turing once picked up a dead baby, and the baby came back to life.
Is this how our culture makes saints now?
Insufficiently meta? Not if the point of “hate speech” laws is to ban everything but your favoured ideology. If Putin was trying to put a traditionalist Russia back together, this is how you’d expect him to do it.
It’s only insufficiently meta if you’re a liberal. And they’re an endangered species.
“I heard that Turing once picked up a dead baby, and the baby came back to life. Is this how our culture makes saints now?”
Since I need somewhere to share this – a patient of my colleague’s suddenly got really sick after going to see a dead saint’s remains. I think this provides a nice counterpoint to all of the people who suddenly recover after going to see a dead saint’s remains.
What I want to know is: which saint?
Was this St Marie Goretti? I saw in the news that her relics were on tour in the USA.
What kind of sickness? Streaming head cold? Collapsed in a heap on the bathroom floor at 4 a.m. and an ambulance had to be called?
Morbidly enquiring minds want to know! 🙂
I’m more concerned with the patient’s other symptoms. Extreme photophobia, porphyria, and garlic allergy, perhaps?
I can’t track down the reference, but I seem to recall that something similar happened on one of the occasions when John Baskerville’s body was put on display.
Take that, Kenneth Kolson!
(Kenneth Kolson, back in 2003, wrote “Big Plans: The Allure and Folly of Urban Design”. In that book, he said of SimCity, “Some social issues are dealt with in a hopelessly simplistic manner. Crime rates, for instance, instantly plummet with the construction of a police station. Would that it were so simple.”)
>The Person-Guy tells you that he’s getting really ‘into’ candles. He spends most of his day lighting candles with a specialist Egyptian cotton taper, and then extinguishing them with the tips of his fingers. He goes to trendy candle clubs to hang out with other Person-Guys. He subscribes to Candle Lighter’s Monthly.
Is Wint Person-Guy?
I don’t think the ethical offset argument is so good. I think the way it works is that people eat meat, and they enjoy eating meat, and everyone they know eats meat and finds it socially acceptable. Then they take this tool (ethical offset calculation) and do the calculation. And if their calculation returns a small number, then they will say, “aha! our social norm of eating meat is justified!”, and if it returns a large number, they will just ignore the result, because stopping eating meat would go against everything they were socialized to do and what they enjoy doing.
What if eating meat were actually more expensive than eating vegetarian (I’m pretty sure it is in reality)? Then by eating meat you’re further causing harm by not donating the difference. Can you make up that harm by additionally donating the difference? Why not apply this “ethical offset” to everything? I could buy a cheaper car and donate the difference to charity, but instead I will buy a more expensive car and donate the difference anyway? Am I allowed to choose which choices I ethically offset and which ones I don’t? At what point does it become “ethically offsetting” instead of just living your life as you wish and donating the amount of money you wish?
I think the key difference is that it allows you some rebuttal to people who have messed up moral intuitions and try to shame you specifically for eating meat. It’s just a social tool to justify what you shouldn’t have to justify (the fact that everyone is inherently selfish to some degree, and that there’s nothing special about the selfishness inherent in eating meat).
The only reason “don’t eat meat” stands out from any other moral prescription (such as donate the maximum possible amount of your income to charity) is that I think that depending on some personal factors, reducing your meat consumption drastically might not have very much of a downside to you, whereas donating most of your money obviously would. If you disagree that it would work for you, then, well, okay! Let’s not get all obfuscatory with the ethical offsets…
This advice is presumably intended for people who believe they ought to donate all their income beyond what they need for subsistence, but don’t. For those people, they can increase whatever amount they are currently donating by an amount they believe is equal to the utile cost they impose by eating meat. I suppose it requires some way to commit to donating a certain amount beforehand, so you can’t just ignore the money you believe you owe as compensation for your meat-eating.
Although the justification for donating it to an animal welfare charity, rather than whichever charity you believe creates the most utiles per dollar, still eludes me.
But that’s exactly the point I am making, it depends entirely on your reference point. “For those people, they can increase whatever amount they are currently donating by an amount they believe is equal to the utile cost they impose by eating meat”. I don’t get why they would not donate the utile cost imposed by /every single other action they take which is not giving 90% of their income to charity/.
>Although the justification for donating it to an animal welfare charity, rather than whichever charity you believe creates the most utiles per dollar, still eludes me.
It makes perfect sense if you consider it’s about social rebuttal more than it is about a coherent moral philosophy.
Their argument typically seems to be “moral failure”. In other words, they feel they ought to do exactly that, but don’t only because they’re flawed selfish humans rather than perfectly benevolent utilitarians.
The best argument that I come up with, while trying to steelman that position, is that it allows for positive outcomes for people who believe in consequentialism but alieve deontology, or at least the Harm/Not Help distinction.
Maybe I missed this, but did anyone in the discussion about success of African immigrants vs success of American-born blacks consider how long and whether the immigrants were exposed to American school education, and how long to education that was not American school education?
It might be the case that American school education (especially in public schools) does not compare with the education children get in their native countries, or, worse, that it has a negative effect on the children’s education. (From what I’ve seen, this is certainly mostly the case for children from Eastern Europe.)
Sorry if somebody else brought this up before.
Come on; that’s surely nonsense. U.S. schools are some of the best-funded in the world, and second-generation Black immigrants regularly achieve much higher incomes and educational backgrounds than first-generation Black immigrants. This is also the case in the U.K. This upward mobility is puzzling to me as a hereditarian, but it still exists.
And Eastern European school systems are difficult to compare to American. What evidence makes you think they’re better? American schools are certainly better than almost any Hispanic ones (including Chilean, Uruguayan, and Mexican) for Mestizos.
“Best-funded in the world” does not actually mean much – not for math education, at least. “Higher incomes and educational backgrounds” is also not the same thing as better education.
I do not know much about schools that black immigrants come from, but I do know that a child of Eastern European immigrants who is born in the US will normally be a lot worse educated than a child of Eastern European immigrants who managed to get some Eastern European school education before coming to the US. Whenever this is not the case, it is because it took a lot of work by the parents to bring their kids to the level they would have been at if they were educated in Eastern Europe.
As I pointed out above, Russia scores really well on TIMSS, less well on PISA (similar to the U.S.) on math:
Poland and Estonia score quite well on PISA math, though.
It really depends on which classes the kid goes to. If he/she gets into Algebra by 8th grade and can test out of Geometry, he/she can spend 10th grade learning Calculus. It depends on the school.
I know Steve Sailer has done a ton of comparisons of educational outcomes broken out by race, and IIRC, his broad conclusion was that the US system is pretty good, once you do that.
Basically, most racial groups outperform their home countries when they are in the US system. The US just happens to have a system with many groups that underperform the average.
By this metric, we might have little to fix. Once the IQ equalization in minority groups levels out, we’ll be just fine.
I taught for two years in the Ghanaian education system as a Peace Corps volunteer. While it wasn’t one of the countries used as a comparison, I can definitely say that the education system there is far more dysfunctional than the education system in America.
Somehow that’s not a ringing endorsement of the US system.
It wasn’t intended to be a statement about the US system, rather it is about the Ghanaian system. The difference is vast.
So my question was whether anyone studying IQ of black immigrants actually looked at how much and what kind of education they got before coming to the US. Apparently, this question is answered by a “no”.
Maybe this is really not a factor for most black immigrants. This is, however, hugely important and defining for immigrants from some ethnic groups. IMHO it’s a pity that nobody bothered to figure out whether this is the elephant in the room that skews all the data.
Maybe others have pointed this out first but Thomas Sowell and others have been giving this(and other) arguments against affirmative action for a long time.
A lot of people in this comments section and elsewhere have responded to Chisala’s article by wondering whether the Somali refugees that ended up in America might have been positively selected for intelligence somehow. A commentor on Chisala’s article who claims to be a former Somali refugee has weighed in on this subject:
I wonder if small hated ethnic groups are often hated because they outperform neighboring ethnic groups, à la the Jews?
Having just read The Northern Caves via your link, I have a different interpretation of it which is probably completely out to lunch.
And even if it’s right I’m not sure if it’s any help to point it out since the ambiguity is probably important to the story.
Nccebnpuvat gur fgbel sebz zber bs na FS-bevragrq onpxtebhaq, vg frrzrq gb zr gung gurer jrer uvagf gung guvf jnf fbzr xvaq bs zrzr-onfrq zvaq-pbageby, jvgu shegure uvagf gung vg jnf sebz nyvraf. Abg arprffnevyl sebz gur Fbyrabvq Ragvgl, ohg gur Fbyrabvq Ragvgl frrzf yvxr n uvag gb zr.
Gur cybg qbrf frrz (gb zr) ernfbanoyl pburerag sebz guvf cbvag bs ivrj.
Well damn. This is completely prosaic and probably not what the author intended (if indeed he intended anything this well-defined at all), but I can’t deny it fits all available evidence.
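(For readers unfamiliar with the convention: the spoilered comment above is ROT13-encoded. A minimal Python sketch for decoding it – the `rot13` helper name is mine, but the `codecs` codec is standard library:)

```python
import codecs

def rot13(text: str) -> str:
    """Decode a ROT13 string (ROT13 is its own inverse, so this also encodes)."""
    return codecs.encode(text, "rot_13")

# A short phrase from the comment above:
print(rot13("gur Fbyrabvq Ragvgl"))  # -> "the Solenoid Entity"
```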
The article on Detroit is interesting, but I have trouble seeing how it could support Scott’s “optimistic” view. The article’s point is that life under a non-democratic, appointed city manager is pretty horrible: undrinkable tap water, handouts to companies that befoul your environment with lethal waste, shock privatization leading to bad schools becoming even worse.
You can certainly argue with the article’s conclusions, and even if you accept it at face value it’s clearly only describing a few fragments of a very complex story. But I don’t see any way you can read it as “[proving] that people are good at limiting this extreme remedy to the times when it’s needed”.
“a figure so high that it seems impossible no one noticed this before.”
Oh I’ve noticed it before. I’ve been fighting over it with every single employer (sans the current one) ever since I entered the workforce. I am extremely sensitive to this; I may be well rested, but in a badly ventilated and hot room I’ll end up sleeping on the keyboard all day, no matter how much coffee I drink.
Since a lot of people here (including Scott, I think?) seem interested in basic income guarantees, here is a pretty good discussion of the ethics of that from a libertarian perspective: https://www.youtube.com/watch?v=L0LabBo7Amk
Let there be sung Non Nobis and Te Deum
Liverpool were away to Chelsea at Stamford Bridge today and won 1-3.
The reign of our new overlord getting off to a hopeful start, or flattering to deceive? Anyway, Mourinho looked like he was sucking lemons by the end, which is always pleasant to watch 🙂
I’m still amused that the Chelsea players apparently attribute their terrible season to when they got shellacked 4-2 by New York Red Bulls’ B team in a friendly.
Karma’s a bitch, man 🙂 (Now, if only LA Galaxy hadn’t decided to emulate Liverpool so as to make Stevie feel right at home, up to throwing away their leading position in the last weeks of the season…)
Mourinho is even more annoying than Ferguson was, and that takes some doing. I saw some discussion that what he actually wants to achieve by his performance is to take the heat off the players; after all, if the press conference is busy asking him why he said that thing about Rafa’s wife, they’re not asking him “So is this going to be John Terry’s final season with the club?” or the likes. I’m not of the generation for whom Chelsea is the main rivalry (that will always be with Manchester United), but I’ve never liked his attitude, and I’m not even mollified that he admires Steven Gerrard and tried to sign him three times.
Apparently the home support was reduced to singing the Stevie G song in the final ten minutes, which makes it all the sweeter. Chelski* ain’t got no history! 🙂
*Yes, I bowdlerised my initial choice of epithet. I am trying to be gracious in victory, after all. I am also highly amused Jamie Redknapp puts some of the improvement down to Kloppo being a good cuddler:
Routing around democracy? That’s begging the question, whereas it seems this dispute is more about the (intractable) problem in democracy involving subdivision of political authority — who defines the boundaries between the units that govern. And by “boundaries” I mean both horizontally between different jurisdictions and vertically in the allocation of authority between Federal/State/Municipal units.
As it happens, in the US, all the government at the State level is democratic only at that particular vertical/horizontal scale. Some have adopted State Constitutions that limit the power of the legislature/governor to abrogate home rule for some vertical allocations, but nevertheless, at the end of the day all government authority in the State is by the grace of the State. And each democratic “State Unit” can completely dissolve all lower verticals and govern the cities/counties/school districts/water districts/transit authorities directly if they so choose, following the democratic process enshrined in their respective Constitutions.
So in Michigan, the State legislature and governor have exercised that power (in ways that I disagree with) but that I have to acknowledge as legitimate within the structure of their government. The City of Detroit governs only through delegation of authority from the State of Michigan — and that power can be revoked just as well as granted. The City of Detroit has no moral claim to draw a line around itself and claim that it is not subject to the laws of the State of Michigan any more than I have the right to draw a line around my neighborhood and claim self-government from our city (or county).
The implementation of democracy is pretty hairy when you come to the question of whether and how different sub-units within a population are divided. And “we’ll just have a democratic way of figuring out who votes on what” is obviously circular 🙂
[ And the US Civil War looms over all this — 750,000 people dead over the question of who draws the lines between voting units. ]
I think in some senses, the US has too much democracy, at least on the local levels. For one thing, the sheer number of elected officials and the obscurity of many of their offices is counterproductive – how is a voter to decide which candidate would be a better coroner? And as recent events in Kentucky have shown, elected officials are nearly impossible to remove from office even if they refuse to do their jobs.
For another, the standard system of local government is designed around 19th century patterns of settlement, with the size of a county being determined by how far one could travel in a day on horseback. Cities and special districts are formed by a petition-referendum process that can lead to inefficiency and fragmentation because nobody can review the map to make sure it’s reasonable. See “Boroughitis” in New Jersey for a particularly absurd result. And although a city can annex neighboring regions, there’s no way to reverse the process, so Detroit can’t just shed its depopulated outlying areas even though the remainder of Detroit can’t bear the expense of maintaining them. The notion of a higher-level government stepping in and reorganizing local governments for greater efficiency, though common in some other countries, is seen as an unacceptable affront to local control here.
And then there’s the referendum – good to route around a corrupt, complacent political class, but tends to build up nasty side effects that are difficult to reverse. See, e.g., any major proposition that California voters have passed in the last 50 years.
Re: Jewish laws of Halloween: apparently there are also jewish laws of Xmas: http://www.jewishworldreview.com/1298/kringler1.asp
I know very little about biochemistry, but could there be any connection between melanocortin and sun exposure? I’ve read that low vitamin D supposedly makes it harder to lose weight, and it would make sense, perhaps, that less UV exposure would, to the brain, code as “winter is coming (so pack on the pounds while you can)”?
And also, I’m pretty sure most Americans get significantly less UV exposure now than 30 or 40 years ago due to a combination of more office jobs, digital entertainment, and dermatologist warnings about sun exposure.
I really admire the researchers (Grover Whitehurst, David Armor, etc.) at Brookings and their quest to introduce some rigour into PreK evaluation, especially as PreK proponents seem to regard all forms of opposition as equivalent to endorsing achievement gaps. We are probably all familiar with examples of sites/groups that maintain the pretence of objectivity and yet manage to endorse every tribal position (mumble mumble VOX). Brookings is often described as centrist, but it’s much more blue tribe than anything else, and PreK has become an article of faith within progressive circles.
Since it was ignored last time, I’ll mention it again: sleep time varies by race. Hunter-gatherers sleeping the least is exactly what the HBD prediction would be, AFAICT
Gail Heriot also tried to pull down the affirmative action that operates in favor of men. It came on the heels of accusations against the College of William and Mary. Her proposal got shot down, however, even by feminists.
No wonder though once you saw this line included in the Dean of Admissions’ defense of their admission policy,
“MIT exhibited the largest relative discrepancy between the admit rates (in 2006) for