Sumerian mythology includes a bunch of weird legendary debates, like The Debate Between Sheep And Grain, the Debate Between Winter And Summer, and the Debate Between Bird And Fish. In case you’re wondering, the winners were (spoiler alert) grain, winter, and bird respectively.
Stevenson and Wolfers find that liberalization of divorce laws significantly decreased rates of domestic violence and female suicide.
History’s first conspiracy theory? The Nero Redivivus legend said that Emperor Nero survived his apparent death in 68 AD and was going to reclaim the Roman Imperial crown; it inspired three rebellions by people pretending to be Nero.
And in the world of modern crazy conspiracy theories: some Pakistanis believe Malala Yousafzai was never shot by the Taliban at all; her shooting was staged by the CIA (or, in one version, Robert De Niro). Also: “In November 2014, just a month after she was awarded the Nobel Peace Prize, the All Pakistan Private Schools Federation – which claimed to represent 150,000 schools – announced an ‘I Am Not Malala’ day.”
Interview with Open Philanthropy Project director Holden Karnofsky on what academic science can and can’t tell us about how to do altruism. Related: Karnofsky’s AMA about job openings at OpenPhil.
New England Journal of Medicine claims firearm injuries go down during NRA conventions (because the people who are busy attending the convention are the same people who would otherwise be shooting themselves). But Andrew Gelman thinks we shouldn’t believe it. Related: firearms researcher Carlos Goes changes his mind, now says there is good evidence for guns causing more crime.
After Washington DC cracked down on fraudulently graduating students who didn’t meet requirements, only 42% of high school seniors are on track to graduate this year. Some interesting discussion here, including the observation that even with the widespread fraud, DC’s graduation rates were still well below the national average. Before we start blaming the DC education system, I hope someone checks that this isn’t exactly what we would predict based on DC’s racial composition and known racial disparities in education.
David Graeber on the new understanding of prehistory. Claims that recent research has converged around a model where prehistoric humans formed large communities reminiscent of “civilizations” in the off-season from hunting, and that agriculture was less of a sudden shock and more a transition to having civilization year-round. Not sure if his views are as consensus as he claims, interested in learning more from prehistorically knowledgeable readers.
Italian election ends with center-right and populists in power, likely a victory for anti-immigrant forces and Euroskeptics. Still unclear who will get to lead the government, prediction markets slightly favor Di Maio and Five Stars. No market on whether Italy will leave the Euro, but most people I’ve read seem doubtful.
Reddit discussion on why the South African decision to seize white land probably won’t come to anything – strongest evidence is they’ve tried this a bunch of times before and it’s never come to anything.
Given the magnitude of the decline in global insect populations, why aren’t we all dead yet?
Bad signs: when your government becomes so censorious that it bans the word “disagree”.
WeForum has numbers on the bullshit-jobs phenomenon: “In a 2013 survey of 12,000 professionals by the Harvard Business Review, half said they felt their job had no “meaning and significance,” and an equal number were unable to relate to their company’s mission, while another poll among 230,000 employees in 142 countries showed that only 13% of workers actually like their job. A recent poll among Brits revealed that as many as 37% think they have a job that is utterly useless.”
Nominative non-determinism: the town of Equality, Illinois was a historical center of the slave trade, and the center of the Reverse Underground Railroad, a perverse scheme for kidnapping Northern blacks and selling them into slavery.
Two new major papers on growth mindset. A large pre-registered experiment (related Twitter discussion here) concluded that a growth mindset intervention had very modest (but statistically significant) benefits, and given that it was so cheap it might still be cost-effective to spam the school system with it in the hopes that a couple of students benefit a little. A large meta-analysis agreed, with the caveat that spamming the school system with basically anything else would be more cost-effective (“From a practical perspective, resources might be better allocated elsewhere than mind-set interventions. Across a range of treatment types, Hattie, Biggs, and Purdie (1996) found that the meta-analytic average effect size for a typical educational intervention on academic performance is 0.57. All meta-analytic effects of mind-set interventions on academic performance were less than 0.35, and most were null. The evidence suggests that the “mindset revolution” might not be the best avenue to reshape our education system.”) People who previously supported growth mindset are taking this as proof that it at least works. I admit I am pretty biased against this idea, but I have a different perspective. Imagine I claimed our next-door neighbor was a billionaire oil sheik who kept thousands of boxes of gold and diamonds hidden in his basement. Later we meet the neighbor, and he is the manager of a small bookstore and has a salary 10% above the US average (though below the average for our neighborhood). Should we describe this as “we have confirmed the Wealthy Neighbor Hypothesis, though the effect size was smaller than expected”? Or as “I made up a completely crazy story, and in unrelated news there was an irrelevant deviation from literally-zero in the same space”?
The Bible says God doomed Cain to wander forever, so where is he these days? Various theories advanced through history have included “it’s metaphorical”, “on the Moon”, and “in Tennessee”.
Business Insider: companies are publicly liberal on social issues mostly because liberals are a more valuable consumer demographic.
New paper The Moral Hazard Of Lifesaving Innovations concludes that when states promote freer distribution of the opiate-overdose-antidote naloxone, people are more likely to abuse opiates because it’s perceived as safer, and in the end there’s higher crime and no reduction in mortality. Interested in hearing what the thus-far-very-successful pro-naloxone movement thinks about this.
Nathan Cofnas debunks Kevin MacDonald’s anti-Semitic conspiracy theories. Interesting not so much because I expect many people to believe Kevin MacDonald’s anti-Semitic conspiracy theories, but because an attempt to disprove anti-Semitism on the merits got published in a journal and was generally-well-received, and because it shows the way that careful and intelligent study of group differences can be used to fight racism (the argument is basically “Jews’ success in various fields is as we would predict from their IQs, so there’s no need to posit any conspiracy theory”). I don’t think there’s a good way to debunk these kinds of conspiracy theories without citing this research, which is one reason I become so concerned when people try to suppress it.
Related: Wikipedia’s article on Jewtown, Pennsylvania has a certain kind of minimalist beauty to it.
The Darian calendar, created for future colonists to keep time on Mars. In case you have the same question as I do – no, the months aren’t named after Homestuck trolls, they’re named after the Sanskrit names of zodiac signs (which Homestuck trolls are also named after).
Businessman Andrew Yang will run in the 2020 presidential election on a platform of universal basic income. “I’m a capitalist, and I believe that universal basic income is necessary for capitalism to continue”. Also supports banning federal regulators from moving to jobs in the fields they regulate, and “turning April 15 into a national holiday”. Who knows, he might even beat Vermin Supreme to win the coveted First Place Among People Who Will Never Win award.
Let’s be fair to Bernie Sanders: he never actually said anything positive about Venezuela. In fact, let’s celebrate this: given how many socialists did praise Venezuela when it looked like it was doing well, this demonstrates admirable judgment and restraint.
The Long-Run Effect Of Teacher Strikes: Evidence From Argentina tries to measure the effect of teachers by seeing if students who are exposed to long teacher strikes do worse in life. It claims “robust evidence” that a standard (for Argentina) three months of teacher strikes over one’s educational career lowers adult earnings by 3%. This goes against all my priors but potentially matches some related results by Chetty. Interested in seeing further discussion of this.
The Fatebenefratelli Hospital in Italy is famous for the mysterious “Syndrome K” – a fake, supposedly contagious diagnosis they would give Jews in order to keep them out of the Nazi concentration camps. The Nazis never investigated the hospital “out of fear of contracting the disease”.
If your favorite websites have become more censorious lately or cracked down on mostly harmless activity, it’s not their fault – it’s a result of FOSTA, a new anti-sex-trafficking law that in practice enables a wide variety of legal crackdowns and censorship against the Internet. RIP most of Reddit’s darknet- and drug-related communities, and Craigslist personals.
Here’s a graph showing favorability of various groups/people among Democrats vs. Republicans. Republicans view women more favorably than they view the NRA; Democrats view Christians more favorably than they view Nancy Pelosi.
New paper claims that cutting back on stop-and-frisk in Chicago caused a spike in homicides. Some discussion on Reason (1, 2) and on an SSC open thread.
Judge finds Starbucks guilty of refusing to put “This product contains chemicals known to the State of California to cause cancer” warning on their coffee, exposing them and other coffee chains to potentially millions of dollars in fines. Relevant law seriously seems to say that if they can’t prove their coffee doesn’t cause cancer, they need to include the warning.
Does anyone else think the UK #knifefree campaign comes off as a little creepy and Orwellian?
Paul Christiano’s AI safety research is up online at ai-alignment.com, including a summary by Ajeya Cotra intended to be comprehensible by us mere mortals. There’s also been a lot of good discussion of his research program at the new Less Wrong, including a response by Wei Dai.
Also: not one, but two good comic-book-style illustrated guides to AI safety topics. Abram Demski on some of MIRI’s research, and Chris Noessel illustrating Stuart Armstrong’s Smarter Than Us: The Rise Of Machine Intelligence.
Results of all 833 of Aella’s twitter polls. Content warning for frequent graphic sexual content.
As long as I’m getting observations I’ve made on other posts out of my system: in regard to the lack of female participation in libertarian forums, I copied this down from some source:
Security
or
Free Will.
Choose.
Which sounds really good until you understand that there’s a huge market for security. Women on the whole are much more willing to trade free will for security than men are. Libertarianism suffers for this because it’s a political philosophy dedicated to maximizing free will and placing little value on security. Whereas by comparison, a lot of “freaky fundamentalist cults” place a lot of emphasis on providing security for adherents who follow the rules. If your life doesn’t have enough security in it, that’s an attractive offer.
It’s always tempting to choose security if you think it’s other people’s wills who will be restrained to provide for your security. For those of us who always seem to fall on the other side of the equation, freedom looks a lot more appealing.
It’s always tempting to choose freedom if it’s someone else’s life that gets sacrificed for it.
Relative to females I think the male attitude is that there’s something inherently unsafe and un-secure about letting strangers (specifically, out-group strangers) make decisions for them.
Obviously men delegate authority within a pecking order, but if they see orders coming from outside that pecking order, I can understand at a subconscious level why they would feel unsafe.
This doesn’t speak to the empirical efficacy of freedom-versus-security trade-offs.
Interestingly, Google Maps does not label a “Jewtown” at 40°37’53.0″N 78°54’21.0″W. The closest settlement seems to be the sleepy hamlet of “Heilwood” about 1.5 miles to the southwest.
Well, nothing problematic about that.
This professional archaeologist says David Graeber is not wrong, just biased in his predictable direction. His argument seems to be that we have seen people in the past try hierarchical “civilization” and then abandon it for a more anarchic existence, so we could do the same. But consider one of the most famous cases, the classic Maya. They built a fabulous civilization with elaborate art and monumental architecture and a hereditary elite with great powers and so on. Then that civilization “collapsed” and most of them went back to living in small peasant communities. But imagine what that process must have been like to live through: for starters the population fell by more than half, maybe much more. There is evidence of great violence, and mass movements of refugees. Does that sound like something we want to repeat?
As for the people who live in different ways at different seasons, sure, this happened. But while those people who spent the summer wandering the hills in small bands may have been in practice less controlled by chiefs or what have you, that doesn’t mean they weren’t still theoretically part of the same polity. Some Indians of eastern North America lived part of the year in agricultural villages and part as wandering hunter-gatherers, but Powhatan would not have been amused to hear that when they were off hunting they had ceased to be his subjects. And if outsiders had tried to impinge on their hunting territories they would have been quick to notify the chief and ask him to organize a defense.
Everybody knows that towns and tribes have had different kinds of leadership systems; Aristotle wrote about this, for heaven’s sake. But it remains true that as populations grow and cities grow and people accumulate more stuff, by and large governments get more powerful and centralized. And as Aristotle realized, democratic polities were highly unstable, very likely to be taken over by tyrants and collapse into anarchy. That is, they were until we invented representative democracy, which has turned out to be as stable as any other system while still giving ordinary people considerable input into how things are done. Graeber refuses to admit this because he hates modern nation states, but of course he can’t offer any alternative that would be better.
So I say, with another commenter, that Graeber is interesting but ultimately infuriating, because he can’t for a moment let go of his anarchist faith.
Graeber misses the forest for the trees in another way: he’s asking the wrong question. Instead of asking when/how the idea of hierarchy and dominance came about, a more sensible question is how/when the idea of equality came about. Hierarchy and dominance go back far further than he seems to think possible.
If you set your priors based on primatology rather than history, you should not expect to find equality among terrestrial apes with a significant degree of sexual size dimorphism – you should expect to find hierarchy and privileges enforced with the threat of physical force.
(Is that a steelman of Jordan Peterson’s lobster argument?)
It’s a testament to the power of our brains that we’ve evolved that more “natural” state of affairs into complex systems of property rights, and eventually the idea of true equality. It should come as no surprise that strong forms of equality didn’t appear in human philosophy, much less society, until very recently.
Sexual dimorphism does not predict hierarchy. Why would it? Maybe it predicts that males rank above females, but nothing more. Gorillas have big dimorphism, but they barely have a male hierarchy at all. Baboons have big dimorphism, but, on the whole, females rank above males, because they cooperate better, because they’re matrilocal.
The divorce study is from 2003 and covers 30 years.
Didn’t the use of antidepressants by women increase significantly over that period, presenting a potentially powerful confounder?
“Creepy” is subjective, but I think that anti-knife campaigns are likely to be a bigger part of American conceptions of a dystopia than they are in those of a Brit like Orwell.
It’s a long time since I read 1984 – I imagine that the proles there are disarmed, but is it ever specified or referenced?
No. But by pure coincidence I’ve been reading Homage to Catalonia, and Orwell there describes the internecine fighting on the Republican side as being precipitated by an attempt by its Soviet-backed government to seize arms in private hands (read: held by members of anarchist or Trotskyist militias).
Let me try again.
The policy is ridiculous — literally worthy of ridicule. But on its own, I’d be content to sit back and ridicule it from a safe distance, as was indeed my initial reaction. What really disturbs me is the level of support it seems to be enjoying, and how easily its supporters seem to be rounding any criticism off to “crazy Yank gun culture stuff”.
Now, don’t get me wrong, I’m broadly though somewhat ambivalently supportive of gun rights. But I recognize there’s a spectrum of positions here, trading off trust in and empowerment of ordinary citizens on one side and safety on the other. That’s not limited to weapons (and indeed I don’t think of my [incredibly UK-illegal] knife as a weapon); we could say similar things about power tools, radiological samples, toxic or corrosive chemicals, construction equipment, or even private motor vehicles. Anything potentially dangerous. Strict restrictions on pocketknives already imply a stance so paternalistic that I can scarcely conceive of it, crippling one of the most useful everyday tools out there to very marginally complicate criminal access to weapons, but it’s still a point on that spectrum. The fact that I’m sitting here explaining to people from a first-world country that other stances can coherently exist, and that their gross misconceptions of American attitudes are gross misconceptions? That’s the real issue. That implies that within the UK Overton window — at least for one political background — there is no such spectrum. Inasmuch as that’s a product of British culture, it’s bizarre, alien, frightening. Inasmuch as it’s a product of politics, it — despite abovementioned ambivalence — makes me want to buy a cabin in Montana and tile it with AR-15s.
But that wasn’t the question; the question is “why do Americans find the campaign Orwellian”. And I think the answer is “because to American eyes, it would never fly outside a stupendously authoritarian political culture”.
I don’t think that downplaying or overlooking trade-offs is all that unusual, or in any way particularly British – I forget what the fallacy is called, but I’m sure I’ve read about it either on SSC or one of the rationalist sites. (I want to say “Just World” but having looked that up it doesn’t quite seem to match.)
And while I can’t speak for the British, I personally don’t know what the heck I would ever use something like an EDC knife for, and I suspect I’d very likely injure myself if I tried. So in the absence of a special effort to bridge the inferential distance, I can easily imagine myself not understanding why you’re talking about trade-offs.
… in fact, I don’t think I did understand your comment as originally posted, until you added the bit about “most useful everyday tools”.
Do you feel the same way about eating with a steak knife? That’s rather more dangerous, since it doesn’t fold, and the blade is longer than the average EDC knife, judging by a quick google.
I’ve just been making some improvements in the irrigation system in my back yard, using plastic tubing, drippers, and sprayers. The knife I usually carry, which has about a two inch blade, was what I cut the tubing with. Also what I cut the wrapping that one of the little sprayers came with. And wire I was using to tie a grapevine where I wanted it. What I would use to cut an orange or apple for a snack while traveling, if the airlines allowed it, and do use for that sort of purpose when traveling by car, my wife driving.
A small knife is a useful tool. And I can think of a lot of things I would rather use if I needed a weapon—such as a hammer or, better, a walking stick.
Opening letters and packages. Breaking down cardboard for recycling. Puncturing the air bladders used to pad Amazon packages. Sectioning fruit. Halving bagels. Cutting up steaks when you can’t find a steak knife. Testing doneness of meat. Fishing stuff out of liquids or between the prongs of a stove’s burner. Light prying. Cutting ribbon, rope, twine, or Ethernet cable to length. Scraping surfaces, for example to clean up stray glue. Whittling, for example, sticks into marshmallow skewers. Cleaning and trimming fingernails. Removing splinters. Peeling off stubborn labels. Snipping stray threads on clothing. Converting old socks and T-shirts into rags. Converting rags into smaller, handier rags. Cutting down semi-rigid materials like Styrofoam or insulation board. Using the tip as an ersatz screwdriver (only do this if you need the screwdriver more than you need the knife to stay intact, though; it’s a good way to snap off the tip or at least ding the edge).
I could go on and on. I use my knife probably a half-dozen times a day, zero of which are as a weapon — all else equal I’d rather have a possible weapon than not have one, but a knife fight is something you really, really don’t want to get into.
Sounds useful indeed. But the point is that if someone has never used a general-purpose knife for such things, they’re not going to know that.
(I won’t attempt to break down that list in detail, but it’s a combination of things I probably wouldn’t try to do, things I’d do with other tools such as scissors or kitchen knives, and things I might try to do but only when I’m at home. I do have a craft knife, of sorts, but I have no reason to want to carry it around with me when I’m not actually using it.)
@David, yes, I do know how to eat, thank you. 🙂
The trouble is, this is the kind of legislation introduced because it was targeted at the kind of people who were not carrying sharpened screwdrivers at two in the morning because they needed to do some urgent home repair; they were scumbags out to go and stab other scumbags (probably over drug deals, but often over the kind of stupid “you spilled my pint/were you looking at my bird?” fights that those kind of young, violent, criminal types get into).
So if you stop someone on the street that you have a very good idea is heading off to get into a fight, and they’re carrying a knife, and their lawyer gets them off in court over “but my client could have been going to eat a steak for his dinner/cut up some packaging”, then the next time you stop someone carrying a knife you have to let them go, and then they go stab another yob, and then there is hysteria in the media about rampant crime and What Are The Police Doing? And then you get laws about blades and sharp edges and publicity campaigns.
Basically, it’s an extreme example of “this is why we can’t have nice things”. Because some people are scumbags and spoil it for everyone else. And then you get decisions that the blunt-edged blade of a butter knife does fall under the ruling about knives because of the wording of the Act. I can’t find the original conviction in the magistrate’s court where the particular individual was stopped and found to be carrying a butter knife blade without a handle, but I imagine the police had some reason for stopping him and it wasn’t simply “I wonder if he’s carrying half a blunt knife in his pocket”.
The guy in question seems to have a long string of convictions over the years, so it seems not to have been a case of “innocent citizen with perfectly good reason to carry a butterknife was put through this ridiculous arrest under an over-zealous law”.
@Deiseach —
That would be an example of the stupendously authoritarian political culture I mentioned earlier. Like I said, it would actually be less disturbing to me if the British or Irish government just went off its meds one morning and decided to ban knives for no reason. It’s also not particularly disturbing to me that you guys get lower-class violence with improvised weapons — we have that too, though not as much by most of the statistics I’ve seen. It’s the step in between that weirds me out, where once some idiot stabs some other idiot to death with a screwdriver, the political culture decides not only that Something Must Be Done, but that an appropriate, indeed an inevitable, Something is to start banning very simple, very useful tools that have been everyday kit for thousands of years.
I don’t doubt the guy’s a scumbag. But being a scumbag is not in itself grounds for arrest, and there’s no way to give the police latitude to arrest every scumbag that’s doing something vaguely suspicious without either correspondingly disempowering everybody who isn’t one, or throwing out any pretense of objectivity under the law. Maybe you’re willing to accept that. I’m not.
Intuitively, it also seems like it’s possible to get the rate of thugs knifing each other (or stabbing each other with screwdrivers, or whatever) down pretty low by arresting and imprisoning people who knife/stab/beat up other people, rather than by banning someone having a pocketknife on them. The UK is a very wealthy society, and can afford to lock up more yobs if it needs to.
On the other side, there’s an obvious societal failure mode where the police can’t or won’t keep order. Places in the midst of that failure mode commonly have draconian laws against weapons *and* very high crime done by armed predators. This is the case in quite a few large US cities, and it’s one reason that a lot of Americans are pretty skeptical about whether gun control will decrease crime or will simply end up with the victims being disarmed.
@albatross11, I suspect that the number of people who actually wind up knifing one another may be such a small fraction of the number of people liable to knife one another that arresting them afterwards doesn’t affect the rate very much. That’s just a guess, mind you, I have no data.
@Nornagest
Do you want all forms of intent to do Y and conspiracy to do X taken off the books?
Which is subjective, too.
It’s also worth keeping in mind that “knife control” has been used as a reductio ad absurdum by gun-rights advocates for a long time (multiple decades that I’m aware of) in the US.
Most US states have a nonzero level of knife control. What does that tell us?
Knife control exists in most US states. It probably exists in your state. You’ve probably never noticed, because it applies as something that allows cops to arrest someone who appears to be up to no good, rather than as airport-style security where everyone gets checked.
A different model would actually have to be different. Either you can do what you like in Arkansas while the other US states are groaning under authoritarian tyranny, or nowhere is.
Laws about knives exist in most US states. UK-style restrictions do not. Things vary substantially state to state, but the most common restrictions are bans on switchblades or other automatically actuated knives, or bans on Bowie knives, daggers, or dirks. No US states have the sweeping restrictions on length or the bans on locking knives that the UK does, although a few smaller localities — most notably, New York City — do, or have in the past. Some states do have “carrying with intent” statutes with similar provisions, but which don’t forbid simple possession. Illinois prohibits possession on public land.
Historically there have been a couple reasons for these. Laws on large (e.g. Bowie) knives are most common in the South, typically dating from the 19th century and stemming from attempts to curb duels or other forms of lower-class violence at a time when pistols, especially those capable of multiple shots, were a luxury item. (There’s also a race angle, of course.) More recent ones are typically relics of a moral panic: switchblade restrictions, which are very widespread, usually date from the mid-to-late 20th century when they were one of the traditional accoutrements of Troubled Youth. My personal favorite is California’s 1980s ban on ninja-style weapons.
All of these strike me as bad law (and indeed many of these laws have been struck down over the last few years), but not as a serious curtailment of rights or of culture: there’s nothing you can do with a switchblade that you can’t do just as well with a regular knife, including self-defense.
IANAL, IANYL, etc.
Theory: the various US States wrote their knife laws in such a way as to accommodate the apparently large number of citizens who regularly carry and use knives as everyday tools. In the UK, where carrying a knife as a tool is (I’m assuming) much less common, that was not seen as necessary. Plus there may have been Second Amendment concerns, I guess, which obviously don’t apply in the UK.
Plus I suspect the politicians in the UK would have taken the sensible use of police and prosecutorial discretion for granted – that’s why they included a “for a reason” clause. The fact that this discretion may have failed to materialize is evidence of political incompetence, perhaps, but not necessarily political authoritarianism.
(OTOH, from what I remember of some other UK policies, you may have a point.)
I’m far less concerned by laws which put the onus on police to show that the guy they’re interested in is Up To No Good, than by laws which put the onus on the suspect to prove they had a valid reason to be doing what they’re doing.
And none of the US knife laws are of the latter type?
US “knife control laws”, where they exist, are generally
1. Absolute bans on specific types of knives, e.g. switchblades, on the grounds that these belong in the same category as Saturday Night Specials. Weapons used by the Wrong Sort of People as they rob and rape and kill the Right Sort of People, as clearly distinguished from the weapons used by the Right Sort of People to defend themselves against the Wrong Sort of People. None of these would likely survive a Heller challenge, but with the Right and Wrong sorts of people now mostly Glocked up, nobody much cares to bring one.
2. Bans on carrying knives into airports, schools, courthouses, etc.
3. Requirements that particularly large and scary knives should be carried openly rather than concealed. We’ve talked about this sort of thing before.
Very few of these are, at least as written, the sort of discretionary laws you seem to be (rightfully) concerned about. In almost all of the US, any adult could openly carry a Rambo Knife in a sheath on their hip without being arrested (for that, at least). New York City is one of the exceptions, limiting blade length even for knives carried openly; sorry Mick.
And note that the NYC knife laws get abused just as the UK ones do.
There are nuanced differences between UK and US knife laws, but you weren’t making a nuanced point.
No one has considered the advantage of discretionary law, which is that if you stop someone on their way to kill someone, you save a life.
The implicit idea seems to be that outcomes matter less than some notion of due process.
You can be arrested for carrying tools if it seems that the intention is to break and enter. So far no one has been stupid enough to call that a “tool ban”.
In regards to the jobs thing, pretty much my entire life I have assumed most jobs are boring and unfulfilling. I always thought this was okay, because it was the price you paid to get the money you need to do worthwhile things when you get home at the end of the day. The idea that jobs are supposed to be meaningful and fulfilling in and of themselves seems insane to me. If they were, why would people need to be paid to do them? I thought the whole point of paying people was to compensate them for how awful and boring their jobs are.
I knew jobs were often useful, I understood that a sanitation worker or a farmer was doing something useful to others, even if they hated it. But I didn’t think you were supposed to take pride in being useful. I thought that most useful activities that society needs to be done are boring and soul-sucking, and that’s why people get paid to do them. To compensate for the awfulness.
But apparently I am some sort of weird outlier and most other people have some sort of expectation that jobs are supposed to be meaningful and fulfilling. I feel like this “problem” that a “growing number of people” are going through is something I had made my peace with when I was 10. At nearly every job I’ve worked, I haven’t cared about the work at all; I just did it because I got paid.
Why is the opposite attitude (that jobs should be meaningful) so common? It seems entitled to me, but obviously I’m a weirdo.
It is only necessary that their jobs are less rewarding than the things they could be doing with their leisure time, not that they are awful and boring.
And it isn’t even necessary that that be true for all employees, only the marginal ones.
Even if their job is just as rewarding as whatever they could be doing with their leisure time, that just means the Zone of Possible Agreement ranges from zero to the employer’s marginal profit on their labor. This will likely result in a wage or salary somewhere near the middle of that range, and the zero point will obviously be a chump move that even the most naive negotiator will (and can afford to) avoid.
But there is a persistent belief in some circles that employers always have such overwhelming negotiating power that wages are always at or near the bottom of the ZOPA; I suspect we are seeing that in action here.
You appear to be assuming a bilateral monopoly. That doesn’t describe most of the labor market.
Assuming a competitive market, if the number of people for whom working is as good as leisure is as large as the number employers would choose to hire at a wage of zero, that will be the equilibrium wage. More generally, the equilibrium wage will be the wage at which the number of people who prefer working at that wage to leisure without a wage equal the number of people employers wish to employ at that wage. If most people regard employment as just as pleasant as leisure, I would expect that wage to be pretty low—not zero, because at zero the quantity of labor demanded is probably larger than the population.
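To make that equilibrium condition concrete, here is a minimal sketch with invented linear supply and demand curves – every number is made up purely for illustration, not drawn from any real labor data:

```python
# Toy model of the equilibrium-wage argument above.
# supply(w): number of people who prefer working at wage w to leisure.
# demand(w): number of workers employers want to hire at wage w.
# All curves and constants are invented for illustration.

def supply(w):
    # If work is nearly as pleasant as leisure, many people will accept
    # even a very low wage: a high intercept and a gentle slope.
    return 900 + 40 * w

def demand(w):
    # Employers want a huge (but finite) number of workers at w = 0,
    # and fewer as the wage rises.
    return 2000 - 60 * w

# Scan for the wage where supply meets demand.
w_star = min(range(100), key=lambda w: abs(supply(w) - demand(w)))
print(w_star, supply(w_star), demand(w_star))  # -> 11 1340 1340
```

With these made-up curves the market clears at a wage of 11: low, because work is assumed to be nearly as pleasant as leisure, but not zero, because demand at a wage of zero (2000) still exceeds the number of people willing to work for free (900).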
But it isn’t, and I’m not clear why you would imagine it ever would be so. As the cost of labor (or almost anything else) goes to zero, the demand goes to infinity. There’s always something useful or valuable you can do with an absolutely free minion, if only having them march around with a placard saying “[Boss] is the greatest” as an ego/status boost.
That does gloss over the difference between “cost” and “salary”, but I don’t think that is going to be significant here.
Also, real humans will basically all place a hedonic cost on living with the knowledge that their employer is profiting from their work and they are getting nothing in return, and will therefore not do that even if the net pleasure of working for free would otherwise be zero or slightly positive.
So the supply of workers willing to work for zero will always be much less than the demand for workers at zero wages, and zero is not the market-clearing wage even for pleasant work.
As I just said, for the same reason you offer, further down in the very comment you are quoting from, although with a more realistic number than infinity.
Really?
To pick three counter-examples from the top of my head, no matter how cheap tap water is I’m not going to leave it running 24 hours a day, no matter how cheap heating is I’m not going to heat my apartment to 30 °C, and no matter how cheap table salt is I’m not going to eat a kilo of it a week.
But you might do agriculture in semi-arid areas, skimp on insulation in new houses, and salt meat instead of doing other preservation techniques — or use salt instead of sand in sandbags.
But I didn’t say that the demand by A1987dM would go to infinity. If the price of salt is zero and the transaction costs are small enough, you and people like you will go to the store and buy enough to last a month or so. Other people will look at this and say, “We can have all the salt in the world for free? And people like A1987dM only took a month’s worth? Fine, send us all the rest, and next month salt will not be free any more”.
Plus, as sandor notes, using salt inefficiently in a multitude of applications.
Obligatory Relevant (What If) XKCD
@John:
What you wrote was “the demand goes to infinity.”
In defending that, you wrote:
“Other people will look at this and say, ‘We can have all the salt in the world for free?’”
And where do those people put all the salt in the world? There is no reason to expect quantity of a good demanded to be infinite at a price of zero, let alone for “almost everything.”
Could John win this argument by precommitting to buying an infinite quantity of anything if its price (better, its TCO?) went to zero?
Where will he put it?
There are many commodities one can buy without having to put them anywhere, on account of the market, bank, or brokerage house through which you bought them maintaining warehouses in which those commodities are stored for their legal owners. This, and other related services, are usually provided for a fee, which I believe is usually expressed as a small percentage of the purchase price.
That’s part of the difference between price and TCO, and it’s obviously going to become significant as the quantity goes to infinity. But if you know of any opportunities where the TCO stays at zero for arbitrarily large quantities…
I don’t know about “meaningful and fulfilling”, but my own job (software engineering) is actually pretty fun (most of the time). If I won the lottery tomorrow and never had to work again, I’d still program, though I might work on different projects.
I work with a lot of scientists, and, as far as I can tell, they feel the same about their jobs — except those parts that have to do with acquiring funding.
My father (a scientist) still goes to work a few days a week, despite having retired several years ago.
I could have retired comfortably years before I did. And having retired, I still give two guest lectures a year in the undergraduate law and econ course, and am giving a public lecture Monday evening for which I don’t think I will be paid.
I just thought of something: is Big Brother going to let every Sikh in the UK carry a knife without being arrested, or is exemption from the law only for Muslims?
The UK does allow Sikhs to wear the kirpan. They also allow gardeners to carry scythes, though the Crown Prosecution Service was a bit confused about this one.
Kevin MacDonald’s theory isn’t a conspiracy theory, it’s more like an evol psych hypothesis pointing up a certain kind of Jewish sphexishness, a kind of loose overall tendency of Jews, as an ethnic group, to act willy-nilly for the benefit of their ethnic group, often to the detriment of their hosts (which is mostly Whites, in recent historical context), and sometimes with false consciousness.
I often would like to read the comments about one particular link in these links posts. This means I have to search through the comments, which can be more or less of a pain depending on how easy it is to figure out what keywords to look for. Barring a different UI (something like this, maybe), wouldn’t it be easier if each link (or set of related links) were a separate post? Would this be annoying to some people? Has it been proposed before and shot down for good reasons?
This was the first time one link in particular called out to me, and I found it wasn’t too hard to scan for comments on it by looking at each top-level comment and clicking “Hide” to vanish the whole thread if it wasn’t relevant.
I like the link-fest posts, but if Scott broke them into separate posts they would swamp the others, so I wouldn’t recommend that. We could try to adopt a community convention that the top-level comments start by quoting the link paragraph, which would be a more reliable search term than the usual “Regarding MacDonald” or “The revised prehistory made me wonder.” My guess is that this would require more discipline than even the regulars would be likely to adopt, but it’s worth raising so I’ll raise it.
Or number each entry in the post and ask commenters to begin each comment on that entry with the number.
The good news is that the number doesn’t need to be in every comment on an entry, or even on every top comment on an entry– if it appears anywhere in the comment thread on an entry, the comment thread is easily identified.
Looks like a couple of people have weighed in on naloxone already, but I’ll take a bigger crack at it.
Firstly, the methodology here is weak – comparing states with easier access to states without is like comparing rates of suicide in states with more or less antidepressant prescribing. Whether you see an effect or not, there are so many confounding factors it’s hard to attribute it to your particular exposure. Here’s an analysis of presumably the same (or very similar) data, finding the opposite result, 14% lower opioid-related mortality in states with better access laws: https://www.ncbi.nlm.nih.gov/pubmed/29610001
Early evidence from Scotland’s rollout to people leaving prisons finds a reduction in fatal overdoses in the weeks after prison (the highest-risk time for ODs, as people have reduced tolerance after being inside). https://www.ncbi.nlm.nih.gov/pubmed/26642424 And this is provision, not just laws allowing access – they literally put it into people’s hands before they walk out the door. The UK is currently doing an RCT on the same policy, and evidence on that will come out over the next few years. A systematic review by one of the people leading that trial concluded that naloxone programs cut overdose mortality: https://www.ncbi.nlm.nih.gov/pubmed/27028542
Anecdotally on the risk-compensation issue, people I know who work in user-facing jobs say that people are still plenty afraid of overdoses, regardless of whether naloxone is available – an overdose is still a horrible experience. In addition, there’s actually a fear of having naloxone administered to you, because a large dose can swing somebody too far and put them into withdrawal, also a horrible experience.
So that’s the evidence on mortality and the risk-compensation issue. When it comes to the “moral hazard” of increased theft and how that weighs against, um, preventing *deaths* among people at risk of overdose, I find that pretty hard to take seriously as a moral question. Naloxone programs could increase theft ten-fold, and I would still think we should implement them. Maybe there’s some kind of Libertarian case for property rights trumping life, but no mainstream ethical analysis is going to come down on the side of protecting televisions and laptops over preventing fatal overdoses, I don’t think.
You think it doesn’t depend on how many televisions and laptops get stolen per fatal overdose prevented? That corresponds to most people’s moral intuitions, but it’s hard to make sense of it in an explicit moral theory – impossible if the theory is utilitarianism.
Suppose preventing one overdose results in an extra million laptops stolen, at a cost of a thousand dollars each. Does a value of life greater than a billion dollars make sense? It’s strikingly inconsistent with how people actually behave, whether dealing with their own lives or the lives of others.
@DavidFriedman:
I think it actually kind of does. I mean, I’m pretty sure we could drastically decrease the number of deaths from car accidents by banning automobiles, but proposals to do so aren’t really within the Overton Window (knock on wood).
In practice, people are willing to trade human life for any number of other things, if the exchange rate is good. What’s slightly taboo is saying that out loud.
I think Robin Hanson’s Near/Far dichotomy is relevant here; if thefts are rare enough that you’re not personally worried, you take the opportunity to signal your belief in the Value of Human Life; if thefts are more common, you’re more willing to make reasonable tradeoffs. (If thefts are common, but they’re happening to other people, for instance, on the other side of the country, you’ll probably stay in Value of Human Life mode.) [Not ‘you’ personally; I have the impression you’re unusually reasonable about this, tbh.]
Note that we send armed policemen to arrest people who are known to be stealing laptops and televisions, and we put those people (if they do enough stealing of laptops and televisions) in prisons where the guards can and will shoot them if they try to leave. So we’re already trading off safety of laptops and TVs against lives.
Why should I not value the integrity of my property over the lives of those doing the stealing, who are in and out of prison and killing themselves doing risky things?
Because vigilante murder is an ineffective form of justice and because weighing life against property is a moral quagmire.
Nobody’s talking about vigilante murder here. Just talking about not taking measures to save the lives of people whose lives are likely to be, by a utilitarian reckoning, a net negative.
A quagmire you have to traverse, though, unless you want to be in the position of allowing people to hold their own lives hostage to obtain your property.
So what would be an effective form of justice, which doesn’t involve quagmirey weighing of property against other things?
Suppose the paper was 100% true. That doesn’t imply we should restrict access to naloxone! If true, there’s a large (implausibly so?) moral hazard effect, so it would be hugely beneficial to attack that. It’s (if true) such a large effect that we should be able to make a dent. Media (‘informational’) campaigns (e.g. it was already noted that there are risks with naloxone, so sell that), nudges of other sorts, etc. – if the study is true, it’s saying that we need to try these things with the utmost urgency. Whether we should also tighten up access to naloxone – well, that’s harder, and the right answer is almost certainly not – but that’s not the only direction this study points us.
I feel that many people want to shut this down in part because they think it implies ‘restrict naloxone’. But if this study were true, and were ignored because of ‘restrict naloxone’ concerns, we would risk making a big mistake.
We’re not trying to shut this down. We’re criticising the methodology of the paper and therefore the credibility of the conclusions.
Yes, there’s a separate discussion to be had about the implications of the paper. You might be concerned with moral hazard and therefore tempted to put the paper forward more strongly than the credibility warrants; you might be more concerned with the othering and exclusion of opioid users and therefore tempted to put the paper down more strongly than the credibility warrants.
The credibility is still a bit shit though.
In other news, it’s the job of the US Surgeon General to weigh up the evidence for public health actions and advise where that evidence justifies action. This week he’s calling for naloxone to be much more widely available because it saves lives. This is the first advisory in more than a decade and it comes in response to tens of thousands of Americans dying and solid evidence about how to keep people alive.
Fair enough, in principle, but I claim that a lot of people eager to tear this study apart for its rigor and methodology (maybe rightly) jump seamlessly, within the same critique, to defending naloxone availability, viewing this study as an attack on it. This is dangerous! Tear the study apart. Promote naloxone. Don’t do both at the same time.
>They just used naloxone availability laws as a proxy for naloxone use.
No. I have problems with the statistics in this study, but unless you have mistyped this is entirely wrong. It makes me doubt your sincerity.
From the paper:
“We use the gradual adoption of state-level naloxone access laws as a natural experiment to measure the effects of broadened access… Local data on actual naloxone distribution are unavailable.”
So looks to me like they don’t have data on naloxone use, or even naloxone availability, just on whether the law allows that availability. Are you interpreting the author’s statement differently?
Not included in the article: VICEX, a literal sin-based mutual fund that’s apparently been beating the market average over the last decade and a half. To be fair, if vanity and envy were included among the other anti-socially-conscious stocks, the fund would not perform as well.
So I suppose the optimal strategy is to hoodwink liberals into thinking your company is socially conscious while engaging in dirty business behind the scenes?
The article about Kevin MacDonald’s conspiracy theory is an interesting read, but also a bit disappointing. They did a good job debunking a conspiracy theory; what about Group Evolutionary Strategy – a much more scientifically interesting claim?
Given that Jews ought to evolve their group strategies, both culture-bound and hereditary (if this is even a thing for humans), in much the same way as any other population does, there’s bound to be discounting based on Hamilton’s rule – “die for two brothers or eight cousins”, the regular stuff. As such, we cannot expect geographically distant groups of Jews, no matter how orthodox, to unquestionably support “Jewish interests” in the sense of furthering specific pro-Israel politics or playing the long con of destabilizing Gentile world, or any other coordinated long-term project for that matter; it couldn’t have plausibly evolved, and ancient faith such as Judaism is unlikely to be enough to override the discounting in favor of globally concerted nation-building.
However, Jews, regardless of geography or political stance, do seem to be extremely competent at furthering the interests of their family, close associates and local Jewish community in general. In fact they seem to establish an efficient local network no matter how outnumbered they are in a given environment; and Judaism (plus associated institutions) presumably plays as much a role here as IQ does. Speaking of IQ, while the default hypothesis should hold for cognitively demanding occupations, I’ve observed some rather mediocre Jews reaping the benefit of the aforementioned group efficiency at securing a profitable (for example, managerial) position for one’s kin, typically at the expense of a gentile competitor. This is of course expected behavior for any functional diaspora, yet the efficiency – given low numbers of participants and sometimes the lack of overt power over the situation – was surprising. Could this be explained by average IQ as well? Perhaps. If not, what could be the compound effects of this efficiency, and to what extent could they contribute to the perception of large-scale conspiracy?
In any case, the article makes no effort to debate the titular group strategies, which, while reasonable – they were after MacDonald’s conspiracy model to begin with – still leaves the issue open.
On a completely different note, Malala Yousafzai is an ally of International Marxist Tendency, which is effectively an entryist Trotskyist group. The CIA thing aside, it’s quite easy to get paranoid (especially as a Pakistani Muslim) when an entryist Trotskyist is given a freaking Nobel Peace prize and forced on your country as a West-approved superhero representative.
…And people wonder why diaspora Jews in Western countries are so against racialist politics even when those come with the caveat of “Jews are high-IQ and totally whiter than us”!
It’s not paranoia if they really are out to get you and all that, lol. This is so predictable.
(Speaking as a Gentile: get fucked.)
Re. Kevin MacDonald: it’s hilarious how many times you repeated “conspiracy theory”; it’s like a verbal tic. If I suppose you have friends, is that also a conspiracy theory?
In reality, MacDonald’s work doesn’t assume any ‘conspiracies’, and Cofnas’s article basically argues that, since there are some exceptions to a rule, there is no rule. Btw, do you think that the fact that it took so long for anyone to review MacDonald’s work is itself a conspiracy, or perhaps a sign of anti-Semitism?
You’ve got it backwards. Have you read MacDonald’s book? He cherry-picked examples and tried to generalize them into a rule — without statistical analysis, without predictive ability, without even a full and impartial survey of Jewish activity in intellectual movements. Cofnas has exposed the weaknesses of this cherry-pickin’ approach. And I think that MacDonald’s work hadn’t previously merited scientific review simply because it is so inherently weak.
Lots of books never get reviewed. In many cases, it’s because they’re just no good.
And not only is it bad science and a poorly-supported hypothesis, it may have been downright dishonest, as Cofnas accuses MacDonald of misquoting from sources.
In other words, it’s not “since there are some exceptions to a rule, there is no rule.”
It is the opposite: “You shouldn’t formulate a broad rule from just a handful of cherry-picked examples.” A practice which is known as faulty generalization.
MacDonald didn’t even come close to proving a rule; he didn’t even build a good case.
I grew up maybe a half hour or so from Jewtown. I can’t say I’ve ever been there personally, but I remember my dad telling me that way back when, it was home to a bunch of small shops and businesses owned by Jews. I think he said they were trying to avoid ordinances in nearby towns that prevented stores from being open on Sundays.
I think you meant to post this in the open thread. Or maybe Georgia.
It’s relevant. Ctrl-F “Jewtown”.
Ah, oops, I did read that, snickered, and promptly forgot it.
To make it even more relevant, there’s also a Jewtown in Philadelphia. It’s a part of an overall white neighborhood… where black people live.
>Bad signs: when your government becomes so censorious that it bans the word “disagree”.
Do not be so quick to believe everything you hear in Western newspapers from the “enemies” of the West
>Let’s be fair to Bernie Sanders: he never actually said anything positive about Venezuela. In fact, let’s celebrate this: given how many socialists did praise Venezuela
Do not tell me you are one of those people who thinks social democrats are socialists. If so I just lost a lot of respect for you
Scott didn’t mix those two up, Bernie literally calls himself a socialist. Sometimes he qualifies with “Democratic Socialist” and sometimes it’s just “Socialist.” This isn’t Scott mixing definitions, it’s Bernie.
And Richard Spencer says he’s not a racist
The difference between social democrats and socialists doesn’t seem to be in policies, but in that social democrats point to white countries to say their policies work. Or at least haven’t descended into chaos.
I think the difference being pointed at here is between people who support a market society with a good deal of income redistribution and people who support a society where large parts of the economy are at least controlled, very likely owned, by the government. It’s worth noting that the Scandinavian welfare states are, on other measures, somewhat more free market than the U.S.
… All of Europe is, in a number of important respects – for example, more or less the entire US corporate tax code would get tossed by the EU on the grounds of being a market distortion. Overly specific tax breaks are deeply verboten by the rules.
reposting what I said about the knife thing on tumblr:
It’s really, really hard to measure the effectiveness of this sort of thing, but stats from 2014 say it was working: https://www.thetimes.co.uk/article/no-knives-campaign-hailed-a-success-as-crime-rate-plunges-7607m8td6gq
The situation is pretty much the same as gun crime in the US: carrying knives makes it easy for a fight/mugging to escalate into a stabbing, just as it can escalate into a shooting if they have guns.
(Since American libertarians on the internet tend to freak out about this: they’re not going to send you to jail for having a Swiss Army knife in your pocket. If they find you with a knife of the sort that is mainly used to hurt people, especially if you are hanging around a dark alley at night, you may have a problem. And you are let off if you have any ‘good reason’ for having one. https://www.gov.uk/buying-carrying-knives )
This stuff is failry uncontroversial in the Uk. Mu guess for why Americans find it more unsettlng is less trust that police and courts will use discretion fairly and sensibly?
Stats from 2018 say it’s really, really not.
My good reason for having the sort of knife mainly used to hurt people is that, see above re 2018 crime statistics, and if in my travels in the UK I should meet one of the people behind those statistics I want to hurt those people. Or credibly threaten to hurt them; that’s usually sufficient. Since the UK police won’t let me carry a gun for that purpose, I might settle for a good knife, the scary kind with a long locking blade and a stabby pointy tip. Are you suggesting that the UK police and courts will “use discretion fairly and sensibly” about that?
But I don’t have to visit the UK, so I don’t have to put up with this. I can just laugh at it from a safe distance. That the people who chose to live there, chose to put up with such nonsense, is another thing I will laugh at from a safe distance.
Your risk of stabbing as a tourist, like my risk of stabbing as a middle class Brit, is essentially 0. Stabbing is something that is almost exclusively done by and to young poor people. Even if you live in an area where stabbings are common, if you are not part of the subculture within which they occur, you are wildly unlikely to be a victim.
That’s a good argument for why the law is useless.
No, because not everyone is middle class. You basically have a law that applies to everyone, but is really targeted at a few.
I dare say most of the people carrying knives in knife-crime black spots believe they are necessary for self-defense. Your logic is pure Moloch…
…a decision that makes things better for the individual, worse collectively. “Moloch is bad” is perhaps the central political claim of this blog, so why do so many people fail to get it?
I’m pretty certain that my carrying a knife or gun for self-defense does not make things collectively worse. But more to the point, your claim proves far too much.
Most actual rapists believe they are engaging in consensual sex. Therefore the logic “It is good and should be legal for me to engage in consensual sex” is pure Moloch; having consensual sex makes things better for the individual but worse collectively, sensible societies should outlaw the whole practice and until they do their governments should run #sexfree social media campaigns.
What’s this based on? I know it’s a common defense given by accused rapists, but that seems to tell us little about whether it’s true. It sounds pretty implausible to me.
Most rapes by far are committed by friends or acquaintances of the victim. So the alternative would be that most rapists who fully understand that they are committing rape, nonetheless chose to rape a victim who will positively be able to identify them. That strikes me as far less plausible than someone believing that e.g. a marriage contract constitutes ongoing consent to sex.
And that, plus most accused rapists actually saying that the sex was consensual, is far more evidence than the Ancient Geek provided for his claim that most of England’s stab-happy criminals think they are acting in self-defense.
For normal people, yes. But sociopaths aren’t really deterred by long-term consequences, even highly predictable ones; that’s sort of their thing.
I think it’s wise to take any and all scholarship on rape with several grains of salt, but a persistent theme of it is that most rapes are committed by repeat offenders. It takes some fudging to get the reported prevalence of said repeat offenders to line up with the rate of sociopathy in the general population, but it’s not that far off — and again we’re dealing with highly unreliable numbers here.
@Protagoras,
One data point, I think sourced from here. Content is disturbing.
The chance that you’ll be arrested, much less go to prison, if you rape an acquaintance is quite low. Nearly all acquaintance rapes are he-said/she-said cases, simply by the nature of the crime.
I suspect, based on my personal interactions with rapists, that they are generally perfectly aware that they’re having sex with someone who doesn’t want to have sex with them, but they have rationalized some scenario where that doesn’t count as rape.
If consensual sex delivered no positive utility, banning it would be positive, because you would only be getting rid of the negatives. But it does deliver positive utility, and that’s why it isn’t Molochian.
The main reason to carry a knife, your stated reason, is to protect yourself against other armed people. Feeling you have to defect because other people have defected is the essence of Moloch.
I don’t see any reference, in the comment I think you are referring to, to the other people being armed, let alone armed with knives, which is essential for the argument you are making. If I were worried about protecting myself from other people, that would include people who could beat me up without a knife.
Saying that this makes things worse collectively amounts to assuming your conclusion. I know it makes a tempting superweapon to wave at anything you don’t like and shout “Moloch!”, but the logic doesn’t actually work.
Superweapon? Moloch is barely a pop gun in relation to the amount of notice people take.
And the other reason that (accusation of) Moloch(ian equilibria) isn’t a superweapon is that there is a fairly specific set of game-theoretic conditions attached to it.
Let’s make a really oversimplified model. We have a population of sheep (who never initiate violence) and wolves (who routinely do).
We might be in one of four situations:
1. Wolves and sheep are both disarmed
2. Wolves are armed, sheep are disarmed
3. Wolves are disarmed, sheep are armed
4. Wolves and sheep are both armed
Any situation where the wolves are armed leads to more deadly attacks on sheep (instead of just taking your wallet, the guy stabs you or shoots you), and also more deadly fights between wolves (when the two rival drug dealers have it out, it ends with the loser in the morgue instead of in the hospital).
Advocates of any kind of weapons ban are shooting for world 1. They’re hoping for a world where hardly anyone has any weapons.
Opponents of a weapons ban may either:
a. Suspect they’ll end up in world 2, where guns are outlawed so only outlaws have guns. This is the worst possible outcome.
b. Suspect they’ll end up in world 1, but with the authorities not doing much to keep the wolf population down, so that it’s a safe and fun thing to be a wolf going around terrorizing the sheep with little fear of consequences, and a terrible thing to be a sheep.
I’m pretty sure the knife ban is *intended* to disarm wolves, so their attacks on both sheep and other wolves are less likely to leave anyone dead.
We are speaking on SSC, where Moloch has a great deal of rhetorical weight. As to game theory, my point is precisely that you haven’t established that the game-theoretic conditions apply. (Your summary of the concept in your exchange with John Schilling above is incorrect — it’s globally optimal to defect against known defectors in a variety of situations, and that’s standard game theory — but even if it was correct, you’ve failed to establish significant negative externalities.)
I think that’s broadly true, but I think there may also be a preference for world 2 over world 4 on the premise that sheep who try to fight wolves are more likely to get themselves badly hurt or killed than those who don’t.
NB: I’m not making the claim that the premise is true.
But not this one. “Some things aren’t Moloch” doesn’t imply “nothing is Moloch”.
I don’t believe you.
I am roughly one hundred percent certain that the definition of “mainly used to hurt people” will expand until it covers everything with a sharp edge. I am also roughly one hundred percent certain that the regulations that cover it are not drafted by anyone who understands how these tools are actually used (for example, my EDC knife that has never been brandished at anything more threatening than an orange is illegal in the UK in at least two ways and could be interpreted as such in a couple more), that they are therefore totally uncorrelated with criminal usefulness and will have no effect on crime, and that every expansion along the way will be cheered on by members of the public who are likewise ignorant. And despite your reassurances, I’m finally pretty certain that this process is already well along, since I’m seeing agitprop about kitchen knives.
So yes, it’s pretty much the same as gun laws — a fact that hasn’t escaped the notice of gun advocates here. No doubt you’ll file this under “less trust”, but it seems to me that low levels of trust are entirely justified.
One of The Nybbler’s links was about a butter knife.
@Alkatyn:
You wrote:
“they’re not going to send you to jail for having a swiss army knife in your pocket.”
The Nybbler replied with a link to a news story by someone who was arrested and threatened with six months in jail—at the time of the story the final result was not yet in—for possession of a swiss army knife (I think—brand a little unclear). There seem to be only two possibilities:
1. The story is correct, what you wrote was wrong, what you viewed as libertarian paranoia was a more accurate view of the situation than yours. If so, you should say so.
2. The story is in some way bogus. If you have reason to believe that, you should say so.
Saying nothing in response implies a disinterest in whether what you write and believe is true.
Although technically people are often put in jails as part of being arrested, the term “send [x] to jail” (as you certainly know) typically refers to shorter sentences of incarceration.
So Alkatyn can be understood as expressing skepticism that that would actually happen in practice. And “at the time of the story the final result was not yet in” is not some kind of check-mate to such skepticism.
I’m less skeptical, but only in the sense that I can imagine jail time for a pocket knife being imposed in conjunction with some other sentence. The U.S. system certainly piles on what it can. But I am probably just as doubtful that someone would actually be sent to jail just for having a knife.
Anyway, this seems like something reasonable people can disagree about rather than something that calls for Numbered Demands.
The butterknife guy was convicted. The penknife guy was not, but you know what they say: “you can beat the rap but you can’t beat the ride”. If carrying a penknife means you’re at high risk of being arrested and have to spend a fair amount of money on lawyers, it’s effectively prohibited even if it isn’t de jure so. The Swiss Army guy pled guilty. Here’s an American busted for a pocket knife.
And these are valid reasons for thinking the law is bad, even contemptible. I myself don’t like laws that unnecessarily make entirely ordinary life situations illegal, which it seems (based on the evidence in the thread) this law does.
Nevertheless, the point about jail time is not unreasonable or beyond the pale of discourse. A better criticism, and the one you both seem to be implicitly offering, is that jail sentences aren’t a good measure of the impact of this law, and the impact of being arrested and fined even a small amount is not trivial. That point could be further generalized to all the circumstances in which people are arrested only to have the charges against them dropped later, such as when protesting. So make that point instead of accusing the OP of being obviously mistaken.
I don’t know if he is mistaken–the story linked to only gave one side of the controversy. But I don’t think the distinction between a jail sentence and being arrested, booked, let out on bail, and then eventually acquitted or having the charges dropped, is enough to justify what the OP said. His point was that worrying about the law being applied in obviously inappropriate contexts was libertarian paranoia, and if the story as reported is true, it isn’t.
And the linked-to story, if true, was about British police using discretion unfairly and not sensibly–to the point of a policeman assaulting the arrested party and other police then threatening to charge the arrested party with assault.
The whole story reminded me of a much worse version of the same sort of incident occurring in the U.S., in which someone coming through customs was assaulted by a customs law-enforcement officer much larger than he was, then charged with assaulting the officers and jailed. For some odd reason, the video of the scene was not available, although there was video surveillance of the room. But the defense found videos of an earlier part of the incident, where the victim was peacefully escorted to the room where the assault happened–inconsistent with the testimony of the officer as to that part of the incident.
Charges were eventually dropped–but only on the condition that the victim agree not to sue the officers. My feeling is that the prosecutor who set that condition ought to be disbarred, since he was using his power of selective prosecution to protect people who he knew were guilty of perjury and had good reason to think were guilty of assault.
@The Nybbler,
For the record, the fact that it made the news suggests that it isn’t actually a high risk.
… which by no means excuses the behaviour of either the police or the prosecutors in those particular incidents.
There’s a difference between “happens” and “happens, and is under all the right circumstances for it to get reported in the news”. The latter could be low risk while the former is high.
Also, the news might report the first 2 or 3 instances of something and not report any of the rest. It is technically true that being among the first 2 or 3 reported instances isn’t a high risk, but the risk of it happening at all could still be high.
And beware of different subgroups. “Not a high risk for a random person” is different from “not a high risk for someone who the police want to get for contempt of cop so they file charges based on this overbroad law”.
A data point of my own: last summer I attended a court building in the UK and walked in with my usual bag, in which I’d left a Swiss Army Knife that I thought I’d lost. The police at the x-ray machine at the entrance found it, to which my first reaction was “so THAT’s where I left it”, immediately followed by “oh ****”. But all that happened is they told me knives were not allowed in court (fair enough!) and they kept it safe for me and gave it back at the end of the day.
My feeling is that there may be more to the mentioned cases than comes across in the news stories, and without seeing the actual judgements and the court’s reasoning behind them, I’m not convinced.
It’s not – said with a bit of British understatement – entirely implausible that individual police officers might sometimes treat people differently based on race and social class, of course.
Of course there’s something more. The most likely explanation is that the police wanted to get someone, either because he’s a minority, had the wrong political views, looks weird, tried to date the cop’s daughter, or just embarrassed the cop by insisting on his rights or proving himself innocent of some other accusation. In short, contempt of cop.
Alternatively the police may just be under internal or external pressure to catch some criminal, any criminal.
That doesn’t make the law non-dangerous.
Initiatives to respond to some social problem (crime, unwed pregnancies, people talking in theaters[1]) tend to look like successes, at least for a while, because of regression to the mean. You started the initiative when the problem seemed especially bad, and it’s common that the next year or two will show things getting better no matter what you do.
[1] They’re goin’ to the special hell.
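To see how strong this effect can be, here is a minimal sketch in Python (the Gaussian noise model and the launch threshold are made-up assumptions for illustration, not anything from the linked article):

```python
import random

# Yearly "crime rates" are pure noise around a fixed mean. An "initiative"
# launches whenever a year looks especially bad. Nothing is ever actually
# done, yet the year after a launch almost always looks better.
random.seed(0)
years = [random.gauss(100, 10) for _ in range(100_000)]

changes = [years[i + 1] - years[i]
           for i in range(len(years) - 1)
           if years[i] > 115]  # launch threshold: an unusually bad year

print(f"initiatives launched: {len(changes)}")
print(f"average change the year after launch: {sum(changes) / len(changes):+.1f}")
```

With these made-up numbers, the year after a launch improves by roughly 19 points on average, purely because the initiative started when things were unusually bad.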
It’s even worse than that due to what I’d call temporal selection bias. That 2014 article was written because the people running the experiment put out a press release, and they put out a press release then because at that exact moment the numbers seen thus far happened to look good for the case they wanted to make. Given this dynamic, the experiment would look good according to articles in the Guardian even if the crime rate were a totally random walk. If the crime rate looks bad, you quietly and privately claim the experiment hasn’t run long enough yet, or needs more resources devoted to enforcement, or needs to be tried on a wider scale to see the positive effect you’re sure it would produce; if the crime rate looks good, you issue a press release trumpeting that fact as evidence the experiment works and deserves to be tried on a wider scale.
(applying the same logic to global warming alarm, or even Apple keynotes, is left as an exercise for the reader. 🙂 )
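The press-release dynamic is just as easy to simulate. Another hedged sketch (the driftless random walk and the ten-year window are assumptions for illustration, not a model of the actual campaign):

```python
import random

# The crime rate is a driftless random walk, i.e. the campaign does
# literally nothing. A press release goes out the first time the numbers
# happen to dip below the campaign's starting baseline; otherwise the
# experimenters stay quiet and ask for more time or resources.
random.seed(0)
trials, announced = 10_000, 0
for _ in range(trials):
    level = 0.0
    for _year in range(10):          # give the "experiment" up to 10 years
        level += random.gauss(0, 1)  # a year of pure noise
        if level < 0:                # numbers look good vs. the baseline
            announced += 1           # ...so trumpet it in a press release
            break

print(f"{announced / trials:.0%} of zero-effect campaigns still get a "
      "'crime is plunging' press release within 10 years")
```

Most runs end in a success announcement even though the campaign has, by construction, zero effect.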
What’s Orwellian about the knife campaign?
My best guess is that Scott is talking about creating new categories and words like “knifefree,” but I don’t see it.
Having a government agency (this seems to be a creation of the Home Office, but I’m not certain on that) deliberately create what on its face looks like a grassroots social movement with artificial “peer” pressure and canned narratives, is not a central example of Orwellian bootfaceism, but I think I’d put it somewhere Orwell-adjacent.
Thanks, that helps me see it as a little Orwellian, but, still, is it newsworthy? Is it more than a marginal advance over the last 30 years of lifestyle propaganda? Didn’t MADD do this kind of stuff? It looks to me like it’s just adapting to social media. Is the point that social media is inherently creepy? Anyone can recruit a few peers to do videos, but trying to create a hashtag is taking it to the next level?
(I assumed it was government from the very beginning, so I didn’t get the feeling it was trying to trick me. Maybe I read Scott’s “UK” as the government, but I’m not sure why.)
I also didn’t get a deceptive vibe from it, but possibly that’s because the design suggests “official UK government site”, or at least “anti-knife charity site” in some way.
Yeah, the design doesn’t seem pseudo-grassroots to me. The Home Office logo at the bottom is hardly hidden away, either.
What’s the ethics of a government that is supposed to reflect or carry out the people’s values attempting to change those values? Does it matter if it is trying to prevent drift vs being “progressive”?
A government is supposed to reflect the values of the majority (or an average), which might mean trying to change the values of a minority. I think “I need to carry a knife to stab people who try to stab me” is a minority value (given that acting on it is illegal), and certainly not one I particularly care about protecting.
Governments using social media for social engineering strikes me as inherently creepy; perhaps not “100% always wrong”, but certainly cause for suspicion.
And, e.g., the Russian government using social media to engineer American public opinion has certainly been denounced as creepy or worse (far, far worse) from every non-Trumpist corner of late. I don’t think there’s a sign change just because the government in question claims the society it is trying to engineer as its subjects.
I’ll admit that propaganda campaigns seem inherently creepy to me, regardless of who’s carrying them out.
Where’s the lower limit? Is “drive safely” government propaganda?
That is, uh, not a point in this campaign’s favor. There are some movements heavier than MADD on cynical emotional manipulation in service of an engineered moral panic, but not very many of them.
I don’t know if it has changed since first posted, but on my browser the site displays a large Home Office logo at the bottom left, linked to its .gov.uk website. Technically that’s not proof of affiliation, but if someone deceptively slapped a government logo onto a prominent site like this, I doubt they would get away with it.
I’d be interested to hear Scott elaborate on this. I wonder if it’s a culture-clash thing, where even non-gun-toting Americans instinctively feel uncomfortable about the government trying to stop people from carrying weapons? Or maybe it’s just a matter of tone, but if so I’m not really seeing it — at least at first glance, it seems like a pretty standard ‘please stop stabbing each other, here are some reasons why it’s a bad idea’ kind of thing.
Admittedly I can see why the “…” bit could trigger some Orwell-receptors, if you get the sense that ‘help’ is a euphemism for ‘have them arrested, or at least put on a list somewhere’. But if you follow the link, it seems to genuinely prioritise talking to them, with contacting the authorities as more of a last resort. And surely there’s nothing inherently wrong with getting the police involved to prevent violent crime.
I honestly thought that website was some sort of spoof made by the pro-gun tribe to satirize some of the arguments made in favor of gun regulation.
Neurogenesis in adult humans might still be a thing. Autopsy results from March 2018.
http://www.cell.com/cell-stem-cell/fulltext/S1934-5909(18)30121-8
Don’t know if this counts as a “conspiracy theory,” but in the final years of Emperor Wu of Han’s reign (141-87 BC), a prominent minister’s accusations of witchcraft led to the deaths of thousands, including the crown prince, and may have significantly affected the course of early Chinese political history.
I gotta say, that Homestuck reference just makes me wish there was a rational Homestuck fic by Scott. If anyone can pull it off…
I feel like Yudkowsky might have a slight competitive advantage with regard specifically to rationalist fanfics of popular nerd brands, based on available evidence. I think Scott does best with original fiction and worldbuilding, as well as pursuing a broader range which includes philosophical, sociological, and kabbalistic elements.
But that’s exactly why Scott would be the better Homestuck rationalfic writer.
I think this very assumption is actually closer to the problem. Once you’ve narrowed down your explanantia to 1) Some People Are Just Better and 2) A Malevolent Conspiracy, or rather once enough people accept that those are the possible explanantia, you are trapped in Culture War Hell, forever.
Huh? “Things that are explaining”??
The Italian Wikipedia puts forward some doubts about the Syndrome K story: https://it.wikipedia.org/wiki/Giovanni_Borromeo
Regarding the South African land issue, Tyler Cowen posted this comment on Marginal Revolution (though it doesn’t take the same approach as the link here): http://marginalrevolution.com/marginalrevolution/2018/03/comments-south-africa.html
Interesting that George Bush was so neutral in that favourability graph. I’d have expected him to be almost as polarising as Trump. I wonder if that’s my UK bias speaking…
Speaking as a New Zealander, I never had any strong reactions (positive or negative) to George Bush. Certainly nothing like Trump.
[Edit: mind you, I may well be an outlier.]
Starting multiple unnecessary wars is not nearly as bad for a political figure’s reputation as acting crass. Of course, people’s impressions of Bush Jr. have unjustly improved a lot with the passage of time, so who knows what they’ll think of Trump in ten years under President Barr.
+1
Bush was a well-intentioned serious guy who had some pretty bad ideas. Unfortunately, he was effective at getting his bad ideas enacted into policy, which caused a lot of needless human misery[1]. Trump is a crass unserious blowhard who has some pretty bad ideas. Fortunately, he’s pretty ineffective at actually getting them enacted into policy, so he’s not causing a lot of human misery.
[1] IMO, as far as I can tell, etc. We can’t actually run history back and try other policies to deal with 9/11 and Saddam and financial regulation of mortgages and such.
Any president would have gone into Afghanistan (assuming bin Laden executes on his long-laid plans irrespective of who holds the office, which seems very likely; and that said plans work, which is not as certain, but is still the way to bet); that was, as far as I can tell, a forced move. Iraq 2.0, though, not so much. And other presidents might have put considerably more boots on the ground in Afghanistan, which… hm. Unknowable outcomes there.
The Obama administration tried substantially more boots on the ground in Iraq, and they didn’t achieve much.
If people’s impressions of Bush have improved unjustly, that’s probably because they were unjustly low to begin with. Bush got himself into a massive hole in Iraq, but it led to some administrative soul-searching and him putting together a team that dug itself out of that hole. Contrast that to, say, the mess the Obama administration made of Syria and their total lack of re-appraisal, and I know who I’d prefer.
Regarding that graph, there is an alternative way to view it which offers some additional illumination of the data.
Take “Wealthy people” for instance. Seeing as that item lies in the upper left, you may get the impression that Republicans view wealthy people quite favorably. However, if you view related items in conjunction instead of viewing each item in isolation, an interesting new picture pops up. Republicans score “Poor people” about 0.4 points above “Wealthy people”. Democrats, on the other hand, score “Poor people” about 2.8 points above “Wealthy people”. So rather than Republicans liking wealthy people and Democrats liking poor people (red vs blue side of the square), the conclusion is now that Republicans care comparatively little about economic status, while Democrats care a lot (favoring poor people).
It’s a pretty fun game looking for interesting relationships. Drawing a line between related items, the length of the line roughly represents the degree of opinionation, and the slope of the line represents the ratio of republican to democrat opinionation. So a slope of +1 indicates the parties agree, whereas a slope of -1 indicates diametrically opposed opinions. Here are a couple of examples:
Republicans and Democrats have inverse views on race. That is to say, the republican ranking is
White people – Jews – Latinos – African Americans ,
while the democrat ranking is
African Americans – Latinos – Jews – White people .
In addition to being negative, the slope is steep, meaning Republicans care a lot more than Democrats.
The Men-Women line has a slight positive slope. Democrats and Republicans agree Women are better than Men, but Democrats care a lot.
The Muslim-Christian line is maybe the steepest. And it’s long too. Republicans care a lot.
Finally, the Republicans-Democrats line has a slope of almost exactly -1, which I find sort of cute for some reason.
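If you want to play the line-drawing game numerically, here is a small sketch of the geometry (the axis convention is inferred from the “upper left” remark above, and the scores are invented placeholders, not values read off the actual graph):

```python
from math import hypot

# Each item is a point (Democrat favorability, Republican favorability);
# the numbers below are illustrative placeholders only.
items = {
    "Men":   (3.0, 3.2),
    "Women": (4.1, 3.6),
}

(d1, r1), (d2, r2) = items["Men"], items["Women"]
length = hypot(d2 - d1, r2 - r1)  # degree of opinionation
slope = (r2 - r1) / (d2 - d1)     # Republican vs. Democrat opinionation

print(f"Men-Women line: length {length:.2f}, slope {slope:+.2f}")
# With these placeholders: a slight positive slope (+0.36), i.e. both
# parties rate Women above Men, but Democrats care more.
```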
You might want to try treating Jews as a religion, which is how Republicans might be more likely to classify them (unless they’re really far- or alt-right).
I was wondering myself about the *lack* of antisemitism on the Republican side, and how long to expect it to last as racial polarization and identity politics increase and more and more Jewish SJW types jump on the anti-white train. You’ve got lots of really prominent Jewish lefties and something like 75% of Jews vote Dem. I guess Ben Shapiro and Dave Rubin are doing G_d’s work? (One of the main differences may be the small but significant fraction of center-right Jewish talking heads–you don’t see entire Latino or black right-wing publications like Commentary, and I can’t think of a Latino Mike Savage or Dennis Prager.)
I don’t know what’s more accurate between viewing “Jewish” as an ethnicity and a religion, but I think the ethnicity angle is more fun in the context of that graph. Admittedly, that’s mostly due to the fact that “Jews” falls almost right on the curve of the other ethnicities, while the religions are more scattered.
It’s painfully overlong, and I recommend starting about 1/3 of the way through, unless you want to read digressions into Rousseau and Occupy Wall Street (I’m not joking), and scintillating prose like this: “…”
How do you duck out of the way of a boot stomping you in the face? Doesn’t Orwell’s simile suggest you’re lying on the ground, where ducking (defined as “falling quickly to avoid a blow”) would be impossible? And if you’re ducking, why do you need wiggle room?
I found it dishonest, too. He castigates Jared Diamond as someone who was “never trained in the relevant disciplines”, and then mentions that Diamond did his PhD on “the physiology of the gall bladder”. His intent (I believe) was to make Diamond sound like an idiot who wandered off the reservation and is writing about a topic he doesn’t understand.
But Diamond also has a Bachelor of Arts in anthropology and history from Harvard. He is trained in the relevant disciplines. I don’t see how this could be an honest mistake: Graeber clearly researched Diamond’s education, and would have seen his BA. I don’t like Diamond, but this is plain unfair.
Graeber wrote a book called Debt: The First 5,000 Years (Noah Smith’s review, he doesn’t love it but thinks it contains some interesting stuff). In it, he claims that the popularly taught history of economics (barter economies which turned into tokenist economies, which led to fiat currency) is wrong, and that societies based on barter probably never existed. As evidence, he cites the fact that no modern hunter gatherer societies use barter.
…but here he’s claiming that the lack of inequality in modern hunter gatherer societies doesn’t prove anything about early man, since thousands of years of co-evolution have occurred. So which is it? Can we learn things by studying modern HGs, or not? He’s trying to have it both ways.
He doesn’t advance much evidence of his thesis beyond 1) there are some graves with lots of gifts in them, and 2) there are some impressive early structures that would have required collaboration to build. He also declares that the “agricultural revolution” never happened, because it took a few thousand years for it to be fully adopted. Given that we’re talking about a 15,000 year timespan, that seems like a really poor argument.
I’m far from an expert on the subject, but I did read The World Until Yesterday, and Diamond notes that the New Guinea highlanders, the one group of non-staters he’s most familiar with, were isolated from the outside world until the Sixties or so–within living memory. They, at least, would represent a legitimate coelacanth.
There is probably a selection bias in that any until-recently-uncontacted hunter-gatherers must necessarily be living in places hostile to permanent human settlement (e.g. jungles/rainforests), else humans with ships and guns would long since have shown up and said “we’re settling here”. So I think there’s at least an argument that we shouldn’t trust modern examples for the question of whether neolithic hunter-gatherers built semipermanent settlements.
But Graeber should have made that more explicit, and acknowledged its limitations rather than treating it as a slam-dunk “modern H-Gs don’t count”. Also, I’m going to guess members of the extended tribes gathered in proto-cities for the off season might have engaged in barter the way members of a tight, permanent band usually don’t.
@ John Schilling:
A related selection bias is that the sorts of H-Gs whose culture encouraged things like barter and tokens and settlements turned into us. Any culture that stayed at the H-G stage while never engaging in trade or exploration until now might just be deeply weird – their cultural hill-climbing strategy got stuck in some rare local maxima they never escaped. In which case you’d study them not to learn how we used to be, but to discover the peculiar ways in which they avoided being how we used to be.
Indigenous tribes of the Great Plains had a sign language for trade between tribes that had different spoken languages. So there was at least some interaction at a scale even greater than extended tribes. That is probably not the case for until-recently-uncontacted HGs.
Having read Debt: The First 5,000 Years, I largely agree with Noah Smith’s review. Graeber is a certain type of insufferably pretentious leftist who mistakes rambling for authoritativeness.
That said, Graeber got his PhD in anthropology doing real field work in Madagascar and has been teaching anthropology ever since. He is right to point out that Jared Diamond has divided his attention between a career teaching physiology and studying birds and ecology. We probably should assume Graeber has a better handle on the latest anthropological consensus unless he gets things egregiously wrong.
As for elaborate graves and impressive structures in the Upper Paleolithic and Mesolithic, what is the alternative explanation to large, hierarchical societies of hunter gatherers? Sure, their mere existence doesn’t prove they were common; that’s something that would depend on the relative frequency of such finds compared to simple graves and simple huts. OTOH elaborate graves and buildings might be more likely to leave remains, thus biasing the available evidence. It’d be nice to have confirmation from another anthropologist/archeologist that this is the real consensus in the field.
It’s interesting that Graeber argues for the existence of inequality deep into the Paleolithic given how he manifestly desires to find fault with modern society for its inequality. It speaks to his integrity (or the overwhelming evidence on this issue) and makes me more inclined to believe his thesis on the prehistoric frequency of inequality.
P.S. Ducking can also just mean evade or avoid. That’s why you can duck out for a bit, even if you don’t actually fall toward the floor while doing so.
At a very large tangent, a search of the Usenet archives should find some long ago exchanges involving me, Graeber, and James Donald.
Do Usenet archives still exist? Google deleted their search page for them a while back.
You could try the internet archive.
Here is one page with posts from all three, as one starting point.
I skimmed through skef’s link. Graeber comes across to me as someone not interested in honest and thoughtful engagement.
As an aside, looking back and seeing how bad internet political debate was in 1998, just as bad as almost all other places I’ve seen political discussion up through modern day, makes me appreciate all the more the quality of discussion on SSC.
My memory from one part of that extended interaction was that at some point I was trying to act as a referee between Jim and Graeber. I asked each of them to put his claim in some form clear enough to be tested. Jim did, Graeber didn’t.
When I read Debt I mostly thought it was pretentious but had a lot of interesting anecdotes. When I got to the chapter on the 19th century I was disappointed because I was most interested in taking away from the book how bankruptcy came to exist as a common legal reality. I think that an acknowledgement that things could get better would have contradicted Graeber’s thesis. Then I got to the last chapter on the 20th century and history I knew fairly well and just got angrier and angrier. I nearly threw the book when Graeber claimed that after Argentina’s default America went to war in response to demonstrate to the global south that defaults wouldn’t be tolerated. Most people reading probably end up thinking this is some forgotten early 20th century banana war, of which there were admittedly many. But he’s actually talking about Argentina’s 2001 default and the war has to be either the US invasion of Afghanistan or Iraq.
So then I realized I couldn’t trust anything Graeber had said in his book until then, stuff I’d already gone and believed.
I don’t think that anyone who reads that passage will interpret it as being about the early 20th century.
Vox article suggests the Naloxone study has many problems:
https://www.vox.com/the-big-idea/2018/3/13/17115558/naloxone-opioid-overdoses-deaths-theft-moral-hazard-study
About the twitter polls:
“would you rather eliminate all: [Dogs: 37] [Members of a Race: 63]”
WHAT????????????
There are really people who would rather kill humans than animals? and that’s a MAJORITY!?!?!?!?!?!?!?!?!?
I could steelman this response, perhaps, by using pure Utilitarian reasoning:
Eliminating all dogs everywhere will cause a lot of unhappiness to a very large number of humans, and may have strongly detrimental effects on the ecosystem (possibly catastrophic ones, depending on how you define “dogs”). By comparison, eliminating a handful of humans from some really rare and tiny race may be less damaging.
I’m having a tough time with that explanation, but I can’t come up with any steelman. Kudos.
Maybe dogs are just nicer than humans… This is like that utilitarianism vs. deontology fight Loki and Thor had at the climax of Thor.
(Loki tries to save the Nine Worlds forever by genociding the race doomed to start Ragnarok, to which Thor replies “YOU CAN’T KILL A RACE! *hammer*”)
Yes, this is how ethical debates are usually won.
Conflict theory FTW.
Actually, it’s more of a case of “thunder hammer FTW”, but yes, technically that’s part of conflict theory too 🙂
I think dogs, like most hangers-on of humans, have a negative impact on many ecosystems, because they are the near-universal invasive species.
If I could, at the same time I was eliminating dogs, make everyone forget they ever existed, I’d consider doing it. I wouldn’t want to cause the suffering of people losing something they had, but I don’t care for seeing dogs everywhere and I don’t think dog culture is flourishing-maximizing (cf. wireheading).
Americans are *really* into dogs. It gets creepy sometimes. In some demographics they have essentially replaced children, and their masters call them “fur babies” and refer to themselves as “dog parents”. The survey results honestly do not surprise me at all.
It’s awful. Being not a dog person, and in the age range where many are attempting to assuage their desires for children with dogs instead, going anywhere in a mid-size city sucks.
I used to enjoy sitting on the patio area if the restaurant offered that. No more! Walking around downtown on a weekend morning? Pssh, hope you like dodging leashes.
It’s part of our routine now when my wife and I want to explore- what’s the dog policy at that restaurant/park/shopping area/etc?
I hear you. They could have rephrased that as “would you rather eliminate all dogs or have 10,000 dollars deposited into your account?” and I’d still probably vote to genocide the dogs.
That said, I confess if you replace dogs with cats my first instinct is to save the cats. I understand the impulse. Actually voting that way is pretty jaw-dropping though.
It’s awful being an actual dog person as well, i.e. someone that grew up in a rural area and spent a lot of time around working animals. I love dogs, but I treat them like dogs, not people. The dogs are much happier for it as well.
Then your dogs are dog dogs, and you’re a dog dog person. Whereas the kind of person I was complaining about treats dogs like people; therefore their dogs are person dogs, and they are a person dog person.
This should not be confused with canine furries, who are person-dogs, and dog otherkin, who are dog-persons. A furry otherkin would be a person-dog dog-person, and if they liked people like you they would be a dog dog person person-dog dog-person.
I take my daughter to the dog park to try and convert the dog people into people people. (Well, okay, mostly to satisfy the tiny dog lover, despite apartment life.)
This is stupid. Having children doesn’t preclude having dogs or vice versa, and having a dog is rational: they’re good for your health.
Pet owning may be rational, and I’ve seen examples where for some people their pet really is necessary to their mental health.
But dogs are not humans, and the rights of humans override those of animals. Saying you’d rather kill all Blagovians than all dogs makes you a species traitor, pally!
Right. There’s something very wrong with a society that chooses dogs over people. I don’t know if that’s a reflection on what society has become or whether that’s just a reflection of the people saying that but it’s not a good sign.
@Deiseach:
Right, killing a race is Not Good. If even the heathen god Thor would pound me with his hammer for trying, how much more the Good Himself? 😛
This is the best match between handle and post I’ve ever seen.
Maybe you have not experienced the social context I am referring to. People absolutely do use dogs as surrogates for children, this is literally[1] true with no exaggeration whatsoever.
[1]: Actually literally, not figuratively.
The question is incomprehensible to me: a race of which species? And what is meant by “a race” – one I can choose, or a random one?
With a question that makes as little sense as this one, it’s unsurprising to have answers that are confusing.
Good point – it might have been interpreted as “all dogs, or just one particular breed of dog?” (It seems like a bit of a stretch, since I’ve never heard anyone referring to a dog breed as a race, but then again I don’t move in those circles.)
If I could define “members of a race” as “participants in a particular athletic competition”, yeah, I’d vote for that one. Especially if I could also define “eliminate” to mean the “nobody wins” sense, rather than the “everyone dies” sense (though this only makes grammatical sense for the latter option).
Is no one going to say this? Is everyone really going to take this result at face value? Am I going to have to be the buzz kill?
*Sigh*
When you post a poll online, especially on a very popular social media site, and it’s on a frankly ridiculous topic that has effectively zero chance of ever happening, people are not going to respond seriously.
This doesn’t show some deep cultural psychosis that people like dogs too much or are racists somehow. It doesn’t indicate some widespread lack of rationality. It shows that people think voting for the race to be eliminated is funnier. Maybe you don’t think it’s funny and sure fair enough, but all these hot takes or shock at a twitter poll strikes me as really missing what’s happening here.
Is it a possibility that people just answer whatever they think is the funniest? Yes. But are you really more justified in believing that’s the case compared to the alternative? I don’t think so. Twitter polls are not the final arbiter of public opinion, but that doesn’t mean people don’t answer them honestly.
OK, maybe this angle would help.
There is literally zero skin in the game on answering this question, so it tells us nothing about how people would act if they really did have the individual ability to make that choice. Polls should be looked at as being super prone to giving an answer that’s different from what would happen if you actually credibly gave a person power to enact their poll choice.
Which, by the way, is why they aren’t that bad at tracking voting behavior: in most large elections, individual votes have near-zero chances of impacting the outcome. Thus the actual skin in the game between voting and polls is fairly close.
Skin in the game is not the end-all be-all of usefulness. You can get some sense of what people believe by asking them. Is it flawed? Sure. But it does track with other objective measures: Americans’ increasing willingness to watch TV shows with gay characters, for example, tracks well with polls on the subject.
You already admit that regular polls are somewhat weak; why’s it so hard to accept that a twitter poll, which lacks random sampling, is so weak as to not be worth a single hot take? Especially on a ridiculous subject!
Polls for opinions on gays at least are talking about a situation people are likely to face. It’s not great compared to just seeing how people act or what shows they’re willing to watch, but a person is much more likely to take a poll seriously if the question is one they consider serious. So no, I don’t care about a twitter poll that says the people taking it answered they preferred eliminating a race to dogs. That’s as bad a piece of evidence as a pure anecdote.
I dunno, I found the fact that most people think IQ is mostly learned to be an interesting look out of the bubble. This trend seemed to be reflected in all the IQ-related questions and I can’t think of why there’d be unidirectional trolling.
Also the questions about D/S orientations were interesting to me- I also expect those to be relatively honest- though probably from an extremely biased source.
I’m sure that there is a healthy Lizardman’s Constant going on there, as well as people answering for the lulz.
But I am also sure that there are people out there, not particularly racists or racist about any particular ethnicity, who would value animals above humans and answer accordingly. If you’ve never had militant vegans sermonising* at you online, consider yourself to have dodged the bullet!
*With some really ludicrous assertions which I won’t even get into because no-one wants to step into that septic tank.
There’s an argument from a utilitarian standpoint that dogs are a species (or a family), and for a species, extinction is forever. Eliminate all caucasian humans, and unless you are a white supremacist you probably acknowledge that all of the good works caucasian humans would have done will eventually be done by the other races of humanity. Eliminate all dogs, and all the good that would ever have come from Man having his Best Friend by his side is now lost for all time(*). Integrate over the stelliferous era, and the transient harm of killing some people may be preferable to the enduring harm of eliminating all dogs.
Every other moral foundation says the dogs have got to go. Well, OK, maybe Nietzsche would have regarded that as “slave morality”, valuing some distant race of hominids you’ve never met over your own faithful canine companions, but that’s a stretch.
* I’m assuming the spirit of the question is such that “eliminate” does not mean “…and then recreate via e.g. genetic engineering”, else the word “kill” would have sufficed.
Would you be willing to go on the record saying the same thing about other races? No trick, just literally replace “caucasian” and “white” and post it. Some races/ethnicities to try: Asians, Blacks, Arabs, Jews.
The unwritten rule is that you aren’t supposed to even speak hypothetically or joke about genociding any race other than whites. It’s like saying the word “bomb” in an airport or using the word “n*****” in a discussion of racist attitudes in the Jim Crow Era.
Edit: What’s funny is that my original version of this comment — which spelled out the n-word — seems to have been automatically censored by the blogging system.
“Eliminate all caucasian humans, and unless you are a white supremacist you probably acknowledge that all of the good works caucasian humans would have done will eventually be done by the other races of humanity.”
I believe in chaos. An approximately equal amount of good works might be done, but it will be a different set.
“Eliminate all dogs, and all the good that would ever have come from Man having his Best Friend by his side is now lost for all time”
Or we just domesticate and breed some other species to be Humanity’s Best Friend. Think of how thrilled a lot of readers were by the daemons in Pullman’s His Dark Materials series; some people would be happy with monkeys, snakes, or other creatures as pets/companions instead of dogs.
Pick some appropriately cuddly species, domesticate it, have people accustomed to it as a companion animal so that “when I was a kid I always wanted a [name instead of dog]” is considered not just average but normal, and we miss none of the good we would have obtained from dogs (I agree it might be a different good from that of dogs, but that’s another question).
Out of curiosity, does anyone have any suggestions for how we could replace farm dogs? (I’m thinking mainly of herding.)
I guess maybe a guy on horseback could do much the same job, but that would be significantly more expensive.
Is there much herding with dogs still?
@Aapje: Look up “extreme sheep herding”
A bigger issue would be replacing guard dogs. Dogs are really good – and really cheap! – for defending property against potential intruders and defending people against potential attackers. In places with a lot of crime or corruption or insecure property titles, people still rely on watchdogs as a night alarm system and general bad-behavior deterrent around the clock. It’s hard to imagine what the next-best animal for that purpose would be. What other animal is as good at defending territory or at raising a ruckus when disturbed? Geese?
@Aapje, I can’t imagine what the alternative is.
… well, people have apparently started using drones, but I don’t think that’s all that common yet. It is a potential answer to my original question, however.
@Harry Maurice Johnston
Fencing & people.
In my country, sheep are already typically confined to relatively small areas, so with a few people side by side, they seem to be able to prevent the sheep from getting behind the people.
If necessary, one can even use movable fences to pin down the sheep in a smaller and smaller area.
Then again, my country has high quality grassland and is compact. More extensive farming methods may not make the above very feasible.
@Glen Raphael
And yet a lot of places are secured with fences, cameras and guards without dogs. People will make do without dogs or an alternative, I bet.
Huh. The grass grows quickly enough for the sheep to survive in a confined area?
Dutch soil is generally quite rich and with some added fertilizer/manure, perennial ryegrass can grow quite fast with the moderate temperatures of the Dutch climate.
In some places it is obvious that the sheep are moved between areas. In other places, the sheep seem to always be there, so they may depend on a sheep to grass ratio that allows the grass to regrow.
I’m not a sheep farmer though, so I can just observe from a distance.
@AG: or “one man and his dog”- competitive sheep-herding was regularly shown on national television in the UK until 1999, and is still shown annually as a segment on another programme.
Look at it this way: There are a lot of people for whom some race X is a hated out-group; a scapegoat for all their problems; a target to bash for purposes of virtue-signalling; or just a general nuisance which the world would be much better off without.
Very few people see dogs in this way.
Here’s a quote from one well-known writer: “…”
Given that this sort of attitude is floating around, I don’t have a hard time believing that a lot of people would prefer genociding some race to eradicating dogs. Although 63 and 37 percent seem kinda high, so who knows.
Susan Sontag
She could have been worse– that is, she was also opposed to communist dictatorship, at least to some extent.
Good or bad, nobody writes that way about dogs.
Here’s another example:
https://www.youtube.com/watch?v=kBS2Frnbt6U
A black activist talks about the need to “exterminate white people” and is applauded by some subset of his audience.
The result is hardly surprising. Humans have, mostly without realizing it, evolved dogs to
1) recognize a lot of human emotion,
2) want to make their masters happy.
That “want to make their masters happy” is stronger than the general “want to make the people we encounter happy” most of us feel. Sometimes dogs are even better at number 1.
It brings to mind a quote from Chesterton: “Wherever there is Animal Worship there is Human Sacrifice. That is, both symbolically and literally, a real truth of historical experience.”
Although christhenottopher’s point about how seriously to view twitter polls is well taken.
Is it strange that (having never heard of Vermin Supreme) I at first assumed it was a new nickname for Donald Trump?
Every time I read Vermin Supreme’s name, I try to guess whether it’s a nickname for Trump or Clinton, before my brain kicks in and remembers “No, that’s an actual person who actually calls himself that”.
I’ve long been fascinated by the public leftward lean of a certain kind of capitalistic enterprise:
Rock bands!
It’s hard to think of any business that is more purely free-enterprise capitalistic, and yet the stars are almost universally outspoken leftists. I assume this posture is a necessity in that line of business, whether you personally agree with it or not. I’m less sure of why, and welcome theories!
Leftists have more disposable income, collectively?
Rock bands are only 50ish years old, so I would hardly call it a necessity. I don’t think of AC/DC as being political but they made a lot of money.
That really isn’t surprising if you think about what sort of people are likely to join rock bands.
For every musician, in any genre, that hits it big, there are a very large number that don’t make enough money from their music to live off without a second job. And with a few exceptions, even most of the ones that do hit it big started out in the latter situation. As a result, musicians and creative types in general are highly selected for either valuing self-expression more than they value making money, or not having any better options for employment available to them. In recent times, outside of rural areas (hence the perceived association of Country Music with conservative politics), these demographics have tended to be heavily skewed towards the left side of the political spectrum.
I’d also add that being a performing artist often involves a lot of traveling, meeting new people of various backgrounds, and finding ways to appeal to them and their interests. So in addition to being high on self-expression, performing artists are also probably really high on openness.
Their main point of interaction with capital is with record labels, which are infamous for being incredibly abusive towards their contractors. It’s not signaling; they are simply in a sector of the economy where most capitalists bear a striking resemblance to Snidely Whiplash, Mustache-Twirler Extraordinaire, which does not inspire a lot of confidence in capitalism.
I’m not sure your premise is true. The first four rock bands that came to mind for me were the Rolling Stones, AC/DC, Kiss and Springsteen. I dunno why those were the first to come to mind, but I would say only one is left-leaning (Springsteen). I believe Kiss was actually a very explicit money-making enterprise (which is perhaps why it came to mind). The Stones and, as Wrong Species points out, AC/DC are pretty apolitical.
The next few bands to come to mind are the Beatles, Pink Floyd, Dire Straits. Obviously some Beatles members were a bit left-leaning, but I’m not sure about Pink Floyd and Dire Straits. They certainly weren’t particularly publicly-lefty as a band…
Am I missing the leftiness of the bands I’ve mentioned or are you thinking of different bands?
Oh, I just thought of Meat Loaf, who is right-wing.
I seem to remember an interview clip from Top 10 Prog Rock or some such documentary in which Bob Ezrin (producer of The Wall) said something like “People ask me if [Pink Floyd bassist and principal songwriter] Roger [Waters] was a Nazi, and I tell them, ‘yes'”.
Brief Googling suggests to me that he was and is a committed member of that part of the left whose pro-Palestinianism and antipathy to Israel at the very least threatens to teeter into outright anti-Semitism.
Whether that extends to other members of the band I’m not sure; they don’t like each other very much. My best guess is that they’re a more generic kind of vague public school artsy lefty.
Being both a Nazi and of the left is not possible by my (or, I think, the most common) definition of the left. Those ‘leftists’ who are so anti-Israel that they teeter into anti-Semitism are debatably part of the left.
The activism part of his Wikipedia page mentions support for some neither-left-nor-right things, such as tsunami relief and fighting extreme poverty and malaria.
One thing seems right-ish to me which is support for the right to hunt.
Two things seem left-ish to me which are climate change and pro-Palestine (and perhaps anti-Israel).
Although really none of those right-ish or left-ish things are actually left- or right-wing. I’d put him down as a generic mishmash philanthropist.
I suspect, based on the lyrics, that he is not much of a fan of capitalism.
I don’t really think Nazi is an accurate description – he’s an oddball, more than anything – but the combination of his belief that it was a mistake for Britain to go to war with Germany (driven primarily, one suspects, by his father’s death in action at Anzio only a few months after he was born) and his vocal antipathy to the state of Israel… well, it’s tough to blame Jewish people for being wary of him, to put it mildly.
I’d be surprised if many of them, besides the few obvious exceptions, care much about politics at all. They happen to occupy a subculture (it’s true of the performing arts generally) in which political statements have largely ceased to function as expressions of genuine opinion and become a form of etiquette: if you want to be regarded as a good person, you understand, these are the sort of noises you are expected to make.
“it inspired three rebellions by people pretending to be Nero.”
Impostors played a pretty big role in history until not that long ago, when pictures finally became cheap. King Henry VII of England had to fight off two rebellions fronted by impostors: Lambert Simnel, who posed as the Earl of Warwick, and Perkin Warbeck, who posed as the younger of the poor Princes in the Tower.
Made rather confusing by the way “pretender” is usually used in that context…
A similar thing happened in Russia not long after. Following the death of Ivan the Terrible’s son Feodor (who left no heirs), no fewer than three different people (in sequence) claimed to be Feodor’s younger brother Dmitri. The first one actually managed to make himself Czar for a bit under a year, before he was lynched for being perceived as too pro-Polish and pro-Western. A minority of historians think “False Dmitri I” might actually have been the real deal, but II and III were definitely fakers.
It doesn’t sound like Graeber is saying anything new. We’ve known that (relatively) large sedentary populations of hunter-gatherers were possible in places where the food supported it, and that was true in parts of Mesopotamia and the Near East (as well as the Pacific Northwest).
Of course, he buries it in several pages’ worth of complaints about inequality and about how historians are supposedly making inequality seem inevitable.
Don’t forget Japan, the Pacific Northwest of Asia.
Right, and he says that just as people 10,000 years ago could switch social systems just like that, so could we. How would that even work in a modern capitalist society? “Ok guys, every winter we’re going to stop protecting private property. We trust you to make the right decision.”
Yeah, the polemic was so aggressive I almost didn’t make it to the “new model”.
But there were some interesting bits.
But yes, I was seeing this model 35 years ago when I was reading Jean Auel, who was well-read if no specialist.
“Kevin MacDonald’s anti-Semitic conspiracy theories.”
I don’t think that word means what you think it means…
Stevenson and Wolfers’ finding appears to be solid for the drop in suicide, but statistically insignificant for murder once they controlled for a secular decline in spousal homicide. That resolves the inconsistency between their finding and the conventional wisdom that announcing a divorce is when you’re most at risk of being murdered by your husband.
As someone with some experience in this respect, banning regulators from joining the industry they regulate is a foolish and shortsighted move.
The reason for this is simple: long term incentives.
Take my profession, the law. The best lawyers in NYC earn about $10m a year. The very good lawyers who make partner can expect to earn about $1-5m a year. Their law school summer interns (2L summer associates) make $180,000 a year (prorated for the summer).
Meanwhile a federal prosecutor starts at about $80,000 and is capped by law at around $120,000. A line attorney at the SEC makes maybe $90,000. Promotion in either job requires political connections, as you’ll have to be appointed. Your offices are shabby, with outdated equipment and 90s-era technology. You don’t have business development budgets, you are committed to stay for multiple years, you certainly don’t get bonuses, and you often work harder than your private-sector counterparts. And don’t get me started on the state guys, who are basically seen as junior varsity compared to federal.
So who does the government jobs? Think of every government employee stereotype: the lazy (because it’s hard to get fired); the power hungry; the independently wealthy; the nakedly political.
Or – the really good ones who see the revolving door as an opportunity.
See, in the legal field there are two ways to make partner: prove your worth as a great associate or as a great regulator. Clients love to have a former regulator on their side because it shows the government they’re serious. And what kind of regulator do clients want? The guy who was a puppet in office? Or the guy who was tough as nails and can now convincingly advocate on your behalf?
I think people have this view of regulation as some sort of ideological battle where Sanders-type regulators yell at Romney-type corporatists in a cultural battle. In this model the revolving door is awful because it proves those Sanders types were really closet corporatists after all. In reality working on either side involves basically the same issues and tests the same skills.
Basically the revolving door is the only fix for the fact that the private sector pays literally orders of magnitude more than the public sector. And it fixes it in exactly the way you want it to be fixed: by encouraging talented lawyers to join the government and be tough, capable regulators. Banning it just eliminates one of the last good incentives to go into the public sector. It would catastrophically devastate the talent corps of the government unless you can find a way to get the government to start matching the private sector’s multi-million-dollar salaries.
If I was a government regulator for an industry, I’d make sure to be extra vicious in my regulating so that industry hires me out. And they better keep me hired because if I get unhappy, I’ll go back to the king’s court to decree everyone in industry must wear a duck on their head. This threat doesn’t work if there is no revolving door.
Very much so. In law at least the issue is mitigated by the fact that there are tons of decent lawyers. In more specialized fields, there are probably only a few dozen experts qualified at the top of their profession that have the requisite subject matter expertise.
Now imagine declaring that you’re going to fix government regulation by making absolutely sure none of those experts will ever have any financial incentive whatsoever to work for the government, because not only are they going to destroy their salaries doing so, they are guaranteeing that even after they leave they still aren’t allowed to get a high-paying job.
Imagine the same proposal in medicine: decreeing that if you join a nonprofit hospital, you’re first taking a 90% pay cut by law, and then that pay cut sticks with you for ten years even after quitting your nonprofit job. Is that going to make more or fewer talented doctors join nonprofits?
Now I get that Yang wants to raise federal salaries to solve this problem, but come on. The private sector will always outspend the public sector, and heavily so. “Let’s massively increase government salaries by 10x” is political suicide in this climate, and as a rule I don’t vote for people whose policy plans require “And Then A Miracle Occurs” as one of the steps.
In more specialized fields, there are probably only a few dozen experts qualified at the top of their profession that have the requisite subject matter expertise.
This concern is the key one for me. I’m an actuary: there are only about 20,000 credentialed life actuaries in the US, and they specialize in a wide variety of things. If you are looking for “understands and has an informed opinion on asset factors for C1 risk” or “knows financial reinsurance well and can distinguish reasonable and unreasonable structures”, you’re talking a talent pool that totals less than 100. I want some of them working for the NAIC, and some working in industry–but it’s unlikely that they would work on only one side of the divide.
@Theory
I think that you put too much stock in people’s rationality here (on both sides). I question whether most people can so easily switch allegiances, and whether the people at the companies can avoid developing antipathy towards harsh regulators.
It seems much more likely that companies prefer to hire the capable, who choose not to regulate harshly, out of sympathy with business, even though they could.
I’m not putting stock in anything. I’m telling you how it empirically works in the real world. You can look at the resumes of those hired at Biglaw firms – they’re all quite prominent prosecutors/regulators who made a name for themselves through big cases.
Sure, Ajit Pai will get hired by some pro-business group when he leaves. But so will Preet Bharara and Patrick Fitzgerald and Sally Yates. The point isn’t that they and their firms are being hired for their ideology, but that they 1) have demonstrated their legal chops, and 2) serve as an excellent signaling device for clients wanting to prove to the government how seriously they’re taking an investigation.
A billion dollar corporation doesn’t need to find lawyers “on its side”. That’s what the money is for. The ideology of an attorney is basically irrelevant because government investigations are rarely sexy ideological battles. They are just legal work.
Is it your position that regulatory capture is not a real/significant problem?
Regulatory capture exists, but it’s not because the person regulating knows something about his field.
It is, and the answer is not “let’s make sure 99% of the competent people in the field never join the government”.
The government is already seriously handicapped by the fact that it pays so much less than the private sector. Take tax law. If you’re one of the country’s great tax attorneys, you’re easily worth $5-10m+/year. Meanwhile the freakin’ IRS Commissioner makes around 2% of that: $167,000. And that’s a job you need political connections and a lifetime of service to attain. A line IRS attorney makes a fraction of that. So the IRS already struggles to hire the best in the business.
And now people want the IRS to have to further say, ‘oh yes, once you come work for us, you’ll never get to be paid your market rate again’. It’s absolutely insane.
This is more or less my position, well articulated. I would like to add that “regulatory capture” makes a lot less sense when you’re in the trenches, actually working with the regulators. I spent 2 years doing regulatory compliance work for an aircraft manufacturer, and it was very different from the screaming-match picture described above. The FAA people wanted to make sure we’d done good work, but they worked with us on efforts to improve our quality and to make it easier for all of us to get our jobs done. There were days I wished we’d captured them, though that was mostly to avoid read-through 17 of a complicated and boring technical document.
I think people are a little more worried about the revolving door being used for simple, lightly obfuscated bribery.
“If you scratch our back today and use your powers as a regulator to change the rules to favor our business or to make sure decision X goes our way we’ll make sure that there’s a cushy long term post where you can get overpaid while doing very little work”
Even if nobody holds up a sign stating it, the offer can be implicitly on the table every time an important issue worth a lot of money comes up.
If someone working as a regulator is involved in a decision that makes a company a lot of money, and then just happens to move to a job at that company a few months later at 5x the salary… people are reasonable to be suspicious of wrongdoing.
I think people are a little more worried about the revolving door being used for simple, lightly obfuscated bribery.
Matt Levine has written a bunch about this topic. And cited the literature, which exists. The literature always exists. People have come up with ways of measuring this.
If you have strong feelings about the topic, you should read through his archives.
If you don’t have strong feelings, well, it doesn’t matter. I could say what the summary is, but the people who agree with what I say would nod and then forget about it, and the people who disagree with it would tsk and then forget about it.
he just seems to be a random blogger who discards the idea as “dull and unimaginative” then quotes a discussion piece that itself cautions that the observations they talk about shouldn’t be taken to mean [exactly the position he takes]
Matt Levine is very much not a random blogger in the sense you are talking about. He’s a deeply respected and incredibly intelligent commentator on the financial services industry. That doesn’t mean he is right about this but dismissing his views without considering them is not a very good truth seeking strategy.
It’s fine to not care much about the issue, or to care so little that you don’t even know the names of the most prominent writers on the topic in the business press today. There’s no requirement that you care about these things.
Specifically he cites this study of 35k banking regulators which finds that during periods of higher regulatory action the revolving door spins faster, which seems to be the opposite of the conventional wisdom.
It depends on what the cause is for the increased regulation. If it is higher pressure being put on the regulators by politicians, then it makes sense for companies to hire away the most capable regulators, to weaken the regulatory agency and to get inside information.
If more people being hired away causes more regulatory action, then companies are not decreasing regulatory pressure by hiring away people.
This is a pretty over-simplistic model of how all of this works. Major regulatory decisions aren’t made by one dude who then cackles all the way to the bank.
Of course businesses have a view on what regulations should be passed or how they should be enforced. In fact some regulations will favor some businesses over others. Bias in making these decisions is a serious issue – but one correctly and (largely) adequately addressed by internal controls at the regulators.
Whereas banning the revolving door is the worst possible answer to this issue, because you’re essentially declaring that all of the experts in the field have to pick sides: either they can be fabulously compensated by the market in accordance with their skillset, or they can work for the government and then be barred from ever being paid at market level again. That is just an excellent way of making sure the government can’t hire anyone competent.
either they can be fabulously compensated by the market in accordance with their skillset, or they can work for the government and then be barred from ever being paid at market level again
Is it any better to encourage a culture of “I can have a career in government where, if I manage to snag a position that lets me set policy whether directly or by the advice I give to those who make policy, I can assure myself of a comfortable second career at fabulous market rate compensation levels by walking into a soft job with a particular company in an area that I made sure, by the policies set, would be favourable to that company regardless of whether this was in the best interests of the taxpayers, the users of the services/products from that company, or the nation. And that there is a tacit quid pro quo where, when I am in government, lobbyists from that company and others in that field will woo me with inducements to be favourable to them and I can pick and choose which job I expect to be compensated with after I finish my government career”?
And indeed the culture of the other side of the revolving door, where it is “I can build up a good career in this industry, when it suits me then I can move over to the government side as a consultant or advisor or other unelected position where I use my influence and contacts to set policy favourable to my former employers, who are probably still paying me a private pension after retiring from their employ”?
As someone with some experience in this respect, banning regulators from joining the industry they regulate is a foolish and shortsighted move.
There may be good reasons for letting Jones, former Permanent Secretary to the Minister for Lightbulbs, join the BlazeOLight company after Jones leaves their position in the Civil Service.
But too often it seems to be that BlazeOLight offers inducements and cushy terms to Jones while Jones is still in the Civil Service or government; it just so happens that BlazeOLight gets that government tender; and Jones has the full expectation that the day after they retire from the public service they will walk straight into that high-paying job with BlazeOLight and use their contacts and knowledge to help the company get an unfair advantage over its competitors.
The whole idea is to be a regulator; how much regulating can you be said to do when you go easy on a company while in office, expecting the payoff to be a comfortable job with that company after leaving office?
Then we have to do the hard work of seeing if Jones was bribed or not.
Civilization isn’t easy and we can’t just come up with The Right Rules and leave it on autopilot.
how much regulating can you be said to do when you go easy on a company while in office, expecting the payoff to be a comfortable job with that company after leaving office?
Jones has a much better play than “if I be nice to these amoral greedy strangers when I have power over them, then they’ll be nice to me when I have no power over them.” It’s “I’m going to cook up the most byzantine set of rules anyone has ever seen, and firms will need to hire specialists to figure it all out, and, hey, wouldn’t I be the best specialist?”
We’re starting with the assumption that these people being regulated aren’t nice people. They aren’t going to go all aw-shucks at a nice guy. They’d laugh the nice guy out of the office.
“Why aren’t we all dead yet” is amazing headline clickbait. Can’t wait for the content farms to catch on and start giving us “10 reasons we’re not all dead yet (number 5 might kill us all!)”
From that list of surveys:
Can you make your ears roar/rumble by contracting that one muscle inside your head?
I’ve wondered if other people could do this. Now I know how many.
Your link to the story about Sumerian debate texts reminds me of some of the medieval Islamic debate texts—two of them on the relative attractions of homosexual vs heterosexual sex, and one or two debates between different foodstuffs.
Well now I’m going to need a link.
The one example of the genre I’m familiar with is the debate between coffee and qaat that Victoria Clark includes in Dancing on the Heads of Snakes, her history of Yemen:
Re the Cofnas/Macdonald argument:
I think it’s extremely important to do this kind of thing.
We live in a world where there are a bunch of ideas and questions and facts that have been largely ruled out-of-bounds by most of the respectable and powerful voices in our society. One result of that is that we’re ignorant of a lot of important facts and don’t ever think to ask a lot of important questions. But another result is that a lot of misunderstandings and dumb ideas and crackpot theories circulate along with the forbidden-but-true knowledge, because there aren’t many people interested in actually engaging with them and showing where they’re wrong.
Re: Cain.
I had been under the impression that Lamech killed him, but now looking over that passage again, it doesn’t say that. Lamech killed a man in response to being injured by him, but it doesn’t say who. Also, the man is described as young, and Cain was Lamech’s great-great-great-grandfather, so presumably the man that Lamech killed was not Cain.
I am not sure why I had thought that it was Cain that Lamech killed.
But, yes, I would suppose that Cain did not survive the flood. (By this I do not mean to say that he survived until the flood, only that he didn’t survive it.)
Nothing in the text indicates to me that Cain would be made unkillable, as the statement made is that “anyone who kills Cain will […]”, and also that it isn’t true that whoever finds him will kill him. This seems to strongly imply that he would still be possible to kill, just that people would have reason not to do so, and as a result, he needn’t expect whoever finds him to kill him.
I also don’t see anything in the text that suggests that he would be prevented from dying of other causes?
—https://en.wikipedia.org/wiki/Lamech_(descendant_of_Cain)#Song_of_the_Sword
I found the results from this question in that linked collection of questions really interesting: “If you press the button, all living humans instantly gain immortality, but lose the ability to have children. Do you press it?” Roughly two-thirds voted no. That’s a much stronger majority than I would have expected in either direction! Though having said that, I don’t know anything about the population the respondents came from.
I suspect that a decent number of people oppose immortality regardless of whether they would lose the ability to have children.
Personally, my gut reaction to that was to interpret “immortal” as “doesn’t age but can still be killed” and think, “if that happened then eventually the human race would go extinct.” That is generally what the word means in most pop culture depictions (Highlander, the overwhelming majority of vampire stories), though it’s usually packaged with being significantly harder to kill, so I expect that other people had that reaction too.
There’s no way I would ever deprive people of the ability to have babies. Not only is it a primevally satisfying thing to do at a personal level, but it’s also exceptionally good insurance for the species should the magical immortality endowment wear off or be overcome by unexpected circumstances.
Because of this, my take on the matter is that people are more pro-baby than they are pro-immortality.
But then immortality has always seemed a very unattractive prospect to me in any case. Life needs a span, a number of days, in order to mean anything.
If I pressed the button, a lot of people would be really really angry at me
And they’d now be immortal
And I’d be immortal
Which means they’d be able to torture me forever
It matters whether we’re talking technological immortality (which probably means you don’t die of old age, but probably some really rare diseases, accidents, and violence can kill you) or immortality like nothing can kill you and you can never die even if you want to, which could be pretty horrible under the wrong conditions.
Larry Correia’s zombies are like this – unable to die, even if chopped up or severely burned, and eventually regenerating/reforming.
Now imagine zombies in the context of modern industrial warfare. Or don’t, because you might have trouble sleeping afterwards.
I don’t think they’d change it much, actually. Oh, they’d be able to soak up rifle fire fairly well, but rifles in an industrialized army are more a self-defense weapon than anything. Outside of a few specialized tactics, most damage these days is done with a bewildering variety of crew-served weapons, mines, grenades, anti-tank rockets, airstrikes, tank or IFV guns, and above all artillery. Something like 80% of casualties in modern warfare come from artillery.
Modern armies are very good at blowing stuff up. And what zombies would gain in durability, they’d lose several times over in inability to take cover, suppress the people who’d otherwise be walking in mortar fire, call for counterbattery fire or other countermeasures, or even follow simple directions.
If “immortality” means “literally can’t die, ever even as the stars go cold” then I’m going nowhere near that button because horror.
If “immortality” means “won’t die of old age but can still die from things like stabbing or suicide” then pressing the button means extinction for humanity.
Regarding the German insect study: Any possibility that the advent of agriculture created a whale fall for insects and the invention of extremely successful pesticides is simply returning insect populations to more “natural” levels?
I would be interested to see if the declines were concentrated in agricultural areas. (Mind you, not interested enough to actually read the study)
I’m so surprised that leftists statistically have more money than us. Also surprised that they don’t see discriminating against us as classism. 😛
It’s worth remembering that this smart and moral trick only had to be invented in September 1943, when Northern Italy became occupied territory because the Kingdom of Italy surrendered to the Allies. The Kingdom of Italy under Fascism never shipped its Jewish citizens abroad to German camps.
Right–the racial argument made a lot less sense in Italy (heck, your average Ashkenazi looks more Italian than a German does), and I wouldn’t be too surprised if Italians were more reluctant to off their neighbors because the government says so.
On South Africa, consider me unconvinced by an unsourced reddit comment. Do you have any datapoints to back your claim?
One attempt to enact expropriation was through the Communal Land Rights Act of 2004, which was deemed unconstitutional in 2009. Note, however, that what was disallowed there was a legislative change, not a constitutional one. Here’s a high-level summary.
Changes may be struck down on constitutional grounds relating to human rights, as per this article. But note, this author agrees that this is a novel approach and results are uncertain.
The most comprehensive historical review I’ve found is worth reading. My tl;dr is that since the constitution was agreed in 1994, courts and legislation have broadly adhered to expropriation with compensation. Expropriation, though, has been largely ineffective despite efforts from day zero, with the weak performance attributed to a number of factors.
None of this is to say that the constitutional changes will be successful in a ‘we’re violently removing white farmers’ sort of way. President Ramaphosa has given reassurances that only ‘unproductive’ land will be claimed, most recently in comments on Good Friday.
Of course, Hermanus experienced land-related violence last week, so at this point it’s anyone’s guess how this ends.
Just noticed one link isn’t working and I’m outside of the edit timeframe. Use this link for the historical review.
In theory, a paper based on the notion that “Jews’ success in various fields is what we would predict from their IQs, so there’s no need to posit any conspiracy theory” would be comparable to a paper arguing that “underperforming (ethnic?) subpopulation X’s lack of success in various fields is what we would predict from their IQs, so there’s no need to posit any conspiracy theory of oppression,” assuming there is some subpopulation X about which this is factually true. In practice, however, the latter would IMHO not be welcome at all, since similar observations seem to have been used as an argument that “obviously we/you must be measuring IQ wrong, since that subpopulation’s average is low.”
Yeah, it’s clear that what’s sauce for the Jews is not sauce for the gander.
Hah, love the pun. I was gosling to respond with a follow-up pun, but it wasn’t any good at all…
Certainly it isn’t hard to find places where saying “Jews have above-average intelligence” is considered outright racist.
Sigh. I guess we just have to wait another 20 years for people to internalize “different ethnic groups can have differences in average IQ and other major traits, some of these differences likely being biological and inherited, but that doesn’t make deporting immigrants any more okay”.
It doesn’t? Because if that whole melting-pot idea, blending immigrants from everywhere into one people who have all their different ancestors’ strengths and so on, turns out not to be possible because the differences between us are inherent, I think that definitely does carry weight in the decision about how many, and which, people to invite in.
From a straightforward economic point of view, diversity of groups is an argument for letting them in, not against. People with different abilities are more likely to gain by exchange than people with similar abilities.
One problem arises if you assume strong egalitarian pressures, such that letting in people who are less productive means that someone else has to subsidize them. Other problems may arise in political interaction, where people with different values and abilities face conflicts over what they want the government to do. But in straightforward market transactions, diversity is a plus not a minus.
Provided, of course, that the people worse in one area are in fact superior in another area.
Hopefully they are using point buy and not rolling for stats.
David:
I’m not convinced that’s always true.
Suppose we have a political constraint that we’re only taking in N immigrants per year. And suppose we have way more than N people wanting to immigrate, so that we can be pretty selective in various ways.
One policy we could pursue would more-or-less randomly select people to let in, from many different countries and with many different levels of qualifications and such.
Another policy would be to carefully select people we expect to have the best chances of being successful here in the US–for example, people with advanced degrees in technical fields, people in good physical and mental health, people with no criminal behavior in their background, etc.
Which policy would you expect to yield better outcomes for the nation as a whole? My strong intuition is that we’d get better outcomes by following the second strategy–selecting people who seem likely to do well here. That will get us somewhat less diverse immigrants, but still with plenty of diversity.
In what sense would we be better off with a random selection strategy instead of a selective one? The only way I see that this would be true is if our selection criteria were really dumb. I think it’s not so hard to predict how well different people will do in the US, before they come here. And indeed, it’s not so hard to predict something about how well their kids and grandkids are likely to do. That kind of prediction will often be wrong at an individual level, but I’d expect it to be pretty good when done on a large scale.
What am I missing?
They don’t have to be superior for comparative advantage to be a thing. Since I’ve got old-school D&D on my mind from the other thread, imagine two players who’ve just rolled their stats using 3d6 down the line. Alice gets very lucky and rolls 17 Strength and 16 Dexterity; those stats are best for a fighter, so that’s what she decides to make. Bob rolls 10 Strength and 15 Dexterity. These stats are strictly worse than Alice’s, but relatively better for a thief, and Alice has already chosen her role and can’t do two things at once. So he makes a thief and both players are better off.
I think I mentioned recently, in another thread, that almost nobody understands the principle of comparative advantage.
What do you think “worse” and “better” mean here–what’s the common unit in which you are measuring John and Mike’s ability to do things? You can’t use output per hour, because an hour of John’s time doesn’t convert to an hour of Mike’s time.
Suppose John can make a widget in an hour and make a farkel in an hour. Mike is “worse” at both–two hours for a widget and six for a farkel. Mike will be happy to give two widgets in exchange for a farkel, since that saves him two hours. John will be happy to offer one farkel in exchange for two widgets, since that saves him an hour. Both gain.
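To make that arithmetic concrete, here’s a minimal sketch in Python; the hours per unit and the two-widgets-for-one-farkel trade are taken straight from the example above, and nothing else is assumed:

```python
# Hours of labor each person needs per unit, from the example above.
HOURS = {
    "John": {"widget": 1, "farkel": 1},
    "Mike": {"widget": 2, "farkel": 6},
}

# Proposed trade: Mike gives John 2 widgets, John gives Mike 1 farkel.
mike_trade = 2 * HOURS["Mike"]["widget"]   # Mike spends 4 hours making 2 widgets...
mike_alone = 1 * HOURS["Mike"]["farkel"]   # ...vs. 6 hours making the farkel himself

john_trade = 1 * HOURS["John"]["farkel"]   # John spends 1 hour making the farkel...
john_alone = 2 * HOURS["John"]["widget"]   # ...vs. 2 hours making 2 widgets himself

print(f"Mike saves {mike_alone - mike_trade} hours per trade")  # 2 hours
print(f"John saves {john_alone - john_trade} hour per trade")   # 1 hour
```

Both parties save hours even though Mike is slower at everything, which is all the principle of comparative advantage claims.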
I do not see how comparative advantage plays any part in immigration decisions. DF makes the point that diversity is good in itself, and I guess that makes sense to the extent diversity gives us useful skills in both A and B. But for this to work, you still want to get those with the highest skills in either A or B. Comparative advantage only fits in here to the extent that you already have a citizen X with low skills in both A and B, then you might want X doing B tasks instead of person Y with higher B skills, just because Y has even higher A skills. But it doesn’t make sense to allow the X into the country in the first place if both his skills are low.
As long as we are talking about market interaction rather than redistribution, what matters is not how high his skills are but whether the ratio of his different skills is different from the ratio for those already here. In my example, Mike has lower skill than John in both areas, measured by how many hours it takes him to make something. Nonetheless John and Mike both benefit by the exchange.
I cannot tell if you simply don’t understand the point or are assuming that someone with low skills, hence low income, will have to be subsidized via welfare or perhaps some other assumption I am missing.
So comparative advantage means that Einstein and the village idiot can engage in mutually useful trade, even when Einstein can do everything better than the village idiot. But that just means there will be some benefits to even letting a village idiot move into town (assuming away welfare programs, anyway).
But it seems inevitable that there are even more benefits to letting another Einstein move into town. (Perhaps not if we mean literally a clone of Einstein, but almost certainly if we mean a very intelligent person who’s accomplished a lot of demanding things in a hard field.) So it intuitively seems like an immigration policy that’s more selective ought to be a net benefit, assuming we keep the same number of immigrants coming in per year but just change which immigrants we let in.
That’s not clear, for the reason I tried to sketch. If the abler immigrants we let in have about the same mix of skills as lots of people already here, we don’t gain much from them. If the less able immigrants have a very different mix of skills we do.
It also matters whether the skills are producing something easily imported, because if they do we may be able to, in effect, import the skills without the people. That’s an argument in favor of letting in people willing to do things like picking crops or mowing lawns over people who, say, write good novels.
All of this, as I said before, is in the context of voluntary exchange on the market. Letting in very productive people does have the advantage that you can collect more income tax from them than from the less productive.
Yes I think Albatross is right. The point is that highly capable people have more positive externalities. I think your point, DF, is that bringing in less capable people is just as good because the price system will pay them appropriately so it will work out as well (pricing is how comparative advantage works too, but I still don’t think it’s the same thing as this).
Overall, I think a country will be a better place to live when the residents have more skills than when they have fewer skills, even if we pay each person the amount they deserve based on their skills. So it does make sense to prefer higher ability people for immigration.
Actually that isn’t really my position on immigration, but that’s because I feel it is not a good thing to beggar the rest of the world just because we can. I think the US should let in lower-ability people to the extent the US workplace has capacity for them. I think it would probably be better for the US to just let in those with college degrees, but I think it is good for the country to help the rest of the world by letting in lower-level workers also. Kind of a foreign aid thing.
So what?
Yes, these two people can better themselves by trade.
But you said that they each have more to gain than would two people with similar skillsets.
If you have two Johns, one making widgets and one making farkels, you [edit:each individual] end up with more of both than if you have a John and a Mike.
Hence, two people with similar but better (more relevant, or higher output, or whatever) skills will be better than a diverse set.
Now, you can say that perhaps it is cheaper to produce a John and a Mike than two Johns… but that analogy is going to have to be made explicit.
On the numbers I gave, you don’t. You end up with the same amount.
You are adding an additional assumption–economies of scale or something similar. That works if you imagine one John and one Mike in the whole economy, and each can do better by specializing. It’s much less plausible if you start with a million Johns and then add another million Johns instead of a million Mikes, since with a million Johns they are already taking advantage of specialization, economies of scale and the like.
With two Johns, after 8 hours, you have 16 products, split equally among the two varieties.
With a John and a Mike, Mike doing what he is best at and John doing the other, after 8 hours you have 12 products.
What are you saying happens to make the latter better than the former? Ah, I see the disconnect. In the situation with two Johns, there is no economic need for trade, presuming the widgets and farkels don’t require machinery, investment, etc. Those are the assumptions I was making, I guess.
Okay, so if you have a diverse economy, you have more need for trade (contingent on those assumptions) and trade will make both parties better off than if you had those parties and no trade occurred. But it still doesn’t bring them up to the level of a non-diverse but higher skilled workforce. And creating a metric for “amount of improvement given by trade” rather than using “total productivity” seems strange.
But my bad for quoting and taking issue with that technically accurate statement! I should have argued against this: “People with different abilities are more likely to gain by exchange than people with similar abilities.”
That’s a statement your scenario does not prove.
In your simplified scenario with all else being equal and the economy caring about only widgets and farkels, there is never a reason to prefer a Mike over a John.
Maybe these are poor assumptions, but they are the thought experiment you gave. It sure seems like you are the one with unstated assumptions, or I cannot do elementary math. (I’d say 70% odds the former, 45% the latter)
And if Alice had been able to specify the rolls for her thief partner ahead of time, and been given the choice between a 17/16 thief and a 10/15 thief, she would definitely have chosen the 17/16; inasmuch as the stats matter at all, the party with less diversity will have a better chance. (Of course, as in David’s example, with identical skills they don’t benefit each other as much as in the scenario where one has a glaring weakness. But in real life we care more about productivity than niche protection.)
Sure there is. Letting in a Mike makes the Johns already here better off. Letting in a John doesn’t.
The economy doesn’t care–people care. Letting in a Mike may not increase the GNP as much as letting in a John, but it increases the welfare of the people already here. It also increases the welfare of the Mike.
Your argument only makes sense in the context of non-market transactions such as taxation–we might prefer a John because he will have a higher income so we can collect more from him.
Letting in poor people to a rich country lowers the average income, but it can do so even while making every single human being affected better off. You are committing a fallacy of composition.
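A small sketch of this claim, using the same numbers and the posited 2-widgets-per-farkel ratio, and measuring what arguably matters to each person: the hours of his own labor it takes to end up with one widget plus one farkel.

```python
# Hours of labor per unit, same numbers as the running example.
JOHN = {"widget": 1.0, "farkel": 1.0}
MIKE = {"widget": 2.0, "farkel": 6.0}

WIDGETS_PER_FARKEL = 2.0  # the posited trade ratio

def cost_alone(p):
    """Hours to make one widget + one farkel with no trade."""
    return p["widget"] + p["farkel"]

# John specializes in farkels; buying a widget costs him half a
# farkel, i.e. half an hour of his own labor.
john_trading = JOHN["farkel"] + JOHN["farkel"] / WIDGETS_PER_FARKEL

# Mike specializes in widgets; buying a farkel costs him two
# widgets, i.e. four hours of his own labor.
mike_trading = MIKE["widget"] + MIKE["widget"] * WIDGETS_PER_FARKEL

print(cost_alone(JOHN), john_trading)  # 2.0 hours alone, 1.5 trading with Mike
print(cost_alone(MIKE), mike_trading)  # 8.0 hours alone, 6.0 trading with John

# A second John changes nothing for the first John: between identical
# producers, no trade ratio saves either side any hours, so John's
# cost stays at 2.0 hours per bundle.
```

Total output is indeed lower with a Mike around than with a second John, but the John already here consumes more per hour worked, which is the fallacy-of-composition point.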
Explain to me what is going on then? Is the issue that one John has no incentive to trade with another? The thing that Mike has that has value to John is his neediness, basically? Socialist critiques of capitalism are starting to make sense when you posit it as a good for the elite to have the workers low skilled and dependent.
The value in the economy is the things people create. If more things are created, then there are more things to go around. You are arguing that the same number of people are better off when there is less to go around.
Also, are we assuming that John and Mike can’t trade widgets for farkels unless Mike is a resident?
Let’s see if I can figure this out. Along with Mike (2 hours/widget and 6 hours/farkel), let’s consider Michel, who stays in his homeland where, we assume, the infrastructure is inferior to here, so that he needs 3 hours/widget and 9 hours/farkel. He would also be happy to trade John two widgets for a farkel.
Is there an economic argument for converting Michel to Mike? In particular, is there any advantage to John?
@Doctor Mist:
For the sake of argument we are assuming that, perhaps because one of the products is expensive to ship or is location specific (or perhaps there are taxes/quotas limiting trade across borders).
Often this argument is put in terms of services rather than goods. If Mike’s comparative advantage is providing lawn care or growing oranges or building houses, those aren’t services that can easily be imported from elsewhere so having Mike do them here frees up John to do more of whatever his local comparative advantage is – say, lawyering. Or computer programming.
@Randy:
No, the thing that Mike has that is of value to John is his difference, the fact that Mike has different skills, interests and abilities than John, such that whatever thing Mike is relatively better at doing (compared to all other things Mike can do) is different from the thing John is relatively better at doing (compared to all other things John can do).
This is an actual example of “the value of diversity”. 🙂 It doesn’t matter that Mike is worse at everything so long as he’s worse at different things in different proportions. [If Mike were precisely half as good at everything John does that’d be a different story.]
If Mike’s weakness was not as pronounced, he might be exactly half as good as John–and then worthless (to John). So John needs Mike to be needy.
I mean, that’s not quite the indictment that I make it out to be. The reason the farmer needs me is my hunger.
But it’s very disconcerting to propose a world where everyone is better off by making one person strictly worse. (Next, let’s decide that the broken window fallacy is actually sound policy.) It’s counterintuitive to the point of making me annoyingly obstinate, I think.
@Randy M:
I can’t tell what you mean by “needy” in this context. The fact that the principle of comparative advantage works even if Mike is worse in every specific way than John doesn’t mean it works because Mike is worse. It would also work if Mike were better in every way. It would also work if Mike were better in some ways and worse in other ways.
A society of Johns benefits from adding a Mike who is different from John. That’s it. Different. John benefits more from trading with people who are different from him (in their balance of skills) than from trading with people who are exactly like John in their balance of skills.
Trading with a John (or a doubled-John or a halved-John) doesn’t produce a clearly profitable division of labor where each person does the thing they are relatively better at to the degree that trading with a Mike does.
Oppression can certainly decrease IQs, such as by causing the oppressed people to experience lead poisoning and poor prenatal nutrition, while it is unlikely that conspiracies can increase IQs.
Yep. In fact, this is the whole actual argument[1] w.r.t. the source of the black/white IQ gap. We know blacks and whites differ substantially in genetic makeup, so a genetic cause is plausible. We also know blacks and whites differ substantially in the situation in which they grow up and the way they’re treated by everyone, so an environmental cause is plausible. There are various ways we can try to tease these apart, but they’re messy and subject to going wrong in subtle ways.
[1] Not the one in the prestige media, but the one among people worth listening to on the issue.
Seems unlikely that all the other factors just happen to occur globally by coincidence.
Is “coincidence” really the best non-genetic explanation you can think of, though?
Not the most Orwellian ad campaign out of the UK this season.
Their “If you suspect it, report it” campaign was also hilariously Orwellian
Sadly, it has been satirized so heavily that, combined with how Google image search was nerfed lately, it’s impossible to find proper pictures of the real billboards.
>how Google image search was nerfed lately,
Offtopic, but what’s the deal with that? What’s the extent of the nerf and why was it done?
I wonder if they are deliberately referencing Orwell to shock people into remembering the ads? “No such thing as bad publicity” and all that…
The UK is an Orwellian dystopia at this point, so at least the ads are honest.
Authentic, not honest.
The Business Insider article seems silly to me. Yes, some customers are better than others, but businesses try not to have to choose between customers. They gave discounts to NRA members to court them, but that didn’t cost them liberals until recently. It’s not about how big or how rich is the demographic, it’s about how much they care. I think Taleb generalizes way too much in that article, and I’ve been annoyed every time anyone has ever mentioned it to me, but it’s the perfect rejoinder to the BI one.
Also, it’s odd that BI says that businesses don’t care about old people. They don’t advertise to old people but that is, they claim, because old people are set in their ways. Maybe they’re set in their brands, but are they incapable of becoming angry and boycotting? I don’t know, but it is the relevant question.
Of course businesses advertise to old people. Mostly Viagra, adult diapers, denture cream, and life insurance, but that counts.
Pharmaceutical ads seem to me to be extensively targeted at old people.
From the very rare times I’ve watched cable TV (or its internet equivalent?) recently, it seems like pharma ads are targeted for a) old people or b) women with dry skin* (I think I’ve seen ads for multiple different drugs addressing the latter!). Taking the ads at their word (always dubious) I’m surprised that this is a widespread problem or that it affects women particularly.
However, I can’t disentangle this from small sample size or targeting the shows’ expected audiences.
* Well the symptoms they mentioned sounded kind of like eczema but I don’t remember if that’s what they were actually for.
Presumably women just care more about the dry skin than men?
Let me clarify that a bit. I think that BI’s model of what’s going on is incorrect. I think that Taleb is right that the amount people care is important, and I think that’s obvious in just about every example. But I don’t think that it often leads to the result that he emphasizes, although sometimes it does.
But if they’re just responding to Douthat to say that the companies are targeting their customers, I agree.
Watch Jeopardy! and Wheel of Fortune and you’ll see lots of ads targeted to old people.
Perhaps it’s more that young(er) people are more likely to rabble-rouse campaigns on social media when they are displeased, and so it’s worth more to a business to avoid being on the receiving end of a screeching condemnatory Twitter boycott and accompanying media publicity by pandering to the perceived interests of the woke.
I’m not sure that social media campaigns equate to public opinion, though. This may be a miscalculation left over from when public campaigns took more effort than hashtags and so signaled wider agreement. Reacting to a crisis required a plan. Chick-fil-A did nothing and business boomed. The beach-body campaign attacked by certain feminists wasn’t harmed.
I don’t think these businesses are executing sound business strategy. People want to know their business has morals. Now they’re stuck in the morality business instead of the business business.
Does Italy have more major political parties than is typical for a European country (or any country, really)? It sure seems like it.
More that gain a substantial amount of votes maybe?
The Netherlands, France, Germany and Belgium all have 7-12 parties that get some substantial amount of representation in parliament. the Brits are probably the exception.
Belgium is not directly comparable to other countries because the French-speaking community and the Dutch-speaking community have basically two separate sets of political parties, though.
More than most other countries, I think; Israel, possibly Iraq, and probably others are comparable, though.
So what’s the kabbalistic significance of your country’s name starting with an I? Does it have to do with iota being the smallest letter of the Greek alphabet, thus ensuring that every party with an iota of support will have representation in Parliament? It’s gotta be something.
Well in English the I shares the sound with the eye, the part of the body that sees, and the “I” is a term for oneself, so one would expect people from these places to be unusually introspective.
I also looks like a numeral one (and as a word refers to a solitary individual) so we should expect a link to some primary concept; Italy being the land of the foremost empire, perhaps (not the first, but the largest pre-industrial); Iraq is the land of mesopotamia, home of the first empire; and Israel, the home of the Jews, who brought forth monotheism and worshiped the one God.
Likewise, see Islam. We should not be surprised that Trinitarian Christianity starts with the third letter of the alphabet, too.
Was it larger in area than the Persian Empire at its earlier maximum?
The Achaemenid Empire topped out at around 5,500,000 square km according to Wikipedia; the Roman Empire at 5,000,000. So it’s pretty close, but the Persians have it. However, the Mongol Empire was about 24 million, the Abbasid and Umayyad Caliphates both around 11, the Spanish Empire almost 14, the Macedonian Empire about 5.2 (though very short-lived) and Qing-dynasty China about 15 (though it extended into industrial times, and I’m not sure how much of its maximum extent it gained after the Industrial Revolution).
For comparison, the modern United States is just under 10 million km^2.
The Achaemenids had more land area, but Rome had more population. Also Rome lasted longer (by a LOT).
How about Ireland, Indonesia, and Iceland? And maybe the state of Indiana?
It’s not that unusual for countries with proportional representation or another form friendly to small parties. See the Netherlands, Belgium, France, etc.
It makes sense given how they have perhaps the strongest regional identities. So consider a two-dimensional matrix: one axis for whether a party is conservative, liberal, or socialist; the other for whether it represents Lombard, Venetian, Neapolitan, etc. interests/identity.
Not quite. The only sizeable party pandering to a particular part of Italy is the Northern League, and even it (after rebranding itself without the “North” in its name) received a non-negligible share of votes from central and southern Italy. There are large differences in which parties got more votes in which regions, but that’s due to socioeconomic differences — describing them your way is no more enlightening than describing the US Democratic Party as “representing New England and Californian interests/identity” or the US Republican Party as “representing Heartland interests/identity”. (It’s not like the US has a coastal conservative party and a heartland progressive party as well.) Not comparable to Belgium, where the Walloons and the Flemings have two entirely separate sets of parties.
The matrix is at least two-dimensional, but the second axis is pro-establishment vs populism.
Yeah, GP’s comment is completely off-base.
No, France has a lot of political parties too, at least five of which had a decent chance of winning the recent presidential election. This is because most of our elections (presidential, assembly, etc.) are some variant of two-round voting, which, while not ideal, still reduces the need to form big coalitions.
Before the Five Stars Movement was founded in 2009, center-left and center-right were getting most of the votes as basically two major parties. Also, the electoral law changed last year towards a more proportional system.
The Northern League abandoned the focus on the North, thus moving towards being a truly national party (but still got most of the votes in North, whereas the Five Stars Movement got a lot of votes in the South).
But that wasn’t the case up through the 1990s in Italy. Several big parties – Christian Democrats, Socialists, Communists, and various others. The CD were almost always the ones able to form a government after an election, but the coalitions never lasted long.
The Italian center-right and center-left you are talking about were coalitions, not parties.
I don’t know that it has more parties as such, but it seems to have more instability. Mostly it looks like the same major players do all the power-broking, but they join, leave, split from, and found their own parties and alliances as their interests of the moment dictate, so perhaps it seems as if there are more new parties than there really are.
The South Africa link has a comment describing what seems like an interesting ratchet effect—it seems to mean that the substantial rights granted by the constitution can only ever increase. It makes sense from a “constraining the tyranny of the majority” point of view, but it still seems like a pretty radical setup.
Yeah, if your amendments to the Constitution are constitutional, seems like you’re doing them wrong.
It isn’t terribly uncommon internationally to have a constitution that’s generally amendable, but with some smaller subset of entrenched provisions that can’t be amended or overridden by later amendments. Don’t know if that’s what’s going on here, but it sounds plausible.
The US constitution in fact has two such provisions, one of which is expired. Since it’s not self-referential, the remaining one is probably not effective; I see no legal bar to an amendment stating
I. The words “and that no state, without its consent, shall be deprived of its equal suffrage in the Senate.” are stricken from Article V.
II. Fuck Delaware, it gets no Senate representation.
Even if this wouldn’t fly, two amendments ought to do it: the first striking the restriction and the second screwing Delaware.
That isn’t obvious to me at all. It would contravene the plain language of the amendment provision being utilized to promulgate the amendment, and it would also usurp rights that the States plainly reserved in entering the Constitution in the first place.
South Africa does indeed seem to have such a system, per Wikipedia.
The current proposal is to amend section 25 in the bill of rights, so it would fall in the intermediate level and require approval by the National Council of Provinces in addition to two-thirds of the National Assembly.
The news articles I found don’t mention that the constitutional court would have to approve the amendment, so in that respect maybe the reddit commenter is confused?
So I followed the Twitter link about the DC graduation rate, and @notwokielink’s … erm … “colorful” Twitter profile piqued my interest, so I started scrolling down to try to get a sense of who they are, and the next thing I know I find this Rolf Degen tweet (via Noah Smith):
You don’t say …
ETA: The whole Noah Smith thread in which this appeared is worth reading.
Businessman Andrew Yang will run in the 2020 presidential election on a platform of universal basic income. …Also supports banning federal regulators from moving to jobs in the fields they regulate, and “turning April 15 into a national holiday”.
I agree with banning the revolving door, but this does remind me of the Blackadder episode “Dish and Dishonesty,” in which the “Standing at the Back Dressed Stupidly and Looking Stupid Party” candidate, Ivor ‘Jest-ye-not-madam’ Biggun, has one potentially popular proposal with widespread appeal and two that maybe wouldn’t be so good.
I’m not sure where I stand on this, but one thing I get concerned about goes something like: Banning the revolving door makes the job of being a regulator much much worse. You spend years developing a particular skill set that you’re not allowed to use unless you continue government work. This sucks. Imagine if working as a coder for Google meant you could never work as a coder anywhere else. Google would have a lot more trouble getting coders.
I worry this would lead to a drastic reduction in the competence level of regulators over time. Do you have any thoughts on this concern?
+1
Ban government employees from working in their field in industry for five years, and you’ll get nobody willing to take a government job except people for whom the government job is their dream job, and people with no other choices. The end result of that is that you can’t get competent people working for the government.
There is an argument that Silicon Valley is a thing, despite California in general and the Bay Area in particular being hostile to business in many ways, because non-compete agreements are unenforceable in California. Signing an enforceable non-compete is not quite the same thing as saying you can’t work as a coder for N years after leaving the company, but it’s definitely within spitting distance.
I’m not sure how seriously I take this argument, though; California companies were definitely including non-competes (enforceable or not) as late as ten years ago.
There are ways around that specific problem. For example, you could specify that a regulator of, say, medicines, wouldn’t be able to work for a medical entity he regulates in that jurisdiction. I know you’re going to raise your example up to a federal level from a state one, but in Europe, where we can all live and work in each other’s countries, this is not as impossible a solution as you’d think it is.
Another possibility would be to have very specialist people – so someone who regulated use of vaccines could be allowed to find a job working on, say, research into airborne diseases. In a way, someone who specialises that much would almost welcome the change and be a better-qualified person at the end of it.
My only other suggestion: regulatory agencies should be staffed with people who are close to retirement anyway. Hire people who are 50+ and whose career prospects would not be a factor. But this only makes sense in places like Europe, where your pension is what the government gives you. In the US, with private pension schemes, why would you want to give up a good pension scheme from your company to join the government, where (I’m guessing) the conditions are not as good?
Boy, the comments at that Quillette article really show that a certain class of people is very, very unwilling to let go of the Bernie loves Venezuela meme. Clearly the name “Bernie Sanders” hits a berserk button of a certain kind in many of the people who read Quillette.
I’m disappointed no attention has been paid to the SCP theory of Cain and Abel.
Thanks for the link! I enjoyed playing around on that site!
When I first moved to California, I noticed that some Starbucksen had prop 65 warnings, and I was quite surprised. “Why do they sell me coffee that causes cancer?”. It turns out that the answer is, like the answer to many things in California, “sigh fucking California”
So, the way that the law is written, if there exists any evidence, under any context or circumstances whatsoever, that a given substance can cause cancer, it must carry a warning. This is really stupid because, among other things, it makes no allowance for dosage. If a study found that giving an adult rat 27 grams of Chemical X caused it to develop cancer, and the typical cup of Starbucks coffee contains 12 micrograms of Chemical X, they still have to put up the warning.
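To put those made-up numbers in perspective: 27 grams is 27,000,000 micrograms, so the rat dose works out to 27,000,000 ÷ 12 = 2,250,000 cups’ worth of Chemical X, and the law treats both quantities identically.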
Now, it turns out that there is one such chemical: Acrylamide. Some studies have at some point in the past shown that there are certain circumstances in which this chemical can cause cancer.
Where does acrylamide come from? Heating various organic matter can generate trace amounts of this substance. Organic matter such as coffee beans, which are roasted.
As a result, Starbucks coffee (can? May? It’s not clear to me if this is a known thing vs a theoretical could-happen thing) contain(s) trace amounts of a chemical which, in much much much larger quantities, may be carcinogenic.
And because the law makes no allowance for dosage or risk factor, the coffee must display the warning.
Incidentally, various similar stories are why everything causes cancer in California.
Which, in the end, is useless because everyone ignores the warnings. Can you imagine a paranoid Californian trying to get a job and cutting from consideration everywhere that has a prop 65 warning?
Not everyone ignores them! My family went on a trip to California when I was relatively young, and I was shocked that we were so poor, we were staying at a hotel that was going to give us cancer.
The implication was “everyone who lives in California”. (This is true in my experience.) But I agree the Prop 65 signs aren’t helping the tourist industry.
Yeah, at this point I’ve spent enough time in California to become inoculated against the Prop 65 signs.
Certainly anyone who knows about Prop 65 doesn’t take them seriously; I just wanted to share a time when I was naive enough to be fooled.
But you stayed there.
He didn’t have a choice, and his parents were probably less naive about the purpose and value of the sign. Which, to be clear, is found on pretty much every hotel in California.
Welcome to a Hotel California
Such a cancerous place, such a cancerous face…
Yeah, I got that he was a kid; the moral is, as you said, adults know to disregard the warnings, so it’s of limited interest that a kid found it alarming.
I wasn’t accusing him of hypocrisy, just confirming the lack of utility in the warnings.
@Randy M
Probably worse than useless, because people will then likely also ignore warnings that they should heed.
This is why I was strongly opposed to the GMO-labelling ballot prop from a few years ago.
I would be pretty strongly in favour of informational labelling about GMOs telling me who owns the genome, what the GMO variant is, maybe an ID number I can easily look up, etc.
But the law as it was voted on didn’t have any of that. As proposed, it was a simple label: if thing contains any GMO material, label it GMO.
I’m glad it failed. If it had passed, within a few years it would be no different than prop 65 warning labels: ignored by virtually everyone in California, mocked by virtually everyone outside of California.
That’s like the best argument I’ve heard for voting *for* the GMO proposition.
Dihydrogen monoxide has been found in the tumors of cancer patients.
Best put a warning for it….
Hey, water intoxication is a real thing; you don’t even need the spurious correlation to justify putting a warning label on all bottled water.
I don’t know if I’m causing unnecessary drama by comparing this to the Canadian bill that Jordan Peterson is concerned about; back in 1986 this probably seemed like a good and necessary requirement and I’m sure if anyone had said “This will mean restaurants can be sued for not warning that their coffee could be carcinogenic”, it would have been regarded as not alone scare-mongering but crazy talk.
And now we’ve got “Starbucks has to warn you about their terrible beverages, not because they’re sugar-laden monstrosities but because they might give you cancer if you drip-fed their coffee into yourself 24/7 for years”. Though if everyone is ignoring the Prop 65 signs, maybe similarly over-reaching laws will also end up being ignored, because this kind of hysterical over-enforcement teaches people not to take them seriously.
So maybe Professor Peterson has little to worry about, after all?
Naloxone*. Naloxone is the opioid overdose antidote while naltrexone is mainly to reduce cravings.
Thanks, I am a moron.
You are a beacon of light.
On a related note, isn’t it kinda remarkable that naloxone in opioid naive folks seems to have no detectable effect? (citation needed)
Let’s see now:
1. The paper is not peer-reviewed; it’s just an early working paper.
2. Methodological flaws: e.g., naloxone might increase emergency room visits because people who use it are advised to go to emergency rooms. That’s literally what it says on the instructions for Narcan.
3. The paper didn’t even measure naloxone use. They just used naloxone availability laws as a proxy for naloxone use. That’s a weak proxy, as there are other factors influencing access.
4. The preponderance of evidence in favour of naloxone reducing deaths directly, set against a single shaky paper about naloxone increasing deaths indirectly. We all know how to weigh that evidence up.
So it’s a poor paper that can be interpreted to blame poor people for their behaviour. Of course it’s going to get attention. That doesn’t mean it has credibility.
It has been suggested that if all cars were fitted with a sharp spike jutting up from the centre of the steering wheel, people would drive much more carefully.
Whether this is true or not, I suspect it would work better than making drugs more dangerous.
I believe that was originally a suggestion by Gordon Tullock. As I remember, he offered a high tech alternative. Use the same sensors used to set off air bags, but wire them to a hand grenade instead.
Does promising people $1k/month count as buying votes?
Not any more than promising to enact any other policy which would be economically beneficial to some subset of the population.
The question is whether you promise people that the ones who vote for you will get the money, which is vote buying, or promise that if you get elected various people will get money whether or not they voted for you, which is politics as usual.
Business Insider: companies are publicly liberal on social issues mostly because liberals are a more valuable consumer demographic.
Pasta is not actually gay or anti-gay, it’s just freakin’ pasta, and companies only jump on bandwagons to flog more of their goods, especially when the idea of the pink pound was in the ascendant (educated, professional, well-off middle/upper-middle class, no kids, aspirational lifestyles, responded well to targeting for expensive fol-de-rols)?
I am shocked, shocked, I tell you! 😀
I’m a teeny bit disappointed with that Darian calendar because come on now:
So – Sunday, Monday, Tuesday, Wednesday, Thursday, Friday and Saturday? I mean, it would be cool if people called it “Marsday” instead of “Tuesday” and “Jovesday” not “Thursday”, but I don’t think that’s going to happen if Sunday, Monday and Saturday are still in the calendar.
FWIW, in French, Tuesday is “mardi” and Thursday is “jeudi”. I wouldn’t be surprised if Thursday’s origin is “Thor’s day”, and there may be a link between Thor and Jupiter. I still don’t know where Tuesday comes from.
If I’m remembering correctly, Tuesday is supposed to come from Tiw/Tyr (like Thor and Woden/Odin for the other weekdays), a sort of war god in Norse mythology (it’s not entirely clear what his area was, or if he was considered a major god). Hence the identification with Mars.
English names of the days of the week are a weird mix of Church Latin, Germanic/Nordic/whatyoucall’em and Classical references.
Ok thanks.
I think English as a whole is a weird mix of Latin/French/Germanic/Nordic.
Two kinds of German mixed with French (as spoken by Germanics who settled in France). With some loanwords from various other countries. I’m told parts of the grammar are still recognizably Celtic, though.
English indeed is a mix of, if you will, German + Norse + French. I’m not aware of a single notable Celtic feature in the grammar of English, whoever said this must have been confused about something.
Oh, it’s something my wife told me once, I forget the substance of it, something about a weird construction not found in other forms of German but common in Celtic tongues. I’m not a linguist, so cum grano.
https://en.m.wikipedia.org/wiki/Brittonicisms_in_English
Tuesday is from (Old English) Tīw, but we wouldn’t know him by that name — the Germanic gods are generally known by Anglicizations of their Old Norse names, because they’re best attested in Norse literature. His Old Norse name was Týr, and he’s generally identified with Mars, but he wasn’t a major god.
(Friday is from what, Frigg? Freyja? Apparently Frigg. Who the hell is Frigg?)
Wikipedia, at least, thinks Thor was identified variably with Hercules or Jupiter. Germanic paganism was much more distant from Greek or Roman paganism than either of those two were to the other, so the interpretations don’t always make much sense. Maybe they did at the time. The distance between the Germanic tribes the Romans primarily dealt with and the Germanic tribes from whom we have the best attestations of paganism doesn’t help, either — how much do you know about the Roman-era Frisians?
(Despite what Wiktionary will tell you, Tyr (probably) isn’t etymologically connected with Zeus — the PGmc form *Tīwaz is from PIE *deywos, which is probably unrelated to *dyews > Zeus. *deywos AFAIK isn’t attested in Greek (the expected form, I think, would be *deios), but it gives Latin deus, Sanskrit deva, and Avestan daēuua. The name of Zeus, on the other hand, is from PIE *dyews, which gives Greek Zeus, Latin Jupiter, and Sanskrit Dyauṣpitṛ — these last two with compounding with *ph2ter, so “sky-father”. The schwebeablaut explanation (*deywos derived from a zero-grade of *dyews with the vowel in the wrong place) doesn’t really hold water, since this would require a phonotactically invalid root **deyw-, and *-wos is a known derivational suffix, so we’re probably dealing with an entirely separate stem *dey-wos.)
He wasn’t a major god in late Norse culture, but placenames, among other lines of evidence, suggest he was much more important in an earlier version of the religion. Which we have next to no information on, unfortunately. This is typical of one of the major obstacles to studying Old Germanic religion, which is that practically everything we know about it comes from a time well after it had come into contact with Christianity (and to some extent Islam) and assimilated and syncretized a fair amount. The Balder story for example may have been a Germanicized version of the story of Christ. Loki looks less overtly malicious the further back you go.
Odin’s wife. She doesn’t show up in stories much, but may have been identified with Freya in an earlier version of the religion — the two have a lot in common.
@Nornagest:
Old English has the more conservative name Tues for the deity called Tyr in Old Norse… close to Zeus. One of the neat things about Indo-European philology is that we can rationally speculate that “Tyr” was probably the high god before Odin became popular, and that he didn’t combine Zeus’s thunderer role with high god role: we can detect in the Eddas a pre-Thor god of the atmosphere named Fjörgynn, forming cognates to the Vedic Dyeus Pitar and Parjanya.
This is nonsense. The s in Zeus is part of the name. The s in Tuesday, Wednesday, and Thursday is a possessive marker.
It’s worth mentioning here that while the father element is not technically compounded into the name of Zeus, he is still often addressed as Zeu Pater, “father Zeus”.
Why should a vrddhi-derivative stem be required to be phonotactically acceptable? It doesn’t have to comply with the usual restrictions on verb roots because it isn’t one, and since the thematic vowel is inserted after the stem as part of the derivation, in practice the resulting word is never unpronounceable. Also, *-wos is a suffix, but the **-wih2 you would need to get e.g. Sanskrit devī is unattested afaik, whereas *-ih2 is well-attested.
He certainly wasn’t a top-tier god, but judging by place names and the attention he gets in the Prose Edda he wasn’t so far down the totem pole.
That could be a later formation, although *-ih2 is generally archaic. Does *deywih2 show up outside Sanskrit?
The suffix in Sanskrit devī́ definitely goes back to *-ih₂, not *-wih₂; the root was originally *dyew-, and the *deyw- form is due to Schwebeablaut (a phenomenon often seen in roots of the form *CeRC-, where secondary e-grades *CReC- have been back-formed from the zero-grade *CṚC-). That the original root was *dyew- can be seen in descendants such as Sanskrit dyu- ‘day’, with nom. sg. vṛddhi form dyāus and dat. sg. guṇa form dyave.
However, there WAS a reinterpretation in many descendants of the *w of *deyw- as a suffix, producing a root *dey-, which then got a new suffix *-n-. Descendants of *deynos/*dinos are found in Indo-Iranian, Germanic, Balto-Slavic and Italic.
(Edit: apparently the Schwebeablaut explanation may not hold, per Azhdahak. On the other hand, we know that a pre-form *deywos must prefigure the Sanskrit, Latin, etc. words, and just because a root was phonotactically invalid in, say, 4000 BC doesn’t mean you can’t get a reanalysis fifteen hundred years later that produces it.)
mardi – Mars’s day (Latin: dies Martis)
jeudi – Jupiter’s day (Latin: dies Iovis)
Monday – Moon’s day (Hellenic)
Tuesday – Tiw’s (Tyr’s) day (Germanic)
Wednesday – Woden’s (Odin’s) day (Germanic)
Thursday – Thor’s day (Germanic)
Friday – Frigg’s (Freia’s) day (Germanic)
Saturday – Saturn’s day (Roman/Hellenic)
Sunday – Sun’s day (Hellenic)
Why have you marked “Monday” and “Sunday” as Hellenic? You might just squeak by on Monday — there is an Ancient Greek word for the moon, mene, cognate with the English “moon” — but the Greek word for the sun is helios. Sunday is surely from the Germanic word for the Sun, and Monday is almost as surely equally Germanic.
That’s nothing. The names of the days of the week in (traditional) Chinese and related languages are “Sun Day”, “Moon Day”, “Mars Day”, “Mercury Day”, “Jupiter Day”, “Venus Day”, and “Saturn Day”. When I first encountered this (in Korean) I assumed that it was a relatively recent borrowing…but it’s not.
(Apparently modern Chinese call them “day one, two, three” etc but the Koreans and Japanese definitely still use the traditional names.)
More about Chinese:
The seven weekdays are now called “week one”, “week two”, “week three”, “week four”, “week five”, “week six”, and “week day”. “Day” can be the normal 天 or, in writing, the fancier 日. There are three common ways to provide the “week” part of the day name; the official way is to use xingqi 星期 (“week”), the vernacular libai 礼拜 (originally “worship”, but it can now also be used as a noun meaning “week”) is common, and it is also possible to use zhou 周 (“cycle”). The first two are nouns, but 周 is a measure word.
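To make that concrete, applying those rules: Monday is 星期一, 礼拜一, or 周一, and Sunday is 星期天, 礼拜天, or 周日 (with the fancier 星期日 and the like in writing).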
My understanding of the history is that the terms 礼拜一 etc. counting up to 礼拜天 “worship day” were provided by the Christians and replaced the traditional names. 星期 is a more recent term intended to preserve the 1, 2, …, day system that everyone was familiar with while not being overtly Christian.
However, while the 7-day week concept is documented early in China, it is definitely not the traditional week. The Chinese calendar is divided into months of thirty days consisting of 3 ten-day xun 旬 each; the 旬 unit was in common use (it survives today in the still-common terms 上旬, 中旬, and 下旬 to refer to the beginning, middle, and end of a month) and would presumably have been more familiar to the premodern Chinese in practice than the 7-day system was.
Huh, the sun character is fancy for day in Chinese? It’s normal for day in Japanese.
Yes, Chinese is written on a syllabic principle where characters have pronunciations. The characters also contain a massive amount of semantic information, but there is no concept of unrelated words like nǚ 女 “female” / niáng 娘 “girl” / nī 妮 “girl” being written the same way merely because they have similar meanings. The Japanese somehow got this badly confused when they tried to adopt the system.
日 originates as the word for the sun, but the ordinary process of language change has developed the modern word 日 into a fossil that exists mostly in compounds. The word for “day” (the period of time) is now 天 (etymologically “sky”, which it still also means today), and the word for the sun is now 太阳, “the great Yang”.
Since Japanese is written on the much stupider principle that different words may be written the same way as long as they mean similar things, there’s nothing stopping it from preserving a character like 日 as the written symbol for “day” even if the word used to pronounce that symbol changes (though I am not claiming that this has happened).
In Japanese, it’s:
日曜日/nichiyoubi — Sun Day
月曜日/getsuyoubi — Moon Day
火曜日/kayoubi — Fire Day
水曜日/suiyoubi — Water Day
木曜日/mokuyoubi — Wood Day
金曜日/kinyoubi — Metal Day
土曜日/doyoubi — Earth Day
So the traditional five eastern elements + sun and moon.
Now, the five eastern elements are used in the names for the five visible planets. But in combination with another character, which does not appear in the days of the week. The part of the planet character that appears in the day of the week is the element character only. Obviously it’s suggestive that they’re used alongside Sun and Moon, but it’s not clear to me that the days of the week come from the planets instead of directly from the elements.
EDIT: That is, the planets are:
Mercury: 水星 (Water star)
Venus: 金星 (Metal star)
Mars: 火星 (Fire star)
Jupiter: 木星 (Wood star)
Saturn: 土星 (Earth star)
You forgot to translate the middle character of the Japanese weekday terms, which severely undermines the point you’re trying to make.
In modern Chinese, and I suspect also in Japanese, the planets are:
水星 “water star” – Mercury.
金星 “gold star” – Venus
火星 “fire star” – Mars
木星 “wood star” – Jupiter
土星 “earth star” – Saturn
Since the sun and the moon are obviously not stars, but they are lights that move through the sky, they are grouped with the planets for astrological purposes under the term 七曜, the “seven lights”. It would be nonsensical to refer to 月星, the “moon star”.
The weekday terms refer to every 曜 as a 曜 because that is the context. But the word is quite overtly a reference to the astronomical object and not the element. 木 may be wood, but 木曜 can only refer to Jupiter.
We further “know” (less concretely, but the odds of this happening by coincidence are not good) that the identical terms aren’t just a case of independent development, because the Oriental system identifies the days in exactly the same order the West does – Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn. The conventional order for listing the elements is 金木水火土 gold, wood, water, fire, earth.
Huh, I didn’t know about the meaning of 曜, which I had only encountered in the context of weekdays, and didn’t bother to look up independently when I saw that the planets were 星 and not 曜. Thanks for the info-drop!
So we’re assuming that this is basically “relatively antique Christians translated their theologically significant week into Asian languages,” right?
I doubt the transfer was Christian; where Christians have gotten to set weekday names (as in modern China), they tend to go with utilitarian names. And I can see why they’d be less than enthusiastic about a system of weekday names that honored pagan gods.
Wikipedia says “The Chinese seem to have adopted the seven-day week from the Hellenistic system by the 4th century, although by which route is not entirely clear. It was again transmitted to China in the 8th century by Manichaeans, via the country of Kang (a Central Asian polity near Samarkand).”, so it seems like there was plenty of room for classical, non-Christian astronomers to handle the cultural transfer.
If the transfer wasn’t Christian, or I suppose Jewish, though that seems unlikely, why did it stick? I assumed that somebody who needed a seven day week for religious reasons would be the one who stomped their feet and said, “I don’t care that you guys don’t want this, I’m calling this day Tuesday.”
Is it really valid to say that because modern Christians prefer utilitarian weekday names, Christians 1,500 years ago must also have preferred utilitarian weekday names?
Probably matching up as closely as possible with “full moon/half-and-half/new moon” was a factor in having a 7-day week.
Who says it stuck? As I pointed out above, the traditional Chinese calendar, which honors the moon and assigns 30 days to every month, doesn’t have 7-day weeks at all. It uses 10-day weeks. This is why the options for “week” in modern Chinese are a Christian term and a neologism intended to replace the Christian term.
I’m in the process of asking some Chinese acquaintances about this, but I can’t make any guarantee of getting useful information back.
CORRECTION: the traditional Chinese calendar alternates between 29 and 30-day months; the last 旬 of a month therefore alternates between 9 and 10 days.
Does anyone divide up a fortnight with days named for the lanthanides?
Wait. Pasta isn’t anti-gay anymore?
Is it gross to watch gay people eat pasta?
I don’t think anyone actually claimed pasta was gay / anti-gay? It seems to be a pretty common position on the Left that progressive advocacy by corporations is more cynical cash grab than sincere conviction (see also: Kendall Jenner Pepsi Ad) but that corporate lip service can still be a useful tool. And it’s not like the HuffPo Left is alone in boycotting companies that go against their values. So I’m not quite sure who you’re winking to.
So far as I know, it was never pasta in general, it was Barilla which was just one brand.
Oh, so that explains Nero’s Thrice-Setting Sun skill in the game Fate/Grand Order which allows her to revive from death up to 3 times.
Given how much that one psychologist likes to mention the story of Cain and Abel, I’m surprised I haven’t heard him theorize on this.
The moon is very orderly, and Cain, being the first murderer, is a symbol of disorder, so kabbalistically he couldn’t… wait, I think I’m getting two psychologists mixed up.
It’s even more broken in Fate/extra, where the version of Thrice-Setting Sun that “Playable Saber” has is “revive from death once each combat round, no cap on how many times you can use it other than your mana supply.” In a game with cheap mana potions.
Basically the only way to lose after you get it is to get killed twice in a single round. Which isn’t technically impossible, given the strength of a few of the bosses, but…
Probably because the Bible doesn’t say Cain is doomed to wander forever, just that he’s doomed to wander. Here are various translations, none of which say he’s going to wander for all time, or is immortal or anything. Although immediately after being doomed to “wander” he “settled” in the land of Nod, so I don’t know how one settles while wandering. Regardless, I just assumed he wandered the rest of his days, and then died.
Though as I actually read the linked article (note to self — this is always a good thing to do. Maybe even before posting) …
As I read the article, I think if you set aside his polemics, his basic point is quite interesting and well-supported – that hunter gatherers had long been part of much larger networks than the band.
Statistically speaking, Cain is most likely somewhere in the Pacific Ocean. Yes, he can survive underwater. How else did he live through the flood?
And Oklahoma appears to be hard at work on generating more evidence on the teacher-strike thing. Today is day 4, and they’re apparently out of snow days, so everyone is hoping this gets wrapped up soon.
I don’t get how people jumped to the conclusion that Cain is immortal. God cursed him to wander, so… he just wandered for the rest of his life. And he tried to found a city, but was presumably mentally compelled to move on.
I don’t either, and I agree that he was dead. I was pointing out the obvious conclusion if he wasn’t, though. (Seriously, how would he keep from getting lost if he ended up in the middle of the ocean after the flood?)
If he walks straight for long enough he can get back to land.
Also, Scott’s entry on this attributes the assertion of Cain’s immortality to the biblical text. I wonder what passage of the Bible he has in mind.
Also which translation? Then again, considering he’s Jewish, there may be something in the Torah that’s not in the Bible.
I looked up the Catholic verse about Cain’s punishment,
Genesis 4:12 “When you till the ground it will no longer yield up its strength to you. A restless wanderer you will be on earth”
https://www.catholic.org/bible/book.php?id=1&bible_chapter=4
My partner pointed out that in the bible, death is equated to sleep numerous times, and since sleep = rest, and Cain must be restless, Cain can’t die.
Trust you to come up with a naval solution. 😉
Nice to see Business Insider pushing the idea that it’s totally OK to ignore half the population based on political beliefs.
I am so sick of this elitist rationalization for being condescending and judgemental.
They’re not pushing it, they’re explaining it. I don’t think anyone (including the businesses involved) is being elitist or condescending; I think this is how capitalism works.
But it does a poor job of that: consumer expenditure by age bracket is pretty much symmetric (https://www.bls.gov/opub/btn/volume-4/consumer-expenditures-vary-by-age.htm), and if you weighted it by the actual number of people in each age bracket, it would say that the boomers and the elderly are still a goldmine.
Besides, young people have desires, while old people have needs (health stuff etc.). Young people’s tastes change with unpredictable fashion, but there is more information about what old people like, because they like what they liked 10 years ago; their desires rarely change flexibly anymore.
So from a pure profit perspective I would much rather sell hearing aids and old music CD compilations to old people than to try to figure out whether vegetarian but not vegan burgers are cool enough for young yuppies or not.
But I would probably enjoy the second job more, so if the incentive from the profit motive decays, then that will happen.
Let’s try a different model: Taleb’s most intolerant minority. Protein World gets a lot of loud criticism for body-shaming fat people, but the makers of XXXXXL clothes do not get the same kind of attack from fitness aficionados. The NRA is not making a lot of noise over businesses cancelling their relationships with it, but the other way around, anti-gun people would make a lot of noise.
So it is more of a PR, brand thing. What is worse: young photogenic people in the media saying your company has blood on its hands by supporting the NRA, or old dudes grumbling at home that you no longer do? Even if the photogenic young people are not your primary customers, it is better PR to support them.
While I don’t think the intolerant-minority framing is wrong per se, I still think the Josh Barro article has more explanatory power.
It sounds like you’re interpreting Barro as saying “Companies signal progressivism because progressive millennials have more money than Boomers”, and I think you’re right that the data shows old people still have plenty enough spending money. But Barro isn’t making a point about magnitude, he’s making a point about direction. Young, urban, college-educated, unmarried people are a fast growing population, both in terms of numbers and earnings. Old people still have plenty of spending money, but not a *growing* amount of spending money. Also, even if there’s millions to be made in selling hearing aids and greatest hits CDs to old people, that doesn’t contradict Barro, it just means these aren’t the companies we should expect to see progressive-signaling. Which I think is accurate.
I don’t think the intolerant-minority framing is wrong per se, but it would mostly describe companies where progressive young people are NOT a large part of the customer base (i.e. the minority part), but which choose to prog-signal anyway. I don’t know of any business where that is the case.
Ironically, I would expect liberals to be the ones saying that it’s not ok to ignore people just because they have less money.
I guess that, since in this case it is republicans that are being ignored because they’re poor, the liberal solution would be to stop ignoring them right away, and instead patiently explain to them why they’re wrong.
Patiently explaining to someone why they’re wrong seems like an appropriate response to people being wrong. I wish there was more of that.
You wish for more mansplaining?
TITKOTILTSLOOSSC.
I think that needs some ‘splainin’.
“TITKOTILTSLOOSSC.”
??
This is the kind of thing I like to see left out of ssc?
Probably “This is the kind of thing I’d like to see less of on slate star codex”.
That said, incredibly dense initializations are definitely TKOTILTSLOOSSC.
Well, I don’t like your coining the term “initialization” instead of acronym, so there.
The first one to use an acronym is initializing it in more ways than one.
Incidentally, has anyone ever come up with a definition of “mansplaining” that isn’t entirely dependent on subjective interpretation?
Nice.
I didn’t call it an acronym because acronyms are supposed to be pronounced as a word (like “NASA”). The word I was looking for was initialism though, not initialization.
I pronounce it.
rlms’ abbreviation can be pronounced as a word, though, and he doesn’t say whether it’s meant to be so pronounced or not. So it’s not clear whether it’s an initialism or an acronym. (In other words, the first one to say it is pronouncing it in more ways than one.)
ETA: Dammit, Douglas, you beat me to it.
Did you use the preferred pronuns?
Fair enough.
I disagree with rlms. Sometimes a snarky comment like outis’ serves as a really useful, compact illustration of the problem with the statement it’s replying to.
Exasperatedly explaining to someone why you think they’re wrong isn’t actually an appropriate response to people you believe are wrong. Outis makes that argument far more succinctly and effectively than would otherwise be possible.
@Anthony
Outis’ statement in no way points out a “problem with the statement.”
Error’s statement is perfectly consistent with a desire for there not to be kneejerk dismissals of people based on their traits, because their beliefs are outside of an Overton Window, etc, etc.
Error merely wished that more people would do so, which doesn’t preclude recognizing that many people are incapable of being (very) charitable.
It was a bad comment that in itself is divisive.
Most liberals believe in markets for most things. When markets work, poor people get less stuff, and less attention, than rich people.
I think most liberals would be more interested in reducing inequality via redistribution rather than papering over some of its consequences. But there are dumb examples of, and adherents to, every philosophy.
@Darwin:
I laughed. Well done.
I’m not going to dissect why this is funny, save to note that it actually is.
“History’s first conspiracy theory?”
What about the Catiline Conspiracy? Or do actual conspiracies not count?
This is a point Steve Sailer has been making at a low level for a while now. History is chock full of well-demonstrated conspiracies. But people don’t refer to e.g. the idea that major journalists in the US used a closed email list to coordinate an effort to get Barack Obama elected through slanted media coverage as a “conspiracy theory” because it is well known that that really happened.
Similarly, the idea that the Theranos executives and board conspired to give the illusion of a high-tech medical testing company where in fact there was just a massive pile of lies isn’t a “conspiracy theory” because it’s true.
In an example that really shows the absurdity of saying that the “first conspiracy theory” concerned Nero, Cicero accused Pompey, in his capacity as co-conspirator with Julius Caesar and Marcus Crassus, of having a journal which recorded the names of all past and future consuls of Rome. Again, the theory that three people were conspiring to control Republican politics from semi-behind-the-scenes isn’t called a “conspiracy theory” because it is 100% accurate.
This doesn’t make any sense. Calling something a “conspiracy theory” is a way to imply that it can’t be true because conspiracy theories are, by definition, false. But if I think there’s a conspiracy going on, people will think of that as a “conspiracy theory” based on the construction of the phrase — I have a theory about a conspiracy — which has nothing to do with the way the phrase is actually used.
And to really round out the problem, people will make the argument in the above paragraph explicitly. (“Your theory involves a conspiracy; conspiracy theories are false; THEREFORE, your theory is false.”) Everyone seems to have lost track of the idea that “being false” and “involving a conspiracy” are completely orthogonal concepts.
Distinguish between:
a. Claims that some conspiracy exists.
b. The conspiracy-theory mode of thought, which generally makes an evidence-proof shell around some set of beliefs.
There are lots of crazies, fools, and trolls in the world, so there are lots of nutty conspiracy theories. There are also lots of actual conspiracies (though more of the “guy with gambling problems embezzles money from the fund he’s supposed to manage” kind than the “1000-person leakproof conspiracies made up of omnicompetent men in black” kind). (There are also a lot of things that aren’t conspiracies but are sort of big social phenomena that lead to some bad or weird outcome.) The crazy kind of conspiracy theorist basically adds noise to the process of figuring out what actual things are going on.
An observation: the characteristics of conspiracies that have actually happened tend to be quite different from posited “conspiracy theories” – the former are usually smaller in scale, involve fewer people, etc. Either the posited theories are false, or conspiracies that match the “conspiracy theory” model never get uncovered. I think the theories simply being false is more likely, since incompetence is usually a stronger force than malice.
You should see the scale of the Santa Claus Conspiracy.
Contemporary Turkish politics largely consist of giant conspiracies to attribute giant conspiracies to the conspirators’ enemies.
Some of these Turkish conspiracies have echoes in the U.S. For example, the biggest network of public charter schools in the U.S. appears to be run by the Turkish Gulen cult with the CIA’s approval so they can skim a few hundred million dollars per year from local American taxpayers. Why? Because Gulen’s cult would be a plausible alternative government for the U.S. to install in Ankara if anything unfortunate were to happen to Erdogan. So having Gulenites rip off local schoolboards is a lowkey way of funding a big foreign policy contingency.
But, virtually nobody in America, whether pro or anti-conspiracy theories, finds this at all interesting.
Why would Gülen need CIA approval to run public charter schools? Can’t anyone run public charter schools if they follow the rules? Do you have evidence that Gülen gets a special exemption or the like?
AFAIK, the only thing that the CIA did was to get him permanent US residency, which seems quite wise from the perspective of weakening the Islamists in Turkey. Doing this is not just helpful in the case of a coup, but also simply weakens Erdogan as it is.
Note that we also have evidence that ex-CIA director Woolsey tried to discredit Gülen. This suggests that he may also not have been a fan of Gülen while he was the CIA director.
Do you have evidence that the charter schools in question are more profitable than charter schools run by other organizations? Of any way in which the CIA subsidizes them or gives them a competitive advantage?
I explained back in 2014 how the business model of the Turkish Gulen network of charter schools across the US appears to be based around immigration fraud, kickbacks, self-dealing in contracts, etc:
http://www.unz.com/isteve/the-economics-of-gulen-cults-american/
For example, the Gulen organization gets a lot of visas for Turkish men who belong to their cult to come to the United States to be employed as charter school teachers in their schools.
Charter schoolteachers aren’t all that well-paid by American standards, but they are very well paid by Turkish standards. The Gulen cult often demands that about 40% of the immigrant teacher’s salary be kicked back in contributions to Gulen.
(By the way, the basis of Gulen’s power in Turkey was cornering the test prep industry. This allowed Gulen to infiltrate his followers into the Turkish police by, oddly enough, scoring high on police civil service tests. Erdogan used Gulen’s cops to take down the anti-Islamist military in various conspiracy trials. Then Gulen and Erdogan clashed, and Gulen tried to take down Erdogan by leaking wire taps of Erdogan complaining, for example, that his house was so stuffed with ill-gotten cash that they were running out of room. But then Erdogan went to war on Gulen, especially using the failed 2016 coup attempt to crush the Gulen network in Turkey.)
The FBI was raiding Gulen schools across the country in 2014, hauling off records.
But then that pressure just disappeared.
I can’t prove that the CIA had a talk with the FBI about why foreign policy demands that the Gulenites be allowed to get away with their skimming from local American taxpayers. But it’s a theory that would seem at least worth investigating and rather interesting. But, virtually nobody in America is interested in the tale of this weird Turkish organization being the biggest operator of charter schools in the US.
Here’s a 2017 CBS News article:
“Are some U.S. charter schools helping fund controversial Turkish cleric’s movement?”
https://www.cbsnews.com/news/is-turkish-religious-scholar-fethullah-gulen-funding-movement-abroad-through-us-charter-schools/
For background, here’s my January 1, 2014 article on Gulen, “The Shadowy Imam of the Poconos,” in Taki’s Magazine:
http://takimag.com/article/the_shadowy_imam_of_the_poconos_steve_sailer/print
I realize this Gulen stuff sounds nutty to Americans (to Turks, it sounds like business-as-usual), but it’s quite well-documented by this point.
Here’s Turkish-born Harvard economist Dani Rodrik’s considered opinion on:
“Is the U.S. behind Fethullah Gulen?”
http://www.unz.com/isteve/rodrik-is-the-cia-behind-imam-gulen/
“Whenever I talk with another Turk about the Gulen movement, a question invariably props up: is the CIA behind Gulen? In fact for most Turks this is a rather rhetorical question, with an incontrovertible answer. The belief that Gulen and his activities are orchestrated by the U.S. is as strongly held as it is widespread among Turks of all political coloration – secular or Islamist.
“This is my attempt at providing a reasoned answer to the question. My conclusion in brief: I don’t think Gulen is a tool of the U.S. or has received support from the U.S. for its clandestine operations. But it is possible that some elements within the U.S. national security apparatus think Gulen furthers their agenda, is worth protecting on U.S. soil, and have so far prevailed on other voices in the establishment with different views. …”
Sailer’s comment: One obvious candidate for an American Deep State backer of Gulen is former CIA man Graham Fuller, who also happens to be the former father-in-law of the Boston Marathon-bombing Tsarnaev Brothers’ Uncle Ruslan.
Rodrik goes on:
“Perhaps of more direct interest to the U.S., foreign service officers have long been aware that many Turks have been obtaining visas under false pretenses, with the ultimate aim of ending up as teachers in Gulen’s charter schools. Yet apparently nothing was ever done to stop this flow, nor to hold the movement to account. A ridiculous number of H-1B visas — which require demonstration that no qualified U.S. workers are available — have been issued to Turkish teachers in these schools. One naturally wonders why the U.S. administration never clamped down on the Gulen movement for apparent visa fraud.
“The same question arises with respect to the widespread pattern of financial improprieties that has been uncovered in Gulen’s charter schools. A whistleblower has provided evidence that Turkish teachers are required to kick back a portion of their salary to the movement. The FBI has seized documents revealing preferential awarding of contracts to Turkish-connected businesses. Such improprieties are apparently still under investigation. But the slow pace at which the government has moved does make one suspect that there is no overwhelming desire to bring Gulen to justice.”
All very interesting, but if they are stealing money from U.S. taxpayers, that presumably means that they are producing less schooling per dollar than alternative sources. Are they? All of your arguments seem to be about how they use Turkish teachers to hold down their costs and, possibly, get money as contributions to fund their organization.
The Gulen charter schools claim their students get high test scores.
Of course, since the Gulenites almost took over the Turkish state by monopolizing the test prep business so that their followers could ace the civil service exams, perhaps somebody in America should look into this a little more.
For example, in Turkey, there’s a sneaking suspicion that the reason young Gulen followers did so well on the police hiring exams was because older Gulen followers within the government were stealing the exams before they were given and leaking them to the Gulen test prep centers.
In contrast, the Gulenites didn’t seem to do as well on the military exams, so they didn’t take over the military. So, after Erdogan used the Gulenite cops to take down Kemalist generals like Dani Rodrik’s father-in-law in various show trials, the Gulenite police tried to take down Erdogan in 2013 by leaking wiretaps of him being a giant crook. Erdogan struck back by turning to some of his now humbled old enemies in the military to give him a non-Gulenite power base to strike back against his former friends in the Islamic Gulen movement.
Or at least that’s one Conspiracy Theory about recent Turkish events.
Which suggests that Dani Rodrik, whom you were citing as a source, might not be entirely unbiased.
@Steve Sailer
The Gülen movement puts very strong social pressure on all of their supporters to donate large sums of money to the movement, not just their teachers. They also pretty clearly want to build a Gülen-pillar, preferably with government support, as any sensible pillar-builder would do.
The combination of these policies does produce ‘kickbacks,’ but I see this as a side-effect of the above-mentioned policies, not as the goal.
“Which suggests that Dani Rodrik, whom you were citing as a source, might not be entirely unbiased.”
No, but Turkish-born Harvard economist Rodrik is super-smart and also well-informed about Turkey. He’s upfront about being the son-in-law of a Kemalist general who was persecuted by Erdogan and Gulen working together a decade or so ago, so I’m quite interested in what he has to say about recent Erdogan vs. Gulen questions. That Rodrik arrived in 2016 at (more nuanced) versions of what I came up with in 2014 suggests I wasn’t crazy in 2014.
“The combination of these policies does produce ‘kickbacks,’ but I see this as a side-effect of the above-mentioned policies, not as the goal.”
“Kickbacks” is such a harsh term …
Anyway, my general point is that roughly since the serious press went to war against “conspiracy theories” in the early 1990s when it looked likely that Oliver Stone’s “JFK” might sweep the Oscars, Americans with 3 digit IQs have been indoctrinated with the belief that there are no such things as conspiracies.
In contrast, in Turkey, it’s intellectually prestigious to believe in conspiracies — the Smartest Guy in the Room is not the Turk who best wields Occam’s Razor but the Turk who dreams up the most elaborate conspiracy theory.
Not surprisingly, this difference in mental orientation appears to allow at least one group of Turkish conspirators to sometimes put over on us naive Americans a pretty lowbrow conspiracy involving immigration fraud and the like without many educated Americans noticing what is going on for, broadly speaking, Sapir-Whorf reasons.
We’ve been lectured for the last quarter of a century that only rubes and crazy people like Randy Quaid’s character in “Independence Day” believe in conspiracies, so Americans have a very hard time cognitively engaging with a culture like Turkey’s where conspiring and conspiracy theorizing are socially prestigious activities.
Or calling something a conspiracy theory just distinguishes it from conspiracy fact.
Not being true (at least in the view of the speaker) is probably now part of the definition of a “conspiracy theory.” The term’s too loaded with negative affect to be used for anything the speaker believes in.
This probably ties in to there being fewer ancient conspiracy theories: we don’t have a huge number of sources for a lot of history, so we tend to treat those sources we do have as true. You see a few “conspiracy theories” in Rome because Rome has more sources. That doesn’t mean I’d ascribe much certainty to the Catiline Conspiracy, as all the surviving primary sources I’m aware of come from people who have a stake in it. In particular, using Cicero as the major source seems, um, reckless?
We’ve also got Sallust, but Sallust is hardly any less biased.
The modern concept of “conspiracy theory” emerged out of the JFK and (to a lesser extent) RFK assassinations. (Oddly, there has been remarkably little interest of this kind in the MLK assassination.)
Before the 1960s, the three most famous assassinations — Julius Caesar, Abraham Lincoln, and the Arch-Duke Franz Ferdinand — each involved sizable conspiracies. Not surprisingly, therefore, on 11/22/63, LBJ immediately guessed a conspiracy by pro-Castro elements, while RFK ordered his top men to look into three possible conspiracies, with the mafia at the top of the list.
Does the Tuskegee experiment count as a conspiracy?
The legal definition of “conspiracy” is that it has to be a crime under the laws of the society in which it occurs, so sadly no.
The colloquial definition of “conspiracy theory” is that its existence has to be known(*) only to the conspirators and the theorists, with the sheeple being ignorant and the mainstream media either ignorant or in on the coverup. I don’t think that was ever the case for Tuskegee, which was being openly discussed in medical journals from the start and was front-page news in the New York Times as soon as a doctor wrote them about it.
* In the “generally accepted as true” sense, not the “I am aware that silly people claim this” sense.
The thing about the Tuskegee experiment is that I think the men who were lied to about the treatment they weren’t getting had effectively no chance of finding out what was going on.
You could perhaps call it a conspiracy on that basis. One in the Santa Claus Conspiracy model where there’s more people who know the truth(*) than are being fooled by the lies and it’s little more than a tacit agreement and a culture gap that keeps the two separate.
But calling it a conspiracy theory runs into the problem that the first time anyone (specifically a guy named Peter Buxton) went to the public saying “I’ve got a theory something nefarious is going on here”, the New York Times promptly put the story on the front page as a thing that was definitely true and nefarious, and the study was terminated not long thereafter.
There was basically never a time or place where the Tuskegee experiment wasn’t one of, A: a known and uncontroversial good thing, B: completely unknown, or C: a known and uncontroversial bad thing, and none of those make for a “conspiracy theory” in the usual sense of the term.
* I’m guessing the total circulation of the medical journals in which the experiments were discussed was greater than the number of test subjects.
Those are good points.
If Tuskegee had required a classic conspiracy theory, we’d at least have the consolation of knowing that the perpetrators recognized their actions were an evil in need of concealment. That they were able to proceed openly, at least within their community, for over thirty years without being called out on it, is the really disturbing part.
On the teacher strikes: they don’t (and obviously can’t) prove a negative about what it does to students’ education when schools are so underfunded that teachers need to strike for 180 days. What effect does it have on students when teachers are stressed out from working three jobs? We already drastically underpay teachers.
It’s pretty clear to me we already exploit teachers’ desire not to hurt children to push wages lower, but everyone has a breaking point. In WV the teachers did a meals-on-wheels-type thing for the kids they thought were at risk of losing their only meal while on strike; clearly they are not the depraved monsters these anti-labour authors would like to imply.
I don’t think they particularly think teacher strikes are important in and of themselves, they’re just trying to use it to determine whether exposure to more school is helpful.
Does this take into account the bit where they get the whole summer off? While I’m aware that they do have to eat during that time, it’s a really massive benefit compared to those of us who don’t. To put it another way, what does a teacher get paid per hour of work?
EPI compared salaries per week so they wouldn’t have to deal with that. (Though one still might argue that teachers are on average overpaid per hour.)
Also worth noting: the “17 percent less” ignores benefits entirely – when EPI counts the value of benefits that drops the gap to 11%.
UPDATE: the average US school day is 6 hours 38 minutes; the amount of that time teachers spend teaching has probably been overestimated in past surveys, especially in international comparisons.
As well as standing in front of the class, teachers also have to make lessons, grade papers, etc. I don’t know how much time that takes (and lesson planning probably varies a lot depending on how much you can re-use past material), but it’s not implausible to me that this takes 1.5 hours per day (which would bring teachers up to average working hours).
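Spelled out: 6 hours 38 minutes in the school day plus 1.5 hours of prep and grading is about 8 hours 8 minutes a day, or roughly 40.7 hours over a five-day week, i.e. right around a standard full-time schedule.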
Yes. Teachers have quite a few duties beyond the hours they’re required to be in class (definitely prep and grading; paperwork, meeting with students, communicating with parents, club supervision generally aren’t included in contracted time either).
N.B. everything I’m citing is based on teacher self-reports; also, I’m not making any claims about anything in this thread besides how many hours teachers work.
A National Center for Education Statistics report from the early ’90s says US teachers work 11–12 hrs/week outside of the time that they were “required to be at [their] school,” total work week of ~46 hrs, not much difference between public/private. A Bill and Melinda Gates Foundation report from ’12 suggests similar or even higher numbers.
Probably biased anecdata: A teacher-educator I know says a typical K12 teacher might teach 2–3 hrs/day plus time on weekends (no priming, I asked what a typical # would be without referencing the above data). She’s been in teacher ed for ~15 years, has taught a total of ~3500 current K12 teachers in that time, and taught high school for ~8 years in the early/mid 90s. Apparently she and her husband, who was a middle school teacher at the time, each worked ~3–4 hrs a day outside of class but were unusually motivated and had a really high student load.
More probably biased anecdata, from Reddit (early career so probably skewing high).
That reddit thread looks like I expected. Young people throwing lots of hours at work because they think they need to impress; confuse effort with results; legitimately need to do a bunch of one-off things they’ve never done before; don’t realize which parts of the job need more and less effort; don’t know how to say ‘no’; haven’t figured out how to be efficient.
In other words, what most other professionals go through. After some number of years (usually within 10) you know how to put in quality work at the 40-50 hour mark.
@quaelegit:
Right. A proper accounting could go either way. You’re adding 1.5 hours to the “average school day” of ~6.5, but (other than grade school) teachers don’t necessarily have classes for every hour of the school day. So most can do at least some of their other work during off periods.
It’s certainly plausible that the average teacher (especially the average new teacher) does work about as many hours as other professionals, but it’s not clear that they need to do so and I wouldn’t be surprised if there were a substantial difference in the hours – in either direction.
Sure, but the same applies to most 9-5 jobs (not least because of legal requirements).
My impression is that there is a lot of variance in teachers’ working hours: bad and lazy teachers who don’t plan their own lessons and put minimal effort into marking homework spend very little time working outside of classes, whereas some teachers (not necessarily the best ones) spend hours each day marking work.
As someone who has taught, the time commitment depends a whole lot on how long you have been teaching a given subject. Even if all the teaching materials are just handed to you, figuring out what the typical misunderstandings are and how to help with them takes lots of time, but if you have taught a subject that does not change much over time for years, your prep work is going to be nearly nothing.
Yup. Barring some change in assignment, off-hours teaching work is very heavily front-loaded to newer teachers. I suspect this is the case in many professions, perhaps for different reasons.
But they don’t have the summer off.
Barring a few teachers whose students all understood the lesson plans and have no new material to teach the next year, teachers spend summers reviewing their plans, updating them to reflect new material (even the math teachers suddenly find themselves required to teach more and more material every year), tweaking the ones that students suddenly don’t take to when they worked every year previously, and planning and teaching summer school. Also, in at least one state, teachers are required to provide tutoring for free, which they do after school ends, but also are not allowed to charge for any tutoring done over the summer, so if they do that, it’s up to the government to pay them for it.
(Source: I initially went to school for teaching, but do much better at tutoring than classroom teaching.)
And this amounts to working a full-time job over the summer? Even in the worst case, that sounds like the sort of thing where you’re doing a couple of hours a day and can easily drop it for a couple of weeks if you want to go on vacation. The couple of weeks before school starts spent on planning should be paid, as should any teaching of summer school, so that sort of falls out of the argument. There’s a big difference between that and my job, which gives me about three weeks off, total.
In at least one state, it’s illegal to pump your own gas. Just sayin’.
On a more serious note, summer holidays alone are about 8 weeks long. Summer school is presumably, in most places, additional income, and the rest just doesn’t add up to 8 × 5 × 8 = 320 hours of work.
I think the point here is efficient markets: teachers may have worse pay than other jobs for similarly qualified people, but people that qualified still choose to do the job, so there must be some benefit, even if it’s just that you get to do a non-bullshit job. You can only gripe on the equivalent-wages line if you’re earning well below the population as a whole, which teachers don’t (even before all the holidays).
In addition to what bean said, I also want to mention how it is nearly impossible to get rid of a bad teacher. Surely the kind of job security teachers carry counts for something towards their ‘total compensation’.
My daughter has a biology teacher this year who is, as far as I can tell, completely worthless. Late to class; doesn’t show up to class at all (and doesn’t notify the administration!); hands out worksheets and plays on her phone; hands out worksheets and leaves the room. She has been reprimanded by the administration multiple times for failure to grade papers and tests in a timely manner.
My daughter turned in a project 2 days early and received a zero because the teacher ‘lost’ it. A shoebox-sized DNA model! When my daughter went into the teacher’s storage closet and found it on top of a tall cabinet, she was reprimanded by the teacher!
Communications with the administration were initially positive and then sort of died off after they figured out they couldn’t do anything about her.
Whatever we taxpayers are paying her, it’s too much.
That implies that bad teachers are overpaid, but isn’t a benefit for good teachers.
Assuming a simple method exists for distinguishing the two.
I think Matt’s comment describes a reasonable way. It doesn’t have very high resolution, but from the perspective of a good teacher that doesn’t matter. The important thing for them is that there’s no chance people will decide they are terrible and try to fire them, since that is the situation where job security is a benefit.
Matt’s method does not universalize well. There are plenty of parents who are convinced their kid’s teacher is terrible when in reality it is the parents who are terrible.
Since those parents can be just as good as or better than Matt at complaining to the administration, removing protection for bad teachers might cause some collateral damage among good teachers too.
It might mean that people will decide that they are terrible at random, which would be a benefit for a bad teacher but harmful to a good teacher.
Also, a good teacher might think that if the bad teachers were properly noticed, there would be more jobs for the good teachers, which might balance out the benefit from no chance of being thought bad.
There are plenty of parents who are convinced their kid’s teacher is terrible when in reality it is the parents who are terrible.
Every job has unreasonable clients. Employers don’t just fire their good workers when these clients start squawking. They still care about their work product and personal reputations within the professional community.
There’s nothing particularly unique about teachers that they would need job protections that other professionals don’t have.
Saint Fiasco says:
There are plenty of parents who are convinced their kid’s teacher is terrible when in reality it is the parents who are terrible.
Well, my wife has talked to multiple parents who have the same complaints about this woman. When the woman doesn’t show up to class, other teachers have to cover for her – not substitute, but skip their planning period (much of the discussion in this thread seems to ignore that planning periods exist?) to babysit her classes. I have in my email inbox an email from the principal where she wrote the following: “I know this has been a frustrating class for our students” and “I appreciate and thank you all for your communication with the school in regards to your concerns. Thank you for your patience and perseverance during this time as well.” The quality of this teacher is not in doubt.
My other daughter has a social studies teacher who showed her class Moana last Friday and will be showing The Little Mermaid tomorrow. No doubt after hours the teacher will be working overtime screening potential movies for next Friday. I suppose I should be grateful that she’s showing up to her job.
If the method involved a minimum of complaints, say from parents of half the students and half of the other teachers within the department, that might work.
Alas, teachers are sometimes like cops, willing to lie for other members of the brotherhood.
Working side-by-side with unfireable people really sucks.
Some states overpay teachers. Some states underpay teachers. Look at how hard it is to hire and keep teachers, and how hard it is to fire them, to evaluate where you are.
My encounter with the phenomenon of unfireable teachers* was The New Yorker‘s old article The Rubber Room. As it makes clear, it’s a confluence of things which cause these “rubber rooms,” and there are places where one or more conditions don’t apply, like Chicago. Even so, it’s one of the most fubared situations I have ever heard of.
Have you googled how many open teacher positions there are in your state? What has your state done about this problem? Oklahoma STILL has over 1,000 unfilled teaching positions after the legislature slashed the requirement of postgrad work for teachers and hired 500 kids fresh out of college with a BS and little to no training in teaching. I wouldn’t be surprised if your kid’s teacher was one of those with minimal training. If you want quality teachers, you need pay high enough that the district can afford to fire bad teachers without screwing themselves.
Except that in many states, firing bad teachers, at least after the first few years, is not an option. My wife grew up in Cleveland Heights, a suburb, where I doubt teacher salaries were particularly low. She had one very bad teacher. Her mother went to the principal to complain. The principal’s response was that they couldn’t get rid of the teacher, who had a few more years before retirement, and they had to give someone’s kids to her.
In California at present, teachers must be notified of their tenure status after sixteen months. Thirty-two states grant tenure status after three years. There are only four states where public school teachers don’t get tenure.
@userfriendlyyy
The “kids fresh out of college” part of that is false. I mean, it’s certainly possible that some of the people hired under emergency certification are “kids fresh out of college” but the article doesn’t say this and it runs counter to stereotype. The “central example” of the value of emergency certification is hiring a professional engineer (somebody with a PhD in a STEM field) to teach math or physics or computers, either post-retirement or during a sabbatical from their other profession.
“Emergency certification” is awesome and ought to be the default. The fact that the Oklahoma State Board only issued 500 emergency certifications might even be their entire problem – if they want to fill 1000 more positions they should issue 1000 more certifications to widen the applicant pool. Let those with and without education degrees compete on an even playing field!
Another part of the problem is an overemphasis on “small class sizes”. If – justified by terrible science – you pass a mandate that classes have to be inefficiently small, there’s suddenly a need for more teachers than the local economy previously needed or produced. Even if the salaries were excellent it’d take years to remedy that sort of shortfall; you’re bound to see a supply shock. Given this context – that for legal reasons they mainly need people with education degrees and teaching credentials – hiring from places like Puerto Rico makes sense. But the better solution is to (a) get rid of the arbitrary class size limits, (b) get rid of the “education degree” requirement.
Well, just about any way you split it the best quality teachers are going to go to the highest paying states unless they have other reasons not to.
Having graduated with a B.S. in engineering and also having taught swimming lessons at the Y for over a decade I can guarantee you that at most 1-5% of my graduating class of 300ish would even be capable of standing in front of a bunch of high schoolers and effectively communicating anything about science. And most of those people were the ones that realised college has practically nothing to do with learning the material and everything to do with networking to get a job. Making them the least likely to need a job teaching. I cringe picturing the people who I could see ending up in these poorly paid teaching positions.
As far as class size goes, it is rather clear that increasing it just adds more stress for teachers and decreases the quality of instruction.
All of your proposed solutions sound great for someone else’s kid. Is that the kind of school you want to send your kid to? Underpaid, underqualified teachers with 50 kids to a class?
I get so sick of this country and everyone’s selfish shortsightedness. Public education is a social good that benefits the whole population. It creates a more educated workforce capable of more complex tasks, higher paid jobs where they will pay more taxes and be less likely to end up in jail or the hospital on the taxpayer’s dime. There is literally no one that can think past the next quarter in this country which is why we make debt slaves out of anyone that wants to go to college rather than just paying for it with increased taxes on people that go through the process and successfully parlay that into a high paying job. Instead we have the poor pay for it because we have to pay our doctors, lawyers, and engineers much more than in other countries so they can pay off student debt; those inflated wages translate into higher prices for everyone. I get so mad knowing that college used to be practically free in this country and now just thinking about all the wasted talent that see the price tag and decide flipping burgers is a better idea. This has to be the least meritocratic society in the OECD.
One might expect that. My understanding of attempts to measure the effect on student learning of different variables is that class size doesn’t seem to have much effect. It’s been a fair while since I looked at that literature, however, so I may be out of date.
So far as “unqualified teachers” are concerned, is there any good evidence that getting an education degree makes someone a better teacher? The pattern over my lifetime is that the cost per pupil of schooling has gone up sharply while measured outcomes have, if anything, trended down. If there really were a science of education, which is what the existence of ed schools presumes, one would expect that people would get better over time at doing it, not worse.
IIRC there are studies that seem to demonstrate objectively measurable learning improvements from having very small classes, such as a student-teacher ratio below 6. But there’s essentially no evidence of benefit from getting class sizes down from 40 to 35 or 35 to 30 or 30 to 25 or wherever the current margin is. Yes, teachers prefer smaller classes because it means they do less work, but schools exist for the benefit of kids, not teachers. Given that qualified teachers and money to pay them are in limited supply, there is a tradeoff here – smaller classes means bringing in lower-quality teachers and spending more on teacher salaries that might be better spent elsewhere, including the option of keeping fewer teachers but paying those teachers more.
I think the whole idea of expecting kids to get most of their info from teachers via large lectures is nuts. If I had my druthers, I might prefer kids to be instructed primarily by older kids. Have mixed-age classrooms and let older kids teach/tutor younger kids, at least until the younger kids are old enough that they can learn primarily by reading books. That way a large fraction of the kids get exposed to the material twice – once as a learner and once as a teacher. Then learning material well enough that you can teach it, and learning how to teach, become an expected part of the curriculum.
I’m thinking here of the Karate school model where there is one school master who teaches a small group of very advanced students; those students refine their own understanding by teaching lower-level classes which the master occasionally observes or checks in on briefly. With that model you could have a 100 to 1 or (with another indirection layer) even 1000 to 1 ratio between students and full teachers. With that sort of setup you could afford to pay the full teachers really well (and perhaps pay some of the teacher’s assistants a little) while still keeping tuition costs well below that of public schools.
“Unschooling” is another option – have a few teachers around but let the kids decide for themselves what they want to learn and when and how – if something like a “Sudbury School” were available I’d certainly prefer to send my kids there than a standard school.
(something something Khan Academy…)
Of course cost per pupil of schooling has increased.
https://en.wikipedia.org/wiki/Baumol%27s_cost_disease
If we’re talking specifically about Oklahoma, they’ve been going through a multi-year budget crisis leaving them with teacher pay 20% below national averages and almost the lowest in the country. I think that’s their bigger problem.
Oklahoma cost of living is about 89% of the U.S. average, according to a quick web search, so adjusting for that makes teacher salaries only about 10% below average, assuming your figure is correct.
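(To spell out the adjustment, which is a division rather than a subtraction: relative pay of 0.80 over a relative cost of living of 0.89 gives 0.80 / 0.89 ≈ 0.90, i.e. roughly 10% below the national average in real terms.)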
@userfriendlyyy:
That EPI study says when they adjust for the higher level of benefits, teachers are underpaid by 11%. Does a difference of 11% justify the word “drastically”? In that context I think I might have gone with “slightly” or “mildly”.
Skimming EPI’s paper, the good news is that they claim to be comparing weekly earnings, to reduce the impact of the three-months-off issue.
Alas, the “with similar education” part is dicey, as they appear to be comparing most teacher salaries to those of other people whose job requires “a college degree”, not adjusting for the difference between a degree in education versus a degree in, say, computer science.
A degree in education is a bachelor’s: 4 years of college.
Sure, but in terms of course difficulty education is one of the easiest degrees to obtain. From a signaling standpoint, getting a degree in the hard sciences signals a high IQ – much higher than that of a high-school graduate – whereas getting one in education doesn’t; that ought to be taken into account when assessing whether people are underpaid compared to their known attributes.
In real terms, most teachers need no more than a high school education to accomplish their job. That’s why parents without education degrees can teach their own children, often with better results.
Teacher pay should be a function of supply and demand. There’s simply a lot of people who can do the job. This is like saying why do ditch diggers get paid so little when it’s hard work. It’s hard work that nearly anyone can do.
It is my impression, although I haven’t checked current numbers, that private schools get teachers even though they pay less than public schools, presumably because teaching in a private school is pleasanter work.
I recently got my first teaching job at a private school. I searched mainly for private school positions despite the fact that I could have gotten a higher salary at a public school. The two primary reasons were the assumption that it would be a more pleasant environment, and that I wouldn’t have to worry about certification.
You’d first have to convince parents that untrained teachers are just as good as trained ones. I think that would be difficult. 🙂
I assume by “trained” you mean “having an education degree.” Is there good evidence that such a degree correlates positively with teaching ability?
I recall hearing a little about the research on this, and as I recall having a degree in the subject matter being taught was correlated with teaching effectiveness, with more advanced degrees being better. But degrees in education were not correlated at all with educational effectiveness. Of course, as a teacher with an advanced degree in my subject and no training to speak of in education, that’s exactly the kind of study that flatters my own biases, so I’m suspicious of my memory in this case.
@David, I don’t know, but my point was about the parent’s likely beliefs, not about the state of the evidence.
@Harry:
I suspect a lot of parents would consider a teacher with a degree in physics at least as well qualified to teach physics to their child as a teacher with a degree in education, very possibly more qualified.
In high school? Yes, maybe. I was thinking more of primary education.
My understanding is that in most states, you can’t teach high school physics in the public schools with a degree in physics–you need some additional educational certification. And I think there is not any evidence that that certification improves educational outcomes at all.
Anyone have a link to some data here? (Freddie, maybe?)
I’d personally be very reluctant to allow a physics graduate to teach at a high school level without some sort of training and filtering. I might be typical-minding a bit, however, simply because I know if put in that situation myself I’d have absolutely no clue what to do and would be utterly dreadful at it.
Whether it really needs to take six months is another question.
@Harry:
The first, and possibly most important, filter is wanting to do it.
You could steal a lesson plan someone has posted online, try to summarize some of the material in the chapters of whatever text you’re using, solve some of the problems you don’t assign as homework as worked examples, and use the fact that you do in fact know physics to answer any questions from your students. In other words, how a college professor is likely to approach an intro class they haven’t taught before. Probably high school students are more challenging than college students, but nobody has jumped up to cite any evidence that trendy techniques taught in education classes actually help anyone cope with those challenges.
@Harry:
I guess I’m confused – why couldn’t you just teach high school physics the same way you remember having been taught high school physics when you were in high school? Does physics change so much in a decade or two or three that you wouldn’t expect to be able to find a similar set of textbooks? Heck, as a grad student you not only have extra experience learning that subject but additionally you’ve probably been a TA at some point running small college discussion groups while helping to teach freshmen the college-level Physics 101 course, so you could draw on that experience as well. How could somebody in that circumstance “have no clue what to do”?
My memory’s not that good. [Caveat: it is possible that when I finished my PhD I still remembered enough about being taught at high school for that to be a workable strategy. I don’t think so, but I can’t be sure because, well, my memory’s not that good.]
Another caveat is that, because of my anxiety disorder, I may be under-estimating my own likely competence. Although I don’t suppose teaching would be a good career for someone with an anxiety disorder anyway.
Incidentally, I never taught during my PhD, and I don’t recall that any of my colleagues did either – I think that’s just not how things were done in England – but that was a long time ago and, well, did I mention my memory’s not that good? 🙂
The way I understand things (at least in Maryland), a guy with a physics degree can get a job teaching high school physics in a private school, but not in a public school. For the public school, he will also need some kind of education degree or at least a teaching certificate that requires some additional coursework. My understanding is also that, in general, students of someone with a teaching certificate or educational degree don’t seem to learn any better than students of someone without one.
That EPI study is almost certainly a study of wages, not compensation, because the BLS provides a lot of great wage data but is less good on total compensation. As bean and matt point out, that means they exclude the benefit of lots of time off and practical unfireability. It also doesn’t include the extremely generous pensions and medical coverage that tend to go with government employment. It also doesn’t hurt to mention that public schools pay considerably more on average than private schools do, though there are a lot of complicating factors in that comparison.
On a more philosophical note, “than industries with similar education and skill levels” is a decidedly questionable adjustment. This study doubtless is treating advanced degrees the same, whether they are in education or electrical engineering, and ignoring the fact that in most public school systems there are automatic salary increases for getting advanced degrees, hence a proliferation of diploma mills and purposely credentialed teachers.
Many states have HUGE teacher shortages because of the bad pay.
Husband of a teacher here (although she teaches part-time GED and diploma courses for the county school system now rather than the high school social studies she used to teach). Counting school hours is a sure way to underestimate the time that teachers put into their work. When she did teach full-time in high school, her daily regimen during the school year was something like:
— Get up and leave the house before I left for my own full-time job.
— Work the six and a half hours mentioned, eat lunch during a class.
— Stay after work making copies, grading papers, and performing other bureaucratic duties.
— Get home after I did.
— Grade papers, create lesson plans, etc.
— Eat supper.
— Grade more papers, do paperwork, etc.
— Go to bed after 11:00.
— Do it again the next day.
This leaves out things like parent-teacher conferences, PTSA meetings, mandatory assignments (fortunately not frequent) like running ballgame concession stands, rehearsing graduation, and other such. Also left out are after-hours meetings with students for the yearbook class, etc.
Yes, she got the summers off — although even then she would have to take classes and attend seminars to keep her certification current — but during the school year, work ate her life. I told her when we married that she did not have to work if she did not want to, and I was very happy when she decided to quit, even though the loss of her paycheck was a bit of a blow.
It might eat your life when you are a young teacher, but I’ve known lots of experienced teachers who were very effective at less than 50 hours a week.
Young people often think throwing hours at a job improves performance, and some managers encourage this mindset. It is incorrect.
Older teachers have the lesson plans they made when they were young.
This is my understanding. A friend who was in her first year of teaching (High School Spanish, IIRC) said it was eating her alive because she had to write all of her lesson plans from scratch. But she also said that she expected next year to be much better, because she wouldn’t have to do that. Even if there’s some churn in what classes you have, it’s probably a lot easier to write one new lesson plan with some experience under your belt than it is to write several when you’re new to teaching.
“we drastically underpay teachers.”
Your link provides no support for that claim. It shows that teachers in some states are paid less than in others and object to it.
I have not looked at current data, but when I looked at the figures some time back, average teacher salaries were above the U.S. average but below the average for college graduates. The latter could be interpreted as evidence of underpayment—or as evidence that the abler college students tend to choose more challenging majors than education.
Do you have data to show that teachers are drastically underpaid? If so, perhaps that is what your link should have gone to.
How about the massive shortage of teachers in low-salary states?
In terms of teachers per capita, Oklahoma is right in the middle of the pack – they have about 1 teacher per 100 residents, the same as everybody else.
Regardless, wasn’t the question about teacher salaries in the entire US? If Oklahoma is getting by on 3% fewer teachers than they’d like (they have ~39,000 teachers and want 1,000 more) and we rule out other causes (artificial barriers to entry, whipsaw regulations, unpleasant job conditions…) then that might constitute evidence teachers in that state are somewhat underpaid, but it doesn’t seem like strong evidence that teachers in general in the US are underpaid, much less drastically so. Does it?
Per 100 residents isn’t necessarily very useful when you have Maine with 19% of the population under 18 and Utah with 31% under 18 (the nationwide average is 23%; OK is at 25%). So yeah, OK should have slightly more than the average. Maybe “drastically” is an adverb too far for you, but I don’t understand why the richest country that has ever existed keeps 1 in 5 children in poverty, right near the top of the OECD, while insisting on cutting school funding, which is one of the very few pathways out they have. I am exhausted by this topic; see my other comment if you want to see why. I am fed up with the complete inability of anyone in this country to think about what is good for anyone besides themselves. I don’t even have kids and I’m not planning to, but this just makes me sick. We have a literal oligarchy with more money than they could possibly spend and definitely don’t deserve, and everyone wants to hate on public school teachers for being greedy. We are way overdue for a repeat of the French Revolution.
“but I don’t understand why the richest country that has ever existed keeps 1 in 5 children in poverty, right near the top of the OECD”
Did you try following the link you gave, and then following its link to the source of those figures? I quote:
“The poverty rate is the ratio of the number of people (in a given age group) whose income falls below the poverty line; taken as half the median household income of the total population.”
The median household income of Italy is a little less than half that of the U.S. So the source you are using defines children in the U.S. as poor at more than double the income that defines children in Italy as poor.
Can you see that your “the richest country that ever existed keeps 1 in 5 children in poverty, right near the top of the OECD” only makes sense on the opposite assumption–that the definition of poverty is the same for all countries? There is no particular reason why a richer country should have a smaller fraction of the population below half its median income than a poorer country.
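To illustrate with round, hypothetical numbers (not the actual medians): if the U.S. median household income were $60,000, the poverty line used here would be $30,000, while an Italian median of $30,000 would put Italy’s line at $15,000. A household earning $25,000 would then count as poor in the U.S. but comfortably non-poor in Italy, despite an identical income.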
Did you know the fact I have just pointed out?
Take a look at a graph of U.S. government spending on schooling over the past century plus. As you will see, relative to GDP it has increased almost sixfold. Over that same period, real GDP per capita increased more than eightfold. So real per capita expenditure on education has gone up about fiftyfold since 1900.
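Spelling out that last step: spending per capita = (spending / GDP) × (GDP per capita), so a sixfold rise in the first factor times an eightfold rise in the second gives 6 × 8 = 48, i.e. roughly fiftyfold.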
There is a real world out there, and before making general claims based on something you have seen about expenditures over the past two years it is worth checking what the actual pattern has been.
Is that in real terms?
@Harry:
I’m not finding the source I got those numbers from, but it was probably nominal. Another source gives figures at purchasing power parity, and the Italian figure is a little less than 2/3 of the American.
You might have been looking at something that referenced Wikipedia’s list of countries by household income, which has Italy’s household income at ~55% of the US value using a PPP comparator. However, that page’s sources are seriously outdated; all the numbers are from at least a decade ago and Italy seems to have gained some ground since then.
Poverty is relative, like most things. Maybe we earn a bit more but every other country in the OECD has free healthcare and a better safety net.
Scott just did a post on Cost Disease, though IIRC he didn’t really explain what causes it and his examples weren’t the textbook cases.
https://en.wikipedia.org/wiki/Baumol%27s_cost_disease
In which case complaining about US poverty rates on the grounds that a richer country should do better doesn’t make any sense, does it? Yet that was what you were doing.
Did you know that the poverty figures you were citing were calculated using a relative definition? I asked that question in my previous comment and you didn’t answer it.
My guess is that you did not. That matters–more for you than for us. You post with a great deal of confidence that you know how things should be done. If you discover that you are being careless about the facts your conclusions are based on, you might reevaluate that confidence.
No country has free healthcare; what you are describing is healthcare paid for by taxes. Those taxes come out of the income of the people in that country, just as private payments for healthcare do. So the fact that healthcare is “free” doesn’t mean that comparative income figures make the country look poorer than it actually is.
It’s true that healthcare in the U.S. is expensive relative to other countries, but it isn’t clear why; my guess is that it is not because of too little government involvement but too much. But that would be a long argument.
There is a conspiracy to keep one-third of our nation ill-fed, ill-clothed, and ill-housed. The conspirators have a name.
Statisticians.
The cohort-level difference-in-differences methodology controls (by design) for the level of resourcing of a given school or teacher, and for any other fixed characteristics associated with the treatment group. That is the whole point of the methodology.
It may well be true that schools and/or teachers are underresourced, but that’s not the research question.
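For anyone unfamiliar with the design, here is a minimal sketch of the two-group, two-period version in Python, with entirely made-up numbers (this is the generic estimator, not the specific model used in the study under discussion):

    # Difference-in-differences with two groups and two periods.
    # Any fixed, time-invariant trait of a group (e.g. how well-resourced
    # its schools are) appears in both that group's pre and post means,
    # so it cancels out of the estimate by construction.

    # Mean outcomes for each group in each period (hypothetical values).
    treated_pre, treated_post = 52.0, 58.0   # cohort exposed to the treatment
    control_pre, control_post = 50.0, 53.0   # comparison cohort

    treated_change = treated_post - treated_pre   # 6.0: effect + time trend
    control_change = control_post - control_pre   # 3.0: time trend alone

    # Netting out the shared time trend leaves the treatment-effect estimate.
    did_estimate = treated_change - control_change
    print(did_estimate)   # 3.0

Because a group’s fixed traits appear in both of its means, the first subtraction removes them; the second subtraction removes the time trend shared with the control group.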
According to this, teachers in Mexico are paid vastly more (as a multiple of the average salary) than in most other countries, yet educational attainment there still sucks. There are a million confounders, of course, but I’ll count it as evidence against the notion that throwing more money at teachers would improve education.
Speaking of confounders, maybe worse teachers are more likely to go on strike? Or the same factor could make some teachers worse at their job and more likely to strike. For instance, I could see a strongly activism-minded teacher focusing more on political causes than on teaching math, and also being more likely to support a strike.
It’s hard to square agriculture being a soft transition with the Neolithic transition in Britain, which involved a replacement of 90-95% of the people. Perhaps it was not a revolution in sociability. It was certainly a revolution in other ways.
There’s also the fact that the population didn’t rise much in the first few thousand years after agriculture. Was agriculture simply really inefficient? In Against the Grain, it’s theorized that this was due to plague after plague hitting both the people and their crops.
Reply to Nathan Cofnas, by Kevin MacDonald
And here’s Cofnas’s response to that.
And MacDonald’s rejoinder.
I guess I am happy this is being debated openly and in a decent manner, an improvement over Voaters festering in their hate echo chamber because nobody ever wants to talk to them.
I tried to read the paper and I tried to read the reply, but they are both really boring. Can’t those people even try to quantify anything?
“History’s first conspiracy theory?”
The conspiracy theory that the Achaemenid Persian emperor Bardiya/Smerdis was murdered and replaced by a look-alike.
Yes, this. Egyptian and Mesopotamian records are too compressed to include stuff like this, so if there’s a conspiracy theory Herodotus mentions, that’s the first one.
Slightly different versions are given in Herodotus and in the 6th-century BC Behistun Inscription of Darius. I think the usual interpretation (tl;dr version) is that Darius himself invented the story in order to justify his own seizure of the throne: “I didn’t kill the king. I just killed a pretender, and he had killed the king.”
Who knows whether any of the contemporary Persian populace actually believed it. But knowing Herodotus, it was probably just too good a story for him to pass up.
Indeed.
Also note that Darius said the pretender was a Magus disguised as the rightful king. So “a wizard did it.”
We’ve talked about the fewer-shootings-during-NRA-conventions thing here before. TL;DR: probably just noise, and the postulated causes are plainly implausible in ways that indicate the authors didn’t do even minimal homework on this. Gelman is right.
Indeed. As was said over there: “I would expect (though haven’t been able to verify) that there would be any number of shooting range expeditions during the conference and that this would actually mean many attendees would be more likely to handle a gun during that time period.”
I’ve never been to NRAAM, but I’ve read reports from people I follow who do.
Going to the range during it doesn’t seem to be a big activity … but whenever possible (and it’s usually possible, I believe, due to alignment of preferences and thus decisionmaking re. siting), NRAAM allows, nay almost encourages, concealed carry among attendees whenever the host state/city/venue allow it; there’s a reason it’s not in California, New York, New Jersey, Illinois, or similar hostile [to carry in particular and gun rights in general] states.
NRAAM attendance thus has a relatively low impact on attendees’ gun handling incidence compared to any number of other events someone might attend.
On the bullshit jobs point, feeling meaningful isn’t the same as being meaningful. I doubt there are many true bullshit jobs, particularly at a time when companies have been trying to streamline and get by with as few workers as possible. It’s just that some jobs offer a more direct impact on society, and others create value in more abstract or indirect ways that don’t feel as meaningful.
This is a big insight of effective altruism too; a high-paid banker who donates five or six figures to malaria relief may be making a much bigger positive impact on others than a low-paid social worker, but isn’t going to feel as altruistic on a day-to-day basis.
I believe there are in fact many bullshit jobs, and that they are mostly concentrated in government. Businesses can’t compete by paying people to do unproductive work.
As anecdotal evidence: I live in Ottawa, Canada’s capital, which has an extremely high number of civil servants. It’s a running joke that they’re not working hard, they’re hardly working. What I hear from people I know who work for the government, and what I saw with my own eyes during a summer job, confirm this.
Unfortunately the text linked by SA doesn’t specify which proportion of people who feel useless work in the public or private sector.
As for the threat of automation, one of my pet theories is that in the event too many workers are displaced, there will be a return to personal servants for the higher classes, Downton Abbey style. I’m not convinced it’s true but it feels more plausible to me than everybody receiving a UBI and writing poetry all day.
I don’t think there are enough people in the public sector, total, for it to explain the kind of numbers that survey finds.
I agree, but I would be curious to see whether the public sector has a higher proportion of people who feel useless. I would expect that to be the case.
I doubt it. Most public employees work in education or healthcare. Robin Hanson might disagree but I strongly suspect the rate of feeling useless is a lot lower among teachers than bankers.
And just because YOU think of the EPA or SEC or HUD or whatever as useless doesn’t mean employees of those agencies do. To the contrary, I bet a much greater share of regulatory workers believe in the mission of their agency than do private company employees.
I would also be curious to see data on this, because on this specific metric (share of workers who feel their job is useless) I bet the public sector actually performs dramatically better than any other industry you could find (excepting workers at nonprofits).
I admit that I was thinking more of civil servants in large bureaucracies than teachers and nurses when I was talking about the public sector. And for those large bureaucracies I would like to see the data.
Most people who work in education and health care do not teach or personally care for the sick.
That might have been true 50 years ago, but now we have armies of administrators in both those fields.
I think this hinges on what metrics people use for meaningful.
At my old job I was working at a private company on inventory software for pharma R&D.
In absolute terms it’s probably fairly useful.
But that’s separate from it feeling meaningful to me. I had no competitive advantage in that job; if I’d died one day, they’d have just had another coder slotted in doing my tasks within a couple of weeks. I had almost no design-level input into the tasks. I was a code monkey writing code that any reasonably competent software dev could write, and doing so in the most boring way I could, so as to make sure it was as similar as possible to the style of the rest of the codebase, without ornamentation.
*Somebody* had to do the job, but it had no “meaning and significance” to me.
My current job in research (technically public sector) almost certainly involves my work being used in research far less: each line of code is only being used by small teams, not by half the world’s major pharma R&D departments. But it has vastly more “meaning and significance” to me, because I have vastly more freedom, and there’s a kick you get when you find something notable, or when an analysis kicks up results that may have saved a couple of kids’ sight.
I occasionally see some of my coworkers in science news about cool breakthroughs or research and there’s a connection from that to helping them or members of their teams get analysis code to work.
I see people geeking out on forums linking to work done by people I know and, while I’m not a name I also get a little kick out of knowing that I helped X sort out why their R scripts were failing part way through or Y when they couldn’t get some perl code from a related publication working or helped Z with automating some task.
I used to feel sad every day going to work, in this job I whistle on my way in.
Being a tiny, utterly replaceable cog in a machine so vast you never see any meaningful results linked tightly with anything you do doesn’t make people happy.
I think that tons of teachers feel like they aren’t meaningful. For a while, it was even a cliché defense of bad schools, like, “Oh, we can’t be expected to teach these kids when they have such bad home lives.”
I can attest to that having been the case in at least one instance. Hence the career change.
Being a tiny, utterly replaceable cog in a machine so vast you never see any meaningful results linked tightly with anything you do doesn’t make people happy.
THAT’S WHAT THE MONEY IS FOR!
It isn’t exactly the same thing as meaningful, but I find that I get a pleasure from doing something where there is a direct link between what I am doing and being paid more than with an indirect link. If I write a book or an article or give a talk and get paid, it’s intuitively pretty obvious that the money represents real value produced. If I teach at a university on salary the link is considerably more indirect, so the intuition of “I earned this money” is weaker.
I mentioned this to a friend who is both an economist and a gambler, a blackjack card counter. He said that what he found most satisfactory was winning money at cards, because he was in direct conflict with other people and beating them. The moral intuition of the predator—in this case a moral predator, since the conflict was consensual—in contrast to that of the producer.
It’s not necessarily that he believes that e.g. HUD is useless but rather that there are people that work there that do almost literally nothing for the organization. Or if they do something it is baroquely inefficient and pointless — like copying values from one spreadsheet to another without using copy and paste.
“THAT’S WHAT THE MONEY IS FOR!”
Funnily enough the money is also way better in the current job.
Trivially replaceable cogs don’t tend to command high salaries because they’re trivially replaceable.
It probably bears pointing out that governments can force companies to hire unproductive/useless workers on pain of legal consequences. My last job was sort of like this. It was 10-20% making sure airplanes didn’t crash, and 80-90% writing that up to satisfy the FAA. I was employed by a private company, but if they weren’t legally required to keep the FAA happy, my job would have either not existed or been a lot different. (They cared a lot about planes not crashing, but a lot of the implementation was due to government requirements.)
Competition in the private sector can also result in lots of bullshit work, especially when the competition is zero sum.
Advertising is the most obvious example. Some advertising is necessary to connect consumers with providers of products and services they need, but it’s clear that at a certain point the customers one company gains via advertising are just taken away from another company that provides an almost identical product.
I think that’s a different kind of bullshit work; see Darwin’s comment below. Lots of jobs (for instance the ancient profession of soldiering) are more about capturing shares of value than creating it, but people working on projects they know to be futile (and sometimes that everyone involved knows to be futile, but that continue to exist anyway) seems like a more modern phenomenon.
There was a writeup on Reddit’s programming subreddit a while back from a former Google software dev.
For 2 years, projects he was on kept getting cancelled by top-tier management due to shifting company goals. Meanwhile he was getting more and more put off, because he was realizing that the only way to get promoted was to basically game the promotion system, ignore what was good for the team or the company, and only do things which optimize for promotion metrics, and then realizing that most of the senior people were in those senior positions because they’d been doing exactly that, and that was why things were getting more and more shit.
Private companies can waste phenomenal amounts of money when they’re dominant in the market.
They can if they are all doing the same amount.
Right- unless someone else comes along and finds a way to do less.
It also depends what you mean by ‘create value’.
I would probably have said that I have a bullshit job in that survey.
I know for sure that I do create value for my company – in fact, some initiatives I’ve been involved with have been very profitable.
But I don’t believe I create any value for society. A lot of my work is related to marketing and branding, which is mostly a zero-sum game trying to pull consumers away from our competitors.
I think a lot of jobs consist of such zero-sum games.
+1
It’s worth distinguishing between:
a. What I do creates no value (or negative value) for society, but does create value for my employer. Example: Patent-troll attorney.
b. What I do creates no value (or negative value) for anyone but me. Example: An entirely redundant middle-manager who is kept on because management doesn’t realize he’s redundant.
I also think there’s a lot of stuff that looks useless but really isn’t. Like, if a company’s building airplanes and there’s a huge amount of form-filling-out that seems unproductive, it may be that this form-filling-out is the only way, wasteful and inefficient as it is, that we’ve managed to get airplanes built safely. It’s possible that there are much better ways to do this, but that doesn’t mean we know how to get there.
Not zero-sum. Negative sum. The victims of advertising do not benefit in the slightest from the backdoors grown and exploited in their minds. Civilization would be better off if you changed careers to pretty much anything else. Lawyer. Janitor. Prostitute. Crack dealer. These things all create value.
It’s not at all clear to me that crack dealers are any better for society than advertisers are.
I’d go as far as boldly stating that they are in fact (on average) worse.
What madness is this? Has contrarianism gone too far?
Don’t they?
I believe that advertisers do create value. It usually takes at least some prior interest in the ad’s content to follow it, meaning that the advertiser basically just helps a buyer and a seller find each other, which is a good thing.
I cannot recall the last time an advertisement informed me of something I found useful. When I want something, I use search tools and word of mouth to find good sources, or even to find out whether such a product exists at all.
Of course, there are two types of advertisements – those I’ll call “push” ads, and those I’ll call “pull” ads. Push ads appear on your screen, in your mailbox, even on billboards and buses. They mostly just waste your time. Pull ads appear on classified-ad sites and similar places where you seek them out, and there you are shown only ads that are minimally relevant to what you want right then.
Pull ads are useful, and the better the categorization system, the more useful they are. People who create push ads probably delude themselves into believing they provide a useful service, but about all they manage to do is teach children the names and intended connotations of the local major brands, and for that, the kind that appear on billboards and the like are sufficient. Many pull ads consist of bare information, not created by ad professionals. But they are the type that actually help buyer and seller find each other.
Concur. The overwhelming majority of ads (of the “push” type, which is to say the overwhelming majority overall) don’t provide any useful information, and the flood of them is a net loss due to the annoyance and the money wasted on ad campaigns that could have been spent on, say, R&D, or on attracting better employees with wages or benefits.
Now, they do work, at least sometimes–my wife and I use Geico auto insurance because Geico was a name we knew, and we knew it because Geico has memorable, if frequently stupid, ads. But it’s not as if there’s a strong relationship between quality of ad and quality of the product or service. We might well get equivalent coverage from Progressive or whoever. Haven’t looked into it, since our current insurance works fine.
“we’re gonna get rid of the waste of advertising” was a classic line from the Communists.
Everyone also believes that advertising doesn’t affect them.
Maybe they are right, but they are indistinguishable from the people who want to reorder society to match the science fiction books they read.
Oh, I don’t believe we’ll ever get rid of obnoxious advertising. I also don’t think we’ll ever get rid of fraud. Both are things that are profitable for individuals to do but drag down society as a whole.
Edward Scissorhands:
To believe advertising is a net loss to mankind doesn’t seem to me to be very much like wanting to reorder your society to follow some science fiction book. Can you explain that a bit better?
It seems obvious that a large chunk of advertising is a zero-sum game played between companies–Tide and All both want to sell me laundry detergent, they’ll both do a pretty good job getting my clothes clean, and so both buy ads to try to get my business, and the effects are probably like a tug-of-war with two big guys pulling in opposite directions.
It’s also obvious that a lot of important things in the world are currently funded by advertising. Most journalism is ad-funded, for example.
And, finally, it’s pretty clear that online advertising involves a lot of tracking of users and privacy violations in order to get the best price for the ads. This looks like a really bad thing to me, because all that data collected about who has looked at what ends up lying around, subject to being used for all kinds of bad purposes.
@theredsheep:
There actually is reason to think there might be a strong relationship. The signaling story is that expensive ads serve as a performance bond. When Tide or All or Geico spend a zillion dollars on a high-quality superbowl ad they are placing a bet on the future of the company. They can’t make that money BACK unless lots of satisfied customers keep buying their product. A fly-by-night operation trying to trick people into buying something of substandard quality might get one or two sales out of it, but without repeat business and good word of mouth that’s not enough to pay for an ongoing high-profile national ad campaign. So when a firm conspicuously spends a lot of money on some high-profile expense, the real message is “our firm is successful and we plan on being around for a long time” – a message that is incompatible with selling a crappy product that cuts corners.
All else being equal, expensive and frequent ads signal reliability, and good ads (ads that do an especially good job of making you like and remember the product) signal competence, both of which are attributes you should want to see in the companies selling you stuff.
I think there is a sense in which advertising may produce something, although not the obvious sense.
Consider an ad which links a product to a picture of an attractive life – the Marlboro Man was a classic example. Smoke our cigarettes and you can imagine yourself as a strong man riding a horse in wooded mountains, or something similar.
Actually getting you to smoke may be bad for you. But creating a myth and letting you imagine yourself into it is a benefit, rather like the benefit produced by the author of fiction designed to do the same thing, to let the reader or viewer imagine for a few hours that he is James Bond doing clever and courageous things and being loved by beautiful women.
When cigarette advertising stopped (by legislation), cigarette profits went up. The incumbents loved it, because any new producers were SOL.
Tide versus All would be fine if advertising on laundry detergents were made illegal, because they would keep their current customers, and anyone trying to take them away would face an uphill challenge.
Ads are part of branding strategies.
One thing corporations want to know (I see these surveys all the time) is — how many people have heard of their brands? If you want car insurance, you’ll probably go to Geico, because they’re the most memorable brand. So that stuff matters.
Sometimes awareness-based ads matter. My parents got fed up with the local big ISP, figured it was a monopoly, and canceled their internet service entirely. RCN exists in their area, but they’d never heard of it. If they had, they might have switched.
But sometimes… let’s say you’ve started smoking. What sorts of cigarettes are you going to get? You care about quality, but you also care about what sort of person you are — and different sorts of people are associated with different cigarettes. Marlboro is common and vaguely proletarian, American Spirits are for bougies, and blacks smoke Newports.
(I quit a few years ago, but I used to smoke Pall Malls. I don’t know what that says.)
Here’s an interesting presentation – you have political market demographics cross-referenced with brand consumption. “Unconnected & Unregistered” people are more likely than average to drink Rockstar energy drinks, own an XBox, use Axe deodorant, and eat Goya products, and “Super Democrats” are more likely than average to drink San Pellegrino, shop at Trader Joe’s, and own a Mac. Is any of this surprising?
One thing ads can do is build that sort of identity. Target’s ads, from what I’ve seen of them, are a good example of this — Target wants to be the bougie version of Walmart, but bougies don’t like department stores. (See also: “People of Walmart”.) So what do they do? They run TV ads that are a few seconds of things that generally appeal to bougie aesthetics — “look! we’re one of you! we’re not from Oklahoma and we’re not full of icky proles!” — and then they stick their logo on the screen at the end.
Same thing with cigarettes and alcohol. If you want to be a cowboy, smoke Marlboros; if you want to be green, smoke American Spirits. If you want to imagine that you’re a Kentucky outlaw, drink Jim Beam; if you want to imagine that you’re a London socialite, drink Tanqueray.
do not benefit in the slightest …
It’s not at all clear to me that crack dealers are any better for society than advertisers are.
I’d go as far as boldly stating that they are in fact (on average) worse.
The overwhelming majority of ads … don’t provide any useful information, and the flood of them is a net loss
It’s like nobody on this blog has ever watched television.
It is not zero-sum for society if you do it well. The best customer for a business is the returning customer, and the returning customer is generally one who feels a genuine improvement in their life from the product, not one making an impulse purchase they regret. Good marketing is all about identifying and addressing the kind of people who are likely to become returning customers. So ideally you steal from your competitors the customers who are likely to become returning customers for you but not for them, they do the same, and all are better off.
That logic seems flaky.
Businesses aren’t setting out to maximize customer utility.
They don’t just set out to pull the minority of customers who’ll benefit from their product more than from competitors’. They set out to pull all the customers they can, and if they don’t, their lunch is stolen by someone who does.
A car salesman who sits down with you and declares that while his own employer’s cars are good, the optimal car for you is the one sold by their main competitor… that person is a car salesman who doesn’t remain a car salesman for very long.
I would be super curious whether there are any car salespeople here. My intuition is actually the opposite! In truth it all probably depends on the delivery.
FWIW my experience (selling consulting projects, so totally different) is that candidly discussing the reasons a potential customer would not want to buy our service increased the chances of making that sale (not just later sales to the same client). But the devil is in the details, of course, and evidence doesn’t really get more anecdotal than that.
There is a camera and video store in my area that I used to go to, before cell phones with good cameras eliminated my interest in buying cameras. At one point, some years ago, I asked them about buying a device to scan slides. The salesman told me that, although they sold such a device, for my purposes I would be better off sending the slides to a service (not provided by them) that would scan them for me, producing better images at less cost than doing it myself. Which I did.
And I also concluded that I could trust that store in the future.
There are these hypotheticals where no one has any agency, and everything seems random and steeped in conflict.
1. If car salesman X thinks that company Y makes the best cars, why is he working for company Z? Why not just apply for a job (he has experience in the field and is currently employed, making him a top-10% candidate) selling cars for the other company? The reality is that there is no objectively better car for 99% of customers, every car company makes a range of cars, and many dealerships sell multiple brands.
2. Who are these people wandering in off the street and having no ability to figure out on their own what type of car they need? A car salesman is never, in a 20 min conversation, going to know better than you should know what type of car you need unless you have put no thought into it. If you show up at a dealership you are saying “I am already interested in this type of car”, the salesman’s job shouldn’t be to say “no, I know more about your tastes than you do, you should go across town” except in very specific circumstances.
I’ve known rocket salesmen to say, e.g., “You don’t need anyone’s rocket for this application; aircraft with suitable performance can be chartered from an outfit right down the street and will be cheaper. When you do need rockets, we’ll be here.” This sort of thing definitely breeds customer respect and loyalty. Figuring out whether it results in a net increase or decrease of sales is a hard problem, but the strategy isn’t obviously a loser and may be worth trying if you don’t like and/or suck at lying.
W/re car salesmen generally, I think that market is efficient enough that it will be rare for the competition to be obviously and clearly superior, though a small and/or specialized dealer might simply not have anything really suitable in inventory and say so.
Also, it is often the case that a salesperson is limited by time, not by total people walking through the door. So getting rid of a customer who can’t be satisfied in order to move on to the next one can be the best use of time.
That’s the only explanation I’ve ever come up with for the Honda salesman who started out by asking what car I was replacing; when I said it was a Toyota Tercel, he really lost interest, saying that Toyota drivers never liked Hondas.
I would consider my previous job to be a bullshit job, but as you say it was not true bullshit – the main bs factor was simply that there was nowhere near enough work for a full-time job. The work that existed did need to be done, but it didn’t easily cross over with any other role and so couldn’t readily be combined into another job at the company.
The role really should have been a part-time position. It makes me wonder what percentage of the quoted figures can be explained by the inflexibility of so many companies. Part-time work still seems incredibly rare once you move above minimum-wage positions. It would be difficult for me to imagine the managers at my old firm telling themselves they could get by with someone for 16 hours per week, because there is just this mindset that if you don’t have people in an office for 40 hours a week, you’re getting ripped off.
If you need an average of 20 hrs/week of specialized work, but some weeks it turns into 60 hours and always it’s Real Bad if it doesn’t get done promptly, that’s legitimately a full-time job. For an extreme example, think firefighters – we want to pay them a full-time salary to do nothing but goof off(*), but we want them to be sitting around the fire station for the full shift. Most examples won’t be quite that drastic, but less absolute forms of that dynamic are fairly common.
* Or doing equipment maintenance, training, etc, but you’ll get to the point where that is obviously non-productive busywork long before you get to 40 hrs/week.
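To put toy numbers on it, here’s a quick sketch (Python, all figures invented) of why staffing that role at its 20-hour average fails, while a full-time slot only strains in the genuine surge weeks:

```python
# Toy model of a "20 hrs/week on average, occasionally 60" workload.
# All numbers here are invented for illustration.
import random

random.seed(0)

def overrun_rate(capacity_hrs, weeks=10_000):
    """Fraction of weeks in which demand exceeds the staffed capacity."""
    overruns = 0
    for _ in range(weeks):
        # Ordinary weeks: roughly 20 hrs of work, give or take.
        demand = random.gauss(20, 5)
        # About 1 week in 10, a surge adds another 40 hrs.
        if random.random() < 0.1:
            demand += 40
        if demand > capacity_hrs:
            overruns += 1
    return overruns / weeks

print(f"staffed part-time (20 hrs): overrun in ~{overrun_rate(20):.0%} of weeks")
print(f"staffed full-time (40 hrs): overrun in ~{overrun_rate(40):.0%} of weeks")
```

The part-time slot blows its budget in roughly half of the ordinary weeks plus every surge week (about 55% in this run); the full-time slot only overruns in the ~10% of surge weeks, which occasional overtime can absorb. That gap is the argument for paying for the idle hours.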
You can also achieve that by having people do multiple jobs, where one gets priority.
Sounds great in theory, hard to implement in practice. Obviously firefighters know to drop everything and go to the fire, but it’s much harder for my co-worker who has to decide between applying cash and contacting clients about past-due balances.
Like, I might prefer applying cash to contacting clients, because cash doesn’t yell at me on the phone. How do you really “prove” that’s the wrong priority, though? You’d basically have to micro-manage me. You can’t really fire me, because it’s difficult to fire people in a professional environment.
Also, from the employer perspective, if you see me not hitting a number, you don’t know if it is because it is a reasonable distribution of time between competing priorities, or if I am just slacking off. Sure, I might come to you and say I have too little time to accomplish everything, but that would require a manager to make a decision about trade-offs, which a lot of managers are NOT comfortable doing.
I think I’ve relayed the story before of OldJob making a priority list prior to a fiscal year end. Something like 75% of the items had “high priority.” That was obviously ridiculous. So the seniors went back and turned half the high-priority items into SUPER-high-priority items.
There’s definitely a reasonable stance where you eliminate these trade-offs and overpay someone to basically act as surge capacity.
As ADBG notes, that’s a pretty tall order. You need the low-priority job to use the same skillset as the high-priority one, or the process will be both inefficient and perceived as bullshit. You need the low-priority job to be one where progress is not lost when interruptions occur. You need the low-priority job to be one which doesn’t need to be completed on schedule. More importantly, you need the low-priority job to be one whose stakeholders don’t mind being told “your job is low priority and the people tasked with doing it might blow you off”. And you need everyone involved to agree on where the priority breakpoint is, in cases that are usually less dramatic than there being a literal fire that needs putting out.
At that point, you’re probably several standard deviations removed from baseline human nature, and better off letting your people play Settlers of Catan in the breakroom or goof off on the internet until the fire alarm rings.
If you’re a military organization, you can get away with e.g. making the sailors swab the deck whenever they aren’t needed for damage control, but you can’t make them see it as anything but bullshit. You’re just making them see what their position is on the hill down which shit flows, and hoping the esprit de corps and whatnot will carry them through. That doesn’t work so well when the mission is several orders of magnitude less sacred than Defending the Homeland from the Forces of Darkness.
There’s good reason for this. Higher-level workers create a lot of value, and it’s hard-to-measure value. Therefore they are treated entirely differently from low-level, easy-to-observe workers. High-value workers are treated better and are paid higher wages than necessary in order to keep morale high. Companies will do stuff like not fire people in recessions and continue to hand out raises, because even though they COULD get cheaper labor, ruining worker morale is a great way to MASSIVELY reduce profits.
Low-level, easy-to-monitor? Treated like crap, because they can be treated like crap.
The flip-side of the 40-hours-a-week problem is that the company cannot get a reputation for being easy on slackers, because then you get all the slackers. So we’re still going to make you show up on time, for 40 hours a week… but don’t worry, we at least have a ping pong table, and you can spend your time posting on SSC.
Some companies are more flexible on this, but when they are, it’s because it’s a shitty place to work, or they pay shitty wages, or you are REALLY high value. Like, if you regularly have your workers pull 60 or 70 hours a week, you might let them leave early sometimes or work from home “as long as you have your work done.” My last company eventually forced work-from-home on all the department heads, but only because it pays 10-20% less for all jobs and needs something to keep it attractive.
My perspective is certainly skewed by working in academia, but putting so much weight on time spent in the office as a measure of productivity or effort in high-knowledge/skill salaried positions seems misguided or even kind of perverse, to me. For a lot of these jobs it seems like they could have the attitude, “the company needs products {X} delivered or service Y maintained at some level, and values that at $Z per year” and ignore how much/when the employee is on-site as long as their tasks are completed and they attend necessary meetings.
If someone is content with their salary and can do the work they were hired to do in 20 hours a week, you’re not going to get more value out of them by making them sit around for another 20. If you try to add responsibilities to make them work 40 hours a week, or try to reduce their pay to compensate, you’ll just drive them to another company. It gives no incentive to work efficiently and breeds resentment.
It also breeds resentment if one worker only spends 20 hours at the office and another spends 40 there.
Yes, spending 20 hours exercising slack in the office is usually less conspicuous an inequality than spending 20 hours at home while your colleagues are at their desks. Plus it ensures availability in the event of a crisis. So the happy medium is often going to be that you have to be in the office 9-5ish and you have to get the work done, but so long as there isn’t e.g. visible porn on the monitor nobody is going to track what you are doing hour to hour.
I’m currently experiencing this at my current job. I’m an English teacher at a private school in China; I’m in the classroom less than ten hours a week (and not at all on Tuesdays), but must then stay in the office to make 40 hours. Why? Because if parents walk by, then the administration needs to be able to show off the white monkey. I think if my computer weren’t a near-useless piece of crap I’d be distinctly less resentful, but even so…I’d really like to go into the city on Tuesdays.
I’m going to start looking for a replacement job at the end of this term. That will mean breaking contract, but this is TEFL in China; it’s a seller’s labor market, and contracts are guidelines, not vows. (Also, my school fired a coworker the day before the new semester began, thus breaking his contract, so if that’s how they want to play the game…)
Alternatively, I may just say “give me a raise and an end to deskwarming, or I’m walking.” They need a white monkey and turnover is high; the school’s not on a metro stop and it’s not near the expat ghetto, so they have trouble getting teachers to stay. And it’s not all bad; they don’t take academics very seriously and I exist as a bauble to show off to parents, so very little is actually expected of me. I’ve been biding my time working on my Mandarin and doing research in historical linguistics for grad school.
The New York public school system used to order suspended teachers to report to an office to do nothing while waiting for their administrative reviews to finish. Some of them were there for years.
http://www.cracked.com/personal-experiences-2564-what-if-your-job-paid-you-to-do-absolutely-nothing.html
@kaakitwitaasota
That’s a bit weird. From what I’ve read, TEFL teachers in Japan typically work at several schools.
Perhaps you can suggest having the school take some nice pictures during your classes and then frame those and put them up in a prominent location. That might serve a similar purpose to having you there and perhaps work even better at appealing to parents.
@Aapje: In China you’re only allowed to have one job at a time (in practice almost everybody has side students), and there are massive information asymmetries involved in getting a job before you land (don’t understand the system, don’t understand how much you’re really worth, employer will lie by omission to you to keep you from ragequitting and going to Korea…). In fact that’s the real reason why so many people bail on their first contract; you can usually do much better by hitting the pavement and looking for a new employer. (Particularly since they won’t have to sponsor your initial visa, which is an extremely lengthy, complicated and expensive process).
The school has taken more pictures and videos of me than I care to count by now. They’re not going to back down unless I give them an ultimatum; it’s a face thing.
> I doubt there are many true bullshit jobs, particularly at a time when companies have been trying to streamline…
I was puzzled by that phenomenon as well. It seems to defy basic economics.
However, I think the apparent contradiction can be resolved by thinking of individual people, rather than companies, as the economic actors. A company as such has no agency.
Wrote a blog post about it here: http://250bpm.com/blog:44
Yes/no. I invented a term for this: incentive decay. Shareholders clearly have a common interest, and they can probably incentivize a CEO to follow it; the CEO incentivizes the CIO, and so on, but as it goes down the ladder the signal gets noisy and the feedback loops get longer.
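A toy illustration of the decay (Python, with a made-up per-layer fidelity): if each layer of management relays the true priority correctly with probability 0.8 and otherwise substitutes its own, alignment falls off geometrically with depth.

```python
# Toy model of incentive decay: the probability that the shop floor is
# still optimizing the shareholders' actual priority after N noisy relays.
# The per-layer fidelity of 0.8 is an invented number for illustration.
import random

random.seed(1)

def alignment(layers, fidelity=0.8, trials=10_000):
    """Fraction of trials in which every layer relayed the signal intact."""
    kept = 0
    for _ in range(trials):
        kept += all(random.random() < fidelity for _ in range(layers))
    return kept / trials

for layers in (1, 3, 6, 10):
    print(f"{layers:2d} layers: ~{alignment(layers):.0%} still aligned")
```

With ten layers, barely one action in ten still traces back to what the shareholders wanted, even though every single layer is 80% faithful. That’s the noisy signal and the long feedback loop in one picture.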
As layers of management are added thermoclines of truth develop.
http://brucefwebster.com/2008/04/15/the-wetware-crisis-the-themocline-of-truth/
You are describing why the idea that bigger companies always do better (and hence that markets automatically generate monopolies unless prevented from doing so) is wrong. There are technical economies of scale but organizational diseconomies of scale: increasing inefficiency as you increase the number of layers between the CEO and the factory floor.
I don’t for a moment trust people’s impressions of “meaningful” to line up with either “useful to society” or “useful to the company.” I’m sure we’ve discussed parenting here before: outcomes for parental interventions are pretty doubtful, and it doesn’t matter how much “meaning” you feel when you read to little Jimmy every night before bed.
I also have co-workers who have felt their jobs are stupid, but they are confusing the “cog in a machine” feeling with the “no meaning” feeling. Believe me, there were times in the last week when I wanted to blow my brains out after manually applying over 100 checks, reviewing hundreds of pages of bullshit sales tax charges, and sending out hundreds of emails because some stupid city in Nebraska changed its sales tax rate AGAIN.
But that doesn’t mean the job doesn’t have meaning. Someone has to do this bullshit. Not resolving this bullshit means the books are going to be off and someone in upper management is going to have all this magical missing money with no idea what it is. And someone is paying me to resolve this bullshit, so I’m going to do this and maybe eat too much chocolate.
Big companies throw up a lot of red tape and require a lot of forms, but these processes do not typically come from nowhere. Like, at a friend’s food-processing plant, they put in new inventory controls. So now line-workers need to fill out a request to unload pallets of sugar, or whatever. The workers hate this because they don’t understand the purpose. But the purpose is so the company can freakin’ track costs: it’s not bullshit work. They had recently fired a $500,000/year manager at another local plant, because he “lost” something like $5 million worth of sugar.
My FIL works to put in SAP at smaller companies, usually in conjunction with other consultants that rehab the company. These companies have TERRIBLE cost-controls and TERRIBLE processes. Like, one manufacturing firm had 5 different lines of widgets, but they tracked NO costs. They could tell you how much material they started the week with, how much they bought, and how much they ended with. Waste? No idea. Spoilage? Nope. Defects? What’s that?
How much does it cost to make Widget A? Uhhhh… How much overtime do we need to double this run of Widget B? How much maintenance work needs to be allocated to Widget C if we decide to skip on it the next 6 months so we can clear another batch?
Basic, basic, basic stuff. Yet, it’s all despised, and viewed as bullshit, because the workers who have gotten used to lax controls don’t want to suffer through REAL controls.
Then there are just the entitled princesses who don’t want to do their job or don’t even understand what their function is. I had a co-worker tell me this week that we shouldn’t do any unapplied cash and we shouldn’t collect money from clients. You dumbass, we are accounts receivable; what the hell do you think we do?
I’d agree that the emotions are likely to be disconnected from the facts. Actually I think I mentioned this the other day; it takes a level of abstract thinking to see the ultimate impact of a lot of modern-day work.
I don’t read to my children solely or primarily because I believe it may improve IQ by a few points after so many years, I do so because they like it and I like them.
“Just because you’re necessary doesn’t mean you’re important.”
I get paid big bucks at a major tech firm. The job is soul-sucking and it’s a hostile work environment if you aren’t all-in on fashionable Bay-area politics. But I am reasonably certain the work I do is valuable.
I also volunteer in EMS. The job is mostly waiting around. When responding for patients, most times we’re really a high-cost taxi for the neurotic. Most of my co-workers dream of making double the minimum wage. It’s a nearly worthless job. But it’s highly rewarding on a personal level.
People are funny that way.
Well yes, often people don’t understand why they do what they do. But even more the issue is when people have no control over their jobs. Just read the comments on this thread — the jobs people hate are those where they are a cog in the machine and can’t change anything themselves. I think most often when people have “non-meaningful jobs,” it’s not that they aren’t contributing to society or even to their firm, it’s that they have no control over their jobs. It is even worse when the supervisor tells you to do stupid things which actively detract from your value. But it wouldn’t be so bad if you were making the decision as to which stupid things to do.
I’ve wondered about the whole meaningful-jobs thing for a while now. At a previous job, the answer hit me over happy hour drinks with some coworkers from another department: most of the people here do not understand how the company works.
In one sense, that’s understandable. There’s a ton of moving parts and most people have bigger things to worry about in their lives than what people they never speak to are doing. But in that moment, so much corporate dysfunction made sense to me. To put it bluntly: most of the people at the company weren’t doing whatever it was they were doing, because it benefitted the company. They were doing it because their bosses told them to. They had no mental model of how what they did fit into the bigger picture.
If this generalizes, it suggests that a lot of workplace anomie could be resolved just by helping people to understand how what they do is actually important.
In general, people don’t know enough economics to say “solving this coordination problem is difficult, and it took much organizational technology developed throughout history,” nor “things that seem stupid often have complex and good explanations.” A lot of jobs are the equivalent of a function that fixes a bug, but no one ever documented how it works, so every new person who looks at the function goes “WTF?” and doesn’t understand. But the function still needs to run.
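A hypothetical example of the kind of function meant here (Python, everything invented for illustration): it looks like pointless ceremony until you know the failure it prevents.

```python
# Hypothetical: this looks like needless busywork around a one-line file
# write, until you learn that a crash mid-write once corrupted the output.
import os
import tempfile

def save_report(path: str, data: bytes) -> None:
    # "WTF, why not just open(path, 'wb').write(data)?" Because if the
    # process dies halfway through, the old report is destroyed and the
    # new one is garbage. So: write to a temp file in the same directory,
    # force it to disk, then atomically rename it over the target.
    dir_name = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes actually hit disk
        os.replace(tmp_path, path)  # atomic rename over the old report
    except BaseException:
        os.remove(tmp_path)  # clean up the half-written temp file
        raise
```

Strip out the “pointless” steps and nothing breaks for months; the job, like the function, only reveals its meaning in the failure case it was built for.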