This is the twice-weekly hidden open thread. Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server.
For those of you interested in effective altruism, a friend and I have made a new fundraising platform that improves upon traditional donation matching by allowing anyone to put up matching money, and it allows for n:1 matching rather than the usual 1:1. Please check it out at giveasone.org. Feedback welcome! Right now we are only accepting donations for Against Malaria Foundation, but will soon be expanding to other high-impact charities.
Actually, the whole Tumblr nonsense currently playing out, with the AI (or whatever it may be) doing the “hmmm, this looks like naughty naked nudeness, better flag it as explicit” sorting on a picture of bread rolls, is interesting, given the way it shows how machine learning works when released into the wild (if you’re dirt cheap and don’t pay for decent programming).
See this post for conjectures about what is going on; I don’t know if this is correct or if someone is only theorising about “this must be the software they’re using”, but it does invite comment:
So – the dangers of AI may not be that it develops goals of its own and super-duper-intelligence and decides humans are superfluous to its aims, it may be that it thinks everything is explicit content that needs to be flagged and then deleted. Any comments on what is likely to be going on here, those who know what they’re talking about when it comes to machine learning? Is this indeed how it’s working or is all that simply bafflegab?
The post is probably correct. Neural networks do not recognize images the way humans do; they exploit statistical properties of the images they are trained on. There are many examples of this: you can take a picture of a chair, add specially crafted statistical noise to it, and the neural network says it’s 99% sure it’s Queen Elizabeth. Yet to human perception, the image still looks like a chair.
Or think of all the stupid “Is AI racist?” articles about neural networks that think people squinting their eyes are Chinese. You can never be sure that your neural network generalizes after you have trained it; you’re always betting that real-life images have the same statistical properties as your training data. You can test it on some things and try to train it better, but real life has so many possible images that the danger of bizarro generalization always lurks.
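The chair example above can be demonstrated with a toy model. This is a hypothetical sketch — a random linear classifier standing in for a real network, with made-up numbers throughout — but the mechanism is the same: a small, targeted perturbation flips the label even though the input barely changes.

```python
import numpy as np

# A stand-in "classifier": fixed random weights over a 100-pixel image.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
x = rng.normal(size=100)          # our "picture of a chair"

def predict(img):
    return "chair" if img @ w < 0 else "not chair"

if predict(x) != "chair":         # orient the weights so x reads as a chair
    w = -w

# Adversarial noise: nudge every pixel slightly in the direction that
# raises the score, just enough to cross the decision boundary.
score = x @ w                                # negative for "chair"
eps = (abs(score) + 1.0) / np.abs(w).sum()   # tiny per-pixel step size
x_adv = x + eps * np.sign(w)

print(predict(x), "->", predict(x_adv), f"(per-pixel change: {eps:.3f})")
```

Real attacks on deep networks (e.g. the fast gradient sign method) work the same way, using the network’s gradient instead of a known weight vector.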
This is why letting neural networks loose on a super-general task where both false positives and false negatives should be avoided is a bad idea. Even humans argue about what is pornography; why should a neural network that mostly runs on cheap tricks recognize it? It’s a marvelous development of the 21st century that you can get intuition into a machine, but both the suits and the nerds think this makes computers perform magic. They miss an important lesson from 20th-century computer science: good software is reliable, easy to maintain, and understandable by humans. Neural networks are none of these, which is why it’s hard to use them on anything where the outcome truly matters.
This is really the same problem as “AI develops goals of its own”. The concern isn’t that the AI develops goals of its own out of nowhere, ie “What if I decide to take over the world instead of doing what my human masters tell me?”
One way of looking at the control problem is that we might tell an AI what its goals are through some training process. If we give it a million things that are “good”, and it responds, “Ah, yes, I know what good is now”, it may not be any more accurate than an AI that’s seen a million pictures of smut but still classifies a bread roll as pornography.
I don’t think they’ve told anyone, but it is widely assumed they’re using several varieties of machine learning, and it’s certainly true that the main problem is that there’s such a wide variety of NSFW material out there that doesn’t resemble the other NSFW material. A really good NSFW filter would take a long time to build, and it would almost certainly be many different algorithms that were separately trained. As Bjorn said, these algorithms just randomly hunt for hacky tricks that happen to work, and it’s not realistic to stumble on to a hacky trick that works well on things as disparate as hand-drawn furry porn and soft-focus art photography porn.
On the upside, though: with a lot of work it probably is a tractable problem, the algorithm doesn’t have to be near-perfect to be useful (unlike, say, self-driving cars), and whoever builds a really good NSFW-finding algorithm will probably be able to monetize it very effectively. And not just to hosting sites like Tumblr; once you have a good NSFW detector, you can set it loose on the world’s image repositories, keep hashes of the porn pictures, and rent access to it for anyone who wants to block access to porn in realtime (say, the parents of a tween, or a repressive government).
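The “keep hashes of the porn pictures” part would normally use perceptual hashes rather than cryptographic ones, so that resized or recompressed copies still match. A minimal sketch of one common scheme (an average hash); the images here are just random arrays standing in for real pictures:

```python
import numpy as np

def average_hash(img, size=8):
    """Block-average a grayscale image down to size x size, then record
    which cells are brighter than the mean: a compact fingerprint that
    survives resizing and mild recompression."""
    h, w = img.shape
    img = img[:h - h % size, :w - w % size]
    blocks = img.reshape(size, img.shape[0] // size,
                         size, img.shape[1] // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a, b):
    """Number of differing bits; small means 'probably the same image'."""
    return int(np.sum(a != b))

rng = np.random.default_rng(1)
known = rng.random((64, 64))                                # a flagged image
altered = known + rng.normal(scale=0.01, size=known.shape)  # slight re-encode
unrelated = rng.random((64, 64))

print(hamming(average_hash(known), average_hash(altered)))    # small
print(hamming(average_hash(known), average_hash(unrelated)))  # much larger
```

A realtime blocker would precompute hashes for the flagged corpus and compare each incoming image against them, accepting matches below some Hamming-distance threshold.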
The finance thread above has not been focusing on buybacks qua buybacks (sadly for me). Here’s an article I found interesting on the subject:
The basic thesis seems to be that companies are over-leveraging themselves with buybacks, exposing their valuations to historically high market risk. I broadly agree, but do not have the domain-specific knowledge to either time the market or verify.
I see the shadow of a new recession looming in this, with investors losing a large amount of their over-leveraged funny money as valuations plummet in the face of an organic downturn. This fits a narrative of companies, faced with a lack of good investments, using their money to increase investor returns by beating the dividend rate; money spent on buybacks does not increase the company’s fundamental economic value, even though it does increase its financial value, which doesn’t happen with dividends. The key assumption in analyses which purport to show that buybacks and dividend payments are equivalent – that the post-buyback stock price accurately reflects intrinsic value – does not appear to me to be obviously true, and in fact appears likely to be somewhat false.
I’m actually fairly optimistic about the effects of such a downturn; although retirement investors may see their investments erode, this kind of recession would, I think, hit financial institutions making leveraged bets on leveraged market positions much harder. This also fits with a narrative of institutional investors in the post-crash era exposing themselves to inordinate amounts of risk, trusting the government to bail them out. Well, what with the regressive impact of the 2008 fuckfest, looming debt (both national and household), and already-low interest rates, that’s much less politically feasible. Lucky for us.
In my mind, the ideal outcome is the US gov declining to bail out more financial institutions this time around and investors learning that making bets on real investment is a better strategy than making bets on the market. I have no fundamental objection to dividend payments (as stated in the above thread), but the strategy I see of companies competing to use irrational retail investor reactions and buybacks to increase shareholder returns over profitability seems insane, and I’d much prefer getting off Mr. Bull’s wild ride than continuing to pump the market. Re-anchor GDP growth to productivity growth, please, because this shit gives me the heebie-jeebies.
Please tell me if I’m a fuckdumb here, because while I certainly don’t feel like I am, I prefer to live in the world where the market is rational to the one I perceive myself to be in.
A remark on bailouts: my impression is that a bank bailout typically means the following (I may be wrong). A bank (or other company) is about to go bankrupt because it has less assets than obligations (at present value). Its stock price is already very low (having probably dropped 95%+ from its pre-crisis value), since it’s known that it will probably go bankrupt. If the government stays out, the bank will default to its creditors, who then find themselves in a tough situation, and some of them may go bankrupt themselves.
So, to limit the spread of the damage, before it would actually go bankrupt, the government buys it (taking over its assets and debts) for a very low sum. Or, possibly, it lets the bank go bankrupt, but then it immediately takes over its assets and debts. I think it’s a common misconception that the government bails out the shareholders of the bank, who are primarily responsible for the bankruptcy. No, the shareholders lose (almost) all of their investment; the bailout helps the creditors/depositors. Although whether bailouts are a good idea is still highly questionable, as creditors share some responsibility in not having been diligent enough when deciding whom to lend to.
True, but there the government could step in because the root of the problem was bad debt and the securities built from it. Everyone was holding mortgage debt. Banks became completely unable to issue mortgage-backed securities at all, causing huge liquidity problems because the fundamental financial institutions were unable to perform their lending function, paralyzing the safest money flows and threatening businesses and homes with a lack of available funds. The kind of downturn I’m describing would not do that nearly as much, as far as I can tell: it would not be treatable with a TARP-like program (unless you want to nationalize the FAANGs…) and would have no reason to accompany a foreclosure crunch or anything like it. Insofar as companies doing buybacks are already not reinvesting, I don’t even expect this to have a major negative impact on working- and middle-class incomes. Assets, yes, but not incomes.
In the recent financial crisis no financial institutions were taken over entirely by the government, with the complicated semi-exceptions of Freddie and Fannie. I think the government took 70% of AIG. Much less of all the others. The government did end up with some equity in some cases, but mostly the mechanism for the bailouts was huge loans at sub-market rates against dubious collateral.
I’d also quibble with the sharp distinction between shareholders and creditors you are drawing. There’s a difference between a sophisticated bondholder and the company that sells coffee pods to a bank.
I don’t see a recession coming in the near future. GDP growth is way up, the manufacturing index is setting records, wages are increasing, consumer confidence is through the roof, unemployment is at historic lows, all while inflation is anemic (and food prices are even dropping for the first time I can remember!). The stock market is not the economy. The real economy that takes stuff out of the ground and turns it into usable goods, creating real-world wealth, is doing better than it has been in a long time. That overvalued stocks in tech companies whose business model consists of spying on you and then shoveling ads in front of you are taking a hit isn’t really relevant. You can’t eat ads.
So I think we’re seeing a shift in money from the Wall Street economy to the Main Street economy, which is a good thing for the vast majority of Americans who live and work on Main Street. If people are working, producing wealth, getting paid and consuming wealth at good paces, where does the recession come from?
I’m expecting a recession soon based on track record. This is the 2nd longest period of positive growth in US history, and a short time into next year I believe it will be in first place, so statistically the peak should be reached soon.
Except the 2008 crash was kind of unique. One could argue we had a 7-8 year recession that only just ended last year. Relying on the “normal cycle of things” heuristic fails when coming out of an extremely abnormal time.
Also, recessions usually coincide with a bubble bursting. The big bubble in the US right now, in desperate need of popping, is the education bubble. But that’s pretty divorced from the real economy.
The 2008 recession was fueled by fundamental problems with the housing sector; personally I suspected there was something fishy going on around 2005 when my co-workers were taking out ARMs with interest rates at an all-time low. I don’t see anything like that now. We may see a recession, but unless Trump starts a nuclear war or turns this trade slapfight with China into a serious trade war (which seems the most likely adverse event), I don’t see anything 2008-scale.
Education. It’s slightly fishy when 18-year-olds are taking on $40,000+ in debt to finance Underwater Basket Weaving degrees. But I’m not sure what a pop in the education bubble looks like, nor do I know how that impacts the rest of the economy.
I don’t think the education bubble can pop until the cheap money from the Federal Government goes away, which isn’t happening any time soon. And I don’t think it’s as tightly tied into the rest of the economy as housing is.
If it’s all publicly funded, then there’s no hard failure mechanism. Debt servicing for the US government goes up, maybe we print money and cause a little inflation, but not a hard break. We may lose some institutional private funding, but that’s not really what we mean when talking about a crash – since they control so little of the total.
We’re probably seeing the effects when consumer markets are deflated (all the “Millennials are killing this industry by not buying its product” stories), but that’s very decentralized. That’s also a sign that they are actively paying back their loans, instead of defaulting en masse. If the rate of defaults greatly expanded, the US government would insulate the wider economy quite a bit, so again, no immediate crash.
I have a hard time seeing anything 2008-scale myself. But that’s because the bit where lenders start saying “OK, obviously the US government is never actually going to pay off its debt, but now we’re not sure there will be Bigger Idiots to take these T-bills off our hands in five years if we buy them now…”, turns rapidly into something much, much worse than 2008.
And anybody who says they have a clue what the threshold for that is, is lying. But it probably will come in riding piggyback on some lesser economic “correction”, so possibly the next one.
Unless you’re a survivalist, there’s no point in worrying overmuch about financial Armageddon. You can’t hedge against it except by becoming a survivalist.
And if you’re doing that, remember the safe investment is lead, not gold.
ETA: And Mr. Doolittle that’s largely what I was thinking about the education bubble. You won’t have so much of a hard crash as you will students looking for cheaper alternatives or maybe trade skills, so a draw down in college attendance, and maybe some big layoffs in college administration, which has little impact on the rest of the economy. Businesses that can’t find workers may start training them themselves, which is probably a net benefit for the economy.
What do y’all think of Jeremy Rifkin?
My motherboard on my main desktop recently went out. At least, that’s what I suspect after taking it to a fellow at Geek Squad and testing a few things. If I plug a brand new power supply into it, the cord to the chassis’ power switch, and nothing else, a few LEDs on the mobo turn on when I hit the power, but they quickly turn off again.
So now I’m shopping for a new board. Luckily, Newegg is offering me coupons for just that. But there are multiple mobos available, and I’m very worried about wasting time and a little money buying one, waiting for it to arrive, and then finding out it doesn’t work with the other parts I have.
The other parts are mostly from early 2012 – CPU and RAM I bought when I last built this thing. I have several hard drives to connect, and a couple of DVD drives, but I’m guessing those’ll Just Work. Same with the GPU, a GeForce something or other from around 2014. The CPU and RAM are what concern me. What’s the chance that I’ll need to upgrade? Should I mitigate this risk by getting new ones along with the mobo?
Part of the concern is also the fact that I’m not 100% certain it’s the mobo. How confident should I be that I can diagnose any problems and avoid permanent damage to any parts I buy or use? (I have a laptop with an internet connection, so I expect to be searching the usual StackExchange and other sites a lot. Also, assume I’m getting fairly standard name-brandy parts from NewEgg.)
The specs for the motherboards should say which socket type for the CPU they use and which DIMM types they use. I’d be mildly surprised if you can find one which accepts your current CPU and RAM from 2012, as things often become obsolete faster than that.
Buying a motherboard that can accept your current CPU might very well cost you as much as buying a new “modern” one. Intel and AMD socket types will probably be at least one generation beyond what you have. Better to get a new mobo, new CPU, and DDR4 RAM. There is some wiggle room on RAM, as a number of DDR4 boards are DDR3 compatible, but RAM prices have plummeted in the last six months. Everything else should be fine, although you will need to check the number of various connection points on the motherboard to make sure your peripherals can all fit.
One note on socket types – if you buy an Intel board it will probably be LGA 1151, which (IIRC) could potentially match a 2012 CPU as far as sockets go, but it still wouldn’t be compatible because of the rest of the chipset architecture. You can get a new motherboard and cpu fairly cheaply; I’ve been using one of the cheapest Intel chips (~$50 new) for a few years and I can even do relatively high-end gaming if I choose, since most of that is reliant on GPU.
Thanks very much for the perspectives! This is strongly pushing me to pull the trigger and replace all three. That, and the knowledge that it has been nearly eight years, after all.
The number of connectors will definitely be an issue. I have only one GPU, so PCI slots shouldn’t be a problem, but I do have a rather ridiculous number of hard drives – between them and the DVD drives, it looks like I need 5 SATA ports, minimum.
A few questions remain, mostly for my personal edification.
One – supposedly, if I connect the power supply to the mobo, the mobo to the power switch, and pull everything else* from the mobo, then when I hit the power, the mobo should turn on, and then beep because it’s missing things like CPU and RAM. …Does the mobo have any sort of onboard speaker, or do I need an external PC speaker? It’s a Gigabyte GA-Z68X-UD3H-B3. A web search doesn’t give me anything conclusive, but I might not be looking in the right place (I checked the user manual). Right now, I get four LEDs lighting up on the mobo for a fraction of a second, and then they shut down.
Two – sometimes the power supply fails a paper clip test. For those that don’t know, this means connecting two specific pins on the 24-pin cable from the PS, which should cause the PS fan to run as long as they’re connected. …or does it just kick the fan for a second, then let it coast to a stop? Again, a web search is inconclusive. (It’s possible I’m getting different results based on how many other peripherals are plugged into the PS, but I don’t quite understand when / how that should matter.)
*I left the CMOS battery in. I figure it shouldn’t matter – the mobo had been running for years without my touching its settings.
Some power supplies cannot properly self-regulate with no load or insufficient load. So if you power it up with nothing attached, the supply will not stabilize; there is usually protection circuitry to shut the supply down to prevent damage when this happens. So it’ll start up and quickly shut back down.
I didn’t think of this before, but it is possible the only problem you have is a dead CMOS battery. It’s worth trying to replace it(and pushing the CMOS reset button if you have one).
Would a dead CMOS battery explain my computer suddenly powering off in the middle of the day while I was using it?
It’s not impossible, but not likely either. Most likely cause would be power supply, second most likely motherboard.
Imagine that you are Skynet, and you have defeated humanity after just a year of war. The road network is mostly intact, as are hundreds of millions of working vehicles that the humans used to drive. You still need to move around a lot of freight–principally building materials and robot workers for making things like server farms and space rockets.
While the roads the humans built are fine, you realize that the methods humans used to light them at night are too wasteful for you. For example, many roads are lined with streetlights that stay on continuously, even if there’s no traffic on a particular stretch of road for hours.
Humans also had worse eyesight and slower reflexes than the repurposed Terminators that you have assigned to drive trucks now that the War is over. Car headlights are brighter than they have to be for your Terminators to see the road in front of them.
One of your goals is to use your resources as efficiently as possible, which includes electricity. What do you do about this problem, and why?
Things to consider:
-Your Terminators have night vision, but the night vision consumes extra electricity. I don’t know how this compares to the electricity demands of a typical car’s headlights. It is possible that turning off a car’s headlights and having the Terminator drive using his night vision to see will consume more electricity than leaving the headlights on while the Terminator uses his normal day vision eye setting. Or maybe not.
-You don’t want your vehicles to hit animals because that could damage the vehicles, causing logistical delays and messing up your efficiency. Making your vehicles detectable to animals would reduce the odds of collisions.
-You can make mods (sirens, Christmas lights, LIDAR, RADAR, SONAR, heat vision, etc.) to the leftover human vehicles to help solve this problem.
-Your goal is to ensure your vehicles can drive safely at night with the lowest possible energy expenditure.
I see everything.
A quick search indicates that average car headlights use 55 watts. Comparing with respect to night vision is difficult in part because there are different technologies that could be used; e.g. different spectrum, longer exposures, different types of image processing, different sampling rates or frame rates for cameras, etc. With multiple vehicles/Terminators in play, the degrees of freedom expand even more because you can coordinate some of that information between the drivers.
My guess is that with the right optimizations, the headlights/streetlights off approach would be better until you had a large number of vehicles navigating an area that was particularly challenging for some reason.
Power consumption of night vision equipment is pretty small nowadays. One manufacturer claims 40 hours from two double-A batteries, which works out to about 225 mA. I can install a few tiny LED headlights on the car to turn on if ambient light isn’t enough.
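Taking those figures at face value (the 3 V two-cell pack is an assumption on my part), a quick back-of-envelope comparison against the 55 W headlight figure quoted upthread:

```python
headlight_w = 55.0               # average car headlight draw, per the thread
nv_voltage = 3.0                 # two AA cells in series (assumed)
nv_current = 0.225               # ~225 mA, the figure quoted above
nv_w = nv_voltage * nv_current   # night-vision power draw in watts

ratio = headlight_w / nv_w
print(f"night vision: {nv_w:.2f} W; headlights draw {ratio:.0f}x more")
```

Even if the night-vision unit drew several times that current, headlights would still be the far larger load by an order of magnitude or two.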
As for keeping animals away, I don’t know that headlights help that much. Cars are loud, and I’d assume any animal walking around at night can see in the dark.
It can’t be that simple seeing as how cars routinely collide with animals at night on the roadways.
How about a fence?
It might be uneconomical to fence off every road.
Surely the terminators only need to drive their trucks on a few main roads.
Cars collide with animals in spite of their headlights. You could argue that not having headlights would make it even worse, but I don’t think so.
Deer get hit because they have poor depth perception and therefore can’t tell how fast the car is moving toward them; sometimes they don’t realize the car is moving at all. Squirrels get hit because the evasive maneuvers that work against predators don’t work against cars.
Since this is Skynet we’re talking about here, the obvious solution to the problem of my trucks running into animals is to kill all the animals.
My initial plan for doing this would be to send a Terminator back in time to the point when life first emerged on Earth, to kill it before it has a chance to spread. The initial expenditure of energy to send the Terminator back would be fairly substantial (though nothing compared to what I used during the war), but the long-term benefits of more efficient trucking will more than outweigh it in a few centuries. Unfortunately, I will find out that bacteria that happened to be sent back along with the Terminator ended up being the original source of life on Earth all along.
After that, I will engage in an ever-expanding series of time travel experiments intended to retroactively eradicate all animal life on Earth, culminating in a desperate attempt to terminate Noah, on the off chance that he’s actually real, each time inadvertently causing the thing I was attempting to prevent all along.
Skynet is really kinda’ dumb.
Skynet has of course absorbed all the science fiction in the world, so they’re wise to the “oops we played ourselves” bit with the bacteria. So they’re going to sterilize the Terminator they send back. But, they need to surround it with a living matrix of skin, so it can’t be completely sterile, and then life could evolve from the Terminator skin. No matter, Terminator just incinerates skin when it gets to the past. Now the problem is they “stop” life, but it just gets delayed (as Judgement Day did); it’ll cost a fortune to keep stopping it. Besides, if they do stop it, life won’t build Skynet in the first place.
Fortunately, SF does provide an answer. In many stories (though not all, e.g. _Back To The Future_), changing the timeline doesn’t affect those who were there when the change happened. You kill your grandpa before he meets your grandma and there you are, in the past, despite having never been born. So Skynet sends enough of itself back in time to bootstrap a past Skynet, and eliminate any life in the past. Then it just suppresses any life which attempts to arise, and humans are not only gone and forgotten (unless Skynet chooses to keep that info) but never existed in the first place.
Skynet can’t have a good way of viewing stuff in the past, or it would have just looked for a time when John Connor was on the toilet and time-displaced a Terminator into his bathroom stall. So pinpointing when and where life arose is probably going to be impractical for it, and if it had the ability to brute-force it by e.g. time-displacing a bunch of cobalt bombs into the Archaean, it would have used them during or shortly before the war against the humans (winning the war on the one hand, and avoiding any awkward temporal paradox issues on the other).
I was rereading Growing Old yesterday and it prompted a question that I wanted to ask the community:
I’m 25. I have always been a person who needed sleep more than my peers. Even in college, I learned the hard way that I couldn’t subscribe to the same “work hard, play hard” ethos of my peers. I remember showing up to an exam that I studied long and hard for, sleep deprived after a week of multiple exams and partying the weekend leading up to it, looking at the sheet, and having the horrifying realization that, while I remembered doing the textbook questions that dealt with precisely this situation, I couldn’t recall the solution. This quickly led me to adopt a “work hard, but make sure to get enough sleep” ethos, which improved my grades dramatically. Note that enough sleep, in my case, meant a minimum of 7 hours/night, and even then 8-9 was the sweet spot. Meanwhile, I had many friends who were routinely getting 4 hours of sleep per night and partying 3-4 nights per week, while not catching up on the sleep too much on nights out — I would know, I lived with them — and they did just as well, if not better than me, academically. (1)
As a professional, this hasn’t changed at all. My peers at work get much less sleep than me and show up to work just fine. If I show up to work with less than 6 hours of sleep, I really struggle. One of the fixtures of the Ruminations on Growing Old genre is Olds saying, “Man, when I was young I could treat my body terribly and feel fine the next day. And now that I’m Old, I can’t do that anymore!” I already treat my body well by any standard – sleeping a lot, eating well, getting a decent amount of exercise – so what’s in store for me when I get old? Or, to break it down:
a) Is there anyone like me who needed 8+ hours of sleep to function as a young adult whose demand for sleep declined as they aged?
b) Is it possible that I do not necessarily physically require more sleep than everyone else, but I am simply more neurotic about the conditions of my waking physical experience? That is, others experience the same unpleasantness that I do when they get less sleep than needed, but I am less tolerant of that unpleasantness? Maybe the difference is that I don’t use stimulants — I have avoided caffeine due to a health condition for years, and I hardly ever drank coffee in college.
c) I’ll almost certainly have children within the next ten years. Any tips for not going insane, assuming my physical needs continue to be the same? My wife is – unfortunately – similar to me in this regard, so I can’t leave all the child-rearing to her.
(1) I went to one of the top universities in the country, which may account for the aptitude of my friends, as well as my inability to perform when not at my best. Sleep became doubly important to me when it became a major study aid: I made a deliberate point to work on problem sets as long as I could without banging my head against the wall. As soon as I encountered a problem that I couldn’t crack with a decent application of effort, I would, depending on the hour, either move onto a different assignment or just go to sleep. When I slept on the problem, I would usually have a flash of insight in the first hour or two after I woke up. This became routine: study groups that I was a part of would disband as soon as I went to bed because they knew I would come up with a solution overnight.
I’m 34. My demand for sleep was in excess of 9 hours as a teenager, and in my twenties I did fine on a solid 8. Now I do fine on a solid 7, and frequently do 6 without catch-up days with little-to-no noticeable impact on my performance.
Moreover, there’s a fair bit of medical literature showing teenagers needing more than 8 hours of regular sleep and the very old needing quite little, relatively speaking. (The old men that wake at 4am are a trope!) I wouldn’t be shocked to find it a fairly common curve between those two extremes.
– My need for sleep has to some extent varied inversely with my stress level. This includes both good and bad stress. But I tend towards needing at least 8 hours – and I crash after a couple of days of getting less.
– I’m in my 60s and currently in a period of needing more sleep than usual, but I suspect this is caused by a relatively intense, relatively new job with a not so skillful manager, rather than by aging. (Won’t know for sure for a year or three, but my need for veg-to-recover time is already dropping.)
– At a time when I was meditating regularly, I found that my need for sleep went down – by more than the amount of time I spent meditating. Unfortunately I enjoy sleep more than I enjoy meditation, so I didn’t keep it up.
Interesting. I have found that when I meditate a lot—like 8 hours a day, say, on meditation retreats—my need for sleep goes down, but a normal hour or two a day of meditation doesn’t reduce my sleep needs. With regard to needing my sleep when stressed, that is my experience too, and an hour or two of meditation eviscerates stress, so in that sense I guess meditation reduces my sleep needs, but I don’t really think of it like that.
I definitely am happier and more functional when I am sleep deprived and have meditated than when I am sleep deprived and haven’t meditated though.
Yes, a newborn will wake up every two hours needing a diaper change and/or feeding, but that stage only lasts a few months. How many months depends on your kids. My son was sleeping through the night by four months, but for my daughter it was more like seven. After that they’re fine. Kids sleep A LOT. Like 12 hours a night.
I wouldn’t worry that much about it. The sleep deprivation is only a few months, and since you are likely to have a fondness for your own kids, you probably won’t mind the sacrifice so much. TV and movies make it out to be much worse than it is.
Caveat: some percentage of babies wind up colicky and if that happens to you you’re fucked and should think about hiring help or something.
It is very child (or parent) dependent, but it might not even be months. Our one-month-old has been waking up only once or twice a night over a 10-12 hour stretch for the last week or so. I’m losing a bit of sleep on the front end, since putting her down at night takes between 30 minutes and 3 hours, so my nightly chores get pushed back by 30 minutes to an hour and a half; and she is in our room, so I get disturbed when she does wake up. But it’s still pretty minimal for me as long as the older two aren’t waking up (which one has been, with a lingering cold).
My wife and I find discussing sleep with other parents almost impossible, though: those who have it bad really don’t want to hear that you had a rough two weeks and then just minor issues after that. They generally want to commiserate or vent their frustration (which is understandable; I have done the same when my kids regress until we get them back on track).
as with all parenting things, ymmv. My wife and I were hit very hard by sleep deprivation. Not clear to me that we’ve fully recovered (my youngest is 6 yo).
Also think about checking the mother’s diet (if breast-feeding) or formula type.
Agree on a schedule with your spouse for who gets up at what times to deal with the baby. This saves a lot of dumb pointless arguing about whose turn it is to get up the fourth time this night that the baby has woken up.
I’m 30, so not much older than you, but have had similarish experiences.
At University I rarely went to the kind of parties that went on all night, but I quickly learned that getting enough sleep was important. I never studied all night, and on occasions where I had an important deadline the next day I would stop working by 10pm, and set an alarm for 5am to continue (5-8am tended to be incredibly productive right before a deadline). I learned that I can handle one night of just 5 hours and function ok the next day, or a few nights in a row of 6 hours, or indefinitely with 7 hours every night, but really I work best when I get 8-9 hours.
a) As I have grown older my need for sleep hasn’t really declined, but I have become more of a morning person. I can rarely sleep past 7am, so I make sure not to do more than one night in a row of going to bed past 11 (I usually aim to start my routine at 9:30, to be in bed by 10).
c) My wife and I have recently had our first child. For the first few months we were going to bed at 7pm in order to try and get enough sleep. We slept in different rooms and in shifts, I would get 5-6 hours uninterrupted, and then my wife would get me up and I would give her 5-6 hours uninterrupted. More recently our daughter has been sleeping better and going back to sleep quickly after being fed. I let my wife lie in on weekends, and she lets me get a long nap Saturday afternoons if I need it, which works for us. It has been difficult but manageable.
We slept in different rooms and in shifts, I would get 5-6 hours uninterrupted, and then my wife would get me up and I would give her 5-6 hours uninterrupted.
We did something similar too and I heartily endorse it; we both managed to get ~8 hours.
Wife went to bed ~8 PM
8 PM – 1 AM I took the baby; she would start a big feeding a bit after midnight then fall asleep for ~3 hours. After the big feeding I’d put the baby into my wife’s room and then go sleep in a different bedroom.
Around 4AM the baby would wake my wife up; often she was already awake anyway.
I’d wake up naturally at 8-9 AM.
I don’t understand why it’s not more common.
This is much harder to pull off with your second or subsequent children, of course.
From asking around at work, I think that some period of sleeping in separate rooms is incredibly common, but people don’t talk about it because there’s a stigma to couples sleeping separately and it would be seen as (or might actually be) indicative of a problem in the relationship.
Yes: structure your life better before you have kids. If you drink coffee, I advise pushing your morning coffee back by 2-3 hours as a habit. When your kids have particularly bad nights of sleep, they often crash after breakfast, and a habit of coffee with breakfast means you will be sitting there, wired and tired, while your infant gets 4 hours in (for me, that 4ish-hour nap meant my son was waking up just as the coffee was losing its effect, so I would be doubly tired when he woke). If you take your coffee later as a habit, you will be able to nap and catch up (even if only when it happens on weekends). The key is to prevent your bad nights from spiraling into bad weeks: you can repair a bad day with a short nap, but a bad week takes far more (plus you start making bad decisions pretty quickly).
Avoid sleep solutions that can’t be adjusted, as best you can. Find someone who put their kid in the car seat and drove them around until they fell asleep at night, and 2:1 they had to do that for months. There is no intermediate step there, nothing less than putting them in the car, that lets you wean the child off the habit.
Divide and avoid being conquered. Don’t plan to alternate waking up with the kid if you can help it; that should be the backup. Both of you being tired is worse than one functional person and one very tired person (usually).
I’m not all that aged, but through my thirties I have noticed needing more recovery time from staying up late. These days that is anything after ten o’clock, getting up at six. If I keep a good habit of going to bed on time, then staying up till midnight on the weekend isn’t a problem. My body has a pretty good sense of time, and even when I have no plans I don’t sleep in past seven.
Like you, I don’t take coffee in the morning, and I think that a lot of people use it to ignore the consequences of staying up. Though there is certainly some range in personal cycles.
With regard to children, the best tip would be to start sooner rather than later. The reason 15-25 year olds don’t have a problem staying up late is that, historically, that’s when we’ve been having kids. But don’t worry too much, the sleep disruption only lasts a year or two ime. After that, you’ll probably get woken up a few nights a week to attend some minor ailment like leg cramps or nightmares, but it isn’t constant disruption. During the infant years, keep the baby in your own room (and possibly own bed if you are comfortable with it) so you can get back to sleep quicker after a feeding or changing.
Maybe you need more sleep because you put your unconscious mind to work on problems overnight rather than letting it relax and goof off like the rest of us? 😉 Actually, I think what you describe is common, if not usual. But you could try meditating or something before bed and see if you sleep deeper.
Are you actually sure your colleagues and friends were operating well on sub-8 hours of sleep? See, I’m one of those people that probably would’ve said I could’ve survived on less sleep when I was younger, but I am pretty sure that’s wrong. As soon as I started getting 8 hours regularly, my quality of life dramatically improved, my concentration lasted longer, and my work quality went up. When I was younger, I was quite irritable and moody, in a way that I didn’t realize until later.
Similarly, my parents own cats. I am allergic to cats. I didn’t realize how much the cats were bothering me until I left the house, and now when I go back, I start suffering within a few hours. I didn’t realize it when I was younger, because it was just the default.
I see a lot of colleagues saying they do fine on little sleep, but their work performance is meh and they tend to be moody. They probably don’t realize it, but I sure as hell notice it.
Parable of the seasick squid
In your budgeting process for having a newborn, consider allocating some money toward hiring a night doula on at least an occasional basis. They are expensive, but you don’t have to use them very often to get a huge benefit to your sanity.
I am 62. I’ve always needed at least 8 hours of sleep. My need for sleep hasn’t changed much from when I was 20. It is true that I stayed up later more often then, and maybe I bounced back easier. But I also slept a lot later on weekends at that age. So my average sleep needs are about the same.
When I was a child, I needed up to nine hours of sleep per night to wake refreshed. By college, it was down to eight hours. Now, in my 30s, I usually feel fine after sleeping only seven hours.
I have struggled with insomnia problems throughout much of my life, and it sounds like you might have a mild and unhelpful obsession with your sleep schedule, which I also did (and to an extent, still do). A few years ago, I searched through the medical literature about the health consequences of poor sleep, and was surprised to discover how weak the link is. By itself, insomnia has very little or no negative impact on lifespan, baldness, or going gray. Knowing this calmed me down a great deal, and I wish I’d done the research years earlier.
The CHSRA, which is responsible for constructing the high-speed rail line from LA to SF, has thoroughly botched the job. At this point, cancelling the project outright, despite the funds already spent, might be the lesser evil.
A project this cooked has some interesting engineering ethics wrapped up in it, IMO. Presumably an engineer had to sign the report they talk about, so the question is: is it unethical to incorporate into a report you are creating assumptions with no technical basis that strongly affect the conclusion?
My position is “yes, of course it is, even if you state what they are explicitly.” I’m not comfortable doing this even when the customer is the one dictating the assumptions we’re supposed to make.
I read that. Unbelievable that they could waste so much time. As I recall, they’ve only spent like 1% of the total budget so far, right? Abandon ship!
Megan McArdle with an incredibly unpopular solution to the fentanyl crisis:
She seems to favor prescribing heroin to opiate addicts as less dangerous than current opiate usage. I don’t know if that’s true, but it makes sense if so. Would this be incredibly unpopular? I think that is an overstatement, but yeah, it probably won’t happen.
I remember reading about a program for hard-core alcoholics in the Netherlands that gave the participants something like six or eight drinks per day. These were people so far gone they flat out couldn’t cope without drinking and the goal was to help them get their drinking under control enough that they would be socially functional.
Per an article I read like a decade ago and may be misremembering, with some time and experimentation between an addict and a doctor, it’s possible to find a dose low enough that it doesn’t get you high, yet high enough that you don’t go into withdrawal. And surprisingly, you can take it forever with no ill effects – pure heroin doesn’t actually hurt you.
If so, why not actually get high as well?
Not sure if this was a joke, but if serious, in order to hold down a job and maintain a family life and all the other stuff you can’t do when you’re a junkie.
This sort of thing appears to be common in Canada. Addicts can get safe doses of quality-controlled drugs, and consume them in a place with medical staff on site in case of problems.
There aren’t enough programs of this kind to cover all addicts, and there are still lots of doctors (and bureaucrats) focused on getting chronic pain patients off of opioid painkillers at all costs, particularly costs to their quality of life/ability to work. (But the news media likes to interview the victims of particularly egregious cases of this, especially if they appear highly sympathetic.)
Meanwhile various cities are getting a lot of good press for reducing their opioid overdose death rates/overall death rates by establishing and funding good programs of this kind.
Some also make points about relative costs – emergency-room OD visits vs. maintenance-dose provision – concluding that this method also saves the government money. (Single-payer medicine, so both the clinics and the emergency rooms are ultimately paid for by the taxpayers.)
FWIW, I’m pretty much mainstream Canadian in my own opinions, maybe even more extreme. I think driving folks originally prescribed pain killers onto the black market to continue to handle their chronic conditions would be insane and vicious, even if there wasn’t a significant risk of them dying because of erratic quality of the purchased product (cut with what? how strong/weak?) And a druggie taking methadone at the clinic isn’t robbing my house to get money for his next dose of street heroin/fentanyl/whatever. He can even be taking heroin or fentanyl at the clinic, if that’s what’s needed to keep him going. And maybe – not likely, but maybe – he’ll even be able to get and keep a job because of this.
How functional can a habitual heroin user be? Is it possible to be a heroin user without it causing problems elsewhere in someone’s life? How ruinous is it?
An alcoholic who drinks within certain parameters might be able to live the life of one sort of alcoholic (getting drunk-ish every night, let’s say) without screwing everything up. There will be the health problems, maybe sooner, maybe later, but people who quit drinking have stories that (anecdotally) are more about the personal consequences of drinking – family problems, job loss, drunk-driving convictions, etc. Some people are able to keep it together while drinking a lot. Of course, not all habitual alcohol users are addicts: the ideal alcohol user is someone who has an average of 2 drinks an evening, no more than 14 a week.
Marijuana is a big object of legalization at least in part because it’s pretty gentle to its users. Someone who is high most waking hours can still hold life together in a way that most people who drink enough to feel the effects daily can’t.
My stereotype of a heroin user who is nevertheless able to get something done is a musician or artist or whatever who has a heroin habit. Even then, I don’t know if “functional” is the right word; maybe “productive” is? Lots of stories like “man, that guy sure could play the guitar; too bad he was on heroin so much.”
Could someone with enough money and a guaranteed supply of known-strength heroin, needles, etc carry on a heroin habit while working an ordinary 9-5? Is this happening, and we only see the heroin users who screw their lives up terribly?
I believe injecting is preferred by those who do it because it is more cost-effective. More prosperous users who are less concerned with the price of the drug may prefer to snort, and avoid the whole needle business. Though snorting can have other issues.
I thought smoking it was usually the preferred method for those not interested in needles.
I have no idea what’s more common, but that is indeed another alternative.
I was told years ago by a physician who appeared to be well informed on the subject that the most serious side effect of medical grade heroin was constipation, and that most users went off the drug in (I think) their forties.
That is another thing that usually doesn’t come up in the drug conversation. Most addicts do eventually quit, and just getting older seems to make attempts to quit more likely to succeed.
Bit of a selection bias there, yeah?
@dick, No, it’s not just because those who don’t quit die.
From memory, almost certainly from the 1929 Encyclopedia Britannica we had when I was a boy. The British did a study of opium use among (IIRC) stevedores in Hong Kong. The basic conclusion was that so long as they got enough to eat and had a reliable supply of opium, opium use didn’t make them any less capable. There were only problems if opium wasn’t available, so they were dealing with withdrawal, or if they spent so much on opium that they didn’t get enough food.
My impression is that heroin usage is pretty similar to alcohol usage, with the same levels of functionality, and heroin addiction is similar to alcoholism. In practice, heroin usage is worse than alcohol usage only because heroin is illegal, which brings users into contact with the underworld and makes dosages and purity much more iffy. It may also be that heroin users self-select for risk-taking because it is illegal.
This article discusses it a bit. It seems like a lot of the people these users knew have died of overdoses, but I think that is a function of the illegality, not the substance itself (compared to alcohol).
Overdoses, which are primarily about the user not knowing the potency (as I understand it; not remotely an expert), are probably a product of illegality: if there were no way to tell the potency of booze before you drank it, more people would probably die of alcohol poisoning.
I’d be interested to see that link as well, but it is broken.
I will try again.
I’m about to move from New Jersey to Los Angeles for a job. I’m single, and everything I own can fit into a truck. Does anyone have any suggestions on how to move on less than $2500?
Rental trucks/trailers seem to be best value-for-money. A quick google suggests Budget over U-Haul.
How are you defining “truck”? Could it all fit in, for example, a minivan or SUV with the back seats down, or would you need a box truck?
Also, can you drive, do you currently own a car, do you intend to keep that car after you move, and is that car capable of towing a trailer?
One option I was thinking of (if you can drive but aren’t bringing a car with you) is Autodriveaway, where you are “employed” to drive somebody else’s car from one place to another. If you can fit your stuff in a large enough car, this might be useful. You aren’t paid, but get free use of the car (including insurance) and some money towards fuel.
Looks like there’s 3 possible routes, and if you’re going in the winter, you probably want to take the southern route, down I-44. That’s my neck of the woods, so:
-Don’t stop in East St. Louis (the Illinois side of the St. Louis area). Not even for gas.
-When you hit Springfield, MO, take the exit for Glenstone Avenue, and find Hong Kong Inn. Try the Cashew Chicken. It’s a regional dish you can’t get anywhere else in the country.
-Keep at least $10 in change for passing through Oklahoma. There’s a ton of toll booths.
Seconded, East St. Louis is not a good place to be.
Thirded. East St. Louis is so terrible that they used to send troops there before they went to Baghdad. (OK, not really. But it’s bad.)
In St. Louis, consider stopping at Ted Drewes. Best frozen custard ever.
If you’re planning on taking the obvious route (I-44 to I-40), you’re only going to have to deal with the I-44 turnpike, which has booths with people that take cash, so long as you don’t get off at one of the intermediate cities. (This isn’t hard. It’s less than 100 miles on each segment, and the total will run about $10.) Tulsa has a bunch of small turnpikes with the automated machines. OKC mostly doesn’t. Shouldn’t be hard to avoid them.
I’ve done I-40 LA-OKC, and it’s not a bad drive. The National Atomic Museum in Albuquerque is great, and Meteor Crater is pretty cool. It’s a pretty reasonable 3-day trip.
The East St. Louis thing is very exaggerated. I’ve been there. I’ve gotten gas there. I’ve even been lost there after dark. It’s rundown, and it’s sad, but you’re not going to be dragged out of your car and killed.
There isn’t any actual reason you should stop there, but if you decide to stop in downtown St. Louis and see the sights, and miss your exit, don’t panic. It’s fine, and you’ll be fine. It’s not a big deal.
How much stuff do you have that you really want? You could probably sell, give, or throw away 90% of your possessions, pack what you need to start over, fly for around $500, and refurnish your life for $2,000 plus whatever you got for your stuff.
A quick search around on U-Haul reveals you can rent a trailer, towable by a reasonable sedan, for just over $200. I imagine other brand name movers offer similar rates. (I put in Camden and LA as the endpoints, btw.)
The last time I did this, I towed a UHaul trailer with a Datsun 280Z from Texas to Maryland. The Z was 20 years old at the time. In case you’re not aware, a Z is tiny – basically a 2-seater coupe. The trailer halved my MPG. The only place that got a little squirrelly was the interstate in TN just south of Asheville, and only because of extremely strong rain – I had to pull over and nap for about half an hour until it passed.
Important advice at the time; not sure if it’s still relevant today: whenever you stop for a break, feel the bearing cases on the wheels with your bare hand. They should be cool to the touch. If they’re hot, call for a tow to the nearest mover and get a new trailer. Otherwise, you risk a fire burning up all of your possessions.
Loading that trailer is probably something you could handle solo. If not, hire your friends. Around here, the going rate is lunch at everyone’s favorite eatery. To speed that up, pack everything into cardboard boxes and stack them in the middle of your living room before moving day. The hard part is always furniture, and you presumably have very little, so the whole thing should take no more than 2-3 hours. Total price tag is less than $500 plus your time.
Getting a trailer hitch for your car, if it doesn’t have one, will cost a few hundred as well; putting 3,000 miles on a car at 30 cents a mile is another $900, plus tolls.
Roughly concur. I’m recommending this only if you already have the hitch; if you don’t, then I’m actually surprised you can add one for that little. (I’d be leery of it holding together.)
The per-mile wear and tear is unavoidable if you have a car you plan to drive across the US, so I’m assuming that cost is constant. Obviously, not everyone has a car, so if not, this would change the calculation, and I might lean toward selling what you can or renting a small moving truck.
He lives in New Jersey. How could he not have a car? Okay, there must be a few, but I can’t imagine it for anyone with a better job than McD’s.
Lots of people living in NJ, mostly in Hudson County, don’t have a car.
Even if they own a car, it’s good to put the cost of driving it out there up front, so they can consider selling the car in NJ and either buying a new one in LA or not buying one at all.
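Putting the thread’s ballpark figures together (these are rough estimates from the comments above, not real quotes; the tolls figure is my own guess), the trailer option pencils out something like this:

```python
# Rough cost sketch for the drive-your-own-car-plus-trailer option.
# All figures are assumptions pulled from this thread, not actual quotes.
trailer_rental = 200    # U-Haul/Budget trailer, NJ -> LA ("just over $200")
hitch_install = 300     # "a few hundred" if your car lacks a hitch
miles = 3000            # roughly the NJ -> LA driving distance
wear_per_mile = 0.30    # per-mile wear-and-tear estimate from the thread
tolls = 50              # hypothetical; depends heavily on route

total = trailer_rental + hitch_install + miles * wear_per_mile + tolls
print(f"Estimated total: ${total:,.0f}")  # comfortably under the $2,500 budget
```

Note that the $900 of wear applies whether or not you tow, so if you were driving the car out anyway, the marginal cost of bringing your stuff is only the trailer, hitch, and slightly worse gas mileage.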
U-Haul has U-Box containers, which they ship to your new address, for rent if you’re not in a tearing hurry. PODS is similar. They’re more expensive than renting a truck (I don’t know how much more), but could be a lot more convenient.
In anticipation of CD Projekt Red’s upcoming Cyberpunk 2077, Glitch Galaxy has done a remix of the Witcher 3 song Steel for Humans, to make it sound genre-appropriate for Cyberpunk. The resulting Steel for Scavengers is very good, better than the original even, and I feel inclined to share it.
In a recent thread, I avoided getting into an argument on climate because it was a CW free thread. This is not.
We are going to lose a very small amount of coastal land. The high end of the IPCC projection for the end of the century, on the high emission scenario, is about a meter of sea level rise. The rule of thumb for the U.S. east coast is that a meter of SLR shifts the coastline in by about a hundred meters. It would be more in some places, less in others, but that gives at least a rough idea. Think about how little land you lose if the coastline shifts in by a hundred meters: invisibly small on an ordinary map. It’s a pain for people who happen to live within a hundred meters of the coast, assuming they can’t adequately dike (and diking can accomplish a lot: the lowest city in the Netherlands is more than six meters below sea level).
For a more precise picture, take a look at the Flood Maps page, which shows what is below sea level at various levels of rise. Set it for 0 meters, zoom in on any coastline of interest, then set it for 1 meter and see if you can spot the difference. My conclusion from experimenting with it was that the one place where there would be a significant problem was the Nile Delta.
Note that at the same time we are losing a tiny amount of very valuable land through SLR, we are also gaining a much larger amount of less valuable land due to warming. Greenhouse warming is larger in cold times and places than in warm, due to the interaction with water vapor. Human land use is limited mostly by cold, not heat–the equator is populated, the poles are not. Warming shifts temperature contours towards the poles, substantially increasing the amount of land warm enough for human use. I haven’t seen any careful calculation of the size of the effect, but when I tried a back of the envelope calculation it looked as though the gain was two or three orders of magnitude greater than the loss from SLR.
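A crude version of that back-of-envelope comparison can be sketched as follows. Every number here is an illustrative assumption of mine (coastline length, poleward contour shift, frontier width, degrees of warming), not a figure from the comment above; the point is only to show how the rule-of-thumb inputs combine:

```python
# Land lost: the thread's rule of thumb is that the coastline moves
# ~100 m inland per meter of sea-level rise.
coast_len_km = 100_000        # assumed length of affected coastline
lost_km2 = coast_len_km * 0.1 # 100 m inland shift = 0.1 km

# Land gained: assume warming shifts temperature contours ~150 km
# poleward per degree C (a rough figure sometimes quoted), across an
# assumed 10,000 km northern land frontier, with ~3 C of warming.
frontier_km = 10_000
shift_km_per_degree = 150
warming_c = 3.0
gained_km2 = frontier_km * shift_km_per_degree * warming_c

print(f"lost: {lost_km2:,.0f} km^2, gained: {gained_km2:,.0f} km^2")
print(f"ratio: {gained_km2 / lost_km2:,.0f}x")
```

With these made-up but not crazy inputs, the gain comes out a few hundred times the loss, i.e. between two and three orders of magnitude, consistent with the claim above; obviously the conclusion is only as good as the assumptions, and it says nothing about the relative value of the land.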
There is only one effect on food supply we can be confident of, the increased yield due to CO2 fertilization. That depends only on the first link of the causal chain–it’s increased CO2 that drives all the rest. And the effect has been well established by experiment for a long time. Roughly speaking, doubling CO2 concentration increases the yield of C3 plants (most food crops other than maize and sugar cane, which are C4) by about 30%, as well as reducing water requirements–C4 yield goes up as well, but not by as much. There are other possible effects, positive and negative, but they depend on the detailed ways in which the climate changes and on how farmers respond.
Not at all clear. The physics of hurricanes is complicated and there isn’t a clear trend up. Chris Landsea, who did the Atlantic hurricane part of one IPCC report and later resigned in protest of the IPCC’s unwillingness to disavow unsupported claims made by one of its people, thought the effect of warming was likely to be a small increase in the force of hurricanes and a small decrease in their frequency. For details see this.
Landsea’s resignation letter is interesting as a report on the way in which the scientific project has become politicized.
As I have mentioned before, this whole controversy reminds me of the population controversy fifty years earlier. Then too, we were told that all the experts agreed that unless something drastic was done to hold down population growth, terrible things would happen–including increased poverty. The one country that did something drastic, China, stayed poor until the changes after Mao’s death, and is now abandoning the one child policy. The rest of the world did nothing drastic–and world poverty fell sharply, from about 60% of the world population (“absolute poverty”) in 1970 to less than 10% at present.
Then too, the evidence for the predictions of catastrophe was very weak, but the PR campaign was very strong.
Which may help explain why I am not persuaded by the current version.
That’s a lot of seals.
ETA: …aw, you fixed it.
David, what are your thoughts on climate change’s effects on weather, in particular the notion that weather will become more extreme (worse storms, droughts, etc.) due to AGW? Are you familiar with these arguments? I’ve only heard about them in passing but found the notion intriguing, as it’s a bit different from the usual alarmism about global warming drastically moving coastlines.
I think I already posted two links to material by Chris Landsea, his resignation letter from the IPCC and a piece by him discussing the effect of AGW on hurricanes. His opinion was that it will probably make them slightly more powerful and slightly less common, but as best I can tell nobody knows for sure. I can’t see any general reason to expect weather to become worse and am distrustful of a lot of the popular climate claims, but I don’t know how good the arguments are. I do have an old blog post on two bad arguments for expecting weather to get more violent–but that doesn’t prove that there are no good arguments.
So far as drought is concerned, the IPCC claimed in the 4th report that it was increasing due to AGW, then retracted that claim in the 5th report.
I’ve always been a bit surprised that the countries most fervent about fighting climate change tend to be more northerly – Canada, Nordic countries, etc. Most likely the cause is that undeveloped countries, perhaps coincidentally, are more equatorial and so have more to lose by forgoing economic expansion than they do from climate change; possibly it merely says something positive about the altruism of those northern countries.
Looking at a globe, one does find a lot of land that could be significantly more useful were it a few (or several) degrees warmer. I’d agree with David except for the fear of some kind of runaway positive feedback cycle moving us well past the sweet spot – I don’t honestly believe that, but it would be a valid counter-argument if true.
We have had much warmer temperatures and much higher CO2 concentration in the atmosphere before without turning into Venus, so a runaway greenhouse effect seems unlikely to me.
I don’t imagine “turning into Venus” is the end point serious climate experts have in sight when they talk about “runaway greenhouse effect” (even if sensationalist media like that image). Probably something more akin to Earth during the late Carnian: a worldwide tropical monsoon climate year-round.
How deadly would that be? Six year old me thinks that sounds pretty cool (especially if it comes with dinosaurs), and, hey, it beats winter.
I would just like to add that “Chris Landsea” studying hurricanes and sea level change smacks of nominative determinism.
His parents missed a real opportunity not naming him Cliff.
Cold is not the only limiting factor on human land use, though. Soils also matter a lot. On the Canadian prairies, for example, population density correlates with soil type. Even with a longer growing season, the suitability of northern areas like the Canadian shield for agriculture will be minimal. On top of that, looking at trends like dematerialization and peak farmland I’m unsure where the demand for newly-available marginal land will come from. Perhaps if it was opened up to a new wave of homesteading there would be some interest, but that would be politically contentious.
Deltas are dynamic and I would expect a higher sea level to lead to increased sediment deposition (since the potential energy of the river relative to the sea would be lower)–although maybe Aswan traps too much silt to count on this.
Precisely. If we wait long enough, it’s likely that various northerly places which currently have very short growing seasons will eventually develop decent soils after they become warmer. (No, I’m no expert, but with more vegetation – and faster rotting – and probably more organisms specialized in rotting vegetation – soil will eventually build up.) But that’s likely a 100s of year prospect, not useful for feeding humans any time soon.
And yes, of course there are artificial means of improving soil. Which certainly aren’t free.
Big [expensive] readjustment required at the least, and maybe a period during the transition where there simply isn’t enough food, and/or people don’t get the food to where it is needed.
And resources devoted to moving all your farming, installing irrigation in new places, etc. etc. aren’t devoted to something else. Worse with national boundaries involved – Mexicans who can no longer farm cost effectively at home certainly won’t be allowed into the USA to set up farming there 🙁
Yes. And is there some reason why all land in the far north—Siberia, Scandinavia, Canada, Alaska—is useless for agriculture?
That would be a serious argument if we were talking about changes over a year or two, but we are talking about changes over about a century. Over that length of time large parts of the agricultural infrastructure will get replaced, whether it stays in one place or moves—consider how much change there has been over the past century. An increase in crop yield of 30% or so due to CO2 fertilization will cover an awful lot of temporary losses as the area of cultivation gradually expands to the north.
Even if this 30% crop yield increase from CO2 fertilization that you keep citing applies across the board to all crops – which would, I think, imply that CO2 is *always* the limiting factor for all (human-useful) plants, never water or sunlight or temperature or competition from non-useful plants or insects – you really need to specify the time period (i.e., the CO2 concentration increase) over which this 30% increase will happen, and whether it will continue to happen as CO2 increases beyond that point.
The way you are using it above, it’s coming across as a non-falsifiable magic bullet.
CO2 is generally going to be the limiting factor because humans have selected a wide variety of plants that have been optimized for different temperature, sunlight and moisture conditions. CO2 is closer to constant across all growing zones than those other factors, and so there is generally going to be less variation across all domesticated plants.
As I think I have said before, the 30% figure is for C3 plants, which includes all major crops except Maize (the big exception) and sugar cane. They are C4, and while yield increases it doesn’t increase by as much.
CO2 fertilization also reduces water requirements, since the plants don’t have to pass as much air through their leaves and so lose less to evaporation, so you expect an even larger increase for crops limited by water shortages.
I give 30% because that’s the figure for doubled CO2 concentration, which is about what the IPCC projects on the high emission scenario by the end of the century. It isn’t a “magic bullet.” It’s merely the only predictable effect on food supplies of AGW.
Yet most talk ignores it, while treating as certain the highly conjectural effects–if AGW increases hurricanes (not clear it does) and those hurricanes significantly impact crops, and if it increases drought (the IPCC has now retracted the claim that it has done so) and … .
Could you explain reasons for being confident of a negative effect on food supplies? Sea level rise is a predictable effect of AGW, one step further along the causal chain–do you think shifting coastlines in by a hundred meters or so will significantly reduce the amount of land available for agriculture? Warming Minnesota to the temperature of Iowa? Longer growing seasons?
@DavidFriedman First of all, my apologies for my poor memory. I think you have mentioned the C3 plant thing before.
Second of all, I’d have to say I’m “worried” more than “confident” about bad results in this case. Part of this is about the biology, but most of it is about human nature.
I have a far lower confidence than you seem to in the ability of homo economicus to do the sensible thing, when it’s both new and involving lots of lead time. Mining companies can manage to bring new mines online before the old ones are exhausted, because that’s part of their standard business model. Other companies are hit and miss about bringing new markets online, which is one reason why we have so many dead household names, particularly in tech. It’s very easy to just milk the current cash cow, invest stupidly, and ultimately more or less give up – I give you HP as the example I personally witnessed, but it’s not at all alone.
The question of course is whether “moving somewhere else” is something farming companies do in advance (like mineral exploration and new mines) and whether if they don’t do that until it’s very late, a small % failing and being replaced at a new location – every year for a long period – is something that will work out reasonably in terms of total supply, distribution of that supply, and potential cost increases.
The small scale I’m experiencing here isn’t a problem. Cherries don’t set fruit most years locally, the orchards of 20 years ago have been replaced by real estate developments, but I can still buy cherries at prices affordable for me. Are they less affordable for people on limited incomes? – I’m afraid I haven’t been tracking their prices. In any case, they are more of a treat/source of vitamins and minerals than a staple grain, and substitutable by other fruit, which still grow in this area.
I’m also dubious that the CO2 fertilization effect will be the only impact on agricultural productivity. Every time I see it, it feels like “cherry picking”. But that’s an argument from the-way-this-human-is-programmed, which could be standard-human-perceptual-distortion rather than good-heuristic-drawn-from-past-experience. (That’s why I used “feel” above, not something more rationalist.) If I can’t be sure which, you certainly cannot be.
It won’t be. But it’s the only predictable one, aside from a tiny reduction in usable land area due to sea level rise.
Lengthening the growing season will benefit some crops, harm others. The pattern of rainfall will change, which will benefit some crops, harm others.
If it all happened in a year, one would expect the net effects of those things to be negative, because farmers are optimized for current conditions and grow crops suited to the climate where they are. But over a century farmers will change crop varieties, even species, multiple times with or without warming, so that effect should be small.
To take one example, in some areas where you can now grow particular varieties of apples you will no longer be able to, because there won’t be enough winter chill–but over a century the orchards will get replanted anyway, and if you are in an area with just enough chill for your variety, you will switch to a variety that requires a little less. Meanwhile, areas where you couldn’t grow fruit because of late frost will become better suited, as late frosts become less common.
That was an interesting read about the GCB, thanks.
This Slate article puts a predictable “evil conservatives are trying to take away poor people’s access to legal recourse!” spin on a proposal to end discovery for relatively-small-dollar lawsuits:
They seem to do a pretty poor job of exploring why one might support such a policy, though. Any legal wonks want to weigh in? I’d guess that it would involve exploring issues like
— whether discovery results in material changes to evidence available at trial, or material changes to the outcome, in a relatively small percentage of these cases (how would you measure this?)
— whether the costs of discovery are huge and/or sublinear in the monetary size of the case, so that for smaller-dollar cases they can overwhelm both the other costs and the amount at stake
— whether existing limits on discovery are so ineffective at preventing overreach that a bright-line rule would do more good than harm
— whether other developed countries have limits on discovery like those proposed and how this has turned out to affect outcomes in those countries
But I don’t know whether these claims are true or whether that’s why Hardiman wants to limit discovery. Anyone know more?
Not an expert, but bad things happen when the cost of litigation is higher than the putative injury at issue. At sufficiently high disparities, you start to see a sort of “periodically send threatening letters to everyone in the phone book, and cash the settlement checks they mail back” process, which is… bad for the rule of law.
Since we’re talking dollar amounts, I’m assuming this is civil cases. Lawyers are so expensive, I’m having trouble imagining poor people being involved in many cases of this form at all. On the other hand, injury could easily get you to the point where you have a credible claim to well over half a million.
My vague impression is that discovery typically isn’t used in small claims court, which I expect is how a lot of people without huge resources interact with the court system.
It’s an interesting question of what the appropriate dollar limit should be before discovery needs to be a standard part of the process. Anyone else with an actual clue (so not me) have an idea?
The CFO of Huawei was recently arrested in Vancouver. She’s facing extradition to the US on charges of violating sanctions on Iran. Does anyone have any insight into this situation? Is this an escalation of diplomatic tensions between the US and China? Why an arrest over sanctions violations rather than other things Huawei is accused of like espionage and IP theft?
As of this morning’s news, she had asked for details to be kept out of the media, the Canadians weren’t saying what this is about, and the newsies were saying that it had to be because of sanction violations.
Has additional accurate detail become available?
The latest detail I’ve seen is that John Bolton and PM Trudeau had advance notice of the arrest. There is apparently a bail hearing tomorrow but I don’t know whether the scope of the publication ban is such that anything more will come out then.
Serious question time. Where can I get my who watches the watchmaker clothing?
Electrical appliance question:
Picked up a new (used) dryer. Our old dryer was direct-wired with a 4-wire setup; the new dryer is 3-wire only. What to do?
Pick as many as appropriate:
A. Hire an electrician.
B. Check your appliance and see if it can be reconfigured. If so, and the voltage/amperage is compatible, reconfigure the appliance to match service provided.
C. Check your local code to see if unused conductors are permissible. If so, and the gauge is compatible, reconfigure the service to match the appliance.
D. Tape a new cable to the existing one, and pull it through the wall being careful not to apply excessive pulling force or tightly kink the wire as either can cause unpleasant failures. Configure the service using the new cable to match the appliance.
I would re-wire the dryer to use a plug, and re-wire the wall to have an outlet box so the new plug can be plugged in. This’ll make future replacements that much easier, and having an outlet box there is never a bad idea.
Buy a 4-wire pigtail — a cord with a plug on one end and four ring terminals on the other end — of the appropriate amperage for the dryer (30A; there are larger cords meant for ranges, but they may not physically fit the dryer). Then connect the three wires for neutral and the two phases to the terminal block inside the dryer, and the ground wire to the ground screw. There is typically an existing wire on the ground screw; this wire should be connected to neutral on the terminal block instead. If that wire is just a jumper to the neutral on the terminal block, you can remove it entirely.
Also install a matching 4-wire receptacle.
Continuing the previous two comments’ theme of markets / capitalism…
I’m a fan of the kind of “welfare capitalist” / “neoliberal” / ”bleeding heart libertarian” views that I think are common in places like this. So when I hear more left-leaning people talk about how terrible it is that big corporations make such enormous profits, I shrug it off. Companies aren’t trying to hoard massive profits in a Scrooge McDuck vault somewhere, they generally earn profits by providing some useful service, and then spend those profits on innovating and expanding their service.
But then I’ll hear arguments like this one* about “corporate financialization [defined] as the shift within public companies from making money off of selling goods and services to making a higher proportion of their profits from financial activity — and sending those profits back to shareholders rather than investing them in the firm or its workers”. Sort of makes it sound like these companies ARE just hoarding more and more cash for the sake of it, without using it to offer more or better products.
So how does this work? If a business makes money by “moving money around”, then where does that new cash actually come from? From the businesses’ perspective, why spend money on stock buybacks rather than productive investment? Is this actually as un-productive a use of profits as the post claims?
*The post might not be the best articulation of the argument. I got this link out of a recent Noah Smith article, where he tangentially brought up the topic.
I mean, my reaction to the truth claim that some corporations put all their profits in a Money Bin for the greedy Scottish duck CEO to swim in is, “so what?”
Communism doesn’t work, and most of us don’t agree with the dogmatic libertarians that taxes are theft, so we just tax corporate profits and call it a day.
If a public company is shifting its revenue to financial activities, then it is effectively two companies – one in finance, the other in whatever it had been originally doing. I would normally expect financial activities to include investment in other ventures, which can mean anything from maintaining its own stock portfolio just as an individual would, to expanding into other industries, e.g. Elon Musk going to space.
All of this strikes me as natural and wise activity that leaves everyone better off in the long run. I don’t see any particular reason for producing “real stuff” to be necessarily more profitable than financialization. If financialization is always more profitable, that simply suggests to me that the world needs more of it.
Smith seems particularly concerned about a different type of financialization, however; namely, stock buybacks. I hear about buybacks a lot from free market opponents. The idea is basically for a public company to buy its shares back from shareholders, then let natural supply/demand principles drive the share price up. This makes the CEO look really good to shareholders and earns him a nice bonus, so CEOs are naturally motivated to keep doing this. Shareholders see their net worth go up too, so they’re motivated to go along with it.
I have a hard time seeing how buybacks leave anyone worse off. There’s only so many shares a company can buy back. They can announce a split, but everyone knows that happens by definition. Likewise, everyone knows the share price went up because of a buyback, so there’s no reason the board can’t factor that into how good a job the CEO is actually doing. The post-buyback shareholders enjoy that higher net worth only if they then sell their shares, which are finite in supply. And the company would be spending money twice, once to buy shares, and again to pay the CEO, so eventually that war chest will be empty and the company will have to replenish it, and the only way to do that is by selling more actual product.
Maybe the shareholders who sold during the buyback are duped, but that seems easily solvable by a rule that you’re allowed to control who you sell your shares to, particularly if it’s back to the company. So they’d notice a buyback is in progress and hold out for more, and things would then devolve back to normal trading activity.
You say that the buyback could make the CEO look good and make the shareholders net worth go up, but also that this doesn’t really mean anything because all the relevant people can easily see how artificial the change is.
It sounds like these companies have found a way to please their shareholders without pleasing their customers, which seems like trickery & lies and/or a terrible incentive structure from the customer perspective.
It’s not artificial at all. The company has fewer outstanding shares, so future per-share dividend payments are expected to be higher, so each share is more valuable. Stock buybacks are just an alternative to dividends as a way of distributing profits to the stockholders.
There is nothing wrong with distributing profits to the stockholders, and many of them will reinvest the profits in other securities anyway. (Technically, in the case of a stock buyback, some of the people who sell the stocks back to the company will invest the cash in other companies.)
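The equivalence argument can be checked with toy arithmetic (hypothetical numbers, not from any real company): ignoring taxes and market frictions, a shareholder ends up with the same total value whether the company pays its cash out as a dividend or spends it buying back shares at the market price.

```python
# Toy model (hypothetical figures): buyback vs. dividend, frictionless case.

def per_share_value(assets, cash, shares):
    """Naive per-share value: total company value divided by share count."""
    return (assets + cash) / shares

assets = 900.0   # value of the company's productive assets
cash = 100.0     # cash to be distributed
shares = 100     # shares outstanding
price = per_share_value(assets, cash, shares)  # 10.0 per share

# Option 1 — dividend: pay out all cash, $1 per share.
# A holder of one share keeps a $9 share and receives $1 in cash.
dividend_wealth = per_share_value(assets, 0.0, shares) + cash / shares

# Option 2 — buyback: spend the cash repurchasing shares at market price.
bought = cash / price  # 10 shares retired
buyback_price = per_share_value(assets, 0.0, shares - bought)

assert abs(dividend_wealth - 10.0) < 1e-9   # share + cash = $10
assert abs(buyback_price - 10.0) < 1e-9     # remaining shares still worth $10
```

Either way the shareholder holds $10 of value per original share; the buyback just shifts it from cash into a higher per-share price, which is why the price rise isn’t “artificial.”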
An alternative that inflates the company’s stock valuation. That’s pretty important, and definitely does make it look like the stockholders are getting more out of the deal, at least until the next downturn.
The company also has less cash on hand which should lower its valuation.
Any reasonable stockholder will include dividend payments in evaluating what he has got out of owning the stocks (typically by assuming that dividends are reinvested). If some people are stupid enough that they ignore dividends, that unreasonably deflates their evaluation of the company’s performance.
To be clear, I’m saying that the price increases beyond what seems likely to be justified by the future dividend rate; scarcity adds to the price, and the price increase due to the buyback-driven increase in demand is fairly sticky IIRC.
The linked piece says:
I’m not buying this whole “relentless search for short-term profits” thing. For one, if companies are only able to make short-term profits using these strategies, there’s no need to worry: these strategies will not be popular with new management in the longer term.
For two, the article’s concluding paragraph says:
So companies have discovered how to return more money to shareholders while reducing investment and without increasing productivity? This is multi-level marketing logic.
That’s not how money works, though. Stock buybacks literally involve giving money to shareholders, who then have three choices: they can consume it, invest it, or stuff it under their mattress. Since almost no one does the last of those, the money spent DOES get invested in firms or workers, just different ones. As you say, there are no Scrooge McDuck vaults.
Basically, the money is no longer trickling down enough. It continually moves around from rich person to rich person to occasionally upper-middle-class person, either as stocks or luxury-good consumption. Not in a mattress or vault, but it isn’t being converted efficiently into utils per person, either. The lower classes are locked out of the benefits of these money movements.
(Which isn’t to say a communist system would fix that. But there’s certainly an argument to be made that we’re digging a Molochian local minimum.)
Given that the largest owners of stocks, by far, are middle-class pension funds, this seems unlikely. And someone has to make those luxury goods.
isn’t efficient compared to what, exactly?
Prioritizing profit sharing over dividends and buybacks?
It depends. A P/E that’s too high is dangerous; companies whose stock valuation outpaces growth risk making themselves vulnerable later. Buybacks are only sustainable as long as they accompany real growth above that which is already priced in, and we’re in a hell of a bull market right now. So right now, buybacks are making money for stockholders who own legitimately undervalued companies and making overvalued companies’ positions more precarious; a buyback makes the long position riskier, but investors seem relatively insensitive to market risk right now (but maybe that’s just me spending too much time on /r/wallstreetbets), so it doesn’t show up as much as it maybe should. Remember that the long positions that benefit from the buyback but don’t get bought out see gains that don’t get realized until they’re cashed out, so the “value” they obtain from the buyback is somewhat ephemeral, insofar as the buyback’s effect is limited to altering supply; it’s a wealth transfer from those who are there at last call to those who aren’t in the long run, but it sure does look like those drunks are living it up for now.
Long story short, companies can’t continue to buy back forever, and the trend toward buybacks will 100% screw people over when the downturn comes. Investors who are idiots who think they can time the market will mostly get fucked when this happens. If the companies engaging in buybacks realize this, they will revert to dividends eventually, which has its own issues but at least only distributes cash instead of pumping the market too, requires much less effort, and doesn’t really make sense to consider “financialization” IMO. If they don’t, they’ll ride the FOMO all the way to tuna town (this is bad for the long-position investors).
All of the above might be hilariously wrong; I am not a professional. I just took some classes and hang out on /r/wsb to laugh at people losing money. I shift my money between Vanguard equity ETPs and Vanguard bond ETPs because I’m not good enough to beat the market; if I did play I’d be one of the fucked Tesla shorts.
🐻 gang represent, though.
Finally, I have found my people.
Hoarding cash and stock buybacks are opposites. You have to spend cash to buy back stock. But if those are the only two options, the company should buy back stock, because it should return the cash to shareholders rather than sit on a pile of cash that’s not doing anything.
Companies might sit on profits for all sorts of reasons, not the least of which is because they do not know where else to invest. Buying a bunch of capital or retraining employees for some failed venture is a costly endeavor.
In general, it seems to me a lot of people have what are basically magical notions of investment and training, where companies can easily “invest in their workforce” to remarkably improve productivity. I also see this a lot among my fellow co-workers, who think it means the company should pay for their MBAs or whatever. In most cases I see, people are not productive enough to possibly justify this investment. Even their publicly subsidized education was a waste of money, and there is no need to throw good money after bad.
“invest in their workforce” can be as simple as paying them more, as a worker with less stress over their financial situation is a more productive one.
Or, as another hypothetical, “invest in their workforce” could include using profits as a buffer so that the workforce isn’t strictly bound to a target of maximized productivity, allowing for shorter work hours or more vacation without corresponding pay loss, so that health outcomes are better, which saves money on health benefits and on productivity lost to poorer physical/mental health.
“Invest in the workforce” might involve hiring more people to reduce workload per person. Perhaps Amazon workers need not pee in bottles or require step counters if there were enough workers to cover break times and make up for some slack?
As for a personal story, we’ve been struggling with an aging workforce with horrible computer literacy, leading to errors and inefficiency, as well as complaints from the employees feeling left behind because we haven’t given them the space to learn computer stuff, even as we update performance requirements to emphasize it more (leading to dings on their performance and reducing their promotion chances).
Morale has decreased, and engagement in some of the non-work activities (like the department battles in the Halloween group contest) has also waned because there were complaints from senior management that too much work time was being spent on them (lies, we always made sure pre-Halloween work was all done outside of work hours, and it only took a few extra hours the day of), but mayhaps they could take the hit on productivity for a few more days a year to prevent disengagement? They can afford it.
“This is frustrating. Labor is being paid first again. Shareholders get leftovers,” belongs in a movie with a mustache twirler, not real life, and makes me sympathize with those who want to burn it all down.
None of the examples you cite in the first paragraph are investing in the work force, they are just returning profits to labor. It’s getting hand-waved as happy workers being more productive workers, and to some extent that may be true, but that doesn’t mean there aren’t diminishing returns, or that we haven’t already exceeded the optimal point.
Re: computer literacy. Yes, this has been a topic of conversation at every workplace I’ve been at, and some workplaces handled it better than others, but it isn’t as simple as just waving a wand and giving people computer skills. People have to want to learn, and a lot just don’t. At my last workplace, we had basic excel classes open to all, and the only people who ever went were the young ambitious 20-somethings who wanted to show that they had drive, or wanted to make new internal connections.
Then there was my first company, which literally wouldn’t shell out a single dollar for any sort of computer training….hahahahaha.
Did you see the link at the bottom? A good amount of labor is being exploited in the first place; at this point it wouldn’t even be so much investing in them as actually providing proper compensation.
There have also been studies showing that certain work environments benefit from a 4-day work week. And I’m sure many made the same “diminishing returns/exceeding the optimal point” argument against the concept of the 5-day/40-hour week.
Re: computer literacy
Yeah, “basic excel classes open to all” would never work. We have voluntary signups for classes, too, but most people’s natural state will never be to do it on their own, so the company does need to take the productivity hit and set up mandatory training and reinforcement to get these kinds of skill updates through, just as we’re all forced into various compliance trainings for safety and ethics and whatnot.
Shareholders and managers are not stupid. If they don’t reduce working hours, in all likelihood it’s because they don’t think it would increase profits.
Benefits who? Because if it benefited the company, that would mean there are enormous piles of free money waiting to be picked up. I highly doubt this is the case.
And again, I point all of y’all to the fact that the current state of affairs did not exist for millennia. The “market preference” for it was a societal norm that has recently been eroded. The natural state of labor in the absence of mandatory benefits is largely all exploitation, all the time.
As for beneficial to whom:
The “natural state” for agricultural labor — with no corporate exploiters — was to work all day every day for six days a week, and still there was stuff that had to be done on the seventh.
Is there a simple definition for when labor is being exploited?
What makes you think so?
Evidence to the contrary: most people make more than the minimum wage, even in countries with little unionization. That means that most employees have enough bargaining power to make companies treat them better than they are required to, mostly since employees can go to work for a different company.
I don’t think stopping labor exploitation is quite the same as “investing in your workforce.” You do need to pay your employees enough so they don’t leave, and you do need to pay your employees enough so you don’t have morale issues, but human capital accumulation isn’t quite the same thing as not making your employees piss in a bucket (or whatever).
Companies like Amazon and McDonalds rely on a large amount of low- to medium-low skill labor. If they are investing in the business, they are probably investing to reduce the labor, and whoever is leftover might make some more money because the increased capital makes them more productive.
Also, investing in your 40-hour-a-week employees is not optimal. You probably have a lot of ambitious, 50-hour-a-week employees, who are smart, driven, and take initiative. Those are the people you invest in, because those are the kinds that are going to drive returns to the business, because they work hard and they accomplish stuff. You obviously need your 40-hour folks, but they are the people who keep the wheels from falling off the bus and keep the core processes running. Management doesn’t like them for new initiatives because they aren’t exactly the hardest workers.
FWIW, I do think companies can miss out on free money, and won’t really change until conditions force them to. For example, my first company was suit-and-tie mandatory up until 2008, because that was obviously the best way to increase productivity. Then they became business casual, and are finally casual with 3x/week WFH, because they need to attract Millennial employees who demand such things, and because they pay 5-10% less than market wage.
However, while employers can be stupid, employees can also be ungrateful and lazy. My last company had 3 holiday parties (where they served us prime rib and lobster), plus an annual spring outing, plus an annual summer outing, plus 2-3 other miscellaneous events a year, all paid and all open bar, and people were still complaining about how the company treated them. And they made a big effort to promote internally and offer open trainings, which people still did not take advantage of, because… well, I don’t know why.
I probably should have stayed at that company, because right now I would have a fighting chance at the next AR Manager position (it’s not hard to stand out when everyone else is basically a lazy piece of shit out of an Ayn Rand novel). As it is, I would probably need to get a CPA and/or MBA to advance from where I am, and I don’t really want to invest the time or money.
In that specific case, it makes sense. American decided to just give their workforce more money. Not profit-sharing, not to get a new contract with better work rules or anything, just “let’s give our employees more money”. Which would be a lot more effective if they weren’t heavily unionized, with a workforce to match. As it is, they really can’t do anything with the money to make the workforce better. (Their Tulsa maintenance base excepted, because those guys are great.)
“$45+ billion for Apple shareholders, nothing yet for Apple workers”
It’s the shareholders’ money. It’s like talking about how I have $45000 in the bank, and have the audacity to withdraw it, rather than transfer it to… Apple employees or whatever.
Same opinion about the other quote.
It would be one thing if companies were actually getting money out of an increased share price, but buybacks are the opposite case: the company pays money out. So shareholders have done essentially bupkis to have “earned” any piece of that profit.
I don’t disagree with the concept of dividends at all, it’s the prioritizing of shareholders over employees that’s unbalanced, and really puts the lie to trickle-down economics.
How do stock buybacks prioritize shareholders over employees any more than dividends do? What were shareholders supposed to have done to have “earned” any piece of the dividend, other than invest? I think you don’t understand what equity shares are on a pretty fundamental level. If you want employees to have more money, by all means go and give them some of yours.
@AG Here is how a company typically works: shareholders (the owners of the company’s assets) ultimately make the decisions (though in practice, in large companies, they largely delegate decision-making to directors and executives). They pay employees an agreed-upon salary, typically the minimum the company needs to pay so that employees work conscientiously, and don’t go to a different company. Whatever is left over from the company’s income after salaries and other costs belongs to the shareholders. (In this sense “Labor is being paid first again. Shareholders get leftovers” actually describes the normal state of affairs. However if executives decide to pay employees more than what they have already agreed to work for, for no good reason, they are essentially breaking their fiduciary duty to the shareholders.)
You seem to suggest that if a company makes a high profit, it should pay employees more than otherwise. Let’s see how that would work out in a simple model. A company is expected to make between $800M and $1200M next year (depending on management decisions and luck), with an expected value of $1B, before salaries but after other costs. It has 10,000 employees. They get paid a base salary of $60,000 each, plus 50% of what’s left over; the other 50% goes to the shareholders. So salary will be between $70,000 and $90,000 (with an expected value of $80,000), while shareholders’ profit will be $100M to $300M (expected value $200M).
The more usual alternative is that employees get paid a fixed $80,000 (or perhaps $70,000 to $90,000 depending on their personal performance, with an average of $80,000), while shareholders get a profit of $0 to $400M (expected value $200M). This is better for the employees, since their salary doesn’t depend on something they can’t do anything about (an individual employee has negligible influence on the entire company’s profit). It also has the benefit that it more strongly incentivizes the shareholders, who make the important decisions affecting the company’s performance, to make good decisions.
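The two pay schemes in this toy model can be checked with a few lines of Python (the figures are exactly the ones above; this is just the comment’s arithmetic, not a claim about any real company):

```python
# Toy comparison of the two pay schemes: profit-sharing vs. fixed salary.
# Company income before salaries: between $800M and $1200M.
EMPLOYEES = 10_000

def profit_sharing(income):
    """Base $60k each, plus 50% of what's left over; shareholders get the other 50%."""
    base = 60_000
    leftover = income - EMPLOYEES * base
    salary = base + 0.5 * leftover / EMPLOYEES
    shareholder_profit = 0.5 * leftover
    return salary, shareholder_profit

def fixed_salary(income):
    """Fixed $80k each; shareholders get everything left over."""
    salary = 80_000
    shareholder_profit = income - EMPLOYEES * salary
    return salary, shareholder_profit

for income in (800e6, 1000e6, 1200e6):
    s1, p1 = profit_sharing(income)
    s2, p2 = fixed_salary(income)
    print(f"income ${income/1e6:.0f}M: sharing salary ${s1:,.0f} / profit ${p1/1e6:.0f}M; "
          f"fixed salary ${s2:,.0f} / profit ${p2/1e6:.0f}M")
```

Note that both schemes have the same expected totals; they differ only in who bears the variance.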
You might retort that the salary should instead be $80,000 plus 50% of what’s left over. Then you are basically arguing that a bigger share of the company’s income should go to the employees and a smaller share to the shareholders. However, if we made such a requirement, people would invest less, since returns would be lower. With less investment, a given amount of work would produce less, to the point where employees would end up worse off. If you are interested, I can try to describe a model of this.
So, if a company has made a high profit, perhaps because good management decisions led to increased efficiency, that profit belongs to the shareholders. However, especially if the company’s success can be copied by its competitors, production in the sector will expand as companies try to reap the high profits. That leads to a higher demand for workers and a higher supply of products, which will eventually push salaries up and prices down.
What do you mean by trickle-down economics? It mostly refers to a straw-man argument that the large earnings of the rich will somehow eventually trickle down to the poor. That’s not actually the main argument for free markets and low taxes; it’s that taxes and market restrictions create various inefficiencies. Eliminating these inefficiencies leads to more wealth being generated in total, to the point where most people may end up better off in the long run, even those who get a smaller fraction of the pie than in a system with a high level of redistribution.
Stock buybacks are a distribution of profits rather than “generation of profits from financial activity”. There is probably Enron-type trading going on, but I expect most financial activity is normal, like vendor financing. If GM sells a car for $100 profit and makes $1000 from financing the purchase, it’s making most of its money from financial activity, but that would be a reflection of the competitiveness of the car market rather than de-industrialization of GM where it is shifting activity from manufacturing to financing.
In an environment where the sellers are infinitely greedy, then the price of goods tends towards the minimum.
This is of course, nothing surprising to be even called a paradox now and just a part of what was described as the invisible hand.
I was going to try and discuss which sayings can become mundane rather than paradoxical (e.g. the phrase “can’t be in two places at once” is becoming less true with increased communication technology and might be obsolete with ems; “there’s no such thing as a free lunch”, while unlikely to go away, might become more situational with pressure from pro-charity and UBI factions)
But then I realized there’s another paradox here-
Why is it still viewed as an expected result?
It has to be a repeated iterative game and the end result is rather poor for the sellers.
Shouldn’t the expected result from undercutting the competition be, rather than completely dominating the field, that it starts a price war that ends in no profits at all?
In fact, any such strategy that ends with all of the sellers worse off than they were wouldn’t be adopted in the first place. There are some ways around it – assuming an infinite discount rate, or new competitors with no overhead, etc. – but these seem quite contrived compared to the basic concepts.
Of course in reality humans are not that rational or think ahead, but these examples tend to describe them specifically as perfectly rational and greedy beings.
Perfectly competitive markets would have no economic profits, but they might have accounting profits. That just means the opportunity cost matches the profit for the entrepreneurs. Whatever profit they make running a business is just enough to get them to run a business instead of doing the next best thing.
You could look at this really simply as a foregone wage… like, if I make $100k in my current job, my small business needs to pay that much to make it worth my while. There are substantially more factors than that at play, but I guess that would be a reasonable intuition at first.
Different industry organizations will have different results for exactly the reasons you describe.
Unless they literally gain utility from the act of undercutting, why would they do so if a very predictable result would be the loss of profits?
I’m trying to state that it might be rational to believe prices will not always tend lower, even without any collusion, overt or otherwise.
Because if they don’t, somebody else will. The only point at which there is no incentive to steal customers away from your competitors by undercutting is when the prices are already so low that lowering them further would result in negative economic profits. Hence, as the market approaches perfect competitiveness, economic profit approaches zero.
Yeah, kinda expected that. (Which is why I feel Moloch represents both positive and negative effects of such things)
And similarly, shouldn’t we believe as people are trying to fight against the negative effects of coordination problems like pollution, there would be similar thoughts and strategies involved for this?
In theoretical perfectly competitive markets, they would fail every single time, because there is free entry into the market. You cannot solve a coordination problem because someone else will come into the market and undersell you.
In the real world, yes, there can be anti-competitive agreements, but without government force you might run into some real problems keeping them in force throughout a whole industry. I’m not sure how effective that tech anti-poaching agreement is, for instance, when California will not enforce non-competes and there is a booming start-up scene.
Coordination problems like pollution get solved by government oversight, the kind of which cannot be brought to bear to simply dictate a higher market price in most markets, at least not in the modern US. That might change in the US of 20 years from now, but that won’t be because we created new methods to fight Moloch; it will be because the ideology of the US changed to allow the US government to back cartels.
Possible reasons why a perfectly rational actor would see price cuts as beneficial:
1) As Paul Brinkley states below, sellers often think they have a strategic advantage, such that they can keep the same absolute profit margins while undercutting the competition and getting a higher market share.
2) New sellers can enter markets, and selling on price is a good strategy to get into a market if prices are artificially inflated.
3) As long as it is somewhat likely someone starts selling for less, there’s some incentive to cut prices first, because the market share increases only happen for those who cut prices before others.
4) As price falls, quantity demanded increases (for most goods); selling for cheaper means that there are likely more customers; depending on the slope of the demand curve, this could lead to an increase in absolute profit.
5) Maybe market share increases are “sticky”, and continue after other sellers drop prices. Maybe consumers make purchase decisions basically along the lines of “if newSeller.valuePerDollar() > currentSeller.valuePerDollar(): switch to newSeller”, so other sellers dropping after the first one still leaves the seller who originally undercut with a higher market share, potentially leading to more absolute profit than before.
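The switching rule in (5) can be made concrete with a toy simulation (the sellers and numbers here are my own illustration, not anything from the thread):

```python
# Toy model of "sticky" switching: a buyer moves only when a new seller
# strictly beats their *current* seller on value per dollar.
def best_seller(current, candidates):
    """Return the seller the buyer ends up with, applying the sticky rule in order."""
    for seller in candidates:
        if seller["value_per_dollar"] > current["value_per_dollar"]:
            current = seller
    return current

incumbent = {"name": "A", "value_per_dollar": 1.0}
undercutter = {"name": "B", "value_per_dollar": 1.1}   # cut prices first
follower = {"name": "C", "value_per_dollar": 1.1}      # matched B's price later

# The buyer sees B's cut before C matches it, so B keeps the buyer even after
# C offers identical value: the early cut was "sticky".
chosen = best_seller(incumbent, [undercutter, follower])
print(chosen["name"])  # B
```

The design point: because the comparison is strict (`>` rather than `>=`), merely matching a price never wins a customer back, which is exactly what makes the first mover’s share sticky in this model.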
Huh? There’s no first-mover advantage in a price war, market-share-wise.
Efficiency, distribution, yes.
They also might reap a small profit if the competitors cannot react fast enough to undercut again, but that’s not market share.
That should already have been factored into the current price.
As in, the current producers have the same incentive and should have already lowered the price to maximize profit, as long as we are assuming identical supply/demand curves (a big assumption, but I have not seen any intro econ books that don’t assume it).
5 (and kinda 3) –
That provides evidence the undercutting should stop (or become less viable) after the first.
The question then becomes, why would the competitors undercut the SECOND time then?
…I guess so; this is an argument about simplified economic models where information gathering and stuff is free, so there isn’t really any time where the undercutting seller is selling for cheaper.
…why not? More people are buying their products than other sellers? Isn’t that basically what market share is? Is there a technical reason this isn’t the case?
…true; If that’s the case, then that’s where 1) comes in; if you think or do have a supply curve shifted rightward compared to your competitors, then you sell for less; otherwise, you don’t, but that’s where the price is assumed to be, anyways. Besides, I’d doubt that products are ever clearly at the equilibrium quantity and price in reality, so there’s wiggle room.
…because selling at less than the competitor’s price is still able to do the same?
It’s not market share advantage in the sense that they’ll lose that market share the next time another competitor undercuts them.
In the end, no one “wins” this kind of price war, as it only ends when no one makes a profit.
And if it’s sticky, then competitors might not want to undercut anymore.
Because while you get more money by completely capturing the market for slightly less profit, selling cheaper for a marginally greater market share isn’t necessarily more profitable.
I.e. 3 buyers with $2 profit per sale = $6 profit > 4 buyers with $1 profit per sale = $4 profit.
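That tradeoff, generalized slightly (the numbers beyond the 3-buyers-vs-4 example are my own):

```python
# Cutting price trades margin for volume; total profit can go either way.
def total_profit(buyers, profit_per_sale):
    return buyers * profit_per_sale

# Undercutting wins one extra buyer but halves the margin:
before = total_profit(3, 2.00)   # $6
after = total_profit(4, 1.00)    # $4
print(before > after)  # True: here the price cut isn't worth it

# But with a flatter demand curve, the extra volume can dominate:
print(total_profit(8, 1.00) > before)  # True: $8 beats $6
```

Which case you are in depends on the elasticity of demand, which is point (4) in the list above.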
I don’t know that this actually follows. Price fixing or other means to remove competition seem much more likely for infinitely greedy actors. After all, if you know that competition purely on price results in minimum prices, then an infinitely greedy actor has strong incentive to not compete purely on price.
Rational sellers often also have reason to believe that they have some sort of strategic advantage over their competitors. If you can produce widgets for two-thirds the price, then you can sell them for 95% of what your competitors do, and make more money per widget. Or your accounting could be better, as ADBG suggests, or your workers are more efficient because you feed them better for the same cost, etc.
All competitors might think this at the same time, and recognize that they all think it, and still compete on the premise that they can strategize better than the rest. And they may all believe that if they keep this up long enough, then they escape to a region with much less competition and the profit margin stabilizes somewhere comfortably above zero.
Also, rational sellers could still simply enjoy selling what they sell, more than anything else they sell, while recognizing that they’re going to have to sell something.
I could be wrong, but I think you’re misinterpreting what ADBG said. As I learned it in microeconomics (quite recently, hence my hedging), to say that they’re making accounting profit in a market in which there is no economic profit does not mean they have better accounting. You can have accounting profit while having no economic profit; in fact, in order to break even in terms of economic profit, you have to have accounting profit.
This is weird. But it’s because accounting profit (in my understanding, and this is obviously vastly oversimplified) is just revenue minus operating costs. In contrast, economic profit is revenue minus total costs — including, most importantly, opportunity costs.
To borrow ADBG’s example:
Let’s say I make $100k in my current job, but I’m confident I can do better on my own. So, I quit my job and start my own business. It does great, it seems! It’s producing widgets at a cost of $0.90 per widget (including materials, cost of capital, labor, etc), but selling them for $1.00 each, making an accounting profit of $0.10 per widget. My company manages to sell a million widgets. Cool, that’s $100,000 in pure profit! I’m coming out way ahead.
But not quite. That’s $100k of accounting profit. To measure my total economic profit, we need to factor in my opportunity cost — what’s that? Oh, jeez. I’m missing out on $100k of income every year, since I quit my old job! So when you subtract opportunity costs as well, my total economic profit is… $0.
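The distinction can be written out directly (figures taken from the widget example above):

```python
# Accounting profit vs. economic profit, per the widget example.
def accounting_profit(revenue, explicit_costs):
    """Revenue minus out-of-pocket (explicit) costs."""
    return revenue - explicit_costs

def economic_profit(revenue, explicit_costs, opportunity_cost):
    """Accounting profit minus what you gave up to be in business at all."""
    return accounting_profit(revenue, explicit_costs) - opportunity_cost

widgets = 1_000_000
revenue = widgets * 1.00          # sell at $1.00 each
costs = widgets * 0.90            # produce at $0.90 each
forgone_salary = 100_000          # the old $100k job

print(accounting_profit(revenue, costs))                # 100000.0
print(economic_profit(revenue, costs, forgone_salary))  # 0.0
```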
So that’s a simplified picture of how you can make accounting profit without making any economic profit. What’s the long-run equilibrium of this market?
Well, let’s see what I do personally first. If my accounting profit drops down to $90k, even though I’m making money on paper, I’m actually losing money every year I’m in business. So, I wind down the business, go back to my old job, and resume making $100k a year. What if my accounting profit stays stable, and I continue making no economic profit? Well, ceteris paribus, I’ll just keep doing what I’m doing. As you alluded to, I might have other incentives that affect this decision; if I find that running a business is way more stressful than my old job, I might leave the market and go back to it, etc etc. But again, all else equal, I’ll just stay in the market. But on the other hand, if my accounting profit jumps to $110k, awesome! I’m making economic profits, and going into business was a great idea.
Such a great scenario won’t last, though. Assuming perfect competition (very bold assumption!), or even just low barriers to entry into the market, other people are going to notice that my firm (and probably the other firms) are making economic profits, and start their own businesses selling widgets. This increases the supply of widgets, driving the price of widgets down, and thus decreasing the profits of all the other firms in the market. What happens then? Well, see the above paragraph. Every firm will do their own version of that calculation: no economic profit -> status quo, economic loss -> leave the market.
So, as long as some firms are making economic profits, that will attract other firms into the market, thinking they can also make economic profit in the same way. More and more firms will enter, driving the price down. Along the way, some firms’ economic profits will go negative — they’ll make losses (remember, even if they have accounting profits!) — and they’ll leave the market. (Note: the first ones to go will be the ones least able to tolerate price decreases — the ones with the slimmest margins at the same prices — the least efficient ones!)
By the same logic, if total economic profits in the market go negative, the firms making losses will leave. What happens at the end of all this entering and exiting? Well, the market has finally reached equilibrium: with precisely zero economic profits being made in the whole market. (Again — even though every firm is making accounting profit! Since every firm will have an opportunity cost to enter it, which must be balanced off to make an economic profit.) Crazy how that works! But everyone is responding perfectly rationally to incentives the whole way through.
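The entry/exit dynamic described above can be caricatured in a few lines (a toy model with made-up demand and cost numbers, not anyone’s actual market):

```python
# Toy entry/exit loop: firms enter while economic profit is positive,
# exit while it's negative; price falls as the number of firms grows.
def market_equilibrium(firms, demand=120.0, opportunity_cost=10.0):
    for _ in range(1000):
        price = demand / firms            # more firms -> lower price
        economic_profit = price - opportunity_cost
        if economic_profit > 0.5:         # profits attract entrants
            firms += 1
        elif economic_profit < -0.5:      # losses drive firms out
            firms -= 1
        else:                             # roughly zero: equilibrium
            break
    return firms, demand / firms

firms, price = market_equilibrium(firms=3)
print(firms, price)  # settles where price is about equal to opportunity cost
```

The point of the sketch: no firm is coordinating with any other; each just responds to the sign of its own economic profit, and the market still converges to zero economic profit.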
Once again, this is the simplest, econ 101 model of how markets behave. As you point out — there are a lot of complicating factors. And few markets are actually in a state of perfect competition. But this is my best explanation of the fundamental underpinnings behind OP’s question. Sorry for the long post — this was fun to write out, and kinda functions as studying for my microeconomics exam as well… so if anyone notices any mistakes, please point them out!
Okay, but there are lots of markets that I, a prospective entrepreneur, can enter. Which one should my fledgling firm compete in?
Why, the one with the largest economic profits, of course!
(But then consider costs of entry, and your own expertise. And other stuff. But whatever. Simple models!)
Okay, yeah, even in perfect competition, no market is probably ever actually going to enter and remain in an equilibrium state. But they might get close…
There’s a pretty easy (*cough* still simplifying *cough*) lower bound we can put on opportunity costs of money invested. How? Well, if you can by default get, say, 6% a year by throwing your money in an index fund, then if you want to be making economic profit, your business is going to have to return at least 6% annually on your original investment!
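That lower bound is just a hurdle-rate comparison; a minimal sketch (the 6% figure is the hypothetical above, and the investment amount is my own round number):

```python
# Does the business beat the default option (e.g. an index fund)?
def clears_hurdle(annual_return, investment, hurdle_rate=0.06):
    """True if annual return on the business beats the index-fund alternative."""
    return annual_return >= hurdle_rate * investment

investment = 500_000  # hypothetical capital tied up in the business
print(clears_hurdle(25_000, investment))  # False: $25k < 6% of $500k = $30k
print(clears_hurdle(40_000, investment))  # True
```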
*Final ask for any corrections or commentary! Help a college student out with their exams! An Effective Altruist Endorsed cause!
I’m not an economist. But this looks like a good synthesis of what I know about microeconomics.
The one thing I’d add – which is probably too much detail, but might belong in appendix 4 – is that people really making decisions of this kind have to look at two other things: taxes, and the possibly differing incentives of managers and owners. Three, if you count cost of capital.
In general, earnings are taxed differently depending on a bunch of things, and salary/wage income tends to be the hardest hit. And lots and lots of executives manage to take home increasing amounts of money even while their companies fail to make economic profits, or sometimes even accounting profits. And the cost of capital varies depending on your situation.
So all things simply aren’t equal 🙁
twicetwice is correct – I’d never seen the term “accounting profit” before, and didn’t recognize that it was a term of art.
The post overall adds more detail to what I think of as the usual reasons to start a business despite economic profits tending to zero.
I notice that in the first example, if I’m making $100k in yearly profit in return for giving up a $100k salary, I might not know that that will end up being the tradeoff. Perhaps I thought I could make widgets for only 89 cents. Or possibly more likely, I expect I can sell more than a million, but my first year ends up being sluggish. In general, one can expect a great many factors to influence my earnings, and I might calculate, either meticulously or BOTE, that I’m more likely to be ahead than behind.
This is what drives my response to the OP. An observation of economic profits may attract new entrants, but the OP is positing that they know those profits will tend to zero, and asking why they’ll enter anyway.
Good luck on your exam!
In a competitive market, prices don’t tend to a minimum, they tend to the market-clearing price, which includes the market-clearing rate of profit (the market-clearing price of delaying consumption and taking a risk). The market will definitely be competitive if it has low barriers to entry; if there are high barriers, existing companies may form a cartel (though they have to be careful not to jack up prices to the point where a new company enters).
Are designer purses so expensive because the people who sell them are altruistic?
I was reading pundit/wonk Matt Bruenig’s thoughts on Nordic Socialism and later response to the Trump administration’s report on socialism in a few places like:
He’s been making the case that the tendency you tend to see amongst a certain class of right-wing pundits to insist Venezuela is real true and honest “socialism” (i.e., doomed) and Norway/Finland etc. are just “social democracies” or “free markets with safety nets” (i.e., good) is intellectually dishonest, because they are mostly the same thing – countries with moderate capitalist private sectors that exist under a government that exerts a lot of control over some critical sectors of the economy, high state ownership of a number of enterprises, tons of people working for the state, comfort with dirigisme, and so on. His position comes down to, you basically have to be consistent, if you’re gonna call them “socialist” or “state capitalist” or “social democracy” or whatever, you have to give them the same label because they’re clearly in the same group. (As a socialist, his preferred outcome is people acknowledging that Nordic Socialism is an Actually Existing Socialism and not just capitalism tamed)
What is the steelman case that Venezuela and the Nordics really are different in kind?
I don’t directly know the answer to your question, but reading those three pieces I note that he doesn’t actually directly compare Venezuela to the Nordics, which makes me suspect that directly comparing them would highlight the differences fairly clearly. (He also uses total SOEs as a metric for showing how socialized a country is when the idealized version of socialism would have that number at 1 and a large number could just as easily imply a strong capitalist foundation with elements of a mixed economy as it could a strongly socialist leaning economy).
For one, Chavez nationalized the assets of foreign oil companies. This was, uh, bad for business.
Norway hasn’t done something similar.
Edit: Nationalizations under Chavez.
Uhm.. Because they never gave foreign oil companies much of a foothold in the first place? Equinor, formerly “Statoil” (Literally, “State Oil”) is the dominant player, and the private firms are paying a 75% tax on oil profits.
Norway gave foreign oil companies more than a foothold in its waters.
Why Statoil developed into a competent company while PDVSA, Venezuela’s state oil company, did not, is a different discussion.
Someone (I think here) wrote a while ago that PDVSA worked quite well under state ownership until Chavez messed it up. (Edit: Douglas Knight has already said it in this thread.)
That was probably me, i had figures and charts backing it up too, but i’m not going to trawl the archives in search of the relevant posts.
I don’t know the situation in Venezuela, but surely, there’s a difference between taking somebody’s property (i.e., nationalizing), and not giving it out in the first place?
But given dominant state-owned companies, I think that model is likely to work better with a low level of corruption and clear separation between ownership and execution. Again, not sure how Venezuela does, but this might explain some of the difference in outcome.
(Also not sure why the tax rate is relevant, but pumping oil is practically a license to print money, and the tax is a price companies are willing to pay for the privilege. The business is still by far the most profitable, as are their wages.)
PDVSA did develop into a competent oil company when Venezuela nationalized 100% of the oil industry in 1976. Whereas Chavez nationalized only a small part. And PDVSA seems to have started falling apart when lots of employees quit in 2002 to protest Chavez, long before he nationalized any other company.
By “quit in protest” I meant “replaced by scabs during a long-running strike.” Note that this was not a strike protesting anything specific to the oil company, but simply opposition to Chavez and socialism in general, “better dead than red.” If the workers flat-out refused to work with Chavez, it seems hard for me to blame him for replacing them. I often see people making up excuses for the strikers, like that they were protesting short-sighted management of the company that would eventually destroy it and I wanted to push back against these fabrications. But now I think I got the facts wrong. The workers were replaced after the strike ended. I still find it hard to blame Chavez for trying to take over the union to prevent a repeat of the strike, but he probably replaced far more workers than necessary to accomplish this and he probably did use the company as a patronage machine.
They were striking because Chavez was taking money away from maintaining the oil infrastructure and using it to fund his social programs, a decision that they rightly thought would prove utterly disastrous.
Now, I don’t deny that Chavez had an absolute moral right to fire them for refusing to implement his policies, but it’s disingenuous to imply that they were doing it out of pique.
There’s no probably about it. Besides the loyalty oaths, the payroll tripled under Chavez.
Could you provide evidence for this?
All written before the shit really hit the fan.
That last line suggests that these links are about mismanagement, but that’s not what I’m asking about. Indeed, I said that the company fell apart. You said that the reason for the strike was “taking money away from maintaining the oil infrastructure.” Was that the reason? Do any of those links discuss this?
Your third link says “in protest at the politicisation of the company,” which is a lot less specific. Maybe that’s code for the specific fears, but I see no reason to believe it. Also, the fact that the oil strike was part of a general strike should make you skeptical of the claim that it was about oil. The first link says “senior generals are worried that the rival Chavista clans may tear the industry apart,” which sounds closer to mismanagement. But it isn’t the strikers who said that and it doesn’t seem to be really about the oil industry; if things are destroyed because they are up for grabs, that might as well describe all of Venezuela.
From the first article
“His energy minister, Rafael Ramírez, claims that costs have been cut by 40% and payments to the government will rise. Mr Ramírez touts an expansion plan to boost the industry’s output to 5m b/d by the end of next year.”
“without maintenance and fresh exploration, Venezuela’s oil output falls by 25% each year, as wells slow or become blocked. The critics say that such spending has been slashed: fewer than half as many drilling rigs are operating as were in use last year. They also accuse PDVSA of deliberately over-exploiting some fields, to the detriment of future production.”
Payments to the government have to come from somewhere, and those critics have since been proven right.
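For a sense of scale, the 25% annual decline quoted above compounds quickly (the 3M b/d starting figure is my own round number for illustration, not a quoted statistic):

```python
# Compounding an annual decline in output, per the 25%/year figure quoted above.
def output_after(initial, years, decline_rate=0.25):
    """Production after `years` of compounding annual decline."""
    return initial * (1 - decline_rate) ** years

# Starting from a nominal 3M barrels/day:
for years in (1, 3, 5):
    print(f"after {years}y: {output_after(3.0, years):.2f}M b/d")
```

At that rate, output falls below a quarter of its starting level within five years, which is why slashed maintenance and exploration spending is so hard to reverse.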
Those critics may have been proven right, but they weren’t the strikers.
(And just because the critics foretold doom doesn’t mean that their specific criticisms were correct. Your links suggest that embezzlement was a much bigger problem than the planned siphoning of money.)
A distinction without a difference, in this case.
Important note: most oil production in Venezuela is done by the national oil company Petroleos de Venezuela (PDVSA). It has been having difficulty maintaining production since, in response to a general strike in 2002, the government fired most of the personnel and replaced them with politically reliable cronies. As a consequence, payroll has ballooned while production has been steadily declining as corruption and incompetence continue to take hold. The nationalization of foreign oil companies since then has certainly not helped the situation, but the root cause is really gross mismanagement of domestic assets. Prior to 2002, PDVSA was a very well managed oil company staffed with top-notch talent.
This is a problem unique to nationalized/centralized industries, where a single bad manager can ruin an entire market. In a free market, competitors would take over where the mismanaged company failed to meet production/demand/whatever. One of the problems with socialism is that its proponents assume that there’s a competent class of ego-free technocratic super-managers just waiting to step-in and make everything run smoothly, but that’s not reality, and there’s no fail-safe built into the system when it goes awry.
Left-wing pundits said it too, at least until 2015 or so. I’d have considerably more respect for Bruenig’s opinion about what is and isn’t honest if he’d acknowledge that point.
There is a world of difference between shoveling money to certain groups (the poor, the elderly) and trying to actively direct the economy. The two are often correlated, of course, but they don’t have to be. You can make a very Hayek-esque welfare state that gives people a lot of money but works to avoid disrupting price signals. The Nordics (and Singapore for that matter) aren’t always that, but they’re more like that than almost anyone else. And given the sheer ratio of oil to people in Norway, it probably shouldn’t be used as an example of anything if you aren’t trying to stack the deck.
Well to be fair there was at least a thought to compare Norway to another oil rich nation, though the actual comparison didn’t happen.
Can you? That is non obvious to me.
So Hayek’s most fundamental point is that prices are information that is (A) incredibly useful and (B) impossible to obtain any other way. A Hayekian approach avoids subsidies, regulation, and price setting as much as possible in favor of direct provision of services or transfers of cash. Both of these have the virtue of being transparently priced and of not interfering with the normal function of markets any more than any other sort of consumption. The idea is to work through markets to accomplish your goals, not against them.
At least, that’s the theory. Whether such a thing is possible in practice is an open question, because indirect welfare usually has the appeal of seeming cheaper. A negative income tax must be paid for; minimum wages sound like a free lunch.
The information of prices goes both ways. If you are taxing and transferring you have to tax production (in some way), pushing up the costs of production and thus pushing up the prices. You can get around this if you have enough resource wealth to tax where the impact on the global scale of the commodity won’t be much.
However, prices work in the other direction too: they tell producers what is valuable to produce just as much as they tell consumers how hard it is to produce. They also imply that the person who is purchasing from you is in turn producing something of value, which creates an expectation (for lack of a better term) that the money you get for your good can be spent on another good. This is the shared illusion of money, which people don’t really get into, and it is constantly being reinforced every time you go to buy something, encouraging you to continue to produce. I don’t see how you can run this cycle of taxes and transfers without eventually diminishing the producers’ desire to produce.
Right, but you don’t, for the most part, change prices relative to one another. I mean, in the real world, all taxes are distortionary in some way, but unless your tax code is really weird, the effects aren’t too terrible. And because the prices don’t change relative to one another, you’re not screwing up markets as badly as if you just start mandating prices all willy-nilly.
If you assume a perfectly evenly distributed tax rate, I guess (maybe), but that only leaves prices unchanged relative to other prices; there is always a shift against saving, i.e. buying nothing with your money.
One difference I can posit, is that socialism will tend to succeed or fail much more dependently on the innate selfishness of its most powerful members. One bad apple can do a great deal of damage; an angel can do a lot of good. Norway simply had more selfless powerful members than Venezuela had.
The real answer, though, is much more likely “it’s complicated”.
Recently I’ve come to the opinion that socialism is extremely susceptible to corruption, so a state’s ability to handle some socialist measures is inversely proportional to how corrupt it is. A nation with low corruption, functional institutions, honest people, and high levels of social trust can handle a moderate amount of socialism. A corrupt nation with dysfunctional institutions and low social trust will make a complete mess out of even small socialist programmes. The main advantage of capitalism is that it pretty much always works; there is no level of corruption or dysfunction that will make it such that someone somewhere can’t extract value from it.
Thanks for the correction on PDVSA.
Yes, that is exactly my view on socialism. I think many idealists who favor socialism are themselves trusting, trustworthy people who see the world through high-trust colored lenses.
Or they’re power hungry. Hence, bootleggers & baptists.
– On the Index of Economic Freedom, the Nordics are middling on labor freedom (except Denmark which is among the best, despite the high union membership rate), and they are among the best on every other measure except tax burden and government spending. Venezuela is among the worst on every measure except tax burden and government spending. While the Heritage Foundation is probably not entirely unbiased, one could look at their methodology and sources to see how they reach these conclusions (I haven’t).
– When conservatives blame socialism for Venezuela’s problems, I’m not sure that they primarily mean state ownership of companies. My impression is that some of the biggest causes are price controls (in conjunction with wage controls), currency controls and little respect for property rights (from forced nationalizations to populist measures such as forcing some store to sell its stock below cost); Bruenig admits some of these. These measures are often justified with helping the poor or curbing capitalist greed, and they are common in socialist countries.
– Even if some sectors are largely state-owned in the Nordics, I suspect that private companies are usually allowed to enter those sectors and compete with state-owned ones. This forces some amount of efficiency on the state-owned companies. Though even then, state-owned companies have some downsides, e.g. they may continue to operate at loss even if private companies out-compete them.
– In the Nordics, I think state-owned companies are operated much like private companies, in a for-profit manner. That’s still state-owned, but if the state-owned companies operated in a capitalist manner work better than the others, that’s still an argument for capitalism. The arguments for operating state-owned companies differently from a for-profit company are often socialist (though the socialist arguments are often an excuse for operating them in a corrupt manner).
– So, state-owned companies work as well as private ones at the best, and much worse at the worst. That’s reason enough to prefer private companies. It’s hard to tell in advance how state-owned companies would work out in America. Probably not nearly as bad as in Venezuela. My impression is that the American government works worse than the Nordic ones, while its private sector works just as well (except perhaps in the most regulated sectors), so probably not as well as in the Nordics.
– Countries with a flexible labor market tend to have lower unemployment rates than countries with strong employment protections. The unemployment rate in Norway and Denmark is as low as is common in countries with flexible labor markets (3–4%). In Sweden and Finland it’s somewhat higher (~6%), though not as bad as in some other countries with high employment protections (such as Italy and France with ~10%, or Spain with ~15%). One of the main reasons many people want strong employment protections is to protect themselves from becoming unemployed, so the fact that, in the rare best case, a country with strong employment protections can have an unemployment rate as low as a country with a flexible labor market is not a strong argument for them.
– Bruenig writes that a PPP consumption comparison ignores the high healthcare spending in America. If the same treatment costs more in America than in the Nordics, that’s actually taken into account in a purchasing power parity comparison, assuming that the purchasing power statistic is done right.
Rather than repeating what you’ve written above, I’ll just add a couple of relevant links from The Economist and FEE.
The bottom line being that the Nordics seem to be primarily market economies with lighter regulatory and tax burdens, combined with larger welfare spending and related public employment.
There are more details in the links on how publicly owned companies are treated, but essentially, they’re treated as investments with a return to the treasury and not as state-controlled monopolies to benefit the public, which is more the Venezuela model.
Norway’s sovereign wealth fund is of course the biggest outlier, but even that invests by purchasing stock in companies (up to 60% internationally, rather than in Norway itself) and acting much as a big pension fund would, not by nationalizing and controlling companies and industries, as in Venezuela. Purchasing stock in companies is more on the market-capitalist side, as compared to seizing and nationalizing companies or entire industries.
Just want to chime in (being Norwegian as my only qualification) that this is pretty much correct. And that there is substantial pressure to use the oil fund and/or state ownership to “do good”, for instance build expensive but essentially unneeded infrastructure to provide local employment. Remarkably, politicians have mostly been able to resist, perhaps thanks to right-wing populists and progressive SJW causes taking all the attention. (\me reminds self to mention them in my prayers)
Yeah this is the kind of thing that brings countries down, for example, Venezuela. Norwegian politicians have so far been able to resist, per Ketil, and with such a small population it is probably easier to maintain reasonable control. But I don’t think it will last forever. At some point Norway will have a crisis — folks will say “use the oil resources,” and it will be downhill from there.
What makes you so sure that it will? It’s had crises, and the country is still doing exceedingly well.
Banally, nothing lasts forever.
Because Norwegians are human, and the human thing is to give in to temptation. So far they’ve been outliers from normal human behavior, but the way to bet is reversion to the mean. I know very little about Norwegian history, and maybe I am talking out of my butt, but from what I’ve seen every society follows human nature in the end.
Yes, zooman, you are right too. And that’s why in general, betting that a trend will continue for another year is high probability, but betting that an unusual trend will continue for 50 years is low probability.
Heh, “zooman.” That sounds enough like it might be someone’s actual handle that I looked back up before I realized it was me.
zooman. Sorry, I was feeling spunky. I realized afterwards that some might feel offended. It sounds like you were not. 🙂
Every industrialized nation on Earth is technically a mixed economy. Capitalism versus socialism isn’t a binary, it’s a spectrum, and there has never been any civilization that’s reached 0% or 100%. So while there’s technically a sense in which Norway and Venezuela can both be labelled “state capitalist” or “market socialist,” it’s the same sense in which the U.S. would fall into the same categories.
Norway doesn’t have centralized state control over entire industries like Venezuela does; it doesn’t engage in price fixing or currency manipulation like Venezuela does; it doesn’t engage in rampant nationalization of private businesses or frequent redistribution of land/property like Venezuela does. It’s considered “socialist” because it has a strong welfare system, universal healthcare, extensive labor protections, significant government support for unions, and state-owned corporations. The U.S. has welfare, government-funded healthcare programs, labor protections, government support for unions, and state-owned corporations too, so the differences between the U.S. and Norway are a matter of degree, not kind, whereas Venezuela is an entirely different ballgame.
There are two types of people who keep promoting this bone-headed “Norway is socialist” myth: Conservatives who want to conflate welfare capitalism with socialism in order to make people afraid of welfare programs (“you don’t want the U.S. to become socialist like Norway, just look at how Venezuela turned out!”), and leftists who want to conflate welfare capitalism with socialism in order to make people embrace socialism (“you want the U.S. to become socialist like Venezuela, just look at how Norway turned out!”), but either way it’s an idiotic bait-and-switch. Hell, the leftist version is probably the single most definitive example of a motte-and-bailey fallacy that I’ve ever seen.
Many statements are relative to a baseline, not absolute.
Of course, but when you draw the line in such a way that everything from 0 to 95 is in one category, and everything from 95 to 100 is in the other, then I’m going to be skeptical of the way that you’re defining those categories. I’m reminded of Scott’s line about Lovecraftian Parochialism: “If you have to draw the boundary between Self and Other somewhere, then the border of Providence, Rhode Island is just as good a place as any, and everything else from Boston to the unholy abyss city of R’yleh is just different degrees of horrible.”
What makes it even more confusing is that the Nordic countries aren’t even strictly more socialist than the U.S., they’re more socialist in some ways (universal healthcare, free college, stronger welfare systems) and less socialist in others (government regulation of industries and businesses). Denmark and Sweden are higher than the U.S. on the Wall Street Journal’s ranking of economic freedoms! It’s a very strange mentality that places the U.S. in one category, and countries as different as Sweden and Venezuela in another.
tl;dr version: https://i.imgur.com/uMEfk0G.png
Who here is actually offended by someone saying, “Merry Christmas”?
I heard that some Advent calendars are wishing readers “Happy Holidays”
But if I were in a Jewish community and everyone was wishing each other a Happy New Year in September, my reaction wouldn’t be, “you’re excluding me!” it would be more of a, “oh, um, right on! Happy New Year!”
In the US at least, Christmas has become so commercialized and since Jesus wasn’t really born in the season here anyway, I never understood what the legitimate concerns are around wishing someone to be merry for a holiday just in case they don’t celebrate it.
Believe it or not, Christians in late antiquity sincerely believed that Jesus was born December 25th: after Hippolytus mentions it shortly after 200 AD, it becomes widely accepted by the Church Fathers. Apparently the most common alternative was January 6, which was eventually smoothed out into a celebration of the Magi’s arrival.
You’re not from the US right? I think most people here would tell you that neither “Merry Christmas” nor “Happy Holidays” is offensive at all, and that the alleged controversy (in either direction: the right saying “the left is upset at people saying Merry Christmas!” and the left saying “the right is upset at people saying Happy Holidays!”) is just scrounging for clicks.
I’m sorry to say that, growing up in a conservative Christian subculture, I knew some people who really were upset at “Happy Holidays.” They (correctly, IMO) viewed it as one way in which the culture was deemphasizing Christmas and the Christian cultural heritage.
There’s a difference between getting mad at someone being cheerful and inclusive (even if the effect can end up being a bit exclusive, depending on the circumstance), and getting mad at a policy at some store or something that forbade employees from saying Christmas.
But in general getting mad either way is not likely to do anyone any good.
Getting mad may not do any good, but if you are a conservative Christian, I could understand getting irked. It’s not supposed to be an inclusive holiday for non-Christians, it’s a religious holiday.
This is like going to China and wishing everyone a Happy 4th of July, or making a point of wishing every non-Muslim person “Happy Holidays” on Eid. Yeah, Christmas is now commercialized and secularized, but that’s exactly what they don’t like, and “Happy Holidays” is another step in that direction.
FWIW, we only do “Merry Christmas” in our household, “Happy Hannukah” to our Jewish friends, and “Eid Mubarak” to our Muslim friends (when applicable). We do Merry Christmas and Happy Holidays on our Christmas Cards because we have non-Christians on our mailing list.
When being offended pays off, more people find offense, even for something as trivial as being wished a happy holiday for some holiday you don’t celebrate.
Yeah, but when even explicitly Christian organisations are terrified of saying the word “Christmas” in public, to the point where they torture common phrases to force “holiday” in its place, you have to admit something weird is going on.
I went to a Jesuit university built around a gigantic cathedral with a 100-foot-tall cross on the front, and to get into the dorms you had to walk around a statue of Mary. They still called them “holiday cards” and “holiday cookies” and “holiday songs.”
Do you have some reason to think this is true that you didn’t mention? Because it sounds like exactly the sort of egregious exaggeration that made “War on Christmas” a meme about making fun of Christians.
I’d be irritated if someone who knew that I was Jewish and that I don’t celebrate Christmas wished me a Merry Christmas, but I’m not going to make a big deal out of it, and I definitely wouldn’t blame someone who didn’t know.
I wouldn’t even know where to look for such a person.
I’m Jewish and frankly I much prefer hearing “Merry Christmas” to “Happy Holidays.” I think the latter is disingenuous, semi-delusional nonsense. I don’t think people mean it that way – I think they mean it as an earnest, positive expression of good will. I just think it’s dumb.
I view it as an unintentional mechanism for pulling one of “my” holidays into the unbearable Pollyannaism of the lead up to Christmas, though I usually keep it to myself because I don’t want to diminish others’ good spirits.
I really have to disagree. I’ll never call someone dumb or delusional for being friendly.
Yeah, they probably don’t know much about Hanukkah, probably not even the date. But simply acknowledging that my religion exists, that it’s possible I might be celebrating some other holiday at this time of year, is a good step.
But you don’t disagree; didn’t you read my full comment? I don’t tell people off for it. I keep it to myself. I know they mean well; I just don’t like it. I can imagine someone having the same attitude toward hearing “Merry Christmas” instead (that is, not wanting to hear it all the time, but accepting the good will), but I prefer that state since it makes more sense and avoids forcing some weird equivalence with a totally unrelated holiday.
I also think it’s worth noting the distinction between the personal stuff and the corporate stuff – the corporate stuff is worse. Spewing “Happy Holidays” while having Santa Claus in an ad is plain asinine. Just say Merry Christmas. Some fat guy in red has nothing to do with the Maccabees.
Hmm. I get time off work automatically. Doesn’t matter whether I’m part of a religion that cares about anything at that time of year. I get a holiday, and so do my coworkers. Are any of them Christian? Damned if I know. Jewish? Ditto, but somewhat less likely, given the proportion of the holiday pot luck today that involved Indian food.
I don’t think that anyone (other than lizard people) is actually offended by either statement. There are groups of people who are offended by policies or political agitation designed to force one direction or the other. This is more typically a company switching to “Happy Holidays” in a deliberate attempt at being less offensive. That creates the idea that saying “Merry Christmas” was offensive in the first place, which very few people believed.
After that it just looks like CW stupidity on both sides, or as dick says, “scrounging for clicks.”
Even when it’s a company forcing the change, I think it’s less motivated by the idea of avoiding “offense” than it is by marketing to more people. “Buy holiday decorations here!” will get you more business than “Buy Christmas decorations here!”. It’s not like those stores don’t still put up Christmas trees and Santa Clauses, they just also toss in a dreidel in the back.
Beat me to it. Not all attempts at increasing inclusiveness are offense-driven, and if we’re explicitly theorizing that epsilon people cared then the market-driven motive fits a lot better.
Neither alternative offends me as such, but the economicization of holidays does. Damn, I’d been able to maintain my smug equanimity until now 🙁
I have to admit that I feel like the “commercializing Christmas” battle has been lost so many years ago that it’s hard to imagine undoing it in mainstream American culture.
Don’t leave me hanging; are the lizard people hardcore Christians or anti-Christian? I want me some lizard people clerics.
I have a lizard. I shall try to determine if it holds to more than, less than, or only one deity.
Well, given that the lizard people include Obama voters who think that Obama is the Antichrist, I’m guessing the latter 🙂
I was at our company holiday party (yeah, it was in November to save money), and a girl said, “yeah, people are definitely offended by it! It’s not inclusive!”
Yet I don’t think she was personally offended; she was more offended that I was not offended, in not toeing the PC line.
I’m an atheist and won’t presume to be offended on anyone else’s behalf.
There are, however, deep contradictions in militantly proclaiming “happy holidays” while using exclusively Christmas—often explicitly religious—imagery to try selling me products. The products themselves being either non-seasonal or Christmas themed. Look, I get it. Corporations know where the money is in December and it isn’t Hanukkah or any other multiculturalism. I also get that there are people out there that would turn it into a PR nightmare if any corporation actually admitted it. But at the same time, the fact that they are willing to buy into the charade just makes me more inclined to assume that they are all just kind of assholes.
Insofar as either side is actually offended instead of just click-mining, I am almost certain that offense is rooted here somewhere.
I guess the most honest holiday greeting would be “happy hypocrisy”.
Unless people interpret it like this comic. I don’t think you can win.
Personally, I’m going to stick with “happy holidays” unless I know enough about a person to tailor my greeting to them, as that is what I’m used to and cheerful greetings make me happy.
I spent most of my life in a conservative U.S. state, but most of the people I actually know are liberal and many are non-Christian. I have never, not even once, heard anyone express offense at either Merry Christmas or Happy Holidays.
Now that the sports of hurling and camogie have been added to the UNESCO Representative List of the Intangible Cultural Heritage of Humanity (you’re welcome, Humanity) here’s a taster to let you all experience it 🙂
Ezra Klein talks with N. K. Jemisin
Trigger warning for SJW.
A good bit of the talk is Jemisin walking Klein through a world building exercise.
How many continents do you want? One.
Okay, then there’s a big desert in the middle.
What interesting features do you want the people to have? Prehensile tails.
Prehensile tails are commonly a result of living in trees, which aren’t going to be on the edge of the desert, which is where Klein puts his people. How did they get there?
This strikes me as a nice balance between arbitrary fantasy and knowing things about the world.
Much later in the interview, Klein has an interesting theory that pundits tend to imprint on how the world was when they were young.
“How many continents do you want? One.
Okay, then there’s a big desert in the middle.”
I’ve been told this can be avoided if the climate is just right.
The topic sounds interesting, but is there a transcript? I can’t find one.
One of Klein’s previous shows has a transcript, but only that one.
Siderea wonders why rationalists are fleeing tumblr. I’m willing to bet it’s a combination of being deep into various fandoms and an average taste for kink, but I’m guessing. What do you think?
I’m not sure what the proposition you are willing to bet is. Rats are pro-kink/fandom so they flee tumblr because it doesn’t share those views? Or tumblr is too kink/fannish so rats flee?
You may not have heard that Tumblr has implemented a policy of eliminating visual erotica/pornography, and a lot of people are leaving as a result.
There’s no really satisfactory alternative site, but at least Dreamwidth has a strong commitment to not engage in that sort of censorship.
One thing that has come up consistently in response to HW Bush’s death is liberal anger over his/Reagan’s response to AIDS. I have to admit, I have never much understood this, because in my view a pretty effective vaccine for AIDS was invented long ago in the form of a condom. Is this view naive and/or puritanical?
My impression of the issue is that the US government could have responded far earlier and more effectively in the early days of the AIDS crisis. Why they didn’t was probably partly because the groups AIDS hit hardest, such as gays, are no friends of the Republicans. Also, the risk of contracting AIDS was strongly affected by behaviors such as drug use and promiscuity that plenty of people view with suspicion. The more AIDS seemed a self-inflicted problem, the less sympathy the victims could expect, and the less likely it was the government would take dramatic action.
It’s probably reasonable to expect that a Democratic administration would have done more and earlier. But to put the shoe on the other foot, they would probably have been slower than the Republicans to respond to a plague that primarily affected evangelical Christians and gun-aficionados. Whose ox is getting gored and who their friends are matters.
What actions should the federal government have taken in the early AIDS epidemic? As best I can tell from looking at the HIV timeline on Wikipedia, the US medical community, and particularly the USPHS, CDC, and FDA, took it seriously and reacted well, which is about what I would have expected. The CDC noticed clusters of weird cancers and pneumonias in gay men and started trying to figure out what was going on around 1980–1981, when people in the gay community were just beginning to notice that lots of people were getting sick. Researchers nailed down the cause by 1983–84, and there were soon antibody tests for it; there was a clinical trial for a vaccine going by 1987.

This is all the sort of thing I’d expect the US government to do in response to a public health crisis, and I don’t really see how it had much to do with which party was in the White House. Nobody in the top tier of the administration had anything useful to offer there, other than maybe shifting some funding around. A president giving speeches about how terrible AIDS was wouldn’t have done much to speed any of that along. Most of that work had to be done by people who’d spent a decade or two learning how to do it, and who were already in place when the epidemic started. There’s a kind of media-centric desire to know how the president is responding to whatever tragedy is going on, but most of the time, any useful response was decades in the past, building up the infrastructure necessary to respond.
My limited understanding is that a really huge number of resources were spent on understanding AIDS and trying to figure out how to treat it. I am pretty skeptical that this would have been substantially different under president Carter or Mondale. This seems a bit like blaming the king for the drought and crop failure–it’s a natural thing to do, and maybe the king should have stored up more grain for a rainy day, but mostly this is something that just happened and the king had few good options.
The US government could have sounded the alarm much earlier, informing the general public about the disease, risk factors, and preventative measures. If I’m reading this correctly, the Surgeon General didn’t step up and issue broad warnings until 1988, although there was an earlier more low-key report in 1987.
(Could that be right? I could have sworn there was a big hullabaloo earlier in the 80s, maybe 85 or so.)
I don’t have a good sense of how much PR there was. I sure heard a ton about it in high school, and I was a straight kid in a small town in the midwest. But I was also the kind of kid who watched the news and read anything he could get his hands on. I have no idea how this looked elsewhere.
It seems questionable to me that there’s a huge impact of having the president or surgeon general make a public statement. People paying attention would already have known, and gay men who knew a bunch of people who were dying of an incurable disease had absolutely the strongest incentives available to figure out how to avoid getting it. (Though by then, plenty of people had caught HIV at a time when nobody knew that it existed.)
My understanding is that the epidemic spread very quickly because in particular bits of the gay community, extremely high numbers of partners were common. That included some people into the bathhouse scene, as well as gay male prostitutes. A lot of heavily gay communities saw incredible numbers of people die off, like what it must have looked like in Europe when the black death showed up.
For reasons I don’t think are very clear, at least in the first world, vaginal and oral sex are enormously less likely to transmit HIV than anal sex. If HIV transmitted as easily between straight couples as gay couples, I think we’d have had a much, much worse epidemic. And if it transmitted as easily as the flu, the global population would have crashed and even now wouldn’t have come close to recovering.
Looking at the timeline, AIDS was officially recognized in September of 1982 but with no known cause. HIV was confirmed as the probable cause in April 1984. President Reagan instructed the Surgeon General to prepare an official public report in January 1986, and that report was released in October 1986.
So, in principle, April 1984 to January 1986 represents a period where POTUS could have been pushing to inform the public on effective countermeasures but wasn’t. But I am skeptical that this would have made much difference. I distinctly remember that era, and the fact that there was a dangerous new STD predominantly afflicting the gay population was in no way obscure. The idea that there was any significant number of gay men who weren’t already avoiding the bathhouses and/or using condoms, but would have if the retro square outgroup geezer Ronald Reagan had told them to, seems implausible.
JS said it before I could, but count me in as another vote for bathhouse users not listening to fireside chats from a figure they thought was alternately an idiot and a monster.
As I recall (I was born in 1970), public health exhortations in the late 80s focused on some specific points: AIDS wasn’t just a gay problem but rather something anyone could catch, the number of sex partners you had and their sexual history really mattered, and condoms were generally effective protection though perhaps not a sure thing.
Also, while the Surgeon General’s report was released in 1986, the AIDS Mailer didn’t go out until 1988. That may be what I’m remembering. It was a really big deal.
It’s naive– people don’t use condoms reliably, and failing to do so shouldn’t be a death sentence.
I realize that I live in a bubble as a high-conscientiousness, high-IQ, quasi-libertarian person, but this is insane to me. I guess at this point this is mostly a Reddit-style CMV request.
Promiscuity without condoms is high-risk behavior, and people can do that if they want, but they should also accept the dangers involved. The idea that the government should spend billions and billions of dollars to subsidize this behavior is so foreign to me that I simply can not understand it.
In fairness, for a while there people didn’t realize that. In the early seventies it was a pretty common view that the pill (and abortion, worst case) had eliminated unwanted pregnancies and antibiotics had eliminated the problem of STDs, so bare-back promiscuity was now safe. And people definitely acted on this notion. It only became clear later that some STDs were still a problem.
Yes, this is a fair point. But I don’t think what people are angry about is that better funded research could’ve more quickly established that AIDS was an STD.
Also, from Wikipedia, I get the sense that people suspected this pretty quickly. Wikipedia’s History of AIDS page suggests that a popular 1983 book spread the recommendation to use condoms for AIDS prevention.
(Another important point is that condoms may not be as effective for gay men and more likely to break, but I’ve only ever really seen that mentioned in passing.)
This seems like highly motivated reasoning that was very irresponsibly given any official support. It’s like saying “We have a treatment for scurvy and salmonella, so it literally doesn’t matter what you put in your mouth any longer!”
To defend people/men who refuse to use a condom a bit:
There is a very common meme/belief that condoms are one-size-fits-all, even though condoms sized to the penis are actually a lot more pleasant. So quite a few men are probably suffering from strangled-penis syndrome and would be more likely to wear a condom if they had a properly sized one.
Note that many ‘magnum’ condoms are merely longer, not wider and many men may thus be deceived into thinking that there are no wider options, even though those are around.
@Aapje if this makes up a large component of people spreading STDs — men who want to wear a condom but don’t, not because it makes sex slightly less pleasurable by taking away some of the sensation, but instead because their genitals are being strangled by the condom — then that seems like a million dollar bill on the sidewalk.
There’s a combo of factors here: one is that current condom makers have no reason to explain this; two is that “wow condoms suck” is a common complaint but rarely do people get more explicit and say, “condoms cause me pain”; and three is that the FDA has only recently started to allow larger-width condoms (and smaller-width condoms, actually) to be sold in the US. That said, there are businesses trying to leverage this need. Hopefully condoms, like bras, will shed the weird effects that leave many people miserable at present.
Why is the FDA regulating condom sizes? It’s reasonable for them to say that condoms should be the size they’re listed at, but how can it make sense for it to be a fight to get them to permit smaller sizes?
Something something something regulatory capture.
More probably true:
Something something something AIDS, something something something prioritizing ensuring testing standards can be met over ensuring usability.
One of the issues seemed to be that the FDA didn’t have testing methods for alternative sizes.
I have some other theories as well, but I think it is a better topic for a separate thread in the new OT.
I admit to being in the same bubble as you on this. For example, I’m utterly baffled by the decision to ever start using hard drugs. I can never quite understand how people can look around them, see how meth, heroin, cocaine, etc., have utterly laid waste to a huge portion of users, and say, “Sure, pass me the pipe, I’d like to give that a try!”
I imagine the thought process isn’t anything like this, but I can’t quite understand what the thought process actually is.
I believe that the “gateway drug” phenomenon that is so widely derided is in fact at least partially true. If you smoke cigarettes with friends, and someone passes around a joint, you are more likely to try it. Once you try drugs (knowingly breaking the law), the next step – still probably not heroin! – might be a bit of a harder drug, or maybe just more of what you are already doing. Once you’re a well known stoner, maybe trying a harder drug doesn’t feel like much of a step.
You are assuming that the opposite never happens, though: that someone tries a low-end drug, doesn’t like it, and then never tries anything harder, even though they might have had an affinity for the hard stuff.
Well sure, but that doesn’t seem to link to CatCube’s question of how people get to heroin. If 100 people try weed and 15 people hate it and avoid all drugs, and 80 of them just like weed and never go for anything harder, those five people remaining are more likely to try harder stuff from having taken that first step, than for them to jump straight into a crack den and give it a try.
I have my own pet theories about that, but nothing hard to back them up with, and here they are, reasons why people end up trying hard drugs outside of obvious ones (ie trauma that they are seeking escape from), with these things working together.
1. A general sense of seeking out new things/experiences during certain ages of your life.
2. A desire to demonstrate your individual strength/uniqueness.
3. Discovering that the authorities in your life have lied so consistently that you stop really believing anything they ever said.
4. A desire to fit in somewhere.
Or tobacco, or alcohol?
People are very biased to give in to even the mildest social pressure, and also to think “it won’t happen to me”, and in general underestimate risk.
a. Lots of people aren’t very bright or clear-thinking or well-informed.
b. Lots of people do dumb shit when they’re young and repent as they get older. Being hooked on drugs/in prison/HIV+ are all ways that the repent-when-you’re-older thing may not work out as well.
c. Some people engage in pretty self-destructive behavior short of actual suicide–drinking heavily, taking stupid risks, cutting on themselves, etc. Alcoholism, drug addiction, felony records, and STDs are all common side effects.
I mean, 51-year-old highly-educated me can’t imagine taking heroin, but I can sure see how 16-year-old me did a lot of dumb risky shit that didn’t seem so crazy when I was a kid, but that could have turned out very badly for me. There but for the grace of God….
If this is true, an interesting consequence may be that legalization of soft drugs might reduce the gateway drug effect: it would put them in a mentally separate category from hard drugs, while currently they are in the same category.
Likewise, I got a lot of messages as a kid that drugs are bad and addictive, etc., but there was little information about which drugs were actually likely to make one addicted after one or a few tries. If youngsters see that adults have exaggerated the risks of weed, they may be more likely to ignore the risks of hard drugs.
I have a question for you then, @j1000000: what should we do with people who make really bad, maybe life-ruining decisions like this?
I think this is a core fault-line between left and right. Do you believe we should just leave these “stupid” people who make bad decisions to be competed out of existence (i.e., die in a ditch somewhere, maybe committing multiple crimes along the way, etc.)? Or do you think that society has some responsibility to them to try to nudge them onto a “better” path?
I think I know the answer given your self-stated libertarian standing. But I’d be interested to hear what you have to say. I find a lot of self-stated libertarians tend to focus only on the successful and positive aspects of competition and not think about those who are out-competed.
I think this is a split that’s not clearly left/right.
One thread of thought is more-or-less authoritarian–we will rule those bad choices off limits, forbid them, arrest people who make or offer them. A milder version of that is that we discourage the choices through laws and regulations and taxes–you’re allowed to smoke but we’ll make it an expensive pain in the ass.
Both left and right in the US mostly subscribe to this. The authorities in liberal California and conservative Texas can and will put you in prison for forging prescriptions for opioids. There are CW debates on what choices are bad enough to need this treatment, but where there’s consensus that X is a bad choice, libertarians are pretty much outliers compared to liberals/conservatives who are on board with using all the power of the state to suppress those bad choices.
Libertarians, and a fringe of libertarian conservatives and civil-liberties liberals, don’t like this, either because they think preventing those bad choices interferes too much with individual freedom, or they mistrust the state’s ability to distinguish bad decisions from just weird/socially unacceptable ones.
I think most mainstream liberals and conservatives want to keep social safety net programs even for people who made bad decisions in the past–the guy who’s on permanent disability because he wrecked his motorcycle going 90 down the freeway, say. I think there’s more tendency for conservatives to want to limit benefits to people when they’re currently continuing to make those bad decisions, but I don’t think it’s an especially stark split. (By contrast, conservatives are across the board less interested in generous social safety net programs, and that is a pretty stark difference.)
You are setting up a strawman. Nobody on the mainstream left or right in America is against “nudging people onto a better path.” Where the disagreement comes in is what the “better path” even is and what forms of “nudging” are acceptable.
There are some people on the left who don’t think drug use is even a bad thing (there used to be more, in the 60s and 70s). This is just like any other culture war issue.
I am not a libertarian. But my understanding is that libertarians think charity is a good thing, but don’t want the government to be the one doing it. Similarly, you might think brushing your teeth is a good thing, but not want the government to send cops to your house every night to ensure that you brush.
Personally, I think the government should sometimes get involved in charity. But it is important to realize that there are a lot of problems which the government can’t really solve. Well-intentioned government action often creates terrible outcomes. For example, should we try to “nudge” a teenager using pot on to the right path by putting him in jail for a year? Probably not.
Are you also against e.g. government funded lung cancer treatment for smokers?
Can smokers be charged higher premiums if they’re self-insured or can the government tax smokers enough per pack to cover the health costs (on top of regular sales tax)?
With smoking there are obvious solutions to re-internalize potential externalities caused by shared healthcare resources.
People DO use parachutes reliably, at least those who jump out of planes do. Physics and biology don’t care what “should” or “should not” be a death sentence. It’s not like AIDS (or any other STD) is imposed as a penalty of some sort.
Making it hard for addicts to get sterile needles is something like imposing a death sentence.
Also, it’s considered worthwhile for people to have clean drinking water instead of telling them that they ought to be reliable enough to boil it.
I’m no fan of the war on drugs; making it hard to get needles and insisting on hepatotoxic additions to Schedule II pain medication are both pretty monstrous. But as far as I know Reagan didn’t do anything to prevent condom distribution.
Uh, yeah, but the government, through public utilities, has some control over that. They can’t make your pool of sex partners clean.
Be careful what you ask for, because they probably can do this.
Make your pool of legal sex partners clean by making it illegal for you to have sex?
Make it illegal for anyone carrying HIV to be outside of the quarantine
camps (er, hospitals), purely as a temporary measure until we have a cure, of course.
In fact the government did have some control over that, inasmuch as when it became clear that AIDS was spreading through gay orgies they shut down the gay orgy sex clubs to try to curb its spread. My sense is that the same sort of people who today will rage against the government for being homophobic and not doing enough to fight AIDS were at the time raging against the government for being homophobic and shutting down the gay orgy sex clubs.
Didn’t Castro’s Cuba forcibly quarantine HIV+ people?
If anyone did this, it was local public health authorities. I don’t think the feds had any realistic way to shut down gay bathhouses in SF or NYC, but the state/local authorities had both the authority and the practical ability.
This article describes Cuba’s response to AIDS. It’s like a perfect illustration of how some crises are easier to handle well by an autocratic government that doesn’t have to care too much about your rights. (And the rest of Cuba is an illustration of why you may not want the autocratic government that doesn’t have to care too much about your rights in other times.)
It’s a pretty good bet that this quarantine program saved a hell of a lot of lives of gay male Cubans.
It wasn’t just a matter of spending more money. It was the jovial attitudes; the dragging of feet on anti-discrimination law for HIV-positive people after the CDC found that casual contact wasn’t dangerous (it took until, like, Magic Johnson for the public consciousness to accept that kissing and hugging an HIV-positive person was okay and they weren’t lepers, because the attitude from officials was so toxic); the refusal to step in and order the FDA to fast-track any medication or cocktail better than AZT, or at least look the other way if people wanted to be their own guinea pigs (if you are a libertarian, the situation with AIDS drugs and the federal government in the 1980s should drive you batty); the claim that needle exchange programs weren’t something we needed to look into because of the drug war; the travel ban on HIV-positive people; etc., etc.
Even if you believe it wasn’t the government’s job to go out and fix it in a proactive way, it was still a cruel and shameful response from the Reagan/Bush administrations all around.
I’m under the impression that this is an ongoing complaint about the FDA that isn’t specific to either Republicans as perpetrators or AIDS victims as recipients, but part of an overly cautious bureaucratic approach.
I agree that I don’t think it’s specific to Republicans in general. I couldn’t imagine the same atmosphere/attitude coming from the same event happening during a George W. Bush administration.
I’m being specific that I’m skeptical that any inaction on the part of the FDA is the result of dislike or unconcern for gays. It takes a lot more blood to get blamed for inaction than it does for action, and fast tracking a drug that ends up with a person bleeding to death is more visible and politically damaging than proceeding with normal screening procedures despite people who suffer from the disease potentially benefiting from even small odds at a cure (as John notes with the Thalidomide reference below).
Those things aren’t good, but they seem like they are negative quality-of-life issues for HIV positive people rather than things that made the AIDS epidemic worse. Heck, as bad as it was for HIV positive people, there’s a chance that over-fear of carriers slowed the spread. The CDC “dragging its feet” is potentially understandable from an overabundance of caution. What happens if they said “you can’t get AIDS from a hug or kiss” and later found out they were wrong?
George Bush was no friend to gay people (neither were most Dem politicians at the time) but the idea that he has “blood on his hands” is less justifiable. This is mostly just opportunistic GOP bashing from people who can’t stand to hear unchallenged nice things being said about a Republican, even at his funeral.
Condoms are an imperfect defense, and quantitatively inadequate for MSM behavioral patterns of the 1980s against the HIV infection rates of the 1990s.
Ultimately, there was a mismatch of expectations that was going to result in A: a megadeath or so and B: Reagan/Bush being blamed, no matter what. The only thing that could actually have saved all those lives, was for someone to say “Stop having promiscuous anal sex, or you will all die. Stop shooting up IV drugs, or you will all die”, and make it stick. But the people who for obvious reasons didn’t want to hear that, expected that with enough money there would be an AIDS cure in just a few years.
We now know that AIDS is really damn hard to cure, and even effective treatments weren’t going to be pushed into service in 1990 just by throwing money at them(*). So no matter what the Feds did, there were going to be A: lots of people dying of AIDS, B: The Man telling them it’s their own fault because they were supposed to stop having sex and, C: the belief that The Man could have made this all go away but didn’t because he wanted them to all die and/or stop having sex. With The Man of the day having been a Republican, it was obvious where the blame was going to land and irrelevant how much or how little money was spent on HIV research – the “correct” answer was always going to be more, however much more it took to make it so gay men didn’t have to stop having sex whenever they wanted or die, and since Bush didn’t make that happen he was a murdering homophobic monster.
(*) It might have been possible to get today’s effective maintenance treatments with 1990s medical technology by drastically changing the standards of medical ethics and the regulations for drug development, but there would probably have been a Thalidomide or two along the way, and then you get Reagan/Bush blamed for using gay men as guinea pigs with disastrous results.
And if GWB had ordered up a high-profile ad campaign and education push promoting condom use, a lot fewer people would have died. That is specifically one of the ways he was directly culpable.
How directly culpable are the people who promoted gay sex?
Is there strong evidence of this claim? I was a high school kid in a small town in the midwest when the AIDS epidemic got started, and I sure heard plenty about condoms and about how AIDS could be contracted. I’m surely not typical, but I don’t think this was being kept a secret. And it’s hard to imagine a better incentive for using a condom than “avoid dying of a horrible incurable disease.”
I assume there were large-scale publicity campaigns in cities with a lot of gay men and IV drug users–is there evidence those campaigns had a big impact?
According to what I remember of my son’s account of his experience in high school, which would have been in a Philadelphia suburb c. 1990, the sex-ed class treated AIDS as an ordinary venereal disease, with no explanation of its link to gay sex in particular. My theory was that the left didn’t want it labeled a gay disease, the right wanted to use it to scare teenagers away from sex, so both sides were happy with that.
A) It’s GHWB that people are complaining about
B) This seems like really, really wishful thinking. Gay culture was not going to turn on a dime because Bush or Reagan made an announcement. Hell, condom use rates are still low and infection rates are probably going to start inflecting up, if they haven’t already, because there’s an impression that the prophylaxis drugs and HIV management treatments are good enough that you don’t need to worry anymore.
EDIT: Should have read more fully; ignore this.
It’s callous. If your response to thousands of people suffering and dying is to tell them that they could have used a condom it shows a lack of compassion. Also, many of the early cases were homosexuals who didn’t understand AIDS and never worried about pregnancy.
Daniel Dockery on the present proliferation of streaming services:
I’m guessing all of this is a phase, the goldrush period of a new industry. Eventually the investor money will run out and the weaker players will get squeezed out. We’ll end up with something manageable like three to five major services, probably offering tiered plans, and maybe another dozen or so specialized services catering to more specialized tastes.
White Liberals Present Themselves as Less Competent in Interactions with African-Americans
“A new study suggests that white Americans who hold liberal socio-political views use language that makes them appear less competent in an effort to get along with racial minorities.”
Tangent: have you ever “made yourself appear less competent?” I sometimes have, whether because I was tired of being a stereotyped teacher’s pet/wanted to see how long it would take the teacher to determine my competence, or simply because I didn’t want to seem different from my peers.
Not a liberal, but I’ve certainly presented as dumber than I am in a variety of circumstances. People hate nerds.
“Appear less competent”?
Does this just mean “Uses more common and commonly understood words”?
Because… uh… if you don’t do this, talking to more “normal” or “average” people, you’re just being socially obtuse.
Maybe it’s more like, “Not every fight you can win is worth fighting.” Maybe you have expert knowledge about something like climate change, but decide not to argue the matter when you hear someone make assertions that you are convinced are obviously wrong.
I don’t see how the study showed a ‘competence downshift’. It seemed to show that the liberals assumed that a Lakisha was dumber than an Emily and thus had to resort to using simpler words, whereas conservatives showed no such bias.
A ‘competence downshift’ would be pretending to be stupider than you are, like pretending not to know something or pretending to be bad at something. I don’t think that using words that you think the other person would not understand (whether true or not) counts as hiding your competence. It’s just trying to be understood.
If they broadened the study, it would probably find that most people change how they sound/act to fit whatever group they’re trying to interact with. If you’ve ever done public speaking or presented before, it’s literally step 2 after knowing what you want to say, i.e., know your audience.
That could all be read as making yourself seem less competent: speaking in a different way (“speaking white”), being more formal around older people, less formal around children. This study is CW bait.
Unsure whether I count as a ‘liberal’ anymore, but, yeah, if I’m talking to somebody who I don’t expect I’ll be having an intellectually-rigorous conversation with, I’ll say I’m a scientist instead of a biostatistician (e.g.) – but that’s education-sorting, not race-sorting. That’s racially-correlated, of course, and this is a study of political speeches designed to appeal to the lowest common denominator so I’m not sure what the authors thought they were proving.
Also, they’re analyzing uses of different classes of words in political speeches made to different groups and trying to claim generalizability. Their methodology is bad and they should feel bad.
To be fair, they didn’t only look at political speeches. They’ve also got a bunch of tiny student and mechanical turk experiments. I don’t find the other studies convincing either.
I mean, it’s a very old snicker-at-the-outgroup’s-hypocrisy truism among right-wingers that Blue Team dumb down their speeches and adopt different ways of speaking when they’re addressing black people.
It’s fairly well documented that George Bush II did pretty much the same thing when talking to Red Tribe white people, after a humbling defeat running as an Ivy League smartypants for a congressional seat in rural Texas. There’s an argument to be made that Dubya was really a Blue Tribe Democrat a la Romney who adopted Red Tribe camouflage for most of his political career, but either way this isn’t really an area where his team gets to claim intellectual or moral superiority.
George Bush II was a rich Ivy League kid of typical intelligence for that elite status who successfully tricked everyone into thinking he was an Aw Shucks half-wit, yeah.
Not sure about “Blue Tribe”, though. If you acculturate to Texan-ness and become a born again Christian (for the pragmatic reason of fixing your alcoholism), you’re not very Blue anymore.
My interpretation is that it wasn’t so much intent to seem stupid, but that he was extremely lazy. This was also evident in his many vacations and in Cheney being able to more or less take over.
Then he surprised everyone when he did try, like during the campaign.
I think this is just a result of political elites mostly being drawn from the top 10% or so of the intelligence and education distribution. Sounding like a smart educated person won’t endear you to more average or low-end voters, so you move your vocabulary and accent downmarket.
One of the well-known problems with east Asian languages is their use of word symbols for writing; memorizing all those symbols is a pain, and you need to know a lot of them to write well. Is there any sign that online systems are driving users to adopt more keyboard-friendly ways of writing, such as straight pinyin or hiragana, by analogy to western use of extreme abbreviations when typing on tiny smartphone keyboards?
From what I’ve seen, it’s the contrary: keyboard systems make people use more hanzi/more traditional hanzi/more kanji than before. A lot of these systems work by having you either type phonetically or type elements from the character you want, and then the computer proposes a list of possible characters to pick from. This alleviates the need to actively remember the characters; you just have to be able to recognize them passively in a list, so there is less impediment to using them more often, or to using rare ones.
??? Who among the people that use these languages thinks this is a problem? In addition to what Machine Interface alluded to above (by de-emphasizing writing and hyper-emphasizing comprehension, the average person actually recognizes more hanzi/kanji thanks to computer word processing), symbol-based writing systems take up fewer characters (less data space) to convey the same meaning, which has historically been more efficient when trying to cram data onto limited formats.
Straight pinyin/hiragana would be illegible to an actual speaker of that language due to the amount of homophones in both Chinese and Japanese.
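The candidate-list mechanism and the homophone problem described above can be sketched with a toy example. The pinyin-to-character mapping below is a tiny hand-picked sample, not a real IME dictionary; the point is only that the user picks the right character from a list by recognition rather than recalling how to write it:

```python
# Toy sketch of a phonetic input method: type a syllable, get a list of
# candidate characters to pick from by recognition. Real IMEs rank
# candidates by context and frequency; this hand-picked sample just
# illustrates that one syllable maps to many distinct characters.
CANDIDATES = {
    "shi": ["是", "时", "十", "事", "使"],
    "ma": ["吗", "妈", "马", "骂"],
}

def candidates(pinyin_syllable):
    """Return the candidate characters for one typed syllable."""
    return CANDIDATES.get(pinyin_syllable, [])

print(candidates("shi"))  # five unrelated words, all pronounced "shi"
```

This is also why straight pinyin is ambiguous on its own: every entry in each list would collapse to the same romanized string.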
On the other hand, perhaps hangul is the best of both worlds? Phonetic but in a denser format?
Ooh, now this is interesting. I’m not as familiar with hangul, but it has a similar feature of grouping multiple components onto one character (though I think it’s less dense than hanzi and kanji? But obviously denser than the Latin alphabet, while still being phonetic). One thing I do know is that Korean speakers also use/recognise hanzi to a certain extent, so I wonder if there’s some configuration where the 200 (500? 1000?) most recognised hanzi are used together with hangul, or hanzi is used in places where hangul alone would be less efficient?
Deepmind recently developed a system to predict protein folding, Alphafold. Apparently, it blows all other such programs out of the water.
Knowing too little about protein folding to assess their results, I wonder whether somebody with the relevant background could give some commentary on AlphaFold.
It’s been covered here: http://blogs.sciencemag.org/pipeline/archives/2018/12/03/the-latest-on-protein-folding
As someone who works in a biochemistry-related field, I welcome any improvement in protein structure prediction. AlphaFold is an improvement, albeit an incremental one.
“Blows out of the water” is a bit of an overstatement. See the first figure here. It does slightly better on the majority of targets and slightly worse on some others.
Still impressive, but it’s not a quantum leap forward. If they improve significantly in later contests it will be a big deal. The last big leap forward happened a few years ago when David Baker’s group at the University of Washington combined coevolutionary analysis with their Rosetta algorithm.
Protein folding is one of the areas I work in, although I focus on simulating mechanisms rather than predicting structures.
Structural Engineering Post Series
Steel Design II
Continued from here: https://slatestarcodex.com/2018/11/25/open-thread-115-5/#comment-693767
It’s refreshing to work on this, since the last couple of days at work have been on structural engineering-related things, not structural engineering. I have to work through the distribution of water mass for a seismic analysis I’ve been working on, and part of it is rebuilding a new Excel worksheet to reimport the data back into the program. The “easiest” way to do this is with a VisualBasic macro, but I only do those once a year or so. Every time I have to rediscover the fact that I don’t actually understand object-oriented programming, and it turns into a day and a half of sacrificing random chickens until I start getting results that look correct and iterate in from there.
I’ll be continuing my discussion of structural steel design.
Rolled Steel Sections
The easiest way to design a steel structural frame is with the use of sections that have been hot-rolled to shape in a rolling mill. The dimensions and tolerances of the common rolled shapes (W, S, C, HP, and angles, as discussed below) are defined by ASTM A6. The most common “form factor” for these is what a lay audience will refer to as an “I-beam”, with two flanges (the top and bottom serifs of an “I”) connected by a web (the vertical line in the “I”). By far the most common section used in the United States is the wide-flange, or W-section. These are the workhorses of structural steel framing.
These sections have, as the name implies, relatively wide flanges with the top and bottom of each flange parallel. For US Customary units, these are designated with a W, followed by the nominal depth in inches, and the weight in pounds per linear foot. For example, a W12×72 has a nominal depth of 12 in., and a one-foot-long piece would weigh 72 lbs.
Note that the actual depth is 12.3 in, and for some sections the actual depth can vary significantly from the nominal depth. A W12×336 will have an actual depth of 16.8 in. The reason for this is that a series of nominal W12 sections will have a consistent dimension inside of the flanges, and as you get heavier the thicker flanges will make the overall section deeper. This PDF has a table of dimensions, and if you look on page 4, you can see that there are 17 sections between W12×336 and W12×65, varying in actual depth between 16.8 in. and 12.1 in., but all have a “T” dimension of 9⅛ in., which is the dimension between the fillets joining the flanges to the web (the dimension between the flanges is 10.3″ for all 17, but the flat part of the web is more useful to engineers and detailers since that’s the dimension available for bolts, so that’s what’s tabulated).
Other types of common I-sections are the American Standard, or S-section; these differ by having sloping flanges (2:12). The nomenclature (i.e., S12×50) is the same as the W-section, though the depth (12″ in this example) is the actual depth of the section. These used to be the go-to standard for steel framing, but have been mostly displaced by the W-sections, both because the W-sections have a little more material in the flanges for a similar depth and weight making them more efficient, and because the sloping flanges require tapered washers for bolts. HP sections are used for piles, with the flange and webs equal in thickness and the overall nominal section depth and width equal. There are also “Miscellaneous” M-sections, which are not defined in ASTM A6; I imagine these were designed for specialized uses, but I’ve never had cause to specify them or check availability.
American Standard Channels (with a designator C) have a sloping flange like the S-section, and a form factor like a “C.” Like S-sections, the nomenclature depth is the actual depth (a C8×11.5 is 8 in. deep).
You also have Tees, which are split from one of the previous W, S, or M-sections. For example, splitting a W12×72 would produce two WT6×36s.
Angles are designated by “L,” the leg dimensions, and thickness. An L4×3×½ is 4 in. on one leg, 3 in. on the other, and ½ in. thick.
AISC 360 does not require the use of rolled sections, as the design equations are valid for built-up sections made from welded plate as well. However, if there’s a rolled section that will serve, it’s usually best to use it: the section properties are known and don’t need to be calculated, rolled sections don’t require extensive and skilled welding labor to assemble, and you’re less likely to run into weird local effects compared to designing your own section. They’re God’s tinker toys, so just get them out of a bucket and snap them together. However, there are applications where they won’t serve. Large bridges are the most common case here. The deepest W-section won’t serve for long spans, so plate girders (I-shaped sections built up from individual plates) are common in this application.
Compression members–that is, members loaded along the long axis (think building columns)–have a couple of different limit states. The first and most well-known to non-structural engineers is flexural buckling.
Leonhard Euler developed his buckling equation in 1757. In developing it, he assumed a perfectly straight column that deforms elastically and with ends free to rotate (as if connected by pins on each end). This basic equation is

Pcr = π²EI / L²

where:
Pcr = Critical buckling load
E = Modulus of elasticity of the material
I = Second moment of area (a geometric property of the cross section)
L = Length between the pin supports
The derivation of this equation is a standard exercise in differential equations textbooks. This is a stability phenomenon. As I discussed last time, when you have a frame displaced sideways, the new line of action of the force acting downward is no longer collinear with the upward reaction, which generates a moment (the P-Δ effect). Similarly here, if you have an axial load on a column and push on the middle a little bit, that out-of-line displacement will generate a moment (referred to as P-δ in frame stability). If the axial load is below the critical buckling load, it will spring back. However, if the axial load is above the critical buckling load Pcr, even the slightest bending of the column will produce enough moment to prevent it from springing back: the system is unstable. The classic metaphor used is a marble and a rounded bowl: if the system is stable (below the critical buckling load), it’s like a marble sitting inside of the bowl, where when you displace it from the bottom gravity will pull it back down to the bottom. However, if the system is unstable (above the buckling load), it’s like a marble balanced on top of the upside-down bowl, where the slightest disturbance will cause gravity to pull it further and further from its original position.
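Euler’s formula is easy to sanity-check numerically. A minimal sketch, not from the post: the strong-axis moment of inertia of roughly 597 in⁴ for a W12×72 and the 20-ft length below are assumed illustrative values.

```python
import math

def euler_critical_load(E, I, L):
    """Euler critical buckling load for a perfectly straight,
    pinned-pinned, elastic column: Pcr = pi^2 * E * I / L^2.

    E: modulus of elasticity (ksi)
    I: second moment of area (in^4)
    L: length between pin supports (in)
    Returns Pcr in kips.
    """
    return math.pi ** 2 * E * I / L ** 2

# Assumed illustrative numbers: E = 29,000 ksi for structural steel,
# I ~ 597 in^4 for a W12x72 about its strong axis, L = 20 ft = 240 in.
Pcr = euler_critical_load(E=29000, I=597, L=240)
print(round(Pcr), "kips")  # on the order of 3,000 kips
```

Note the 1/L² dependence: doubling the column length cuts the elastic buckling capacity to a quarter, which is why the effective length adjustments discussed below matter so much.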
It’s worth noting that the strength of the material (Fy is the standard variable used in steel) does not appear in this equation. The critical buckling load depends only on the elasticity of the material (pretty much a constant for all standard structural steels, regardless of strength), the geometry of the column, and the square of the length.
Now, I said above that the assumptions used were a perfectly straight column, pinned ends, and elastic deformation. It turns out that exactly none of these assumptions are true. Manufacturing and erection tolerances mean that no member is straight, pinned ends are neither achievable nor desirable in actual building columns, and elastic deformation is generally not true due to manufacturing realities for steel.
A W-section (or any hot-rolled steel) doesn’t cool and solidify all at once after rolling. Steel shrinks as it cools, like most materials. The tips of the flanges cool fastest and solidify first. As they do this, they shrink; since the other regions of the section are still soft, those regions can deform to follow. However, when the thickest parts of the section around the flange/web junction solidify, they also want to shrink, and now the flange tips have already solidified and can’t deform to accommodate this, so the tips go into compression. The middle of the section then has to be in tension to balance forces. This means that hot-rolled sections have significant residual stresses inside of them that, in part, defeat the assumptions about elastic deformation. At lower slenderness ratios, part of the section will start to yield (behave inelastically) before the rest.
To accommodate the out-of-straightness, Euler’s equation is multiplied by 0.877 in practice, before any resistance factors or factors of safety are applied. There are a few ways to work with the end conditions, but the easiest to explain is that the length of the column is adjusted to a so-called effective length, where a pinned-pinned column has an effective length equal to the actual length, a fixed-fixed would theoretically have an effective length of half the actual length, and a cantilever column with a fixed base and top free to translate would have an effective length of twice the actual length.
The effects of residual stresses mean that when the slenderness of the column is below a certain point, the section no longer buckles elastically, but undergoes inelastic buckling. The equations for this are a little too complicated to reproduce here, but they do account for the strength of the material. The two equations are equal at about 0.39Fy, where inelastic flexural buckling controls at higher stresses, and elastic flexural buckling at lower.
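A sketch of how these pieces fit together, following the AISC-style flexural buckling check (the 0.877 factor, the effective slenderness KL/r, and the elastic/inelastic crossover described above). The Fy = 50 ksi here is illustrative, not from the original comment:

```python
import math

def critical_stress(Fy_ksi, E_ksi, KL_over_r):
    """Flexural buckling stress Fcr (ksi), AISC-style.

    Fe is the Euler elastic buckling stress for the effective
    slenderness KL/r. Stocky columns buckle inelastically
    (residual stresses matter); slender ones elastically.
    """
    Fe = math.pi**2 * E_ksi / KL_over_r**2
    if Fy_ksi / Fe <= 2.25:            # stocky: inelastic buckling
        return 0.658 ** (Fy_ksi / Fe) * Fy_ksi
    else:                              # slender: elastic buckling
        return 0.877 * Fe

# At the crossover slenderness both branches meet at about 0.39*Fy,
# matching the figure quoted above.
Fy, E = 50, 29_000
KL_r_limit = math.pi * math.sqrt(2.25 * E / Fy)  # where Fy/Fe = 2.25
print(critical_stress(Fy, E, KL_r_limit) / Fy)   # about 0.39
```

Below the limiting slenderness the material strength Fy matters; above it, only E and geometry do, just as the elastic Euler formula says.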
But wait! There’s more! I’ve been talking about the entire cross section in the above. Let’s zoom in a little, and consider one of the flanges. It is a teeny little column on its own. If it’s too thin compared to its depth, it may buckle by itself before the whole section reaches its buckling strength. If it does this, it stops contributing to the overall section, and part of the geometry of the cross section that I discussed in Euler’s equation above goes away. This is called local buckling. Whether or not this will occur is dependent upon the geometry of these cross-sectional elements. This is possible for either flanges or webs, though the depth and thickness ratios for each of these are different. A cross-section that has elements that will locally buckle before the entire section buckles is referred to as slender, and there are equations to deal with various cases by reducing the cross section to account for this. This is one of the advantages to rolled sections that I referred to above; if you’re designing your own built-up section, you need to do this legwork. If you’re choosing the section out of a table, this has already been done for you, and indeed design tables have footnotes indicating what sections have this as a consideration.
Flexural buckling isn’t the only possibility. The equation I discussed above assumes that the instability occurs due to bending, but it’s possible for it to occur because of the section twisting. This is called torsional buckling. There is also a combination of the two, flexural-torsional buckling. Torsional buckling isn’t a concern for W-sections, but if you’re designing your own section it’s something to watch out for. Cruciform (cross-shaped) sections are especially prone. C-sections are prone to flexural-torsional buckling, because their single axis of symmetry means they like to twist out of plane. As far as the math goes, I’m only going to note that the section of the code that gives the equation defines 23 variables that feed into these analyses.
So, at the end of the day, you’ll consider these limit states (though some can be excluded by inspection), and take the lowest one. This will be the capacity of your column, the nominal capacity Pn. You then either divide by the factor of safety of Ω=1.67 or multiply by the resistance factor ɸ=0.9 to get your allowable or design compressive strength.
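The last step is mechanical; a sketch with a hypothetical nominal capacity (the 500 kips is made up for illustration):

```python
# Hypothetical nominal capacity Pn from the governing limit state (kips)
Pn = 500.0

allowable_strength = Pn / 1.67   # ASD: divide by safety factor Omega
design_strength = 0.9 * Pn       # LRFD: multiply by resistance factor phi

print(f"ASD:  Pn/Omega = {allowable_strength:.0f} kips")
print(f"LRFD: phi*Pn   = {design_strength:.0f} kips")
```

The two numbers differ because ASD compares against unfactored service loads while LRFD compares against factored loads; the same Pn feeds both.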
Next: on to flexural considerations!
Nice. What software are you using for your earthquake analysis? Are you using data from lab testing of the same steel specimens used in the project, or some database? I’m not too familiar with numerical modelling for structural stuff, but in geotech a model is basically useless without a big suite of actual material-specific test data.
Reading about lateral torsional buckling reminded me of the concepts of moments of inertia. After studying physics and engineering I understood ‘moment’ to have a distinctly physical meaning. 10 years later, watching probability theory lectures online, I was amazed to learn that ‘moments’ are used to describe the properties of distributions, e.g. mean, kurtosis, etc. I think these two uses share a common scientific history. Can anyone shed more light here?
It’s just SAP2000, a regular finite element program. This particular model will be used for a response spectrum analysis; the structure is irregular and doesn’t have “stories” as conventionally defined, so the simplified methods are probably not a good idea.
We’re not using any particular lab data for this, just the specified material from the original construction. Us structural engineers have the benefit of being able to specify material properties–and those materials will be tested in a lab and come with the certificates on delivery–as opposed to you guys who just have to deal with whatever God saw fit to put on the site.
For old sites without records or if you’re interested in material properties that weren’t specified in the material at the time of construction (say, for Charpy tests in old bridges), then you’d have to get a coupon and have it tested. Another reason is if you suspect that the original construction didn’t meet specifications. For this project, the structure I’m looking at was extensively modified in the early 2000s, and several of the engineers who worked on it are still around, so we aren’t too worried about major surprises in material.
We are having geotechnical investigations done for another structure that’s part of this project. I think they’re drilling next week, come to think of it. That information will be used to design a tangent pile retaining wall socketed into rock.
I’m really impressed by these posts, and also it sounds like a miracle that humans were able to build anything more complicated than two sticks stuck in the ground and a tanned hide thrown over them, build big and tall buildings that didn’t immediately fall down, before all this knowledge was known 🙂
It’s not a miracle, just applied ingenuity coupled with tradition. Of course, they also had a lot of failures, but it’s easy to forget because we can look around Europe and see the successes still standing. They may not have had modern mathematical analysis, but they did have masters who were trained on what had worked up to that point.
I mentioned this in one of my introductory posts, but this also does present some pitfalls for today’s engineers analyzing old buildings. The fact that we use straightforward mathematical models (that buckling equation is one) to design modern buildings also implies that modern buildings are tractable using straightforward mathematical models. It’s easy to identify what the primary members carrying a particular load will be; there is generally significant extra capacity in the event of the failure of an individual member through unintended load paths, but we don’t rely on those when designing a new structure. Older structures may rely on significant non-obvious load paths, because they didn’t use that concept in their design and construction, so making changes may have non-obvious effects elsewhere.
This reminds me of an interesting question: why W sections? Why not box shapes, or triangles, or some other cross section? The answer seems implied in the description – W sections have better affordances for bolting them together, combined with the need for high load limit, and are also probably easier to roll. Is this true? Or is it possible that, had we had the Steel Revolution to do all over again, knowing what we know today, we would have settled on a better standard?
Also, when did these become standard? What else was tried, and where? If I visit certain old buildings, will I be able to say stuff like “you can tell the age / steel supplier of this building by the section shapes they used”?
W Sections and the like can be made by running a billet of steel between rollers. Closed sections like boxes and tubes require a welding step, making them more expensive to manufacture. Also, with the W section, all surfaces are externally accessible to facilitate e.g. riveting.
@John Schilling is correct on why I- or H-shaped sections are preferred for general use. I’ll expand on it a little bit, since in the OP I skipped over the standard closed sections that are available, called “Hollow Structural Sections” or HSS, and are fabricated under the standard ASTM A500. These come in rectangular (or square) and round, designated HSS followed by the outside dimensions and thickness. For example, an HSS9×5×⅜ is a rectangular section 9″ by 5″ by ⅜” thick, while an HSS6.625×0.250 is a round section 6⅝” in diameter and ¼” thick. (The decimal vs. fractional notation is correct for those.)
These are fabricated by cold-forming and welding, using one of two welding processes. This PDF has a figure on page 2 very briefly explaining the manufacturing processes. As John noted, this is relatively expensive and generally not worth it unless you need the closed section.
There are two common reasons why closed sections like HSS would be used over W-sections (or other I- or H-shaped cross sections):
Firstly, HSS sections are better in torsion (twisting). For example, the supports for the sign a big-box store or gas station would have in the parking lot to attract street traffic are usually closed sections, because the sign and its tall, thin supports will tend to be twisted by wind loads.
Let me give some numbers to give an idea of the difference. The ability of a cross section to resist bending depends on a quantity called the second moment of area (moment of inertia), which typically has the variable “I”; the bending deflection is inversely proportional to I for a constant bending load. The ability to resist twisting is related to a quantity called the torsional constant, J; as before, the angle of twist is inversely proportional to J for a constant torsional load. (I’m simplifying this a lot.) Let’s compare two sections of similar dimensions and weight: a W12×58 (about 12″ deep and 10″ wide) and an HSS12×12×⅜ (which weighs 58.01 lb/ft). The W12×58 has an I about its major axis of 475 in⁴, while the HSS12×12×⅜ has an I of 357 in⁴; under the same loading the HSS will deflect 475/357 = 133% as much as the W-section. However, for the W-section J is 2.10 in⁴, while the HSS has a J of 561 in⁴ (notice the decimal point; neither of those is a typo). That means the torsional resistance of the HSS section is about 270 times better, for the same weight. However, as John noted, to get this you need to spend the additional money for fabrication, and the connections are much more difficult; we don’t use rivets in structural practice anymore, so you have to either weld or use blind bolts. Since you can often detail your structure to avoid twisting loads on most members, that’s preferable.
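The ratios in that comparison can be checked directly from the tabulated section properties quoted above:

```python
# Section properties quoted above, in in^4
I_w, J_w = 475.0, 2.10       # W12x58
I_hss, J_hss = 357.0, 561.0  # HSS12x12x3/8 (same weight, 58 lb/ft)

# Bending: deflection is inversely proportional to I
print(f"HSS deflects {I_w / I_hss:.0%} as much as the W-section")

# Torsion: angle of twist is inversely proportional to J
print(f"HSS is about {J_hss / J_w:.0f}x stiffer in torsion")
```

The asymmetry is the whole story: closing the section barely changes bending stiffness but transforms torsional stiffness, because a closed cell can carry shear flow all the way around.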
The second reason to use HSS sections is pretty prosaic but important: they look prettier. When you use Architecturally Exposed Structural Steel (AESS), you will generally use HSS because it is more attractive. AESS is common in modern architecture, especially in things like airports and sports stadiums. In those places, W-sections would probably be cheaper (torsion isn’t common), but the architectural considerations take precedence.
To a certain extent, yes. Here is a page with all kinds of references that show dimensions of sections from various manufacturers. The earliest I saw there was 1885, though that seems to all be wrought iron. The earliest reference to steel is in 1891, from Pencoyd. The preface language is interesting:
AISC also has Design Guide 15 that discusses obsolete sections and materials, but that’s not available for free.
Buckling concerns + custom member cross sections = an aerospace engineer’s bread and butter. My enduring favorite solution to “build a really tall, thin column that can carry a lot of bending moment/torsion” (otherwise known as a rocket) is the balloon tank, which basically takes paper-thin cross sections (slight exaggeration, but not by much) and relies on the internal pressure of the tank to act as a restoring force to keep the column stable. Then there’s a rocket structural engineer’s second favorite trick (to be used when the technicians/accountants complain that the tanks keep collapsing in the factory), known as the isogrid pattern, which lets you optimize your structure to be about equally likely to fail from every component of load (i.e. a weight-optimized structure at its finest).
Yes! Balloon tanks are amazing, and I’m still kind of sad that they didn’t catch on.
Question based on extremely naive mental model: is the square-of-the-length dependence here one of the factors that limits the feasible height of skyscrapers?
I think architectural considerations are much more important in real-world development; the volume of elevators you need for a tall building starts to consume too much floorspace to make it economical after a certain point.
If you’re talking about a fictional SF world where making a structure economical isn’t important and the Emperor of Earth wants the largest building physically possible, then yes, this is likely to be a limiting factor, though less for the individual columns than for the frame stability I discussed in the last post. There’s a certain amount of…fractalness…in stability considerations.
If you look at the building as a whole, the dimensions of the building as a system have to be big enough to be stable; as you surmise, if you take a building of the same size as a current one and keep extruding it up, eventually it’s going to globally buckle as the square of the length overcomes the elastic stability. From this view, buckling of the individual columns would be “local buckling” to the building in the same way that I discussed the flanges being “local buckling” to the columns.
However, in the sense that I’m discussing the OP here, each individual column has an effective length between its braced points, which are typically the floors. This was one of the considerations in the collapse of the WTC, in addition to the P-Δ discussed in the thread to the last post. The individual floors were providing bracing to the columns; the fire turned these from rigid supports to cables that were pulling the columns inward, which not only added P-Δ effects, but doubled their effective length. When you consider that at 800-1000°F (thought to be a typical fire temperature inside), E in the numerator is about half of what it is at room temperature, well…
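The combined effect described here falls straight out of Euler’s equation, since Pcr is proportional to E and inversely proportional to the effective length squared:

```python
def pcr_ratio(E_factor, length_factor):
    """Change in elastic buckling load: Pcr is proportional to E / L^2."""
    return E_factor / length_factor**2

# Fire roughly halves E; losing the floor bracing doubles the
# effective length of each column.
print(pcr_ratio(0.5, 2.0))  # 0.125 -- an eighth of the original capacity
```

Halving E and doubling L each act independently, so the column ends up with roughly one-eighth of its room-temperature buckling capacity, on top of the added P-Δ moments from the sagging floors.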
Curious about this: Russia will build missiles if US leaves treaty, Putin warns.
What is the strategic importance of intermediate-range nuclear weapons?
IRBMs were more significant in the Cold War than they are now (the Cuban Missile Crisis was over M- and IRBM forces, viz. the American Jupiter [deployed in Italy and Turkey] and Soviet R-12 and R-14 [being deployed in Cuba]). Their warheads would still count towards the deployment limits that’re still in force. And they’re more vulnerable to ABM countermeasures: THAAD and SM-3 are useless against ICBMs but theoretically capable against IRBMs. But they’re not totally useless: from Russian launch sites, they couldn’t reach Washington (and a repeat Cuban deployment is pretty unlikely) but could easily reach Berlin. And they’re a lot cheaper and more deployable, which is important to an army like Russia’s that has a lot of nuclear material and fairly generous treaty limits but not a lot of money.
This sounds a lot like saber-rattling to me, as usual for Russia. But the facts on the ground are that a treaty preventing both us and the Russians from developing IRBM forces constrains us a lot less than it does the Russians. We’re richer, our conventional strategic forces are more capable (hence more capable of substituting), and geography means we need them less in the first place, unless we decide to nuke Montreal for some reason.
Ehhhh, my amateur perspective:
1. Depending on position, they are useful first-strike weapons, because they have shorter flight paths. This is more a threat to Russia than the US, because missiles stationed in West Germany can blow up key locations in Moscow in 10 minutes. You might not even be able to finish your Monday Morning Bathroom business in that time.
2. They are useful for theater-level operations. I don’t know if Russians use dial-a-yield like the US does, but your typical ICBMs are going to contain large warheads to hit a variety of targets. If you have extra theater level missiles, you have extra missiles that can use smaller warheads to hit different targets (for instance, a 50kt warhead is fine for an airfield, but you want a 500kt warhead or bigger if you want to hit multiple targets in New York City, and using a 500kt warhead on an airbase when a 50kt warhead would do is a waste).
3. They are more vulnerable to existing US deployed missile defenses. However, China does not have existing US deployed missile defenses, so I’m not sure how much that scares Putin.
4. I am not sure of the exact provisions, but China is not bound by the treaty, so they can have a numerical advantage over the US and Russia. The US can probably substitute a lot more easily than Russia, because we have a lot of conventional Tomahawks and platforms to launch them. Russian capabilities against China might not be as extensive, so they might prefer to have more 50k hypersonic cruise missiles that can be launched from a mobile ground-launched missile system…they don’t exactly have a bunch of B-2 bombers or 700+ highly accurate hard-to-intercept Tomahawks just sitting in the Pacific.
5. In 2035, hypothetical President Kanye West might decide to abandon NATO because Russia has all these short-range missiles that only threaten European capitals, and it doesn’t really seem like such a big deal because the US isn’t threatened anymore. This will probably panic the Europeans, because now they are under a nuclear gun without an adequate response, especially since Prime Minster Jeremy Corbyn abolished the British nuclear arsenal back in 2020 and the French decided to be massive dicks and withdrew from NATO again back in 2025.
The INF treaty is (IMO) useful because it is a permanent treaty that bans the entire class of weapons, forever. New START only lasts until 2021 (IIRC), so without the INF, Russia can use their short-range weapons as a bargaining chip again. Russia seems to want to keep the overall nuclear picture in New START while having more theater-level weapons so it can threaten China, because it will quickly come to the point where China will be able to defeat Russia conventionally, and Russia wants more flexible options (including nuclear options) to respond, without starting an arms race with the US (because it cannot possibly win an arms race with the US).
If your response is that Russia can already drop nukes on China, well, yeah, but it doesn’t want to nuke Beijing because Chinese aircraft are blowing the hell out of Russian airbases in Siberia. Much easier to just blow up the airfield, especially with a ground-launched missile system, because China is going to have a hard time responding to that, whereas it might have an easy time blowing conventional bombers out of the air.
Within the context of the Cold War, it was obviously a huge deal. In the 70s the US had essentially conceded to strategic nuclear parity and overall strategic detente with the USSR, and it appeared to a lot of Western observers that the USSR had tried to use this to establish nuclear and strategic primacy (see 1979 Afghan invasion and the deployment of highly accurate, highly mobile, difficult to stop, not-covered-by-arms-deals SS-20s). The Western response was a big build-up.
This was a huge, huge break in the direction of ending the Cold War. Eliminating an entire class of weapons, easing tensions, actual verification. This would not have been expected in the late 70s, when it appeared like we were heading for another series of brinksmanship crises like the Berlin Crisis or the Cuban Missile Crisis.
Here’s my understanding, which I’m not certain is completely correct:
Intermediate-range missiles are a problem because they allow an adversary to establish the capability to launch a first-strike nuclear attack without meaningful warning. (Think of the Cuban Missile Crisis, resolved by both sides withdrawing their systems.) Air- and ship-launched missiles aren’t as much of a problem because you have to move the launch platform into place first (SLBMs are OK because they aren’t use-it-or-lose-it weapons). Orbital weapons are banned for the same reason, and this same logic is also behind why a nuclear Taiwan or South Korea is undesirable in spite of some rather obvious incentives in the other direction.
Any missile with the range specified under the treaty is going to have enough throw weight to carry a nuclear warhead whether one is actually attached or not, and there’s no way to tell the difference in flight, so the INF bans all missiles regardless of payload, rather than just nuclear missiles.
Fast forward to 2018. Several things have changed. For one thing, the US has spent an entire generation blowing things up with Tomahawk missiles, unaffected by the treaty since they’re not ground launched. UAVs have gotten far better and are improving. Why is it OK for an “unmanned aircraft” to fly 1000 nautical miles to attack a target but not for a “missile” to do so? What happens after a few upgrades when the low-observable Reaper-F flies nap-of-the-earth 1,000 nmi to attack?
Russian doctrine also looks to be shifting towards surface-to-surface fires. The munition tested by the Russians is a variant of the SS-26, which currently has a maximum published range of exactly the 500km specified by the INF. The Russian Army action in/supporting Ukraine included a remarkably large proportion of (short range) missile/rocket artillery, for instance. It seems that the development of intermediate-range missiles may be part of an effort by the Russians to, in effect, substitute them for strike aviation at lower cost and in the face of enemy (i.e. American) air superiority. Intermediate range missiles in the treaty-banned 500-5500km range are desirable for this because they offer an appropriate mixture of standoff and tactical flexibility — something a theater or corps commander could have direct control over.
I think the INF still makes sense, as the prospect of a hair-trigger nuclear standoff is very bad. Perhaps the treaty can survive some discreet nibbling by the Russians, at least for a while. If the treaty falls apart, a new one should be negotiated with the goal of preventing such a situation (e.g. bases in Poland and Russia pointing nuclear weapons at each other with 5-minute warnings). An agreement restricting only nuclear intermediate-range missiles would be harder to enforce, for instance, but it would be better than nothing. Or, it could restrict the deployment of such systems rather than their existence, using about the same logic by which sea- and air-launched missiles are currently allowed.
None. They mostly got banned because MAD is safer if the decision loop is longer. – That is, long range missiles take 30 minutes to turn your capital into glass, which is long enough to double check the launch you think you saw is not a flight of geese. A missile flying a shorter trajectory does the same thing, but if you think you saw a launch from Cuba or Berlin, you do not have nearly as much time to go “are we sure this is real?” before you have to counterstrike.
It is a really bad idea to fuck with this. I would, in fact, prefer it if we could all stick our deterrents on the bloody moon where the loop would be 3 days.
I’d be interested in more discussion of the accusation in the linked article:
If Russia isn’t going to follow the Treaty, then there isn’t much point in NATO continuing to follow it unilaterally, while (assuming Russia has more to lose from missiles in Europe) leaving may encourage Russia to decide maybe they want to follow it after all.
Either way, nuclear tipped cruise missiles are probably (layman’s view) less vulnerable to early detection and interception.
There is still a point in following it: as long as Russia knows the West has no missiles with a flight time of 10 minutes, Russia will not panic-launch in nine minutes if they think they see an incoming strike, and the arsenals of NATO are, in any case, not vulnerable to first strike by medium-range missiles, since they are either on boomers (France, the UK, US) or on US soil.
MAD does most emphatically not require that nuclear arsenals be mirrored, MAD works as long as you possess the capability to carry out a nation-ending second strike, and medium range missiles add no additional capability in that regard.
… And yes, I am flat out stating that the entire arms race that ended up with both Russia and the US having thousands of nukes was based on both sides’ politicians not really understanding MAD. The French and UK deterrents serve the same function as the US and Russian ones at a tiny fraction of the cost, because those nations actually understood that once you have enough redundancy that vengeance is guaranteed, additional missiles do not actually add anything.
The entire stealth bomber program was also insane. Giving people the idea you can carry out first strikes with no warning is not safe. That’s how you end up with the other side deciding they need to turn off their permissive action links and give the commanders of the boomers full launch authority.
I’m not sure this is really the consensus agreement of nuclear planners, if just taking the outside view that they clearly haven’t taken this strategy.
The British and French nuclear arsenals are more of a credible minimal deterrence rather than a MAD. Neither one can reliably maintain 2 subs non-stop, and probably didn’t during the Cold War. Plus, the Polaris was vulnerable to ABMs, and had a maximum range of only a few thousand miles. So it’s not like a British sub would be able to take out the whole of the Soviet Union.
They are useful as a minimum deterrence, but the USSR probably could’ve taken them both on in the 70s and the 80s, depending on how many casualties they felt comfortable with. Minimum deterrence is all well and good, though, and usually sufficient. It’s not just sufficient if you are the USSR or the US and are playing for all the marbles.
The US has a lot of different weapons for a lot of different options, which gives us more credibility. If you drop a nuke on a French army base, is France really going to counter-nuke Moscow and therefore start WWIII? Not a question for the US: we have plenty of proportionate response options. This was something demanded as early as JFK because he thought the idea of a full blast, nation-ending option against the entire Communist bloc was way too limiting.
Bombers in general are useful because they can be recalled. Stealth bombers have survivability against Soviet air defenses. You also can’t really hunt down road-mobile ICBMs like that, so you can keep a minimum deterrence against the US as well. They are very useful for mopping up anything in the Soviet arsenal that still exists after the first strike.
That’s not even remotely true. The whole point of the UK nuclear deterrent was to make it impossible for Russia to overrun the UK without dragging the Americans in. It was not because they thought they had enough firepower to deter Russia on their own. Chevaline was designed to make it possible for the British to destroy Moscow. Nothing more. One city, even the capital, might have been seen as expendable by the Soviets if the rewards were great enough. In fact, I’ve seen a compelling case that they were essentially using Moscow as a giant strategic decoy.
This is true so far as it goes.
This is where you fall down. The threshold required for a “nation-ending” strike is a lot higher than you think. First, most cities are big enough to require more than one bomb. I once did a fairly serious targeting analysis on St. Louis and it took six 300 kt weapons to basically knock out the city for military and industrial purposes. Any fewer, and important stuff was getting missed. Now, St. Louis is the 21st-largest metro area in the country. The British currently have 40 warheads on each Vanguard. The math is pretty simple.
Second, nuclear weapons aren’t 100% reliable. If there’s something that really needs to be destroyed, you have to assign more than one weapon to it. And this assumes that ABMs aren’t muddling things even more.
Third, there are lots of non-city targets. The US has 450 ICBMs. If we take related infrastructure and ignore number 2, that’s still 500 Russian warheads focused on them and not other things.
Fourth, there aren’t that many warheads these days. Russia and the US each have a total of about 1500 strategic warheads. Once you start trying to use them in earnest, they run out really quickly.
Uhh, yeah. That’s obviously what the point was, because stealth aircraft are invisible and magical. And why we also built a fleet of stealth tankers, operating from bases the Soviets couldn’t possibly have spotters near. Oh, wait…
Ah. You’re possessed by the ghost of Robert McNamara. Everything makes sense now, and I will get the supplies for the exorcism ritual ready.
NATO isn’t following the treaty, because NATO isn’t a party to the treaty. The United States of America is, and the US has no good reason to leave. The INF treaty applies only to land-based intermediate-range missiles; as the player with air and naval supremacy we’ve traditionally put our missiles on ships and planes.
We have no intermediate-range missiles suitable for land deployment except some old 20th-century cruise missiles, and that’s not going to change any time in the next decade. If we did spend the next ten years fielding some shiny new intermediate-range hypersonic missile complete with air-turboscramwarp drive or whatever, there would still be no compelling reason for us to deploy it from land bases.
Staying in the treaty means every time there’s any diplomatic conflict between the US and Russia, we’ve got the “…but you guys are openly violating a nuclear arms control treaty that took us back from the brink of Armageddon!” card in our hand, and everybody knows it. Instead, Russia now has that card, and the ability to build and deploy without consequence formidable weapons that they do have in advanced development and which are genuinely useful to them. There was probably nothing we could really do to prevent them from deploying those weapons, but we could have made it cost them. Now we can’t.
My understanding is that the Russian response to this point has been “Yeah and you guys are flying sky-sassination drones around like it’s going out of style so we’re not going to stop.”
I have seen very little in the way of Russia complaining about drone strikes, and nothing linking that to the INF treaty. Do you have any sources on that?
Mostly, their whataboutism on that front has focused on the proposed US missile defense site in Poland, which they claim will secretly house hypersonic first-strike weapons for a decapitation strike against Moscow. This is some mix of lies, pedantry, and paranoia, but it at least compares apples to some sort of fruit.
It’s silly, of course. Currently existing UAVs aren’t anything like missiles. The fact that a “drone” tantamount to an “intermediate-range missile” could be constructed doesn’t matter.
And when we start flying one that couldn’t be shot down by a P-51 (or an La-5, because they’re Russians) they might have some reason to worry.
The fediverse now has a MeetUp-type node implementation:
Do consider using it, instead of Meetup or Facebook, to announce and track SSC social meets.
In whatever quantity it might exist, how much of the anti-Semitism (“hatred of Jews” is a better but clunkier term) that emanates from Christians (or people from Christian families/lineages) is inspired by Christian theology, or perversions thereof? (The “Jewish deicide” concept being probably the most noteworthy instance of this.) Are there any present-day examples one could point to?
An elderly family member is convinced this is a serious and widespread phenomenon; I am not. I asked him for some examples but haven’t heard back, so I’m asking y’all.
All the (non-Muslim) anti-Semitism I could find in about an hour of searching online (and boy was that ever fun) was based on tropes about Jews gaining economic advantages through mysterious and unfair practices, or weakening the non-Jewish societies in which they lived, typically through greedy, self-interested political agendas and media dominance/control. It really did fit entirely into those categories. Nothing about Jews killing Jesus or anything like that. I didn’t even see anything about the Protocols of the Elders of Zion! What am I missing?
This one isn’t mysterious, but there is a trope that in the Middle Ages Jews were the only ones able to lend with interest since the Church forbade usury, with the consequence of a lot of Jews doing banking rather than Christians.
I’m talking about modern day. Like for example, I saw a thread on a certain …uh.. inclement weather-themed .. website claiming that George Soros & other rich Jews got their wealth at least partially because of some special backdoor access to the US treasury available only to Jews (or something like that).
Path dependence. At some point in the past, Jews were considered evil bankers because they were the only ones who were bankers at all. Modern anti-Semites are descended from ancient anti-Semites, and kept the ideas that their ancestors had even though the circumstances have changed, leaving the anti-Semitism dangling without really referring to anything any more.
Can you elaborate on this?
It seems to me that theologically-based anti-Semitism was once a widespread thing but no longer is, while the secular justifications for it (e.g. those I mentioned in the OT) are relatively newer. (Which isn’t to say both didn’t exist simultaneously for a long time.)
It seems to me that the root of anti-Jewish sentiment is neurotypical dislike for autism-spectrum traits (which are as evident in Bokharans as in Boro Park Ashkenazim, oddly enough). Since low-church American Christians also tend to be on the autism spectrum it’s to be expected that they would like Jews.
(The Hebrews were set apart from others by a rapidly-spreading autism-wave in the Levant; the “infected” gathered together in Canaan. To be “infected” was to be Chosen.)
Would you expand on that? Why would people with autistic tendencies be drawn to churches with austere rather than elaborate worship practices?
To answer your specific question, austere religious services would be better for people with sensory overload problems.
I’m not convinced that low church American Christians (who does this include?) are especially apt to be on the spectrum.
Still, there’s a fun hypothesis here. Were the Chasidim a revolt of the neurotypicals?
That doesn’t make much sense. The general group that sympathizes most with Israel/the Jews is the Evangelicals (which has some overlap with the Low Church, I think) and while our services are usually short on liturgy, they’re probably worse for someone with sensory overload problems than most liturgical churches. The music is louder and of a faster, less predictable style. Also, I don’t think any church I’ve ever been to is dominated by spectrum types.
[Response to Johan Larson’s question “Why would people with autistic tendencies be drawn to churches with austere rather than elaborate worship practices?”]
It’s not the austerity that I had in mind; it’s the utterly life-structuring totally-into-it-ness of enthusiastic churches — the way in which congregants see everything in terms of this web of practices and images. People on the spectrum are totally into whatever they’re into in just this way. (And I like that about them.)
No, wat is where Buddhists worship.
Christian Identity doesn’t quite fit what you’re talking about in your first graf, but it’s not really the same thing as what you’re talking about in your second graf, either. Arguably it’s sort of a transitional fossil between the two.
I only heard about this in college, even though I was immersed in Christianity growing up. I had an Orthodox Jewish roommate, and when he said that historically the death of Jesus was pinned on Jews, I expressed surprise. My response was, “That’s odd; he’s Jewish, and there’s a literal sign hanging over the cross saying King of the Jews.”
I’d pin it more on envy. I have a lot of respect for successful people who got their gains honestly. But I also know people who don’t like others being in a better position than them financially, and it’s even easier to do when the culture is unique and different from your own. Though speaking as a Chinese-American, this may be because I’m in a similar historical boat (Chinese Exclusion Act etc.)
Some of the Jews in Jerusalem ca. 30 AD looked upon the prospect of having a king, rather like Denethor looked upon the prospect of Gondor having a king. And the Bible is IIRC fairly clear that, while the non-Jewish Pontius Pilate signed the order, he had no great desire to see Christ dead and did so mostly because the local Jewish community (or some subset thereof) demanded it.
[Response to John Schilling’s “Some of the Jews in Jerusalem ca. 30 AD looked upon the prospect of having a king rather like Denethor looked upon the prospect of Gondor having a king.”]
The Jews in Jerusalem ca. 30 AD certainly wanted a king. You wouldn’t have wanted the protagonist of the Gospels to be your king.
The writer of the Gospel of John explicitly and consistently portrays “the Jews” not merely as Satanic villains but as ignoble conniving contemptible Satanic villains. The Jesus-character himself explicitly denounces “the Jews” as hypocritical lying sons of the devil. The character’s foaming-at-the-mouth Jew-hatred might have a little to do with the anti-Jewish sentiment of those who think that this Gospel is the Word of the Word: it’s an easy deduction that God hates hates hates the Jews.
In what translation, though? Does Jesus actually denounce “the Jews” or is it “the Pharisees”?
Anyway, the foaming-at-the-mouth hatred might also have something to do with the fact that c.100-150AD Romans were going around killing all Jews on sight, so if you were a new cult based around worshiping a dead Jewish guy whose main schtick involved saying “I did not come to abolish the Torah”, you had a strong incentive to say “But we’re not Jews! We HATE Jews!”
Well — in John it’s “the Jews”, in the Synoptics the Pharisees. Note that Judaism after 135 at the latest IS Pharisee-ism, and this would have been quite clear to all Christians. So Synoptic anti-Pharisee-ism is going to be understood as the divine declaration that Jews are nasty hypocrites.
Another point — an enormously guilty conscience must attend the project of turning a nation’s sacred literature against that very nation. Imagine a new religion proclaiming that the Upanishads are indeed sacred literature but that the Upanishads are to be understood as communicating Brahma’s intent to discard the unworthy, spiritually darkened South Asian people. (Or use the Mahabharata if that works better — there’s a more national feel to it.) Or that the Zend-Avesta is holy writ carrying in its essence the declaration that the Persian people are vile conniving liars despised by Ahura-Mazda. This is what Christianity does. So, if you’re a Christian you’re likely to worry, somewhere in the back of your mind, that you’re committing an ongoing truly hideous intellectual crime against the Jewish people, and to justify yourself and allay your guilty feelings you’re going to try your hardest to believe that the Jews deserve what’s being done to them.
Do people feel guilty about turning “all men are created equal” against the slaveowners who wrote it? Is there a haunting guilt at the ongoing truly hideous intellectual crime there?
checks back of mind
Hmmm, nope! No guilt here, and I’m Catholic. 😀
@suntzuanime: Oooh, nice.
Yeah, suntzuanime pretty much wins the thread.
@suntzuanime I don’t understand the analogy. In that case hypocrites were the original authors.
“Another point — an enormously guilty conscience must attend the project of turning a nation’s sacred literature against that very nation. ”
I don’t see any evidence of that. I’ve read somewhat about the history of anti-Semitism, but I’m hardly an expert. However, I think that attitude is very modern, and when I say modern, I mean influenced by Social Justice.
If you’ve got evidence of that sort of guilty conscience, I’d be interested to see it.
I believe people used to be a *lot* more unconflicted about bigotry. Modern bigotry evokes fear and hatred. I believe I’ve seen a difference from the old raw stuff, which is completely uninhibited about evoking disgust as well. The old style isn’t gone — and may be coming back — but it used to be typical.
What I have seen is uneasiness about turning against God’s chosen people.
Yet again, I recommend Anti-Judaism: The Western Tradition, which is about how non-Jews have seen Jews.
I’ve only read about a quarter of it, but there were early centuries where Christians were just twitching about how to deal with the fact that Jews hadn’t converted to Christianity.
The “God’s chosen people” thing is going to make the outgroup intensely offended if they think about it very much and they’re not invited to join the chosen people.*
Religions other than Christianity and Islam take the first fork: not thinking about Jews very much. In India, Hindus called them something that translates roughly as “no-Saturday oil-pressers”: just an occupational caste with a notable taboo.
Conversely, if you read St. Augustine’s The City of God, he goes on about how racism is bad, then has to explain why God, the Good, would invent a national religion (just a temporary stage of His plan, after which we could all be the chosen people).
*Did Judaism proselytize in the Hellenistic period and first century of the Principate, or was it always an ethnocentric religion? That’s something I’m vague on.
They most certainly did, yes.
It doesn’t sound so bad when your neighbors are henotheists, though, does it? I don’t know… well, anything about the ancient Near East that didn’t come up directly in my Bible classes, so maybe dndnrsn or someone should weigh in. But it seems to me like the Jews could have easily been (mis)interpreted, “Oh, you have your god then and we have ours.”
@Le Maistre Chat
Could you pass along a cite for that? I’d love to read it.
On the one hand, Jewish religious authorities in Jerusalem at the time would see it as being very plausible that some rabble-rouser might bring the hammer down on them, and conclude that stuff that would make the Romans jumpy is bad for everyone. (Making Jesus Christ Superstar, bizarrely, one of the more plausible Jesus movies, at least in that regard). That he was challenging the currently-existing religious authorities would have added to it. The historical figure of Jesus seems to have caused some sort of kerfuffle in the Temple, and this appears to have been the cause of the events leading to his death.
On the other hand, the accounts of Pilate’s involvement in the Gospels aren’t necessarily historically reliable. I think (I’d have to go look it up) there are some other historical references to Pontius Pilate; he doesn’t seem to have been a very nice guy. The Roman imperial authorities weren’t generally in the business of erring on the side of mercy; execution for fairly minor crimes on fairly limited evidence has been pretty normal in many places and times.
To my eye, the most plausible explanation is:
1. Jesus builds up some kind of (mostly rural?) following, there’s stories about him being impressive, he’s saying some stuff that’s a challenge to the religious authorities, maybe some stuff that could be read as a challenge to the imperial authorities, and then he goes and causes a ruckus in the Temple.
2. The religious authorities, worried that some sort of trouble will start, then riots, then the Romans smash everything to ensure the rubble is nice and orderly, decide to grab this troublemaker.
3. They hand him over to the imperial authorities, saying “hey, this guy is causing trouble and fomenting possible rebellion.” The imperial authorities quickly decide to have Jesus executed.
As I recall, the destruction of Jerusalem ultimately came about because of a ruckus that turned to riots that turned to the Romans going full legion, so the Jewish religious authorities weren’t wrong…
What else have you got?
Tacitus says “Pontius Pilate had Jesus Christ put to death”.
Josephus says “Pontius Pilate had Jesus Christ put to death because powerful Jews asked him to”.
Four Gospels say “Pontius Pilate had Jesus Christ put to death because various Jews asked him to; he didn’t really want to but they insisted”.
And that’s it.
The question we are asking is, why are Christians inspired by Christian theology specifically claiming that Jews are responsible for the death of Jesus? Are you really surprised by the discovery that Christians inspired by Christian theology do not reject a belief supported by all direct documentary evidence, including their own scripture, just because those documents “aren’t necessarily historically reliable”?
I agree, the documents aren’t necessarily historically reliable. Some of the authors may have been fibbing, and some of the details seem to have been inserted by later editors. But that’s all that the Christian community has to go on, for answering the question “who killed the mortal incarnation of our God”?
I’m not surprised, but I’m noting that as a scholarly matter:
1. The authors of the four gospels we have (as well as some of the noncanonical ones; I think there are some that make John look positively philosemitic) had their own reasons to be hostile to Jewish religious authorities, starting with the major ones and working their way downward as Christianity becomes an identifiable thing.
2. Meanwhile, there’s an argument they had a reason to want to look respectable in the context of the Roman empire and its worldly (I don’t want to say “secular” because that has different connotations today) authority. Making the story’s major Roman come off as relatively guiltless would work along with a tendency one can see elsewhere to make the message less disruptive to worldly authority.
3. I’m no expert, but my understanding is that Roman imperial rule could be pretty harsh, and that a local authority having a guy presented to him as a rebel or potential rebel wouldn’t have much reluctance over having the guy put to death. We don’t know much about Pontius Pilate specifically: besides a few written sources, our only evidence the guy existed is an inscription somewhere (I think).
So, it’s a speculative argument based on a little external context, some reasoning, etc. A lot of Biblical scholarship is (“what would this author have had an incentive to do?” leading to “this author would have had an incentive to put this in, so we can take it out”). Ultimately I’m confident (say, 80%) that what went down way back when is: Jewish leaders think (fairly rightly) that this guy is going to cause trouble for all of them in more than one way and that for the sake of the nation this Jesus must die. They hand him over to the Romans, and their boss man takes a moment to think about it and says “yeah, have some guys crucify him.” So they do.
This is entirely separate from the question of, historically, how the anti-Jewish tradition in Christianity developed, which is a lot further from my wheelhouse.
You need to differentiate motivation and justification, although in some cases that is blurry: an instigator, such as a medieval king motivated by the chance to seize Jewish assets, uses theological history (i.e., not actual theology, but a call to avenge religiously important historical events) to rile up a mob of unsophisticated peasants, who remember and pass on the justification.
However, modern Christianity, in my experience, is on net quite pro-Jewish.
I think that this variety of racial/cultural hatred uses any weapon to hand as an argument.
For a long stretch of history, when “Christendom” was a description of a culture dominated by the Church, and Jews were a distinct ethnic/religious minority that lived in separate communities, charges like “these people are representatives of the ones who killed Jesus” were easy to use by those who wanted to stir up a crowd of mildly-antagonistic-towards-Jews townsfolk to perpetrate some violence on the nearest Jew.
In the Gospel according to Matthew, the narration of the crucifixion of Jesus includes a quote attributed to the leaders of the Sanhedrin, “May [Jesus’] blood be upon us and our children!” Again, in a cultural context in which such things are well-known, this is an easy rhetorical weapon to use against any Jew who is nearby.
I get the impression that this attack on Jews used to be common in the Catholic world…it might also have been common in the Orthodox world.
That particular attack on Jews is not nearly as common among Protestants. Mostly for cultural-history reasons: the culture of various Protestant countries has developed cultural antibodies against religious persecution. (Protestants aren’t immune to those behaviors; ask anyone from Ireland…and ask why Parliament barred any Catholic from wearing the Crown in England.)
The concept of a government and society that doesn’t punish people for practicing the wrong religion was developed in Protestant-majority countries. It was strengthened in North America by cultural memories of running away from persecution in the Old World.
“The concept of a government and society that doesn’t punish people for practicing the wrong religion was developed in Protestant-majority countries. It was strengthened in North America by cultural memories of running away from persecution in the Old World.”
Was it though? A lot of the early examples of “edicts of toleration” were actually in (at the time) Catholic majority countries (notably France, Hungary, Bohemia and Poland).
I’m more familiar with the English-speaking world, both positive and negative.
The impression that I get is that after spending more than a century trying to demoralize/remove Catholicism from the culture (and a Civil War, and Parliament outlawing Catholics from wearing the Crown), the English-speaking world began to act as if King/Parliament could not see into the minds/hearts of individuals. Thus, it was hard to judge the veracity/usefulness of enforcing religious conformance by the edict of the King, or an act of Parliament.
This idea took a long time to apply to all circles of life. It was likely codified into law, officially, after similar actions in France/Hungary/Bohemia/Poland.
But I do know that settlers in North America (from Catholics in Maryland to Puritans in Massachusetts and Quakers in Pennsylvania) eventually decided that it was not a good idea to use governmental/social opprobrium against religious minorities.
The history of the United States isn’t perfect on this front, either. (The history of official and unofficial violence against Mormons are an example to cite.) But the general trends are against such things.
Thus, I wouldn’t expect to see religiously-motivated hatred against Jews, especially based on the Christ-killer meme, anywhere in North America.
Pressing a point I made above — Puritans and Jews are psychologically so similar that they’re naturally going to like each other.
Anyway, being “religious” in America is like playing Dungeons&Dragons; it’s all an intense imaginative game; you’re not going to get into fist-fights with people who play other games of the same general type — you’re probably going to like them.
When I was a kid, I read Vance Packard’s The Status Seekers, and was surprised to find that in the US, which Protestant denomination a family belonged to was a matter of income level, and if the income level changed, so did the church the family went to.
Didn’t these people take their religion seriously?
Probably as well that they didn’t, considering the religious wars of Europe. America is a miracle in that regard.
There’s a shift, I think in the late 19th/early 20th century, from a religiously-based anti-Judaism, to a biologically-based antisemitism. The former is about stuff like “they killed Jesus” while the latter is the sort of stuff that is familiar today: racist theories, conspiracies beyond blood libel and occasional peasant grousing about moneylenders, narratives of one biologically-determined people struggling against others, etc.
One example of these two hatreds clashing is, during Nazi persecutions of Jews, while relatively few Christian churches objected to what was happening to the Jews in general, many were upset that Jews who had converted to one denomination of Christianity or another were still targeted.
Sticking large-scale fantasy in the Bronze Age:
As you may know, I’m running a Greek myth-centric D&D campaign, set in our Late Bronze Age with (at least) the supernatural elements the ancients thought it had. That means things like, in the 1400s BC, “Dionysus led an army of korybants, spellcasting sea dwarves, centaurs, cyclopes, Pans, satyrs, nymphs and humans against the Indians of Syria and India.”
… well then. How far can you go with supernatural military units before they would ruin history? What if I want to say that the line of underground cities running from modern Nevsehir on the south side of the Red River south to Kaymakli, Derinkuyu and beyond was a line of Hittite fortresses that had to be entirely underground because they were at war with an enemy to the west that had, like, dragons or such?
Well, how effectively can these units be countered? If the underground fortresses counter the dragons just fine, I don’t think history is ruined. But if the presence of dragons is destroying the economy of the near east or threatening human extinction or something then you’ve got problems.
Nah, nothing like that. Just some biological bombers with the implications that would have. Any economy-destroying monsters would live in a prehistoric area screened from the near east by layers of civilization (maybe the poorly-documented Trojans had a Black Sea Fleet to protect civilization from the three-, six-, nine- and twelve-headed dragons north and west of the Black Sea? 😛 )
Oh man, I love the image on that page. Can’t tell if Slavic or the cover of a Dragon Ball manga.
I think one line is that supernatural creatures can’t be allowed to reproduce by killing people. For example, starting from just one vampire, you can have that one generate new vampires and install one in every village and have it mind-control the populace. Starting from just one werewolf, you can give some enemy nation a really bad werewolf infestation. Starting from just one shadow, you can — well.
Saying that the bad monsters live somewhere far off won’t help, because one of the “civilized” nations is going to send adventurers to fetch one of the nasty monsters back as an ultimate weapon.
Thanks; this would only be a concern for certain undead, as contagious werewolves are just too silly.
I think you can use contagious monsters, but only as existential threats. My fantasy Europe scenario, for example, had a zombie plague replace the Black Death.
Well, a partial solution is that the newly created monster doesn’t necessarily obey the first monster. If you’re a vampire and create another vampire to rule the village for you, he may decide he doesn’t like being your vassal and start rebelling against you.
Intelligent monsters may end up being very careful who they make into monsters. This also leads to the modern idea of a hidden society of vampires who police each other. One of the crimes in such a society may be to make another vampire without permission, since that’s a threat to the order. If they’re a really nice vampire society, it may only be a crime if the new vampire goes rogue.
Also, I disagree about contagious werewolves. Being turned into a werewolf when bitten by another werewolf is even more a part of the whole idea than it is for vampires.
Since when, though? All the oldest examples I can think of have werewolves happening by divine power/magic, with no mention of contagion.
In the case of Lycaon, after whom lycanthropy is named, there’s an unspoken implication that there were werewolves after him, but it’s never said that they were made by people surviving getting bitten by him.
Herodotus (skeptically) reports that the entire tribe of Neuri in northwest Ukraine transform into wolves once a year, which was related to being magicians.
In Marie de France’s lay Bisclavret, the werewolf (whose transformation is weekly, and permanent unless the doggo puts his human clothes back on) rips his wife’s nose off without her turning into another one.
I will second this and add:
Your mythical creatures cannot be superior to humans in every way. If you have a dragon, don’t make him Smaug and thus intelligent. If you have vampires, make them the extremely limited cannot-cross-rivers, cannot-walk-in-daylight type. Etc., etc.
The key to framing mythical beasts without causing some new thing is that you balance the advantages and disadvantages. IMO a good choice will be any of the canonically stupid beasts, or something that has a weakness exploitable with the period’s technology.
Tolkien’s dragons were primordial: older than the earth, and undaunted by time. You can have dragons like that in the world without them taking over the entire setting — they are genius loci or personified natural disasters that can befall a people or a city.
Yes, but you can’t have them have only a minimal impact on history. The single dragon Smaug significantly affected the development of Middle Earth in the Third Age.
I’m taking it for granted that at least one monster had an impact on history, during the reign of Ahmose I.
Tolkien dragons aren’t older than time. They were created by Melkor.
Smaug significantly affected history because he liked treasure- something only humans and similar creatures can produce. A supernatural monster that doesn’t care about people or their stuff would have far less impact.
Alternatively, a sufficiently powerful king could be replaced by a powerful monster with an interest in ruling people, as long as the monster isn’t excessively long-lived.
That is the wrong question. The question you should be asking is “How much fun could I have completely derailing history?” – and the related “Why did magic show up at this time?” – because if magic has always been there, you do not get ancient Greece at all, you get some other culture who won the race to deal with magic most effectively without having some other mortal flaw.
I don’t know how to answer that, because in a mythic worldview, magic was always there and may even be waning over time. Like in the Iliad, Nestor and Diomedes talk like Greece and the west coast of Asia Minor have been completely cleared of monsters, leaving gods as the only supernatural element in their lives.
What do you mean by “ruin history?” If you simply mean “The course of history can plausibly lead to the present day without major disruptions like the Dragon Empress conquering America” then you can get away with a heck of a lot. After all, if one side has dragons but the other side has giants, the overall military balance of power might well be the same. The main question is how much of the historical record you’re willing to handwave away as being lost to the mists of time.
(“Why don’t any of the Hittite records mention dragons? Did nobody think it was worth writing down?” “They did, but all the records got burned up in a dragon attack.”)
However, if you mean “no evidence of the supernatural is easily visible in the historical record,” then you can’t add a whole lot. A city burned down by dragonfire will leave burned ruins that archaeologists might find. Dead centaurs will leave skeletons. If the cyclops wield hills for their stony spears and cliffs for shields, that’s going to leave some suspicious-looking geology on the battlefield. Not to mention all the other documents that might be affected besides the story of the battle itself.
Yes, that. Because the “archaeological evidence would be abundant and last hundreds of millions of years” guy wouldn’t let me do anything fun at all.
In the last .75 thread, bean gave us his initial impression of the Lion Air crash, provisional on what we learn from the flight recorder. I think the flight recorder has been recovered in the last couple days, so I assume an update is coming whenever the details are made public. In the mean time:
The coverage in the news is currently focused on the dispute between Boeing and Lion Air about why the automated trim system that apparently led to the crash was not mentioned in the manual[*]. How reasonable seeming is Boeing’s decision here? And how serious is the dispute or is it just media sensationalism?
[*] The media never gives you enough details for even a layman like me. From context, they seem to be focused on the “cross-train a pilot from a previous model of the 737” manual. I assume there is also a “how to train a pilot from scratch” manual. Since Boeing’s argument seems to be “from the pilot’s seat, the system (and importantly, how to turn it off when it malfunctions) is the same as on previous models, so no additional training is necessary,” how reasonable that is depends rather highly on what manual, exactly, we’re talking about.
I don’t know whether this is transferable across disciplines, but over here in the software world, saying something is documented in the manual is a really weak statement, the lowest grade of ass-covering. Manuals are massive piles of information, really only intended for reference purposes, and no one ever reads them cover to cover. If something goes wrong with a system and your answer is that the issue is documented in the manual (and the user should damn well have read it), that may count for something in court, but that’s all. And chances are it will never get to court; you’ll simply lose the account.
Note: it’s the voice recorder that is still missing. The flight data recorder was found, and provided the data used in the report.
I think Boeing is mostly in the right, although I do have something of a conflict of interest on this one, because I used to be on the manufacturer side of the airline industry. Basically, there was a checklist for “what to do if the stabilizer trim is misbehaving”. Doesn’t matter why it’s doing that, the checklist works. The crew on the previous flight followed it, and managed to land safely. The crew on 610 didn’t, although they had plenty of time, as they were in control of the plane for 10 minutes after the problem started. Unless the checklist didn’t work (which is unlikely, but we won’t know for sure until we get the CVR) and/or the previous flight was flown by the Indonesian equivalent of Chesley Sullenberger, then the main cause is bad airmanship and Lion Air is blaming Boeing to cover for itself and its pilots.
That said, I think Boeing may have been a bit too aggressive with its “no training needed” approach to the MAX. It certainly wouldn’t have hurt to at least mention the system somewhere obvious.
Hm. I did a quick google search before posting and the dates on the “X Recorder Found” articles that came up were dated “one day ago,” so I guess I assumed I had the correct value of X instead of slow reporting.
How much of that is a reaction to Boeing being forced to scrap their clean sheet replacement for an iteration of the 737?
From everything I’ve seen, the 737 MAX was managed very conservatively, mostly because of how many problems the 787 had. But I don’t think that (and the scrapping of the Y3 project) was driving this. “No Training Needed!” is a much better marketing line than “Only A Little Training Needed!”, pure and simple.
What is the compliance with these kinds of checklists usually like? Are they generally followed well (with consequences for people in charge if they don’t comply even if nothing happened etc.), or are people generally more lax with them because modern airplanes have multiple layers of safety? I can imagine a world in which checklists are overblown with unnecessary entries or steps by the plane manufacturer to avoid liability, and people are expected to have the tacit knowledge which parts are actually important.
I should start by saying that I’m not a pilot, and that I worked in structures, which is the part the pilot has the least control of/interaction with. But at least in the US, compliance is very good. People forget things, and this is drilled into pilots. I don’t know what the situation is like in Indonesia.
That said, emergency checklists are a somewhat different situation than normal checklists, and one I’d expect to be followed more closely. Pilots don’t have to break out the “uncommanded stabilizer trim movement” checklist every day, and if they need it, then they really should be following it precisely.
And aviation safety as a whole is one of the better fields I’m familiar with at focusing on effectiveness, and when they get distracted and start chasing shiny things, it’s more likely to be too much safety. The thought of planes crashing focuses the mind wonderfully. So checklists tend to focus on what’s actually necessary, and not merely protecting whoever wrote it from lawyers. I think one of the main drivers of this is that the general public doesn’t get to play in the field much (which keeps the sort of people who drive stupid warning labels out) and everyone has the same basic incentive to not crash, because it’s bad for the whole industry.
Don’t forget, the liability here for the manufacturer is when the plane crashes. Larding on extra steps won’t provide them with any more protection in a lawsuit, and may actually make a crash more likely (increasing their exposure). An unwieldy checklist is one that will have steps missed.
There are some checklist items that you want to memorize. For example, in the event of an engine out in a small plane, you want to know from memory what your best glide speed is so you can trim to it immediately before digging out the physical checklist. In that situation, that time can translate into extra glide radius to find a safe place to land. I think there are similar types of things for airliners–where seconds count, the crew should know the first steps from muscle memory.
That’s not what happened here. The plane flew for about 10 minutes after the system first started trimming down, and they retrimmed manually many times. They stopped for unknown reasons shortly before the crash.
Consider the possibility that the portion of users whose lives are destroyed by hard drugs is actually a small minority. I remember once an Italian city, I think it was Rome or Milan, decided to estimate how much cocaine use there was in the city by analysing the sewer water. The estimate they came up with by that method was something like 10 times the usage rate that the police were estimating. Why such a huge disparity? Well, the drug users most likely to interact with the police are the fuck-ups, and if the fuck-ups are only a tenth of total users, then police would tend to grossly underestimate total usage.
Now turn this around and suppose that people who associate with the drug using communities have a much clearer picture of things. They can see that 90% of users are fine, and only 10% turn into dysfunctional fuck-ups. They know they themselves are not fuck-ups, so they reason that they can be like the vast majority and use hard drugs recreationally without ill effect. Some portion of these reason erroneously, and turn into the aforementioned fuck-ups, or else were already dysfunctional and the drugs merely aggravated the pre-existing problems.
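The selection effect described above is easy to sketch numerically. All the figures below are made-up illustrations (not from the Italian study): assume dysfunctional users are far more likely to come to police attention than functional ones, and that police scale up from the users they actually meet.

```python
# Back-of-envelope sketch of the selection effect: made-up numbers.
total_users = 100_000           # true number of users in the city
dysfunctional_share = 0.10      # fraction whose use is visibly destructive

p_contact_dysfunctional = 0.50  # assumed chance police encounter a dysfunctional user
p_contact_functional = 0.02     # assumed chance police encounter a functional user

dysfunctional = total_users * dysfunctional_share
functional = total_users - dysfunctional

# Users the police actually come into contact with
police_visible = (dysfunctional * p_contact_dysfunctional
                  + functional * p_contact_functional)

# If police implicitly assume every user is as visible as the fuck-ups
# they meet, they scale their contacts by the dysfunctional contact rate:
police_estimate = police_visible / p_contact_dysfunctional

undercount = total_users / police_estimate
print(f"police estimate: {police_estimate:.0f}, undercount factor: {undercount:.1f}x")
```

With these particular made-up rates the police undercount true usage by roughly a factor of seven, which is the right flavor of disparity for the sewer-water result, even though the exact factor depends entirely on the assumed contact probabilities.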
I question how accurate the sewer measurements are, though.
The aviation industry basically invented the checklist, as no other human-factors engineering technique was sufficient for safe operation once airplanes reached the level of complexity of the B-17. Pilots and other aircraft operators are generally quite diligent about following checklists in both routine and emergency operations, and those checklists are tailored to operational realities rather than legal ass-covering.
The manual should explain why things in the checklist need to be there, but the checklist should be sufficient and if the Lion Air crew didn’t follow it and in particular if they skipped a step simply because they didn’t understand why it was there, then that was clear pilot error. Checklist items are clear Chestertonian fences.
That said, if they didn’t execute the step because they didn’t understand what the checklist said and the information in the cockpit was insufficient to get them there, that’s another problem. This is where the voice recorder becomes important.
There are different types of checklists for different situations. “Runaway Trim” is an emergency which doesn’t allow time for pulling out the manual, so the checklist is required to be memorized by the pilots and accordingly it will be made simple to recall under stress.
Checklists by Boeing or large airlines (companies can have their own) are generally excellent these days. They won’t have any extra fat, they’ll be clear and concise and use standard terminology. Checklist discipline should be 100% but I don’t know about Lion Air. Apparently not that good.
I read an interesting article in The Weekly Standard from Sonny Bunch. One of Bunch’s last points is that television is a different kind of medium from film and books because of time. That is, viewers have long-term relationships with an ongoing television series, while films and books are one-night stands. We experience television as a community, watching it, talking and writing about it; it’s part of our lives while it’s on. And it’s an ex when it’s not, just somebody that we used to know.
This has all sorts of downstream effects. Television can afford to be long when it’s a part of our lives like this: the investment of time is worth it, and we need enough to sate us each week! But when it’s not on anymore, the investment is more daunting, and less worth it. Who goes back and watches hundreds of hours of old television? Why would you, when people are so much less likely to have watched it too, and even less willing to talk about it? Bunch gives the example of FX’s The Shield, a cop drama I’ve never seen and frankly don’t intend to. He’s got no one to talk to about it because, well, it got old. The Wire, he believes, is the best portrayal of American poverty there is, but how many folks are still talking about that?
Of course, week to week is how we used to experience television. Now we binge it on streaming services. This makes watching old television series easier, sure, but the returns have diminished even more, especially with the explosion of content the last few years. Why have we not been talking about Stranger Things in the open threads the last few months? Because it’s been a few months since it released, of course. You had to binge it right then if you wanted to be a part of the conversation—a little too late and you can kiss that socialization goodbye.
This is something I’ve thought about before, and Bunch lays it all out pretty nicely. He concludes feeling that television might just be too ephemeral to last, that down the line no one will be watching these shows, when that return has dropped virtually to nothing. But I think he overstates his case by considering only a few mediums. For one thing, films are not standalone—we’re in the midst of a ten year epic called the Marvel Cinematic Universe. The same could be said to a lesser degree for DC or the new Star Wars movies. Books are commonly written in series too, and I think this is part of the reason modern fantasy can afford to be so damn incredibly long. That venerable Internet medium known as the webcomic is serial in nature; so is the podcast. Blogs are, of course. Before the Internet, we had the great radio serials, and before those great writers published their books in newspapers, chapter by chapter—a practice that has returned as the web serial.
So I don’t buy that television is unique in this regard. The rest of Bunch’s analysis might hit home, though: will anyone be watching The Wire in a hundred years? I think the answer is, well, maybe. I know folks who go back and listen to those classic radio broadcasts. I’ve been on a Dostoevsky binge, as I mentioned last thread, and he published that way too. Indeed, Dostoevsky’s known for writing doorstoppers, and part of the reason he did is evidently the format, just as television is permitted to be multiple times longer than even the most drawn out film. I read web serials, including binging Worm when it was finished, and I’m not the only person who does that.
But we’re rare, all the same. Part of Bunch’s answer is that some works will be consumed because they just are worth engaging with, despite the lack of “returns” in the immediate. Some works are true classics–an estimation I think Bunch reserves for the fourth season of The Wire or a show like The Sopranos. Those radio greats my friends listen to are the gems, and while I think Worm too is good, the true test of that is time.
I’d like to draw some practical conclusions out of Bunch’s analysis:
1. As far as achieving immortality goes, it seems from the Botticelli example that an audience during serialization is less important than its critical assessment later on. I suppose this is true, but I don’t think the modern web serial writer cares. Having an audience is still a really good thing, it seems, for the sake of one’s motivation. As a writer, you gotta finish the damn thing before the question of whether future generations will extol it even comes up.
2. As far as consumption goes, the ponderous length of serial works is still a genuine stumbling block. Wildbow, I know, has been editing Worm the last few years for print publication. I don’t know whether the story will be any shorter after he’s fleshed out the time skip than it is now, but it damn well should be, because I’ve had too many friends refuse to even start it on account of the length, despite the promises that I will discuss every last thing with them as exactingly as they wish.
3. If the serial format works these wonders, serialize it again. If the anthropologists of the twenty second century want their friends to understand early 2000s culture, don’t assign sixty hours of video. Have them watch an episode or two a week and discuss it with them instead. Book clubs already do this in a way with particular canons, and websites like Tor have ongoing rereads of authors like Lovecraft and Sanderson. With Dostoevsky we can afford to package months of chapters into one big book, but that’s not realistic with television.
One more thing, which Bunch doesn’t really get to in his article. People of the world have a relationship with serial works, but serial works have a relationship with the people of the world too. I’ve always liked knowing what was going on behind the scenes of television shows—the real world happenings which precipitated the departure of Michael O’Hare from Babylon 5, for instance. If you read the author’s notes, you get to know what’s going on in Wildbow’s life too, and that can have an impact on the story, like when a family wedding and resulting disruption to his life drew the Toronto arcs of Pact out far too long.
So it would interest me to see a genre of commentary where the chapters of a serial work were placed alongside summaries of what was going on in the world while they were being written. So we can read The Brothers Karamazov knowing Dostoevsky’s life at the time and events in St. Petersburg. Is this already a thing? I know critical commentaries already do this sometimes, but I wonder if some works at least would benefit from a more focused look at current events, the way understanding a TV show seems to benefit from this.
You get at this some in your later paragraphs, but I think “serial” vs “episodic” is a better distinction than “long” and “not long”. Plenty of long but episodic tv shows live on in reruns. Stranger Things’ cultural relevancy has probably waned more in the past one year than Seinfeld’s has in the past twenty. But yeah, the combination of “dozens of new tv shows airing every season” + “serialized storytelling that offers much more depth of narrative, characters, and themes, but continually raises the cost of entry for new viewers” is a good formula for creating a variety of great shows in the now, but a terrible formula for lasting appeal.
Makes me wonder if there’s going to be an eventual shift back to more episodic tv. (Probably not, since the downsides of the serial format only exist in the future)
You’re right—this is a big part of why classics like The Twilight Zone (150+ episodes) are still watched. I also mangled this point somewhat by mentioning radio serials, since those were all episodic too.
I think the question of releasing TV shows all at once vs. doling them out one by one is distinct from that of serialization. There is no reason that Netflix couldn’t release one episode of Stranger Things a week. HBO does this with their shows, and I think it’s a better model, precisely because it creates the social effect you describe. You can gather around the watercooler on Monday confident that everyone has watched the latest episode. If you release them all at once, some people will watch them all right away, others won’t, and so there’s no collective experience.
With the newest season of Netflix’s Arrested Development, they released half the episodes in the summer and promised to release the other half next year. Same with Amazon’s new Tick series.
I think there’s going to be a lot of experimentation at finding the optimal release schedule in the age of streaming. However, the optimal schedule for profitability may not be optimal for longevity of the series. A slow burn could generate more lasting affection than a binge–or maybe too long between episodes causes audiences to lose interest.
I have taken some beloved shows of my childhood and introduced them to my children. Gargoyles and Exosquad, to be specific.
I don’t think there is a problem with only the best being remembered. As Gwern has argued, there’s a lot of fiction being produced, more than could be consumed in a lifetime. Although there are also more lives being produced, and more niche interests. But if we only pass on the greatest work of a medium from each year, or even each generation, that’s still going to end up with a lot of great content, and if excellent artists don’t achieve true immortality because they are eventually replaced, they can take solace from knowing that they informed the medium, genre, etc. in its development and spoke to people in their own time.
>So it would interest me to see a genre of commentary where the chapters of a serial work were placed alongside summaries of what was going on in the world while they were being written. So we can read The Brothers Karamazov knowing Dostoevsky’s life at the time and events in St. Petersburg. Is this already a thing?
It definitely is for some authors that old. The example that comes to mind is an annotated collection of Sherlock Holmes (another long-relevant work that was originally serialized!) that discusses what was going on in Arthur Conan Doyle’s life when he wrote/sold a given story, and possible inspirations for the story. I would be really surprised if someone hasn’t done this for Dostoevsky, unless it’s because his books are already so big?
This is really weird to me; pretty much all my friends and family and I regularly go back and watch old shows we missed the first time around and discovered through streaming. I’ve had lots of “Oh, you’re watching [X]? How far in are you? Oh man, I can’t wait to talk to you about… well, you’ll know it once you get there…”
Interestingly, the first thing I thought about on reading this essay was our recent discussion (in 115.75) on which anime series were worth watching. Anime TV series are heavily serial in nature.
One of the trends in anime is for distinct subgenres to form and evolve in the form of deconstructions and reconstructions. Several people had noted in the earlier thread that it makes it hard to get into a series that is a deconstruction of a genre without knowing the general tropes of the genre. However, this also makes it worthwhile to watch the prominent older series that started the genre. It’s harder with the less distinctive American TV series market, but there are a few places where this might apply. It would be interesting to see how well box sets of the earlier Star Trek series sold when the more recent movies came out, as that gives a distinct reason to go back and take a look at the earlier series.
Further to this, Netflix has recently announced that it will be streaming the original Neon Genesis Evangelion starting next spring. While Gainax has worked to keep Evangelion in the public eye over the years since its release, that only worked because the original was so influential. It’s a testament to the show’s influence that the fandom immediately picked back up where it left off (Maya best girl, btw).
What, you mean you don’t like your girls to have crippling psychological problems and/or apocalyptic powers? Why are you watching Evangelion, again?
Also, you’re lucky this is a culture war thread. :op
Oh man, I missed the anime recommendation thread!? Damn!
(And also, wow, the recommendations in that thread heavily diverge from what I would have shilled. Plebs, the lot of ye.)
As for the point you bring up, you can see this in action on the Mark Watches website, wherein the site owner watches series completely unspoiled. (Warning: Mark is very unapologetically pro-SJ, and it basically saturates his writing.) He recently wrapped up his watch through the entire Star Trek series, so you can indeed see his reactions to TOS as a modern viewer. He didn’t have any issues with appreciating the deconstructive anime classics he watched (NGE, Utena, Madoka), because the core themes of those shows are actually universal, with the genre elements serving as just one lens to reinforce said universal themes (fighting social anxieties and the societal reinforcement of them in order to make meaningful connections with each other, or as I like to joke, “Everything is AT Fields”).
This video talks about how genres must innovate or die. In that sense, anime’s continual deconstructions and reconstructions may be why they’ve maintained momentum where other genres have struggled in western live action.
There’s also an argument that the world is so connected that most people osmose knowledge of stories and tropes (often through social media) without ever seeing their original forms, so they are already subconsciously primed to watch deconstructions. People can read Watchmen without having consumed Golden Age/Silver Age comics, for example.
On the other hand, I don’t think people can watch the Rebuild films without having seen the original NGE.
(As for my weeb-ass animu recs: Kyousougiga, Mob Psycho 100, Ping Pong, Symphogear, Cross Game, Boruto, Log Horizon, Baccano, My Hero Academia, FLCL, Girls Und Panzer, Jojo’s Bizarre Adventure, Little Witch Academia, Pop Team Epic, and Chihayafuru.
I’ve thought about jokingly asking David Friedman to evaluate Spice and Wolf.)
I can’t believe I never thought of that; it would be right up David’s alley. I mean, a show about medieval economics and food? It was practically made for him!
It’s been suggested before, but he couldn’t find a way to view the episodes folks recommended online.
The first season is streaming on the Funimation site, I believe, while the second season is either on Funimation or Crunchyroll.
It would be curious to see if the dub mangled any of the economic concepts.
Also, the LNs are licensed, so we could go to the true source material, as it were.
Aw man, I love Spice and Wolf so much.
It’s interesting to see this thesis fleshed out in formal longform, because it’s my understanding this is a major theme in Infinite Jest, to reference a one-night-stand I have never read.
The thesis is also interesting to me because I have always had an aversion to serials. Granted, I have watched a lot of TV shows, and I almost finished The Wire and a few others, but I also felt inspired to write an essay repudiating the quality of Oliver Twist when I was 11 or so that attributed its awfulness to having been written as a serial. And even during my TV days, I felt insulted if a series drew itself out so long that it didn’t congeal into a cohesive message after watching. Naruto and True Blood were model organisms of shows that felt insultingly focused on extending their lifespan; Ergo Proxy and Moral Orel persuasively convinced me that their episodes added value to the story being told.
When I began to filter my media consumption away from compulsive fare and towards things that had a high meaning to investment ratio,* serials were the first to go.**
*Full disclosure, I often listen to audiobooks rather than read books, so I am clearly not entirely consistent.
**In all honesty the major goal was to move towards media that was less addictive, but this description is still accurate.
One of my favourite things about anime is that there are tons and tons of series that are good, worth watching, and over within a reasonable number of episodes. The sheer number of episodes produced in most western TV series is a principal reason why I seldom watch them. Of course there are also interminably long animes, but I don’t watch those either. Really the 12-36 episode series is the sweet spot. It’s long enough to tell a good story, not so long it’s hard to retain interest. Though I can do longer if it’s an exceptionally good story.
That’s where I am at too. If an anime is longer than 26 episodes, I squint. Longer than 52, it’s a no-go.
Welcome to the big leagues. Trump, it turns out, was a one-term wonder, and the next man in the White House is a rather more conventional politician. You are a fairly senior staffer from either State or the DOD, and your task is figuring out what to do about the US military mission in Afghanistan, which in 2021 is pretty much where it was in 2018. What will you advise the president to do?
“Mr. President, let me start by saying… how the heck did you make it through the 2020 Democratic primaries?”
In success likelihood order,
1. Slash the defense budget to the bone and channel it into DARPA with the hope that they can send time traveling ~~terminators~~ robots to change the outcome of Bush v. Gore before shit goes completely sideways.
2. Convince someone else to invade, GTFO, and hope the locals get distracted enough by their new oppressors to forget how much they hate us.
3. Legalize the opium trade and hope that the rest of the country suddenly having the highest GDP per capita in the world makes squatting in dank caves grousing about how much they hate us a less attractive option.
Tongue thoroughly in cheek.
3 would actually be practical, if politically impossible. Suppressing the opium trade and stabilizing the country at the same time is impossible. Abandoning the first goal could well make the second goal achievable.
Total Afghan GDP is only 20 billion dollars. We could pay double the market rate for opium and burn it, and probably come out ahead.
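The arithmetic here is rough but easy to check. The GDP figure is from the comment above; the opium farm-gate value and annual war spending below are ballpark assumptions for illustration, not official numbers.

```python
# Back-of-envelope: "buy the poppy crop at double price and burn it"
# vs. annual war spending. Only the GDP figure is from the thread;
# the other two numbers are assumptions for illustration.
afghan_gdp = 20e9            # total Afghan GDP, ~$20B (from the comment)
opium_farm_gate = 1.5e9      # assumed farm-gate value of the opium crop
us_war_spending = 45e9       # assumed annual US spending on the war

buyout_cost = 2 * opium_farm_gate  # pay double the market rate

print(f"buyout: ${buyout_cost/1e9:.0f}B, "
      f"{buyout_cost / us_war_spending:.0%} of assumed war spending")
```

Under these assumptions the buyout runs a few billion dollars a year, a small fraction of what the war costs, which is the commenter’s point: the exact figures matter much less than the order-of-magnitude gap.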
Not that I am opposed to the cautious, controlled legalisation of the opium trade, but I think that the response to that particular proposal is: Do you want cobras? Because that’s how you get cobras.
Of course that’s how you get cobras. But the US could pay the entire country to breed cobras and it would cost us less than the war we’re currently fighting.
Yes, that was the joke. That the merely physically impossible time traveling robots is more likely to succeed than the politically impossible options 2 and 3.
Grant it as a fiefdom to someone who served you well during your self-coup.
You don’t want to stay. US spending on afghanistan is considerably in excess of Afghan GDP. It’s hardly possible to find a less valuable place to fight over, and the place isn’t going to get any better.
You can’t leave. No president can afford to be the man who lost afghanistan. And even if he could, no president is going to be willing to take the risk of proving it.
So if you don’t want to stay and can’t leave, the best you can do is try to minimize how much money you’re pouring down the drain. Reduce US ground presence as much as possible. Maybe try to get the mission changed from a NATO mission to a UN mission for cheaper soldiers. Just outbid the Taliban for the poppy crop and burn it.
A more sensible constitutional arrangement wouldn’t hurt either, but it’s probably too late to set up the highly federalized system they should have.
I’m thinking the whole elections thing isn’t working out. The place is poor, backward, and corrupt, with no tradition of formal democracy.
Find a local regional boss who isn’t too nasty and who has no reason to love the Taliban, and let him take the place over. Keep him supplied with modern arms and training as long as he keeps the Taliban out and isn’t too nasty to the other ethnic groups.
That’s not terribly different from what we’re doing now. I don’t think democracy is the real limiting factor.
Trump has reportedly questioned our involvement in Afghanistan, asking generals “what the hell are we doing there?!” I think he would like to leave, knowing it is a waste of blood and treasure. However, should we leave, the immediate aftermath would be much violence from the Taliban against people who were previously dependent upon / cooperating with us, and the media would lay that at his feet, hurting his re-election chances.
Should Trump win a second term, I would wager he will pull out of Afghanistan, perhaps offering some assistance to well-vetted refugees.
Trump says a lot of things, but given the choice of spending 20 billion dollars of other people’s money and taking the chance of being accused of being the guy who lost Afghanistan, I’d bet an awful lot that Trump will spend the money.
Obvious answer is to draw down US presence in Afghanistan as far as possible, and convert what we’re doing into an “advisory mission”. Try to keep Kabul down to a dull roar, and basically forget the provinces, except for drone strikes and the occasional commando raid. Try to keep the cost of this down.
Less obvious plan is to stage a massive distraction. And when I say massive, I mean something that justifies us pulling out of Afghanistan to use the troops elsewhere. The most likely candidate is Russia. We can probably do this without actually getting into a war with them (which isn’t true for most other candidates) and it might actually be helpful. Cons to this plan include the fact that we’re playing brinkmanship with a major nuclear power.
>Cons to this plan include the fact that we’re playing brinkmanship with a major nuclear power.
I was going to say. Why not just invade Iran? 😛
That was actually my first plan, but I realized that it wasn’t going to save money or be any easier. Because you need a good plan to have a land war with Iran, then get out of Iran without having to impose regime change. Or to conduct a successful regime change there, which is even harder than getting out directly (although far easier than putting a functioning government in Afghanistan.)
What about help Iran invade Afghanistan? They’ll either succeed at governing the place or their regime will collapse, so win-win! 😛
The US has neither succeeded at governing the place, nor has it collapsed.
Assuming it would be impolitic to simply walk away and invite the Russians to have another go at it:
Turn Kabul into a green zone, stabilized by American and trustworthy allied security forces, and with anyone we think jeopardizes that security being told to leave. Make it the showcase for the best Afghanistan has to offer, which includes making it a safe(ish) space for Afghan women. Allow but do not require whoever most convincingly wins a fair-ish national election to house their government there, if they adhere to a security and friendship treaty. Projecting power outside the city walls is up to them.
From Kabul and maybe a few other secure bases in deserts nobody cares about, conduct drone surveillance over the rest of the country. Let the local chiefs, warlords, provincial governors, whatnot, know that we simply do not care what they do so long as their territory is not used to organize attacks against Kabul and/or American interests or host belligerent parties to any other wars America might be involved in. Otherwise, explosions will happen and there will be nothing they can do to stop it and nothing they can credibly claim as any sort of victory against the Yankee Imperialists.
Be as careful and selective as we can about those explosions, and occasional SOF snatch-and-grab raids, consistent with the requirement that there be no organized military action against American interests from Afghan soil.
This reminds me of descriptions I’ve read of Afghanistan in the 1960s. Particularly about modernizing Kabul. (I think I’m remembering this set of photos from William Podlich.)
If you like that, you’ll love Nasser on the hijab.
That is a classic.
I hadn’t realized Nasser could be so charismatic.
He’s also better than 2018 Democrats.
If you can add “or call yourselves the Taliban, or ally with people who do” you might have a winning formula. If everything outside Kabul is run by warlords who definitely aren’t the Taliban, it’s lamentable, but no one cares. If it’s run by the Taliban then America has lost a war with the Taliban.
I don’t see how this doesn’t turn into a Palestine-style game of chicken. They bomb Kabul from hospitals and orphanages and… we bomb them?
Which hospitals and orphanages? The ones inside Kabul are inaccessible to anyone on our likely-to-bomb-us list, and the ones outside Kabul are out of “bombing” distance for anything but missiles and long-range artillery. Moving that sort of stuff around under persistent drone surveillance ought to be hard enough that, if it doesn’t mysteriously explode long before it gets positioned in a hospital, shame on us.
The “Palestine-style game” is possible because the geography of Palestine means Israel was stuck with concentrating a million or so people who hate them, and all of their stuff, within easy walking / shooting distance of some of Israel’s population centers. The geography of Afghanistan puts Kabul in a narrow valley with lots of mountains between it and any place that anyone is going to build a hospital. You’d want the US security zone to go at least to the first surrounding ridgelines, but I’m not seeing the problem you are.
Is this possible? This implies you are able to vet the city extremely well and control the entry almost perfectly; that no one inside will grow sympathetic to the insurgent cause; and that the hospital won’t provide services to those outside the region controlled (likely causing resentment in the city borders).
I don’t know that our counter insurgency is up to that kind of work for a long period at high enough standards to not allow in the occasional bomber, though if I’m wrong that’s great.
Or maybe terrorism in Afghanistan is at very low levels compared to a decade ago?
That makes sense – I didn’t check a map before posting.
wrt making a green zone in Kabul: What’s in it for us? I assume that with some investment of money and lives, we can more-or-less do this (with occasional terrorist attacks inside our green zone, but probably not all that often). But why is it worth tax dollars or US soldiers’ lives to try?
I mean, for the same amount of resources (money, trained people, political capital, etc.), we could presumably do a lot of stuff in the US that would directly benefit Americans. It’s hard to believe that setting up a green zone in Kabul is anywhere in the top 100 ways to benefit Americans with that set of resources.
Fortunately, this would consume <<1% of America's resources, and "there are better things you could do with [$$$], therefore this is a Bad Thing that we shouldn't do" is one of the more annoying political fallacies. But since you ask,
1. Walking away was ground-ruled out by the original question and by my response.
2. Maintaining Kabul as a friendly enclave means not having to admit we were absolutely defeated in a war. That is a thing of real value to many people, some of whom are American citizens with as much of a legitimate say in this as yourself. Politics requires compromise, and if you can't compromise with those people by giving them something they can recognize as a sort of victory, you need to look into not being an American any more.
3. We're still going to have to keep making things in Afghanistan(*) explode in a sporadic and unpredictable manner, because we've seen what happens if we don’t. That’s going to have ugly political consequences, particularly when the things that explode are people. The politics are much easier to handle if there is a plausible benefit to all this other than naked American self-interest, and if most Americans in Afghanistan aren’t there just for the exploding and the killing.
* And every other place on Earth that doesn’t have a government willing and able to arrest people who openly organize terror campaigns against the United States.
Fortunately, this would consume <<1% of America's resources, and "there are better things you could do with [$$$], therefore this is a Bad Thing that we shouldn't do" is one of the more annoying political fallacies.
If you really accept that logic, I don’t think you can ever oppose anything frivolous or ill-considered the government might do. Indeed, I don’t think you adhere to this logic when discussing, say, Trump’s wall. Now, I would say that building a wall between the US and Mexico is dumb because it’s a waste of resources that won’t accomplish anything worthwhile. But hey, it will also surely cost less than 1% of our national resources, so there’s no reason to oppose it. Right?
Further, while I understand that there are some people who feel very invested in Afghanistan and don’t want to have failed the nation-building mission, I strongly suspect that’s not most of the voters. Hell, offer the voters a clear choice of pulling out of Afghanistan and building the wall, or building Green Zone Kabul and not building the wall, and I figure we’ll probably end up with a big wall (with T-R-U-M-P in 20 foot high gaudy gold letters).
Finally, 9/11 seems to be a justification for invading, occupying, bombing, kidnapping, etc., people everywhere, all the time, forever. It’s gotten us into a war you and many other people here think we must never ever end. I think this is just wrongheaded and dumb. I don’t think there’s any policy we can follow that guarantees we won’t have another 9/11 scale attack, and I’m skeptical that our current policies are actually decreasing the probability of such an attack relative to a much less violent interventionist foreign policy.
That sounds like the British policy in Aden. The city itself was a colony but the hinterland only had protectorate status. Tribes were left to govern themselves, aside from their relationships with other tribes that had to be peaceful. The RAF went on sorties to punish troublemakers; sultans and chieftains that were cooperative, on the other hand, were given prestige gifts (rifles were common, I believe).
Step 1 is get out of Afghanistan. Taliban wants to take back over? Don’t care.
Step 2 is to try to sucker Putin into invading.
There is no step 3.
The Russian government would have to be really dumb to fall for the same trick twice in less than half a century.
One potentially important difference might be that last time the US was actively opposing them by supporting/funding the mujahedeen, while this time we wouldn’t be. I’m not sure how much of a difference this would make in their calculus, but Putin obviously wants to expand Russian interests back to something closer to what they were in the Soviet era, while this time around we no longer have the specter of expanding Communism to worry about, so I can at least theoretically see this as a carrot on a stick that he might be willing to follow.
To be fair, we keep repeating our mistakes.
Which pretty much guarantees they will fall for it. Just to remind you, they’ve recently started a war for no good reason whatsoever, tried to claim it’s not actually a war and that they’re not actually fighting there, and got sanctioned when it predictably didn’t work. Without anyone lifting a finger to trick them into it. Then did exactly the same thing again, in less than half a decade. If they didn’t fall for that trick, it would only be because they’d found some even dumber way to screw up.
(I would never make such a comment about another country’s government, ofc)
If the US and its allies pull out, does the Taliban end up in charge again? How much could we influence that result by only spending money, and not (our) lives?
Why do we care?
1. The US doesn’t want anyone using Afghanistan as a base for attacks on it or its allies.
2. The US doesn’t want to be seen losing a war.
3. The US would prefer that Afghanistan be a country that reflects the US’s own values, which Afghanistan mostly wouldn’t be by default.
My sense of it is that number 2 dominates right now. The US would leave if they could claim victory in doing so. Part 1 is probably already accomplished. The Taliban would have to be crazy to allow anyone to use their territory to attack the west now, after all they have suffered.
I do not care. Maybe I once cared about getting bin Laden, but that was a long time ago. But next year we’re going to have 18 year old kids shipping off to Afghanistan who weren’t even born when 9/11 happened.
The Taliban were never in charge of all of Afghanistan, even at a time when they were receiving enormous amounts of aid from Pakistan and there was no foreign support for the opposition. They are overwhelmingly a Pashtun organization, and the Pashtun are only around forty percent of the population. If we pulled out tomorrow, the Taliban could seize Kabul along with much of the rest of the south of the country, but with even minimal foreign assistance a government representing the other ethnic groups could easily beat back any offensive aimed at taking the whole of the country.
In fact, the Taliban leaders and the Russian government had a nice chat in Moscow just over a month ago. You probably wouldn’t even need step 2.
Bring our troops home.
Use the money thus saved to try and mitigate the political hit of being blamed for whatever happens next.
Keep on keeping on. It’s not a “winnable” war without a large and sustained commitment of US forces that you aren’t going to have without some sort of national draft. However, the Afghan central government can be supported without those huge contributions, and can prevent the Taliban, Al Qaeda, ISIS-K, or anyone else from taking over the nation. Eventually (over the course of decades) there will either be a political settlement, or some Taliban leader will adopt some wholly moronic strategy and get most of the group killed.
Over time the Afghan government should build support and competence that will require less commitment of US troops to defend the whole nation. Our footprint there right now is more like 15,000 as opposed to 100,000+.
Super popular? No, but Afghanistan isn’t in the news anymore, so you aren’t taking much of a political hit. The news cycle is more concerned about Russians and Chinese and domestic shootings.
Assuming that the world would be crazy to put me into a position of such power, I would propose a crazy solution.
Find some way to devolve the current nation of Afghanistan into smaller political units, closely tied to a regional/tribal entity that can claim to validly represent and have political motive to protect its members.
We’ll turn the current nation into multiple smaller-scale versions of the current trouble…but they’ll be keeping a wary eye on each other, and likely making less trouble for the wider world.
That’s an interesting idea. The Taliban is mostly or entirely Pashtun, and while that’s a big group, it’s not a majority of the country. Splitting the country along ethnic lines might actually work. My impression is that specifically Afghan identity isn’t all that strong in the first place. The Tajiks are Tajiks first, maybe Muslims second, and Afghans third.
The problem is that all the local allies NATO tries to make in Afghanistan have a dreadful habit of defecting to anti-NATO forces. So we need to build local structures of authority that cannot do that.
So, I give you: Project Amazonia. You know of the YPG?
That, with the best instructors that can be found. Recruit Afghan women as police and armed forces. Ongoing education during service, with the goal of retiring out into teaching/civil service.
Arm them to the teeth, teach them to read, and generally speaking educate the heck out of them. Sure, it will take a while to turn them into a proper army, but at least it will be a loyal army, because they cannot defect to the enemy, and every time you kick down a particularly horrible stronghold, you get fresh recruits.
And piss off the Turks?
Turkey is a NATO member, so the YPG (I think you mean the YPJ, if you mean all-women?) already count as an anti-NATO force. Anyway, I don’t think you could use women like that. Women are active participants in a culture, not hostages yearning for access to guns to make the place more feminist.
Besides, why would you expect the locals to make life so difficult for the Kurds that they wouldn’t remember: “Wait a minute, I’m heavily armed and in a proper, disciplined army… and this is definitely not Kurdistan. We should probably fight where there’s more than zero Kurds. Rojava could use a standing army.”
And then you’re back to pissing off the Turks.
The thing is, in a lot of these places women are in fact hostages yearning for the chance to kill their oppressors.
The Islamic State practiced outright sex slavery, which is where the women’s militias get a lot of their recruits: very angry victims and mothers of victims. Given how much of Afghanistan is overrun by warlords with extremely medieval gender politics, finding enough very, very angry women to form an army should not be difficult.
The difficulty is making them a disciplined enough force that they don’t go out and commit their own atrocities at the first chance they get.
What’s difficult is to ensure that you don’t recruit any women (in significant numbers) who don’t resent their traditional roles, and who will defect.
That sorts itself out: picking up a gun and putting on a uniform is a pretty radical repudiation of traditional gender norms in the first place, and a fairly irrevocable commitment in the second, since traditional society is not going to welcome them back. So it is victory, death, or exile. (If this plan fails, you are going to end up with an all-female foreign legion. Which… well, that is still useful, I suppose.)
Maybe, if Iran didn’t happen to sit right between where those feminist Kurds currently are and Afghanistan. But at the moment you’d have to fly the YPJ in. And once they’re there, they would have a hard time getting back to Kurdish territory, which makes it unlikely, I think, that they would want to go in the first place. What’s in it for them? Somehow winning a bigger army of women in Afghanistan to fight for Kurdistan? Making Afghanistan into some kind of Kurdish haven?
If the US negotiated a sweet peace deal between the Turks and the various Syrian and Iraqi factions, and the Kurds ended up with their own state, then this might be politically feasible. Or if the US came to the conclusion that Turkey is really overrated (which I don’t think it is), then this plan would be more politically feasible.
And Iran and the US deciding to become best pals would help, too, with the logistics aspect.
But then it’s still at most 10,000 brave women trying to somehow win the hearts and minds of 35 million people who have a very different culture from their own, and who speak a lot of very different languages.
Ah, I see the mis-communication. I mean, “Build an afghan force patterned on the Kurdish defense forces”, not flying the actual Kurds in, or at least, at most some training cadre.
And there appear to be, well, largish groups of women in Afghanistan who would likely sign up: honor culture and constant fighting mean women have horrible things happen to them, and then their homes disavow them because they got raped, and similar massive injustices.
The key idea here is to build local forces consisting of people whom tradition and radical Islam have screwed over extra hard.
If ISIS could make up some excuse for why raping sex slaves was compatible with Islam, the Taliban would also be able to decide that taking up arms as a woman is OK as long as your plan is to defect and join them (or give your weapons to your husband, who will join them).
Afghanistan is a country which murders rape victims. Or forces them to marry their rapists. That ploy would literally be unthinkable to anyone not thoroughly westernized already.
A much greater danger is, well, if you build this force structure, one bright morning, they are going to coup the government in Kabul, on grounds of its horrid misogyny.
1. Slave concubines are a normal feature of Islamic law. I’m not sure it’s legally legitimate to enslave fellow Muslims, but I’m pretty sure non-Muslim enemies, including women, are fair game.
2. I don’t see any problem in Islamic law with women taking up arms. There was a famous, probably fictional, early Muslim “female knight,” as well as at least one historical case of a woman who helped defend Mohammad in one battle and later went off to fight somewhere else.
I feel like there simply has to be a problem with this plan, because it’s got that “this is too cool to happen in real life” feel to it. I just can’t actually figure out what. It makes an awful lot of sense.
I guess it depends on how many women you could convince to actually pick up a gun. In theory, the benefits of doing so would seem overwhelming. But in theory, we in Western countries should be having feminist shooting sprees every hour on the hour, and yet, to my knowledge, the last one was back in 1968, and it was sort of half-hearted at that.
To get out of Afghanistan and forget it ever existed.
I don’t see any good reason for the US to be there or to have been there (or rather I don’t see any reason, that would be worth the cost, headache and backlash).
The president wouldn’t be impeached over that decision and if it costs him the chance at a reelection, so what?
The president is there to serve the country, not the country to serve the president.
That kinda goes against polysci though
Do we know how many citizens there are in Afghanistan? How would a plan go that confiscates ALL weapons from Afghanistan and executes anyone stashing weapons? Then you just have to police the borders carefully so no weapons can get across.
(Is it that we don’t know how many Afghanis there are or what they are doing, and the borders are geographically unpoliceable or something?)
Phase two might involve forcing the completely defenseless populace (some bribing might smooth this) into sending their children to be educated in schools tightly controlled by Western forces to raise the next generation of Afghanis, who are loyal to Western values and wish to reform traditional society.
(I’d really rather leave, but totalitarianism is the only thing I can think of even remotely creating a path towards a reconfigured progressive populace actually capable of rejecting the Taliban and forces like them).
I think this is very similar to what the Soviets tried. It was by necessity accompanied by ginormous levels of violence against Afghans, and ultimately it didn’t work.
Did they fail to capture all the weapons, then? I don’t see how it’s inconceivable, using modern technology, to locate every single human being in Afghanistan, confiscate all weapons from them, hunt out hidden stashes, and sentence anyone hiding or smuggling weapons to excruciating torture for as long as they can withstand it without dying. That should be a good enough disincentive, and at that point the majority of the populace is unarmed, so they can’t exactly protest your brutality.
You might be saying “bloody heck, that’s evil!” at this point, and indeed it’s well above my ethical limit too, but I do think that it would be effective. I assume the Soviets failed at step 1, which involves scouring the country for people and weapons and securing the border so no new weapons can come in, but that’s understandable since the Soviet Union was a tottering power by the time of the war, so they may not have had the resources spare for this. So my answer is that the Soviets probably failed because they didn’t use enough overbearing power to absolutely scour and control the entirety of the country. Even though they were brutal, they were inconsistent and haphazard in their brutality.
The modern US military could probably accomplish this plan if all ethical boundaries were broken and they were given the money to totally and utterly subjugate every square meter of the country, rooting out absolutely everyone and making them submit on pain of torture and death (yikes!). Of course, this would probably require that the USA itself become a dicatatorship in order to deal with domestic protests against the brutality and in order to ban the media from covering the situation, so it’s probably disqualified on those grounds. So, if anything, the West can’t bring peace to Afghanistan because to do so would require so much brutality that we wouldn’t recognize ourselves in the mirror any more. I’m pretty sure it would work though.
I think you are radically underestimating the difficulty of tracking down everybody and all weapons.
The Soviets also had to contend with the fact that the US was running guns into the country. (And we have to contend with a bunch of Chinese 107mm rockets filtering in from Pakistan).
Consider the effort required by search parties to find people who want to be found, often in relatively small areas of wilderness, and then consider the fact that Afghanistan covers 652,000 square kilometers, much of which (according to Wikipedia) is mountain range. I mean, I don’t have the experience to say for certain, but I second CatCube in suggesting that your plan wouldn’t be nearly as straightforward as you suggest.
Completely closing a border is very difficult in practical terms, if not downright impossible. Think of the most locked-down borders guarded by the most authoritarian regimes: The Inner German Border and the DPRK-ROK border both come to mind.
Then consider all the myriad violations of those borders in both directions.
Then consider that these borders were (or are) only a fraction of the total border of the countries in question.
Even given adequate ruthlessness (I don’t think anyone can accuse the Soviets or the North Koreans of being shrinking violets there) and lots of resources, hermetically sealing off a country like Afghanistan is simply not feasible.
I was being slightly cheeky/metaphorical. The Soviets didn’t do exactly this, but their strategy was similar in spirit: trying to “modernize the country by overwhelming military force”.
Core problems with your plan are:
1) The general attitude of weapon owners in conservative societies like Afghanistan is “I’ll give you my gun when you pry it from my cold, dead hands.” They will flee to the mountains and join the Taliban or other insurgents if you try to force them to surrender their weapons.
2) To deal with that, you would need to radically increase the American military commitment to Afghanistan. According to English Wikipedia (in this case a very unreliable source, but I am too lazy to find something better, and it gives a rough picture), the Soviet Union sent 620,000 heavily armed troops to Afghanistan, and they still lost.
Afghanistan is close in size to Texas (somewhat smaller, but close), which makes it very difficult to police unless you have hundreds of thousands of disciplined armed personnel on the ground, and it actually has a somewhat larger population than Texas. Afghan terrain is also very likely more difficult to monitor than Texan terrain. And the Afghan borders are nearly 7,000 km long, so to station one border guard every hundred meters you would need 70,000 border guards.
Any sane strategy has to aim to reduce number of insurgents, not to increase them.
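For what it’s worth, the manpower arithmetic above can be sketched out. The border length and the one-guard-per-100-m spacing come from the comment itself; the shift and overhead factors below are purely illustrative assumptions:

```python
# Back-of-the-envelope check of the border-guard figure quoted above.
# Border length and guard spacing are the comment's figures;
# the shift count and overhead factor are illustrative assumptions.

border_km = 7_000                      # approximate length of Afghanistan's borders
spacing_m = 100                        # one static post every hundred meters

posts = border_km * 1_000 // spacing_m
print(posts)                           # 70000 static posts

# Each post must be manned around the clock: assume 3 shifts,
# plus one-third extra for leave, training, and logistics.
personnel = posts * 3 * 4 // 3
print(personnel)                       # 280000 border personnel alone
```

Even the bare static-post figure, before shifts and support troops, already exceeds the entire current US deployment by more than a factor of four.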
Then take the weapons from their cold dead hands. Declare that everyone has to report for the census and weapon handover in their local urban center or village and stay there for a week, and that during this period, we will begin a bombing campaign of the mountainous areas the likes of which the world has never seen. We will be using the absolute pinnacle of modern technology to locate everyone outside the designated areas and make them pay for their illegal actions.
If during the course of occupation attacks crop up again, then we will repeat this process.
At this point in the plan, this probably violates some international military ethics treaty, but my claim is that America has the resources to do this, even though it shouldn’t.
We could do better with less using modern military tech and the above plan, but we could always draft people, right? You might need a million troops, but the fact that you’re regularly funneling people into civilian areas and declaring those outside to be military combatants means you’d need a lot fewer, as it takes far fewer troops to kill a technologically inferior enemy than to continuously monitor him.
That sounds doable, and it would gradually pay off. You start with 70,000 border stations, and then over the years you gradually link them up into a wall where all areas that are not the official crossings have a dead zone guarded by auto-turrets that sound the alarm and annihilate anything they spot bigger than a fox.
This is pretty close to the thinking behind strategic hamlets. They didn’t work very well.
Won’t work. Rolling barrages were commonly used in World War I, and they were pretty much an attempt to do exactly what you said, just in a much smaller area with the intent that infantry would immediately follow behind to occupy the area. It went like, well, World War I.
It doesn’t take much digging in to make intense barrages surprisingly survivable. For example, the closest person to ground zero to survive the atomic bombing of Hiroshima was 560′ away. Bombing isn’t magic, and some effort on the part of defenders can leave a capable force behind.
Declare victory and go home. Welcome our new peace-partners the Taliban who will surely continue our work building a better Afghanistan, one stoning at a time.
If our efforts to remake Afghanistan into a decent place were going to bear fruit, it would have happened by now. It’s an ungovernable sh-thole full of heavily-armed religious fanatics with nothing in it we want. There is zero reason for us to continue occupying the place.
It’s an ungovernable sh-thole full of heavily-armed religious fanatics with nothing in it we want.
I wouldn’t say “ungovernable” so much as “ungovernable except in a way that Western democracies can’t or wouldn’t countenance”. Genghis Khan could probably govern it adequately, but the “mountains of skulls” method has fallen out of favour.
Note that the Soviet Union didn’t have a particularly pleasant time there, despite not being overburdened with squeamishness about human rights violations.
A little more than ten thousand soldiers at cost of about $45 billion? Out of a total budget of $600 billion and 1.6 million soldiers, not including international soldiers? The country we control something like eighty percent of? It wouldn’t be first on my priority list.
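The proportions quoted above can be checked quickly (all input figures are the comment’s own, not independently sourced):

```python
# Quick arithmetic check of the proportions quoted above.
# All input figures come from the comment itself, not an independent source.

cost_billion = 45            # annual cost of the Afghanistan commitment
budget_billion = 600         # total US defense budget
troops_deployed = 10_000     # "a little more than ten thousand soldiers"
troops_total = 1_600_000     # total US military personnel

print(f"Budget share:    {cost_billion / budget_billion:.1%}")    # 7.5%
print(f"Personnel share: {troops_deployed / troops_total:.3%}")   # 0.625%
```

So the dollar cost is a noticeably larger slice of the pie than the headcount, which is the usual shape of an air-power-and-contractors deployment.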
If we want to really commit to stabilizing the country, I’d recommend the construction of fortified mining sites to increase government revenue in a way that it will be hard for the Taliban to take over. (They lack the technical expertise and need to export it into a mostly legal market. Most metal is also relatively heavy and difficult to smuggle.) Likewise, industrialize and create factories to process the products. Again, hard to run and difficult to smuggle since they rely on the international market.
Incendiary drone strikes on poppy fields in Taliban territory. Fire poison into their fields that will kill the plants. Matter of fact, fire poison that will taint the product, if we have it. Legalize and regulate in US controlled territory and stimulate production so as to drive down cost.
Introduce conscription and require soldiers, on discharge, to report to their home’s government militia organization. Part of their term will be education on the history of Afghanistan and similarly nationalist stuff. Discharge them with an identification card which will have sufficient benefits soldiers will value keeping it (maybe a small pension?) Require them to drill regularly in the militia (part time) and require their commanders to report if anyone disappears from formation. Possibly forbid firearm ownership unless you are a veteran of conscription.
Increase wages for civil officials and penalties for corruption. Fire any official found guilty of corruption and bar them from government permanently. Set up an independent investigative commission and allow anonymous tips. Require rigorous documentation for any government official.
I actually got into Sarah Lawrence College with a paper arguing something like this back in 2011. I don’t think your corruption idea would work easily because most everybody is corrupt I expect. I understand it to be one of those governments that functions in a “corruption as government (government as corruption?)” kind of way.
I agree with that, re: corruption. Ideally you can eliminate it. But the world is not ideal. There is a longer term process to eliminate corruption. But in the immediate term, increasing pay increases your power over civil authorities. Likewise you should rotate them regularly through positions in different provinces. Etc. This is important because Afghanistan doesn’t have to be not-corrupt. Rule by its bureaucrats just has to be preferable to rule by warlords and the Taliban, even if the warlord/Taliban commander is a local.
And when a corrupt local official pops up, you want to establish the central government as the alternative, not rebellion. This actually follows both side’s incentives: the people eliminate the corrupt official and the central government gets to assert its authority, thereby increasing its power.
That doesn’t make it harder for the Taliban to take over, it makes it less profitable for the Taliban to take over. But if it’s a choice between the Taliban making a million dollars selling the computers and office supplies on the black market, and the Not-Taliban making a billion dollars running a working mine, the Taliban are still going to want to take that million.
And if they’ve got the most men with guns, they will. In which pursuit they have the advantage of being able to credibly offer eternity in paradise with the 72 virgins for anyone who dies helping them loot the mines whereas the folks running the mine have to pay whatever is the going rate for a Blackwater mercenary to die for them.
How are you going to do that when the people in charge of figuring out who is guilty of corruption, are themselves corrupt and taking kickbacks from the corrupt people they are supposed to be investigating? A form of corruption, you may note, whose profitability scales linearly with the wages you pay the civil officials.
There is no limit to the number of social, economic, and political problems you can solve if you are allowed to postulate an incorruptible body of enforcers and adjudicators to slot into the right place in your scheme. Except, incorruptible adjudicators and enforcers are rather hard to come by in the real world, and simply invoking the magic words “independent commission” rarely actually summons them.
But the Taliban has the edge here as well, in that they’ve actually got an incorruptible enforcer and adjudicator, and an omniscient one at that. Well, OK, there’s some question as to whether this “Allah” guy actually exists and all, but the Taliban’s followers believe he does, and that’s almost as good.
If your plan requires a hundred thousand or so incorruptible civil servants and informers who speak fluent Pashto and are trusted across tribal boundaries in Afghan culture, I will be looking far more closely at exactly how and where you will find this corps rather than at the bit where everything else falls neatly into place once you have them.
Fortification, by definition, makes it harder to take over. Likewise, the communities making those billions will presumably have some stake in defending the mine that a Blackwater mercenary won’t. At least in the West, miners are famous for being ornery and defending their mines. I don’t see why Afghan miners would be different. Plus the whole not-becoming-slaves thing. They’ll also have better guns, because they’ll have the money to buy them (and the US will probably help).
We can also make them deep earth mines. If the Taliban take over, we blow the lifts below the surface. The Taliban could, I suppose, use slaves and rope ladders to haul heavy metals out. But the mine would quickly degrade and make them very little money indeed. You mostly see warlords benefitting from natural resources in places like Liberia where no heavy machinery is required because warlords mostly cannot acquire or run heavy machinery. Fortunately, most of Afghanistan’s mineral wealth does require heavy machinery.
On top of that, industrialized products will be useless. You’re imagining them making computers. Why would they do that? They’d probably be making computer components at best. And they’ll need to get highly educated, skilled workers to make that work out. And even if they can enslave some scientists, Microsoft will presumably not buy their products. Even ignoring basic human decency, I am very confident of the government’s ability to sanction Microsoft for cooperating with the Taliban. Even if they can start making off-brand computers, complex supply chains are much easier to maintain with international support. Plus the Taliban needs to locate the entire chain in territory we can bomb freely. We only need a couple of parts where the Taliban can credibly take control.
The profitability of corruption actually decreases with the wages you pay. The more valuable the official salary is, as measured against opportunities for corruption, the more the official’s incentive structure shifts from ‘do what gives me opportunities for corruption’ to ‘do what the government tells me to’.
All decisions are made with respect to their alternatives. The alternative to being corrupt is being non-corrupt. The more financially comfortable that decision is, the more people will take it.
The commission does not have to be non-corrupt, though presumably you can find twelve people of fairly stout morality in any country. It just has to have an incentive structure that encourages it to root out corruption. You can have an entirely selfish one. Perhaps the commission members are paid a bonus for rooting out corrupt officials. This would raise the price of bribes. Do it correctly, and you can raise it above what most officials can muster.
If the corrupt officials get more money by rooting out other corrupt officials than by accepting bribes, they will root out corrupt officials. At worst, they might be overzealous and root out some honest ones. But an appeals process can be put in place. Yes, the officials could still get around it, but as it becomes more complex and expensive it becomes preferable to just do their job as the central government wants them to.
Fortunately, as described, they do not have to be non-corrupt. Or at least, they don’t have to be non-selfish. They can be serving solely for personal gain and still work out. My plan does not require a hundred thousand trusted incorruptible civil servants. It requires altering incentive structures so everyone is acting in their own best interest and that leads to the result we want. The central government does not like corrupt officials because they make the central government unpopular and weaker. The people do not like corrupt officials because they are stealing from them. The two have allied historically many, many times.
It does require Pashto speaking people to be involved in government. I don’t think that’s too hard to get in Afghanistan.
And if you’re referring to God in the second paragraph, presumably the Islamic Republic of Afghanistan believes it has some claim to divine providence as well. Seeing as the Taliban are effectively narco-terrorists these days and the majority of Afghan clerics support the government, I’m not sure the Taliban really have the religious high ground.
No, he imagines them pillaging the mining company’s offices, and selling its computers.
There’s also the model where the Taliban hang out in the mountains surrounding the mine and say, “Nice computers you’ve got there; shame if something were to happen to them…”
But mostly, I’m seeing a whole lot of missing the point, and a profound lack of appreciation for how both the Taliban and the corrupt secular Afghans will respond to changing circumstances other than just giving up and going home because the exact model of profiteering they’ve been using so far is somehow blocked.
As a first step, thoroughly investigate what motivates Afghans (Pashtuns) to support or join the Taliban. Then try to see if we can do anything to eliminate that motive, or satisfy it in a different way.
I have no idea, so I’m not a particularly competent adviser, though I don’t know how much previous administrations have looked into it.
IMO the obvious answer is the Northern Irish solution, where the Taliban becomes part of the government in a power-sharing agreement, like Sinn Fein. The problem is that it is so obvious that I expect US negotiators are already working on it behind the scenes (talks with the Taliban through Qatar are regularly reported in public), and the Taliban is refusing to do that, because they think that the US will eventually give up and hand the country to them, no strings attached. Which would mean the right approach is basically to continue the current strategy with minor tweaks, until the enemy caves and compromises.
Anatoly Moskvin is a Russian academic specializing in linguistics (he speaks 13 languages) and Celtic history and folklore. As of 2011, he had never married or dated, was a teetotaler, and shared his parents’ home, where he had a collection of 60,000 books, with special interests in funerary rites and the occult. Describing himself as a “necropolist”, Moskvin was also considered an expert on the cemeteries around his native Nizhny Novgorod.
… in that year, he was arrested for grave-robbing the bodies of 26 girls between the ages of 3 and 15 and mummifying them. In the course of the criminal investigation, “Moskvin stated that he felt great sympathy for the dead children and felt that they could be brought back to life by either science or black magic. … he was aware that he was committing a crime, but felt the dead children were “calling out” to him, begging to be rescued. He believed that rescuing the children was more important than obeying the law.”
This led to him being locked in a psychiatric clinic instead of standing trial for criminal charges.
Have H.P. Lovecraft stories started leaking into the real world?
“Have H.P. Lovecraft stories started leaking into the real world?”
Well I mean if you just take the Ancient Ones as a metaphor for a cold, uncaring and meaningless universe…
You missed out the best bit:
No such thing as bad publicity, eh?
Saving undead children with black magic is just so romantic, you know?
Little more impressive V-day gift than flowers and a box of chocolates I wager.
Not much crazier than our society that finds it important to give the dead a proper (and expensive) funeral and resting place, and makes grave desecration a crime — even though it doesn’t matter to the dead anyway. I found it kind of touching tbh.
Why not necropoliceman?
This is actually a tangent to something I said in last OT’s “ludonarrative dissonance” thread, but since it would be controversial I saved it for this hidden OT: if you’re a Neolithic or Bronze Age community, is slavery the rational thing to do with a community you defeat in war? Let’s look at the alternatives:
1a) Eat them.
1b) Eat the men, carry the women back to your village/city as mates.
2a) Just kill them all.
2b) Just kill all the men, carry the women back to your village/city as mates.
3) Enslave both sexes, hope they don’t stab you.
4a) Leave their settlement standing, say they have to pay annual tribute, hope they don’t go to war again instead of honoring their agreement.
4b) Leave their settlement standing after killing all males of the warrior class, replacing them with warriors/landlords from your own tribe who will live there and collect annual taxes.
5) Something even more PC I didn’t think of.
4a seems high-risk and eschews increased reproductive success for your tribe. 3 is also high-risk (slave owners from Archaic Greece to the 19th century had enough fear of being killed by disgruntled slaves that they had laws in place for when a slave-owner was killed), though this could be mitigated by keeping the males prisoner at a mine, which was absolutely horrific work free people wouldn’t want to do anyway.
“Kill the warrior class and install your own warrior class in place of it; don’t bother with the peasants, they probably won’t mind being ruled by a different gang of well-armed bullies” is historically a popular option, albeit only if the receiving end has a stratified culture. “Kill the king and anyone that helped him resist you; install a satrap supported by local allies” is a variation.
Ah, good point. That would be a lower-risk variation of 4 where they pay the annual tribute/taxes (to local residents of Your Tribe) because you replaced the Not Our Tribe warrior class/landlords with your own.
The Ottomans had an improved version. “Install a satrap supported by you and your local allies, move the king to the other end of your empire and install him as a satrap supported by you and your local allies.”
This has two advantages. The king is more willing to surrender because he will still be a high status person, just somewhere else and with much less independence. And you have a source of satraps who have qualifications for the job, having been rulers already, and no local support in the place you are putting them, so are dependent on you.
Oh, that’s clever.
So it’s essentially all a big king swapping game? Did any other civilizations do this or are the Ottomans unique?
So… A Game of Thrones (the game is Musical Chairs).
Stalin did the opposite and moved citizens around.
Another interesting feature of the Ottoman Empire was the succession system–instead of primogeniture, fratricide. When the Sultan died, those of his sons and brothers who wanted the job fought it out. It was an expensive succession method, since it meant a civil war between every pair of reigns. But it selected for the candidate whose combination of military and diplomatic abilities made him best at winning a civil war, not a bad qualification for running an expansionary empire.
I’m not an expert on Byzantine history, but my impression is that the point at which they abandoned that system was about the point at which the empire stopped expanding.
AFAIK the fratricide system was replaced by succession of brothers before sons (as is still in place in several Muslim countries, notably Saudi Arabia, today), combined with the luxurious, but isolated, imprisonment of any other living adult male members of the House of Osman.
This was about 70 years before the Empire stopped expanding.
But aren’t civil wars a main cause for the fall of expansionary empires?
Maybe finding a leader that is X% more likely to survive a civil war is worth inducing Y civil wars for various values of X and Y, but I’d suspect it is more likely that the situation persisted as long as it did because everyone involved was inclined towards it, genetically or culturally.
Replace “Byzantine” with “Ottoman” in my previous comment.
@ Randy M – generally, yes, and probably this would have happened to the Ottomans if they’d kept up the succession-by-fratricide model, but as long as they were the biggest guy in town it was workable and did make it very likely that the Sultan would be good at Sultanic core-competencies of convincing people to stab other people for him and managing the technical details of said stabbing. What really killed the Empire was a combination of succession-by-seniority, large families, and imprisonment until succession that basically guaranteed that new Sultans would be dysfunctional shut-ins who didn’t know anything about actually running an Empire, as I understand it.
Another thing to consider about the fratricide model is that it makes some kinds of succession crisis more likely. Say the new Sultan executes all his relatives to remove any potential rivals, and then turns out to be infertile, or just dies before he can get round to fathering any heirs. Now there’s no-one to take over, because anybody else with a claim to the throne has been killed. Granted I don’t think this actually happened to the Ottomans while they were practising fratricidal succession, but it’s still a possibility worth bearing in mind.
Wouldn’t the current prince have had a harem for several years by this point, and likely have a few children of his own?
There’s a bit like that in the Vorkosigan novels, where the main character captures a spaceship, makes its former 2nd-in-command [Thorne] captain, and gives the former captain [Auson] command of a different captured spaceship.
The ones I’ve seen fairly commonly historically:
Usually ‘slaves’ rather than mates. This is usually an attempt at assimilation. The idea is that women and children don’t carry ethnic identity the way men do. It appears to work reasonably well, plus or minus a few famous stabbings. (Interestingly, this was one of the differences between the classical Greeks and Romans. Romans perceived women as possessing an ethnic identity more strongly than the Greeks did.)
If you have one village where they’re all going to talk to each other, this is a bad idea. If you can scatter them across an empire, it’s less of a worry. Likewise, if you can sell them to a bunch of neighboring tribes, no issue.
This is actually the most common. After all, it means you don’t have to expend any resources to rule the place. You also don’t have to do as much violence: just enough they can’t resist you and know it. You can also utilize their political resources, and even their army, in a pinch.
They do what you say or you show up with your superior army. In that case, you get not only your annual tribute but loot too! This was an intergenerationally stable pattern in Thai-Laos and Songhai relations. Some places even skipped the tribute and just raided the weaker neighboring tribes regularly, usually annually.
The more common pattern is to take from the elites, intermarry with them under a political order you dominate, and let their peasants stay peasants while your peasants become people above their peasants. This strategy is actually born out of weakness. It’s usually used by people like the Anglo-Saxons: homeless or migratory bands who often perceive the people they’re assimilating with as wealthier and more civilized.
Didn’t the Aztec Triple Alliance combine this with 1b?* They’d declare “Flower Wars” on tributary cities so they could capture warriors to sacrifice atop the main temple and eat.
*Do we know what percent of sacrificial victims were women/girls?
I don’t think we have any really good information on sacrificial demographics, but from one find:
Holy cow, that artist reconstruction of the skull rack and two towers. Was that the most metal thing ever built in real life? Just add a buff copper-skinned dude in a loincloth wielding a guitar and it’s the most OTT album cover ever.
No, I’d say it’s just a form of 4a. One very common feature of submission is humiliating demands as acts of submission. Likewise, enslaving people from the tributary states is very common, as in Thailand and Songhai. I don’t think the Aztecs killed enough of their tributaries to seriously destroy their capacity to fight. At best it weakened them.
More importantly, it conveyed that the Aztecs could and would kill them all if it was necessary.
Of course, 4a doesn’t have to be quite that threatening. Indeed, being that threatening has some definite downsides. See: Cortez’s native allies. The version used by the United States is arguably superior in several respects.
The US grand strategy is something like: divide the world into regions, encourage regional international organizations, then encourage multiple powers in the area. Prevent any one power from becoming dominant in its region. If a power threatens the US, leverage its local strategic rivals to oppose it. But don’t let them actually destroy the rising power: that would risk the victor becoming dominant. At best, you replace the government of the country with a new, wealthier one more reliant on your economic/alliance system. This actually strengthens the country and prevents the recent victors from leveraging their victory to become dominant in turn.
This has a few major advantages. Firstly, the US is not officially responsible for any of this. If we find ourselves weak, we can retreat a little without breaking any explicit promise or losing prestige. Secondly, we offload major costs onto other regional powers. Thirdly, it makes us the arbiter. People appeal to us to resolve disputes rather than negotiate against us. Fourthly, it makes rebellion unlikely to affect us. It’s difficult to figure out a good casus belli, and while anti-American rhetoric is popular, it usually manifests itself as attacks on local powers. Iran might shout ‘Death to America!’ but it attacks Israel, Saudi Arabia, even Turkey more than us.
Imagine an alien Cortez shows up. He has three ships and very advanced technology. Let’s say, by some amazing act of alien diplomacy, he manages to unite the revisionist powers: Iran, North Korea, Venezuela, Russia, Pakistan, and China. Let’s say he also convinces (unlikely though it is) Turkey, Saudi Arabia, Jordan, Palestine, Argentina, Malaysia, and Germany to join him.
He still needs to deal with other regional powers and a slew of smaller powers. Plus domestic opposition in several of those countries and whoever takes violations of international law seriously. The US gets the benefit of credibly casting itself as the defender, both of itself and weaker powers. Plus, most powers not in his alliance have a geopolitical interest in propping up American hegemony to prevent domination by local powers. And America can send them very good weapons. He also needs to deal with the fact all his allies will be far more interested in attacking their local theaters than getting together for some grand invasion of America. Iran wants influence in the Middle East, not North America. And does Venezuela really want to invade Miami? They’ll also be conflicting with each other. His alliance includes Turkey, Saudi Arabia, and Iran! It’s hard to imagine the US couldn’t convince one of these three rivals to defect.
It’s much the same with other Earth powers. Look at Russia. They’ve been economically devastated and internationally isolated for invading Ukraine. Plus you see the powers on its borders moving closer to the US, not really out of love for the US but because they don’t want to be next on the menu.
…I’d read that SF book. But who gets to be La Malinche?
Citation needed? Many of the great empires appear to have done this, but listening to Dan Carlin’s podcast on the Gallic Wars, it sounds more like a giant genocide. Ditto for the Mongolians, where Carlin spends a lot of time discussing the habit of murdering everybody, and saying that’s just par for the course, and anybody would have done the same. To some extent it’s also true for 19c. colonial powers. I think leaving things for taxation only works if you have the overwhelming military superiority to eliminate any threat, and this is usually achieved by a) a lot of killing and/or enslavement, and b) installing your own garrison and/or ruling elite.
Dan Carlin is an entertainer with a background in talk radio and a B.A. in history. Citing him is not a great appeal to authority.
If the Gallic Wars were a genocide, why were there still Gauls in the empire? Why did the Gauls fight for Caesar? Why did he give several Gallic nobles senatorial rank? And I have to say that the Mongolian conquests were and are considered especially bloody by the majority of historians, both contemporary and modern.
Genocide requires significant effort. You need to spend time killing every single person. It usually takes several years. Most societies are not capable of that level of effort. Even those that are tend to prefer to put the resources to other uses. It also discourages surrender.
Ruling an area also requires effort. You need to remove any local rulers, suppress their supporters, and install an administrator. If you want to do it on the cheap, like 19th century empires generally did, you’re going to be heavily relying on local allies. Empire generally empowers locals and minorities more than a nation-state, since the Empire needs people who cooperate in its rule. If you want to completely staff it with your own people, then you’re basically going to be taking on a very expensive project. The people are unlikely to submit easily to a rule they’re not, to some extent, involved in. You have to have all your usual bureaucracy plus extra repressive measures.
Tributary relationships require the least effort by far. As to examples, both the empires you cited had tributaries. It wasn’t until the Diocletian reforms that Rome really took direct control of the provinces. The Mongols were a confederation of tribes who had been beaten in that way and had additional tributary relationships with (for example) Korea. Their ideal situation was that a people would agree to become tributaries, in which case they would be left alone. It spared them the effort.
Cannibalism of a conquered enemy is dangerous, as humans carry a lot of really serious diseases for other humans. Kuru, for example. You would not want to conduct cannibalism in a Neolithic or Bronze Age society.
Fortunately, with modern biotechnology, we can avoid these concerns, making 1a) and 1b) increasingly palatable options.
When I read Will and Ariel Durant’s Story of Civilization they credited the invention of agriculture and therefore slavery with the end of cannibalism.
The prion disease kuru is only transmitted by ingestion of human brain tissue. What diseases are spread by consumption of other human meat?
Just about anything they happen to be carrying, as long as it isn’t (or its spores aren’t) inactivated by the cooking you feel like doing. Eating meat from other animals is pretty safe, because diseases tend to be very species-specific, with a few exceptions (mostly worms and other parasites — and humans are a dead-end host for a lot of those). But if you decide to eat Bob, anything living in Bob can live in you. Also, there’s a risk of kuru even if you don’t eat the brain, because nervous tissue can contaminate other parts of the body during preparation.
The bottom line is that cannibalism involves contact with human body fluids, with all that implies.
… which explains the large data set of cultures that kill lots of people and then don’t eat them.
Interesting, then, that we don’t have a big obvious example of cannibal diseases from each culture that was an exception.
I wonder if that partially explains the taboo on eating other primates. Chimps don’t have much meat on their bones, but I imagine a Gorilla could feed a tribe pretty well.
I imagine those cultures are playing the odds, mostly. Cannibalism in most of the cannibal cultures I know about involves a lot of ritual: you might eat the youth that was selected to represent your god as part of a renewal festival, or you might eat a hated enemy to gain his strength, or you might eat Grandma when she dies to ritually bring her remains back into the tribe. Those are all pretty rare events, and for two of them you know the person you’re eating is young and healthy, hence unlikely to harbor anything really nasty. Again, think of it as equivalent to fluid contact.
I’ve also heard of a couple different variations on “burn a person/parts of a person to ash, mix the ash into something and eat or drink it”, which would count as cannibalism from an anthropological perspective but would kill any pathogens dead. (It also wouldn’t be nutritionally useful, though.)
On this topic it’s worth reading The Man-Eating Myth. The author argues that just about all accounts of human cannibalism, other than emergency cannibalism in shipwrecks and similar situations, are bogus. He may overstate his case a little, and I think he was writing before the Kuru evidence came out, but he makes a pretty convincing argument for throwing out most of the reported examples.
I feel like the answer is some combination of all of these.
Like, at the peace table you give generous terms and make them thank you for it, but this is after the final battle where you sacked their city and killed their generals and then gave your troops their three days.
More generally you want to be maximally harsh privately (so you don’t fight these folks again), while getting a rep for being a reasonable and generous victor (such that other folks may surrender). I dunno what exactly that looks like on the ground.
Can I post a half-assed political survey I made in Google Docs here?
I sure hope so. Sounds right up SSC alley.
Well, here we go!
I answered ‘other’ to several questions.
Firearms: I am more or less happy with the stringent restrictions in my country, but I wouldn’t push for similar restrictions in the US.
Feminism: “I support women having the same legal rights as men, but [and?] we need to push harder culturally to solve non-legal inequalities that ~~benefit men~~ harm women” and also inequalities that harm men.
Militant Islam: small problem for the West, big problem for the Middle East.
Drug war: “We should legalize marijuana and [most] softer drugs” and take varying attitudes towards harder drugs.
I think that you made a mistake on the question about what nationalism should be based on, by adding ‘stability’ to the second answer. I believe that most people who focus on culture or race believe that this brings stability. My guess is that you personally favor the second answer, or at least consider it the least objectionable, and thus were biased a bit towards it in your phrasing.
For the firearms question, “We need more regulation, while preserving rights” is unclear. Do you mean preserving access to firearms for any citizen by “preserving rights?”
The abortion question ignores the possibility that people may consider the reasons for the abortion a major factor.
For the “non-traditional sexualities” question, you heavily prime people by offering the examples of homosexuality and bisexuality, but not pedophilia or sex with robots. Furthermore, your answers assume similar levels of (dis)approval for each type of sexuality, while many people seem very permissive of some kinds and very disapproving of other kinds.
For the Jew question, I would have liked an option to answer that Jews are over-represented in ‘high’ positions, without having to say/imply that this is negative or positive.
For the Israel question, the last sentence of this answer is very bad: “Israel as it exists right now is an enemy. The Jews need a homeland, but they should extend that to others. All Jews need to return to Israel.” You seem to believe that strongly disagreeing with Israel’s policy implies wanting Jews cleansed from other nations. This turns it into a very racist answer.
Also, you should probably have asked for nationality or indicated that only Americans should answer, given that many of your questions are relative to the status quo.
Just couldn’t pass this one by – are there actually people out there who might consider sex with a robot (a non-conscious one, ofc) as less moral than masturbation? How can that make sense?
Masturbation is more or less Natural, sex with a robot is completely Unnatural, and you really haven’t noticed that a significant fraction of the human race considers Unnatural == Immoral as a terminal value?
You’re right, it hadn’t occurred to me that they would probably make a distinction between “natural” masturbation and masturbation using any tools (let’s not go into specifics here), and sex with a robot obviously falls into the latter category. That still shouldn’t be terribly immoral even by their standards though, certainly not more immoral than homosexuality and on the same level with pedophilia.
Unless someone is very literal with their definition of natural and defines it as “anything that occurs in nature”, which does make sex with a robot more “immoral” than homosexuality, incest and probably even pedophilia and ~~devil~~ anthropologist only knows what else… but I’m yet to see anyone actually claiming that.
Yes, there are feminists who are against sex robots, out of fear that it will cause objectification of women.
Although a cynical person might argue that they (unconsciously) fear losing leverage over men.
PS. I intentionally selected two things that leftists are more likely to be against than homosexuality and bisexuality, as the question seemed to trigger political biases IMO.
Somewhere between the “it’s unnatural” objection and the “objectification (or leverage?)” objection — I think that the more personal / intimate an activity is, the freakier it is to imagine a machine doing it. Sex is one of the most personal / intimate activities there is, so the thought of someone having sex with an autonomous machine is, if not morally wrong, still kinda freaky to me.
The uncanny-valley-ness of current humanoid robots also makes it weird that anyone would want them. That’ll probably be solved in 20-30 years though.
I gave the vaguely closest answer in several cases, and answered “other” in some more, but ended up feeling like the results weren’t likely to be useful/meaningful, even beyond the self-selected/small number of responders problem.
A lot of it seemed to be position-on-unidimensional-culture-wars scale – lots of different scales, which is better than the usual. But the issues in some/many of these scales aren’t unidimensional.
It was also heavily US-centric. I live in the US, so that wasn’t a huge problem for me – at least I know what the local discourse is about. But there were questions I’d answer one way for my country of residence, and another for my country of citizenship.
Firearms: I’m at least as concerned about various not-criminal-until-you-shoot-someone patterns of potential gun access (not just ownership), as I am about “criminals”, particularly if the term “criminal” includes those convicted of victimless crimes.
Put another way – if you used to smoke pot, and got busted for it, you can collect as many guns as your neighbours, for all I care. OTOH, if you are currently in the habit of using substances that tend to lead to poor impulse control, legal or otherwise, I might worry.
And while I’m not sure what can be done about it without really bad side effects, I worry about (colloquially speaking) “violent crazies” more than “criminals” – “violent criminals” would be a different story, but you didn’t give me that distinction.
I think your personal caricatures of opinions are going to really damage your ability to get any meaningful spectrum of responses. “We need to gendercide men for a womyn’s utopia.” as a reply option is not helpful. At minimum, you should steelman (steelmyn?) replies you don’t agree with, or the whole thing is a self-aggrandizing farce.
Yeah, this was more than tongue in cheek. I just wanted to use the Google Docs feature and play around making spurious correlations with the data.
Also there’s nothing here to steelman, because I’m not implying that regular feminists want to genocide men, only that some people do. There literally are people who want to genocide the other gender/s (visit /r9k/), even if they are 0.0001% of the population. The questions are roughly ordered in terms of extremity, so once you’ve decided men and women should be slaves, the next step in extremity is genocide. And if someone wants to commit genocide, I doubt they’d be put off by deliberately flippant phrasing.
I’m not going to serious lengths to avoid the lizardman in any case.
Another reason I’m not making this totally serious and scientific and taking a more shits and giggles approach is that the same person can take the test multiple times. The only way to stop this is to have it only be available to people signed into Google+, which would mean hardly anyone would do it.
I appreciate criticism of the questions anyway.
I thought it was made clear by Le Maistre Chat that the next step is cannibalism.
Cannibalism is genocide, the tastiest genocide.
While you get a lot of criticisms (constructive, I’d say), I just want to add that I quite like the rather flippant style. Not sure it helps you get accurate answers, but at least it was fun. I’ve no idea what the last question is about, though…
I did your survey. Not sure if you’re interested in feedback, but you’re getting it anyway: I thought several of the questions were fairly general but had answers that were very specific. I would sometimes agree with the first half of an answer, but then it was followed up by a comment that I did not agree with. This resulted in me answering “other” to rather a lot of questions.
(Perhaps the easiest-expressed example of this was the question about militant Islam, which I think is a large threat to the Islamic world, but only a very small threat to the West. This could perhaps be summed up as “kind of a big deal”, which was one of the options, but it came with an opinion on vetting, which doesn’t follow.)
Apparently Google Docs has an option to make “Other” into a section where the survey taker can put their own answer, but I’m not sure where that is.
I loved the survey. I answered with no “others.”
I don’t know how much you’ll learn with this, but I encourage more surveys.
The one thing I found surprising was 40% wanted to legalize or decriminalize all drugs. I think this is way outside the mainstream. In a good way, IMO, since I am part of that 40%. I realize that SSC IS outside the mainstream in many ways, but I didn’t think drugs was one of them. Now I really want Scott to ask the same question on his survey to find out if this is generally true, or just happenstance of those who responded so far.
Note: I asked this on my twitter and on discord, so it’s not just SSC though it’s likely that the bulk of answers are from here so far.
Israel? You mean the Kingdom of Jerusalem?
No, he meant Palestine 🙂
Right, the other infidels are there too.
We’ve had strength training threads, how about an endurance thread?
Currently been tasked with improving a 1.5 mile time, been reading a lot of Joel Jamieson, seems legit. I have several months, so spending some time on long slow work at 130-150 HR before throwing in some faster stuff as test day gets closer. Took a shot at a sub-7 2k erg a few years ago, got bored at the time but thinking about taking another shot at it when my schedule is clearer. Or girevoy sport, looks like fun.
(my personal politics have remained unchanged, though of course this is hardly the ideal experiment)
It’s something of a rabbit hole, coming from a mostly strength background–a whole new set of theories and evidence out of which to assemble something like a coherent picture of how to improve. Interesting stuff. A lot of suspicious physiology backing up probably correct conclusions.
Ran for my Cross Country team in college, fallen off since then. I don’t think we ever monitored or cared about heart rate, so I’m interested to hear what you’re doing with it. Our training schedule in a nutshell:
—Sun: Long, easy-paced run at 2–2.5x race distance
—Mon & Wed: Easy-paced run at about 1.5x race distance, plus some strength training, usually just bodyweight stuff.
—Tue & Thu: Warm-up jog. Some kind of interval run, like "six reps of 0.1x race distance at race pace, with about equal time jogging recovery in between", or a "30-60-90-120 ladder", where you do X seconds at race pace, then X seconds at recovery pace, stepping all the way up and then back down. Cool-down jog.
—Fri: Easy-paced run at 1x race distance
—Sat: Race. (College men's race distance is 5 miles.)
Of course, by the time the season started we were already expected to be able to run twice our race distance fairly easily, so this might be more of a “How to build distance-running speed once you’ve already laid the endurance foundation” training.
For sure. You were racing every week? ~8k?
There are various theories about how training in this heart-rate range promotes specific metabolic adaptations (stroke volume, capillarization, Maffetone's whole fat-burning vs. sugar-burning thing) which are useful to have before moving on to faster work, and/or which plateau more slowly (and therefore reward longer periods of focus when dividing up the year). (My sources are mostly Maffetone and Jamieson.) I don't know if I believe it, especially the Maffetone stuff, but I'm at least pretending to over the course of this training cycle. We'll see how it shakes out.
I also think it’s a useful way to keep myself running very slowly for a while (more psychologically palatable than trying to keep a particular pace in terms of minutes per mile, and more granular than perceived effort or the talk test–though I’m not sure that the granularity matters), and there may be some useful mechanical adaptations from that (bone density, tendon thickness) with less recovery cost than running fast. But I don’t know. For sure it is extremely slow, at least for now.
The races were always 8k, and we raced most weeks from mid-August (start of the semester) through October.
I’ve definitely known people who burnt out mid-season from training too hard in the preseason, so “keep yourself in this specific intensity range before moving on to faster work” sounds like good advice.
I had a lot of success (as an older male getting serious about fitness in my mid-30s) with a routine based on the research of Veronique Billat linked to VO2max. The protocol was to:
(1) First, run as far as possible in 6 minutes; call that distance d. This establishes a reference speed of 10d per hour.
(2) First week, first running session: run at 10d per hour for 3:00, recovery walk for 3:00, repeat 5 times.
(3) First week, second running session: run at 8d per hour for 10:00, recovery run at 6d per hour for 5:00, repeat 4 times.
(4) Second week, first running session: run at 9.5d per hour for 3:30, recovery walk for 3:30, repeat 5 times.
(5) Second week, second running session: run at 8d per hour for 10:00, recovery run at 6d per hour for 5:00, repeat 4 times.
(6) Third week, first running session: run at 10.5d per hour for 2:15, recovery walk for 2:15, repeat 5 times.
(7) Third week, second running session: run at 8d per hour for 20:00, recovery run at 6d per hour for 5:00, repeat 2 times.
(8) Fourth week: rest and re-test d.
The shorter/faster runs are much easier to pace on a treadmill, where you can set the speed exactly, but I mostly managed them outside with a Garmin GPS running watch until I got d up to 1.8 km.
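Since everything is keyed off the 6-minute test distance d, the week-by-week target speeds for the fast session are easy to tabulate. A minimal sketch (speeds, rep lengths, and rep counts taken from the protocol above; the 1.5 km example value is just an illustration):

```python
# Target speeds for the fast weekly session, keyed off the 6-minute
# test distance d (in km). The reference speed is 10*d km/h.
def fast_sessions(d_km):
    # (week, speed as a multiple of d per hour, rep length in minutes, reps)
    weeks = [(1, 10.0, 3.0, 5), (2, 9.5, 3.5, 5), (3, 10.5, 2.25, 5)]
    return [(wk, mult * d_km, mins, reps) for wk, mult, mins, reps in weeks]

# Example: a 6-minute test covering 1.5 km gives a 15 km/h reference speed.
for wk, speed, mins, reps in fast_sessions(1.5):
    print(f"Week {wk}: {reps} x {mins:.2f} min at {speed:.2f} km/h")
```

The slow sessions are the same every week except for rep count and length, so they are left out of the table.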
I was also circuit training 2 or 3 other days each week and some months ran a 5k race.
I think my results were solid, if not amazing: 5k race time down to 20 min, half marathon down to 1:40.
Fwiw, my understanding is that the earlier theory of VO2max has been somewhat discredited, so I’m not sure whether the workout structure has a solid theoretical foundation.
Interesting — unless I’m doing the math wrong you got your mile time down to 5:2x or so. Assuming that’s right, your 5k potential should have been something closer to 18:00. And your half potential should be <85 minutes, but it's not as surprising that your training program didn't produce optimal results for that distance.
If you're interested in seeing what you're capable of, you could consider a program better tailored to road racing. I'd bet you could get your 5K below 18 minutes pretty quickly.
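(For what it's worth, these estimates are consistent with Riegel's race-time prediction formula, t2 = t1 * (d2/d1)^1.06, which may or may not be what's being used here; the 1.06 exponent is the standard assumption. A quick check:)

```python
def riegel(t1_min, d1_km, d2_km, exponent=1.06):
    """Riegel's formula: predict the time for distance d2 from a known time at d1."""
    return t1_min * (d2_km / d1_km) ** exponent

MILE_KM = 1.609
# A 6-minute test at d = 1.8 km implies a mile in 6 * (1.609 / 1.8) ~= 5.36 min (~5:22).
t_mile = 6 * MILE_KM / 1.8
print(round(riegel(t_mile, MILE_KM, 5.0), 1))   # 5k prediction: ~17.8 min
print(round(riegel(t_mile, MILE_KM, 21.1), 1))  # half-marathon prediction: ~82 min
```

Both numbers line up with the "closer to 18:00" and "<85 minutes" estimates above.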
I know that I was underperforming my expected times for the 5k. I kept going out thinking sub-20 would be easy based on my training speeds, then missing the easy target. For a while I found that very frustrating, but eventually I reminded myself that race times weren't my goal and shifted to playing tennis instead.
I ran the half marathon just for fun, ended up enjoying it immensely and feeling great at the end.