“Anonymous Colin” asked about “fodder for Weird Fiction” in the integer thread. To answer here:
I love ancient aliens. Not the wan recent kind, popularized by Erich von Daniken and later The “History” Channel, who showed up to build the Pyramids and make the Sumerians worship them. I mean the kind whose expanding civilization drove Permian species extinct and changed the global climate with their industry, the kind who make sense and reason of Deep Time, so we are not forced to believe things like “If human governments don’t change their policies, there will be catastrophic global warming and mass extinction, just like some pre-human events human scientists have discovered, but never mind those because there weren’t rational animals around to be guilty for them.”
Hot take: college students complaining about being underemployed and trapped in crappy, precarious work are the economic equivalent of the sort of unhappy singles whom Scott discusses in “Radicalizing the Romanceless”. By this I mean that both groups are unhappy because they did what society told them to do, and now are either not benefitting (at best) or actively being held back (at worst) because of it. When it comes to getting dates, society (or at least a large portion of — especially blue-tribe, pro-feminist — society) says, “Women like caring, sensitive men, so be all emotionally available, do whatever your crush asks you to, and eventually she’ll realise what a swell guy you are and start dating you.” When it comes to getting a good job, society (or at least the state education system) says, “Graduates earn so much more than non-graduates! If you don’t get a degree, you’re really shooting yourself in the foot in terms of future earnings! If you have a degree, you can get all sorts of amazing jobs!” In both cases, of course, this advice turns out to be rubbish, but many people accept it when they’re growing up because it’s what society tells them and they have no reason to doubt it. In both cases, the reaction when people say “Hey, I’m doing what society said I’m supposed to and my life still sucks, what gives?” is, as often as not, “You’re a bad, entitled person who deserves your suffering! Society doesn’t owe you a living/girlfriend!” And in both cases, a lot of people respond by becoming bitter and disillusioned and supporting groups which promise to right the situation — radical left-wing politics in the case of underemployed university grads, and incels or PUA-types in the case of the romantically unsuccessful.
Yes, this. I don’t have a terribly good answer for what to do about it, in the educational case. In the romantic case, the mediocre but better than nothing advice has been to look into what the PUA and adjacent manospheric communities are teaching, but be careful not to drink the kool-aid. We need an equivalent for people who spent four years on an education that doesn’t really advance their economic prospects.
Also, in both cases, could we maybe stop giving the bad advice in the first place?
The advice isn’t that bad, really. Sure, there are some great jobs in the trades and corners of the civil service that don’t require degrees. But they are no more universal solutions than “learn to code” is. For most everyone else they probably are better off with a four year degree, even with debt, than without. It is dumb that assistant manager of the clothing department at Target requires a degree, but that’s where we are. And it’s better to be an assistant manager of the clothing department at Target than a part-time stock person in the clothing department at Target.
There’s a pretty hard ceiling on promotions without a degree at Target, apparently. Getting any salaried managerial position without a degree is theoretically possible but in practice never happens.
Some kind of alternative credential that would work as rational astrology for employers, but wouldn’t cost employees a gazillion dollars, would be a huge win for mankind.
For most everyone else they probably are better off with a four year degree, even with debt, than without.
Agreed. And IMO, this is where the analogy ultimately fails.
While getting any college degree at any price isn’t necessarily a good idea, it is clearly true that statistically, those with degrees do better economically than those without. A degree doesn’t guarantee wealth and opulence, but for 99% of people, having a degree is better than not having a degree.
This is not true for the “nice guys” out there. The issue isn’t just that being a nice guy is no guarantee of romantic success. It’s that being a nice guy is actively harmful and does not make one better off, all things considered, at all. It’s not as if being the nice/sensitive guy clearly works for 99% of guys but there are a few edge cases where it doesn’t. And yet, despite clear and obvious evidence that it doesn’t work for most people, society (for reasons largely political) absolutely refuses to change/update this advice.
“Go to college” is good, but not perfect, advice. “Be a nice guy” is abjectly terrible advice.
While getting any college degree at any price isn’t necessarily a good idea, it is clearly true that statistically, those with degrees do better economically than those without.
But that’s weighted by A: people from an upper-class background who are going to be rich no matter what, and B: people who go to college to pursue a specific professional career. Those aren’t the people the usual advice is targeted at; they’re going to go to college no matter what. For the person on the margin who might or might not go to college, “you should absolutely go to college, you will be more prosperous if you do, no matter what you bother to study” does not put them in either of the groups that unambiguously prosper after attending college.
A degree doesn’t guarantee wealth and opulence, but for 99% of people, having a degree is better than not having a degree.
I don’t believe that number is 99%. I don’t believe it is even 90%, or even just 90% of the people who actually attend college. Particularly if “having a degree” is not qualified in some economically sensible way. And, again, the (bad) advice is for people on the margins, least likely to prosper from it.
I don’t believe that number is 99%. I don’t believe it is even 90%, or even just 90% of the people who actually attend college. Particularly if “having a degree” is not qualified in some economically sensible way. And, again, the (bad) advice is for people on the margins, least likely to prosper from it.
Fair points, particularly about the marginal advice-receiver.
That said, I still think I’m correct that “go to college” is generally good advice that sometimes fails, while “be a nice guy to women” is generally bad advice that might still, nonetheless, occasionally succeed.
We need an equivalent for people who spent four years on an education that doesn’t really advance their economic prospects.
Also, in both cases, could we maybe stop giving the bad advice in the first place?
Part of the reason PUA was created is that people (mostly men, I presume) were willing to seek out and pay for such advice. Are people willing to do the same for career and self-improvement advice? There are a few financial shows around (though again, people are willing to pay for financial advice) but nothing like that.
Would the equivalent be an online community of people figuring out how to get acceptable credentials for hiring/promotion with minimal cost and effort?
I think that’s too indirect. If you want a literal equivalent, it’d be an organization of people with a strong ideology that corporations don’t understand who they really need to hire in order to be profitable and repeatedly choose to promote workers who are popular with management over the real hard workers who keep the company running. The ideology would then focus on strategies to get paid as much as possible for as little work as possible, paired with a conviction that corporations are thieves and it’s all BS from top to bottom. Plus some virtue posturing about social and moral decay, probably going back to the halcyon days when everyone had their own farm, was their own boss, etc.
More healthily, some kind of career advice/coaching based on getting the skills, credentials, and experience, and leveraging them to get the best job possible however they define that (whether high pay and low work, personally fulfilling, etc). Which is analogous to the healthier part of PUA too: if PUA were just telling people to touch up their social skills, dress better, get in good shape, and get successful, then I don’t think most people would find it objectionable. Likewise, if someone advocates stealing from your employer that’s not great, but if someone advocates negotiating harder I doubt most people will object.
Hotter take: This is all part of a long-term plan to restore ‘Murica back to its anti-elitist roots. What better way to build up a nation of individualists who don’t trust no “experts” than to select hard against those who trust what society tells them?
What is the selection effect here? Those who conform to the narrative (got a degree and a good job) are the ones who succeed, and get the financial and social power to further entrench the system.
When a character created in the early 20th century, occasionally even in the Victorian era, maintains a certain continuous level of popularity, the heirs of the deceased creator get royalties. These royalty claims can then be used to harass creators of derivative works even after the character and writings are believed to be in the public domain.
The oldest example I know off the top of my head is Sherlock Holmes.
In Conan Doyle’s home country, the United Kingdom, the copyrights expired in 1980, but the stories were somehow removed from the public domain from 1996-2000 by his heirs. Meanwhile in the United States, due to laws like the Sonny Bono Copyright Term Extension Act, publication before or after 1923 was frozen as the cutoff between whether it was legal or illegal for the public to freely share a work of human creativity.
As all but ten Sherlock Holmes novels and short stories were published before 1923, the Conan Doyle estate continued to demand royalties for all Sherlock Holmes works, from the first appearance in A Study in Scarlet on, under the claim that Holmes and Watson’s personalities were seamless wholes that included copyrighted elements. Fortunately, in 2013 Leslie Klinger (lawyer and editor of The New Annotated Sherlock Holmes) refused to cave to this demand merely to avoid legal fees, and was vindicated by the original court and the Seventh Circuit Court of Appeals (the Conan Doyle estate tried to appeal all the way to the Supreme Court of the United States, which declined to hear their appeal – perhaps under the legal theory of “Are you shitting me?”).
Meet the rent-seekers!
As far as I can gather, the most legitimate holder of the few remaining and geographically limited Arthur Conan Doyle intellectual properties is a private corporation held by the ex-wife of Sheldon Reynolds, an American TV and film producer-director who approached Doyle’s literary estate in 1976 and found that the IP had been transferred from Doyle’s three surviving heirs to a holding company, Baskervilles Investments Ltd, that had gone into receivership with the Bank of Scotland. This legal entity publishes a detailed but self-serving history of the copyrights (it never mentions the copyright trolling Klinger broke). Hilariously, they call another company, Conan Doyle Estate Ltd, “copyright trolls” for claiming to own the late Doyle heir Lady Jean Bromet’s 1/3 of the IP.
It’s a big problem. We reached the point long ago, even before Sonny Bono, where copyright is having the opposite of its originally intended effect. It’s supposed to encourage creativity (by helping creators get paid for their work), not stifle it (by allowing corporations and media conglomerates, often with no relationship to the original creation of the work, to monopolize creative work that was done before my grandparents were born). The ideal period for copyright, in my opinion, would be something like 25 years, non-renewable. The original Star Wars trilogy, The Terminator, and Back to the Future would currently be public domain in a just world.
The ideal period for copyright, in my opinion, would be something like 25 years, non-renewable. The original Star Wars trilogy, The Terminator, and Back to the Future would currently be public domain in a just world.
There’s a case to be made that members of the public shouldn’t be allowed to dilute the author’s vision while (s)he is alive. Unfortunately corporate authorship is so common that this might not be a viable reform! 25 years non-renewable from date of release would be roughly ideal for corporate ownership.
There’s an easy fix for that – corporations are not people. Require them to record the actual author(s) of works for hire, and they get to keep copyright only as long as there’s at least one author still working for them. (Or if we want to be nice to corporations, as long as there’s at least one author still alive.)
If we want to provide slightly more predictability, make the rule “author’s life or 25 years, whichever is longer”. That allows an independent author who dies young to provide for their surviving family, and allows Disney et al. to plan to have whatever-it-is for at least 25 years.
My hesitation with this is that if we start extending the human lifespan, the period could get too long again. If people start routinely living to 120, we’d eventually have a lot of 100-year copyrights and the situation would be little better than it is now. I’d want to have a hard chronological limit, just in case the transhumanists win. Maybe 50 years?
That’s a pretty big “if”, if you ask me. And even if it does happen, we can revisit copyright law as and when it becomes an issue. In the meantime, I think it would be better to base our copyright laws on actual human lifespans, not on lifespans from a hypothetical future which may not even come to pass.
The problem is that copyrights are easy to extend but hard to shorten. So this could be a real problem if we don’t prepare for it before it arises; we might as well do so now, as long as we’re magically reforming copyright law anyway.
That’s a good problem to have. We can cross that bridge if we come to it. Chances are, society will have evolved in unforeseeable ways by that time that make present planning futile anyway.
If someone lives to be 120, and if it matters whether copyright lives with them, then there’s a good chance that you’ve got an author living in poverty while their works are still immensely popular and being “ripped off” by lesser, but more commercial, corporate imitators. That’s going to be massively unpopular.
I’d be in favor of an absolute fixed term for copyright, but if copyright doesn’t last for life or at least for a fixed term of approximately an adult lifespan, that’s the sort of unpopularity you’re going to have to account for.
The whole “author starves while their work remains popular” thing is a hypothetical so very hypothetical that I think it likely to fall into the category of “inheritance taxes bankrupted the family farm” – that is, no actual examples to be found.
Most books earn almost all of their earnings in the first 6 years. Ebooks have made that falloff slightly less brutal, but not… that much less brutal. Books that sell significant volume after twenty years, as a rule, sold goddamn mountains in the first decade, so if the author is still depending on the trickle income, they are catastrophically bad with money and would be in dire straits pretty much no matter what.
Also, not to put too fine a point on it, but if you only wrote one book, you are not a professional author. Expecting to make a living off a profession in which you have not done any work for decades is… overly entitled.
Copyright terms, in practice, need to achieve two things:
1: Generate enough revenue to make writing not entirely a fool’s quest – a single decade would more than suffice for this.
2: Make Hollywood, the gaming industry, and the rest who make secondary IP pay up. This means the term needs to be long enough to make waiting a bad strategy compared with finding an appropriate sum of money. 10 years is not enough for that – too many still culturally relevant works to choose from – but 25 should more than do it. I mean, sure, the local film school will now be filming a lot of 25-year-old books, but that was not a relevant source of income anyway.
I find it very hard to justify more than 25 years, flat. The granting of monopolies is a very severe violation of liberty. Granting effectively perpetual ones is just offensive.
Each of us has a strong moral claim to the fruits of his labours. This means, in the case of authored works, if anyone makes money, it should be the author. This to me suggests copyright should last for the life of the author or if we must have a fixed term, for approximately the expected life of the author, meaning something like 50 years. I agree the current policy of life of the author + seventy years is excessive.
“Mr Larson, we would like to publish your book, and we offer you a royalty of $100. If you refuse, we would like to remind you that if an accident were to happen to you, we would be able to publish the book for free.”
The ideal period for copyright, in my opinion, would be something like 25 years, non-renewable.
This. And I’d also say that copyright should only apply to the works themselves, or literal pieces of text/sound/video extracted from them. Copyrighting a fictional character or a fictional setting should not be possible.
Anybody who wishes to produce fanfiction or fanart should be able to do so without any legal issue. Note that many historical masterpieces would be considered fanfiction or fanart by modern copyright laws.
Copyrighting a fictional character or a fictional setting should not be possible.
The problem with that is that other works can affect the perception of the original, and do so negatively. If I write a series of stories about the fun adventures of a group of colts and fillies, aimed at ten-year-olds, and someone else comes along and uses those same characters for fantasies of horse-fucking, that’s going to affect how people think of my stories. It could definitely affect my ability to earn a living from the stories I wrote, to say nothing of the offensiveness of having characters I created used for things that offend me. As the original author I should be able to put a stop to such things, which is why copyright should include the ability to prevent derivative works.
They’ve said they aren’t planning on it, there haven’t been any moves yet, and there’d be much greater opposition to copyright extensions now than in the nineties – both from the Internet and from the leftist opposition to large corporations.
Three years and ten months to go till Steamboat Willie enters the public domain!
The extended copyrights have already started expiring. If they were going to fight it, they would have done so a couple of years ago, back when it would have been a lot easier. But even then, they recognized that it would be an uphill battle.
Something like that example is probably typical when an author with a spouse and/or children dies with creations worth licensing. Edgar Rice Burroughs has heirs; so do Tolkien and his now-deceased son Christopher, the original literary executor. But what if someone creates valuable intellectual property and doesn’t have typical heirs?
That brings us from Conan Doyle to Conan Barbarian.
Unmarried and childless, Robert E. Howard committed suicide in 1936 over his mother’s terminal illness, whose medical bills he’d been paying. He was survived by his father, Dr. Isaac Mordecai Howard, who died in 1944, willing the rights to a friend in the medical profession, Dr. Pere Kuykendall. Howard’s first published novel, A Gent from Bear Creek, was printed in Britain in 1937. This was followed in the United States by a collection of Howard’s stories, Skull-Face and Others, in 1946. Conan the Barbarian wasn’t of any posthumous value until 1950, when the novel Howard had sold to a failing British publisher was picked up by a small press called Gnome Press. It was a modest success, enough that the rights-holders wanted the Conan short stories compiled into hardcover books too.
Hither came Lyon Sprague de Camp, already an established science fiction writer both on his own (cf. Lest Darkness Fall) and as someone who loved co-writing with a second author (cf. Harold Shea series with Fletcher Pratt, later with Lin Carter and others). He seemed an ideal candidate to edit the short story volumes for the Howard literary estate, especially as they had found Conan story fragments among the deceased’s papers. De Camp fastidiously filed copyrights on his Conan stories, presumably aware that under US law of the time Howard’s copyrights would expire after a renewable term of 28 years: between 1960 and 1964 unless the Kuykendall family could renew them (they couldn’t or didn’t).
It wasn’t until 1967 that the Conan stories started being published in large print runs, in paperback by Lancer Books. I Am Not A Lawyer, but it would seem to me that Lyon Sprague de Camp would be the sole Conan copyright holder by this time, with the original Weird Tales texts in the public domain.
But there was so much more to Howard than Conan, and it seems that Edgar Hoffman Price, a pen pal of H.P. Lovecraft, had physically met Robert E. Howard through their mutual friend and, somehow, acquired a trunk containing everything Howard had failed to get published. So when the representative of the Kuykendall family decided to close up shop in 1965, she asked de Camp to become Howard’s literary executor, but he found it expedient under US copyright law to keep his Conan interests separate from that and recommend for the job Glenn Lord, who had bought the Howard story trunk from Price.
Fast forward to late 1970. Marvel Comics writer Roy Thomas is interested in licensing the character of Conan. His boss, Martin Goodman, says “I’ll pay rights-holders $150 a month for a sword-and-sorcery character.” Thomas approaches Lin Carter about his Conan clone Thongor; finding that Thongor is barely within the budget, he figures the popularity of the Conan paperbacks makes Conan the more valuable IP… but, what the heck, he’ll go behind Goodman’s back and offer Lord & de Camp $200.
They jumped at it. But the legal contract wasn’t exactly for the Conan stories of Howard (now in the public domain anyway), de Camp, Lin Carter et al. It was for the rights to everything by Robert E. Howard. Later, by 1978, de Camp and Carter had folded all Conan rights into the licensing deal with Marvel Comics.
Shortly after this time, the copyrights and trademarks start to become indecipherable. By the time the 1982 Hollywood film was being licensed, the purported rights-holders had transformed into a corporation, “Conan Properties Inc.” Toy corporation Mattel briefly had a contract for action figures related to the film, but dropped the R-rated property in favor of something that had been in development in-house: Masters of the Universe. In 1984, Conan Properties Inc. sued them for copyright and trademark infringement and breach of contract. Mattel’s lawyers successfully argued “Who the Hell are you? You can’t even prove the purported rights were legally transferred to you.”
Sadly, intellectual property case law is obscure enough even to the judges called to rule on IP law that CPI has successfully sued artists for making Conan works as recently as 2018.
I’ve liked the proposal where the copyright holder can renew indefinitely, but it gets exponentially more expensive (not original to me but I can’t remember where I first read of it).
For example, you get your first 25 years free (possibly with the registration fee currently required if you want to actually sue somebody). After 25 years, you can renew, but it costs you, say, $1000. 5 years later, at 30 years, you can renew again, but it costs $2000. At 35, $4000, and so on. If, at any point, the rightsholder doesn’t renew it irrevocably drops into the public domain. The rights can be licensed or assigned just as they do now, but only the current holder can actually renew–they have to either renew or sell it to somebody before the next renewal period expires.
All of the costs and timeframes are subject to debate. Maybe it’s $10,000 at 25 years, or maybe the renewal period is yearly after 25 – I don’t know enough about the economics of publishing, TV, and moviemaking to pick good ones – but this is the general framework.
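To make the arithmetic of that schedule concrete, here is a minimal sketch in Python – not part of the original proposal, just an illustration using the example numbers above (25 free years, $1,000 at the first renewal, doubling every 5 years):

```python
def renewal_fee(age, free_years=25, base_fee=1000, period=5):
    """Fee due to keep a copyright alive at a given age (years since
    publication), under the doubling-renewal schedule described above.
    All numbers are illustrative placeholders, not a concrete proposal."""
    if age < free_years:
        return 0  # still inside the initial free term
    renewals = (age - free_years) // period  # renewals required so far
    return base_fee * 2 ** renewals

# Fee at selected ages: $0 at 20, $1,000 at 25, $2,000 at 30,
# $32,000 at 50, and roughly $33 million at 100, which is steep
# enough that even Disney eventually lets go.
for age in (20, 25, 30, 50, 100):
    print(age, renewal_fee(age))
```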
This has a couple of advantages: 1) people still get the protection of their work for a relatively long initial period that they can live on until they do their next thing; 2) if something isn’t making money, there’s no reason for people to hold on to it; 3) but if it *is* they can keep a hold of the copyright; 4) it avoids “orphan” works, because if somebody isn’t using it it’ll relatively quickly drop into the public domain, and if it’s not clear who the rightsholder is there’s nobody paying the fee; and 5) eventually, it gets so expensive to renew that even somebody like Disney can’t hold on to it forever.
It’s a cool idea, but I think realistically the timeframes we’re talking about are subject to large changes in technology, media landscape, and economics, and there will certainly be significant inflation. This is probably something you’d want to fine-tune to incentivise the right models of usage and force copyright to eventually expire; there’s probably no simple, good way of doing it. A flat, mandated formula will get crushed by inflation, and there will probably be a way to game anything you do to tie the fee to the value or usage of the work.
For over 50 years, what looked like a company specializing in secure communication, Crypto AG, was actually owned by the CIA, and its rigged machines let the US read its customers’ messages.
The reporter thought this was pretty funny, and I’m still patriotic enough to see the humor. However, there are ethical issues with doing something like that, and there were people who were at least uneasy about it.
For those of you who are interested in ethics, what do you think of shenanigans on that level?
What do you think is the likely cost of this coming out?
Also, how much good did it do the US? It’s not as though people can look at American foreign policy and say it was strikingly clueful.
It makes the argument against countries using Huawei for 5G and other tech infrastructure more of a “lesser of two evils” situation. Who would you prefer to have your secrets, Uncle Sam or Uncle Xi? Ethically, it makes American carping about potential Chinese spying nothing more than shameless hypocrisy.
To be sure, I consider myself an American patriot. I think it’s the responsibility of every great power to acquire information on all other nations as a matter of national security. I’m not personally bothered by the spying, only that we got caught.
We’d already had public demands that American companies put back doors in all their cryptographic products for purposes of law enforcement when the moral panic about Huawei began. I don’t recall whether those bills passed or not; the point was that to some number of elected Americans, law enforcement was always more important than privacy, and I can’t recall any nation where “the national interest”, aka “intelligence”, was deemed less important than law enforcement.
As a Canadian, I objected to the Huawei panic in Canada on those grounds. The US is not a reliable friend to Canada, and is more likely than China to see Canada as a source of gain for them, due to simple proximity. It’s been bullying everyone in the Americas since the Monroe Doctrine, and it’s quite possibly only racism which currently makes them less nasty to Canada than to e.g. Mexico. (And we have the Trump trade renegotiation as a recent example of this kind of bullying. To this Canadian, what appeared to be going on was that the deal wasn’t “fair” to the US because they only got 90% of the gain, and wanted 99% ;-( – numbers picked to convey emotion; I don’t recall any useful info being published at the time.)
If there is no trustworthy tech supplier available, Canada should be making its own, not buying from either Great Power. Of course that would be hard to manage, given our historic tendency to bend over whenever the US asks, and our much smaller home customer base.
The trade deal was kind of a wash vs. NAFTA for Canadians, other than getting cheaper milk. (I despise farm subsidies at home and abroad; people shouldn’t have to pay more for food to protect a small, politically connected minority of inefficient businesses.)
Also, I’m pretty sure it’s your GDP per capita more than your melanin which influences American attitudes.
It seems to me that the real problem with NAFTA is that it didn’t have Trump’s name on it. He’d be happy to pass the exact same deal as long as he got the credit.
As a Canadian, I objected to the Huawei panic in Canada on those grounds. The US is not a reliable friend to Canada, and is more likely than China to see Canada as a source of gain for them, due to simple proximity.
The US reliably defends Canada, which A: we would like a bit of recognition for and B: gives us an interest in making sure Canada remains defensible. If, e.g., any cellphone that contacts a Huawei tower becomes a piece of Chinese spyware, then Canada may be comfortable with that, but the United States may legitimately not. And if Canada is going to trust China more than the United States in this regard, then the United States may have to stop trusting Canada at least where telecommunications are concerned.
This would be inconvenient for both Americans and Canadians, but I expect more so for Canadians.
It’s unsavoury but it’s the kind of thing state intelligence services do all the time. However, it does make all the crying about “Russian interference” look silly, and the newfound reverence on the left side for the FBI/CIA seem even more grotesque than it already was.
This is what governments are going to do, and it’s only fair to assume that if We are doing it to Them, they are just as much doing it back to Us. Nobody has any foothold on the moral high ground here. Ireland is too small and weak to be of interest to anyone, but that didn’t stop GCHQ, and if we could/can spy on anyone, we’re probably doing it too.
Though it does seem to be the case that if you know they’re doing it to you, you can take advantage of this (by using Cunning Devices along the lines of “Don’t throw me in the briar patch!”) 🙂
Indeed, Irish officials sometimes used this to their advantage. “Sometimes we wanted them to hear what we were saying,” said Mr Lillis. In these situations officials would speak in Irish or add extra encryption to their messages to signal to the British they contained particularly sensitive information and they should pay attention to it.
Has anyone used finasteride? What was your experience?
I’m not balding but my hair is thinning out and I’m hoping there’s a relatively simple fix. Finasteride seems like it will prevent further hair loss for ~$30/month, which is worth it to me, but I’m reading different things about the medical side effects and I’d appreciate anyone’s experience.
If you are worried about side effects, some doctors will prescribe topical finasteride, which has some of the same effect and a much lower side effect risk. I have been applying a cocktail of topical finasteride and stronger-than-OTC minoxidil to my scalp every morning for several years now. No side effects, and hair thinning/recession seems to have not gotten worse since I started applying it. But of course post hoc ergo propter hoc remains a fallacy.
I started to notice my hair thinning about 6 years ago, so I went ahead and asked about finasteride and got it. I feel like my hair has stayed at roughly the same thinness since. I take it orally, and I had mild sexual side effects for a few months after I started taking it, but they faded and now I seem to have no side effects. It is probably one of the highest value-for-the-dollar things that I have ever bought, assuming my hair would have continued to thin. I started using minoxidil about 6 months ago to try to gain back some lost ground if possible; it might be working, but it also makes my head itchy.
On my to-do list is to try mesotherapy for the scalp. Ideally I’d find a mix that contains minoxidil and finasteride as well. But either way, I’d start topical. I’ve heard rumours of a very small risk of pretty horrific side effects – stuff like permanent depression.
My hair started thinning about three years ago. I tested a variety of interventions, including minoxidil, ketoconazole, adenosine, changing shampoos, switching from combs to brushes, and finasteride. In my personal experience, among this set of interventions, only finasteride and minoxidil had observable effects. I am currently on a regimen of once-daily 1.25mg finasteride and once-daily minoxidil foam treatment (note that twice-daily is recommended for minoxidil, and note that liquid minoxidil is substantially cheaper than foam but can irritate the scalp). Under this regimen, I have observed no further hair thinning, and my vertex hair seems to have gradually (over two years) recovered most of its original volume.
It’s hard to assess the effectiveness of hair-loss interventions – you can’t easily see the top of your own head, humidity/haircare/styling/lighting strongly affect appearance, and the hair follicle cycle is very long (2-7 years for scalp hair). For my purposes, I attempted to assess treatment effectiveness by photographing the top of my head daily and, less rigorously, by gathering handfuls of scalp hair to feel for thickness and volume.
I am not a physician, and my opinions should not be construed as medical advice. But in my personal experience, finasteride has prevented my male-pattern hair loss.
In a thread below, John Schilling makes a reference to “six figures” in the context of a job. That got me thinking. I think my impression of a six-figure salary as being what you need to have made it was set somewhere in the mid-90s. Ye old inflation calculator tells me that $100,000 in 1995 is around $171,000 today. Introspecting, that rings pretty true to me. While I still say “six figures”, what I’m thinking of is closer to a $175,000 lifestyle than a $100,000 one. I wonder if people older or younger than me anchor differently, or if some don’t anchor that way at all.
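As a rough sanity check of that anchoring, the compounding works out to about the same number if you assume an average annual inflation rate of about 2.2% over 1995-2020 (an approximation, not an official CPI series):

```python
# Rough check of "$100,000 in 1995 is around $171,000 today",
# assuming ~2.2% average annual inflation over 1995-2020.
years = 2020 - 1995
annual_inflation = 0.022  # assumed long-run average, not an official figure
print(round(100_000 * (1 + annual_inflation) ** years))  # ~172,000
```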
I think people tend to think in relative terms and the thing about making six figures is that both now and in 1995 you were making about twice the average household income. A two income household where both people make six figures is in the top 5% of all households almost by definition. (A single earner six figure household is still roughly in the top fifth to quarter.)
I think the meaning is thus still pretty accurate: upper middle class or whatever you want to call it. I don’t think anyone thinks of six figures as 1% wealthy (those are “millionaires and billionaires”) but it’s definitely comfortable and high earning.
This ‘money equates to class’ idea is one of the very few things that I find jarring in the list of differences between America and Britain (or America and everywhere else?) I don’t know why it does, but the thought that Donald Trump could be considered upper class merely because he has a ton of money is one I find somewhat ludicrous. It could be snobbery on my part, of course, but I also recoil when upper-class (or upper-middle-class) people claim to be ‘of the people’ or working class simply because they’re short of cash.
My sister makes approximately ten times what I do, but from the perspective of everybody who knows us both (in the UK) we’re of the same class. Same culture, same education, same accent etc.
I agree that it’s a blind spot in the American mentality. Though I also think the system is less entrenched in the US than in Britain, I agree it does exist and is a thing. As I’ve pointed out a few times, Trump’s family has been rich for two or three generations at best while Anderson Cooper’s family is old money/old blood. This dynamic is important, if under-appreciated.
All that said, I think the same dynamics work once you presume everyone is working class. I doubt highly educated gentry would ever admit to caring about income, even if they were not all that wealthy until either they married properly or their uncle died.
As I’ve pointed out a few times, Trump’s family has been rich for two or three generations at best while Anderson Cooper’s family is old money/old blood.
In case anyone doesn’t know what Erusian is saying, Anderson Cooper is the son of Gloria Vanderbilt, the late great-great-granddaughter of American railroad baron Cornelius Vanderbilt via his youngest grandson (also named Cornelius Vanderbilt). In what was the tech capitalism of his day, the elder Cornelius raised his family from New York Dutch of modest means to the closest thing to American nobility.
Indeed. I have a direct ancestor who was the secretary of Robert Stephenson (of Rocket fame), and we’ve been keeping quiet about it for 200 years in an attempt to move up in the world…
ETA Yes, I know – “Don’t tell him your name, Pike!”
Apart from the ones that managed to marry into the Dukedom of Marlborough.
(okay, not the druggo one)
The heir apparent (the Marquess of Blandford) played polo at Harrow before rowing across the Atlantic, which is at minimum the very top of middle-class.
None of the ones that stayed in the US look like they could pull off Harris tweed, though. Probably don’t own any of Scotland, either.
@Zephalinda
Is that really the case? Are there no English lordlings attending Eton today, the fifth generation in his family to do so – his up-from-nothing great-great-great-great-grandfather having bought a title with a fortune made in the Industrial Revolution and married a daughter off to the scion of a down-on-its-luck noble family? And if there are, do his peers really consider him middle class because of this terribly shameful history?
I don’t understand the point of your objections if the “European class system” you are trying to contrast doesn’t actually exist anymore. Seems pretty affected.
And if there are, do his peers really consider him middle class because of this terribly shameful history?
Not so much those, since a lot of the posh schools are quite used to “quis paget, entrat” (as Private Eye‘s mythical St Cake’s school has it for its school motto) and so have happily enrolled the sons of foreign despots and gangsters alongside home-grown nouveau riche and old blood/old money scions. Lordlings with an American moneybags great-grandfather probably pass the test.
The snobbery is rather more refined than that; take the case of Michael Heseltine, a so-called Tory grandee who was a big shot in the party and bought his own stately home, but could never quite shake off the stigma of being a self-made man (and indeed felt that it had harmed his chances with the upper ranks of the party):
(from an obituary of Alan Clark in 1999): But contrary to the myth that has grown up, he did not accuse Michael Heseltine of having had to buy his own furniture. He merely recounted, with some glee, the gibe made by Michael Jopling, the former Chief Whip.
What Clark said in his diaries:
Michael Heseltine: “An arriviste, certainly, who can’t shoot straight and in Jopling’s damning phrase ‘bought all his own furniture’, but who at any rate seeks the cachet. All the nouves in the party think he is the real thing.”
Original remark:
1986
Conservative politician Alan Clark writes in his diary about his detested “arriviste” colleague Michael Heseltine, quoting fellow MP Michael Jopling:
“The trouble with Michael is that he has had to buy all his own furniture.” Clark found the remark “snobby but cutting.”
Why is it cutting? Precisely because Heseltine had to buy his own stately home, instead of inheriting it (and all the original furniture to go with it), hence an arriviste, one of the nouveau riches, not really ‘one of us’ and so looked down on by the real grandees of the party who wielded influence and power.
Jump forward to the Tories under David Cameron as Prime Minister 2010-2016 and the (not so) subtle internal pecking order. From a newspaper article in 2014 about the Tory leadership struggle, where Gove and Osborne were alleged to be allied to stop Boris getting it:
Rather than attacking Mr Cameron, it seems more likely that in mentioning Eton, Mr Gove was seeking to make another point. A Tory MP said yesterday: “Who else went to Eton? Boris. Gove is saying don’t pick another Old Etonian as leader after Cameron. George went to St Paul’s.”
Indeed, Mr Osborne was nicknamed “oiky Osborne” by some of his associates at Oxford, on account of him having attended St Paul’s School in London. While it is one of the top schools in Britain, it is more traditionally one for children of the ambitious west London middle classes, whereas Eton is regarded as being socially more elevated. On such small and ludicrous differences – irrelevant to most voters – are Tory feuds built.
EDIT: Fun fact, G.K. Chesterton also attended St. Paul’s – I know which of the two alumni I prefer!
Yes, Osborne will be a baronet in due time, but it’s only an Irish baronetcy and, honestly, his family made their money in trade, so he’s not really top-drawer (from a handy but not comprehensive guide to the ranking of English public schools and why it matters):
Indeed, poor George Gideon Oliver Osborne, who will become the 18th baronet of Ballintaylor and Ballylemon on the death of his father, was known as “oiky” by his Etonian friends on the basis of his ‘humble’ education.
Wikipedia:
His father is Sir Peter Osborne, 17th Baronet, co-founder of the firm of fabric and wallpaper designers Osborne & Little. George Osborne is to inherit the baronetcy; he would thus become Sir George Osborne, 18th Baronet. His mother is Felicity Alexandra Loxton-Peacock, the daughter of Hungarian-born Jewish artist Clarisse Loxton-Peacock (née Fehér).
This ‘money equates to class’ idea is one of the very few things that I find jarring in the list of differences between America and Britain (or America and everywhere else?) I don’t know why it does
That’s because the United States spent a century or two pretending it was a classless society. Since it is and always has been unignorably obvious that some Americans have a whole lot more money than others, we folded all of the observed socioeconomic differences between groups of Americans into “well, they’re all the same class, it’s just that some have more money than others”.
And proceeded to talk at length about the differences between poor Americans, middle-class Americans, and rich Americans. Please to ignore that word after “middle-”; it means nothing. And if someone crassly talks about lower- or upper-class as if those things might mean something in America, they’ll just be remapped to poor and rich, respectively.
I think the meaning is thus still pretty accurate: upper middle class or whatever you want to call it. I don’t think anyone thinks of six figures as 1% wealthy (those are “millionaires and billionaires”) but it’s definitely comfortable and high earning.
Just for the sake of accuracy, six-figure salaries can put people in the top 1% (just not those in the $100k range).
Markets are so variable that it is hard to compare now. I wouldn’t want to make our income in San Fran and live that lifestyle, but making it here (roughly the national average) is quite nice, and making it in a cheap area we would live like kings.
Out of curiosity, I looked up the official numbers for Norway (2018). We are often considered a rich country due to oil exports accumulated in a large (~ 1 trillion USD, or 200K per capita) national fund, but while the Norwegian state is undoubtedly affluent, to what extent does it impact the wealth of the population? Here’s the table, roughly translated to USD using a rate of 9 NOK per USD. Some caveats below:
Not unexpectedly, the spread is much larger in the US: high incomes are higher, low incomes are lower. I suspect looking at individual incomes makes this even clearer; it is very rare to have a wage in six figures (USD), so most high-income households are double-income households.
Pegging the poverty line at half the median income seems to result in a little less than 20% of Norwegian households and a little more than 25% of American households classed as poor. That said, I think living costs are higher in Norway; one major difference is the 25% VAT (compared to single-digit-percent sales tax in the US?), but low-income living costs especially are high. Even cheap food isn’t cheap, and when living in Germany, I estimated our living costs there¹ to be about half of what they are here. I think this makes a pretty big difference to the impact of poverty. House ownership is very profitable compared to renting, which also hurts the low-income segments disproportionately. Things like (non-electric) cars, alcohol, and eating out are expensive.
Norway has “socialized” medicine, meaning that all citizens are part of the national program and will receive medical help in most cases (not, e.g., dentistry), and with no or moderate payment (low caps on yearly pharmaceutical expenses before the state takes over). It’s not clear to me if American households spend their income on medical insurance, and if so, how much – or if it is paid by the employer and thus also an additional benefit. Likewise for pensions – are you expected to manage this on your own dime? And saving for children’s education – in Norway this is mostly covered by free tuition in public universities (most of them), and by student loans to cover living costs.
I guess I’m rambling on here, hope this is interesting to some of you.
For medical insurance it depends. The very poor get Medicaid, which is free but which no doctor wants to take because it pays less than anything else. The working poor and lower middle class have it perhaps the worst – their employers either don’t offer insurance at all or offer plans where employees have to contribute to the premium as well as pay high deductibles and co-pays. The professional classes generally have quite good employer-provided health insurance, as do employees in union jobs and those that work for a government. The elderly have Medicare, which is quite good except for the fact that there is a 20% coinsurance for doctors’ visits as well as co-pays for drugs. Bottom line: it can vary a lot how much an American household has to pay for healthcare / health insurance.
For pensions, the traditional pension is dying in the US. Government employees still get them, as well as some unionized workplaces, but that’s about it. For everyone else you are expected to save on your own or rely on social security.
Finally, even public colleges are fairly expensive these days and there are also living costs. Attitudes vary as to whether parents should save for these expenses or their kids should take student loans.
Yeah, instinctively I think of “six figures” as what six figures was worth when I started working full time, which is roughly double what it is now. But I’m aware of this and usually remember to correct for it. A Google starting salary of ~$115K may not be what I’d have thought of as “six figures”, but it is still a lot higher than the ~$65K (in 2020 dollars) I got from IBM when I started working full time.
$100,000 is a very good salary. It is senior analyst, mid-level management, engineer level. When we say certain people feel entitled to a $100k salary, that’s exactly the band most mean.
$170k is extremely high. That’s senior middle manager or director, basically the point where you get serious perks and one rung below incentive payments being 40-50% of your compensation.
$200K looks to me like a very good, very experienced software engineer who isn’t in management, and isn’t a director-equivalent Individual contributor. In Silicon Valley. $170K looks like the same engineer, minus skill at negotiating salary and the sense to jump ship when their current employer takes them for granted.
I wasn’t trying to denigrate a $100k/year salary or reignite the endless debate about what constitutes middle/upper middle etc. Just thinking about how there are these markers that are fixed verbally but change in value over time. It could just as easily have been “millionaire”.
Welcome, again, to Hollywood. This time the King of Kings/Executive Producer has purchased the Terminator franchise and is determined to relaunch it, starting with a remake of the original film. Who should we cast in the principal roles?
Dave Bautista was actually my first pick for the Terminator, but then I thought about trying to go for the less obvious move of not automatically picking a wrestler/bodybuilder for that role, and Tom Hardy has shown he is perfectly capable of playing extremely imposing muscular juggernauts.
I guess the first question is whether we want to keep the contrast between the Terminator and Kyle Reese that was present in the first film: the Terminator is big and doesn’t say much, while Reese is smaller and more articulate. We don’t have to. An advanced android that specializes in infiltration could be a charmer rather than a bruiser.
But if we decide to follow in the footsteps of the original, we need a big man, and probably someone famous enough to draw an audience. Dwayne Johnson? Jason Momoa? Going a bit farther afield, maybe Rory McCann (The Hound).
Florence Pugh is an absolute no-brainer for Sarah. She’s the right age (several of the other suggestions are too old) and she’s got the range for the full transformation from frazzled waitress to monomaniacal badass.
George Mackay is our Reese. That’s a young man whose eyes can tell us he’s seen things no-one should.
Ben Foster is the cyborg. Dude knows how to be scary – but then he knows how to do everything: he might be the best actor working today.
Mahershala Ali is Traxler. Does this even need explanation?
Michael Shannon is Vukovich.
Alison Brie is Dr Silberman. We need some more women in the cast, this is a spot that makes sense, and, well, don’t you want to see what she’d do with it? I do.
Holliday Grainger is Ginger. I think she’d be able to bring that party girl energy while retaining truth and nuance.
But mostly, why in blue fuck are we remaking Terminator?
Every single one of these actors is better than the one who originated the role. We’ll have wizzo 2020s VFX. We’ll have a 9 figure budget instead of 7. We’ll have Director Bong and Roger Deakins shoot Craig Mazin’s script.
And the film we make is 100% guaranteed to be a pale shadow of Cameron’s masterpiece.
I didn’t mind Cumberbatch as the character – I thought he played the role of John Harrison extremely well. He wasn’t Khan of course, but that is the fault of Abrams and his gaggle of untrained monkeys in Bad Robot, not the fault of the actor. He did the best he could with what he was given and on that measure, it worked.
(They had to do a fix-it comic book series in which we find out how come Khan Noonien Singh, South Asian genetic superman, looks like Cumberbatch and the answer is “plastic surgery by Marcus to disguise him”. Yeah, that was convincing).
So I refuse Abrams’ hack jobs on Trek and substitute my own headcanon where Cumberbatch is John Harrison, renegade Section 31 Starfleet officer, and the rest falls nicely into place (never mind the Magic Space Blood Cures Death rubbish, the rebooted franchise ignored that out of sheer embarrassment as well). The entire Marcus plot was dumb but salvageable (nobody noticed he was building his own superduper warship out around Jupiter? Really?? So Starfleet only has one (1) functioning starship post Nero and that’s Enterprise?) so I’d appreciate a reboot of the reboot without the stupid crap where Abrams was trying to literally recreate shots and style of the original Star Wars movies as his showreel to prove to Disney he could do the job for them. No Spock/Uhura romance where Uhura gets turned into a nagging shrew who wants to talk about their relationship and her feeeeelings in the middle of an important mission in front of their commanding officer, no building your starships on the ground in the desert, no “Klingon homeworld is practically next door in travel time”, no transwarp silliness, the list goes on…
Cumberbatch as the new improved model T-1000 would work, but perhaps not as the original Terminator. Not unless you’re really rebooting the heck out of the original and doing an Abrams on it 🙂
The Star Trek reboot movies did a great job of casting people who look/feel like the original characters, but the plots are just stupid. I honestly don’t get why SF-ish movies can afford gazillion-dollar special effects budgets but can’t put together a plot that minimally hangs together in the face of, say, a bright 15-year-old spending fifteen minutes after the movie thinking about it. Not that the original series was any great shakes for coherent plots and consistent world building, but the reboot managed to fail to meet even that low bar.
Sorry, but after watching CumberKhan, I’d rather he not ruin any more classic villains.
But we are asking him to play a literally inhuman villain here, and I thought he did fine as Smaug. There were other problems with those movies, but not Cumberbatch’s casting.
He was the wrong choice for Khan, or at least for remember-Wrath-of-Khan(*), because he doesn’t chew scenery in Ricardo Montalban’s larger-than-life fashion. And that’s what that role called for.
But for the Terminator, the inhumanly detached and dispassionate (or at least very selectively passionate) Cumberbatch of e.g. Sherlock Holmes would work quite well, I think. He doesn’t have Schwarzenegger’s physique, but we’re not supposed to believe it’s muscles that are doing the work anyway. He’s tall and he has presence and he’s done motion-capture work for the endoskeleton scenes.
I’m in. Cumberbatch, Clarke, and Lewis it is.
* Having just rewatched “Space Seed”, I’d be up for an alternate universe where Khan was always Cumberbatch, but that’s not what Abrams was going for by far.
We already saw Emilia Clarke as Sarah Connor in “Terminator: Genisys”. She was ok, but nothing special. Lena Headey, who would later be known for playing Cersei Lannister, did a better job as Sarah in “The Sarah Connor Chronicles”.
Jessica Chastain showed some real spirit as Maya in Zero Dark Thirty, and would be a fine choice, but she’s quite old, at 42. I suppose we could rewrite things so she’s a midlife professional of some sort, rather than a waitress/college student.
But if we keep the script as it is, we’d need an actress who can play both sweet and fierce, and can credibly portray an early-twenties Sarah.
Jesus wept, the Emilia Clarke suggestion was *serious*? She can’t act! She can’t fucking act! She’s killed three franchises and counting with her utter failure to act. She’s a goddamn joke. There’s a really good actress who went to her old school, but she ain’t the Teddy’s old girl you’re looking for.
For the record, I voted for Sanders in 2016 (I voted for Biden this time), and I have sympathy for these youngsters’ gripes, but
“...“Educating a generation and saddling them with debt and then not giving them jobs where they have the wage that they presume they should receive based on the amount of time they spent on education,” Virgil said. “That’s a pretty good way to turn them into radicals.”
He is a good example of his own target audience: He graduated with $100,000 of debt from Cornell and after college took freelance gigs from Craigslist, hoping to write...”
Yeah, um, but what the Hell?
Four years and $100,000 for Craigslist gigs?
I’m (early) Gen-X, but feel free to “okay Boomer” me because that sounds insane to me.
For a five-year union apprenticeship you can get paid and come out with a $100,000 wage at the end of it. When I got in, it was a series of multiple-choice exams, and I imagine someone who passed the SATs to get into a university could get in the way I did.
Two years of welding classes at a community college, with a few hundred dollars in fees and materials (maybe $2,000 total), gets you an $80,000 to $120,000 a year job.
Less than a year at the welding equipment manufacturer’s school (in Cleveland, Ohio, so it’s cheap to live there) gets you the same jobs, and while not as cheap as the first two “learn a trade” options, it’s less than $100,000 – which you can still buy a home in parts of California for!!
Sure, the conditions aren’t great, there’s no girls and few women, and you really shouldn’t be a smoker and a welder (though many are and have short lives), but at least you earn $100,000 instead of owe it!
I have deep sympathy for the kids at Kennedy High School in Richmond, California, who lost a chance out of the ghetto when Mr. Floyd died (nice guy, but he smoked like a chimney and it killed him) and there was no longer anyone left to teach them how to use the tools Chevron donated, but these “dirtbag” guys?
A thread or three ago @Conrad Honcho described Sanders supporters as “a bunch of people who chose the wrong majors”, and I thought that was unfair, but I take that back for some of them.
Sure, if everyone now rushing into the universities learned a trade instead, the wages would be lower and more would be displaced in the trades and forced to beg or pick tomatoes, but
“…Adam Angstead, 46, had stepped out of the theater for a cigarette. He works for the Iowa City school district as a substitute teacher five days a week, but he said his employment offers no benefits. On the weekends he works at a diner. Twice a week he sells his blood plasma for extra cash.
It’s still not enough. He was trying to pay down his $40,000 in student loans for a while, but it hardly made a dent…”
doesn’t sound like a good path.
I don’t suppose “learn to weld” can be scaled up for everyone any more than “learn to code” can, but even if there’s “free college for all”, if college leads to Craigslist gigs or selling plasma, what’s it for?
It’s for creating Brahmins. A lot of 18-year-olds “have to” go to college because their parents are Brahmins and they don’t want to be a big disappointment. Others have Vaisya or Sudra parents, and for decades there’s been much fear in their parents’ culture that college will turn their kids against them (in This Present Darkness back in 1986, the author ascribed college indoctrination to literal demons).
Kids whose parents aren’t Brahmins would be insane to try to change classes, unless they can successfully major in something more than remunerative enough to pay back student debt, like Computer Science, but what is society to do with all those Brahmin kids who have filial duties to go to college even if they can’t hack a high-paying major?
I don’t know what this Indian caste metaphor is supposed to mean, which I suppose is rather the point, but kids with very wealthy parents aren’t the ones being described. Those kids are fine with no real job after their expensive educations because they have trust funds. And if they want well paid jobs they can get them regardless of what they majored in.
The ones that are really upset are upset for the same reason many Trump supporters are—they are raging against the fact that we no longer live in a world with highly paid buggywhip salesmen. They want what their parents had and they can’t have it, because they were born too late. Unlike Trump voters, what their parents had was well paid tenured professorships and intellectual magazine editorships.
It’s a Death Eater thing. Moldbug uses “Brahmin” to describe a modern American cultural cluster. It maps pretty well to Scott’s “blue tribe” idea.
That usage isn’t original to Moldbug, although he did expand it significantly. “Boston Brahmin” is a long-standing term for the old-money, generally Ivy League educated, mostly-WASP subculture in New England.
Mind that I’m not a Death Eater. I skimmed a number of his essays when he was active, but never considered him insightful enough for finishing one to be useful. And Thomas Carlyle is a damn fool influence to have if the elevator pitch for your political philosophy is “like monarchy, but a publicly-traded business rather than a family business.”
I independently think that Indian caste terms are a more insightful way to talk about class than our “lower” (same as “working”?)/”middle”/”upper middle” ladder, which is focused on income (and maybe status, if it doesn’t equate status to income) rather than functionalism and the mores you internalize from parents and peers. The idea that rulers/society owe you a living preaching or teaching or other work that’s not beneath your class is captured much better by this, and the whole varna (caste) idea also captures much of the economic functionalism of the Marxist class terms, without misleading you into starving people.
The pre-existing term “Boston Brahmin” is apt, though I’m skeptical that there’s a real distinction between what an Ivy degree in a soft subject tells you to believe vs. what a State U degree in the same field tells you to believe. The higher status does translate into higher income after graduating with a degree that has no useful content, because networking with higher-status Brahmins as a student is a feedback loop.
To the second though, I don’t think the disgruntled indebted college students are largely cases of “failed to launch” magazine editors and college professors. There really are a large cluster of people best described as “took a BS in their ‘passion’, used loans to go to their ‘dream school’ and just sort of assumed that entitled them to a well paying job”.
That may be pretty close to what they were promised, but I’m not sure it was ever reality. There really was a time when you could get a union manufacturing job without any real effort, work for solid pay, and retire on a decent pension. There was never a time when a 4 year philosophy degree was a guaranteed ticket to a “living wage”.
Some ended up dying in Vietnam; turned on, tuned in, and dropped out; or suffered one of the many other vicissitudes of life–but I bet as a cohort the bachelor’s class of 1968 did very very well for itself, philosophy majors included.
There was never a time when a 4 year philosophy degree was a guaranteed ticket to a “living wage”.
I believe there was a period of about a generation where this was close to true, as hiring for low-level office jobs started to strongly favor BA-in-who-cares candidates where a high school diploma and a bit of training had sufficed in the past, and the number of low-level office jobs was increasing due to the growth of the regulatory state.
But that gets you a living wage doing something, not a six-figure salary for doing the thing you’re passionate about. I fear we may have started encouraging people not to “settle”, at about the time when settling is what you probably have to do if all you’ve got is a degree in something too many people are passionate about.
Some ended up dying in Vietnam; turned on, tuned in, and dropped out; or suffered one of the many other vicissitudes of life–but I bet as a cohort the bachelor’s class of 1968 did very very well for itself, philosophy majors included.
Before the GI Bill, the percentage of Americans who got college degrees was, what, 1-2%? The government did veterans (a huge % of the male population, unlike the small kshatriya class we have today) a solid, and then college was still a prudent choice for their kids the Boomers.
After the Boomers, everything went to Hell and there’s little consensus as to why.
Though even when X% of Boomers were doing the right thing by going to college, there still needed to be Boomer plumbers and house builders and all the rest for civilization to keep going.
My dad was born in 1940, missed Vietnam, and got a degree in philosophy. He bought a computer and taught himself to program in what must have been the late 60s or 70s and ended up with a proper programming career, staying with the same company for most of it. Despite being on a ‘legacy’ salary during the many layoffs in the 90s, he survived to retirement with a sweet pension.
To the second though, I don’t think the disgruntled indebted college students are largely cases of “failed to launch” magazine editors and college professors. There really are a large cluster of people best described as “took a BS in their ‘passion’, used loans to go to their ‘dream school’ and just sort of assumed that entitled them to a well paying job”.
I’m not sure the bolded part is really true, though. In college people who weren’t in STEM made jokes about not being able to pay their student loans back all the time. I know of no case where someone changed majors over this concern. One of my younger brothers is in college now, and I’ve heard some of his friends who are still in high school express their intention to get a certain degree, crack a joke about how they’ll never get a job with it, and then change absolutely nothing about their plans. That’s not “assuming you’ll get a good job somehow”, that’s “knowing you’re about to do something stupid and doing it anyway”.
That’s not “assuming you’ll get a good job somehow”, that’s “knowing you’re about to do something stupid and doing it anyway”.
Eh, I think that’s mostly gallows humor.
On the one hand, sure, they understand that the outlook is bad and the odds are against them. On the other hand, they still think, deep down inside, that they will be the ones to beat the odds. But they can’t say that aloud because it sounds ridiculous and arrogant.
Kids whose parents aren’t Brahmins would be insane to try to change classes, unless they can successfully major in something more than remunerative enough to pay back student debt, like Computer Science
It’s not insane; it’s what I did. But it is getting harder.
The consequences of rising costs fall more on the middle class, though, I think—families wealthy enough that FAFSA assumes they can pay for their kids’ education, which means they get no need-based aid and might as well save for it. This holds even if, due to the two-income trap or some other way of living beyond their means, they’re already investing a lot indirectly in their kids’ education and can’t much afford it. Their kids are meanwhile told all the way through school, Do what you love and don’t worry about the cost. And then they graduate and get a rude awakening.
Kids who are much poorer than the middle class avoid this because they’re eligible for financial aid, which makes state colleges and the like affordable, provided they major in something that pays, as you say. But they face serious challenges middle class kids don’t: they probably don’t have the family or community support to make it to college, or schools that can properly prepare them for it. Regardless, these kids are generally not today’s dirtbag left or, for that matter, Bernie bros; it’s the middle class kids who are.
An anecdote: When I was getting ready to enter college, I heard my parents bemoaning the state of college graduates who weren’t prepared to work, had a lot of debt, etc. Both expressed the opinion that more people should go to trade school instead of college. After hearing this kind of thing off and on, I raised the idea at dinner one night of becoming a welder. I was treated to a long lecture about how becoming a welder was wasting my potential and about how I really should go to college.
Not too long ago, one of my younger brothers expressed a similar idea at dinner and got the same lecture. The experience left a bad taste in my mouth; it felt like my parents were saying, “This is for other people, but not MY kids!”
It wouldn’t surprise me if this mindset is very widespread.
The experience left a bad taste in my mouth; it felt like my parents were saying, “This is for other people, but not MY kids!”
It wouldn’t surprise me if this mindset is very widespread.
I agree that this is happening to a large extent.
But while it seems hypocritical, it might often be correct. It can simultaneously be true that a whole lot of people who are currently getting low-value degrees from non-prestigious institutions would be better off becoming welders, and also true that any particular highly motivated and intellectually gifted individual is still better off going the traditional college route.
In the supposed “golden age” of the United States (the 1950s) a lot of people were earning L2 compensation for L3 work. In a time when well-paid but monotonous labor was not considered such a bad thing (to people coming off the Great Depression and World War II, stable but boring jobs were a godsend) this was seen as desirable, but we can’t go back to that, and most people wouldn’t want to.
To a G2, college professor, scientist, entrepreneur, and writer are desirable jobs. Creative control of work is important to G2s, although not all are able to get it (because creative jobs are so rare). David Brooks’s Bobos in Paradise captured the culture of G2s in that time well. Members of this social class aggressively manage their careers to get the most out of them (in terms of intellectual and financial reward), but what they really want is enough success and money to do what they really value, which is to influence culture.
You can say something very similar about the second group. 30-40 years ago a lot of people were earning E3 compensation for G2 work. It was an enviable sweet spot; of course it got arbitraged away. Why pay big bucks for jobs people are willing, eager even under a revealed-preferences model, to do for small bucks? So we get the current situation where people do those jobs for the small bucks and just spend a lot of time bitching and moaning about how terrible it is that they get paid small bucks. We see the same thing with teachers, for example.
My guess is none of these people are struggling to make ends meet, so any complaints about money are likely to reflect their resentment over the status hit that comes with a low salary rather than any genuine material need.
No one* in the US has genuine material need. They want money for the same reason everyone wants money.
I think a lot of the anger is coming from the fact that these people were doing everything society told them to do, and they’re still just scraping by. When I was in high school, the message was all “You must go to college; you’re doomed to ‘do you want fries with that’ jobs if you don’t go to college.” I have seen schools in my area put up the banners of colleges their students got into. I am not aware of any high schools boasting of the trade schools their students got into. I myself was raised in an upper middle class family, and the no college options were not really presented to me. Sometimes I wish I had gone into some sort of government blue collar work, but how often is that really presented as a viable option? Even in lower class communities, I think most high school guidance counselors push college uber alles.
Welding is for Red Tribe. We’re talking about Blue Tribe Americans here. And this discussion is pretty much exactly what Scott’s tribal distinctions are meant for.
Blue Tribe Americans believe that they have the inalienable right to sit behind a desk(*) thinking Deep Thoughts and telling other people what to physically do to make the world a better place. And to earn at least six figures for it. Blue Tribe Americans believe that all Americans have this inalienable right, as soon as they can be educated out of their Wrong Tribe ways. Yes, yes, this implies that there be someone to actually do the physical stuff that we’re all going to think up for them to do, but that’s for other people. All their role models sat behind a desk thinking deep thoughts and telling other people what to do, and that’s what they’re going to do.
Welders, however well paid, don’t get to think deep thoughts and they don’t get to tell other people what to do. To some people, that’s worth going $100K in debt and selling plasma for ramen to avoid – especially since one of the first things they’re going to think deeply about and tell other people to do is political stuff that involves erasing that debt and shafting whoever was fool enough to loan to them.
Disclaimer: I earn six figures sitting behind a desk telling other people what to do. Or at least what not to do.
* Well, OK, some artistic careers are acceptable even if they do require standing in front of an easel or on a stage. So long as you are artistically expressing deep thoughts about what other people should do to make the world a better place.
It is interesting how this idea is similar to the society described in Starship Troopers. In the novel, people became Citizens by serving in the military, because only the people caring enough about their society to defend it should be allowed to steer it.
Now imagine that instead of exhaustive training, and fighting where you can randomly lose your life, the requirement for becoming a Citizen is merely to spend some time listening and learning how to be a good Citizen. That’s it; to become a Citizen, you only need to say you want to, and then learn how to do it. Such a society, despite technically having two unequal castes, doesn’t feel unfair, because the door to becoming the elite is wide open. The only people who don’t become the elite are the ones who choose not to. No injustice is done to them.
…and this is, kinda, how the current caste system feels to the “Brahmins”. Anyone can get a diploma, if they choose so. Biological intelligence is not an obstacle because, remember, IQ ain’t real. Difficult subject is not an obstacle because you can choose a simple one. Cost is not an obstacle because you can choose a cheaper university (and in some countries, the state will pay for you). All you need to do is apply, and spend some time trying. It is perfectly fair to treat those who refuse as second-class citizens; they literally chose so.
(Of course this is not the true description of how it really works, but it requires some privilege-checking to notice so.)
Blue Tribe Americans believe that they have the inalienable right to sit behind a desk(*) thinking Deep Thoughts and telling other people what to physically do to make the world a better place. And to earn at least six figures for it.
Work that wasn’t beneath a Brahmin included being a pundit (priest), teacher, or philosopher sitting and thinking Deep Thoughts for other people to physically implement. Chanakya would be an archetypal example of that last one (the government he told what to do was the famous Chandragupta Maurya).
Blue Tribe Americans believe that all Americans have this inalienable right, as soon as they can be educated out of their Wrong Tribe ways.
Yet the Wrong Tribe includes all the plumbers, electricians, HVAC technicians, construction workers etc. who let that class of people have air-conditioned desk jobs in buildings with indoor plumbing rather than thinking Deep Thoughts under a tree and defecating in a pot. Educating all of them out of their class/tribe would be a disaster.
Welding is for Red Tribe. We’re talking about Blue Tribe Americans here. And this discussion is pretty much exactly what Scott’s tribal distinctions are meant for.
I think what we’re seeing is the creeping forward of “progress” into the white collar jobs that were formerly considered inviolable. The whole reason people were told, as Theodoric says, “You must go to college; you’re doomed to ‘do you want fries with that’ jobs if you don’t go to college” is because the blue-collar jobs of boring but stable and well-paid work were being automated away or outsourced overseas, with the labour market turning gradually from manufacturing to service industries. So to get a decent job where you wouldn’t be stuck in low-paid, precarious work, you needed to move up a rung of the ladder to the world of “clean indoor work with no heavy lifting”, and that meant a college degree.
Now the same rationalisation of industry/the economy is hitting the white collar world due to automation/outsourcing/progress, and the same people who nodded along to “it’s a shame but it’s how the economy works, those kinds of manual labour jobs are dead or dying” articles in the media are now seeing it hit them instead, and they don’t like it any better than the working/lower middle-class did when their traditional good pensionable jobs dried up. (See how outraged journalists got at the “learn to code” slagging directed towards them: how very dare anyone think journalism for online clickbait organs is anything less than a sacred calling pursued by the best and brightest! It must be targeted anti-media harassment by the notorious alt-right, not just people taking the opportunity to make dumb jokes!)
Hmm. Outsourcing of tech jobs was already a thing 20 years ago, and working conditions (and to a lesser extent remuneration) were dropping because of that. At that point (aged about 40) I wasn’t sure my career would remain good until I was able to retire.
Shortly after that, I managed to bust my way into the next rung, in spite of it previously seeming to be marked off as “no Aspies dare apply,” and things got easier for me personally. (Outsourcing was never for the top roles, let alone folks the executives would see as belonging to their own class.) Also, the engineers in India began demanding a lot more money than they had been when the outsourcing to India started, and lots of potential outsourcers changed their minds about its profitability. And at the same time the best Indian engineers were still mostly emigrating, and that meant really good engineers in India were hard to come by, unless you offered them a job that would move, with them, to the US.
So that phase of white collar outsourcing caused fewer problems for US engineers than I’d originally expected.
But anyone who thinks this is new, is either well under 40, or wasn’t paying attention at the time.
And yes, the “we write well” knowledge workers were mostly affected a bit later than the “we do technical stuff” knowledge workers, but at this point we have “local” newspapers outsourced to god-alone-knows-where, and the written English of US-born people, like that of ESL people, is whatever the spellchecker/grammar checker/AI suggestions happen to produce, and too often ranges from ungrammatical to incoherent. (And meanwhile, I’ve learned to read and write both Indian English and Chinese English fairly fluently, since I see so much of both of them.)
Cornell claims that $100,000 of debt isn’t even a possibility. ($30,000 after 4 years is the max). They could be lying, but if so it seems to me the NYTimes should be investigating that. I suspect you’d actually find that either the person isn’t telling the truth, or that they did some exceptionally unwise things to increase their debt, even besides taking some useless degree at an Ivy League school with $56,500/year tuition.
A substantial fraction of college debt is driven by living expenses, not just tuition, fees, and course materials. Going to college full-time usually means you aren’t working, or at least not working much, and 4-5 years of living expenses adds up to a tidy sum, even just for a dorm room and a campus meal plan.
Living expenses make up about half the cost of attendance at most public universities, or maybe 20-25% of the cost of attending a private university without a scholarship.
Yep. For poor people, the most limiting cost in your life is, well, the cost of your life. You can’t simply stop paying it for a few years, regardless of how much good it could do for you later.
The officially recommended path upwards on the social ladder is to borrow a lot of money, gamble with it in a game that is stacked against you, and win. The game is called university, and it is stacked against you because unlike your classmates with university-educated parents, you didn’t get the same training at home, you don’t know how the system really works as opposed to how it is supposed to work, in case of trouble it is more difficult for you to find help, etc. And if you lose this gamble, you lost at minimum a few years of your potential income.
My father went to college in a country where public universities were both free and considered top tier. He did well in his exams and got accepted, then flunked his first trimester hard because he still had to work to pay his living expenses, and that left him no time to study. He applied for some scholarships, got one from some corporation which was investing in increasing the supply of engineers, and was able to pay his living expenses with it, apply himself to his studies, and graduate. Afterwards, he started working on an MBA but dropped out to join the work force because he was out of scholarship money and found he couldn’t afford to not work.
This is indeed part of the problem. You fill out your FAFSA. You get offered a pile of loans. There is no immediate effort to say “here’s what we think you actually need”, just “here’s what you can have” and I think a lot of people just take out the max.
The other problem is taking substantially longer than 4 years to finish.
This is an excellent point, and though several of the responses to it are good, I don’t see why the basic idea isn’t shouted from the rooftops more. That said, many people entering college are genuinely clueless about money, jobs, and careers, and their equally clueless parents have somehow drilled into them the notion that “college = good” regardless of major, debt, or career path.
I don’t suppose “learn to weld” can be scaled up for everyone any more than “learn to code” can, but even if there’s “free college for all”, if college leads to Craigslist gigs or selling plasma, what’s it for?
I guess I don’t understand the problem here. Most college graduates still make higher incomes than people with high school diplomas. There are a select few majors where this may not be the case. The problem, then, lies in the curriculum (or existence?) of these select few majors. The fact that a small number of people are getting degrees that make them worse off is problematic whether college is free or not.
And consider this: If you are one of the unlucky ones who chose a major that is somehow rendered worthless (let’s say, you went to trade school and your very trade was replaced by an AI), which would be the better situation: 1. Having a worthless degree and no debt 2. Having a worthless degree and [school tuition] worth of debt? The existence of worthless degrees only bolsters the case for free college IMO. The rich have plenty of money to tax; taking it from them to pay for college incurs almost no utility loss.
As a small aside: There are lots of very bad theories in the comments about why people support free college. I call them bad, because the assumption seems to be that support of free college is primarily fueled by college graduates expressing regret for their choices. This doesn’t square with the fact that (in both exit polling and pre-election polling) the Democratic candidate advocating for free college has proportionally greater support among non-college graduates, while the candidates who don’t advocate it have higher support among college graduates. And since everyone is arm-chairing their political-psych theories, I’ll offer mine: It’s called “pulling the ladder up”.
Um… Okay. I am frankly fascinated. Tell us more. Does it apply only to the first person with the chutzpah to ask? Or only to people named “brad”? Or does the government really have some back room that contains 372 trillion dollars?
I feel sure that either you have misunderstood brad or that I have misunderstood you.
Well, Brad didn’t specify any details or elaboration, so I didn’t feel the need to add any myself.
Firstly, one million dollars isn’t that much money. Many government employees easily make that cumulatively over the span of 10-20 years. So the answer to the generic question “should the government pay one million dollars to some people” is “yes”, assuming that government employees should exist.
For the sillier question of “should the government give one million dollars out to people randomly”? Sure, as long as it comes from the DoD budget. The government spends tons of money on things that actively make the world worse off. Giving a million dollars to random people is probably a better use of it. We could have a lottery system.
And that’s only if you insist on being revenue neutral. You could ask “should the government tax people, such that brad gets a million dollars” and the answer could still be yes, assuming the tax was levied in a way such that it is re-distributive, e.g. a billionaire tax that turns people into millionaires.
It’s all very silly. I have no idea what he was going for with the question.
It was in response to this part:
which would be the better situation: 1. Having a worthless degree and no debt 2. Having a worthless degree and [school tuition] worth of debt? The existence of worthless degrees only bolsters the case for free college IMO.
That people will be better off if you give them free stuff is both obvious and a totally inadequate justification for a proposed policy of giving some people free stuff.
I don’t think non college grads support Bernie because he offers free college specifically. He offers free lots of things.
The actual college grads are more likely to realize they are going to be stuck with the bill for all this “free” stuff.
I don’t know how you justify making worthless degrees “free”. That’s a huge pile of resources going to something you admit is worthless that could better be put toward healthcare or infrastructure or pensions or whatever.
I don’t know how you justify making worthless degrees “free”.
That’s a huge pile of resources going to something you admit is worthless that could better be put toward healthcare or infrastructure or pensions or whatever.
If we know already that a particular degree is worthless, then its existence is a problem whether it is tuition-free or not. “Huge piles of resources” are being wasted by people deceived into paying for something that doesn’t deliver. This is a problem regardless of whether that pile of money comes from the public or private sector.
So that is one problem we can try to solve.
But if we assume that this problem isn’t one we are going to tackle (reasonable, since I’ve not heard anyone talking about it on the campaign trail), or if the problem is more intractable than it appears (also possible, it may be difficult to predict what careers are actually profitable) then we are left with the question of what to do assuming there will be worthless degrees.
My take is that if there are going to be worthless degrees, it is better for people not to be saddled with debt for having them. That just compounds the already existing harm of the wasted earning-years. It would be as if, as an alternative to offering unemployment benefits to anyone who lost their job, we slapped a fine on them instead.
Simple yes or no question: would you agree with the following statement [Edit: last sentence added for clarity]:
If we already know that cigarettes are harmful, then their existence is a problem whether they are free or not. “Huge piles of resources” are wasted by people deceived into paying for something that harms them. This is a problem regardless of whether that pile of money comes from the public or private sector. So subsidizing them wouldn’t make the problem any worse.
Or how about:
If we already know that homeopathic medicines are ineffective and are harmful if substituted for real medicines, then their existence is a problem whether they are covered by Medicare or not. “Huge piles of resources” are wasted by people deceived into paying for something that harms them. This is a problem regardless of whether that pile of money comes from the public or private sector. So subsidizing them wouldn’t make the problem any worse.
Assuming the answer is no, I think the difference comes down to the “worthless degrees” being associated with your tribe, unlike cigarettes and homeopathic medicines. And in the case especially of cigarettes you’ll understand that making something free leads to more of it being consumed.
A man is walking down an alley late one night and gets robbed of $1000. The police catch the robber and he goes to trial. The prosecutor suggests that the robber should give the victim back the money. However, the defense argues that if we transfer $1000 from the robber to the victim, we are subsidizing the victim’s poor decision to walk down alleys at night. After all, if you subsidize something, you get more of it.
No, I don’t agree with the defense. Because “we” are not giving anything back. The taxpayer is not giving anything back. The robber is giving something back. I feel like this gets to the crux of the difference between the way we look at the world. You don’t see the significance of the difference between my money, your money, the government’s money, a rich man’s money; you want to look at it as if it’s all in the same pile. Maybe in your moral system there isn’t a significant difference. There is in mine, and so that’s the answer to your question.
@Lambert
@Alexander Turok
So you all are fine with subsidizing the bad behavior of walking down alleyways at night, and your only insistence is that the money come from what you view as the appropriate pool? That’s what I suspected.
It seems like we are all okay with subsidizing people’s ability to make bad decisions without them reaping too much of a penalty. It’s true that if you reduce the “cost” of picking bad degrees, you get more of them. The same applies to reducing the “cost” of walking down alleyways at night.
I think people should be able to walk down alleyways without losing $1000, and likewise I think people should be able to pick a bad degree (wasting four years of their life) and not incur the additional penalty of debt. Doing dumb things has inherent penalties of its own, we shouldn’t be trying to make it even worse.
@Alexander Turok
I notice you haven’t answered mine.
I never said that subsidizing something wouldn’t lead to more of it. The question of whether such a subsidy makes things “better” or “worse” depends on what you think people are owed. Cigarettes and homeopathy have significant non-financial harms, so the question is not quite analogous.
So you all are fine with subsidizing the bad behavior of walking down alleyways at night, and your only insistence is that the money come from what you view as the appropriate pool
You are not using the word “subsidy” correctly. I don’t think you understand the concept.
I think people should be able to pick a bad degree (wasting four years of their life) and not incur the additional penalty of debt. Doing dumb things has inherent penalties of its own, we shouldn’t be trying to make it even worse.
I don’t think of asking people to pay for things they consume as a “penalty” whether it’s food or housing or education or whatever. Cigarette smokers should be able to choose their habit and should incur the “penalty” of having to pay for their own cigarettes. We should be “trying to make it worse” by not subsidizing that decision.
I never said that subsidizing something wouldn’t lead to more of it. The question of whether such a subsidy makes things “better” or “worse” depends on what you think people are owed.
In my moral system making a bad thing more numerous is a morally bad action. I’m sure there are exceptions where you can identify compensating good outcomes which justify them. Do they exist in this case? Why are people “owed” education as opposed to other things? Why not food or housing? Is it just because the blue tribe raised you to think of it that way?
In my moral system making a bad thing more numerous is a morally bad action.
Do you agree that walking down alleys alone at night is a bad thing, and should be discouraged? And yet, in my example, you support transferring $1000 to someone who does it. Quibble if you want whether this money counts as a “subsidy”, it’s undeniable that this $1000 transfer encourages this behavior as opposed to the counter-factual where the $1000 wasn’t transferred.
I’m sure there are exceptions where you can identify compensating good outcomes which justify them. Do they exist in this case?
Indeed. The “good outcome” in this case is that people who are already suffering from wasted years are not burdened with the additional suffering of debt. Because of the declining marginal utility of a dollar, this increase in net utility can be achieved via progressive taxation and transfers (i.e., taxing the rich to pay for the less-rich).
Why are people “owed” education as opposed to other things? Why not food or housing? Is it just because the blue tribe raised you to think of it that way?
lol at thinking I was raised by the “Blue Tribe”.
But yeah, they are owed an education. They are owed food, housing, and education, among many other things. We owe lots of things to people. If you want to argue that the same pot of money could theoretically be given to people to spend on more useful things you won’t get any objection from me, but that wasn’t the question at hand. It wasn’t “free college vs. SNAP”, it was “free college vs. nothing”.
Do you agree that walking down alleys alone at night is a bad thing, and should be discouraged? And yet, in my example, you support transferring $1000 to someone who does it.
No, I support giving them their money back. It’s like saying “hey, do you support not having police beat people who smoke? Doesn’t this encourage smoking as opposed to the counter-factual where the beatings do occur? Then how can you object to making cigarettes free on the basis that it encourages smoking!”
Quibble if you want whether this money counts as a “subsidy”,
It doesn’t count. Words have meanings. This is how intelligent you sound:
You say you oppose prostitution, but now you’re telling me prostitution is okay so long as no money or anything else of value changes hands!
Do you agree that walking down alleys alone at night is a bad thing, and should be discouraged? And yet, in my example, you support transferring $1000 to someone who does it. Quibble if you want whether this money counts as a “subsidy”, it’s undeniable that this $1000 transfer encourages this behavior as opposed to the counter-factual where the $1000 wasn’t transferred.
In this scenario I think it’s perfectly reasonable for the government to tax that 1k as a disincentive to walking down alleys. Letting the robber keep it is totally immoral. Perhaps you could tax it at 100%; I think something more reasonable like a 25% idiot tax is more appropriate, but we are haggling about price at that point.
And as people have pointed out, the schools are the robbers, and the loans have caused schools to increase enrollment and raise tuition. The moral way to cancel student loan debt is by taxing universities to pay for 100% of the bill.
And how do we fund the system of courts and police that causes their money to be given back? Because if your answer involves taxation (and you don’t seem to be advocating for anything as radical as a taxless system, given your use of words like “taxpayer”), that starts to look a lot like subsidizing activities by building public infrastructure.
One hardly needs to be an anarcho-capitalist to be skeptical of free college for all. So it doesn’t seem like much of a gotcha to point out that such a skeptic believes in police and courts.
If we know already that a particular degree is worthless, then its existence is a problem whether it is tuition-free or not.
The degree is generally not worthless, but its worth is distributed extremely unequally. 1% of people who major in music will get a lot of money. Most of the others will end up indebted and angry.
The real problem with college loans (or anything looking like free college) is that it turns out it’s not a subsidy for students, it’s a subsidy for teachers and administrators. Thus we are only paying for bloat, not anything of value.
And consider this: If you are one of the unlucky ones who chose a major that is somehow rendered worthless (let’s say, you went to trade school and your very trade was replaced by an AI)
There it is! The example is always something which comes totally out of left field. No one could have predicted it. The thing about these worthless majors is that people know they are worthless. You may say “well what does it matter if they’re in a hole because the wind blew them there or because they jumped in there, they need help out regardless.” I say it does matter and if it doesn’t then why are the examples almost always of the former?
which would be the better situation: 1. Having a worthless degree and no debt 2. Having a worthless degree and [school tuition] worth of debt? The existence of worthless degrees only bolsters the case for free college IMO.
This is the equivalent of saying “the fact that these government-provided free cars frequently break down and leave their owners without a means of transportation only bolsters the case for free cars! Better to have no means of transportation and no debt than no means of transportation and debt!” The whole X to Y comparison is flawed because you assume that when you start subsidizing something the amount consumed does not change.
The rich have plenty of money to tax; taking it from them to pay for college incurs almost no utility loss.
This only makes sense if you believe that the rich are just storing their money in a vault somewhere, that it’s not doing anything, not being invested and not creating any value. Yes, some is spent on yachts and other conspicuous consumption. Other money is invested. For the record I support higher taxes on the rich but not if the money will just be wasted.
I call them bad, because the assumption seems to be that support of free college is primarily fueled by college graduates expressing regret for their choices. This doesn’t square with the fact that (in both exit polling and pre-election polling) the Democratic candidate advocating for free college has proportionally greater support among non-college graduates, while the candidates who don’t advocate it have higher support among college graduates.
I don’t see any contradiction here: poorer people are more likely to vote for the economically Left-wing candidate. Poor non-college Democrats support it because they understand coalition politics, you scratch my back I’ll scratch your back. But if given a choice between free college and programs that actually benefit them, which do you think they’ll choose? I say the reason that cancelling student debt is on the table and cancelling credit card debt is not is because the college educated chattering classes don’t want to pay back their loans.
And since everyone is arm-chairing their political-psych theories, I’ll offer mine: It’s called “pulling the ladder up”.
Since America has never had “free” college I don’t know what “ladder” you are talking about. Presumably there’s some other “ladder” they benefited from because it would be impossible that anyone ever got anything through hard work and their own effort.
The thing about these worthless majors is that people know they are worthless.
No they don’t. People are value-maximizers. No one does something that they think is worthless on purpose. If I knowingly decline to become an electrician out of high school (and make ~$60,000) and instead get a major in interpretative basket weaving (and make ~$15,000), then I must have valued the experience of the degree/pleasure of the job at >$45,000.
So yes, people only receive worthless degrees through either 1. Deception 2. Changing market conditions they didn’t anticipate. If you are talking about anything else, you aren’t talking about worthless degrees.
What I see in the world is a whole lot of people engaging in short-sighted behavior and continuing to do so even as they hear very many warnings. People continuing to smoke even as they see the warning label on every pack, not making any effort to quit. If you tautologically define “value” such that any freely chosen activity where you have all the information increases it, then sure.
I think there’s no disagreement about objective facts here, both of us agree that there are majors which do not deliver economic value to their graduates. The disagreement is just about what you call it. You brought up the term “worthless majors” and now you’re saying it doesn’t apply here. Okay, would “economically non-productive majors” be acceptable? If so, replace the phrase in my original comment. All my points still apply.
People continuing to smoke even as they see the warning label on every pack, not making any effort to quit. If you tautologically define “value” such that any freely chosen activity where you have all the information increases it, then sure.
I have a neighbor who is in his 60s who just quit smoking and claims it isn’t hard for him. I haven’t seen him smoking (he smokes on his front porch) in months, so I believe it has taken; he basically smoked for 40 years without once really trying to quit.
What I see in the world is a whole lot of people engaging in short-sighted behavior and continuing to do so even as they hear very many warnings. People continuing to smoke even as they see the warning label on every pack, not making any effort to quit. If you tautologically define “value” such that any freely chosen activity where you have all the information increases it, then sure.
I’m not unsympathetic to this position, but think about its implications: If people are idiots who don’t know what is best for themselves, then what is the argument for people who make “bad decisions” to be saddled with debt?
They couldn’t have known (or perhaps were too feeble-minded to make) the best decision in the first place. So why punish them for it?
The conclusion is that the option should just be removed by the nanny-state altogether, not left lying there as a trap ready to be sprung.
I think there’s no disagreement about objective facts here, both of us agree that there are majors which do not deliver economic value to their graduates. The disagreement is just about what you call it. You brought up the term “worthless majors” and now you’re saying it doesn’t apply here. Okay, would “economically non-productive majors” be acceptable? If so, replace the phrase in my original comment. All my points still apply.
Setting aside my admitted trollishness, there are very few majors that have a negative market value in the sense that people who have these degrees command no higher salary than people without. I’m no expert on this, but I’m going to wildly ballpark something like 1%-10%. STEM, business, education, law, healthcare, social work, religion, and trade school degrees all still “pay”.
And yeah, this is probably bad that not everything does (but may be fine if you really like the experience of going to college, but again I’m setting this aside). But why the “free college” debate always seems to home in on what is really only a small minority of degrees, treating them as an example of what we are “subsidizing”, is indeed curious.
I’m not unsympathetic to this position, but think about its implications: If people are idiots who don’t know what is best for themselves, then what is the argument for people who make “bad decisions” to be saddled with debt?
They couldn’t have known (or perhaps were too feeble-minded to make) the best decision in the first place. So why punish them for it?
Having to pay back debt you willingly took out is not “punishment.”
The conclusion is that the option should just be removed by the nanny-state altogether, not left lying there as a trap ready to be sprung.
Only if you don’t put any value on human liberty. And even then, you have to trust the nanny state to get the answer right.
There was a popular musical about the stressful urban poverty of people who get two Master’s degrees in social work (“and now I am therapist / But I have no clients! / And I have an unemployed fiance! / And we have lots of bills to pay!”) or Bachelors of Arts in English.
One of these OTs I’ll start a top-level discussion asking folks to Bulverize why it’s so uniquely gratifying to sneer at philosophy majors.
What’s wrong with “it’s because they, unlike smokers, demand everyone else subsidize them?” It would be another matter if it were their own money being spent.
Most studies showing different ROI for different majors are deeply confounded by selection effects: students sort into particular majors by intelligence, gender, and personality, all of which then have sizeable downstream effects on earning potential. Abolishing the religious studies major and routing all those students into astrophysics is not going to make them into astrophysics-type people.
Agree completely.
If you buy Bryan Caplan’s signaling model of education, the branding/informational value of the institution + major is what employers primarily want from a degree, anyway. And indeed, many physics majors go on to consulting gigs where they use at most a little basic math from their undergrad, none of the other content. If what employers want is a signal of exceptional quantitative aptitude or systems thinking, then removing the major-based signal by making all degrees STEM degrees will just prompt them to seek other ways to filter that.
Bryan’s model seems to me like an exercise in motivated reasoning. He wants to believe the market is rational, and he marshals a bunch of evidence that the standard explanation for why college graduates are favored is wrong. So he goes looking for another explanation and he finds one. I think he’s accurately described the reason employers favor college graduates but failed to show that this is the rational approach. And anyway, if they find one, the crucial question would be whether it is being subsidized by the taxpayer or not.
A lot of thinking about “useful” majors suffers from fallacy of composition, assuming that the job prospects for the ~5% of students who earn engineering degrees today would equally be available to 100% of students if we could only get all those kids into engineering training. Instead, I’m not aware that we see ravenous demand or amazing working conditions in any STEM fields; testimonial anecdata suggest that on the margins talented and BA-equipped folks regularly leave tech and engineering after failing to find a suitable gig.
Agree completely.
I think anybody calling to solve our education woes by eliminating useless fields of study also needs to specify the alternative avenue they’d recommend, and moreover, to demonstrate that it would scale.
If your point is that for it to be politically realistic you need an alternative, sure. If this is your view then we have a fundamental value difference. Suppose you are a medieval peasant and you’re experiencing drought. Someone comes offering to do a rain dance for a price. Someone else says ‘let’s not.’ Would you agree that they need to provide an alternative way of handling the drought before advocating not employing the rain dancer?
When my grandparents went to CCNY it was free. Of course, like most CCNY students (then and now) they lived with their parents and got to campus on the subway. And while at the time it had a strong reputation as the “Harvard of the Proletariat” now it’s… well, not as pitiful as it was during the open admissions era, but thoroughly undistinguished.
The University of California system was also historically tuition-free for in-state residents. I would imagine this was similarly true for public universities nationwide. Over the second half of the twentieth century there was a gradual shift from getting the bulk of their funding from direct appropriations to getting it from student loans, which is probably more equitable from a perspective of not wanting to subsidize dilettantes, but does mean there’s nothing to keep administrative bloat in check. In other words, we’re indirectly subsidizing a different group of dilettantes, namely assistants to the deputy vice-dean for strategic dynamism.
At this point we’re probably better off doing away with student loans altogether. If we’ve decided that higher education isn’t a public good, let’s stop throwing money at it in the form of loans we’ll never be able to collect. And for god’s sake undo the 2005 bankruptcy amendment and make them dischargeable again.
And for god’s sake undo the 2005 bankruptcy amendment and make them dischargeable again.
+1
And while we’re at it, since a lot of this was colleges admitting marginal students and offering “degrees in useless”, put the colleges on the hook for at least part of the discharged amount.
Pretty sure the “worthless majors” are a red herring, although gee whiz do folks love to get their smug on about them. (One of these OTs I’ll start a top-level discussion asking folks to Bulverize why it’s so uniquely gratifying to sneer at philosophy majors.) Reasoning:
I know this OT is mostly dead at this point, but as I was catching up I planned to say exactly this, I’m glad you beat me to it.
In addition, some of the people who chose a “worthless major” were fully aware that the type of career it set them up for was not a high-paying job; the problem is now they can’t get any job.
The claims generally about people with “worthless majors” and the further claim that these people feel entitled to six figure jobs are both way off base with minimal support. Weak manning at best, but most likely just flat out straw manning.
In addition, some of the people who chose a “worthless major” were fully aware that the type of career it set them up for was not a high-paying job; the problem is now they can’t get any job.
So, I’m just curious… what sort of jobs did humanities majors reasonably expect they were going to get that they suddenly find to be unavailable?
@Matt M > “…what sort of jobs did humanities majors reasonably expect they were going to get…”
Thanks to the magic of family, friends, and Facebook I’ve some idea of what my peers who were humanities majors did wind up doing:
My brother (thanks to family and in-law support, including mine) went to college in the early 2000s as a political science major, moved to Maryland, and worked various odd jobs, the longest with an aftermarket auto parts manufacturers’ lobbying organization, until getting a job with The State of Maryland (which by SHEER COINCIDENCE his father-in-law also worked for). One guy I went to high school with is now a public librarian in Orinda, California; another guy became a lawyer; most of the girls I knew became school teachers – and most of them moved out of state. Some guys I knew attempted college but didn’t graduate, and they usually did worse than those who never made the attempt; military to trades usually worked better than “some college” (except for an electrician turned cripple).
My wife was a dual English/Philosophy major, then she went to law school, made (what seemed to me) good money as a paid intern while going to law school, then met me, dropped out of law school, worked a few temp agency jobs (mostly at banks), and then became a stay-at-home wife and mother.
My mother, father, and step-father all went to college in the ’60s and/or ’70s (though in my father’s case ‘community college’, with the hope of transferring to a university and becoming a pharmacist). My Dad failed to graduate and became an “independent contractor” (a truck, tools, and his back) until joining the laborer’s union; before the hospice and hospital, he lived in public housing in Oakland, California. In contrast my uncle never went to college, joined the Plumbers union, then became a contractor, and doesn’t live in public housing.
My step-father after college was briefly a social worker (before I knew him), quit that, became a taxi driver, then press photographer (he first took some riot photos freelance), then camera store, then bait shop, and finally a gun shop that the City of Oakland taxed out of existence.
My Mom, after working blue collar with my Dad (she did roofing with him!), divorced him, then made and sold candles and puppets on Telegraph Avenue, then got a job as a secretary for the University of California, then as a secretary for The New York Times (her boss wrote The Falcon and the Snowman), then went back to U.C.
So law school or government work (usually teaching) is what I’ve seen humanities majors do, faring better than “some college”, and the non-law-school ones just don’t earn as much as guys in the trades who didn’t suffer crippling injuries.
The rich have plenty of money to tax, taking it from them to pay for college incurs almost no utility loss.
Even ignoring the hurt feelings of the rich, there is a significant loss. It is much better described in The Case Against Education, but the short version is that even ignoring money, you still pay for education with years of your life, and it’s a zero-sum game, because the only thing that matters to employers is that you wasted more time than the other guy, and therefore you send a stronger signal. The less people need to pay with money, the more they will be asked to pay with years of their lives.
In some sense, a society where only the 1% could get a college education would be better, because in such a society, not having a college degree would not prevent you from getting a good job.
On the other hand, imagine a dystopia where 90% of the population spends 30 years of their lives in college, and the education they get there is mostly crap. In such a society, if you decided to save 30 years of your life and skip the system, you would find out that no one actually wants to hire you — failing to do what 90% of people can do sends a pretty bad signal. (So you start your own company then? Oops, there is all kinds of regulation against people like you.)
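For what it’s worth, here’s a minimal toy sketch of the pure-signalling story being described (my own illustration, not anything from Caplan’s book): if employers rank candidates only by years of schooling, then a subsidy that adds the same free years for everyone changes nobody’s hiring outcome; the extra years are simply a cost paid in time.

```python
# Toy model of pure positional signalling: employers rank only on years of
# schooling, so giving everyone the same extra (subsidized) years changes
# nobody's outcome. All numbers are arbitrary.
import random

random.seed(0)
candidates = [{"id": i, "years": random.choice([12, 16, 18])} for i in range(100)]

def hired(pool, openings=20):
    """Fill `openings` slots with whoever signalled longest."""
    ranked = sorted(pool, key=lambda c: c["years"], reverse=True)
    return {c["id"] for c in ranked[:openings]}

before = hired(candidates)
for c in candidates:
    c["years"] += 2  # the subsidy makes two more years free, so everyone takes them
after = hired(candidates)

print(before == after)  # True: same winners, everyone just spent two more years
```

Under these assumptions the individual incentive to stay in school longer is real, but the aggregate gain is zero, which is the sense in which the game is zero-sum.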
FWIW, college is often considered the best time of a person’s life; 30 years of subsidized college where I only have to pay in years of my life sounds like an amazing deal that I would take in a heartbeat. Work sucks.
I wonder if that’s out of date. When I was in college, towards the end the coursework was heavy enough that I literally had no time to do anything but work most days, and 18 hours a day still meant I had to prioritize which items were most important to get done on time – and I still wasn’t doing enough credits to graduate in 4 years. My heart quit on me for a few seconds after a calculus test.
Other people seemed a little better off but not by much.
it’s a zero-sum game, because the only thing that matters to employers is that you wasted more time than the other guy, and therefore you send a stronger signal.
I don’t care that Bryan Caplan wrote a whole book with this as the premise: it seems so flatly, obviously untrue that I’m struggling to formulate a response that would even register to the understanding-of-reality of someone who would advocate for it.
I mean, is he really saying with a straight face that the reason doctors go to medical school is nothing more than zero-sum positional signaling? How about engineers- no need for those pesky physics classes, right? If I’m going to be hired as a biotechnician, it can’t really be relevant whether I actually know chemistry, genetics, or evolution, right?
It just seems like an over-correction of the most basic kind: Caplan correctly realizes that some degrees are useless positional signalling, and that some credentialing acts more as gate-keeping than quality-control. But he runs away with the logic to argue that all degrees and credentialing must be bad. Which is utterly indefensible IMO.
I mean, is he really saying with a straight face that the reason doctors go to medical school is nothing more than zero-sum positional signaling? How about engineers- no need for those pesky physics classes, right? If I’m going to be hired as a biotechnician, it can’t really be relevant whether I actually know chemistry, genetics, or evolution, right?
It just seems like an over-correction of the most basic kind: Caplan correctly realizes that some degrees are useless positional signalling, and that some credentialing acts more as gate-keeping than quality-control. But he runs away with the logic to argue that all degrees and credentialing must be bad. Which is utterly indefensible IMO.
Perhaps the government should abolish useless signalling degrees at public universities, which would aid in clearing up this confusion.
Obviously under US law, trying to ban the Ivies and other private universities from offering degrees in Signalling Studies for a million dollars would violate several clauses of the First Amendment, but what they could do would be a game-changer.
To the extent that Villiam is right that useless degrees are a problem (and again, I suspect they make up a tiny minority of all degrees, but they do exist), the “cleanest” solutions would be difficult to mesh with existing US law. For a legislature unburdened by those constraints, the best solutions would be:
1. Banning colleges from offering truly useless degrees. Cut it at the roots.
2. Banning employers from requiring useless degrees. Eliminate demand.
But since we can’t do that, the best solution we are left with is:
3. Make getting the useless degree as painless as possible for those who have to do it.
The government can’t outright ban tulips, and it can’t eliminate people’s demand for tulips. But it can give free, unlimited tulips to everyone, which has largely the same effect on the signalling game.
No, the “best” solution is forcing colleges to self finance these loans. And if we are going to forgive those in the past, make those same colleges pay for the forgiveness.
> How about engineers- no need for those pesky physics classes, right?
I ain’t learned jack that I couldn’t’ve taught myself. And I don’t expect to use more than a tiny fraction of it. (And I intend to go into some pretty solid R&D)
I ain’t learned jack that I couldn’t’ve taught myself. And I don’t expect to use more than a tiny fraction of it.
Ah, but which fraction of it you will use is going to be the question and you won’t know that until you need to know it.
As to “I could have taught myself” perhaps indeed you could. But who would have checked you weren’t teaching yourself bad habits or going down a rabbit hole and learning the wrong thing? “This is the way I’ve always done it” is not always the right way.
I don’t care that Bryan Caplan wrote a whole book with this as the premise: it seems so flatly, obviously untrue
Yeah, it’s obviously untrue. I am an employer, in that I have to occasionally pick between candidates for jobs, and when I see a degree from a good university, I see a reliable indicator of intelligence, conscientiousness, knowledge, talent, and skill – some of these things being merely screened for by the university, others actually enhanced by it. If our union allowed it, I would at least be willing to consider a self-educated engineer, but the degree is far more than a signal of wasted time. And the difference between a bright undergraduate intern and a bright graduate with an MS or Ph.D., in terms of what I can trust them to do without my regular and direct supervision, is enormous.
So Caplan is wrong. But we knew that.
The people saying we should get rid of the degrees in uselessology are also wrong, and we ought to know that from the fact that our host’s undergraduate degree was in Philosophy IIRC. There are some places where deep knowledge of a particular domain is needed, but others where breadth of knowledge and ability to apply one’s intellect to anything is more valuable. The classic liberal-arts degrees, and most of the rest of the “uselessology” fields, are pretty good for that, and it’s something we need.
So, maybe don’t subsidize them as much, but don’t denigrate them as much as they often are here.
No, the “best” solution is forcing colleges to self finance these loans. And if we are going to forgive those in the past, make those same colleges pay for the forgiveness.
To be more precise: as far as federal aid goes, there is no difference between getting a degree in the humanities or grievance studies vs. computer science. There’s no reason for universities to stop offering these as long as students are still majoring in them. If federal aid depended on the market value of these degrees, we could expect many fewer students to major in them.
(For what it’s worth, students at the margin have already realized that some degrees are useless; as well, my impression is that when there is an economic downturn, useless majors go down. This is bad enough that some schools are having trouble keeping certain programs going, because they need at least a few majors to be able to run their programs. At my Jesuit university, for instance, Theology was scarcely getting one major per year, and the strategy for the last few years has been to attract as many minors as possible. So I’m sure this would hit those programs hard. I think a lot could be said about what that will mean and how colleges should respond, but that’s a little too tangential. Maybe we should save that for the next quarter thread.)
I don’t see how this follows from what you wrote. From your comment, only the part “other [things] actually enhanced by it” goes against the signalling model. And Caplan admits that school is only 80% signalling, and the remaining 20% is some useful stuff.
This reminds me of my question about factory work. The answer that a lot of people said to me probably applies here: people do what they know and they’re often very risk averse.
“Educating a generation and saddling them with debt and then not giving them jobs where they have the wage that they presume they should receive based on the amount of time they spent on education,” Virgil said. “That’s a pretty good way to turn them into radicals.”
A great example of what conservatives and libertarians mean when they talk about the “entitlement mentality.” Everyone is EQUAL but I deserve more money because I went to college, the government must give it to me!
Four years and $100,000 for Craig’s list gigs?
I’m (early) Gen-X, but feel free to “okay Boomer” me because that sounds insane to me.
Well, I’m sure the craiglist gigs were just supplementing the real source of income: Daddy.
I think there’s a stigma against the blue collar trades but there’s also a real fear and a real risk to the trades which justifies the salary.
While plumbing is still doing fine, there’s a lot of skilled tradesmen in the Midwest with 20 years of experience, no job, no prospects, and no degree. There’s a lot of long-distance truckers with no future in the industry as driverless trucking approaches. There’s a lot of taxi drivers who probably made decent money with $1 million medallions and the memorized layout of a major city like New York who have been replaced by Google Maps and Uber. And I can tell you there’s a lot of competition for any firefighter or police officer positions.
I don’t see how any kid or parent trying to plan out a 40-year career could look at what happened to blue-collar workers over the past 40 years and be confident that automation/outsourcing/etc wouldn’t consume their industry and leave them without valuable skills or a fallback option. The middle/striver class is freaking out because, well, college was supposed to be a $100,000 job guarantee and it’s not anymore. That doesn’t change the fact that blue collar workers have been devastated in general and the college-educated have done better.
Or, to rephrase, how do we know plumbing and other skilled trades aren’t just suffering from survivorship bias, and that most skilled tradesmen who started in 1980-1990 aren’t significantly worse off?
I’m not aware of this midwestern phenomenon. Links? It’s been a while but when I was last out there the trades were still doing well. And things like carpentry, plumbing, etc are unlikely to be automated. Your two examples are truck driving and taxi driving, neither of which are skilled trades or manual labor.
I’d say automation risk runs the other way. It’s highly unlikely tasks that involve a high degree of visual identification and working with hands will be automated or outsourced in the near future. White collar work, meanwhile, is much more likely to get automated.
‘Trades’ insofar as they involve working on stuff in-situ are unlikely to get automated soon.
But the jig-borers are all gone. One person can feed G-code into a dozen CNC machines and let them all run, etc. There are no lofts full of draughtsmen. Robot arms can weld, rivet, and glue with preternatural precision and consistency.
This doesn’t necessarily mean that jobs go away. There’s now a market for 5-axis milled parts that were just not practical in the past. And you need skilled machinists to operate them.
A draughtsman is not someone who’d work at a factory…
Anyway, factory work isn’t a trade either. At least not by my definition. What’s yours? Because if your argument is just that unskilled labor is having issues, then I’d agree. But as someone intimately familiar with Midwestern factories, I can assure you high skill non-college workers are in very high demand. The effect was not to eliminate tradespeople but to eliminate the least skilled workers. Which is still a problem because those are people too.
The effect was not to eliminate tradespeople but to eliminate the least skilled workers.
And that’s the crux of it: why people thought “college/more college” was the answer. For an increasing demand for more skilled/higher-skilled workers, more training and more education was needed. Now it wasn’t just “you’ll pick it up on the job”, you needed some level of instruction and qualifications beforehand as well as what you learned on the job. Hence going to college to get that shiny degree which was a guarantee that you were indeed qualified and enabled you to walk into that good job. And the inevitable creep from “left school but hard-working” to “need to have your high school diploma” to “need a certificate for training beyond high school” to “need a basic college degree” onwards.
The new jobs coming along are no longer on the shop floor, they’re the ones that require a high set of skills and abilities, and where there’s the continuing sifting of people so the less-skilled drop out at more and more levels of the process. That’s our problem. I don’t know the solution, because the demand for higher-skill/higher-IQ workers is ongoing, and you can’t simply move your “used to be a coach driver working with horses, then cars replaced horses, now he works on the automobile assembly line” employees around like that anymore; it doesn’t work that way now. Now you’re requiring your “used to be a coach driver” employee to be able to design the engines for the automobiles at high performance specs for the new jobs.
First, the midwestern reference is just to the general de-industrialization of the US, which hit the Midwest particularly hard. I’m sure there were many skilled trades involved in steel, car manufacturing, sheet metal, etc. Some of those people were surely just labor.
Second, I’m not confident automation is the major threat here. Sure, you can automate car production, but outsourcing seems by far the bigger threat there. Everything I hear about farming is bad (and farming is certainly high-skill), but we haven’t outsourced or automated that; it’s just market consolidation there. There are several different reasons salaries have stalled, not just automation; look at the sorry lot of adjunct professors: no outsourcing, no automation, still miserable.
But for a more specific example that I have some familiarity with, take the able-bodied seaman: a highly skilled physical trade requiring no more than a high school degree and paying ~$300/day, or $54,000/year. Lots of machinery skills are required, as well as the ability to work in a specialized environment. And it’s not like shipping doesn’t make money. But the merchant marine has been slowly dying for decades; the total fleet went from 2,926 ships to 182. And unlike self-driving cars, autonomous cargo ships are currently being built and tested. Able-bodied seaman jobs, and the merchant marine, were and still can be very profitable, but the work requires high degrees of visual identification and working with your hands, and it is likely to be gone within 10-20 years.
Are line workers in a factory tradespeople? Not by my definition, but what’s yours? If you mean people who operate or repair machinery that takes a long time to learn, their employment prospects are fine. It’s the people who used to do simple rote tasks that are having issues.
Farming is a special market because it’s so highly regulated and the government has been very unkind to farmers in order to extract cheap food from them. I’m not sure what the others have to do with skilled trades. As for your example of seamen, again you appear to have an extremely non-standard definition of skilled trade. More to the point, the US sailor has been dying out due to cheap competition (it is, definitionally, a global market) and flags of convenience going to other countries.
I’m not sure this will go anywhere but I’d define skilled trades as anything requiring significant non-college experience primarily involving physical goods.
For reference, for immigration purposes, Canada defines skilled trades as including bakers, cooks, butchers, mechanical/technical maintenance, and agricultural supervisors.
“Learn to weld” is probably good advice for diligent individuals on the left side of the IQ distribution, but diligent individuals on the right side probably have access to better-paying, more comfortable jobs than that. They should get different advice than just “pick a trade”. However, there’s more to it than picking a major, because you also need to know how to act on the job and boost your performance.
“You sure about those numbers? BLS has Welder salary at $40k/year…”
Yes.
I’m sure that BLS wage rate is for “There Be Dragons”, not Emperor Norton’s realm!
The BLS has Plumbers at $53,910 per year, when in my area it’s about $90K to $110K per year for a full-time union plumber.
The City and County of San Francisco usually pays less per hour than guys got in the private sector, but the scale for Welder (represented by the Electrical Workers union) is $81,692.00-$99,294.00 yearly.
For a Pipe Welder (represented by the Plumbers and Steamfitters union) it’s $96,902.00-$117,832.00 yearly.
For a Fusion Welder (represented by the Iron Workers union) it’s $95,472.00-$116,038.00 yearly.
In California (the last time I checked) at least one county’s union local rate was $60K per year, instead of the $95K that was my county’s rate. I suspect it’s the same for college grads: different areas will have different pay scales (though I’ve read that public school teachers in Texas all get about the same rate, so living in a cheaper area of Texas can dramatically increase your savings compared to living in Austin).
Austin is a pretty reasonably priced city. The city center is very small and expensive, but living 10 minutes away you can rent a two-bedroom apartment for $1,200 a month in a nice neighborhood, and for less than $1,000 in as close as Austin gets to a not-nice neighborhood. If you wanted to live in Odessa you might save $200 a month, but nobody wants to live in Odessa.
This is another thing people miss: you shouldn’t compare outcomes to the national average because geography is important. Almost every job in SF is paid better than a comparable job in Des Moines. You need to compare trade labor in the area to unskilled labor in the area, not unskilled labor in the area to national average trade labor.
Two years and a few hundred dollars in fees and materials (maybe $2,000 total) of welding classes at a community college gets you an $80,000 to $120,000 a year job.
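A back-of-envelope sketch using only the figures quoted in this sub-thread (the roughly $2,000 / two-year welding route at the union scale above versus the “four years and $100,000” degree route); the graduate salary plugged in below is a made-up round number for illustration, not data:

```python
def net_after_ten_years(training_years, training_cost, salary, horizon=10):
    # Crude model: zero income while training, flat salary afterwards,
    # ignoring interest, taxes, raises, and unemployment risk.
    return (horizon - training_years) * salary - training_cost

welder = net_after_ten_years(2, 2_000, 100_000)     # midpoint of the $80K-$120K union range above
graduate = net_after_ten_years(4, 100_000, 60_000)  # $60K is an illustrative assumption

print(f"welding route, 10-year net:  ${welder:,}")
print(f"degree route, 10-year net:   ${graduate:,}")
```

Of course the comparison swings on the salary you assume for the graduate, which is more or less what the disagreement above is about.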
Is this starting pay? How easy is it to get these jobs?
It is insane. As someone with several degrees, no debt, and who is doing decently on a professional level at age 30, it was my experience that we were encouraged to look at college in an insane way. Doing an apprenticeship was looked down upon at my upper/middle class public school, and I’m told in my state there is a huge shortage of people doing those jobs. My parents rail about the foolishness of taking out huge student debt, but idk what they would have said had I not been able to rely on their support, because parents’ status was ridiculously caught up in the college game, and parents truly religiously believed an education at a good school would pay off. To me it seems foolish to have paid that much at all, loan or no loan. I declined Cornell Law because it would have required crazy loans, but I can definitely see how someone would be awed into paying that for Cornell. It took me way too long to question it, and I was lucky there wasn’t much damage done in my case.
I agree the caste thing mentioned below is useful—I’m from Massachusetts. Not a WASP, family has only been here a few generations, I’m a second-generation college graduate, but the ethos I grew up in was still Brahminy, and I work with actual WASPs all the time. Their kids *have* to be “successful,” and they’re desperately trying to have the same path work for their kids that did for them, but it doesn’t work anymore because there is too much competition and other issues. It’s not a goofy major thing, mainly. My parents go on about that too but my undergrad degrees (poli sci and communications) are not meaningfully different, IMO, than a philosophy or women’s studies degree. College wasn’t really about the major, but about learning the soft skills for jobs typically taken by people in that class, and for making connections. Nor is it really a find your passion thing–my experience was “find something you’re good at and you will have status if you follow the rules.” That meant something that worked with natural strengths and wasn’t unpleasant, but not some dream situation. In reality, there are limits to the number of those jobs, and industries like media and law have been pretty wrecked by changes. College costs are up and salaries are probably down in many of them, or don’t go as far.
We sent so many people to college, at such a high cost, and the message “good career” was more tied up in having a respectable sounding position than financial security as the end game, though I don’t think that was fully understood by anyone. Few parents absorbed the changing realities, and most of the kids didn’t know better. Scrambles to preserve the status of aspirational classes’ children are always dysfunctional over time and we’ve hit that point quickly because the last few decades have seen so much change and because the Boomer experience was so anomalous and based in unrealistic hope and symbolism.
ETA: It’s easy to mock these people or just see them as entitled whiners, but real damage was done to them by the social norms and adults around them. I’m talking about a certain class of young people. They hardly have the worst lives ever, but, speaking generally, the choices they made were heavily encouraged by the schools themselves, government, and their parents. These adults still don’t admit there was/is a problem, so it can be hard to come to terms with the situation. A huge part of this is people unable to disappoint or push back at their parents and face the reality that the path they’d hoped for was always illusory.
Conan review #18: “The Black Stranger”
This is a direct sequel to “Beyond the Black River”. It also has a strange history: it’s the only Conan story rejected in Howard’s lifetime after “The Frost-Giant’s Daughter” and “The God in the Bowl”, which were submitted to Weird Tales hot on the heels of the first one published, “The Phoenix on the Sword”. There’s no evidence that “The Vale of Lost Women” was submitted for publication, and it seems like Farnsworth Wright got in the habit of never rejecting a Conan story. So the fact that this one was found in a chest of unpublished papers along with a finished rewrite into an Age of Sail pirate story, “Swords of the Red Brotherhood” (Conan turns into 17th century Irishman Black Vulmea) is strange.
In 1953, L. Sprague de Camp edited it into “The Treasure of Tranicos” so it would end linking up with the rebellion in Aquilonia that brought Conan to the throne.
Conan has been running west from Picts for a hundred miles. He takes cover from the arrows of 40 of them, and mysteriously the chief calls off the attack. Whatever refuge he’s using, they seem to have superstitious fear of it. Wolf-Picts “captured him, in a foray against the Aquilonian settlements along Thunder River, and they had given him to the Eagles in return for a captured Wolf chief”, so he’s even farther from his job in Aquilonia than the great distance he’s run.
He walks into a tunnel in his stone refuge and finds a heavy iron-bound oaken door. He’s amazed, because he’s at least 200 miles west of Thunder River and near the coast, where the Picts are too fierce for civilized people to come and build things. Then he finds iron-bound chests ranged along the walls. There are also silent figures at table. Have you guessed that he found a pirate hideout?
Elsewhere, Lady Belesa of Zingara has been living a year in a log fortress her exiled Count uncle has built on the Pictish coast, a thousand miles north of home. We’re also introduced to Tina, a freed child slave. A pirate ship appears on the horizon! Hustling inside, the Zingarans find the newcomers approaching under a flag of truce. Strom the pirate captain acts like Count Valenso has treasure and no ship to take it away in. Valenso has an archer shoot Strom, who responds by having his pirates surround the fort. They figure out how to defeat the defenders, but then another ship flying the royal Zingaran flag scares them into retreat!
This turns out to belong to Black Zarono, a buccaneer and another enemy of Valenso. The enemy is invited to table, with none of his crew inside the log wall, where he insinuates that the Count has built a log simulacrum of his castle here on the shore of the wilderness to hunt for treasure, which the Count denies, saying it was meant to be a temporary stop on his way away from the corrupt stink of Zingara’s court, and he’d go somewhere else in civilization if he could, “to Vendhya, or Khitai—” Zarono presses him, but is shocked into believing him when he says his navigator dropped anchor here for reasons he had no time to reveal before getting beheaded by a Pict.
“Supposing you to have already secured the treasure, I meant to take this fort by strategy and cut all your throats. But circumstances have caused me to change my mind—” What’s this, a double-cross story where no one’s good at lying?
So change of plans, Zarono says: I need to stay here to actually find the treasure of Tranicos, famous pirate of 100 years ago who “stormed the island castle of the exiled prince Tothmekri of Stygia,” – times like this I feel Howard is daring the reader to stop suspending disbelief in his mashup of historical periods, but he seems to carry it with conviction.
So Zarono tries to strike a bargain where they split the treasure 50-50, lift anchor, and Valenso can have Zarono’s ship when he abandons it to settle down in Zingara with a noble wife – the non-consenting Belesa. Tina interrupts to report that a very tall black man showed up on the beach in a black boat alight with blue fire, which sends Valenso into violent terror. Now Belesa is motivated to get away from her uncle with the child he hurt.
When Chapter 5 rolls around, Zarono’s ship is destroyed in an unseasonable storm. The indefatigable pirate says the two groups have 260 men left between them and a vast forest, so let’s build another. This new plan is disrupted by the other pirate ship returning, and then by Conan re-entering the story. A pirate from that ship was killed, and Strom blames one of the other two schemers, ignorant that Conan did it. Conan bursts into the negotiating room in 100-year-old pirate garb, which he put on back in the tunnel. Zarono says: “Three years ago the shattered hull of your ship was sighted off a reefy coast, and you were heard of on the Main no more.” (Would that be the Wastrel he stole in “The Pool of the Black One”?) We’re told that by this time in his life, he’s seen as “a legendary character in the flesh.”
The pirate captains telegraph that they’d kill Conan for the treasure map he now has, so he throws it in the fireplace. With the only map in his memory, he offers thus:
“We’ll split the treasure four ways. Strom and I will sail away with our shares aboard the Red Hand. You and Valenso take yours and remain lords of the wilderness, or build a ship out of tree trunks, as you wish.”
But Valenso is too terrified to stay that long. Belesa thinks all the negotiations are a farce, as all except her uncle are honorless pirates who won’t leave without the whole treasure and rival blood on their blades, and she no longer thinks much of her uncle either. For now, though, they need an intricate plan to carry the treasure out of the cave without one faction outnumbered and betrayed. Valenso is too scared to go, so they break it down as Conan, the two captains, and 15 bearers from each crew.
As they leave, Conan asks Valenso why he decapitated a Pict (or so Conan believes, since he found the Count’s necklace at the scene of the murder). Then Galbro, the Count’s seneschal, tries to decipher what’s left of the map in the fireplace…
Then Belesa asks her uncle his thoughts on all the scheming. He says Strom would murder them all aboard ship for their share of the treasure. Zarono would be honest because he wants to marry her, but has no ship, so he’ll send fishermen in the dark to overwhelm the pirate ship’s skeleton crew. Then Zarono’s men will murder Strom and Conan on the beach, hoping the former’s death demoralizes his camping pirates, and sail away to share the treasure 50-50.
He goes on to explain who the black man is:
‘In my youth I had an enemy at court,’ he said, as if speaking more to himself than to her. ‘A powerful man who stood between me and my ambition. In my lust for wealth and power I sought aid from the people of the black arts—a black magician, who, at my desire, raised up a fiend from the outer gulfs of existence and clothed it in the form of a man. It crushed and slew my enemy; I grew great and wealthy and none could stand before me. But I thought to cheat my fiend of the price a mortal must pay…’
Conan has led four pirates to the treasure cave, where Tranicos and his captains sit dead. They also find Galbro dead. There’s bluish mist in part of the cave, which they guess is deadly – just before Conan shoves them into it! They recover and Conan kills one before jumping to a ledge as the rest of the pirate crews pour in. He goes prone on the crag outside, out of sight.
‘Well, what did you expect? You two were planning to cut my throat as soon as I got the plunder for you. If it hadn’t been for that fool Galbro I’d have trapped the four of you, and explained to your men how you rushed in heedless to your doom.’
‘And with us both dead, you’d have taken my ship, and all the loot too!’ frothed Strom.
‘Aye! And the pick of each crew! I’ve been wanting to get back on the Main for months, and this was a good opportunity!’
True, each captain had a plan to kill him, but Conan still comes across as a bad guy here. He sounds like he had a good job serving the King of Aquilonia on the frontier and wanted to go back to piracy because he was bored. The pirates reconstruct Conan’s plan to get the treasure out despite the mists, but he boasts that they won’t make it back alive without his woodcraft. To punctuate the point, Picts suddenly appear, enough to keep the pirates besieged in the taboo cave until they die of dehydration. They declare truce with Conan, who uses the Climb skill on the side opposite the cave mouth to help them slip around the semicircle of warriors to their west. Of course they run away unencumbered by heavy loot.
Soon, Picts attack them at the fort (the conspirators against Conan don’t shoot him on the way in, which I find unconvincing). That night they find a man of Strom’s dead, with no Pict either inside the wall or visibly running away. Strom blames Zarono. They fight. Then the Picts do break in. Strom and Valenso are dead by the time the chivalrous Conan reaches the girls, whom he finds being menaced by the smoky, horned, pointy-eared Black Man. He finds a piece of silver furniture to throw at it, knocking the thing back into the fireplace. He gets the girls to safety out on a headland, leaving only the Picts and the Dead behind.
Conan sends smoke signals to the few people who were out of the ship, surmising they’ll make him captain because none of them is a navigator.
‘What will you do when you get back to Zingara?’ Conan asked.
She shook her head helplessly. ‘I do not know. I have neither money nor friends. I am not trained to earn my living. Perhaps it would have been better had one of those arrows struck my heart.’
He gives her a handful of rubies he looted.
I might as well leave you for the Picts to scalp as to take you back to Zingara to starve,’ said he. ‘I know what it is to be penniless in a Hyborian land. Now in my country sometimes there are famines; but people are hungry only when there’s no food in the land at all. But in civilized countries I’ve seen people sick of gluttony while others were starving. Aye, I’ve seen men fall and die of hunger against the walls of shops and storehouses crammed with food. Sometimes I was hungry, too, but then I took what I wanted at sword’s-point.
Conan the Communist?
She asks what will become of him, and he says don’t worry, he’ll be fine because he’ll be a pirate again! Yo ho ho.
I like that we have three factions plus Conan planning to betray each other for treasure in a Treasure of the Sierra Madre moral fable. I like that the plot is complicated by people under the Count having agency and not following his plans (imagine how elaborate the plot would get if each pirate crew had been given such characters too). On the other hand, Conan’s morals are hard to sympathize with (even giving up his loot to help a woman has confusing motivation), there are lapses of logic, and the supernatural being seems superficial to the tale.
If I remember correctly, in de Camp’s edit, Conan is running from betrayal by the King of Aquilonia when the first group of Picts captures him and he ends up able to recover all the treasure. That’s tighter and more surprising, given the character’s pattern with treasure.
Your thoughts?
Next OT, we’ll be looking at two very early stories, in which Conan is a king.
it was meant to be a temporary stop on his way away from the corrupt stink of Zingara’s court, and he’d go somewhere else in civilization if he could, “to Vendhya, or Khitai—”
Did anyone point out that if he intended to head to Khitai, sailing off West to the coast of Pictland was the wrong direction?
I found the story confusing because of all the different factions plus everyone planning to knife everyone else in the back. Conan plotting to kill the pirates was nastily realistic, given that they were all hoping to do the same to him and to each other, but as you say it does leave a bad taste in the mouth. Pragmatic, ruthless and yeah what a real pirate was like but we’re more used to Conan doing his killing face-to-face and being attacked first. Hanging around ‘civilised’ people has had a bad influence on his plain barbarian straightforwardness!
The ending with Belesa at least does address “so what happens to the girls when Conan and they fall out of lust and he moves on to his next adventure?” and that unlike him, unless they’re Belit or Valeria or Red Sonja, they will need someone to set them up with enough money to keep themselves for at least a while or to take them on and support them. This is why the princesses and queens either stole a private moment for the burning kisses or didn’t fall into his arms at all at the end of the story, because while it’s okay for an adventurer to love ’em and leave ’em, a woman has to maintain her social standing as respectable or else lose it all.
It works best probably as a fix-it-up explanation for how Conan went from having one ship under him to wandering around doing some jobbing soldiering to getting back into piracy again, but it’s not a favourite story of mine (and the dissonance between “This is supposed to be the Picts, who lived in what is broadly Scotland, plus Roman/Saxon/whatever European colonists, but it’s plainly a Western written in terms of North America and Native Americans” of the Black River stories gives me a headache every time).
Did anyone point out that if he intended to head to Khitai, sailing off West to the coast of Pictland was the wrong direction?
Heh, no.
I found the story confusing because of all the different factions plus everyone planning to knife everyone else in the back. Conan plotting to kill the pirates was nastily realistic, given that they were all hoping to do the same to him and to each other, but as you say it does leave a bad taste in the mouth. Pragmatic, ruthless and yeah what a real pirate was like but we’re more used to Conan doing his killing face-to-face and being attacked first. Hanging around ‘civilised’ people has had a bad influence on his plain barbarian straightforwardness!
Totally agreed, except that I liked every faction trying to strike deals predicting what oversights could get them backstabbed, with two people in the Count’s faction having the agency to undermine him.
The ending with Belesa at least does address “so what happens to the girls when Conan and they fall out of lust and he moves on to his next adventure?” and that unlike him, unless they’re Belit or Valeria or Red Sonja, they will need someone to set them up with enough money to keep themselves for at least a while or to take them on and support them.
It’s a good issue to be honest about. These women who don’t have adventurer skills would end up in terrible circumstances in pre-modern civilization.
It works best probably as a fix-it-up explanation for how Conan went from having one ship under him to wandering around doing some jobbing soldiering to getting back into piracy again, but it’s not a favourite story of mine (and the dissonance between “This is supposed to be the Picts, who lived in what is broadly Scotland, plus Roman/Saxon/whatever European colonists, but it’s plainly a Western written in terms of North America and Native Americans” of the Black River stories gives me a headache every time).
That’s fair. It seems there truly were times on the periphery of history when groups of white people were in the same position vis-a-vis civilization as 16th-19th century Native Americans* (save that they had the same disease resistances as everyone else), but calling such people “the Picts” and writing a Western about them can be headache-inducing.
*IIRC, proto-historic Spain was colonized for its silver and other metals, with land being seized and locals enslaved to increase mine productivity.
The fourth and final book I read about intelligence.
Inventing Intelligence (2012) by Elaine Castles
This is mostly an “Anti-IQ” book. She believes that IQ is a good clinical tool to help individuals understand their strengths and weaknesses, but not a good way to rank people on their intellectual abilities or to use to make education or employment decisions.
The first two-thirds of the book covers the history of IQ testing, starting with various kinds of testing in the 19th Century. It was somewhat interesting to read the history, although she often inserts snide remarks about bias in these old tests, comparing them to current-day IQ testers. She implies throughout this section that IQ testing, and even merit itself, are not good ways to measure people.
Finally, in chapter 10 she talks in more detail about her own skeptical view of IQ testing. She sometimes includes the point of view of “pro-IQ” advocates, but she then states as gospel the results of one cherry-picked study that lines up with her own beliefs. It is true that both the pro-IQ and anti-IQ sides cherry-pick the studies they present, but it appears to me that the anti side does this a lot more prolifically than the pro side. In all the anti-IQ books and articles I’ve read, roughly the same half-dozen studies show up in every one. I don’t see this effect in the pro-IQ books and articles; I think this is because there are a lot more studies on their side. Maybe this is my priors speaking, but that is what I see. And this book is very much in this direction.
Positions she states:
1) IQ tests do not measure creativity or out-of-the-box responses
2) Anxiety, depression, curiosity, impulsivity, distractibility, and motivation all affect IQ scores
3) IQs account for only 25% of grade variation
4) Self-discipline, ability to delay gratification, belief in utility of one’s efforts affect grades more than IQ
5) IQ explains only 4-18% of income
6) <4% of delinquency/crime explained by IQ
7) <2% of divorce and unemployment explained by IQ
8) The adaptive view of IQ is multi-faceted and a better measure of a person than psychometric testing
a) Reasoning abilities, social competence, creativity, and problem solving do not correlate highly
b) One example of adaptive testing explained college grades as well as the SATs did
c) Although she admits these facets are hard to measure
9) She suggests that heredity might contribute only 30-40% to IQ
10) She believes White-Black gap is all environmental, for mostly the same reasons as Nisbett does, although Castles emphasizes discrimination more as the environmental explanation (Nisbett talks about the Black culture as a problem).
11) She doesn’t believe IQ selects well for college admissions or jobs.
I think she is mostly wrong in these comments, such as for #6, #7, #8a, #9, #11, and maybe #3 and #5. I think she uses a cherry-picked study for each of her beliefs. She seems to be avoiding all the studies that show that many of these attributes do highly correlate with each other, and that they all correlate with education and job performance. Of course, what is considered a high correlation is a judgment call.
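One arithmetic note on the “percent of variation explained” figures in that list: assuming they are the usual variance-explained (r-squared) numbers, translating them back into correlations helps calibrate how big they really are (25% of grade variation corresponds to a correlation of about 0.5, which is large by ordinary social-science standards). A quick sketch:

```python
# Convert "share of variance explained" back into a correlation coefficient.
# The percentages are the ones listed in the review above.
import math

variance_explained = {
    "grades (25%)": 0.25,
    "income, low end (4%)": 0.04,
    "income, high end (18%)": 0.18,
    "delinquency (under 4%)": 0.04,
}

for label, r2 in variance_explained.items():
    print(f"{label}: r = {math.sqrt(r2):.2f}")
```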
You’ve been substituting ‘heredity’ for ‘heritability’ since the very first post of this series, both when relaying estimated heritability percentages and in responding to commenters who themselves correctly used ‘heritability’.
I didn’t point this out when I saw the first instance because there were enough knowledgeable others that I assumed someone else would. But while several people did try to explain what heritability is – the proportion of the variation in a trait within a given population explained by variation in genetics – it appears that no one ended up explicitly pointing out your mix-ups.
And, though it’s hard to say for sure since I can’t see inside your head, I think this confusion is likely more than just verbal. For instance, if one really understands what it means for something to be heritable, it’s hard to see how one could get the mistaken impression that heritability figures give an upper bound for what environmental changes can result in.
The mix-up here might be the same one that Ned Block pointed out in response to The Bell Curve originally, between two different senses of something being “genetically caused”.[1] Sentences like “She suggests that heredity might contribute only 30-40% to IQ” certainly suggest the reading of ‘genetically caused’ that is not captured by ‘heritability’.
[1] As usual of late, the spam filter won’t let me get away with even a single link, but you should find a relevant article by googling “how heritability misleads about race”.
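Since the definition keeps causing trouble, here is a minimal simulated illustration (all numbers invented) of heritability in exactly the sense given above, i.e. the share of trait variance in a particular population that is statistically attributable to genetic variance; it also shows why the figure is population-relative rather than a fixed property of the trait:

```python
# Simulate a trait as genetic component + environmental component, then
# compute heritability as Var(genetic) / Var(trait). All parameters invented.
import random
import statistics as stats

random.seed(1)
n = 10_000
genetic = [random.gauss(0, 12) for _ in range(n)]
environment = [random.gauss(0, 9) for _ in range(n)]
trait = [100 + g + e for g, e in zip(genetic, environment)]

h2 = stats.variance(genetic) / stats.variance(trait)
print(f"heritability in this population: {h2:.2f}")   # roughly 0.64

# Same genes, narrower environmental spread: heritability goes up even though
# nothing about the genetics changed.
trait_2 = [100 + g + 0.5 * e for g, e in zip(genetic, environment)]
print(f"with halved environmental spread: {stats.variance(genetic) / stats.variance(trait_2):.2f}")  # roughly 0.88
```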
For instance, if one really understands what it means for something to be heritable, it’s hard to see how one could get the mistaken impression that heritability figures give an upper bound for what environmental changes can result in.
You need to explain this further. There was someone in a previous thread who stated pretty much this, except they explained it more fully. I responded that yes, technically there isn’t a cap if you can come up with some environment outside what exists today that increases intelligence more than what we have now. But practically speaking there is a cap, until someone comes up with such an environment. Based on current environmental variation, the environmental % of IQ is the cap for how much we can increase IQs. Please explain why, if you disagree with this.
Perhaps I should say heritability instead of heredity, but I think it is just semantics. At this point I don’t get your issue.
Yes, I am mainly talking about the issue you bring up here. But phrasing it as about “coming up with” a way to develop an “environment outside of what exists today” makes the issue sound far more sci-fi than it really is. For one thing, it calls to mind the image of creating a new environment in its entirety (say, placing kids in an experimental bio-dome), when in reality small additions to the current environment count too.
But even things that already exist in the measured environment can have an impact greater than heritability numbers suggest. As long as some IQ-affecting feature of the environment isn’t very common currently, it won’t affect variation in intelligence very much, so by the same token it won’t affect heritability measures much. But if the feature became more common it could have a much larger effect.
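To put rough, made-up numbers on that point (a 10-point effect and a 15-point IQ standard deviation are purely illustrative assumptions):

```python
# A binary environmental factor contributes roughly p*(1-p)*effect^2 to trait
# variance, so a rare factor barely registers in heritability estimates even
# if its effect on the people exposed to it is large.
def variance_share(prevalence, effect_points, total_sd=15):
    factor_var = prevalence * (1 - prevalence) * effect_points ** 2
    return factor_var / total_sd ** 2

print(f"1% prevalence:  {variance_share(0.01, 10):.1%} of IQ variance")   # about 0.4%
print(f"50% prevalence: {variance_share(0.50, 10):.1%} of IQ variance")   # about 11%
# At 100% prevalence it contributes no variance at all, just +10 points to the mean.
```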
The Flynn Effect supports the idea that we have a limited understanding of many of the environmental factors that have been capable of increasing IQ. So I don’t think we should be surprised if future ones do so in ways we don’t have a clear sense of yet.
As a side note, looking back at my original comment I think I could have done more to blunt its harshness, especially considering that your series of posts exemplifies a type of content I’d appreciate seeing more of in these threads. So thanks for the effort you put into them.
First of all, thanks for that last comment. The original one was a bit harsh, but I mostly disliked it because I didn’t think I really understood what you meant, and I hate it when I am arguing something and there is no meeting of the minds. So I am very relieved that you meant what I guessed you meant. 🙂
But phrasing it as about “coming up with” a way to develop an “environment outside of what exists today” makes the issue sound far more sci-fi than it really is.
Yeah I do think it is kind of sci-fi. Education theorists have been working intensively on the problem of increasing intelligence for decades and haven’t come up with much. I don’t think there are environmental solutions out there that will increase intelligence significantly. It appears to me that the best way to maximize the intelligence of a person is to immerse them in a highly intellectual society where parents, peers, and teachers all reward cognitive thought and problem solving. But this is the current environment of some people, so I think it does fall under the current cap of how much is environmental.
I believe that the only way to significantly increase intelligence will be to work on the heredity portion, that is, to directly improve the genetics of individuals. I guess this could be done by some kind of selective breeding (which makes me nervous if the government has control), or by gene splicing (which makes me nervous in a different way). Neither one of these is foreseeable in the near future.
If we are to make a significant effect on the environment, someone has to come up with some radical new technique, so definitely what is now sci-fi.
Yes the Flynn effect is hard to explain, and the existence of it does somewhat decrease my confidence in any of my judgments. But my best guess is that most of this effect is NOT an increase in intelligence, but simply an increase in the ability to take IQ tests. IQ tests never have and never will test intelligence precisely. But I think as schooling increasingly reaches the poorest areas, it’s getting to be that almost everyone is used to taking tests. Thus test-taking skills have definitely increased over the decades. This is not an increase in intelligence itself, but does result in IQ test increases. And I think IQ tests are becoming more accurate as the population is becoming more equal in their experience in taking tests. The Flynn effect may reflect some increase in intelligence, but I think it is mostly test-taking skills.
I think there are a few environmental interventions we know that will actually raise average IQ, without either some kind of selective breeding of humans or genetic engineering of humans. Those mostly don’t apply to middle-class-and-up Americans, but they’re a big deal in much of the world:
a. Proper sanitation, including mosquito control. Kids that spend a lot of their childhood sick, or that are sharing their meager food supply and brain development budget with a bunch of parasites, are not going to develop their brains as well as kids without those handicaps.
b. Getting lead and other environmental toxins out of the environment. In the US, poorer people tend to be more exposed to lead than richer people, which probably explains some of the difference in average IQ between poor and rich kids. If I were king, we’d be spending about 10% of the military budget on lead abatement.
c. Nutritional supplements (stuff like iodized salt) prevent deficiency diseases that stunt brain development. It’s possible that vitamin D deficiency affects blacks more than whites and explains some of the IQ difference, so this isn’t 100% a third-world problem. But mostly, rich countries have already done this stuff and reaped the rewards.
d. Extra years of school seem to raise IQ a little bit later in life, so making sure everyone goes to school for many years probably helps. It seems plausible to my amateur mind that this actually represents an increase in intelligence (giving you an intellectually demanding environment for another couple years when your brain is developing might help), but it also seems plausible that this is just leading people to be better at taking tests and so raising IQ scores without raising actual intelligence.
I also suspect there are ways we can raise average IQ that may also have a big uneven effect. IQ becomes more heritable[1] as you get older, and it seems likely that this has to do with self-selected intellectually complex/simple environments. Smart people tend to spend more time in intellectually engaging pursuits, and this has some kind of effect on them once they get out into the world and their parents are no longer providing their environment. So offering more intellectually engaging/demanding things to do with your time, and more intellectually demanding environments, probably makes it easier for smart people to self-select into a more stimulating environment. Alice never cracks a book after high school and watches TV/plays on Facebook when she’s not at her job driving a school bus; her adoptive sibling Bob reads a couple books a week and plays internet chess for fun when he’s not at his programming job.
[1] That is, imagine looking at the set of 15 year olds and the set of 30 year olds, where they were all raised in adoptive homes. When you look at 15-year-olds, less of the variation in their IQs is explained by their parents’ IQs than for 30-year-olds.
@albatross.
My comments were about the US. There is clearly much potential for increasing intelligence in 3rd world countries by improving conditions there.
Improving conditions in the US can also increase intelligence, but that’s where we run into the cap, where the percentage due to heredity cannot be changed without some dramatic new technology, whether in the environmental area or in improving genetics. And in fact I believe the US is continually doing this, which explains the Flynn effect (both the IQ-testing improvements and true intelligence improvements). Lead has been dramatically decreased (isn’t that the usual explanation for decreasing crime since the ’90’s?). And schooling now reaches far into rural areas where it previously did not. Those are good things, but I’m not sure the current improvements can be accelerated.
Your mission, should you choose to accept it, is to change the lyrics to Acadian Driftwood so that it’s about Akkadians.
Warning: it’s an automatic loss if you can’t keep “what went down on the Plains of Abraham” unchanged.
Complaint about delivery of the wrong grade of copper
— prestigious museum
He wasn’t just into copper trading. There are letters complaining about Ea-nasir’s business practices with respect to everything from kitchenwares to real estate speculation to second-hand clothing. The guy was everywhere.
It’s the bones of 4,000 years but human nature never changes 🙂
“You stiffed me on the delivery and then the material you did send was crappy quality! Okay, so I still have an outstanding bill on account with you but that’s beside the point – this is terrible customer service and I will be writing a STRONGLY WORDED LETTER OF COMPLAINT!”
Sitcom guest star: Laban, Jacob’s father-in-law! “Oh, I’ve got this relative who’s come to me after running from his brother. And he volunteers to work for room and board for seven years to marry my more-beautiful daughter? Well, why don’t I give him the less-beautiful one instead and save the more-beautiful one for some other deal!”
The ongoing sub-plot over the series is these two both trying to out-manoeuvre the other, because they’ve got that family resemblance of being just that bit too clever for their own good.
You can imagine Laban: “Look, I’ve got two unmarried daughters, eldest girl is a lovely girl, lovely girl. Not a looker, no, that’s her sister, but a lovely girl all the same. Make a great wife for any man. Not her fault all the young guys are only interested in hot chicks, you know?
Now this nephew of mine – oh, what a trouble maker! His mother’s fault, she spoiled him. Well that’s my sister for you, always has to have her own way. Anyhow, he gets up to some shady business with his elder brother and she steers him my way to keep him out of trouble. Must think I’ve gone soft in my old age, eh?
Ha ha, no, I know a trick worth two of that! My nephew, such a smooth-talker, thinks he’s pulled the wool over old uncle’s eyes. Well, he wants to marry my daughter, I’ve got two lovely daughters like I said – one is as good as the other, even better, right?”
Disclaimer: I’m not a psychotherapist, nor do I play one on TV.
Overall I came away impressed and hope to see more research like this done. That said, I’m a little concerned that there was no mention of the idea that the same issue, such as depression or anxiety, might have different root causes in different people, and that different therapies might be appropriate depending on the cause.
A similar issue seems to hold just for psychiatric medications: there are a ton of antidepressants, and finding which one will work best for you with the fewest side-effects is basically trial and error.
If treatment A helps 80% of the population and treatment B helps 20%, giving everyone treatment A is certainly better than flipping a coin to decide which treatment to give. I worry, however, that making it a general rule “use A, not B, since it’s better for more people” could morph into “never, ever use B, even when A isn’t working.”
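A toy expected-value sketch of that worry, using the hypothetical 80%/20% figures above and assuming responses to A and B are independent (which real-world response rates certainly are not):

```python
# Compare three prescribing policies under made-up, independent response rates.
p_a, p_b = 0.80, 0.20

always_a = p_a
coin_flip = 0.5 * p_a + 0.5 * p_b
a_then_b = p_a + (1 - p_a) * p_b   # switch to B only if A fails

print(f"always A:             {always_a:.0%} helped")
print(f"coin flip:            {coin_flip:.0%} helped")
print(f"A, then B if needed:  {a_then_b:.0%} helped")
```

So “A first” is a sensible default, but a rule that forbids ever trying B leaves that extra few percent on the table, and more than that if the people A fails are disproportionately the ones B would help.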
Thanks to Moore’s Law, did the SETI network’s computers gain the upper hand on analyzing the huge trove of radio recordings faster than the trove could grow, and did the last significant chunk of data recently get analyzed?
I was going to help out with SETI back when, but I asked a friend who’s expert in astrophysics – “If aliens on a planet around Proxima Centauri were running SETI, could they detect us?” She said no.
The Wait for Extra-Terrestrial Intelligence’s programme of checking the news every now and then to see if ‘Aliens arrive’ is the top story is still going strong.
In a recent episode of Star Trek Picard, the heroes had to fight with an old, decommissioned Romulan space ship that a private warlord had somehow obtained. He and his crew used it to dominate several planets in a lawless and poor part of space.
Have there been any real-life examples of something like this from 1900 onward? I’m imagining someone like the leader of a powerful group of insurgents or a drug lord acquiring a retired warship and using it to control an island or a stretch of coastline.
I don’t know of anything 1900s onward, but in the 1830s James Brooke leveraged his control of a schooner with cannons into eventually becoming the King of a section of northwest Borneo. I mean he didn’t conquer it outright (he got the Sultan who controlled the area to give it to him as a vassal, and then eventually became independent), but without that boat and the cannons it probably wouldn’t have happened. That’s the best example of a random dude using a warship to outright rule an area which was poor and technologically behind.
There are other people who really should be the ones to answer this, but they haven’t, so I’ll just say the thought I had: something tells me that as you get into the era of steam-powered warships, the fuel demands and complexity of operation will mean that the ships are somewhat useless without the logistics of an actual navy behind them.
FLWAB already brought up the example of James Brooke, but note that he was semi-officially representing the British Empire at the time.
Really, if you were going to do this sort of thing entirely on your own account, you wanted to have done it and joined up with a proper Empire by the end of the eighteenth century. See also the history of Belize, for how to do that right. Once the Napoleonic wars came to an end, the great colonial empires had large and capable navies and nothing better to do with them than to check up on pretty much every habitable place on earth, regularly enough that nobody was going to build a mini-empire without their notice. They were also there to suppress the slave trade, and just about anything you can do to build an empire on the basis of “I’ve got cannon and you wogs don’t” is going to look kind of like slavery.
So, it doesn’t work any more unless at least one major empire wants it to work and wants it strongly enough to lean on the other major empires to back off and not just claim the place for themselves.
Over the next few generations we get steamships, telegraphs, crusading journalists, and bureaucrats numerous and capable enough to directly administer every Empire’s territories without having to outsource the work to freebooters like Brooke.
There are a number of Soviet weapons systems floating around the Middle East in the hands of non-state actors. ISIS operated Scud missiles, a surface-to-air missile, and maybe some MiGs.
Nothing with the scale or complexity of a warship, though, and it’s not an asymmetric threat like you describe. Weapons, including very powerful ones, are just plentiful.
In the “News Articles I didn’t Expect to Read in 2020” category, the FDA has banned schools from using electric shocks as punishment.
I had no clue this was even a thing, but apparently there’s one school for the disabled in Massachusetts that uses small devices that deliver painful shocks as punishment. That being said, all the students who wear these devices are adults, it’s only done with the family’s consent, and the article cites at least one person who seems to have benefited from the use of these devices.
That being said, you know, it’s still a system whereby officials can press a button and shock people; if this were a prison, or the people being shocked weren’t disabled, I’d be 100% against this device, so I see no reason to change my previously held “don’t shock people as punishment” bias.
Is there any reason I shouldn’t just chalk this up as an example of the FDA getting something correct?
If the only place in this country that is considering or would ever consider using this practice is that one school in Mass, it does seem like overkill to use the power of the federal government to end it. On the other hand, if this is a practice that had the potential to spread AND is not medically sound, I’m perfectly fine with the FDA stepping in and nipping it in the bud.
I don’t know enough about the topic to decide which option is more accurate.
Sounds to me like the practice IS medically sound, the problem is that the school did the thing institutions do and started using the zapper at the first sign of hesitation to comply rather than only in the most extreme cases.
If there’s only one school doing it, and it’s done for a few specific people with major problems, then I suspect it’s going to be hard, from the outside, to tell whether this was:
a. A barbaric horrible practice that got accepted and normalized there and persisted way too long. In this case, the action is good and the students/patients will be better off as a result.
b. A genuinely helpful and humane practice that looks horrible and pattern-matches to cruelty to the helpless, and so it’s being shut down by uninformed do-gooders. In this case, the action is tragic and the students/patients will be worse off as a result.
I mean, my first guess is “barbaric horrible practice,” but I have to acknowledge that I don’t know enough to be entitled to a strong opinion here. And many things that look needlessly cruel end up actually being the best thing anyone knows to do–think of chemotherapy for cancer patients, or basic training for soldiers, or letting toddlers fall down and skin their knees and occasionally break a bone rather than keeping them pent up inside wrapped in cotton.
Is this the Judge Rotenberg centre?
Founded by a colleague of B. F. Skinner, so I’m not surprised they’re so big on operant conditioning.
The JRC deliberately modifies the electrodes to deliver more power for a longer time. (up to 45mA RMS for 2 seconds)
Multiple packs are often applied to the same pupil.
The shock units are used in the bath/shower, against manufacturer instructions.
Shocks are administered for very minor infractions.
From what I’ve read, the highest praise I can give to the staff is that they have the moral high ground compared to the SS-TV and NKVD.
My big question for you would be why you are against shocking people as a punishment or training method in the first place. If it were to turn out that you are just against physical punishment in a general sense whether it’s effective or not and no matter how it’s used, I’m not sure how you’d be convinced.
If we look at your second-to-last paragraph, the words “That being said, you know,” are meant to dismiss the parts you’ve already said about this system perhaps being effective and good. And after dismissing them, you then ask for OTHER reasons to convince you it should be kept, while saying you think electroshock is bad in a fundamental, inherent way.
That’s not a fair way to ask to be persuaded on this. It rephrases to something like “Effectiveness and examples of this helping people are assigned a near-zero value. You now have to convince me this is a net positive without using the most effective evidence available. Also, I have a visceral reaction against this particular practice”.
The most effective arguments for this are going to look something like “You yourself brought up a situation where this is helping people. Unless you fetishize pain avoidance or shocks-as-evil, your own words indicate this shouldn’t have been banned”. That argument may or may not be right and electroshock may or may not be possible to use in a net-beneficial way, but if you indicate in your question’s construction that the above argument is hand-waved from the get-go, I’m not sure you are going to get the vigorous defense you want.
I’m not the guy to provide that vigorous defense in the first place, but I know I wouldn’t try in this particular thread for the reasons above.
When someone starts talking about “video games,” which are you more likely to immediately think of?
1. A narrative experience (think single player, high amount of storyline, clear progression)
2. Repetitive entertainment (think multiplayer, high amounts of repetition, storyline/progression unimportant to nonexistent)
I know what you mean, but I think saying that “storylines emerge” in Civ is putting it rather generously, as that relies pretty heavily on the player’s active imagination. Those games are definitely designed to be played multiple times.
When I first tried to learn to play Civ, I was probably in elementary school, and for some reason I got it in my head that “irrigate the entire landmass of Eurasia” was a productive goal…
That reminds me of playing Simcity with the cheat code, where the first thing I do after getting unlimited funds is plop down a 9×9 grid of railroad across the entire map to minimize pollution and maximize efficiency.
I’m sure Nick will revoke my Trad card for that crime against organic expression of communal knowledge.
@Randy M
I’m updating the membership lists as we speak.
I’ve actually played lots of Simcity and other city sims, and believe me, much ink could be spilled at request on the ways they inculcate unrealistic assumptions about traffic and modern cities in general, with the result that they turn our kids into little traffic managers instead of artists. This abominable crime I lay at the feet of Will Wright.
Eh, both about equally.
What I’m not thinking about is all the mobile or flash games that somehow seem to occupy a lot of market/mind share with the public at large.
Also, there’s good single player games with high repetition and low storyline; roguelikes and such. There have been multiplayer games with story and progression, but usually moreso as an option on a single player game, like Baldur’s gate.
Also, there’s good single player games with high repetition and low storyline; roguelikes and such.
Those go in bucket 2.
Basically I’m trying to craft a categorization scheme that doesn’t depend on single/multi player (although will be highly correlated with it).
What I’m not thinking about is all the mobile or flash games that somehow seem to occupy a lot of market/mind share with the public at large.
So what do you think goes in bucket 2 that clearly isn’t this? If nobody thought of Tetris until 2015, it would have been a free to play mobile game. Just saying.
Basically I’m trying to craft a categorization scheme that doesn’t depend on single/multi player (although will be highly correlated with it).
I got ya covered, fam. (Do us young people still say “fam”?)
Linear–Gameplay is delivered according to a pre-programmed, sequential progression, usually tied to a narrative, however loosely. Upon completion, there will be no new content. Fairly light on emergent properties. Variations include the Sloped (gradually increasing difficulty) and the stair-step (new features unlocked periodically). Examples would be adventure games like Myst, platformers like Mario, etc.
Cyclical–Gameplay occurs in a series of matches. Content is open-ended and may have many emergent aspects as various systems interact. Narrative likely exists on a background level but does not really drive the gameplay in any meaningful way. See Real Time Strategy, digital trading card games, arena shooters, etc.
Hybrid: Daisy-Chain: Gameplay consists of a series of matches with increasing options and challenge, which may be distributed according to a narrative, culminating in some kind of final match. See the single-player campaigns of real-time strategy games or shooters (if the missions aren’t so tightly scripted as to make them essentially Linear), the tutorial mode of digital trading card games, Tactical RPGs, and JRPGs like Final Fantasy.
Spiral: Gameplay is essentially a series of matches, but new options are gradually unlocked. However, matches are not chained together with a wider narrative and gameplay options may be added in a variety of ways. See Roguelikes, such as Slay the Spire or Risk of Rain.
Forked: Mostly Linear, but with enough branching choices that multiple playthroughs are required to see all the content.
Sandbox: Game is structured in independent matches with little or no overarching narrative, but the matches last arbitrarily long, allowing the player to continuously develop the world. Examples: Skyrim, Dwarf Fortress, Minecraft, Factorio.
I’m most likely to think of #2 – specifically the arcade-type games.
Is your categorization scheme intended to encompass computer games as well? Minesweeper, Jezzball, online chess and bridge, Angry Birds, 2048, the various balloon popping/space filling games…
Personally, the only video game I ever played heavily was Minecraft, which is… 2, I guess.
In general, when someone says “video game,” a picture of a first-person shooter and the words “Call of Duty” are probably the most likely to be first in my head, and… I’m actually not knowledgeable enough to know which type that is, but I’m guessing 1.
I think of a song by a hot duck-faced woman. I don’t think I’ve ever referred to computer games as video games, and I’m not sure anyone I know has either.
False dichotomy. The best of narrative gaming isn’t cinematic at all, it’s sneaking up on the player with its themes/story delivered via gameplay. Universal Paperclips, Braid, and Touhou blur between the categories. And then you have games that openly do both, such as LoZ Four Swords, GTA, or Don’t Starve.
This is the craziest market in my lifetime, and I am 40, so I sort of knew some things about what was going on in 2000 and was following reasonably closely in 2008. The big thing right now, as far as I can tell, is how Treasury yields are acting. In 2008 there was a massive drop in the 10 year yield from early November into December with a drop from ~3.9% to ~2.1%, a 46% decline in yields. The drop right now, since December, is from ~1.9% to ~0.7%, a 63% drop, and it might not be finished yet.
What is also interesting, verging on terrifying, is that the yield crash of 2008 came second: the S&P was down 33% from June by early November, and ~37% off the late-2007 highs (yields fell from over 5% in 2007 to that 3.9%; the overall decline from peak to trough was from ~5.1% to ~2.1%, a 59% decline over almost 18 months, which is still less than the 63% decline over the last 3 months*), but stocks are only 10-15% off their highs right now.
*Yes, bond yields aren’t great for comparisons across time, I’m just trying to highlight how large this move is in really any framework, and this comp is vs the GFC**.
** Speaking of the GFC, is this going to be like ‘The Great War’ that had to be renamed WW1 just a couple of decades later? The Great Financial Crisis might end up looking like trench warfare vs the blitz.
My shorts are a small portion (~25) of our portfolio. I don’t really worry about risk-adjusted gains here; I’m more looking for outsized gains.
My big move in our primary portfolio was to go 100% T-bonds back in October, so if I counted risk adjusted I would be even happier.
@ The Nybbler-
I don’t consider this day trading; my average day the past two weeks has probably had 0.5 trades executed, and this is the most active I have been. I am trading options, which requires certain day-trading-like behavior.
If you are comparing bond yields, I’m not sure that % change in yield is the most appropriate measure, particularly since we’ve seen that 0 is not a hard lower bound. I think raw change in yield is a little more representative of how much things are changing? Better is probably a blend somewhere between.
0 is still a soft lower bound; the negative rates were heavily subsidized, with central banks granting concessions that made them close to zero in most scenarios for the actual bondholders.
Even if we are talking raw yields, the 1.2% drop is ~2/3rds the size of the largest similar-length drop during the great financial crisis.
> December with a drop from ~3.9% to ~2.1%, a 46% decline in yields. The drop right now, since December, is from ~1.9% to ~0.7%, a 63% drop, and it might not be finished yet.
I don’t think it makes much sense to look at fractions of interest rates like that. Interest rates are not like stock prices. Imagine what you would be saying if the rates went negative. An infinite decline? A more useful view is that they went down 1.8% before and they’ve gone down 1.1% now.
> Speaking of the GFC, is this going to be like ‘The Great War’ that had to be renamed WW1 just a couple of decades later? The Great Financial Crisis might end up looking like trench warfare vs the blitz
So far, there’s no reason to think this will be anywhere near as bad as the Panic of 2008.
I don’t think it makes much sense to look at fractions of interest rates like that. Interest rates are not like stock prices. Imagine what you would be saying if the rates went negative. An infinite decline? A more useful view is that they went down 1.8% before and they’ve gone down 1.1% now.
On the market, long-term bonds act like stocks, with capital gains when yields move. If I have a bond yielding 2% and yields drop to 1%, my 2% bond has increased in value, and that increase will be (not linearly) proportionate to the percentage change in the coupon.
Yes there are better formulas than straight % declines, and I also noted the absolute decline as well. The point is that I am comparing it to one of the steepest yield drops in history which highlights how insane this action is.
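For anyone who wants the two measures side by side, here is a minimal sketch (my own illustration, using the approximate yields quoted above rather than exact market data) of proportional decline versus decline in raw percentage points:

# Sketch: two ways of measuring a move in the 10-year yield, using the
# approximate figures from the comments above (not exact market data).
def proportional_decline(start, end):
    # Decline as a fraction of the starting yield.
    return (start - end) / start

def absolute_decline(start, end):
    # Decline in raw percentage points of yield.
    return start - end

episodes = {
    "Nov to Dec 2008": (3.9, 2.1),
    "Dec 2019 to Mar 2020": (1.9, 0.7),
}

for name, (start, end) in episodes.items():
    print(f"{name}: {proportional_decline(start, end):.0%} proportional, "
          f"{absolute_decline(start, end):.1f} points absolute")
# Nov to Dec 2008: 46% proportional, 1.8 points absolute
# Dec 2019 to Mar 2020: 63% proportional, 1.2 points absolute

Whether 46% vs 63%, or 1.8 vs 1.2 points, is the more meaningful comparison is exactly the disagreement above.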
The market’s going utterly nuts responding to rumors and fears rather than data and reasoning. Even more so than it usually does. IMO trying to figure it out short term is an impossible task unless you have some incredible instincts.
The thing about 2008 is it wasn’t just crazy; there were fundamental economic problems it was responding to.
There are the sort of economic issues which have caused 10 out of the last 2 recessions, certainly. But the current market craziness has little to do with that.
It’s no longer growing exponentially in China because the factories are closed. Maybe they can reopen them and not have it grow exponentially. We haven’t seen that yet.
We are in a supply shock and a demand shock. And central banks are cutting rates (Fed) and buying equities at a record pace (BoJ). PRINTING MONEY DOES NOT BOOST THE ECONOMY WHEN FACTORIES ARE IDLING AND PEOPLE AREN’T BUYING THINGS – I want to scream this into the void.
Central banks have completely surrendered to the markets and are targeting asset prices.
That’s how you get inflation. See John Law and the Mississippi Company.
Lower rates do help. Companies facing temporary revenue shortfalls due to (say) supply chain disruptions will use credit to maintain solvency, avoid layoffs, etc. until their business improves again.
Even if the shortfalls are temporary, they have to be short and transient. For this to work, demand has to pick back up not just to previous levels but enough to fill in the gap from the shortfall, AND businesses have to not significantly adjust now that they have experience with supply chain disruptions.
There is also no real reason to avoid layoffs with the financing available. Cheaper to furlough and borrow than keep everyone on the payroll.
It’s probably true that layoff and rehire is cheaper for unskilled workers, but not for skilled ones. It takes a long time for a new hire engineer at someplace like NVIDIA to reach full productivity.
But anyway the more general point is that credit helps companies avoid being forced to take short-term actions to survive that are bad for the long-term health of the business. Layoffs of difficult-to-replace employees, sales of critical assets, closing down long-term projects that aren’t revenue-producing yet, bankruptcy, etc.
It’s probably true that layoff and rehire is cheaper for unskilled workers, but not for skilled ones. It takes a long time for a new hire engineer at someplace like NVIDIA to reach full productivity.
This is no different from bankruptcy due to a short term disruption. Your entire labor force isn’t getting fired on the day you file.
For a rate cut to positively impact employment it has to maintain more jobs than what would occur without the rate cut. Your reply above largely implies that the main way it would do this is by maintaining solvency, so the base case would be how many employees would be laid off under bankruptcy without rate cuts vs whatever happens with rate cuts.
My point is what I said, “credit helps companies avoid being forced to take short-term actions to survive that are bad for the long-term health of the business” and layoffs were an example. Bankruptcy is another example and it’s extremely costly.
Also, companies will definitely do some costly layoffs if it lets them avoid the costs of bankruptcy.
My point is what I said, “credit helps companies avoid being forced to take short-term actions to survive that are bad for the long-term health of the business” and layoffs were an example. Bankruptcy is another example and it’s extremely costly.
This is specific to an individual business; it has to extend to being good for the overall economy, which includes those surviving businesses that would be able to buy the bankrupt ones cheaply and start expanding again.
There is little reason to think that an increase in liquidity via a rate cut will actually lead to maintaining employment levels in the face of a supply chain disruption. If the disruption is short enough to keep people on staff without them having productive work to do then a bridge loan really shouldn’t be needed. If the disruption is longer then the bridge loan won’t prevent layoffs. All that a rate cut should do is to increase the incentive to borrow under uncertainty about the outcomes, which means the Fed is encouraging leverage during a crisis, which is literally the exact opposite thing from what should be prescribed from a theory perspective.
No it is not specific to the company. Bankruptcy imposes real costs, it is not merely transfer of ownership.
You are assuming the conclusion. The longer bankruptcy is put off, the more debt the eventually-bankrupt company owes, and the lower the rates on that debt, the worse the bankruptcy is and the more costs it imposes. If the virus is increasing the likelihood of bankruptcy then borrowing companies should be facing higher, not lower, rates.
Therefore your position has to be that the lower rates will permanently reduce the likelihood of bankruptcy, which means that the companies will have to cut costs (as they have lower revenues), which will include layoffs, which (according to Keynesian/monetary theory) will cause less spending and require more rate cuts.
Bankruptcy costs are certainly not linear in debt; there is a massive step function as you declare bankruptcy, as everything about the company becomes more uncertain since it is being trusted to the court system.
In the long run it is certainly true that lower revenues mean that ultimately costs need to be cut relative to what they were (if we take an expansive view of “costs” to include things like dividends) but credit allows companies to better choose the nature and timing of those cuts. Cutting costs massively and hastily this quarter is much more damaging than spreading the cuts over the next two years.
I’m quitting from this conversation now, I’m not getting anything out of it anymore and I suspect you aren’t either.
I’m quitting from this conversation now, I’m not getting anything out of it anymore and I suspect you aren’t either.
Ok, this’ll be my last reply then.
You are still only working on one side of the equation. If you need loans to prevent bankruptcy, you need higher, not lower, interest rates to get people to actually make the loans. According to standard monetary theory (which I think is wrong, but is what CBs act under, more or less), when you cut rates in the face of a demand shock, the lower rates act as a stimulus for demand by reducing the opportunity cost of consumption, making it more attractive. In the face of a supply shock, the rate cut doesn’t improve the actual supply conditions, which means rates should be moving higher to increase effective liquidity, not lower. Lower rates should lower liquidity, making bankruptcy likelihood higher, not lower*.
*There actually is some evidence this past week of this happening, fed repo operations were oversubscribed by the highest ratio after the 50 basis point cut this past week.
I’m not getting anything out of it anymore and I suspect you aren’t either.
I actually am. I need to refine my thinking on this while the markets are calm… which has pretty much only been on weekends recently. This is the biggest opportunity to make outsized gains for me* in my lifetime to date, and to have a chance at nailing it I am going to have to be sharp. These types of disagreements work for me for that.
*That is considering the capital I have to use and my experience and knowledge of markets as well as the market conditions.
I don’t see this as being particularly crazy. It’s easier to explain to a child than the typical economic crisis: there’s this thing called coronavirus [assuming that it’s the cause, and it’d be quite the coincidence if it weren’t], it’s causing factories to temporarily close, which means the companies owning these factories are not able to make things and sell them and make money, and the people who own these companies are getting less money.
Generally agree. As far as I can tell, what’s been happening the last few days is that the market has been struggling to decide whether the virus is going to result in significant and sustained economic disruptions, or not.
One day, the “yes, it will” faction wins out and prices fall 3% or so, and the next day the “no it won’t” faction regains momentum and prices rise by 3% or so.
This ignores that CBs have stepped in with aggressive measures after the down days, and also that markets basically ignored the possibility of it being serious for a couple of months first before it became ‘unsure’.
Oh, and also that bond yields have been falling since early January and there have been many large down days for yields and almost no big up days, so the bond market is not saying ‘maybe its nothing, maybe its something’ it is consistently saying ‘yep, its something, maybe its huge and maybe its just kinda big’.
Are you saying the issue is that stock prices should have fallen as bond yields fell? There’s no contradiction there if investors expect the fed to act to lower interest rates regardless of whether a recession will occur.
Are you saying the issue is that stock prices should have fallen as bond yields fell? There’s no contradiction there if investors expect the fed to act to lower interest rates regardless of whether a recession will occur.
No, I am saying the interpretation of ‘stocks up and down because people are changing their minds about how serious the coronavirus is daily’ doesn’t fit with how other markets are behaving. Your statement only works if the Fed’s rate-cut odds are independent of the severity of the coronavirus, which implies the Fed is reacting to something else, which implies that the implication that the market is only reacting to the virus is also false/unlikely.
It’s easy to explain the housing crisis to a child too, if you explain it incorrectly.
The questions that this answer brings up are
1. Why were the US stock markets making all time highs while the virus was spreading? There was a 2-3 month lead in with the virus spreading and the markets were going up off already all time highs.
2. Why was the Fed injecting hundreds of billions in liquidity from October on, with UE at extremely low levels and stocks at or near all time highs?
3. Why did the coronavirus cause the fastest 10% decline off an all time high in history? This is not just crazy price action, it’s historically crazy price action.
1. Why were the US stock markets making all time highs while the virus was spreading? There was a 2-3 month lead in with the virus spreading and the markets were going up off already all time highs.
Good old-fashioned FOMO. In retrospect, TSLA punching through $600 should’ve been an obvious sign of a blow-off top.
2. Why was the Fed injecting hundreds of billions in liquidity from October on, with UE at extremely low levels and stocks at or near all time highs?
That is a good question. A bunch of hedge funds and mortgage REITs borrow in the repo market; for some reason that market started blowing out in mid-September (rates spiked to 10%), so the Fed started buying T-bills to bail them out. They stopped in early February, AFAICT. I have no idea why the repo market started blowing out, or why it stopped. It’s too early to be COVID-19 related.
3. Why did the coronavirus cause the fastest 10% decline off an all time high in history? This is not just crazy price action, it’s historically crazy price action.
Options positioning. When long positive gamma persists for a long time, it leads to artificially suppressed volatility. When an exogenous shock (like COVID-19) disrupts that artificially suppressed volatility, it leads to more extreme (positive and negative) swings. This article explains the dynamic very well. Some of what happened was related to forced deleveraging, not just virus fear.
First, I don’t think the way I am approaching the math is correct. I can sum up the issue. Suppose a ship at rest accelerates, reaches a point, accelerates back, then slows down approaching its starting position, where it is back at rest. Calling the starting time T0, and the ending time Tf, my understanding of the correct approach to figure out the difference in time was to integrate lambda over the course of the path, and multiply this by Tf.
I am reasonably certain this is wrong. One of the substitutions that gives rise to lambda is t=x/c, which is true if and only if t’=x/c. Substituting t=x/c for Tf results in a different result – namely, no time dilation at all. Meaning I’m pretty sure the equation for lambda is invalid for at least part of the path.
Working through this issue now, uncertain of the outcome.
The other issue, which I am less certain of, is the choice of the positive root of the square root in the equation. It isn’t clear to me that the positive root is the correct choice for the entire integral. On this point I am simply confused; generally we say sqrt(x^2) is x, which is to say, the sign is preserved. I am less certain how sqrt(f(x^2)) should be handled, but it doesn’t seem quite correct to discard the sign of x altogether in favor of one root or another.
The correct way to get the time elapsed according to some object is to integrate that object’s proper time over the interval. The infinitesimal proper time is \sqrt{dt^2 - (dx^2+dy^2+dz^2)/c^2}. So if you know the object’s velocity v(t) as a function of time between t0 and tf, then the amount of time that object experiences during that interval is \Delta\tau = \int_{t0}^{tf} \sqrt{1 - v(t)^2/c^2}\, dt.
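Here is a minimal numerical sketch of that integral (my own illustration, not part of the original comment), with units chosen so c = 1; the two velocity profiles are just examples:

# Sketch: numerically integrating proper time for an arbitrary velocity profile.
# Units are chosen so c = 1, and v(t) must satisfy |v(t)| < 1 on the interval.
# Note: the integrand depends only on v(t)**2, i.e. this takes the positive root throughout.
from scipy.integrate import quad

def proper_time(v, t0, tf):
    # Elapsed proper time for an object with coordinate velocity v(t).
    tau, _ = quad(lambda t: (1.0 - v(t) ** 2) ** 0.5, t0, tf)
    return tau

# Example 1: a linearly changing velocity, v(t) = (5 - t)/6 for t in [0, 10]
# (speed 5/6 one way at the start, reversing direction at t = 5, speed 5/6 back at the end).
print(proper_time(lambda t: (5.0 - t) / 6.0, 0.0, 10.0))  # ~8.67 vs. 10 coordinate-time units

# Example 2: constant speed 5/6 over the same coordinate-time interval.
print(proper_time(lambda t: 5.0 / 6.0, 0.0, 10.0))        # ~5.53: more time at high speed, less proper time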
For linear velocities such that u=at + b, where the units of the equation are expressed in terms of proportions of c, the equation should end up being a*integral(1/2 * (arcsin(u) + u*sqrt(1-u^2))) – the unit I chose for -t + 5 was c/6, for t from 0 to 10.
If the sign didn’t matter, the square root portion would have canceled out. Instead I got 2.3712149. (edit: This value is probably wrong, I forgot my unit conversion.)
Getting a weird result when I use constant velocity there, though. The change in proper time becomes 0. Trying to figure that out.
Edit: To be clear, trying to figure out why a constant velocity doesn’t work in the linear velocity equation. I see the integral I am doing is unnecessary, since the square root is a constant. This is leading me to suspect my integral is flawed.
Here’s a general expression you can check against. For an object with a velocity that changes at a constant rate, v(t) = v_i + at, over a time period \Delta t, and using \beta to mean v/c, the proper time is \Delta\tau = \frac{c}{2a}\left[\beta\sqrt{1-\beta^2} + \arcsin\beta\right] evaluated from \beta_i to \beta_f.
I’d check if you forgot a factor in doing a u-substitution somewhere. The denominator may also show up as a factor of 1/a, where a is the acceleration.
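As a sanity check on that closed form (my own sketch, not from the thread; c = 1 and the example profile is arbitrary), it agrees with a direct numerical integration:

# Sketch: checking the closed form for v(t) = v_i + a*t against direct numerical
# integration, with c = 1. The example numbers are arbitrary.
from math import asin, sqrt
from scipy.integrate import quad

def proper_time_numeric(v_i, a, dt):
    # Integrate sqrt(1 - (v_i + a*t)^2) from t = 0 to dt.
    tau, _ = quad(lambda t: sqrt(1.0 - (v_i + a * t) ** 2), 0.0, dt)
    return tau

def proper_time_closed(v_i, a, dt):
    # (1/a) * [ (b*sqrt(1 - b^2) + arcsin(b)) / 2 ] evaluated between the initial and final beta.
    F = lambda b: 0.5 * (b * sqrt(1.0 - b ** 2) + asin(b))
    return (F(v_i + a * dt) - F(v_i)) / a

print(proper_time_numeric(5.0 / 6.0, -1.0 / 6.0, 10.0))  # ~8.67
print(proper_time_closed(5.0 / 6.0, -1.0 / 6.0, 10.0))   # ~8.67, matches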
Hrm. Alright. I am satisfied with the behavior of the equation when velocity varies over time.
The only question still remaining is about the behavior of the square root when velocity is constant, namely whether it is correct to take the positive root when velocity is negative (towards, rather than away from, the origin).
I need to think about that one.
ETA:
You were correct on the u substitution! That was the issue.
Hrm. I am now uncertain if I am evaluating some math correctly.
Consider two equations:
v(t) = (t)*c/6
And
v(t) = (t + .9)*c/6
If I am doing the math correctly, evaluating the change in proper time for t in (0, 5), the second equation produces a smaller change in proper time than the first, even though the velocity is always higher.
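For what it’s worth, a quick numerical check (same kind of sketch as above, my own code, c = 1) gives roughly 4.34 for the first profile and 3.80 for the second over that interval, so the ordering is as described: the faster profile accumulates less proper time.

# Sketch: proper time for the two profiles above over t in (0, 5), with c = 1.
from scipy.integrate import quad

def proper_time(v, t0, tf):
    tau, _ = quad(lambda t: (1.0 - v(t) ** 2) ** 0.5, t0, tf)
    return tau

print(proper_time(lambda t: t / 6.0, 0.0, 5.0))          # ~4.34
print(proper_time(lambda t: (t + 0.9) / 6.0, 0.0, 5.0))  # ~3.80: always faster, so less proper time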
Nice to know I’m not a physics teacher for nothing 😉
As for taking the positive vs negative square root, consider the way you derive the integral. IMO the most basic way is to ask the question “if I give the object a clock identical to my own clock and then let it travel its path, how much more often will I see my clock tick than the object’s clock while it is travelling?”
You can use the basic principles of relativity to answer that question, and you find that the amount of time the object’s clock will tick out is given by the integral. And if you pose the question in terms of clock ticks in this way, it only really makes sense to take the positive root.
If you want to start wondering whether the clock will start ticking backwards in time you can, but it’s a separate question and not really related to the sign of the square root.
In terms of clocks, it is more about what an observer watching both clocks would see.
Or, more particularly, given a traveling observer traveling along with a clock, exactly when the observer sees the stationary clock tick faster. (It has to at some point on a round trip, since at the end it is ahead of the observer’s own clock; more, it has to tick faster in proportion to the Lorentz factor.)
I have ruled out it happening during the acceleration at the start and end of the trip, because we can have identical accelerations for trips of different distance, and it doesn’t make sense for the clock to tick faster if a trip was longer.
The two possibilities that remain are during the turnaround acceleration, or during the return trip itself. If the negative root should be taken, it is during the return voyage. If it shouldn’t, it has to be during the turnaround acceleration.
I can’t figure out the logic for it happening during the turnaround acceleration, since light is arriving to the traveling observer at the same rate no matter the distance, but that may just be something I am missing.
The answer is in the integral. The ratio by which another clock ticks slower than your own is given by \sqrt{1-v^2/c^2} where v is the speed of the other clock relative to you. IF you are in an inertial reference frame.
If you are not in an inertial reference frame you need to use a different rule for calculating the ratio. (Just like how you can’t just use Newton’s 2nd law when in a non-inertial frame)
Why does the integral return a smaller change in proper time when there is acceleration than when there is no acceleration, given the same time frame and the same base velocity?
I’m not ruling out “I am doing this math wrong.” But is this what you are referring to?
But now I’m back to the same problem. More, the acceleration form of the equation doesn’t include position, so I don’t see how the integral resolves this.
Taking the negative root in the integral for negative velocities does resolve the issue – thought about it more and I realized my expectation that it had to be symmetric was in error – and is mathematically valid, but I’m pretty sure that’s not what you mean.
Actually, it may be helpful for me to lay out the understanding I have of how the usual thought process involving this goes:
Suppose there is a clock beacon one light hour away from a ship at rest with respect to the beacon. They are synchronized, so the ship’s clock says 1:00, and the beacon says 12:00, but the one hour delay means the ship should interpret this as 1:00 also.
The ship accelerates to a Lorentz Factor of .5 – meaning the distance from the ship’s perspective is half a light hour, meaning it interprets the beacon as being at 12:30. So when the ship moves to the beacon and decelerates, the half time rate means when it has arrived, the beacon shows 1:00 as expected, and the ship’s clock shows 12:30.
Which is fine if that is how the universe operates, but it implies something significant – two ships can tell who is in motion by comparing the distance both measure between each other. Which, as I understand relativity, is pretty much specifically forbidden. More, if they measure the same distance, or a distance that doesn’t correlate exactly with the Lorentz factor, they can find a true inertial rest frame by both decelerating symmetrically for the case of identical distances, or decelerating in a more complicated way if the discrepancy doesn’t match the Lorentz factor.
My suspicion is that this is just another case of “Teaching something wrong to get students used to thinking in a particular way before they learn the real way it is done.”
If this is really the way it is done, it looks… well, entirely wrong.
Er, that’s off by about ten minutes, so the final times are more like 1:09 and 12:34, setting aside the time dilation owing to acceleration, but close enough.
At any rate, I begin to suspect SR is the wrong framework to ask this question anyways, and I need to get back on learning tensors.
Note: slightly rambling. Exams, cognitive laziness, old LW rationality (thoughts on parts of sequences), small aside on the Memory Book
I recently took an exam that I had to spend more-than-normal effort on (short version: end-of-high-school exams are disproportionately more important for getting into colleges here than the rest of your high school performance) and I realized that I wasn’t really pushing myself, cognitively, in general. For this particular exam, as it was significant, I spent more time and energy than I would have otherwise; and it feels like (modulo the difficulty of the exam) I really did do well, which was a surprise.
It’s possible this is attributable to unconscious cost-benefit analysis, e.g. school exams in general not really mattering for me, but I have to discount that because it’s a wider pattern. Typically my learning process was to either superficially learn things, or try to learn things, fail, and pretty quickly give up because I was clearly ‘not smart enough’.
On one hand, this is probably linked to laziness/lack of self-confidence/lack of perseverance. But on the other hand… I think at least some part of it comes from failing to internalize old-style LW Rationality properly. My primary exposure to rationality was the sequences when I was young (mostly because I didn’t know of anything else at the time), and their focus on ‘quantify your uncertainty’ and ‘make your map match the territory’ and so on made me internalize ‘there’s a chance everything you think is totally wrong’, but not internalize the workings of the tools that would let me quantify that wrongness. So I was constantly self-questioning ‘could these thoughts be wrong?’, which, lacking a way to find a proper answer, my brain always answered with ‘yes’. (I don’t think this is a failure mode of learning rationality in general, given I was trying to figure out how to think better and so on; if I was more innately curious, or perhaps smarter in general, I would most likely have had the confidence to maintain a strong belief in myself while using doubt as a reasonable tool.)
One thing I do like from the sequences, though, is the post on judging thoughts/actions by whether they make you stronger or weaker. Relating to the exams thing earlier, I had a lot of guilt about not working as hard as I should sometimes, but for the last few days I’ve been constantly asking myself the question ‘is what I’m thinking useful?’, which is very helpful when the guilt is, essentially, your brain extracting an emotional cost from you so that you don’t actually have to take any action to fix the situation. (Maybe it’s something else, but that’s what I think is happening; also I’m not saying this always works, just sometimes.)
P.S. I’ve been reading The Memory Book by Harry Lorayne, and it’s really very good. I thought I wasn’t good at imagination, but apparently that was false. And using imagery to memorize really helps. I haven’t even got to using pegs/etc. to remember stuff out of order yet. If you can get your hands on it, I really recommend you try it out, especially if you struggle with memory like I do.
A paper preprint finds that people believe that women are told white lies more often, and themselves tell white lies to women more often.
Study one told subjects that a manager gave feedback to an under-performing employee and then presented the subjects with one of 6 descriptions of the feedback, ranging from honest/harsh to very kind/white lies. The subjects were asked to guess the gender of the employee, which they guessed as female far more often when the feedback was kinder.
In study two, subjects were given a shitty essay and a good essay to read, with the essays attributed to Sarah/Andrew or vice versa. The subjects were asked to grade the essays for the researcher and also to grade the essays for the writers (the non-existent Sarah & Andrew). Subjects gave a 9% higher grade to Sarah when giving direct feedback, while they gave Andrew roughly the same grades. When the subjects were asked whether they had given a different evaluation to the researcher and the writers, 65% answered yes. Interestingly, when asked how truthful they were to Sarah and Andrew, they answered that they were about equally truthful to both, despite the evidence showing that they actually only lied to Sarah.
The General Discussion section offers up various theories, only to explain that their findings didn’t support them, like the hypothesis:
– that women were lied to because they were considered less competent, which failed because the subjects rated the genders equally competent, yet still lied only to women (btw, how does considering a group less competent make it logical to tell white lies to that group?)
– that subjects considered women ‘warmer,’ which failed as both genders were rated equally warm, yet still lied to women (at least this hypothesis makes sense, by arguing that people don’t feel bad about hurting the feelings of ‘cold’ people).
– that men lied to women out of benevolent sexism, which the authors see as untrue as women lied to women more often than men did. The authors don’t seem to consider that women may exhibit benevolent sexism (or ingroup bias, for that matter).
– that people lied to women out of chivalry, which again proved false as women lied to women more often than men did.
– that people think women are incompetent, but rate them relative to their peers. Yet this doesn’t explain why people would only increase the rating when giving direct feedback.
The authors did find that subjects rated women as having less confidence. They argue that subjects may attribute women’s failure to low confidence, but not men’s failures, causing them to try to raise women’s confidence to make them perform better.
What I found to be conspicuously absent was the hypothesis that people give kind feedback because they see women as more easily hurt and/or less capable of handling feedback due to higher anxiety and/or sensitivity to criticism. In particular because they later actually cite research that suggests that women interpret feedback more negatively. Yet they fail to consider that people may want to shield women from harm.
—
The authors also discuss the possible impact of these white lies on women, but merely whether they may be motivating or demotivating. They completely ignore how a logical consequence is that women become misinformed about their performance. Is the narrative that the gender pay gap is caused by discrimination, or that there is discrimination in hiring and promotions, so believable to many because they see women get less pay and get hired/promoted less than men who get worse feedback? People may then conclude that those men are paid/hired/promoted unjustly, even though those men actually perform better, but get criticized more harshly than women who perform worse.
The study, which seems written from a severely female-centric point of view, also ignores how harsh feedback for men might partially explain bifurcation among men, who more often drop out and who more often get top positions. This merely requires that some men are motivated by harsh criticism and others are demotivated by it. If true, this hypothesis would also suggest that harsher criticism of women would result in more women getting into top positions, although at the expense of having more drop out (and given the higher anxiety in women, probably more so than for men).
What I found to be conspicuously absent was the hypothesis that people give kind feedback because they see women as more easily hurt and/or less capable of handling feedback due to higher anxiety and/or sensitivity to criticism. In particular because they later actually cite research that suggests that women interpret feedback more negatively.
If people phrase feedback towards women more kindly, aren’t women correct in interpreting the same feedback more harshly?
E.g. on a 1-10 scale, a woman’s 4 would be phrased like a man’s 6, so the woman interprets this type of feedback like a 4, while a man would interpret the same words as a 6.
So there’s not really a problem to be solved here, just a gendered difference in phrasing. In my experience, women generally have kinder and gentler norms of social interaction than men, and just use different words and gestures to convey the same meaning.
If people phrase feedback towards women more kindly, aren’t women correct in interpreting the same feedback more harshly?
Yes, which is actually good to know if you’re dealing with someone who isn’t neurotypical and doesn’t understand that, when we’re told to treat men and women equally, women will interpret the same criticism more harshly.
So if someone criticizes both a man and a woman in the exact same way, which he thinks is a 5, the woman hears a 4 and the man hears a 6.
That’s possible, although the study found (narcissistic?) hypocrisy in that people interpreted criticism by someone else as having a gender bias, but not their own criticism.
If women similarly have (narcissistic?) hypocrisy where they interpret criticism aimed at themselves as being unbiased, but criticism aimed at other women as being biased, then they would still feel badly treated.
Although that would then not explain a feeling that other women are mistreated, unless they project their own feelings of being mistreated on others, which does seem plausible.
I’d add another hypothesis–related to “more easily hurt”, but not the same.
I’m a manager: giving feedback to people who work for me is part of my job. My goal when giving critical feedback is to get better performance–but what kind of feedback is helpful is not the same across people. I might tell one person “that could have gone better” and another “that was a complete shitshow,” and expect their perception of how much improvement was needed to be about the same.
Sure, but managers may also have higher expectations of some employees than of others, where they criticize some employees more for a similar level of performance.
Men’s tendency to focus more on salary can create a feedback-loop where men are more often better at getting a slightly higher salary than their previous performance justifies, which makes managers judge them more harshly (are they worth that salary?), which causes them to work harder/longer/faster.
Were they told the sex of the manager in the first study? My guess would be that a male manager would be less harsh on a female employee than a male, but women would be less harsh in general.
My theory would be rather similar to yours, though there’s a little bit of daylight between them: It’s seen as less acceptable for men to be harsh on women than it is for men to be harsh on men. If you’re an authority and you’re harsh on a man and he’s hurt by it and gets angry, that’s on him. If he cries, that’s even worse on him. If you’re harsh on a woman and she cries, “OMG YOU MADE A WOMAN CRY!”
The paper didn’t address the gender of the manager. Whether this means that the gender was left unspecified is unclear. If it was, it might have been interesting if they had asked what the subject thought the gender of the manager was, but that is not in the paper.
The female subjects in the study were also nicer to the female essay writer than the male writer, so that doesn’t fit your theory. The paper doesn’t state whether male subjects had a significantly different gap in evaluations based on gender, which could fit or not fit your theory, but we don’t know.
Write me a check? An hour should be plenty of time to get it in the mail.
I suppose, if I get to pick a specific hour to control him, I could try to make him veto a law I don’t like or something, but the timing on that would have to be really tight, and I’d have to go to the trouble of figuring out his schedule.
I feel like most Presidential decisions actually take more than an hour to complete, so making him nominate me to the Supreme Court or something wouldn’t be effective. I couldn’t make the decision stick long enough to succeed.
I think “write me a check” is the second-best answer. “Send cash” is the best, because when he regains his senses in an hour he could cancel the check.
If I’m in a bad mood, start WWIII. If I’m in a better mood, I have him order the FAA to lay off model airplanes, then I have him tweet about how great it is that he did that (to discourage backsliding)
If the stock market is open? Have him target some company, preferably a defense contractor (for maximum vulnerability), for an “investigation”, after I’ve shorted it.
(My first thought was “kill himself” but that would give us Pence as President. Whereas him killing someone else – and getting caught doing so, which would be pretty much inevitable – would be equally effective at getting rid of Trump.)
That would be effective at getting Pelosi the Presidency, yes.
However, could someone in Trump’s body do it? Presumably, due to his age, he’d need to use a gun or sword or other weapon of the sort the Secret Service would frown on anyone else carrying in Trump or Pence’s presence. What would they say if the President himself brought a gun, or asked for one of their guns? Or is there a ceremonial sword on the office wall that Possessed!Trump could snatch down?
Write a note about how he has a rare moment of clarity and is incredibly sorry for all the harm he has caused to society. Then pull some kind of stunt that will make sure he won’t be re-elected (run outside naked and shout weird stuff is the first thing I can come up with).
Depending on whether I have access to his memories and what they are, maybe use the opportunity to also harm the reputation of the worst guys in US politics, whoever they are.
I’ve never read Friedrich Hayek’s The Road to Serfdom but have seen summaries of the stages he laid out for the trip from liberal democracy to authoritarian socialism. Prior to 2016 my understanding was that no country had actually traversed those stages; actual authoritarian regimes were the result of violent revolutions and coups, not the kind of frog-boiling Hayek described.
Then Venezuela happened.
My questions to those more in the know than I: how accurately did Hayek’s model predict what happened in Venezuela? Were there any other examples of Hayek’s process in action beforehand that I didn’t know about?
Czechoslovakia after WW2 might fit Hayek’s model, as I remember it from reading his book many years ago. But I am not sure what stages you have in mind.
Soviet troops arrived only in 1968. Czechoslovakia was one of a few countries in the world where the Communist party won relatively democratic elections – in 1946.
Wait, I’m confused. I get that the Communist Party of Czechoslovakia won in the 1946 elections, but weren’t they still a minority that had to resort to a coup in order to maintain power?
Well, sort of, although the “coup” had considerable popular support from their voters, who were a large minority. I think that fits Hayek’s model, though – the communists came to power via elections, and then abolished them and created a totalitarian regime.
I’m hard pressed to think of a coup that didn’t enjoy some popular support. However, I find “power via elections” and “power via coup” to be essentially irreconcilable opposites.
The fact that the Communists won the previous elections isn’t helping their case much, because if they were popular enough to handily win the upcoming ones, they wouldn’t need to stage a coup.
Presumably, you had “elections” throughout the Communist era just like we did?
I find “power via elections” and “power via coup” to be essentially irreconcilable opposites.
Well, in Czechoslovak case it is not quite so, since communists first won an election and then used power gained thusly to entrench themselves and abolish democratic competition.
Presumably, you had “elections” throughout the Communist era just like we did?
Yeah. In our case, there was only one list of candidates.
Well, in Czechoslovak case it is not quite so, since communists first won an election and then used power gained thusly to entrench themselves and abolish democratic competition.
To my mind, the important question is: was the communist takeover accomplished by ostensibly-legitimate means, up to and including patently absurd reinterpretations of the nation’s constitution, or did even the communists admit that they were breaking the rules?
To my mind, the important question is: was the communist takeover accomplished by ostensibly-legitimate means, up to and including patently absurd reinterpretations of the nation’s constitution, or did even the communists admit that they were breaking the rules?
It was largely within the rules. There were elections in 1946 to a body which was an acting parliament and a constitutional convention at the same time, tasked with drafting a new Czechoslovak constitution. It was called the Constitutional Assembly. The Communists won the elections, but fell short of an absolute majority. They then formed a coalition government.
Now I have to consult Czech wikipedia on what happened next: In 1948, some of the coalition ministers resigned in an attempt to stop the communist takeover of the security apparatus, which was however legal, since it was initiated within the constitutional prerogatives of the communist ministers. The resigning ministers hoped that the other noncommunist ministers would join them, since under the existing constitutional arrangements that would have meant new elections. This did not happen, however, and under threats of violence from communist paramilitary brigades, who demonstrated in the streets along with various pro- and anticommunist protesters, the president of the republic accepted the resignation of the noncommunist ministers and replaced them with communist figures (I am not sure whether the replacements were all party members).
Then the Constitutional Assembly finalised a new constitution, which formed the legal basis of the new, communist regime. The constitution provided for elections, which were formally held, but they were manifestations of loyalty, not genuine competition; only communist-approved candidates could be elected.
I actually think I’m willing to accept that. While it involved threats of violence, it sounds like it was on a level that I wouldn’t necessarily count as rendering the whole process undemocratic; if it had had any other purpose, I’d just call it “corruption.”
I feel for the guy in step 14 who says: “But I’m not a carpenter, I’m a plumber“, as telling that to my wife doesn’t get me out of doing the work she assigns for the house either!
Oh, but that goes considerably beyond “mean” into outright approving murder. This is criminalized in many European countries, including my own. I do not know about Canada.
I am personally more sympathetic to the American approach to free speech, but laws of this kind are perfectly compatible with liberal democracy.
Yes, it’s an extreme case. But it’s a slippery slope, and the guy was obviously an immature kid venting his anger. If you take his words at face value, you can say he’s approving murder of this particular teacher. But more likely, he’s saying “I hated her, not shedding any tears over this.” It’s not like he was waging a campaign telling people to go and commit murder.
If Trump was assassinated, and a bunch of people on facebook say “I’m glad he was assassinated”, would you want these people jailed also, or would you consider that they were expressing their dislike of Trump in a very crude way?
What about if someone gets cancer and you despise that person because they are very evil and you post “I’m glad he has cancer”, is that the same thing? Are you approving of cancer? What if you post “I’m glad the Marines shot Osama Bin Laden”?
I agree that posting that on social media is a terrible idea and should not be done. But jail is excessive by a large margin as a remedy. People naturally express themselves on social media the way they do in conversations. And if I heard what the kid said in a conversation, I would never construe it as approving of murder, even though that’s what it is on its face. I would construe it as an extreme and tasteless expression of dislike for the victim. I think others would also.
If Trump was assassinated, and a bunch of people on facebook say “I’m glad he was assassinated”, would you want these people jailed also, or would you consider that they were expressing their dislike of Trump in a very crude way?
I do not approve of the criminalization of hate speech in the broad way that is done in most of Europe. But I also do not think that the existence of such laws makes those countries authoritarian regimes, as opposed to liberal democracies. There are plenty of laws in liberal democracies with which I personally disagree.
But I also do not think that the existence of such laws makes those countries authoritarian regimes
Well, those laws are authoritarian in nature, as they claim authority over something which is usually outside the purview of government. How many authoritarian laws do you need to have an authoritarian regime? I’m not sure. These laws certainly make these governments more authoritarian than not.
Do you think that UK kid was actually approving of murder, or just expressing his dislike and anger in an extreme fashion?
These laws certainly make these governments more authoritarian than not.
I agree. But I also think that many laws which conservatives like, and I couldn’t help but notice that you are quite conservative, make countries more authoritarian than not. Perhaps a less loaded word should be used, like “less free”, with the recognition that some restrictions on freedom are necessary, although those particular restrictions aren’t.
Do you think that UK kid was actually approving of murder, or just expressing his dislike and anger in an extreme fashion?
I think that this distinction is nonsensical. He knowingly and publicly said that he approves of murder; whether he “really meant it in his heart” is irrelevant.
But I also think that many laws which conservatives like, and I couldn’t help but notice that you are quite conservative, make countries more authoritarian than not.
Correct on both counts. That said, I don’t line up with the conservative position 100% of the time, and my preference is for not legislating around things that should be private, unless the legislation is necessary and effective. De minimis non curat lex is one of my favorite Latin phrases.
I think that this distinction is nonsensical. He knowingly and publicly said that he approves of murder; whether he “really meant it in his heart” is irrelevant.
I dont mean what was in his heart, I mean what he was objectively conveying by his post. If somebody says “Trump is literally worse than Hitler”, that person is not conveying the belief that building a wall is worse than the holocaust, even though that’s what they said.
Quibble: I don’t think the person is approving of murder. I suspect hardly anyone approves of murder, except maybe Nietzscheans? The person probably meant that the killing was justified, which is a completely ordinary view even if in this particular case that’s outlandish and offensive. We don’t, after all, consider killing in self-defense murder, and many do not consider capital punishment murder.
@jermo sapiens
De minimis non curat lex is one of my favorite Latin phrases.
I prefer in flagrante delicto, but de gustibus non est disputandum.
Quibble: I don’t think the person is approving of murder. I suspect hardly anyone approves of murder, except maybe Nietzscheans?
If it be approved of, none dare call it “murder”.
Murder is unlawful killing. Since I’m pretty sure no government ever put people in jail for approving of that nation’s soldiers lawfully killing enemy soldiers in wartime, it’s clearly the “unlawful” part that is at issue here, not the “killing” part. Jake Newsome was jailed for approving of a particular kind of unlawfulness.
I think I’m on pretty solid ground in saying that a nation which imprisons people for expressing disapproval of that nation’s laws, is properly called “authoritarian”. And that doesn’t change if they limit it to just the really important laws – as defined by that nation’s government and its supporters.
@John Schilling
I basically agree with that, especially about this coming down to “punishing speech that disapproves of our laws,” which was the point I was going to make before deciding to dial it back some. My only caveat is that I think people mean by murder unjust killing, not unlawful. After all, pro-life activists who say abortion is murder know perfectly well that it’s legal, they aren’t contradicting themselves. Same with people who say capital punishment is murder, which is obviously a legal killing.
This sort of boundary around the word murder where it’s always wrong is remarkably unhelpful a lot of the time, because it creates confusions just like this, and it can seem kind of contentless (why don’t we talk about why a particular killing is unjust instead?). But it’s been used that way for a very long time.
I think I’m on pretty solid ground in saying that a nation which imprisons people for expressing disapproval of that nation’s laws, is properly called “authoritarian”. And that doesn’t change if they limit it to just the really important laws – as defined by that nation’s government and its supporters.
The way I think about it, there are various axes of authoritarianism. European countries are more authoritarian than the US with regard to speech, but the US is more authoritarian than Europe in some other respects: for example, US criminal law imposes longer prison sentences and US police behave more violently towards civilians.
The thing is, places like the UK, Germany, France, Belgium, Canada, the Netherlands, Spain, etc., don’t actually look at all like police states or oppressive regimes. That’s true, even though I think they all have more speech restrictions than they should, they all impose a lot more control on gun ownership than I’d prefer, some of them have a formally recognized state church, all of them lock people up for some things I think ought to be legal, etc. I’d say all those things are potentially steps toward an oppressive regime, but we actually know what oppressive regimes look like, and modern-day Germany, Belgium, Canada, etc., aren’t it.
Having a tough time confirming whether jail/prison is ever used as a sentence for such behavior. But arrests and fines seem to be commonplace. And I have a hard time imagining that most people who are generally okay with arrests/fines for mean tweets would draw the line at prison time (and what happens to people who can’t/don’t pay the fines?)
Thank you, that looks genuinely bad, especially that 19-year-old girl being convicted for posting Snoop Dogg lyrics on Instagram. The UK clearly has overly strict hate speech laws.
What I had in mind was specifically a planned economy with forced labor. Not “work somewhere or you starve” kind of forced, but “work on what we tell you to or you go to jail” kind of forced.
So, while it’s hardly comparable to Venezuela, somewhere on the road between “completely free economy” and “full communism” there are a couple pit stops.
One such pit stop is something like “if you want to engage in commerce at all, there are certain protected groups you will be forced to serve, even if you don’t want to.”
I know that the whole “bake the cake” thing isn’t what you’re describing, but it’s certainly an intermediate step. The ability of people within a particular occupation to pick and choose what type of work they will do and what type of customers they will serve is heavily regulated, and is becoming increasingly so. And everyone is generally fine with that.
Not “work somewhere or you starve” kind of forced, but “work on what we tell you to or you go to jail” kind of forced.
Does Venezuela even have this? My googling tells me that there was a proposal floated in 2016 that amounted to something along these lines, but I can’t find any indication it was actually implemented.
Most of the references to the proposal online are from libertarian websites using it to try to make a point, which makes me suspicious that it may not have ever materialized.
The research I did at the time revealed this to be vastly overblown. What it actually did was allow the government to requisition labor from the private sector. Thus, the government could go to a business and say, “Hello, we need X workers” and the business would have to provide them. At least in their comments to the UN Venezuela claims that people have to consent to be reassigned.
So, the reason they only target people with a job is that this policy reassigned people (with their consent?) from “whatever job they were doing previously” to “working on a farm.”
Now, this clearly sucks, and in a country wracked by economic catastrophe I’m sure it’s pretty easy to get people to “consent” to farm labor if their other option is “being fired”, but it’s not as bad as the libertarian blogs were making it sound.
Venezuela claims that people have to consent to be reassigned.
That changes my opinion considerably, from “Wow, Hayek’s model was right all along,” to “Yet another tragicomically corrupt nation, albeit an extreme case.”
That hasn’t got anything to do with socialism, though. It’s perfectly plausible to combine a dynamic free market economy with heavy-handed restrictions on speech. That’s pretty much how America was before the obscenity laws were loosened in the ’60s.
Turkey under Erdogan and India under Modi. China is also said to have become more authoritarian under Xi, though it was never a democracy.
And as noted by Matt M, in the UK, and in fact in quite a few Western countries, e.g. Germany or Canada, you can go to jail for mean tweets.
I wonder if this is due to modern IT enabling greater government surveillance and making it easier for people to carelessly expose themselves (e.g. your seditious tweets stay online for everybody to read, while the seditious words you used to speak at the pub would vanish unless there was an informant keen to write them down).
I should have been more clear in my initial comment. As I said to others in this thread, by “authoritarian regime” I meant “authoritarian socialist regime,” one with a planned economy and work-where-we-tell-you-or-else kind of forced labor.
Other forms of authoritarianism, such as curtailing certain kinds of speech, go hand-in-hand with this but are common to socialist, fascist, and theocratic forms of authoritarianism alike. I’m only interested here in the first one.
Broadly speaking, I’ll count any republican form of government where elected representatives turn it into a forced-labor regime through at least superficially legitimate means (including absurd reinterpretations of the constitution and legal precedent) as pattern-matching at least somewhat to this model, but my question is how well Venezuela matched the model and if anywhere else did the same sort of thing.
Are you sure you are not exaggerating a little bit when referencing Venezuela? Last I checked, it was estimated that 70% of productive activity there was still privately owned, and people were not being executed in the street. I just had a friend come back from there, and his testimony was that it seemed like a remarkably normal country… like Miami, FL, but a lot less glitzy. Supermarkets looked normal. I saw pictures too. Even the slummy parts looked hardly worse than the slummy areas I see where I live in Missouri.
Also, it strikes me that the most questionable assumption in that “Road to Serfdom” cartoon is the certainty that, without a common unifying war effort, the population will not be able to peacefully agree on a common plan. The cartoon seems to not dispute that planning obviously works during wartime (which I find to be a surprising concession from Austrian economists…albeit completely justified by the evidence). But the next assumed step, the transition to a peacetime plan inevitably degenerating into confusion, doesn’t really ring true to me. Where does this inability to compromise and agree on a common plan during peacetime come from? Isn’t Britain after WW2 (with the NHS) a counter-example against this?
I could see Leninists agreeing, in a way, with this cartoon… in that they would predict that a peacetime “war on poverty” will inevitably be obstructed by the capitalist class because, for example, building lots more housing is liable to devalue the exchange-value of existing housing stock and harm the owners of those assets… or, better, more secure living conditions for workers will inevitably increase labor militancy, lead to increased wages, lower profits, etc. (See Kalecki’s article, “The Political Aspects of Full Employment”.)
But the Leninists would not blame the resulting dysfunction on the plan itself; rather, they would blame it on the attempt to find a common plan for society when there are fundamentally incompatible economic interests still splitting their class society into two opposing camps. The Leninists would thus blame the capitalist class, and also partly the less radical, non-Leninist socialists, for NOT being willing to take vigorous measures to overcome the totally predictable “capital strike” from the capitalist class… just like the patriot revolutionaries in the American Revolution had to unleash a terror against “traitorous Tory” loyalists, and just like bourgeois revolutionaries in the French Revolution had to take vigorous measures against their feudal class enemies who were rallying around the foreign invaders. Hence why Leninists will often blame Salvador Allende for not “arming the people” against the completely predictable capital strike that sought to “make the economy scream” and discredit his program.
It’s certainly possible that I read anti-socialist propaganda and uncritically accepted it. That said, as I mentioned in a comment to Guy in TN above, the forced-labor law is real and was a decree signed by Maduro.
As to current conditions, in searching for confirmation of the existence of the law, I ran across this article. I only skimmed it, but it seems to paint a picture of Venezuela transitioning, China-style, toward a kind of “capitalism with Venezuelan characteristics.”
I can’t remember the last time I used a pay phone. When I was a nerdy kid, it was a bit interesting to see the variations from place to place: well after the Bell System broke up, all the regional Bells still used the old standard Western Electric phones, but the different company names let me know where I was in the country. GTE (the biggest non-Bell) had slightly fancier phones that took credit cards, as did the Canadian provinces I visited… and of course there were radically different phones in Europe and Asia, as chronicled on the back page of 2600 magazine. The differences gave you a sense of place within an interconnected system; nowadays, with deregulation and homogenization, everyone is using one of the same few brands of smartphone to connect to a mobile network run by a telco that might be over 100 years old but now has a nonsensical name made up by a marketing consultant to rid it of all local connotations. It’s more convenient now, sure, but something’s missing… I don’t know if any of this makes sense, or what…
Anyway. In 2014, the franchises giving ten companies the right to put payphones on New York streets expired, replaced by a single contract to CityBridge for the new LinkNYC internet kiosks. Even before the expiration, the payphones were rapidly falling into disrepair, and are now totally useless as phones: most handsets have a big “no dial tone” sticker, if they haven’t been broken off entirely. Their main function is as billboards, many of them having been sold by Verizon (founded 1879, formerly Bell Atlantic) to billboard companies Titan Outdoor and Van Wagner. And now CityBridge (a joint venture including Titan Outdoor as the lead partner) runs its kiosks mainly as advertising screens. They do include built-in web tablets and VoIP speakerphones with free calling, which is certainly good enough to replace whatever you’d be using a payphone for, but I don’t think I’ve seen anyone using them as such. But at least they look a little nicer than the old phones did.
Last time I used one, most payphones required you to insert 50 cents, which meant anyone with two quarters could make one (at least local) phone call.
What’s the cost of a “minimum viable phone call” a person could make today within a few city blocks of any given urban or suburban-commercial location, without having to ask anyone else for a favor?
The cheapest burner flip phones are, I think, around $14 (or were last time I was on the market for one about 5 years ago), but there are initiation fees, the minimum number of minutes you need to purchase, etc.
No, a personal phone is not required. The kiosks have both a touchpad and a touchscreen, and can be used to call any number in the US for free. There is no handset, but you can plug in earphones if you have them: https://www.link.nyc/faq.html#phone-call.
If you don’t have earphones it looks like your call is played on a speakerphone for passersby to hear. But you can buy $0.99 earbuds at many convenience stores, so I guess that’s a pretty good deal!
As far as I can tell, the only downside compared to payphones is the ads.
In addition, the handful of remaining enclosed phone booths (you can count them on one hand) are being refitted with free VoIP phones. This actually seems like it’d be useful to make more widespread now that TCP/IP is too cheap to meter, but physical services in the public square are vanishing. The concept just seems antiquated now.
We got rid of public bathhouses when the housing codes were updated to require running water in every house and apartment. Cell phones replaced pay phones. Internet cafes, where you could rent a computer and an internet connection for a few hours, now are reduced to a WiFi password on your receipt, because who doesn’t have a laptop or a smartphone these days?
Now a large part of this is the potential for abuse. The Link kiosks used to have web access on their built-in screens, but this was reduced to a few restricted apps after a few incidents of homeless people using them for pornography. Since we’re unwilling or unable to police abuse of public services, instead we just withdraw the services altogether. That’s the motto of the 21st century: We Can’t Have Nice Things.
My wife and I took a train across Canada last spring, and were surprised to see banks of pay phones in various public places (as I recall train stations and airports). I haven’t seen anything like this in the US for years. Although I never saw anyone using them. Is it that they just haven’t gotten around to removing them? Do they still work?
Based upon a conversation in the marriage thread:
Your mission, etc. etc., is to design an appropriate coming-of-age ritual for your country in the present day. It should give young adults a feeling of being full members of society, be unambiguous, and be broadly acceptable.
My first pass: At the end of the month in which you earn your first paycheck, you take your buddies out for a night on the town (whatever that looks like to your subculture, so long as no one ends up in a cell or hospital bed).
Then, the next morning, if you have enough sobriety and cash on hand to mail your rent check before noon, the postman will certify you as a functional adult, with all attendant rights and responsibilities. You are thenceforth no longer able to publicly wonder “How do I even adult?”
My standard for successful parenting is that one ought to be comfortable handing one’s newly-minted adults the keys to a sports car, a bottle of scotch, a pack of condoms, and a loaded revolver, saying “have a good time!”, and trust that it will work out OK. Seems to me we could properly ritualize this.
But we’d need another ritual for the “we regret to inform you that your son/daughter…” cases. The military used to be pretty good for that, so I think we’ll do OK there.
Women, men, hermaphrodites, whatever floats your boat. And for that matter, feel free to substitute weed or speed or whatever for the booze.
But, yeah, the four things every American will have to deal with in early adulthood, which can irrevocably screw up their life if they do it wrong. Unfortunately, most American parents want to actively teach their children how to handle about half of these things responsibly, and play the three-monkeys “Just Say No” game with the other half, and they can’t even agree on which goes in which category.
So, if I’m designing the ritual, I’m designing it so that parents know they have to prepare their offspring for all four and then let go.
What else do you think military enlistees spend their enlistment bonuses on?
Military enlistment as a rite of passage to adulthood would in fact work very well, except that I’m not willing to make it anything close to mandatory. And, yes, enlistment – officers should spend a year or two in the ranks before getting their commission.
Do you feel that South Korea’s men have benefited from mandatory service? Or Israeli citizens? (Ooh, I wonder if there’s a study there, to see if Israel’s gender-neutral mandatory service has benefits above Korea’s, or are there too many confounding factors?)
Do you feel that South Korea’s men have benefited from mandatory service? Or Israeli citizens?
I have a vague “yes, probably” on that, but only to the extent that I’d be willing to consider making it a cultural default after studying it some more. If nothing else, it’s probably better than making a four-year college education the cultural default. Legally mandatory conscription, no.
I hadn’t thought about the implications of mandatory service only for men. Does this mean that women are always more advanced in their educations or careers than men of the same age? Are the military-service-aged women generally in distance relationships with conscripts, or dating older men? Does it mean there’s usually a military-service-length age gap between the man and woman in a couple?
Seems like it would have some fascinating consequences to pull one population out of society for a few years while the other keeps going.
I hadn’t thought about the implications of mandatory service only for men.
Yeah, we probably don’t want to do that. Unless hypothetically we’re going back to the old cultural default of women as wives and mothers and only occasionally pursuing professional careers, in which case sure, the girls can get their 2- or 4-year pre-wed degrees while the guys do two years of military service and then get their professional education, and the women can “marry up” in terms of socioeconomic status and age at the same time. But, however well this might have worked in the past, we’re probably not going there again.
If men and women are going to be socioeconomic equals, and the men are going to do two years of military service or the like, then women should do the same. Or something similar, at least. Fortunately, the military isn’t even mostly infantry or other front-line combat troops any more, so we can find roles for everyone if we’re doing the Israeli or Korean thing.
And if we expand the cultural default from “2 years military service” to “2 years military or other public service”, and it turns out the guys volunteer 80/20 for the military over the Public Health Corps or whatever and the girls do the reverse, that’s also fine.
For Korea, one factor not mentioned here is that hiring practices are not very meritocratic, meaning that men are favored in the job market through the connections they make during their time in the service. (And as John Schilling points out, the cultural assumption of women as wives and mothers.)
That would actually be another good part of an adulthood rite of passage, that I’m not quite willing to make even quasi-mandatory.
Oh, and give everyone their first credit card on Adulthood Day, with a limit tailored to let them have plenty of fun if they’re careful and plenty of impending grief if they’re not.
For consistency, a gift certificate to the local brothel would work best, but one cultural engineering miracle at a time. And I like your interpretation even better.
No using the handgun to find a mate, though, on penalty of your parents getting the “we regret to inform you…” ritual. And any prospective mate should either have a handgun of their own, or be under the protection of a shotgun-wielding parent, so we may be OK there.
Your ritual catches the “hot” failure modes, but not the “cold”. You catch the exceptionally crazy drivers, but not those afraid to drive (or who drive so timidly as to be a problem). You catch the drunks, but not those who can’t loosen up at all. I guess the condoms mean you catch those who make poor decisions about sex, but you don’t catch those who can’t get any. You catch the reckless with weapons, but not the fearful of them.
Perhaps intentionally.
Bus-taking, unarmed, incel Teetotalers can be, if not the most successful, at least productive members of modern society in the way a primitive man unable to hurl a spear could not–and modern hot-rodding, gun-blasting, promiscuous drunkards cannot either.
I remember reading that in some societies such a ritual required surviving a certain time in the woods, alone or teamed up with other adolescents. We can go a similar route.
Young people are sent alone to another city far away which (preferably) they’ve never visited before, and are blocked from communicating with anyone they knew before, with the possible exception of other such adults-to-be. They’re given a certain allowance of money per month which is just above what’s needed to survive there, and some basic housing (maybe in the form of prearranged rent which they pay with their allowance or get kicked out == fail, maybe it’s just provided and the allowance is accordingly smaller). They are allowed, and perhaps facilitated, to team up with others being tested, and have some task they need to accomplish to pass the test. The task and allowance are chosen in such a way that they have some slack to trade off between comfort, fun and risk of failure, but not too much, so there’s still almost no way of passing without learning basic things like keeping a budget, cooking, doing laundry, etc. And of course they’ll have to learn to work in a self-motivated manner to complete the task. I’m not sure about the appropriate timeframe, but probably something between a month and half a year. That’s a long time for a ritual, but you can compensate for it with the task being something actually useful.
Which of course sounds like, and is modelled on, the freshman year for many people, with the main difference being the communication ban and the absence of any fixed schedule. Which – together with the rest of college – kind of sort of serves a similar purpose anyway, so why not just decouple it from education.
ETA: I actually like John’s suggestion more, but muh expenses muh mortality rates..
Meh, I feel much cynicism over how any proposed task would get gamed into easy mode by the upper class, so things quickly become YA Dystopia-Lite, and then warped by playground politics into a frat-hazing type ritual.
I sometimes forget the small, specialised ways in which America is bizarrely backwards. I’m pretty sure in most Western countries this would be more appropriate for a senescence ceremony than a coming of age one. I haven’t had a chequebook for well over a decade (and I barely used it when I did have one). Even my dad doesn’t use cheques any more, and he’s 63.
Have you talked to your bank about having them automatically write and mail the check for you each month? A nice way to automate that process for particularly particular landlords.
My usual process was to stop at the bank on my way home, get a cashier’s check* and an envelope, and walk straight there to drop it off. It was all pretty convenient seeing as the bank is 10 minutes’ walk from my house and the landlord’s drop box is right on the way. But it would be much more convenient if I could have it taken out of my account automatically, of course….
*cashier’s checks normally have a fee, but they always gave them to me for free
It’s definitely possible. “Online bill pay” as offered by most banks (not the type offered by the people you are paying) is effectively this. You input information into a form and they literally generate (and in some cases mail) a printed copy of an actual check. It’s a lot less “high tech” than people think.
You got me, although like Nick this is the single instance of check writing we do as a household. Also usually the single instance of cursive writing as well.
FWIW, the few times I use paper checks I print everything except my signature. I have only had one check rejected due to handwriting issues, but that was due to a combination of a) the amount being very large (it was for the down payment of my current home), b) my handwriting being VERY bad, and c) I lined out a mistake and initialed the line-out rather than “wasting” a check by starting again with a new one. My lawyer’s paralegal wrote out the check for attempt #2 for me.
I write a paper check and drop it off at the office in the apartment complex every month. The landlord recommended using Venmo, but I’m old fashioned and refuse to use it.
In Canada, almost 30 years ago, it was normal (and often required) to give your landlord a stack of checks, each dated for when they’d be due. US law would allow the landlord to cash them all right away, before the date on the check, so that wasn’t done there, as I discovered when I moved.
Here in the US, I still write an average of 1 check per week, handed to individual service providers who aren’t set up for credit cards, let alone something like paypal – either of which would charge them rather more than the bank charges for cashing my checks. (The bank probably requires that they have an account with some minimum balance, so there is a cost, but OTOH they need to have some bank account somewhere. Credit cards take a %, and I’m not sure what PayPal does.)
Most of my regular monthly payments are set up to be paid electronically from my checking account; that doesn’t work with fee for service.
I also use cash a couple of times in an average month.
I’m 62, living in the US.
None of this feels backwards to me. Except perhaps the inability to pay someone with a check they can’t cash right away, for service they’ve contracted to give you in the future.
I choose to pay my credit card bill with a paper check, as a firewall between random merchants and my bank account. Since I’m going to go through the trouble of reviewing the bill every month anyway, the added “difficulty” is trivial. Other than that, I’m with DinoNerd at one check a week for specialty merchants or service providers.
A technology being new and shiny does not necessarily make it better. Paper checks are very good for certain applications, and there’s really not much room to improve on them.
Since I’m going to go through the trouble of reviewing the bill every month anyway, the added “difficulty” is trivial.
Why use a paper check rather than your bank’s bill-pay service, though? As you say, either way you have to review the bill and using the service saves the (small but not zero) trouble of filling out the check and mailing it.
The “trouble” of filling out the check and mailing it is really quite close to zero. It’s not literally zero, but neither is using my bank’s bill-pay service. Or did you find a bank that employs staff psychics to divine and implement your fiscal intent without so much as a keystroke on your part? Because I’d be concerned with the security implications of that.
I choose to pay my credit card bill with a paper check, as a firewall between random merchants and my bank account.
I don’t understand how paying via check is any more of a firewall than paying your credit card bill via ACH or whatever electronic transfer initiated by the card company? They get your routing and account numbers either way, and in neither case is that information exposed to the merchants you paid with the credit card.
Why use a paper check rather than your bank’s bill-pay service, though? As you say, either way you have to review the bill and using the service saves the (small but not zero) trouble of filling out the check and mailing it.
Several people have brought up the bank’s bill-pay service, and I’ll be honest…I would rather just write the check myself than use the bill pay.
Of course, I would rather just pay the bill electronically myself than do either.
Or did you find a bank that employs staff psychics to divine and implement your fiscal intent without so much as a keystroke on your part? Because I’d be concerned with the security implications of that.
It sounds like you are envisioning an automatic transfer of whatever your bill is from the checking account to the CC account? Probably because you use a CC through your bank?
I have a CC through a different institution. Each month (or more often) I check the balance, skim the transaction list, then click the link to tell it to transfer the amount from checking to CC.
Do you expect at some point that the money will be transferred without your notice if they have your account info?
My bank is Wells Fargo, so the only thing their psychics can do is divine my intent to open an account that provides employees with the biggest bonuses. (Though they didn’t even do that with me, for some reason)
But I find it significantly easier to fill out a few boxes on a web page and hit “submit” rather than fill out a bunch of checks and put them in a bunch of envelopes and mail them. (It’s not “one envelope” and wouldn’t be even if I went down to one credit card, because there’s also power, water, and internet bills.)
@acymetric
Of course, I would rather just pay the bill electronically myself than do either.
Why is it better to pay the bill yourself (I assume you mean using the biller’s web site?) than using the bank’s bill pay? It’s an ACH either way. I use the bank because it collects them all in one spot.
I do the same as John Schilling, I think. Everyone is paid as anonymously as possible. Only the deepest intermediaries get a credit card, and only the credit card gets the check from the bank account.
“I sometimes forget the small, specialised ways in which America is bizarrely backwards. I’m pretty sure in most Western countries this would be more appropriate for a senescence ceremony than a coming of age one. I haven’t had a chequebook for well over a decade (and I barely used it when I did have one). Even my dad doesn’t use cheques any more, and he’s 63.”
The last time I paid rent was in 2012, and it was by check…
…but since my single biggest expense is California property tax I suppose you could say I’m still paying rent, just to the State of California. Other than that, I use checks to pay other State and Federal taxes, the monthly bill to the gas and electric company, and the every-three-months bill to the water and sewer agency (note: as a citizen/customer I notice almost no difference in service between the regulated “shareholder owned” power company and the water agency that used to be a private company decades ago before it was municipalized; both seem fine to me, and I feel no urgency for the State to take over one or privatize the other).
A few other checks in a decade for classes and recertification tests, and sometimes my wife asks for one to supplement the cash I give her, but that’s about it; mostly I pay cash, though my wife uses cards more than cash, and she buys stuff on-line, which I only do about twice a decade, the last time for brake pads for a 1979 Raleigh Tourist I gave to my son, which he uses to get to his computer classes; I had to order the pads from a shop in Boston, Massachusetts, as none of the local shops had them or would order them (I have mixed feelings about this: on the one hand my still being able to get the parts is great, on the other hand in a more just, true, right, good, and beautiful world I should be able to walk into one of the two bicycle shops that are within a mile of my house and pay cash for the parts, and for that matter pay for a newly made full bicycle that is in every way identical to the then-new made-in-England Raleighs or made-in-the-U.S.A. Schwinns of the ’70s, or even better a made-in-France René Herse of the 1940s; also more Chuck Berry and Creedence Clearwater Revival should be on the radio, and Doctor Who and Star Trek should be on broadcast television!).
Something that’s struck me this week with the Democratic primaries is how uninfluential the “Blue Tribe” has been on the “Blue” Party in 2020.
In 2016 non-college graduates favored Sanders more than they did Hillary, and (just like then) he’s second in delegates. Maybe he’s favored more by grads now, but all I’ve seen is stuff about how he’s favored by younger, poorer Democrats, especially Latinos.
On Warren’s failure to win more delegates Matthew Yglesias of vox.com wrote:
“…Warren has been further hampered in 2020 by the fact that two other candidates, Pete Buttigieg and Amy Klobuchar, were also occupying her same “educated white people” lane, albeit with more moderate ideological profiles. If you think about politics in highly ideological policy-oriented terms, that may seem odd, but the fact is a lot of people just aren’t that ideological and, to an extent, the primary sorted into a Biden/Sanders working-class camp and a Warren/Pete/Klobuchar white-collar one. Buttigieg, famously, is almost ostentatiously smart — speaking a little Norwegian and checking all the boxes on the high-achiever résumé before becoming mayor of South Bend, Indiana. And Klobuchar, like Warren, is actually the author of a good serious book, Uncovering the Dome, a case study in the corrupt politics of municipal stadium deals.
Within that electoral niche, Warren has done the best (and is still in the race). Unfortunately, she split the educated group three ways while Bernie Sanders and Joe Biden divided up the larger working-class bloc along age and ideological lines.
If you feel like Warren is very impressive and lots of people you know feel the same way, you’re not imagining it — lots of people just like you all across the country feel the same way.
It’s just that most Democrats aren’t all that much like you”
I’ve banged on this drum before: the ‘Blue-Tribe’ is small, and the leading candidate for the Democratic nomination is ahead because of ‘Red-Tribe’ (and Red State) support within the ‘Blue’ Party.
About 45% of American eligible voters just don’t vote. Among voters, about 40% are Republicans or strongly Republican-leaning “independents”, slightly less than half lean or are Democrats, and slightly over 10% are ‘swing’ and third-party voters (the last time I checked). Only about one-third of 2016 voters had college degrees, while the share of graduates among Hillary Clinton voters was higher, at 43 percent. So 55.7 (percentage 2016 turnout) × 48.2 (percentage who voted for Hillary) × 43 (percentage of Hillary voters with college diplomas) gives us just under 12% of eligible voters who are both “Blue-Tribe” and ‘blue voters’, which is just under 21% of actually-bother-to-vote voters (not counting college students, but many who attempt college don’t graduate, and the young vote far less).
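A quick back-of-envelope check of that multiplication, as a minimal Python sketch (the percentage figures are just the rough estimates quoted above, not authoritative numbers):

# Back-of-envelope check of the "Blue-Tribe blue voter" estimate above.
turnout = 0.557        # rough share of eligible voters who voted in 2016
clinton_share = 0.482  # Clinton's rough share of the popular vote
grad_share = 0.43      # rough share of Clinton voters with college degrees
print(f"{turnout * clinton_share * grad_share:.1%} of eligible voters")  # ~11.5%, i.e. just under 12%
print(f"{clinton_share * grad_share:.1%} of actual voters")              # ~20.7%, i.e. just under 21%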
I don’t want to give the impression that “the blue tribe” isn’t influential; Hillary was almost our President and she’s who the Blue-Tribe supported. I just want to remind y’all how small the Blue Tribe is, and that they can’t win elections by themselves (outside of some cities, and maybe Massachusetts).
For “Red-Tribe” Democrats, I’d define them as “Democrats without college diplomas, especially rural ones”, i.e. a lot of the South Carolina primary electorate.
I’d define “Blue-Tribe” Republicans as “Republicans with college diplomas who live in cities” (a small percentage of the electorate).
“Red-Tribe” Republicans are, well, most Republicans.
“Blue-Tribe” Democrats are (see above) about 43% (and growing) of Democrats.
My line of thought on this is kinda spent now, but I invite others’ conclusions and comments on these matters.
The problem with being a core constituency for a party is the party tends to take you for granted. If the “Blue Tribe Democrats” aren’t in play, there’s no reason for the Democrats to pander to them and every reason for them to attempt to peel off possible Trump voters (such as the Red Tribe Trump voters who usually voted Democratic).
“The problem with being a core constituency for a party is the party tends to take you for granted. If the “Blue Tribe Democrats” aren’t in play, there’s no reason for the Democrats to pander to them and every reason for them to attempt to peel off possible Trump voters (such as the Red Tribe Trump voters who usually voted Democratic)”
Yes, there’s been an argument among Democrats over whether to pursue more “suburban women” or win back “rust belt voters”. If “suburban women” were the decisive electorate, then Buttigieg, Klobuchar, or Warren would still be in the running. Sanders is running a hard “win the working class” campaign, but what he’s won is youngsters instead. Biden is a compromise candidate: he doesn’t scare older suburbanites as much as Sanders, but he doesn’t have as much anti-magnetism to working-class white voters as Hillary did (black women still turned out for her, black men not so much, and white men, especially non-collegiate white men, just said “Nope!” to Hillary and voted for Trump).
I think it’s not so much about raw numbers as about the identity of battleground voters. If the election is going to be decided by working class voters in the Rust Belt, you’d better appeal to them. Perhaps in the past, those voters were more reliably Democrat and a higher proportion of highly educated voters were moderates who could be swayed to either party, which incentivised a different kind of pitch.
@Tarpitz,
Yes, see my response to @The Nybbler. I’m trying to bite my tongue and not do any gloating in front of the lovely ladies of the Blue-Tribe (I just don’t care as much about the gents’ feelings), but it’s hard, as Biden #1, Sanders #2, the rest (especially Bloomberg) out, is almost the best that I could hope for (a Jim Webb/Sherrod Brown ticket in 2016 would’ve been even better though!).
Blue Tribe definitely can’t win elections by themselves, but I think you’re underestimating their strength somewhat. Pete and Klobuchar had some decent performances in NH and Iowa, and Warren still earned a hell of a lot of votes. There’s also still Blue Tribe support lining up behind Biden, Bloomberg, and especially Bernie, especially among younger generations. Where Bernie got clobbered in 2016 was with black voters older than Millennials, who are decidedly NOT Blue Tribe.
Blue Tribe is influential enough that they are pulling the entire party leftward, and this has been extremely obvious for the last 2 cycles, and visible for the last decade. 2008 Obama wouldn’t get the time of day in this primary, let alone 2004 and 2000 Kerry and Gore. Even 2004 Dean would be a moderate! And, yes, Biden will be the “consensus” and “moderate” candidate, but the platform is being pulled left.
Democrats are not just a party; they’re a community. In my years of covering politics I don’t think I’ve ever seen anything like what happened in the 48 hours after South Carolina — millions of Democrats from all around the country, from many different demographics, turning as one and arriving at a common decision.
It was like watching a flock of geese or a school of fish, seemingly leaderless, sensing some shift in conditions, sensing each other’s intuitions, and smoothly shifting direction en masse. A community is more than the sum of its parts. It is a shared sensibility and a pattern of response. This is a core Democratic strength.
Intersectionality is moderate. Campus radicals have always dreamed of building a rainbow coalition of all oppressed groups. But most black voters are less radical and more institutional than the campus radicals. They rarely prefer the same primary candidates.
If there’s any intersectionality it’s in the center. Moderate or mainstream Democrats like Biden, Clinton and Obama are the ones who put together rainbow coalitions: black, brown, white, suburban and working class.
The new Democrats are coming from the right. Bernie Sanders thought he could mobilize a new mass of young progressives. That did not happen. Young voters have made up a smaller share of the electorate in the primaries so far this year than in 2016 in almost every state, including Vermont.
Turnout was up by 76 percent in the Virginia suburbs around Washington, Richmond and parts of Norfolk. Turnout was up 49 percent over all in Texas. Many of these new voters must be disaffected Republicans who now consider themselves Democrats…
Okay, if Brooks’ guess is correct, anti-Trump former Republicans are actually a thing instead of a rare few (my mental model had Trump being the candidate their base had been waiting for). Or, since Texas and Virginia hold “open primaries”, it could be that Republicans crossed over to vote for the less socialist Democratic candidate but will still vote for a Republican in November.
The polling on black Democrats is more clear though, they mostly just want the candidate with the best chance to win in the general election.
There are some echoes of 2008 and 2016, but this election cycle seems weird to me: first, the strongest generational divide in candidate preference that I’ve seen in decades (maybe since ’76? hard to tell, I was eight years old then!); next, black Democrats trying to guess how white Americans will vote; and then older white Democrats taking their cue from black voters.
I was struck by the reports of the campaign in South Carolina, by how much the Democrats there seemed to be swayed the way old-fashioned Republicans were: endorsements from elder statesmen, church meetings instead of campus rallies. I’m now more convinced than ever of the numbers and importance of “Red Tribe Democrats”, but yeah, the Blue-Tribe is the tail that wags the dog; every Democratic candidate, including Biden, is now running to the left of Obama and Bill Clinton.
Minor point, but I wouldn’t actually expect “Blue Tribe Democrats” to be a group that’s growing all that much.
Perhaps I’m defining “Blue Tribe” too narrowly but I always thought there was more to it than just “urban/college-educated;” especially going on Scott’s original definition the more people and more diverse people that have college degrees, the smaller the proportion that hit his other characteristics (which were largely Stuff White People Like).
Plus that’s probably the least-likely to reproduce of the four categories you bring up, so they’re not going to grow the “old-fashioned way” either.
Or perhaps I misread, and you mean specifically that they’re growing as a proportion of Democrats, not simply growing. That makes more sense.
The concepts are significantly worse than uselessly vague. Red tribe, blue tribe, and gray/grey tribe (maybe just the word tribe altogether) should be added to the banned words list for comments.
I’ll give the benefit of the doubt that the individuals that use the term a lot are using the term with a consistent definition rather than using it to mean different things when it serves whatever argument they are presenting, but collectively everyone means something different when they use those terms.
I don’t even think Scott did a terribly good job of articulating what he meant by each tribe in his original post (it was a good post in terms of promoting some thinking about how people group together and along what axes, but it certainly didn’t provide some clear framework for how to actually assign people to a tribe in discussion, even though that is how it is used in the comments here).
@acymetric says: “The concepts are significantly worse than uselessly vague. Red tribe, blue tribe, and gray/grey tribe (maybe just the word tribe altogether) should be added to the banned words list for comments.
I’ll give the benefit of the doubt that the individuals that use the term a lot are using the term with a consistent definition rather than using it to mean different things when it serves whatever argument they are presenting,”
Thanks
“but collectively everyone means something different when they use those terms.
I don’t even think Scott did a terribly good job of articulating what he meant by each tribe in his original post (it was a good post in terms of promoting some thinking about how people group together and along what axes, but it certainly didn’t provide some clear framework for how to actually assign people to a tribe in discussion, even though that is how it is used in the comments here).”
I think our host was clear in his initial post about what he meant. I also think he was wrong in a lot of his assumptions, and some of his subsequent posts, especially his New Atheism: The Godlessness That Failed post, really confirmed for me that his model of what most Democrats and Republicans are like (if you include the bulk of his listed characteristics in his I Can Tolerate Anything Except The Outgroup post) is just wrong. He’s brilliant, and we all have blind spots (Lord knows I can’t follow a lot of the millennial/internet/pop-culture references discussed here), but he really seems to me to make the perception mistake described in The New York Times piece The Democratic Electorate on Twitter Is Not the Actual Democratic Electorate. He’s very internet/Millennial focused, and his model of “meat space” isn’t mine. He admits that he (and many, many people) have a unique “bubble”, and I give him credit for that, but damn, when basic assumptions about the world are so different, communication is difficult!
I agree the way the tribes are deployed are not great, but part of what they get at is class, and we need to talk more about class in the US, not less.
I’m all for talking about “working class whites” or “educated suburbanites” or whatever, but when people say red/blue tribe, it’s often unclear what they are actually referring to. Everyone seems to have a different idea of what it means.
“Minor point, but I wouldn’t actually expect “Blue Tribe Democrats” to be a group that’s growing all that much.
Perhaps I’m defining “Blue Tribe” too narrowly but I always thought there was more to it than just “urban/college-educated;” especially going on Scott’s original definition the more people and more diverse people that have college degrees, the smaller the proportion that hit his other characteristics (which were largely Stuff White People Like)”
My first response to our host’s list of Blue-Tribe characteristics was: “All of this list looks like some ladies I know (and a couple of gents in the public defenders office), most of the list looks like many ladies I know”, and my first response to @Scott Alexander’s Red-Tribe characteristics list was: “All of this list looks like some men I know (and one lady deputy, and one lady plumber), most of the list looks like most guys I know”.
Really, the “Blue-Tribe” is “most of Scott’s family, former classmates, and co-workers”, and the “Red-Tribe” is “what @Scott Alexander imagines most Republicans are like”, or the folks whom “Stuff White People Like” would call “the wrong sort of white people”.
For my use I have Blue-Tribe = urban professionals (academics, lawyers, physicians, public-school teachers in large cities), and Red-Tribe = the non-college graduate majority, especially those who don’t live in large cities.
There’s some wiggle room here: more college graduates are atheists than non-college graduates are, on average, but frequent church-goers are also more likely to be college grads (“the working class” in the sense the NYTimes uses the term tends more to be believers but only intermittently go to church). In our host’s “the tribes as cultures” sense, without knowing anything else about them, I’d guess a college-graduate atheist, Methodist (probably), or Unitarian (very likely) who lives in a city is “Blue-Tribe”, but a college-graduate suburban Baptist is “Red-Tribe”, as probably would be a non-college-graduate Baptist, whether city, rural, or suburban. Catholics are a little harder, they’re so diverse in many ways; I’d still go city and collegiate = Blue-Tribe, non-collegiate and rural = Red-Tribe, but I’m sure there’s someone (likely a woman) who didn’t graduate from college, lives in the country, but buys “organic food”, drives a Subaru and listens to Joan Baez, and I’m sure there’s a college-graduate guy who lives in the city, but drives an F-150, collects guns, and listens to Hank Williams Jr. (actually I know there is, his name is Gary).
“Plus that’s probably the least-likely to reproduce of the four categories you bring up, so they’re not going to grow the “old-fashioned way” either”
Here are two groups of people. Which one does Joe Biden instinctively fit into:
Group 1: Jimmy Carter, Ronald Reagan, Bill Clinton
Group 2: Mitt Romney, John Kerry, Bob Dole, Walter Mondale, George McGovern
Obviously these are all the candidates who ran against an incumbent President, and one thing that sticks out to me is the charisma deficit the second group has against the first.
Joe Biden seems more like the next member of group 2 than the next member of group 1, entirely agnostic of the actual underlying election mechanics.
This is probably based on my personal bias, of course, but I’d love feedback on this.
Group 1 is full of people who successfully presented (deserved or not), some sort of camaraderie with and/or affinity for the blue-collar working man.
Group 2 is full of people who may have tried to do that, but mostly failed at it, and came across as technocratic elitists (at least compared to their direct competition).
Biden is winning the primaries specifically because he’s closer to Group 1 than Group 2, at least as compared to his current competition. As compared to Trump, he’ll probably lose that particular battle, but still…
I am not saying there is a “right” or “wrong” choice for them. Charisma jumped out to me, so seeing “blue collar” as something else that jumped out is interesting.
I also think it’s possible that we’re all, collectively, as a society, post-fitting our models here.
Like, the fact that Person X wins an election over Person Y means they must have been more charismatic, right? Or that they must have done a better job appealing to blue-collar voters, right?
Because we all assume that blue-collar voters are the swing voters who decide elections, and that they always prefer the more charismatic candidate.
In a world where a thousand people in Florida vote differently in 2000, I definitely think popular history would record that Al Gore was considered a popular man of the people as opposed to George W Bush who was considered an entitled and spoiled child of an elitist political legacy family.
The most obvious distinction between Group 1 and Group 2 is that Group 1 consists of famous winners, Group 2 consists of famous losers. That’s probably the causative factor here…
I did exclude no-incumbent elections for that reason.
But let’s look at objective differences. Group 2 has no governors (kind of Romney, although he wasn’t serving at the time), while Group 1 is only governors, for example.
I suspect that you can find pre-2004 election sources confirming that Kerry was perceived as more elite than Bush.
Bush had an elite background, but he did a good job of pretending to be “the guy you want to have a beer with”. And of course, the media was happy to play up his supposed stupidity, which made him seem less elite.
@EchoChaos says: “Here are two groups of people. Which one does Joe Biden instinctively fit into:
Group 1: Jimmy Carter, Ronald Reagan, Bill Clinton
Group 2: Mitt Romney, John Kerry, Bob Dole, Walter Mondale, George McGovern”
I don’t remember McGovern in ’72 (I was four years old and we didn’t have a television!), but I remember when he ran again in 1984, and from that list Biden most reminds me of Bob Dole, and he least reminds me of Kerry, with Romney a close second for “least seems like”. Of recent candidates for President, the ones Biden reminds me most of are Howard Dean and George W. Bush.
“Obviously these are all the candidates who ran against an incumbent President, and one thing that sticks out to me is the charisma deficit the second group has against the first…”
From list one, when Biden is being empathetic there are some Carter-like touches, but I absolutely can’t imagine Carter ever saying something like he’d like to “take Trump to the back of the gym”, though I can imagine both Reagan and Clinton saying something like that. Clinton was a chameleon, with charisma to spare, and Reagan had literally been an actor; Biden just isn’t as persuasive as Clinton and Reagan were, so on balance I’d place Biden more with group two.
Force distribution on your buttocks.
On a chair, especially one that’s upholstered, you’re supported by a large area, centred on each buttcheek.
On a toilet seat, you only have support from a ring, and much of the force ends up on the upper thighs.
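A back-of-the-envelope pressure comparison makes the point. The numbers below (body mass, fraction of weight carried by the seat, contact areas) are rough guesses for illustration, not measurements:

```python
# Rough seated-pressure comparison: upholstered chair vs. toilet seat ring.
# All inputs are illustrative guesses, not measurements.

g = 9.81                      # m/s^2
body_mass_kg = 80             # assumed body mass
seated_fraction = 0.65        # assumed share of weight carried by the seat
force_n = body_mass_kg * g * seated_fraction

contact_areas_m2 = {
    "upholstered chair": 0.10,  # broad support under both buttocks (~1000 cm^2, guess)
    "toilet seat ring": 0.03,   # narrow ring, partly on the upper thighs (~300 cm^2, guess)
}

for surface, area in contact_areas_m2.items():
    pressure_kpa = force_n / area / 1000
    print(f"{surface}: ~{pressure_kpa:.0f} kPa")

# Same load over roughly a third of the area gives roughly triple the pressure,
# concentrated on the ring and the upper thighs.
```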
I don’t understand how this is such a problem for people.
You sit down, lean forward, and shove it out. Takes 2-3 minutes tops. Why would someone sit on a toilet for 20-30 minutes doing nothing?
Carl Sagan’s Pale Blue Dot speech and Charlie Chaplin’s speech from The Great Dictator are both well-known as inspirational speeches with a humanist bent. What are some examples of similar speeches?
The I Have a Dream speech by MLK is probably the most well-known example among Americans.
The Oration on the Dignity of Man is a speech that was drafted (but not immediately given, because he was prevented by the Pope) by Pico della Mirandola in 1486. It is a bold-text item in what remains one of the standard American textbooks on the Renaissance, so while it’s not exactly well-known, it’s about as well-known as these things get. It’s not quite humanist, exactly, but it’s really quite something when you consider the time period.
Possibly a bad example, but John Galt’s speech in Atlas Shrugged. (A bad example because it’s a hundred pages long, well over an hour if delivered aloud, and I’ve never managed to make it through the whole thing.)
For twelve years, you have been asking: Who is John Galt? This is John Galt speaking. I am the man who loves his life. I am the man who does not sacrifice his love or his values. I am the man who has deprived you of victims and thus has destroyed your world, and if you wish to know why you are perishing—you who dread knowledge—I am the man who will now tell you.
Paternity Leave is ending soon, so I’ll chuck up a list of pros and cons.
Pros:
-Blood pressure down 15 points
-No heart palpitations!
-Weight down 6 pounds
-Sleep 6+ hours everyday (you can always nap while baby naps)
-Dishes done every day
-Laundry done every day
-Nice home cooked meals every day (we made Sauerbraten yesterday: it was pretty sweet! Not the thing I would normally cook when I have to work constantly)
-Lots and lots of time to D&D Prep
-Take nice long walks during the day, when it’s warm and sunny
-More time to read books during the day
-More time to yell comment on the internet
-Time to binge entire seasons of shows on Netflix or Amazon Prime while feeding baby
-Play with baby whenever I want
-Do not have to leave baby with strangers I barely know
Cons:
-Not currently sustainable due to $$$
-Simmering feeling of “what the fuck am I doing with my life” constantly is there, because you know you have to go back to work and are picking money over baby time. And you will continue doing this, until you are 65, when there is a solid chance you will be dead 5 years after that, and a large chunk of that time may be spent at an especially poor quality of life. 5 years is nothing. It is the time I’ve been married to my wife.
Things that were not cons:
-Not being able to talk to adults: I get to talk to adults a lot. I still have lots of friends that I text through the day. We hosted a dinner party and several D&D sessions over paternity leave, and I got to spend more time with both my parents and my in-laws. If anything, I get to spend more time dealing with adults that I actually want to see. The local library also has events 3-4 times a week for babies/parents where you can interact with other adults. Granted, it’s almost exclusively stay-at-home Moms, but there are adults.
-Not working on interesting problems: Guys, I don’t know what to tell you. I’m working a job that College ADBG would have dreamed about. It’s not fun. It sucks. It sucks less than most other jobs, but there’s no way on Earth I’d be doing it if I were doing it for free.
Ending conclusion:
-Keep working, make sure my kids do not graduate with student loans, so they have an easier time being stay-at-home parents, if they so choose.
-Revealed Preference: Not Having to Worry About Money and Having Nice Things is worth an awfffffffullllll lot, or else I just wouldn’t show up on Monday.
I feel like I know you well enough that anything I could say on the policy or revealed preferences would be obvious and we’d probably agree. I guess one possibility is that you could start adjusting your remaining career towards working fewer hours or retiring sooner, but at a lower consumption level.
This reinforces a point I made in a recent discussion, that in general, stay at home moms are not especially burdened by the experience, compared to the likely alternative. Exceptions abound, but there is more variety, stimulation, and socializing available in that “career” than is often portrayed, and less in the average career than is often assumed.
Which doesn’t say it is for everybody, or even of course every woman.
But the dissatisfaction that it is assumed to bring is probably related to either burnout in initial extremely busy period, uncertainty, not taking advantage of options like meeting with others or reading, etc., or making comparisons to particularly interesting or high status careers that will never be options for everyone.
Anyways, I’m glad you took advantage of the time to get acquainted with a new person.
This reinforces a point I made in a recent discussion, that in general, stay at home moms are not terribly burdened by the experience. Exceptions abound, but there is more variety, stimulation, and socializing available in that “career” than is often portrayed, and less in the average career than is often assumed.
Which doesn’t say it is for everybody, or even of course every woman.
We currently do a lot in our society in order to persuade, cajole, and push women into careers. I wonder how different the numbers would look if we weren’t doing that nearly so much. Sort of the way that in more egalitarian nations fewer women choose to be engineers not because of sexism, but because they just don’t want to be engineers.
As a datapoint the other way, my wife stayed home with our three kids, and has found it quite hard to go back to work in her field–her experience and knowledge were too far out of date, and the “start from the bottom” kinds of jobs she could reasonably get weren’t too appealing for a mom in her 50s. She’s working again, and enjoying it, but in a completely different field.
That’s… not really at all a data point the other way. (Or rather, it is, but only because of the overly broad phrasing of my thesis, I suppose).
It’s true that homemaker skills might not transfer or that a particular career might not be able to be learned/advanced from the sidelines–that’s a very reasonable concern for a woman with long-term career goals, and an example of a possibly overlooked sacrifice parents might make for children.
The question is, while devoting herself to childcare, did she find it particularly miserable? (I’m open to the idea that I’m generalizing from unrepresentative samples)
“…This reinforces a point I made in a recent discussion, that in general, stay at home moms are not especially burdened by the experience, compared to the likely alternative…”
FWIW, my wife has been stay at home since ’93, but after our youngest son was born in ’16 she’s expressed a lot more interest in working or being a student again.
“How many kids do you have? There’s a 23 year old age gap between youngest and oldest?”
Our first son didn’t live long and we were childless together with my wife being stay at home for over ten years, our second son will be sixteen years old in January, and our third son will be four in June, so about a 12-year gap between them.
she’s expressed a lot more interest in working or being a student again.
Not to go off on a tangent (by which I mean I don’t think the following directly addresses your point) but I’d love to be a student again, too, assuming I didn’t have to also work. College was the most carefree, fascinating, socially engaged time of my life. My mom practically had to drag my dad away when they dropped me off for orientation.
People also often want a change in their life at some point; see the whole mid-life crisis cliche among career men in their, what, 40’s?
But I do take your point, I’m probably over-looking plenty of difficulties. Especially when you are nearing fifty with a toddler. (?)
My college days were not carefree, but I definitely miss the easy socializing.
Oh, there’s the difficult weekend writing a paper or cramming for finals here and there, and there was the time I blew up a beaker while dissolving a cat’s leg in Anatomy…lol… but it wasn’t like anyone’s life depended on anything I did.
A lot of that carefree feeling was deferring expenses via subsidized loans that later made the adult years more stressful… but now I am on a tangent.
@Plumber
Didn’t mean to pry; I was just thinking stamina, really. Even at forty there’s a difference between how we feel having been kept up all night now versus ten years ago.
This reinforces a point I made in a recent discussion, that in general, stay at home moms are not especially burdened by the experience, compared to the likely alternative.
Nor stay-at-home dads; my brother has been enjoying that experience more than he did his abbreviated professional career. Not suffering any lack of adult social contact, largely free to pursue his interests/hobbies, on top of the bit where he has two kids to play with.
There’s definitely a “grass is always greener” mentality, on both sides. For one, I am only taking care of a single infant. That’s a much different story than taking care of multiple toddlers, because toddlers tend to require a good deal more attention than infants.
However, most jobs involve a great deal of stress and responsibility, at least that ones that pay good money. I still have to do a bit of work pretty much every single day on my paternity leave, but it’s like 20-30 mins a day rather than my whole freakin’ day. Plus, a lot of the stuff you deal with is insanely boring or political.
No, we’re pretty good at keeping them too disunited to effectively rebel.
Seriously, I’m trying to think of a historical situation where the sub-states quarrelled more with each other than with the emperor. I think the problem is with framing them as the aristocracy, concerned with their own political power, when they’re more like squabbling peasants, concerned over property rights or who farted in whose direction.
I took three weeks of paid paternity leave for the birth of my most recent daughter about a year and a half ago, and I found that I did miss work for automatically handing me manufacturing challenges more severe than I would give myself, and forcing me to solve them.
I really like my job and don’t feel that it sucks.
I think there’s a really underestimated feature of work there. I have a life that’s a fair amount too easy – I have to goad myself to take on challenges that most jobs provide you with for free. And then people pay you to have the satisfaction of undertaking them!
I work in a manufacturing facility, so that’s a no-go. Technically, yes, I am allowed to do so per corporate policy. In reality, no, not allowed, because the factory feels it would impact performance (and would definitely impact morale if people see the factory controller getting to work from home while they need to come to work).
Simmering feeling of “what the fuck am I doing with my life” constantly is there, because you know you have to go back to work and are picking money over baby time. And you will continue doing this, until you are 65, when there is a solid chance you will be dead 5 years after that, and a large chunk of that time may be spent at an especially poor quality of life. 5 years is nothing. It is the time I’ve been married to my wife.
Don’t do some Cat’s in the Cradle shit, man. Spend time with your kiddo, as much as you can. My parents always intended to pay for my education, but my mother dying of cancer put a bit of a damper on that. If my dad had thrown himself into work hard enough to pay my way, I probably wouldn’t have ended up nearly as happy or well-adjusted, or have nearly as good a relationship with him. It cost me a couple years of my life that I dedicated almost entirely to paying off loans, but I gained the kind of childhood my friends are jealous of. I wouldn’t trade that away for those years back.
Congratulations on having an experience most men will never have!
Sleep 6+ hours everyday (you can always nap while baby naps)
Yes, definitely do this. It is tempting to use the “free time” when your baby naps, but it is better to get enough sleep whenever you can.
Dishes / Laundry / Nice home cooked meals every day
This week I am working remotely from home. When I need a short break from work, instead of going to the company kitchen for some free coffee, I can do some dishes or get a bit of exercise instead. It frees my mind the same way, and at the end of my working time the dishes are done; it’s not something I need to do in the evening when I am tired. This small difference already improves my mood.
Cooking at home is great for health, and it saves money. When I am at work, I have lunch in the center of the city for 7 €. When I am at home, I make lunch for the entire family for 3 €, and I have control over how much salt and sugar goes into it.
More time to …
Yeah, if you can multitask something with child care, you get virtually unlimited amounts of time to do that. With small kids — learn to do things one-handed. With bigger kids — keep talking to them, or give them something interesting to do.
Not currently sustainable due to $$$ / Simmering feeling of “what the fuck am I doing with my life” constantly is there … until you are 65, when there is a solid chance you will be dead 5 years after that
I have this feeling for the last 15 years. It’s like being at a playground, but not allowed to play. There are so many awesome things I would like to do. But instead I spend most of my day at job, and then I am too tired and frustrated to do anything meaningful; and there is a good chance it will go on exactly like this until I die. All the things that I dream about… will remain a dream.
My salary is big enough that I can support my wife staying at home. Unfortunately, her salary is not big enough that she could do the same for me. Not everyone is a software developer. (Perhaps me working half-time and her working full-time could pay our bills together. I mean later, when the kids are of school age.)
Curiosity, Opportunity, and Spirit very much fit in with one of the Puritan trends of naming children (especially women) after virtues. It is not a big step to branch out to other Puritan naming conventions. Therefore I propose we call the next rover “If-Christ-had-not-died-for-thee-thou-hadst-been-damned.”
the Puritan trends of naming children (especially women) after virtues
I suppose that’s better than the Victorian habit of naming their daughters after ornamental vegetation (Rose, Violet, Daisy, Lily, Briar, Heather, etc).
And then there’s the Roman habit of just calling their daughters by the feminine form of their father’s family name, and numbering them if there’s more than one who needs to be distinguished. When I first started reading up on early-Empire Roman history, it struck me as odd that half the women seemed to be named “Julia”, and that turned out to be the reason for it (“Julia” = feminine form of “Julius”).
I suppose that’s better than the Victorian habit of naming their daughters after ornamental vegetation (Rose, Violet, Daisy, Lily, Briar, Heather, etc).
My impression based on looking at 19th Century baby name tables in the interests of writing research was that the Victorians mostly named their female children Mary.
Is it useful to reify parasite load?
Why not talk about specific parasites?
We do things to reduce the load of polio and measles. Sanitation is a general tool for reducing infection. (Though there is a theory that sanitation delayed polio from infancy to childhood, to disastrous effect.)
Cochran-Ewald-Cochran argue that lots of diseases (e.g., schizophrenia) are caused by parasites, without identifying the parasites. This seems like a useful argument, but adding the numbers up and calling the total “parasite load” doesn’t seem very useful to me.
I came across the term, Wikipediaed it, and still don’t know much about it. Is there a list of the parasites that infect humans, with their effects, causes, tests, methods of invasion, and where they’re found?
I vaguely remember hearing a theory that a lack of parasitic load might be part of why allergies are so much more common.
Disclaimer: I don’t remember the source, and I’m not sure allergies are actually more common nowadays. But if this sort of thing interests you, it might be worth checking; unfortunately, I currently don’t have the time to do so myself, and I hope going “I think I remember seeing something interesting this way” is OK with the appropriate disclaimer.
I’ve found I like Explore Cuisine edamame pasta – the flavor is a little odd, but I can tolerate it, and the texture is excellent. I need to check, but I think they’re a good bit cheaper at Sprouts than at Amazon.
Anyone know of a good forum for discussing qi gong in general?
The forums I’ve found are for specific systems, but I’d like to find a forum which permits comparing systems and possibly discussing side issues. I’m hoping this exists.
I realize the requirement for only discussing a specific system may be the result of bitter experience.
I’ve not come across any general forums. Perhaps there’ll be enough interest here to generate some useful discussions?
I regularly practice some chi gung (if you’ll forgive my spelling..) mostly as preparation for tai chi. Not a specific system, just some exercises recommended by my first tai chi teacher.
One thing that stands out to me is that what exercises I do is much less important than how I do them. Hence I always think of them as ‘silk-reeling’ exercises.
I snagged the question from a semi-private discussion elsewhere.
One thing mentioned was a society which has tailoring but not scissors, and I realized I have no idea whether tailoring could be done if all you had was exacto knives, leaving aside the question of the likelihood of having exacto knives (possibly with heavier blades) without having invented scissors.
My contribution was an X-Files (?) movie where an ancient virus was presented as especially dangerous. I *think* it not having evolved in the presence of modern immune systems would tend to make it less dangerous, but I suppose it could go either way.
The Movie Looper – the plot relied on the future mob sending people back in time to be eliminated by hitmen in the present day. The hitmen would kill these unwitting time travelers for a while and were paid in bricks of silver that were sent back in time with the victims.
Sooner or later, however, the hitman’s future version would be eliminated by the future mob (to get rid of the witnesses). They would be sent back in time to be killed by their past version who would know what was going on since the hitman would be paid in gold bars for this job.
I never understood why the mob just didn’t send the future version of the hitman to a different hitman. Why send someone you want to kill to the only person on earth with a vested interest in not killing them? Over the course of the movie we see this happen two times, and each time the past version wimps out and refuses to off their future version.
If hitmen were unionized, work rules would require 10 hitmen for each victim, and then you’d need 100 hitmen to off the 10 hitmen and it would just spiral out of control.
According to the premise, there are “tracking systems” that make it near impossible to dispose of a body in the future. How disposing of them into the past is a workaround for this I don’t know…wouldn’t the tracking system still know the last place the victim was before the time travel event? Since the victims are bound up and masked, you would think their last known location would still be “boss’s lair with the time machine.”
Also, I’m not sure why being able to track the bodies matters that much. Great, you’ve tracked the body to the volcano. Good luck gathering forensic evidence.
Oh, but as for why time travel is a good idea for active volcano disposal, we know when the volcano was active in the past, and we’re assuming it’s not active in the future/present. So you take the victim to the dormant volcano and then chuck them into the past when the volcano was active.
Ok, maybe they had to use time travel, but then they could have strapped a bomb to their neck rather than a silver bar to pay a killer, or just teleported them above a volcano as Conrad says, or into the ocean, or into a mountain. Sloppy world-building.
I see a pattern here:
“So, to me the notion of what’s the entire galaxy or world that you are creating or something, I can’t imagine getting excited about creating that. To me what I’m excited about is creating a two hour long experience for an audience to have in the theater. And that means how they engage moment to moment with the story and the characters that are on the screen. And that doesn’t change in either one of those.”
Ok, Rian, this may work for comedies/murder mysteries like Knives Out, but please stay away from fantasy and sci-fi.
In an early James Bond film, the moment when one car chases another down a gravel road…and the sound effects for car-going-around-a-corner include squealing tires. The kind of squealing-tires sound that happens on pavement, and not on gravel.
I don’t remember which Bond film it is, but I know that it starred Sean Connery. At the time, I was trying to watch the Bond movies in sequence.
Trains driven by diesel and nuclear engines through an interstellar network of wormholes. As well as at least a dozen other details in Pandora’s Star by Hamilton, but that’s the most egregious. If you can open a wormhole to anywhere within N light-years, you certainly can open it a hundred or two kilometers above, build as many solar panels there as you need, and lay a cable through it. In fact this possibility – going to space by just opening a wormhole there – is specifically mentioned in the book and used for a couple of plot-critical things. Hell, Earth in fact does have all its energy generation done by solar panels on the Moon! However, all the other planets prefer to enjoy their smog and oil spills (also mentioned in the book a good number of times) or go primitive, except for those few that can afford 21st-century clean energy technologies.
I think it’s a recurrent theme of Hamilton’s: humanity reaches for the stars and then proceeds to make all the same mistakes as its forebears, often in exacting detail. One colony expedition, I believe, doesn’t even bother to supply the settlers with power tools, which is just insane given they got there via FTL spaceship.
That happens on the character level too. Ozzie Isaacs spent a few hundred years as the richest person and the most famous adventurer in the whole galaxy, and yet he doesn’t know how to console – or send off – a teenager, or how to hook up with a woman. Many others are also surprisingly naive about some things for their stated multi-lifetime experience.
I’ve recently been binging on old Doctor Who (specifically, early Jon Pertwee) and I’ve had a fair bit of that with the Ambassadors of Death storyline.
I know Doctor Who well enough to be aware that the show’s approach to science was always fast and loose, but this particular story’s take on radiation was giving me constant needle-scratch moments. For a start: if something is radioactive enough to kill you outright on touch, it’s probably radioactive enough to give you severe radiation poisoning by standing anywhere near it. Also, make everything in its immediate surroundings insanely radioactive, too. Oh and let’s not forget “isotopes” being used as meaning “radioactive substance” (there’s actually a crate labelled “Warning! Isotopes”, or some such, on screen).
Other than that, I never fail to be amazed how absolutely incompetent UNIT are. It’s almost like the British military took the absolute worst performers that, for whatever reason, cannot simply be sacked and said “put those lot in UNIT, they won’t do any harm there”. I get that they may be outmatched when faced with a hitherto unknown alien menace whose powers greatly exceed our own, but in Ambassadors they repeatedly get their asses handed to them by what are essentially criminals and it’s not because the criminals have superior information (they do) or incredibly cunning plans. UNIT – a military organization, with military-grade gear – is incapable of handling mobsters armed with pistols in a straight up firefight.
While on that subject, they could learn a thing or two about proper security protocols, because – apparently – if you’re guarding a place you know is under threat, you just let enemy agents come and go as they please and/or station solitary guards in key spots to be knocked out by said agents or otherwise overpowered.
Which reminds me of the preceding storyline (Doctor Who and the Silurians) that helpfully informs us – in these trying times – how not to perform a quarantine. Pro tip: if you’re in a sealed underground complex that your military controls and you learn that a highly infectious and deadly pathogen has been introduced for the specific purpose of culling the human race, you do not allow one of the people who were in the room with patient zero, just before he helpfully expires to demonstrate just how bad the disease is, to get on the early train to London before you seal off the place. The correct response is: nobody gets in or out starting right now!
For a start: if something is radioactive enough to kill you outright on touch, it’s probably radioactive enough to give you severe radiation poisoning by standing anywhere near it.
Yup. One vivid example is the combined statistics from the two Demon Core incidents, where scientists running near-criticality experiments with a prototype core for a Fat Man style A-Bomb on two separate occasions (with the same core, hence the name) accidentally sent the core into a supercritical state (i.e. undergoing an uncontrolled fission chain reaction).
In each event, the person closest to the core (working directly with it, and in the second incident, physically touching it to knock the top off the core to take it out of critical) died of acute radiation poisoning, 25 and 9 days later respectively. About a third of the other people in the room at the time eventually died of long-term diseases that can be triggered by radiation poisoning (two cases of acute myeloid leukemia and one case of aplastic anemia; the former in particular hard to draw conclusions from since the lifetime base risk of cancer-related mortality is counterintuitively high, and AML in particular is fairly common and has other well-documented risk factors including smoking). Nobody died right away. I’m not even sure it’s possible to kill someone immediately from radiation alone, short of pumping enough energy into them to literally cook them.
Also, make everything in its immediate surroundings insanely radioactive, too.
Only for neutron radiation, which you usually only get from an unshielded/under-shielded active nuclear reaction. The forms of radiation you see from radioactive decay – alpha particles (high-energy helium nuclei), beta particles (high-energy electrons), and gamma rays – will ionize and denature molecules, but don’t affect atomic nuclei and can’t make anything radioactive themselves. Well, technically matter heats up when it absorbs radiation, and it will re-radiate some of that heat energy as infrared or visible light, but that’s not what we mean by “radioactive” in this context.
Only for neutron radiation, which you usually only get from an unshielded/under-shielded active nuclear reaction.
You’re right, of course. However, the show makes a big deal out of being able to detect where said radioactive sources have been by residual radiation, complete with Geiger counters ticking like a metronome to a Dragonforce song.
Now, technically, this could be the result of radioactive matter being shed by the sources (despite the fact that there appears to be no way this could happen), rather than the environment itself becoming radioactive, but I’d venture that for the very definitely unshielded humans doing the investigating it’s a distinction without a difference.
The only way to deal with it is to accept that “radioactivity” in the context of the show is just another word for “magic”.
However, the show makes a big deal out of being able to detect where said radioactive sources have been by residual radiation, complete with Geiger counters ticking like a metronome to a Dragonforce song.
You’re right. I’d forgotten about that part, as it’s been years since I last watched that story.
The only way to deal with it is to accept that “radioactivity” in the context of the show is just another word for “magic”.
As I recall, the same can be said of “reversing the polarity” in that era of Doctor Who. Kinda like how the later Star Trek series frequently used “Quantum” or “Neutrino” to mean “magic”.
I was wondering whether I was remembering it correctly, so I just checked and at some point the radiation being detected from the sources is quoted as being 2 million rads plus a bunch of other numbers that I can’t quite make out over the screaming. The screaming might be me.
they repeatedly get their asses handed to them by what are essentially criminals and it’s not because the criminals have superior information (they do) or incredibly cunning plans. UNIT – a military organization, with military-grade gear – is incapable of handling mobsters armed with pistols in a straight up firefight.
I googled it and eww, what’s this crap? Reminds me of Adam West’s Batman, but at least that one was supposed to be funny. Anyway, it’s an episode from 50 years ago, I’m sure that in half a century our present shows will look equally idiotic.
Reminds me of Adam West’s Batman, but at least that one was supposed to be funny. Anyway, it’s an episode from 50 years ago, I’m sure that in half a century our present shows will look equally idiotic.
I actually like it for the old-school charm. For all my poking fun at it, the writing is generally clever enough to keep me reaching for the next episode, even though it’s way past bedtime.
The core of the story is actually a pretty solid thriller that could really shine if given a few “ok this is silly/doesn’t make sense” passes. Not much we can do about the production values, though, unless we assume a much later date and bigger budget.
In Ocean’s Twelve, as part of the heist, Julia Roberts’ character impersonates famous actress and celebrity… Julia Roberts. My family, with whom I was watching the movie, thought this was clever. I could barely get through the rest of the film.
Heist show Leverage’s showrunner ran fan Q&As on his blog during the run of the show:
[Fan question]: LASER TRIPWIRES DO NOT WORK LIKE THAT!
Answer: Do not even get me started on the laser tripwires. However, TV Tropes basically require their presence for The Big Heist. And what’s that? It’s the oncoming rumble of the FUN TRAIN! WOOOOOOT!! WOOOOOOT!
Don’t know if it exactly counts as a detail or more a switch of genre – from supernatural horror story to plain old ‘eek it’s a monster’ horror story – but years ago I was enjoying having the living daylights frightened out of me by a book in which there was an (apparently) supernatural entity slaughtering everyone in a remote town. It seemed to be unkillable and able to go anywhere and get anyone it wished, and nobody had the faintest idea what they could do to stop it. On top of that were the apparent supernatural elements, as I said, which were freaking out the plucky band of ‘demon’s happy meals on legs’.
Then for no discernable reason it swerved off the tracks to be “Surprise, it’s a material animal monster!” which totally killed the entire mood for me and stopped scaring me. Because if it’s material, it can be killed (and was, eventually, by our plucky gang). All you need is a Sufficiently Big Gun and/or bomb(s). You can’t shoot the Devil or a Lovecraftian cosmic horror, but a big ugly monster made out of flesh and blood? No problem (eventually). I did read on to the end, but I was so disappointed – the delicious scares had stopped because I was just turning the pages until our heroes accumulated enough artillery to blow the thing to kingdom come.
Just about any medical drama (or comedy, for that matter) seems to take place in an alternate universe where HIPAA privacy rules aren’t a thing.
There’s one episode of West Wing where John Larroquette’s character storms into the White House Chief of Staff’s office (right down the hall from the President’s office) brandishing a cricket bat and shouting about how he’s going to kill someone. And at no point in the scene does he get tackled by a Secret Service agent.
In the movie 300, the Persian cavalry is shown as having stirrups. This is about as anachronistic as it would be for a movie about Attila the Hun that depicts his warriors as armed with matchlock muskets.
Yeah, but most of those were defensible as artistic license, depicting a combination of how the Spartans themselves would have seen things and how the story would be told today as a fictional story set in a heroic fantasy universe.
For the former, one big example is minimizing the contributions of the Athenian navy and reducing the contributions of the other Greek cities’ soldiers at Thermopylae, especially the Thespians and Thebans who stayed and died alongside the Spartans to cover the retreat of the rest of the defenders. Another is showing the Spartans going into battle wearing flashy red cloaks and budgie-smuggler loincloths instead of heavy armor and face-covering helmets: Spartan artistic depictions of themselves from that era often show their warriors fighting naked except for a flashy red cloak fluttering dramatically in the breeze, and the loincloths were no doubt added to keep the rating down to R instead of NC-17.
For the latter, the clearest example is probably the Persian army apparently taking their stylistic cues from Mordor.
What’s the excuse for the bit where Gerard Butler gives the speech about how Sparta has no use for individually capable warriors because it’s all about the cohesion of the phalanx, the movie then offers one brief scene, less than a minute, of something close to a proper phalanx, and then it’s all about the individually superb lone-wolf Spartan warriors individually swordfighting the Persian horde into oblivion?
That bothered me, too. Less than the stirrups, since they did at least give us the token scene of them doing it right-ish before switching over to flashy individual dance-fighting.
And even the latter is defensible by a combination of the two categories of artistic license I called out before: Spartan artistic depictions of their warriors in action that I’ve seen appear to be split about 50/50 between showing something like the tight formations they would actually use and showing warriors fighting individually. And the latter also maps better to modern movie fight choreography tropes, so I understand them going with that instead of figuring out a way to keep tight-formation phalanx fighting exciting for two hours, even though I would have preferred the latter if they could pull it off.
Another is showing the Spartans going into battle wearing flashy red cloaks and budgie-smuggler loincloths instead of heavy armor and face-covering helmets: Spartan artistic depictions of themselves from that era often show their warriors fighting naked except for a flashy red cloak fluttering dramatically in the breeze
As the French history painter Jacques-Louis David demonstrates, all a real Spartan warrior needs is a flower crown and to lace his sandals up right before going into battle. The Romans, being more practical, dispensed with the flower crowns 🙂
The Romans, being more practical, dispensed with the flower crowns 🙂
I think my favorite part of the second painting you linked is the strategically-aligned scabbard worn by the fellow in the foreground towards the left side of the frame.
Everything else about 300 was perfectly realistic! As soon as you remember that it’s being told within a frame story by the lone survivor trying to raise morale before the real battle. The enemy as a horde of inhuman monsters being carved apart by our superhuman fighters seems par for the course.
Baphomet also makes a cameo in Xerxes’s field tent, which is a completely inexplicable bit of propaganda for the Spartan survivor to make up. More armor on the Persians, deformed giants, bomb-throwing magi, and a rhinoceros, sure, but not a demon first described in the High Middle Ages.
Baphomet also makes a cameo in Xerxes’s field tent, which is a completely inexplicable bit of propaganda for the Spartan survivor to make up.
Never mind Baphomet, I was highly disgruntled that they went over the top with the S&M rig-out, facial piercings and Jagganath-style chariot for Xerxes but mentioned nothing, nothing, about the plane tree?
Everything else about 300 was perfectly realistic! As soon as you remember that it’s being told within a frame story by the lone survivor trying to raise morale before the real battle. The enemy as a horde of inhuman monsters being carved apart by our superhuman fighters seems par for the course.
Mostly, although I’m still left wondering why a Spartan would have portrayed the Ephors as a bunch of perverted, misshapen priests, when everyone in the audience would have known that they were actually a board of annually-elected magistrates tasked with making sure the Kings didn’t try to go beyond their lawful powers.
That may be so, but if the part time stock person is actually good at her job, she’s getting promoted.
There’s a pretty hard ceiling on promotions without a degree at Target, apparently. Getting any salaried managerial position without a degree is theoretically possible but in practice never happens.
Walmart is less credential-happy.
Some kind of alternative credential that would work as rational astrology for employers, but wouldn’t cost employees a gazillion dollars, would be a huge win for mankind.
Agreed. And IMO, this is where the analogy ultimately fails.
While getting any college degree at any price isn’t necessarily a good idea, it is clearly true that statistically, those with degrees do better economically than those without. A degree doesn’t guarantee wealth and opulence, but for 99% of people, having a degree is better than not having a degree.
This is not true for the “nice guys” out there. The issue isn’t just that being a nice guy is no guarantee of romantic success. It’s that being a nice guy is actively harmful and does not make one better off, all things considered, at all. It’s not as if being the nice/sensitive guy clearly works for 99% of guys but there are a few edge cases where it doesn’t. And yet, despite clear and obvious evidence that it doesn’t work for most people, society (for reasons largely political) absolutely refuses to change/update this advice.
“Go to college” is good, but not perfect, advice. “Be a nice guy” is abjectly terrible advice.
But that’s weighted by A: people from an upper-class background who are going to be rich no matter what, and B: people who go to college to pursue a specific professional career. Those aren’t the people the usual advice is targeted at; they’re going to go to college no matter what. For the person on the margin who might or might not go to college, “you should absolutely go to college, you will be more prosperous if you do, no matter what you bother to study”, does not put them in either of the groups that unambiguously prosper after attending college.
I don’t believe that number is 99%. I don’t believe it is even 90%, or even just 90% of the people who actually attend college. Particularly if “having a degree” is not qualified in some economically sensible way. And, again, the (bad) advice is for people on the margins, least likely to prosper from it.
Fair points, particularly about the marginal advice-receiver.
That said, I still think I’m correct that “go to college” is generally good advice that sometimes fails, while “be a nice guy to women” is generally bad advice that might still, nonetheless, occasionally succeed.
> “Be a nice guy” is abjectly terrible advice.
This is why I’ve become a miserable misanthrope. And opposed to women’s suffrage.
@Garrett: But if you’re a misanthrope, you should be happy when women suffer! (But not only women, as that would be misogyny.)
I’m against women’s sufferage.
Part of the reason PUA was created is that people (mostly men, I presume) were willing to seek out and pay for such advice. Are people willing to do the same for career and self-improvement advice? There are a few financial shows around (though again, people are willing to pay for financial advice) but nothing like that.
Would the equivalent be an online community of people figuring out how to get acceptable credentials for hiring/promotion with minimal cost and effort?
I think that’s too indirect. If you want a literal equivalent, it’d be an organization of people with a strong ideology that corporations don’t understand who they really need to hire in order to be profitable, and repeatedly choose to promote workers who are popular with management over the real hard workers who keep the company running. The ideology would then focus on strategies to get paid as much as possible for as little work as possible, with a conviction that corporations are thieves and it’s all BS from top to bottom. Plus some virtue posturing about social and moral decay, probably going back to the halcyon days when everyone had their own farm, was their own boss, etc.
More healthily, some kind of career advice/coaching based on getting the skills, credentials, and experience, and leveraging them to get the best job possible however they define that (whether high pay and low work, personally fulfilling, etc.). Which is analogous to the healthier part of PUA too: if PUA were just telling men to touch up their social skills, dress better, get in good shape, and get successful, then I don’t think most people would find it objectionable. Likewise, if someone advocates stealing from your employer that’s not great, but if someone advocates negotiating harder I doubt most people will object.
Hotter take: This is all part of a long-term plan to restore ‘Murica back to its anti-elitist roots. What better way to build up a nation of individualists who don’t trust no “experts” than to select hard against those who trust what society tells them?
What is the selection effect here? Those who conform to the narrative (got a degree and a good job) are the ones who succeed, and get the financial and social power to further entrench the system.
Could someone please reply to this comment? I accidentally unsubscribed from email notifications, and I’m not sure if I successfully resubscribed.
Here’s a link to my favorite existentialist Flash game for your trouble.
Thanks. Looks like it’s not working.
Copyright/Trademark as Eternal Rent-Seeking
When a character created in the early 20th century, occasionally even in the Victorian era, maintains a certain continuous level of popularity, the heirs of the deceased creator get royalties. The claim to those royalties can then be used to harass creators of derivative works even after the character and writings are believed to be in the public domain.
The oldest example I know off the top of my head is Sherlock Holmes.
In Conan Doyle’s home United Kingdom, the copyrights expired in 1980, but the stories were somehow removed from the public domain from 1996-2000 by his heirs. Meanwhile in the United States, due to laws like the Sonny Bono Copyright Term Extension Act, publication before or after 1923 was frozen as the cutoff between whether it was legal or illegal for the public to freely Cher a work of human creativity.
As all but ten Sherlock Holmes novels and short stories were published before 1923, the Conan Doyle estate continued to demand royalties for all Sherlock Holmes works, from his first appearance in A Study in Scarlet onward, under the claim that Holmes and Watson’s personalities were seamless wholes that included copyrighted elements. Fortunately, in 2013 Leslie Klinger (lawyer and editor of The New Annotated Sherlock Holmes) refused to go along with this despite the threat of legal fees, and was vindicated by the original court and the Seventh Circuit Court of Appeals (the Conan Doyle estate tried to appeal all the way to the Supreme Court of the United States, which declined to hear their appeal – perhaps under the legal theory of “Are you shitting me?”).
Meet the rent-seekers!
As far as I can gather, the most legitimate holder of the few remaining and geographically-limited Arthur Conan Doyle intellectual properties is a private corporation held by the ex-wife of Sheldon Reynolds, an American TV movie producer-director who attempted to approach Doyle’s literary estate in 1976 and found that the IP had been transferred from his three surviving heirs to a holding company, Baskervilles Investments Ltd, that had gone into receivership with the Bank of Scotland. This legal entity has a detailed but self-serving (it never mentions the copyright trolling Klinger broke) history of the copyrights. Hilariously, they call another company, Conan Doyle Estate Ltd, “copyright trolls” for claiming that they own the late Doyle heir Lady Jean Bromet’s 1/3 of the IP.
It’s a big problem. We reached the point long ago, even before Sonny Bono, where copyright is having the opposite of its originally intended effect. It’s supposed to encourage creativity (by helping creators get paid for their work), not stifle it (by allowing corporations and media conglomerates, often with no relationship to the original creation of the work, to monopolize creative work that was done before my grandparents were born). The ideal period for copyright, in my opinion, would be something like 25 years, non-renewable. The original Star Wars trilogy, The Terminator, and Back to the Future would currently be public domain in a just world.
There’s a case to be made that members of the public shouldn’t be allowed to dilute the author’s vision while (s)he is alive. Unfortunately corporate authorship is so common that this might not be a viable reform! 25 years non-renewable from date of release would be roughly ideal for corporate ownership.
There’s an easy fix for that – corporations are not people. Require them to record actual author(s) of works for hire, and they get to keep copyright only as long as there’s at least one author still working for them. (Or if we want to be nice to corporations, as long as there’s at least one author still alive.)
If we want to provide slightly more predictability, make the rule “author’s life or 25 years, whichever is longer”. That allows an independent author who dies young to provide for their surviving family, and allows Disney et al. to plan to have whatever-it-is for at least 25 years.
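A minimal sketch of that proposed rule, as I read it; the function name and the example years are hypothetical, just to show how the 25-year floor and the life-of-the-author term interact:

```python
# Sketch of the proposed term: the author's life or 25 years from publication,
# whichever is longer. Function name and example years are hypothetical.

def public_domain_year(publication_year: int, author_death_year: int) -> int:
    """Year the work would enter the public domain under the proposed rule."""
    return max(publication_year + 25, author_death_year)

# An author who dies young still leaves a 25-year term for their family:
print(public_domain_year(2020, 2030))  # 2045 (the 25-year floor applies)
# A long-lived author keeps the copyright for life:
print(public_domain_year(2020, 2085))  # 2085 (life of the author is longer)
```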
My hesitation with this is that if we start extending the human lifespan, the period could get too long again. If people start routinely living to 120, we’d eventually have a lot of 100-year copyrights and the situation would be little better than it is now. I’d want to have a hard chronological limit, just in case the transhumanists win. Maybe 50 years?
That’s a pretty big “if”, if you ask me. And even if it does happen, we can revisit copyright law as and when it becomes an issue. In the meantime, I think it would be better to base our copyright laws on actual human lifespans, not on lifespans from a hypothetical future which may not even come to pass.
Yeah, if the last few decades is any indication, “we need to extend copyrights” is a problem that’s too easy to solve, not too difficult.
The problem is that copyrights are easy to extend but hard to shorten. So this could be a real problem if we don’t prepare for it before the problem arises, as long as we’re magically reforming copyright law anyway.
That’s a good problem to have. We can cross that bridge if we come to it. Chances are, society will have evolved in unforeseeable ways by that time that make present planning futile anyway.
If someone lives to be 120, and if it matters whether copyright lives with them, then there’s a good chance that you’ve got an author living in poverty while their works are still immensely popular and being “ripped off” by lesser, but more commercial, corporate imitators. That’s going to be massively unpopular.
I’d be in favor of an absolute fixed term for copyright, but if copyright doesn’t last for life or at least for a fixed term of approximately an adult lifespan, that’s the sort of unpopularity you’re going to have to account for.
The whole “author starves while work remains popular” thing is so very hypothetical that I think it likely to fall into the category of “inheritance taxes bankrupted the family farm” – that is, no actual examples to be found.
Most books earn almost all their earnings in the first 6 years. Ebooks have made that falloff slightly less brutal, but not… that much less brutal. Books that sell significant volume after twenty years, as a rule, sold goddamn mountains in the first decade, so if the author is still depending on that trickle of income, they are catastrophically bad with money and would be in dire straits pretty much no matter what.
Also, not to put too fine a point on it, but if you only wrote one book, you are not a professional author. Expecting to make a living off a profession in which you have not done any work for decades is… overly entitled.
Copyright terms, in practice, need to achieve two things:
1: Generate enough revenue to make writing not entirely a fool’s quest. A single decade would more than suffice for this.
2: Make Hollywood, the gaming industry, and the rest who make secondary IP pay up. This means the term needs to be long enough to make waiting a bad strategy, compared with finding an appropriate sum of money. 10 years is not enough for that – too many still culturally relevant works to choose from – but 25 should more than do it. I mean, sure, the local film school will now be filming a lot of 25-year-old books, but that was not a relevant source of income anyway.
I find it very hard to justify more than 25 years, flat. The granting of monopolies is a very severe violation of liberty. Granting effectively perpetual ones is just offensive.
@Thomas
+1. I agree with everything you said. Even 25 years is too high. Although it’d be a great improvement over what we have now.
Each of us has a strong moral claim to the fruits of his labours. This means, in the case of authored works, that if anyone makes money, it should be the author. This to me suggests copyright should last for the life of the author or, if we must have a fixed term, for approximately the expected life of the author, meaning something like 50 years. I agree the current policy of life of the author + seventy years is excessive.
@johan_larson
Consider the incentives this would create.
“Mr Larson, we would like to publish your book, and we offer you royalty of $100. If you refuse, we would like to remind you that if an accident happened to you, we will be able to publish the book for free.”
This. And I’d also say that copyright should only apply to the works themselves, or literal pieces of text/sound/video extracted from them. Copyrighting a fictional character or a fictional setting should not be possible.
Anybody who wishes to produce fanfiction or fanart should be able to do so without any legal issue. Note that many historical masterpieces would be considered fanfiction or fanart by modern copyright laws.
The problem with that is that other works can affect the perception of the original, and do so negatively. If I write a series of stories about the fun adventures of a group of colts and fillies, aimed at ten-year-olds, and someone else comes along and uses those same characters for fantasies of horse-fucking, that’s going to affect how people think of my stories. It could definitely affect my ability to earn a living from the stories I wrote, to say nothing of the offensiveness of having characters I created used for things that offend me. As the original author I should be able to put a stop to such things, which is why copyright should include the ability to prevent derivative works.
For more good news, US copyrights finally started expiring again last year! One more year’s works will become public domain each January First.
That situation certainly isn’t going to last long. The Mouse is probably paying into politicians’ coffers for more extensions as we speak.
They’ve said they aren’t planning on it, there haven’t been any moves yet, and there’d be much greater opposition to copyright extensions now than in the nineties – both from the Internet and from the leftist opposition to large corporations.
Four years ten months to go till Steamboat Willie enters public domain!
I’ll believe it when I see it.
The extended copyrights have already started expiring. If they were going to fight it, they would have done so a couple of years ago, back when it would have been a lot easier. But even they recognized that it would be an uphill battle.
Something like that example is probably typical when an author with a spouse and/or children dies with creations worth licensing. Edgar Rice Burroughs has heirs; so do Tolkien and his now-deceased son Christopher, the original literary executor. But what if someone creates valuable intellectual property and doesn’t have typical heirs?
That brings us from Conan Doyle to Conan Barbarian.
Unmarried and childless, Robert E. Howard committed suicide in 1936 over his mother’s terminal illness, whose medical bills he had been paying. He was survived by his father, Dr. Isaac Mordecai Howard, who died in 1944, willing the rights to a friend in the medical profession, Dr. Pere Kuykendall. Howard’s first published novel, A Gent from Bear Creek, was printed in Britain in 1937. This was followed in the United States by a collection of Howard’s stories, Skull-Face and Others, in 1946. Conan the Barbarian wasn’t of any posthumous value until 1950, when the novel Howard had sold to a failing British publisher was picked up by a small press called Gnome Press. It was a modest success, enough that the rights-holders wanted the Conan short stories compiled into hardcover books too.
Hither came Lyon Sprague de Camp, already an established science fiction writer both on his own (cf. Lest Darkness Fall) and as someone who loved co-writing with a second author (cf. Harold Shea series with Fletcher Pratt, later with Lin Carter and others). He seemed an ideal candidate to edit the short story volumes for the Howard literary estate, especially as they had found Conan story fragments among the deceased’s papers. De Camp fastidiously filed copyrights on his Conan stories, presumably aware that under US law of the time Howard’s copyrights would expire after a renewable term of 28 years: between 1960 and 1964 unless the Kuykendall family could renew them (they couldn’t or didn’t).
It wasn’t until 1967 that the Conan stories started being published in large print runs, in paperback by Lancer Books. I Am Not A Lawyer, but it would seem to me that Lyon Sprague de Camp would be the sole Conan copyright holder by this time, with the original Weird Tales texts in the public domain.
But there was so much more to Howard than Conan, and it seems that Edgar Hoffman Price, a pen pal of H.P. Lovecraft, had physically met Robert E. Howard through their mutual friend and, somehow, acquired a trunk containing everything he had failed to get published. So when the representative of the Kuykendall family decided to close up shop in 1965, she asked de Camp to become Howard’s literary executor, but he found it expedient under US copyright law to keep his Conan interests separate from that and recommend for the job Glenn Lord, who had bought the Howard story trunk from Price.
Fast forward to late 1970. Marvel Comics writer Roy Thomas is interested in licensing the character of Conan. His boss, Martin Goodman, says “I’ll pay rights-holders $150 a month for a sword-and-sorcery character.” Thomas approaches Lin Carter about his Conan clone Thongor; finding Thongor barely within the budget, Thomas figures the popularity of the Conan paperbacks means Conan is the more valuable IP… but what the heck, he’ll go behind Goodman’s back and offer Lord & de Camp $200.
They jumped at it. But the legal contract wasn’t exactly for the Conan stories of Howard (now in the public domain anyway), de Camp, Lin Carter et al. It was for the rights to everything by Robert E. Howard. Later, by 1978, de Camp and Carter had folded all Conan rights into the licensing deal with Marvel Comics.
Shortly after this time, the copyrights and trademarks start to become indecipherable. By the time the 1982 Hollywood film was being licensed, the purported rights-holders had transformed into a corporation, “Conan Properties Inc.” Toy corporation Mattel briefly had a contract for action figures related to the film, but dropped the R-rated property in favor of something that had been in development in-house: Masters of the Universe. In 1984, Conan Properties Inc. sued them for copyright and trademark infringement and breach of contract. Mattel’s lawyers successfully argued “Who the Hell are you? You can’t even prove the purported rights were legally transferred to you.”
Sadly, intellectual property case law is obscure enough even to the judges called to rule on IP law that CPI has successfully sued artists for making Conan works as recently as 2018.
I’ve liked the proposal where the copyright holder can renew indefinitely, but it gets exponentially more expensive (not original to me but I can’t remember where I first read of it).
For example, you get your first 25 years free (possibly with the registration fee currently required if you want to actually sue somebody). After 25 years, you can renew, but it costs you, say, $1000. Five years later, at 30 years, you can renew again, but it costs $2000. At 35, $4000, and so on. If, at any point, the rightsholder doesn’t renew, it irrevocably drops into the public domain. The rights can be licensed or assigned just as they can now, but only the current holder can actually renew – they have to either renew or sell it to somebody before the next renewal period expires.
All of the costs and timeframes are subject to debate. Maybe it’s $10,000 at 25 years, or maybe the renewal period is yearly after 25 – I don’t know enough about the economics of publishing, TV, and moviemaking to pick good ones – but this is the general framework.
This has a couple of advantages: 1) people still get the protection of their work for a relatively long initial period that they can live on until they do their next thing; 2) if something isn’t making money, there’s no reason for people to hold on to it; 3) but if it *is* they can keep a hold of the copyright; 4) it avoids “orphan” works, because if somebody isn’t using it it’ll relatively quickly drop into the public domain, and if it’s not clear who the rightsholder is there’s nobody paying the fee; and 5) eventually, it gets so expensive to renew that even somebody like Disney can’t hold on to it forever.
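A minimal sketch of the fee curve this produces, using the illustrative numbers above ($1000 at year 25, doubling every five years); the function name and constants are just for illustration, not a worked-out policy:

```python
def renewal_fee(age, first_term=25, period=5, base_fee=1000):
    """Fee due to keep a copyright alive at a given age of the work, in years.

    The initial term is free; after that, the fee doubles at every renewal,
    using the example numbers from the comment above.
    """
    if age < first_term:
        return 0  # still inside the free initial term
    renewals_so_far = (age - first_term) // period
    return base_fee * 2 ** renewals_so_far

# The schedule for the first century of a work's life:
for age in range(25, 101, 5):
    print(f"year {age}: ${renewal_fee(age):,}")
# year 25: $1,000 ... year 50: $32,000 ... year 75: $1,024,000 ... year 100: $32,768,000
```

Under those particular numbers, keeping a 95-year-old work alive costs roughly $16 million per renewal – real money, though arguably still not enough to deter a studio, which is why the base fee and the doubling period are the knobs worth arguing over.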
It’s a cool idea, but realistically the timeframes we’re talking about are subject to large changes in technology, the media landscape, and economics, and there will certainly be significant inflation. This is probably something you’d want to fine-tune to incentivise the right models of usage while still forcing copyright to eventually expire, and there’s probably no simple, good way of doing it: a flat, mandated formula will get crushed by inflation, and anything you do to tie the fee to the value or usage of the work will probably be gameable.
I’m waiting for dystopian sf about a world where you have to pay a fee to remember copyrighted work.
In “Life”, you awaken into a simulation where copyrighted works are deleted from your memory unless you pay a licensing fee.
“You overdrew the money in the memory bank remembering Casablanca! What a bungle, Fingal!”
https://www.npr.org/2020/03/05/812499752/uncovering-the-cias-audacious-operation-that-gave-them-access-to-state-secrets
For over 50 years, what looked like a Swiss company specializing in secure communications, Crypto AG, was actually owned by the CIA, and the supposedly secure messages of its customers were being shared with the US.
The reporter thought this was pretty funny, and I’m still patriotic enough to see the humor. However, there are ethical issues with doing something like that, and there were people who were at least uneasy about it.
For those of you who are interested in ethics, what do you think of shenanigans on that level?
What do you think is the likely cost of this coming out?
Also, how much good did it do the US? It’s not as though people can look at American foreign policy and say it was strikingly clueful.
We did win the cold war without having to nuke anyone.
That has pluses and minuses.
It’s not obvious to me how much the Crypto program helped.
It makes the argument against countries using Huawei for 5G/other tech infrastructure more of a “lesser of two evils” situation. Who do you prefer will have your secrets, Uncle Sam or Uncle Xi? Ethically, it makes American carping about potential Chinese spying nothing more than shameless hypocrisy.
To be sure, I consider myself an American patriot. I think it’s the responsibility of every great power to acquire information on all other nations as a matter of national security. I’m not personally bothered by the spying, only that we got caught.
We’d already had public demands that American companies put back doors in all their cryptographic products for purposes of law enforcement when the moral panic about Huawei began. I don’t recall whether those bills passed or not; the point was that to some number of elected Americans, law enforcement was always more important than privacy, and I can’t recall any nation where “the national interest” aka “intelligence” was deemed less important than law enforcement.
As a Canadian, I objected to the Huawei panic in Canada on those grounds. The US is not a reliable friend to Canada, and is more likely than China to see Canada as a source of gain for them, due to simple proximity. It’s been bullying everyone in the Americas since the Monroe doctrine, and it’s quite possibly only racism which currently makes them less nasty to Canada than to e.g. Mexico. (And we have the Trump trade renegotiation as a recent example of this kind of bullying. To this Canadian, what appeared to be going on was that the deal wasn’t “fair” to the US because they only got 90% of the gain, and wanted 99% ;-( – numbers picked to convey emotion; I don’t recall any useful info being published at the time.)
If there is no trustworthy tech supplier available – Canada should be making its own, not buying from either Great Power. Of course that would be hard to manage, given our historic tendency to bend over whenever the US asks, and our much smaller home customer base.
You say “bully,” I say “sphere of influence”…
The trade deal was kind of a wash vs. NAFTA for Canadians, other than Canadians getting cheaper milk. (I despise farm subsidies at home and abroad; people shouldn’t have to pay more for food to protect a small, politically connected minority of inefficient businesses.)
Also, I’m pretty sure it’s your GDP per capita more than your melanin which influences American attitudes.
As far as I can tell, Canada’s biggest concession in NAFTA 2.0 is allowing the US access to 3.6% of the dairy market.
It seems to me that the real problem with NAFTA is that it didn’t have Trump’s name on it. He’d be happy to pass the exact same deal as long as he got the credit.
The US reliably defends Canada, which A: we would like a bit of recognition for and B: gives us an interest in making sure Canada remains defensible. If, e.g., any cellphone that contacts a Huawei tower becomes a piece of Chinese spyware, then Canada may be comfortable with that, but the United States may legitimately not. And if Canada is going to trust China more than the United States in this regard, then the United States may have to stop trusting Canada at least where telecommunications are concerned.
This would be inconvenient for both Americans and Canadians, but I expect more so for Canadians.
Anyway, the impact of Canada using Huawei equipment isn’t that China listens in instead of the US. It’s that China listens in in addition to the US.
It’s unsavoury but it’s the kind of thing state intelligence services do all the time. However, it does make all the crying about “Russian interference” look silly, and the newfound reverence on the left side for the FBI/CIA seem even more grotesque than it already was.
This is what governments are going to do, and it’s only fair to assume that if We are doing it to Them, they are just as much doing it back to Us. Nobody has any foothold on the moral high ground here. Ireland is too small and weak to be of interest to anyone, but that didn’t stop GCHQ, and if we could/can spy on anyone, we’re probably doing it too.
Though it does seem to be the case that if you know they’re doing it to you, you can take advantage of this (by using Cunning Devices along the lines of “Don’t throw me in the briar patch!”) 🙂
I see differences between spying, interference to get specific advantages, and interference to just cause damage.
Has anyone used finasteride? What was your experience?
I’m not balding but my hair is thinning out and I’m hoping there’s a relatively simple fix. Finasteride seems like it will prevent further hair loss for ~$30/month, which is worth it to me, but I’m reading different things about the medical side effects and I’d appreciate anyone’s experience.
If you are worried about side effects, some doctors will prescribe topical finasteride, which has some of the same effect and a much lower side-effect risk. I have been applying a cocktail of topical finasteride and stronger-than-OTC minoxidil to my scalp every morning for several years now. No side effects, and hair thinning/recession seems not to have gotten worse since I started applying it. But of course post hoc ergo propter hoc remains a fallacy.
I started to notice my hair thinning about six years ago, so I went ahead and asked about finasteride and got it; I feel like my hair has stayed at roughly the same thinness since. I take it orally. I had mild sexual side effects for a few months after I started taking it, but they faded and now I seem to have no side effects. It is probably one of the highest value-per-dollar things I have ever bought, assuming my hair would have continued to thin. I started using minoxidil about six months ago to try to gain back some lost ground if possible; it might be working, but it also makes my head itchy.
On my todo list is to try mesotherapy for the scalp. Ideally I’d find a mix that contains minoxidil and finasteride as well. But either way, I’d start topical. I’ve heard rumours of a very small risk of pretty horrific side effects – stuff like permanent depression.
My hair started thinning about three years ago. I tested a variety of interventions, including minoxidil, ketoconazole, adenosine, changing shampoos, switching from combs to brushes, and finasteride. In my personal experience, among this set of interventions, only finasteride and minoxidil had observable effects. I currently am on a regime of once-daily 1.25mg finasteride and once-daily minoxidil foam treatment (note that twice-daily is recommended for minoxidil, and note that liquid minoxidil is substantially cheaper than foam but can irritate the scalp). Under this regime, I have observed no further hair thinning, and my vertex hair seems to have gradually (over two years) recovered most of its original volume.
It’s hard to assess the effectiveness of hair-loss interventions – you can’t easily see the top of your own head, humidity/haircare/styling/lighting strongly affect appearance, and the hair follicle cycle is very long (2-7 years for scalp hair). For my purposes, I attempted to assess treatment effectiveness by photographing the top of my head daily and, less rigorously, by gathering handfuls of scalp hair to feel for thickness and volume.
I am not a physician, and my opinions should not be construed as medical advice. But in my personal experience, finasteride has prevented my male-pattern hair loss.
In a thread below John Schilling makes a reference to “six figures” in the context of a job. That got me thinking. I think my impression of a six figure salary as being what you need to have made it, was set somewhere in the mid 90s. Ye old inflation calculator tells me that $100,000 in 1995 is around $171,000 today. Introspecting, that rings pretty true to me. While I still say “six figures” what I’m thinking of is closer to a $175,000 lifestyle than a $100,000. I wonder if people older or younger than me anchor differently or if some don’t anchor that way at all.
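For anyone who wants to redo that anchor adjustment, the arithmetic is just a ratio of price indices. A quick sketch in Python; the CPI figures below are approximate round numbers of my own, and an online inflation calculator may use slightly different ones:

```python
# Approximate US CPI-U levels (my own round numbers, not taken from the comment above).
CPI_1995 = 152.4   # 1995 annual average
CPI_2020 = 258.8   # early 2020

def in_2020_dollars(amount_1995):
    """Scale a 1995 dollar figure to 2020 dollars by the CPI ratio."""
    return amount_1995 * CPI_2020 / CPI_1995

print(round(in_2020_dollars(100_000)))  # ~169,800 – close to the ~$171,000 quoted
```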
I think people tend to think in relative terms and the thing about making six figures is that both now and in 1995 you were making about twice the average household income. A two income household where both people make six figures is in the top 5% of all households almost by definition. (A single earner six figure household is still roughly in the top fifth to quarter.)
I think the meaning is thus still pretty accurate: upper middle class or whatever you want to call it. I don’t think anyone thinks of six figures as 1% wealthy (those are “millionaires and billionaires”) but it’s definitely comfortable and high earning.
This ‘money equates to class’ idea is one of the very few things that I find jarring in the list of differences between America and Britain (or America and everywhere else?) I don’t know why it does, but the thought that Donald Trump could be considered upper class merely because he has a ton of money, is one I find somewhat ludicrous. It could be snobbery on my part of course, but I also recoil when upper class (or upper middle class) people claim to be ‘of the people’ or working class simply because they’re short of cash.
My sister makes approximately ten times what I do, but from the perspective of everybody who knows us both (in the UK) we’re of the same class. Same culture, same education, same accent etc.
I agree that it’s a blind spot in the American mentality. Though I also think the system is less entrenched in the US than in Britain, I agree it does exist and is a thing. As I’ve pointed out a few times, Trump’s family has been rich for two or three generations at best while Anderson Cooper’s family is old money/old blood. This dynamic is important, if under-appreciated.
All that said, I think the same dynamics work once you presume everyone is working class. I doubt highly educated gentry would ever admit to caring about income, even if they were not all that wealthy until either they married properly or their uncle died.
In case anyone doesn’t know what Erusian is saying, Anderson Cooper is the son of Gloria Vanderbilt, the late great-great-granddaughter of American railroad baron Cornelius Vanderbilt via his youngest grandson (also named Cornelius Vanderbilt). In what was the tech capitalism of his day, the elder Cornelius raised his family from New York Dutch of modest means to the closest thing to American nobility.
@Erusian & Le Maistre Chat
Thanks for contributing to my education about matters USAnian
@Zephalinda
Indeed. I have a direct ancestor who was the secretary of Robert Stephenson (of Rocket fame) and we’ve been keeping quiet about it for 200 years in an attempt to move up in the world..
ETA Yes, I know – “Don’t tell him your name, Pike!”
Apart from the ones that managed to marry into the Dukedom of Marlborough.
(okay, not the druggo one)
The heir apparent (Marquess of Blandford) played polo at Harrow before rowing across the Atlantic, which is at minimum the very top of middle-class.
None of the ones that stayed in the US look like they could pull off Harris tweed, though. Probably don’t own any of Scotland, either.
@Zephalinda
Is that really the case? Are there no English lordlings attending Eton today, the fifth generation in his family to do so – his up-from-nothing great-great-great-great grandfather having bought a title with a fortune made in the Industrial Revolution and married a daughter off to the scion of a down-on-its-luck noble family? And if there are, do his peers really consider him middle class because of this terribly shameful history?
I don’t understand the point of your objections if the “European class system” you are trying to contrast doesn’t actually exist anymore. Seems pretty affected.
And if there are, do his peers really consider him middle class because of this terribly shameful history?
Not so much those, since a lot of the posh schools are quite used to “quis paget, entrat” (as Private Eye‘s mythical St Cake’s school has it for its school motto) and so have happily enrolled the sons of foreign despots and gangsters alongside home-grown nouveau riche and old blood/old money scions. Lordlings with an American moneybags great-grandfather probably pass the test.
The snobbery is rather more refined than that; take the case of Michael Heseltine, a so-called Tory grandee who was a big shot in the party and bought his own stately home, but could never quite shake off the stigma of being a self-made man (and indeed felt that it had harmed his chances with the upper ranks of the party):
What Clark recorded in his diaries was Michael Jopling’s original remark, from 1986: “The trouble with Michael is that he had to buy all his own furniture.”
Why is it cutting? Precisely because Heseltine had to buy his own stately home, instead of inheriting it (and all the original furniture to go with it), hence an arriviste, one of the nouveau riches, not really ‘one of us’ and so looked down on by the real grandees of the party who wielded influence and power.
Jump forward to the Tories under David Cameron as Prime Minister 2010-2016 and the (not so) subtle internal pecking order. From a newspaper article in 2014 about the Tory leadership struggle, where Gove and Osborne were alleged to be allied to stop Boris getting it:
EDIT: Fun fact, G.K. Chesterton also attended St. Paul’s – I know which of the two alumni I prefer!
Yes, Osborne will be a baronet in due time, but it’s only an Irish baronetcy and honestly, his family made their money in trade, so he’s not really top-drawer (per a handy but not comprehensive guide to the ranking of English public schools and why it matters).
That’s because the United States spent a century or two pretending it was a classless society. Since it is and always has been unignorably obvious that some Americans have a whole lot more money than others, we folded all of the observed socioeconomic differences between groups of Americans into “well, they’re all the same class, it’s just that some have more money than others”.
And proceeded to talk at length about the differences between poor Americans, middle-class Americans, and rich Americans. Please to ignore that word after “middle-”; it means nothing. And if someone crassly talks about lower- or upper-class as if those things might mean something in America, they’ll just be remapped to poor and rich, respectively.
@Erusian
Just for the sake of accuracy, six figure salaries do put people in the top 1% (just not in the $100k range).
Household: $475,000
Individual: $329,000
Markets are so variable that it is hard to compare now. I wouldn’t want to make our income in San Fran and live that lifestyle, but making it here (roughly the national average cost of living) is quite nice, and making it in a cheap area we would live like kings.
for the record, the 2019 household income percentiles were as follows:
10%: 14,603
25%: 31,201
50%: 63,030
75%: 113,000
90%: 184,000
95%: 248,304
99%: 475,116
Individual percentiles were:
10%: 8,503
25%: 22,000
50%: 40,100
75%: 70,125
90%: 116,250
95%: 158,330
99%: 321,551
Thanks for this information, useful context stuff!
Out of curiosity, I looked up the official numbers for Norway (2018). We are often considered a rich country due to oil exports accumulated in a large (~ 1 trillion USD, or 200K per capita) national fund, but while the Norwegian state is undoubtedly affluent, to what extent does it impact the wealth of the population? Here’s the table, roughly translated to USD using a rate of 9 NOK per USD. Some caveats below:
Decile 1 23 000 (US 10%: 14,603)
Decile 2 32 000
Decile 3 41 000
Decile 4 51 000
Decile 5 60 000 (US 50%: 63,030)
Decile 6 79 000
Decile 7 89 000 (US 75%: 113,000)
Decile 8 108 000
Decile 9 135 000 (US 90%: 184,000)
Not unexpectedly, the spread is much larger in the US: high incomes are higher, low incomes are lower. I suspect looking at individual incomes makes this even clearer; it is very rare here to have a wage in six figures (USD), so most high-income households are double-income households.
Pegging the poverty line at half the median income seems to result in a little less than 20% of Norwegian households and a little more than 25% of American households classed as poor (a rough way to check this against the tables is sketched just below). That said, I think living costs are higher in Norway; one major difference is the 25% VAT (compared to single-digit-percent sales tax in the US?), but low-income living costs especially are high. Even cheap food isn’t cheap, and when living in Germany, I estimated our living costs there¹ to be about half of what they are here. I think this makes a pretty big difference to the impact of poverty. House ownership is very profitable compared to renting, which also hurts the low-income segments disproportionately. Things like (non-electric) cars, alcohol, and eating out are expensive.
Norway has “socialized” medicine, meaning that all citizens are part of the national program and will receive medical help in most cases (not, e.g., dentistry) with no or moderate payment (there are low caps on yearly pharmaceutical expenses before the state takes over). It’s not clear to me if American households spend their income on medical insurance, and if so, how much – or if it is paid by the employer and thus is an additional benefit. Likewise for pensions – are you expected to manage this on your own dime? And saving for children’s education – in Norway this is mostly covered by free tuition at public universities (most of them), and by student loans to cover living costs.
I guess I’m rambling on here, hope this is interesting to some of you.
¹ Excluding rent, this was in Munich.
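A rough way to check the “half the median income” poverty-rate comparison above is to interpolate linearly between the posted cut-offs. This is only a sketch: the cut-off values are the ones quoted in the comments, and the straight-line interpolation inside each bracket is my own simplifying assumption.

```python
def share_below(threshold, cutoffs):
    """Estimate the share of households below `threshold`, given (percentile, income)
    cut-offs in ascending order, by linear interpolation inside each bracket."""
    prev_pct, prev_income = 0.0, 0.0
    for pct, income in cutoffs:
        if income >= threshold:
            frac = (threshold - prev_income) / (income - prev_income)
            return prev_pct + (pct - prev_pct) * frac
        prev_pct, prev_income = pct, income
    return cutoffs[-1][0]  # threshold is above the highest cut-off listed

# Decile / percentile cut-offs quoted above (household income, USD).
norway = [(10, 23_000), (20, 32_000), (30, 41_000), (40, 51_000), (50, 60_000),
          (60, 79_000), (70, 89_000), (80, 108_000), (90, 135_000)]
us = [(10, 14_603), (25, 31_201), (50, 63_030), (75, 113_000), (90, 184_000)]

print(share_below(0.5 * 60_000, norway))  # ~17.8% of Norwegian households
print(share_below(0.5 * 63_030, us))      # ~25.2% of US households
```

Which indeed comes out to a little under 20% for Norway and a little over 25% for the US.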
For medical insurance it depends. The very poor get Medicaid, which is free but which few doctors want to take because it pays less than anything else. The working poor and lower middle class have it perhaps the worst – their employers either don’t offer insurance at all or offer plans where employees have to contribute to the premium as well as pay high deductibles and co-pays. The professional classes generally have quite good employer-provided health insurance, as do employees in union jobs and those who work for a government. The elderly have Medicare, which is quite good except for the fact that there is 20% coinsurance for doctors’ visits as well as co-pays for drugs. Bottom line: how much an American household has to pay for healthcare / health insurance can vary a lot.
For pensions, the traditional pension is dying in the US. Government employees still get them, as well as some unionized workplaces, but that’s about it. For everyone else you are expected to save on your own or rely on social security.
Finally, even public colleges are fairly expensive these days and there are also living costs. Attitudes vary as to whether parents should save for these expenses or their kids should take student loans.
You can’t leave out 401(k)s, which employers usually contribute to.
Yeah, but getting a match at all requires you “to save on your own”.
Yeah, instinctively I think of “six figures” as what six figures was worth when I started working full time, which is roughly double what it is now. But I’m aware of this and usually remember to correct for it. A Google starting salary of ~$115K may not be what I’d have thought of as “six figures”, but it is still a lot higher than the ~65K (in 2020 dollars) I got from IBM when I started working full time.
$100,000 is a very good salary. It is senior analyst, mid-level management, engineer level. When we say certain people feel entitled to a $100k salary, that’s exactly the band most mean.
$170k is extremely high. That’s senior middle manager or director, basically the point where you get serious perks and one rung below incentive payments being 40–50% of your compensation.
$175k is about right for senior engineer salary in the Bay Area today, but that’s as part of a $350k+ total compensation with bonus and equity.
$200K looks to me like a very good, very experienced software engineer who isn’t in management, and isn’t a director-equivalent Individual contributor. In Silicon Valley. $170K looks like the same engineer, minus skill at negotiating salary and the sense to jump ship when their current employer takes them for granted.
I wasn’t trying to denigrate a $100k/year salary or reignite the endless debate about what constitutes middle/upper middle etc. Just thinking about how there are these markers that are fixed verbally but change in value over time. It could just as easily have been “millionaire”.
Welcome, again, to Hollywood. This time the King of Kings/Executive Producer has purchased the Terminator franchise and is determined to relaunch it, starting with a remake of the original film. Who should we cast in the principal roles?
Tom Hardy as the Terminator.
Michael Fassbender as Kyle Reese.
Ellen Page as Sarah Connor.
Dave Bautista as the Terminator
Emily Blunt as Sarah Connor
John Krasinski as Kyle Reese
Dave Bautista was actually my first pick for the Terminator, but then I thought about trying to go for the less obvious move of not automatically picking a wrestler/bodybuilder for that role, and Tom Hardy has shown he is perfectly capable of playing extremely imposing muscular juggernauts.
I think Arnold was great for the role not so much because he was “big” as because he was “perfect”. It’s hard to find another actor with the same kind of build.
I guess the first question is whether we want to keep the contrast between the Terminator and Kyle Reese that was present in the first film: the Terminator is big and doesn’t say much, while Reese is smaller and more articulate. We don’t have to. An advanced android that specializes in infiltration could be a charmer rather than a bruiser.
But if we decide to follow in the footsteps of the original, we need a big man, and probably someone famous enough to draw an audience. Dwayne Johnson? Jason Momoa? Going a bit farther afield, maybe Rory McCann(The Hound).
Hafþór Björnsson fits that bill splendidly.
Florence Pugh is an absolute no-brainer for Sarah. She’s the right age (several of the other suggestions are too old) and she’s got the range for the full transformation from frazzled waitress to monomaniacal badass.
George Mackay is our Reese. That’s a young man whose eyes can tell us he’s seen things no-one should.
Ben Foster is the cyborg. Dude knows how to be scary – but then he knows how to do everything: he might be the best actor working today.
Mahershala Ali is Traxler. Does this even need explanation?
Michael Shannon is Vukovich.
Alison Brie is Dr Silberman. We need some more women in the cast, this is a spot that makes sense, and, well, don’t you want to see what she’d do with it? I do.
Holliday Grainger is Ginger. I think she’d be able to bring that party girl energy while retaining truth and nuance.
But mostly, why in blue fuck are we remaking Terminator?
Every single one of these actors is better than the one who originated the role. We’ll have wizzo 2020s VFX. We’ll have a 9 figure budget instead of 7. We’ll have Director Bong and Roger Deakins shoot Craig Mazin’s script.
And the film we make is 100% guaranteed to be a pale shadow of Cameron’s masterpiece.
Benedict Cumberbatch as the Terminator.
Emilia Clarke as Sarah Connor.
Matthew Lewis as Kyle Reese.
I’d watch this.
What if it was Benedict XVI as the Terminator, Emilia Attías as Sarah Connor and Matthew McConaughey as Kyle Reese?
Shut up and take my money. And by money, I mean investment.
Sorry, but after watching CumberKhan, I’d rather he not ruin any more classic villains.
Maybe Skynet designed the Cumbernator to wage psychological warfare against Star Trek purists.
I didn’t mind Cumberbatch as the character – I thought he played the role of John Harrison extremely well. He wasn’t Khan of course, but that is the fault of Abrams and his gaggle of untrained monkeys in Bad Robot, not the fault of the actor. He did the best he could with what he was given and on that measure, it worked.
(They had to do a fix-it comic book series in which we find out how come Khan Noonien Singh, South Asian genetic superman, looks like Cumberbatch and the answer is “plastic surgery by Marcus to disguise him”. Yeah, that was convincing).
So I refuse Abrams’ hack jobs on Trek and substitute my own headcanon where Cumberbatch is John Harrison, renegade Section 31 Star Fleet officer, and the rest falls nicely into place (never mind the Magic Space Blood Cures Death rubbish, the rebooted franchise ignored that out of sheer embarrassment as well). The entire Marcus plot was dumb but salvageable (nobody noticed he was building his own superduper warship out around Jupiter? Really?? So Starfleet only has one (1) functioning starship post Nero and that’s Enterprise?) so I’d appreciate a reboot of the reboot without the stupid crap where Abrams was trying to literally recreate shots and style of the original Star Wars movies as his showreel to prove to Disney he could do the job for them. No Spock/Uhura romance where Uhura gets turned into a nagging shrew who wants to talk about their relationship and her feeeeelings in the middle of an important mission in front of their commanding officer, no building your starships on the ground in the desert, no “Klingon homeworld is practically next door in travel time”, no transwarp silliness, the list goes on…
Cumberbatch as the new improved model T-1000 would work, but perhaps not as the original Terminator. Not unless you’re really rebooting the heck out of the original and doing an Abrams on it 🙂
The Star Trek reboot movies did a great job of casting people who look/feel like the original characters, but the plots are just stupid. I honestly don’t get why SF-ish movies can afford gazillion-dollar special effects budgets but can’t put together a plot that minimally hangs together in the face of, say, a bright 15-year-old spending fifteen minutes after the movie thinking about it. Not that the original series was any great shakes for coherent plots and consistent world building, but the reboot managed to fail to meet even that low bar.
But we are asking him to play a literally inhuman villain here, and I thought he did fine as Smaug. There were other problems with those movies, but not Cumberbatch’s casting.
He was the wrong choice for Khan, or at least for remember-Wrath-of-Khan(*), because he doesn’t chew scenery in Ricardo Montalban’s larger-than-life fashion. And that’s what that role called for.
But for the Terminator, the inhumanly detached and dispassionate (or at least very selectively passionate) Cumberbatch of e.g. Sherlock Holmes would work quite well, I think. He doesn’t have Schwarzenegger’s physique, but we’re not supposed to believe it’s muscles that are doing the work anyway. He’s tall and he has presence and he’s done motion-capture work for the endoskeleton scenes.
I’m in. Cumberbatch, Clarke, and Lewis it is.
* Having just rewatched “Space Seed”, I’d be up for an alternate universe where Khan was always Cumberbatch, but that’s not what Abrams was going for by far.
We already saw Emilia Clarke as Sarah Connor in “Terminator: Genisys”. She was ok, but nothing special. Lena Headey, who would later be known for playing Cersei Lannister, did a better job as Sarah in “The Sarah Connor Chronicles”.
Jessica Chastain showed some real spirit as Maya in Zero Dark Thirty, and would be a fine choice, but she’s quite old, at 42. I suppose we could rewrite things so she’s a midlife professional of some sort, rather than a waitress/college student.
But if we keep the script as it is, we’d need an actress who can play both sweet and fierce, and can credibly portray an early-twenties Sarah.
Jesus wept, the Emilia Clarke suggestion was *serious*? She can’t act! She can’t fucking act! She’s killed three franchises and counting with her utter failure to act. She’s a goddamn joke. There’s a really good actress who went to her old school, but she ain’t the Teddy’s old girl you’re looking for.
Some time ago @Atlas mentioned the “Chapo Trap House” podcast and “the millennial left”. Podcasts aren’t really my thing, but I just read a NYTimes piece on them titled The Pied Pipers of the Dirtbag Left Want to Lead Everyone to Bernie Sanders, and oh my.
For the record I voted for Sanders in 2016 (I voted for Biden this time), and I have sympathy for these youngsters’ gripes, but…
Yeah, um but what the Hell?
Four years and $100,000 for Craigslist gigs?
I’m (early) Gen-X, but feel free to “okay Boomer” me because that sounds insane to me.
For a five-year union apprenticeship you get paid, and you come out with a $100,000 wage at the end of it. When I got in, it was a series of multiple-choice exams, and I imagine someone who passed the SATs to get into a university could get in the way I did.
Two years and a few hundred dollars in fees and materials (maybe $2,000 total) of welding classes at a community college gets you an $80,000 to $120,000 a year job.
Less than a year at the welding equipment manufacturer’s school (in Cleveland, Ohio, so it’s cheap to live there) gets you the same jobs, and while not as cheap as the first two “learn a trade” options, it’s less than $100,000 – which you can still buy a home in parts of California for!!
Sure, the conditions aren’t great, there’s no girls and few women, and you really shouldn’t be a smoker and a welder (though many are and have short lives), but at least you earn $100,000 instead of owe it!
I have deep sympathy for the kids at Kennedy High School in Richmond, California who lost a chance out of the ghetto when Mr. Floyd died (nice guy, but he smoked like a chimney and it killed him) and there was no longer someone left to teach how to use the tools Chevron donated, but these “dirtbag” guys?
A thread or three ago @Conrad Honcho described Sanders supporters as “a bunch of people who chose the wrong majors”, and I thought that was unfair, but I take that back for some of them.
Sure, if everyone now rushing into the universities learned a trade instead, the wages would be lower and more people would be displaced in the trades and forced to beg or pick tomatoes, but what that article describes doesn’t sound like a good path.
I don’t suppose “learn to weld” can be scaled up for everyone any more than “learn to code” can, but even if there’s “free college for all”, if college leads to Craigslist gigs or selling plasma, what’s it for?
This seems insane to me!
It’s for creating Brahmins. A lot of 18-year-olds “have to” go to college because their parents are Brahmins and they don’t want to be a big disappointment. Others have Vaisya or Sudra parents, and for decades there’s been much fear in their parents’ culture that college will turn their kids against them (in This Present Darkness, back in 1986, the author ascribed college indoctrination to literal demons).
Kids whose parents aren’t Brahmins would be insane to try to change classes, unless they can successfully major in something more than remunerative enough to pay back student debt, like Computer Science. But what is society to do with all those Brahmin kids who have filial duties to go to college even if they can’t hack a high-paying major?
I don’t know what this Indian caste metaphor is supposed to mean, which I suppose is rather the point, but kids with very wealthy parents aren’t the ones being described. Those kids are fine with no real job after their expensive educations because they have trust funds. And if they want well paid jobs they can get them regardless of what they majored in.
The ones that are really upset are upset for the same reason many Trump supporters are—they are raging against the fact that we no longer live in a world with highly paid buggywhip salesmen. They want what their parents had and they can’t have it, because they were born too late. Unlike Trump voters, what their parents had was well paid tenured professorships and intellectual magazine editorships.
It’s a Death Eater thing. Moldbug uses “Brahmin” to describe a modern American cultural cluster. It maps pretty well to Scott’s “blue tribe” idea.
That usage isn’t original to Moldbug, although he did expand it significantly. “Boston Brahmin” is a long-standing term for the old-money, generally Ivy League educated, mostly-WASP subculture in New England.
Mind that I’m not a Death Eater. I skimmed a number of his essays when he was active, but never considered him insightful enough for finishing one to be useful. And Thomas Carlyle is a damn fool influence to have if the elevator pitch for your political philosophy is “like monarchy, but a publicly-traded business rather than a family business.”
I independently think that Indian caste terms are a more insightful way to talk about class than our “lower” (same as “working”?)/”middle”/”upper middle” ladder, which is focused on income (and maybe status, if it doesn’t equate status to income) rather than functionalism and the mores you internalize from parents and peers. The idea that rulers/society owe you a living preaching or teaching or other work that’s not beneath your class is captured much better by this, and the whole varna (caste) idea also captures much of the economic functionalism of the Marxist class terms, without misleading you into starving people.
The pre-existing term “Boston Brahmin” is apt, though I’m skeptical that there’s a real distinction between what an Ivy degree in a soft subject tells you to believe vs. what a State U degree in the same field tells you to believe. The higher status does translate into higher income after graduating with a degree that has no useful content, due to networking with higher-status Brahmins as a student being a feedback loop.
I think you’re right on the first part.
To the second though, I don’t think the disgruntled indebted college students are largely cases of “failed to launch” magazine editors and college professors. There really are a large cluster of people best described as “took a BS in their ‘passion’, used loans to go to their ‘dream school’ and just sort of assumed that entitled them to a well paying job”.
That may be pretty close to what they were promised, but I’m not sure it was ever reality. There really was a time when you could get a union manufacturing job without any real effort, work for solid pay, and retire on a decent pension. There was never a time when a 4 year philosophy degree was a guaranteed ticket to a “living wage”.
Some ended up dying in Vietnam; turned on, tuned in, and dropped out; or suffered one of the many other vicissitudes of life – but I bet as a cohort the bachelor’s class of 1968 did very, very well for itself, philosophy majors included.
I believe there was a period of about a generation where this was close to true, as hiring for low-level office jobs started to strongly favor BA-in-who-cares candidates where a high school diploma and a bit of training had sufficed in the past, and the number of low-level office jobs was increasing due to the growth of the regulatory state.
But that gets you a living wage doing something, not a six-figure salary for doing the thing you’re passionate about. I fear we may have started encouraging people not to “settle”, at about the time when settling is what you probably have to do if all you’ve got is a degree in something too many people are passionate about.
Not with any definition of “solid” and “decent” that would be acceptable to modern workers.
@brad:
Before the GI Bill, the percentage of Americans who got college degrees was, what, 1-2%? The government did veterans (a huge % of the male population, unlike the small kshatriya class we have today) a solid, and then college was still a prudent choice for their kids the Boomers.
After the Boomers, everything went to Hell and there’s little consensus as to why.
Though even when X% of Boomers were doing the right thing by going to college, there still needed to be Boomer plumbers and house builders and all the rest for civilization to keep going.
My dad was born in 1940, missed Vietnam, and got a degree in philosophy. He bought a computer and taught himself to program in what must have been the late 60s or 70s and ended up with a proper programming career, staying with the same company for most of it. Despite being on a ‘legacy’ salary during the many layoffs in the 90s, he survived to retirement with a sweet pension.
I’m not sure the bold is really true, though. In college people who weren’t in STEM made jokes about not being able to pay their student loans back all the time. I know of no case where someone changed majors over this concern. One of my younger brothers is in college now, and I’ve heard some of his friends who are still in high school express their intention to get a certain degree, crack a joke about how they’ll never get a job with it, and then change absolutely nothing about their plans. That’s not “assuming you’ll get a good job somehow”, that’s, “knowing you’re about to do something stupid and doing it anyway”.
Eh, I think that’s mostly gallows humor.
On the one hand, sure, they understand that the outlook is bad and the odds are against them. On the other hand, they still think, deep down inside, that they will be the ones to beat the odds. But they can’t say that aloud because it sounds ridiculous and arrogant.
It’s not insane; it’s what I did. But it is getting harder.
The consequences of rising costs fall more on the middle class, though, I think—families wealthy enough that FAFSA assumes they can pay for their kids’ education, which means they might as well save for it. This even if, due to the two income trap or some other way of living beyond their means, they’re already investing a lot indirectly in their kids’ education and can’t much afford it. Their kids are meanwhile told all the way through school, Do what you love and don’t worry about the cost. And then they graduate and get a rude awakening.
Kids who are much poorer than the middle class avoid this because they’re eligible for financial aid, which makes state colleges and the like affordable, provided they major in something that pays, as you say. But they face serious challenges middle class kids don’t: they probably don’t have the family or community support to make it to college, or schools that can properly prepare them for it. Regardless, these kids are generally not today’s dirtbag left or, for that matter, Bernie bros, but rather the middle class kids.
I thought you got a CS degree? That’s not the kind of college education we’re talking about as driving Bernie support.
I did. I took you to mean it was insane even to try taking a remunerative major.
An anecdote: When I was getting ready to enter college, I heard my parents bemoaning the state of college graduates who weren’t prepared to work, had a lot of debt, etc. Both expressed the opinion that more people should go to trade school instead of college. After hearing this kind of thing off and on, I raised the idea at dinner one night of becoming a welder. I was treated to a long lecture about how becoming a welder was wasting my potential and about how I really should go to college.
Not too long ago, one of my younger brothers expressed a similar idea at dinner and got the same lecture. The experience left a bad taste in my mouth; it felt like my parents were saying, “This is for other people, but not MY kids!”
It wouldn’t surprise me if this mindset is very widespread.
I agree that this is happening to a large extent.
But while it seems hypocritical, it might often be correct. It can simultaneously be true that a whole lot of people who are currently getting low-value degrees from non-prestigious institutions would be better off becoming welders, and also true that any particular highly motivated and intellectually gifted individual is still better off going the traditional college route.
Certainly; but it’s also likely that there’s bias in evaluating the intelligence and diligence of one’s own offspring.
3 Ladders has been talked about here.
I was linked to it from (IIRC) the ‘Staying Classy’ post a while back.
You can say something very similar about the second group. 30-40 years ago a lot of people were earning E3 compensation for G2 work. It was an enviable sweet spot, of course it got arb’ed away. Why pay big bucks for jobs people are willing, eager even under a revealed preferences model, to do for small bucks? So we get the current situation where people do those jobs for the small bucks, just spend a lot of time bitching and moaning about how terrible it is that they get paid small bucks. We see the same thing with teachers, for example.
No one* in the US has genuine material need. They want money for the same reason everyone wants money.
*Okay not literally no one.
I think a lot of the anger is coming from the fact that these people were doing everything society told them to do, and they’re still just scraping by. When I was in high school, the message was all “You must go to college; you’re doomed to ‘do you want fries with that’ jobs if you don’t go to college.” I have seen schools in my area put up the banners of colleges their students got into. I am not aware of any high schools boasting of the trade schools their students got into. I myself was raised in an upper middle class family, and the no college options were not really presented to me. Sometimes I wish I had gone into some sort of government blue collar work, but how often is that really presented as a viable option? Even in lower class communities, I think most high school guidance counselors push college uber alles.
Welding is for Red Tribe. We’re talking about Blue Tribe Americans here. And this discussion is pretty much exactly what Scott’s tribal distinctions are meant for.
Blue Tribe Americans believe that they have the inalienable right to sit behind a desk(*) thinking Deep Thoughts and telling other people what to physically do to make the world a better place. And to earn at least six figures for it. Blue Tribe Americans believe that all Americans have this inalienable right, as soon as they can be educated out of their Wrong Tribe ways. Yes, yes, this implies that there be someone to actually do the physical stuff that we’re all going to think up for them to do, but that’s for other people. All their role models sat behind a desk thinking deep thoughts and telling other people what to do, and that’s what they’re going to do.
Welders, however well paid, don’t get to think deep thoughts and they don’t get to tell other people what to do. To some people, that’s worth going $100K in debt and selling plasma for ramen to avoid – especially since one of the first things they’re going to think deeply about and tell other people to do is political stuff that involves erasing that debt and shafting whoever was fool enough to loan to them.
Disclaimer: I earn six figures sitting behind a desk telling other people what to do. Or at least what not to do.
* Well, OK, some artistic careers are acceptable even if they do require standing in front of an easel or on a stage. So long as you are artistically expressing deep thoughts about what other people should do to make the world a better place.
+1
It is interesting how this idea is similar to the society described in Starship Troopers. In the novel, people became Citizens by serving in the military, because only people who care enough about their society to defend it should be allowed to steer it.
Now imagine that instead of exhaustive training, and fighting where you can randomly lose your life, the requirement for becoming a Citizen is merely to spend some time listening and learning how to be a good Citizen. That’s it; to become a Citizen, you only need to say you want to, and then learn how to do it. Such a society, despite technically having two unequal castes, doesn’t feel unfair, because the door to becoming the elite is wide open. The only people who don’t become the elite are the ones who choose not to. No injustice is done to them.
…and this is, kinda, how the current caste system feels to the “Brahmins”. Anyone can get a diploma, if they choose so. Biological intelligence is not an obstacle because, remember, IQ ain’t real. Difficult subject is not an obstacle because you can choose a simple one. Cost is not an obstacle because you can choose a cheaper university (and in some countries, the state will pay for you). All you need to do is apply, and spend some time trying. It is perfectly fair to treat those who refuse as second-class citizens; they literally chose so.
(Of course this is not the true description of how it really works, but it requires some privilege-checking to notice so.)
Work that wasn’t beneath a Brahmin included being a pundit (priest), teacher, or philosopher sitting and thinking Deep Thoughts for other people to physically implement. Chanakya would be an archetypal example of that last one (the government he told what to do was the famous Chandragupta Maurya).
Yet the Wrong Tribe includes all the plumbers, electricians, HVAC technicians, construction workers etc. who let that class of people have air-conditioned desk jobs in buildings with indoor plumbing rather than thinking Deep Thoughts under a tree and defecating in a pot. Educating all of them out of their class/tribe would be a disaster.
Huh, turns out our word “pundit” is borrowed from Hindi. I wouldn’t have guessed.
Welding is for Red Tribe. We’re talking about Blue Tribe Americans here. And this discussion is pretty much exactly what Scott’s tribal distinctions are meant for.
I think what we’re seeing is the creeping forward of “progress” into the white collar jobs that were formerly considered inviolable. The whole reason people were told, as Theodoric says, “You must go to college; you’re doomed to ‘do you want fries with that’ jobs if you don’t go to college” is that the blue-collar jobs of boring but stable and well-paid work were being automated away or outsourced overseas, with the labour market turning gradually from manufacturing to service industries. So to get a decent job that wasn’t low-paid, precarious work, you needed to move up a rung of the ladder to the world of “clean indoor work with no heavy lifting”, and that meant a college degree.
Now the same rationalisation of industry/the economy is hitting the white collar world due to automation/outsourcing/progress and the same people who nodded along to “it’s a shame but it’s how the economy works, those kinds of manual labour jobs are dead or dying” articles in the media are now seeing it hit them instead, and they don’t like it any better than the working/lower middle-class did when their traditional good pensionable jobs dried up. (See how outraged journalists got at the “learn to code” slagging directed towards them: how very dare anyone think journalism for online clickbait organs is anything less than a sacred calling pursued by the best and brightest! it must be targetted anti-media harassment by the notorious alt-right, not just people taking the opportunity to make dumb jokes!)
If holding journalists and their religion in contempt is alt-right, I don’t want to be alt-wrong.
Hmm. Outsourcing of tech jobs was already a thing 20 years ago, and working conditions (and to a lesser extent remuneration) were dropping because of that. At that point (aged about 40) I wasn’t sure my career would remain good until I was able to retire.
Shortly after that, I managed to bust my way into the next rung, in spite of it previously seeming to be marked off as “no Aspies dare apply,” and things got easier for me personally. (Outsourcing was never for the top roles, let alone for folks the executives would see as belonging to their own class.) Also, the engineers in India began demanding a lot more money than they had been when the outsourcing to India started, and lots of potential outsourcers changed their minds about its profitability. And at the same time the best Indian engineers were still mostly emigrating, and that meant really good engineers in India were hard to come by, unless you offered them a job that would move, with them, to the US.
So that phase of white collar outsourcing caused fewer problems for US engineers than I’d originally expected.
But anyone who thinks this is new, is either well under 40, or wasn’t paying attention at the time.
And yes, the “we write well” knowledge workers were mostly affected a bit later than the “we do technical stuff” knowledge workers, but at this point we have “local” newspapers outsourced to god-alone-knows-where, and the written English of US-born people, like that of ESL people, is whatever the spellchecker/grammar checker/AI suggestions happen to produce, and too often ranges from ungrammatical to incoherent. (And meanwhile, I’ve learned to read and write both Indian English and Chinese English fairly fluently, since I see so much of both of them.)
Cornell claims that $100,000 of debt isn’t even a possibility. ($30,000 after 4 years is the max). They could be lying, but if so it seems to me the NYTimes should be investigating that. I suspect you’d actually find either that the person isn’t telling the truth, or that they did some exceptionally unwise things to increase their debt, even besides taking some useless degree at an Ivy League school with $56,500/year tuition.
A substantial fraction of college debt is driven by living expenses, not just tuition, fees, and course materials. Going to college full-time usually means you aren’t working, or at least not working much, and 4-5 years of living expenses adds up to a tidy sum, even just for a dorm room and a campus meal plan.
Living expenses make up about half the cost of attendance at most public universities, or maybe 20-25% of the cost of attending a private university without a scholarship.
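To put rough numbers on that, here’s a quick back-of-the-envelope sketch; the per-year figures below are illustrative assumptions of mine, not data from this thread:

```python
# Back-of-the-envelope: how much of a four-year cost of attendance is living expenses?
# All per-year figures are illustrative assumptions, not statistics from this thread.
years = 4
tuition_per_year = 10_000   # assumed in-state public tuition and fees
living_per_year = 12_000    # assumed dorm room plus campus meal plan

tuition_total = years * tuition_per_year
living_total = years * living_per_year
total = tuition_total + living_total

print(f"Tuition: ${tuition_total:,}  Living: ${living_total:,}  Total: ${total:,}")
print(f"Living share of total: {living_total / total:.0%}")
# With these assumed numbers, living expenses come to roughly half the total,
# in line with the public-university split described above.
```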
Yep. For poor people, the most limiting cost in your life is, well, the cost of your life. You can’t simply stop paying it for a few years, regardless of how much good it could do for you later.
The officially recommended path upwards on the social ladder is to borrow a lot of money, gamble with it in a game that is stacked against you, and win. The game is called university, and it is stacked against you because unlike your classmates with university-educated parents, you didn’t get the same training at home, you don’t know how the system really works as opposed to how it is supposed to work, in case of trouble it is more difficult for you to find help, etc. And if you lose this gamble, you lost at minimum a few years of your potential income.
My father went to college in a country where public universities were both free and considered top tier. He did well in his exams and got accepted, then flunked his first trimester hard because he still had to work to pay his living expenses, and that left him no time to study. He applied for some scholarships, got one from some corporation which was investing in increasing the supply of engineers, and was able to pay his living expenses with it and apply himself to his studies and graduate. Afterwards, he started working on an MBA but dropped out to join the work force because he was out of scholarship money and found he couldn’t afford to not work.
This is indeed part of the problem. You fill out your FAFSA. You get offered a pile of loans. There is no immediate effort to say “here’s what we think you actually need”, just “here’s what you can have” and I think a lot of people just take out the max.
The other problem is taking substantially longer than 4 years to finish.
This is an excellent point, and though several of the responses to it are good, I don’t see why the basic idea isn’t shouted from the rooftops more. That said, many people entering college are genuinely clueless about money, jobs, and careers, and their equally clueless parents have somehow drilled into them the notion that “college = good” regardless of major, debt, or career path.
@Plumber
I guess I don’t understand the problem here. Most college graduates still make higher incomes than people with high school diplomas. There are a select few majors where this may not be the case. The problem, then, lies in the curriculum (or existence?) of these select few majors. The fact that a small number of people are getting degrees that make them worse off is problematic whether college is free or not.
And consider this: If you are one of the unlucky ones who chose a major that is somehow rendered worthless (let’s say you went to trade school and your very trade was replaced by an AI), which would be the better situation: 1. Having a worthless degree and no debt, or 2. Having a worthless degree and [school tuition] worth of debt? The existence of worthless degrees only bolsters the case for free college IMO. The rich have plenty of money to tax; taking it from them to pay for college incurs almost no utility loss.
As a small aside: There are lots of very bad theories in the comments about why people support free college. I call them bad, because the assumption seems to be that support of free college is primarily fueled by college graduates expressing regret for their choices. This doesn’t square with the fact that (in both exit polling and pre-election polling) the Democratic candidate advocating for free college has proportionally greater support among non-college graduates, while the candidates who don’t have higher support among college graduates. And since everyone is arm-chairing their political-psych theories, I’ll offer mine: It’s called “pulling the ladder up”.
I’d be better off as I am plus having an extra million dollars. Should the government give that to me?
Yes.
Um… Okay. I am frankly fascinated. Tell us more. Does it apply only to the first person with the chutzpah to ask? Or only to people named “brad”? Or does the government really have some back room that contains 372 trillion dollars?
I feel sure that either you have misunderstood brad or that I have misunderstood you.
Well, Brad didn’t specify any details or elaboration, so I didn’t feel the need to add any myself.
Firstly, one million dollars isn’t that much money. Many government employees easily make that cumulatively over the span of 10-20 years. So the generic question “should the government pay one million dollars to some people” is “yes” assuming that government employees should exist.
For the sillier question of “should the government give one million dollars out to people randomly”? Sure, as long as it comes from the DoD budget. The government spends tons of money on things that actively make the world worse off. Giving a million dollars to random people is probably a better use of it. We could have a lottery system.
And that’s only if you insist on being revenue neutral. You could ask “should the government tax people, such that brad gets a million dollars” and the answer could still be yes, assuming the tax was levied in a way such that it is re-distributive, e.g. a billionaire tax that turns people into millionaires.
It’s all very silly. I have no idea what he was going for with the question.
It was in response to this part:
That people will be better off if you give them free stuff is both obvious and a totally inadequate justification for a proposed policy of giving some people free stuff.
I don’t think non college grads support Bernie because he offers free college specifically. He offers free lots of things.
The actual college grads are more likely to realize they are going to be stuck with the bill for all this “free” stuff.
I don’t know how you justify making worthless degrees “free”. That’s a huge pile of resources going to something you admit is worthless that could better be put toward healthcare or infrastructure or pensions or whatever.
@gbdub
If we know already that a particular degree is worthless, then its existence is a problem whether it is tuition-free or not. “Huge piles of resources” are being wasted by people deceived into paying for something that doesn’t deliver. This is a problem regardless of whether that pile of money comes from the public or private sector.
So that is one problem we can try to solve.
But if we assume that this problem isn’t one we are going to tackle (reasonable, since I’ve not heard anyone talking about it on the campaign trail), or if the problem is more intractable than it appears (also possible, it may be difficult to predict what careers are actually profitable) then we are left with the question of what to do assuming there will be worthless degrees.
My take is that if there are going to be worthless degrees, it is better for people not to be saddled with debt for having them. That just compounds the already existing harm of the wasted earning-years. It would be like, as an alternative to offering unemployment benefits for anyone who lost their job, we slapped a fine on them instead.
Simple yes or no question: would you agree with the following statement [Edit: last sentence added for clarity]:
Or how about:
Assuming the answer is no, I think the difference comes down to the “worthless degrees” being associated with your tribe, unlike cigarettes and homeopathic medicines. And especially in the case of cigarettes, you’ll understand that making something free leads to more of it being consumed.
Here’s one for you:
A man is walking down an alley late one night and gets robbed of $1000. The police catch the robber and he goes to trial. The prosecutor suggests that the robber should give the victim back the money. However, the defense argues that if we transfer $1000 from the robber to the victim, we are subsidizing the victim’s poor decision to walk down alleys at night. After all, if you subsidize something, you get more of it.
You agree with the defense?
In your case, the robber loses money. In the other, the taxpayer pays the “robber” (the university). Question is [emphasis added]:
No, I don’t agree with the defense. Because “we” are not giving anything back. The taxpayer is not giving anything back. The robber is giving something back. I feel like this gets to the crux of the difference between the way we look at the world. You don’t see the significance of the difference between my money, your money, the government’s money, a rich man’s money; you want to look at it as if it’s all in the same pile. Maybe in your moral system there isn’t a significant difference. There is in mine, and so that’s the answer to your question.
I notice you haven’t answered mine.
@Lambert
@Alexander Turok
So you all are fine with subsidizing the bad behavior of walking down alleyways at night, and your only insistence is that the money come from what you view as the appropriate pool? That’s what I suspected.
It seems like we are all okay with subsidizing people’s ability to make bad decisions without them reaping too much of a penalty. It’s true that if you reduce the “cost” of picking bad degrees, you get more of them. The same applies to reducing the “cost” of walking down alleyways at night.
I think people should be able to walk down alleyways without losing $1000, and likewise I think people should be able to pick a bad degree (wasting four years of their life) and not incur the additional penalty of debt. Doing dumb things has inherent penalties of its own, we shouldn’t be trying to make it even worse.
@Alexander Turok
I never said that subsidizing something wouldn’t lead to more of it. The question of whether such a subsidy makes things “better” or “worse” depends on what you think people are owed. Cigarettes and homeopathy have significant non-financial harms, so the question is not quite analogous.
You are not using the word “subsidy” correctly. I don’t think you understand the concept.
I don’t think of asking people to pay for things they consume as a “penalty”, whether it’s food or housing or education or whatever. Cigarette smokers should be able to choose their habit and should incur the “penalty” of having to pay for their own cigarettes. We should be “trying to make it worse” by not subsidizing that decision.
In my moral system making a bad thing more numerous is a morally bad action. I’m sure there are exceptions where you can identify compensating good outcomes which justify them. Do they exist in this case? Why are people “owed” education as opposed to other things? Why not food or housing? Is it just because the blue tribe raised you to think of it that way?
@Alexander Turok
Do you agree that walking down alleys alone at night is a bad thing, and should be discouraged? And yet, in my example, you support transferring $1000 to someone who does it. Quibble if you want whether this money counts as a “subsidy”; it’s undeniable that this $1000 transfer encourages this behavior as opposed to the counter-factual where the $1000 wasn’t transferred.
Indeed. The “good outcome” in this case is that people who are already suffering from wasted years are not burdened with the additional suffering of debt. Because of the declining marginal utility of a dollar, this increase in net utility can be achieved via progressive taxation and transfers (i.e. taxing the rich to pay for the less-rich).
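As a toy illustration of the declining-marginal-utility point (assuming log utility purely for the sake of the sketch; that assumption and the dollar amounts are mine, not anything claimed above):

```python
import math

# Toy model of "declining marginal utility of a dollar": transfer $10,000 from a
# richer person to a poorer one and compare summed log-utilities before and after.
# Log utility and all dollar amounts here are illustrative assumptions, nothing more.
rich_income, poor_income, transfer = 1_000_000, 20_000, 10_000

before = math.log(rich_income) + math.log(poor_income)
after = math.log(rich_income - transfer) + math.log(poor_income + transfer)

print(f"Rich person's utility change: {math.log(rich_income - transfer) - math.log(rich_income):+.3f}")
print(f"Poor person's utility change: {math.log(poor_income + transfer) - math.log(poor_income):+.3f}")
print(f"Net change in summed utility:  {after - before:+.3f}")
# Roughly -0.010 for the rich person and +0.405 for the poor person, so the
# modeled total goes up, which is the claim being made above.
```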
lol at thinking I was raised by the “Blue Tribe”.
But yeah, they are owed an education. They are owed food, housing, and education, among many other things. We owe lots of things to people. If you want to argue that the same pot of money could theoretically be given to people to spend on more useful things you won’t get any objection from me, but that wasn’t the question at hand. It wasn’t “free college vs. SNAP”, it was “free college vs. nothing”.
No, I support giving them their money back. It’s like saying “hey, do you support not having police beat people who smoke? Doesn’t this encourage smoking as opposed to the counter-factual where the beatings do occur? Then how can you object to making cigarettes free on the basis that it encourages smoking!”
It doesn’t count. Words have meanings. This is how intelligent you sound:
@Guy
In this scenario I think it’s perfectly reasonable for the government to tax that 1k as a disincentive to walking down alleys. Letting the robber keep it is totally immoral. Perhaps you could tax it at 100%; I think something more reasonable like a 25% idiot tax is more appropriate, but we are haggling about price at that point.
And as people have pointed out, the schools are the robbers, and the loans have caused schools to increase enrollment and raise tuition. The moral way to cancel student loan debt is by taxing universities to pay for 100% of the bill.
@Alexander Turok
And how do we fund the system of courts and police that causes their money to be given back? Because if your answer involves taxation (and you don’t seem to be advocating for anything as radical as a taxless system, given your use of words like “taxpayer”), that starts to look a lot like subsidizing activities by building public infrastructure.
One hardly needs to be an anarcho-capitalist to be skeptical of free college for all. So it doesn’t seem like much of a gotcha to point out that such a skeptic believes in police and courts.
@Brad
Who said anything about free college? I’m just arguing the definition of a subsidy.
The degree is generally not worthless, but its worth is distributed extremely unequally. 1% of people who major in music will get a lot of money. Most of the others will end up indebted and angry.
The real problem with college loans (or anything looking like free college) is that it turns out it’s not a subsidy for students, it’s a subsidy for teachers and administrators. Thus we are only paying for bloat, not anything of value.
There it is! The example is always something which comes totally out of left field. No one could have predicted it. The thing about these worthless majors is that people know they are worthless. You may say “well what does it matter if they’re in a hole because the wind blew them there or because they jumped in there, they need help out regardless.” I say it does matter and if it doesn’t then why are the examples almost always of the former?
This is the equivalent of saying “the fact that these government-provided free cars frequently break down and leave their owners without a means of transportation only bolsters the case for free cars! Better to have no means of transportation and no debt than no means of transportation and debt!” The whole X to Y comparison is flawed because you assume that when you start subsidizing something the amount consumed does not change.
This only makes sense if you believe that the rich are just storing their money in a vault somewhere, that it’s not doing anything, not being invested and not creating any value. Yes, some is spent on yachts and other conspicuous consumption. Other money is invested. For the record I support higher taxes on the rich but not if the money will just be wasted.
I don’t see any contradiction here: poorer people are more likely to vote for the economically Left-wing candidate. Poor non-college Democrats support it because they understand coalition politics: you scratch my back, I’ll scratch yours. But if given a choice between free college and programs that actually benefit them, which do you think they’ll choose? I say the reason that cancelling student debt is on the table and cancelling credit card debt is not is because the college educated chattering classes don’t want to pay back their loans.
Since America has never had “free” college I don’t know what “ladder” you are talking about. Presumably there’s some other “ladder” they benefited from because it would be impossible that anyone ever got anything through hard work and their own effort.
No they don’t. People are value-maximizers. No one does something that they think is worthless on purpose. If I knowingly decline to become an electrician out of high school (and make ~$60,000) and instead get a major in interpretative basket weaving (and make ~$15,000), then I must have valued the experience of the degree/pleasure of the job at >$45,000.
So yes, people only receive worthless degrees through either 1. Deception 2. Changing market conditions they didn’t anticipate. If you are talking about anything else, you aren’t talking about worthless degrees.
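Spelling out the arithmetic in the example above as a quick sketch (the two salary figures are the ones already quoted; the subtraction and the “implied valuation” label are just a restatement of the revealed-preference logic of the comment):

```python
# Revealed-preference arithmetic from the electrician vs. basket-weaving example above.
electrician_salary = 60_000   # quoted figure for skipping college and becoming an electrician
chosen_salary = 15_000        # quoted figure for the interpretative-basket-weaving path

implied_valuation = electrician_salary - chosen_salary
print(f"Implied yearly value placed on the degree/job experience: > ${implied_valuation:,}")
# Under the value-maximizer assumption, picking the lower-paying path implies the
# chooser values its non-monetary benefits at more than this $45,000/year gap.
```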
What I see in the world is a whole lot of people engaging in short-sighted behavior and continuing to do so even after hearing very many warnings. People continuing to smoke even as they see the warning label on every pack, not making any effort to quit. If you tautologically define “value” such that any freely chosen activity where you have all the information increases it then, sure.
I think there’s no disagreement about objective facts here, both of us agree that there are majors which do not deliver economic value to their graduates. The disagreement is just about what you call it. You brought up the term “worthless majors” and now you’re saying it doesn’t apply here. Okay, would “economically non-productive majors” be acceptable? If so, replace the phrase in my original comment. All my points still apply.
I have a neighbor who is in his 60s who just quit smoking and claims it isn’t hard for him. I haven’t seen him smoking (he smokes on his front porch) in months so I believe it has taken, he basically smoked for 40 years without once really trying to quit.
@Alexander Turok
I’m not unsympathetic to this position, but think about its implications: If people are idiots who don’t know what is best for themselves, then what is the argument for people who make “bad decisions” to be saddled with debt?
They couldn’t have known (or perhaps were too feeble-minded to make) the best decision in the first place. So why punish them for it?
The conclusion is that the option should just be removed by the nanny-state altogether, not left lying there as a trap ready to be sprung.
@Alexander Turok
Setting aside my admitted trollishness, there are very few majors that have a negative market value in the sense that people who have these degrees command no higher salary than people without. I’m no expert on this, but I’m going to wildly ballpark something like 1%-10%. STEM, business, education, law, healthcare, social work, religion, and trade school degrees all still “pay”.
And yeah, it’s probably bad that not everything does (but it may be fine if you really like the experience of going to college; again, I’m setting this aside). But why the “free college” debate always seems to home in on what is really only a small minority of degrees, treating them as an example of what we are “subsidizing”, is indeed curious.
Having to pay back debt you willingly took out is not “punishment.”
Only if you don’t put any value on human liberty. And even then, you have to trust the nanny state to get the answer right.
There was a popular musical about the stressful urban poverty of people who get two Masters degrees in social work (“and now I am therapist / But I have no clients! / And I have an unemployed fiance! / And we have lots of bills to pay!”) or Bachelors of Arts in English.
What’s wrong with “it’s because they, unlike smokers, demand everyone else subsidize them?” It would be another matter if it were their own money being spent.
Agree completely.
Bryan’s model seems to me like an exercise in motivated reasoning. He wants to believe the market is rational and marshals a bunch of evidence that the standard explanation for why college graduates are favored is wrong. So he goes and looks for one and he finds one. I think he’s accurately described the reason employers favor college graduates but failed to show that this is the rational approach. And anyway, if they find one the crucial question would be is it being subsidized by the taxpayer or not?
Agree completely.
If your point is that for it to be politically realistic you need an alternative, sure. If this is your view then we have a fundamental value difference. Suppose you are a medieval peasant and you’re experiencing drought. Someone comes offering to do a rain dance for a price. Someone else says ‘let’s not.’ Would you agree that they need to provide an alternative way of handling the drought before advocating not employing the rain dancer?
When my grandparents went to CCNY it was free. Of course, like most CCNY students (then and now) they lived with their parents and got to campus on the subway. And while at the time it had a strong reputation as the “Harvard of the Proletariat” now it’s… well, not as pitiful as it was during the open admissions era, but thoroughly undistinguished.
The University of California system was also historically tuition-free for in-state residents. I would imagine this was similarly true for public universities nationwide. Over the second half of the twentieth century there was a gradual shift from getting the bulk of their funding from direct appropriations to getting it from student loans, which is probably more equitable from a perspective of not wanting to subsidize dilettantes, but does mean there’s nothing to keep administrative bloat in check. In other words, we’re indirectly subsidizing a different group of dilettantes, namely assistants to the deputy vice-dean for strategic dynamism.
At this point we’re probably better off doing away with student loans altogether. If we’ve decided that higher education isn’t a public good, let’s stop throwing money at it in the form of loans we’ll never be able to collect. And for god’s sake undo the 2005 bankruptcy amendment and make them dischargeable again.
@BBA >
Attending U.C was free for my Mom in the late ’60’s-early ’70’s, by the ’80’s it was no longer free.
+1
And while we’re at it, since a lot of this was colleges admitting marginal students and offering “degrees in useless”, put the colleges on the hook for at least part of the discharged amount.
@Zephalinda
I know this OT is mostly dead at this point, but as I was catching up I planned to say exactly this, I’m glad you beat me to it.
In addition, some of the people who chose a “worthless major” were fully aware that the type of career it set them up for was not a high-paying job; the problem is now they can’t get any job.
The claims generally about people with “worthless majors” and the further claim that these people feel entitled to six figure jobs are both way off base with minimal support. Weak manning at best, but most likely just flat out straw manning.
So, I’m just curious… what sort of jobs did humanities majors reasonably expect they were going to get that they suddenly find to be unavailable?
@Matt M > “…what sort of jobs did humanities majors reasonably expect they were going to get…”
Thanks to the magic of family, friends, and Facebook I’ve some idea of what my peers who were humanities majors did wind up doing:
My brother (thanks to family and in-law support, including mine) went to college in the early 2000’s with a political science major, moved to Maryland, and worked various odd jobs, the longest with an aftermarket auto parts manufacturers’ lobbying organization, until getting a job with The State of Maryland (which by SHEER COINCIDENCE his father-in-law also worked for). One guy I went to high school with is now a public librarian in Orinda, California; another guy became a lawyer; most of the girls I knew became school teachers, and most of them moved out of state. Some guys I knew attempted college but didn’t graduate, and they usually did worse than those who never made the attempt; military to trades usually worked better than “some college” (except for an electrician turned cripple).
My wife was a dual English/Philosophy major, then she went to law school, made (what seemed to me) good money as a paid intern while going to law school, then met me, dropped out of law school, worked a few temp agency jobs (mostly at banks), then a stay at home wife and mother.
My mother, father, and step-father all went to college in the ’60’s and/or ’70’s (though in my father’s case ‘community college’ with the hope of transferring to a university and becoming a pharmacist). My Dad failed to graduate, became an “independent contractor” (a truck, tools, and his back) until joining the laborer’s union, and before the hospice and hospital he lived in public housing in Oakland, California. In contrast my uncle never went to college, joined the Plumbers union, then became a contractor, and doesn’t live in public housing.
My step-father after college was briefly a social worker (before I knew him), quit that, became a taxi driver, then press photographer (he first took some riot photos freelance), then camera store, then bait shop, and finally a gun shop that the City of Oakland taxed out of existence.
My Mom after working with my Dad blue collar (she did roofing with him!), divorced him, then made and sold candles and puppets on Telegraph Avenue, then got a job as a secretary for the University of California, then a secretary for The New York Times (her boss wrote The Falcon and the Snowman), then back to U.C.
So law school or government work (usually teacher) is what I’ve seen humanities majors do, faring better than “some college”, and the non-law school ones just don’t earn as much as guys in the trades who didn’t suffer crippling injuries.
@Guy in TN
Even ignoring the hurt feelings of the rich, there is a significant loss. It is much better described in The Case Against Education, but the short version is that even ignoring money, you still pay for education with years of your life, and it’s a zero-sum game, because the only thing that matters to employers is that you wasted more time than the other guy and therefore send a stronger signal. The less people need to pay with money, the more they will be asked to pay with years of their lives.
In some sense, a society where only the 1% could get a college education would be better. Because in such a society, not having the college degree would not prevent you from getting a good job.
On the other hand, imagine a dystopia where 90% of the population spends 30 years of their lives in college, and the education they get there is mostly crap. In such a society, if you decided to save 30 years of your life and skip the system, you would find out that no one actually wants to hire you — failing to do what 90% of people can do sends a pretty bad signal. (So you start your own company then? Oops, there is all kinds of regulation against people like you.)
FWIW College is often considered the best time of a person’s life; 30 years of subsidized college where I only have to pay in years of my life sounds like an amazing deal that I would take in a heartbeat, work sucks.
I wonder if that’s out of date. When I was in college, towards the end the coursework was heavy enough that I literally had no time to do anything but work most days, and 18 hours a day still meant I had to prioritize which items were most important to get done on time – and I still wasn’t doing enough credits to graduate in 4 years. My heart quit on me for a few seconds after a calculus test.
Other people seemed a little better off but not by much.
@Villiam
I don’t care that Bryan Caplan wrote a whole book with this as the premise: It seems so flatly, obviously untrue that I’m struggling to formulate a response that would even register to the understanding-of-reality of someone who would advocate for it.
I mean, is he really saying with a straight face that the reason doctors go to medical school is nothing more than zero-sum positional signaling? How about engineers: no need for those pesky physics classes, right? If I’m going to be hired as a biotechnician, it can’t really be relevant whether I actually know chemistry, genetics, or evolution, right?
It just seems like an over-correction of the most basic kind: Caplan correctly realizes that some degrees are useless positional signalling, and that some credentialing acts more as gate-keeping than quality-control. But he runs away with the logic to argue that all degrees and credentialing must be bad. Which is utterly indefensible IMO.
Perhaps the government should abolish useless signalling degrees at public universities, which would aid in clearing up this confusion.
Obviously under US law, trying to ban the Ivies and other private universities from offering degrees in Signalling Studies for a million dollars would violate several clauses of the First Amendment, but what they could do would be a game-changer.
@Le Maistre Chat
To the extent that Villiam is right that useless degrees are a problem (and again, I suspect they make up a tiny minority of all degrees, but they do exist) the “cleanest” solutions would be difficult to mesh with existing US law. In an unburdened legislature the best solutions would be:
1. Banning colleges from offering truly useless degrees. Cut it at the roots.
2. Banning employers from requiring useless degrees. Eliminate demand.
But since we can’t do that, the best solution we are left with is:
3. Make getting the useless degree as painless as possible for those who have to do it.
The government can’t outright ban tulips, and it can’t eliminate people’s demand for tulips. But it can give free, unlimited tulips for everyone, which has largely the same effect on the signalling game.
No, the “best” solution is forcing colleges to self finance these loans. And if we are going to forgive those in the past, make those same colleges pay for the forgiveness.
> How about engineers- no need for those pesky physics classes, right?
I ain’t learned jack that I couldn’t’ve taught myself. And I don’t expect to use more than a tiny fraction of it. (And I intend to go into some pretty solid R&D)
I ain’t learned jack that I couldn’t’ve taught myself. And I don’t expect to use more than a tiny fraction of it.
Ah, but which fraction of it you will use is going to be the question and you won’t know that until you need to know it.
As to “I could have taught myself” perhaps indeed you could. But who would have checked you weren’t teaching yourself bad habits or going down a rabbit hole and learning the wrong thing? “This is the way I’ve always done it” is not always the right way.
Anyway, good luck with your career!
Yeah, it’s obviously untrue. I am an employer, in that I have to occasionally pick between candidates for jobs, and when I see a degree from a good university, I see a reliable indicator of intelligence, conscientiousness, knowledge, talent, and skill – some of these things being merely screened for by the university, others actually enhanced by it. If our union allowed it, I would at least be willing to consider a self-educated engineer, but the degree is far more than a signal of wasted time. And the difference between a bright undergraduate intern and a bright graduate with an MS or Ph.D., in terms of what I can trust them to do without my regular and direct supervision, is enormous.
So Caplan is wrong. But we knew that.
The people saying we should get rid of the degrees in uselessology are also wrong, and we ought to know that from the fact that our host’s undergraduate degree was in Philosophy IIRC. There are some places where deep knowledge of a particular domain is needed, but others where breadth of knowledge and ability to apply one’s intellect to anything is more valuable. The classic liberal-arts degrees, and most of the rest of the “uselessology” fields, are pretty good for that, and it’s something we need.
So, maybe don’t subsidize them as much, but don’t denigrate them as much as they often are here.
@Clutzy
To be more precise: as far as federal aid goes, there is no difference between getting a degree in the humanities or grievance studies vs. computer science. There’s no reason for universities to stop offering these as long as students are still majoring in them. If federal aid depended on the market value of these degrees, we could expect many fewer to major in them.
(For what it’s worth, students at the margin have already realized that some degrees are useless; as well, my impression is that when there is an economic downturn, useless majors go down. This is bad enough that some schools are having trouble keeping certain programs going, because they need at least a few majors to be able to run their programs. At my Jesuit university, for instance, Theology was scarcely getting one major per year, and the strategy for the last few years has been to attract as many minors as possible. So I’m sure this would hit those programs hard. I think a lot could be said about what that will mean and how colleges should respond, but that’s a little too tangential. Maybe we should save that for the next quarter thread.)
@John Schilling
I don’t see how this follows from what you wrote. From your comment, only the part “other [things] actually enhanced by it” goes against the signalling model. And Caplan admits that the school is only 80% signalling, and the remaining 20% is some useful stuff.
This reminds me of my question about factory work. The answer that a lot of people said to me probably applies here: people do what they know and they’re often very risk averse.
A great example of what conservatives and libertarians mean when they talk about the “entitlement mentality.” Everyone is EQUAL but I deserve more money because I went to college, the government must give it to me!
Well, I’m sure the craigslist gigs were just supplementing the real source of income: Daddy.
I think there’s a stigma against the blue collar trades but there’s also a real fear and a real risk to the trades which justifies the salary.
While plumbing is still doing fine, there’s a lot of skilled tradesmen in the Midwest with 20 years experience, no job, no prospects, and no degree. There’s a lot of long-distance truckers with no future in the industry as driver-less trucking approaches. There’s a lot of taxi drivers who probably made decent money with $1 million medallions and the memorized layout of a major city like New York who have been replaced by Google Maps and Uber. And I can tell you there’s a lot of competition for any firefighter or police officer positions.
I don’t see how any kid or parent trying to plan out a 40-year career could look at what happened to blue-collar workers over the past 40 years and be confident that automation/outsourcing/etc wouldn’t consume their industry and leave them without valuable skills or a fallback option. The middle/striver class is freaking out because, well, college was supposed to be a $100,000 job guarantee and it’s not anymore. That doesn’t change the fact that blue collar workers have been devastated in general and the college-educated have done better.
Or, to rephrase, how do we know plumbing and other skilled trades aren’t just suffering from survivorship bias, and that most skilled tradesmen who started in 1980-1990 aren’t significantly worse off?
I’m not aware of this midwestern phenomenon. Links? It’s been a while but when I was last out there the trades were still doing well. And things like carpentry, plumbing, etc are unlikely to be automated. Your two examples are truck driving and taxi driving, neither of which are skilled trades or manual labor.
I’d say automation risk runs the other way. It’s highly unlikely that tasks involving a high degree of visual identification and working with hands will be automated or outsourced in the near future. White collar work, meanwhile, is much more likely to get automated.
‘Trades’ insofar as they involve working on stuff in-situ are unlikely to get automated soon.
But the jig-borers are all gone. One person can feed G-code into a dozen CNC machines and let them all run, etc. There are no lofts full of draughtsmen. Robot arms can weld, rivet, and glue with preternatural precision and consistency.
This doesn’t necessarily mean that jobs go away. There’s now a market for 5-axis milled parts that were just not practical in the past. And you need skilled machinists to operate them.
Jobbing is generally harder to automate/optimise away labour from than mass production. But CAE is dropping the cost of small runs, rapid prototypes, etc.
A draughtsman is not someone who’d work at a factory…
Anyway, factory work isn’t a trade either. At least not by my definition. What’s yours? Because if your argument is just that unskilled labor is having issues, then I’d agree. But as someone intimately familiar with Midwestern factories, I can assure you high skill non-college workers are in very high demand. The effect was not to eliminate tradespeople but to eliminate the least skilled workers. Which is still a problem because those are people too.
The effect was not to eliminate tradespeople but to eliminate the least skilled workers.
And that’s the crux of it: why people thought “college/more college” was the answer. For an increasing demand for more skilled/higher-skilled workers, more training and more education was needed. Now it wasn’t just “you’ll pick it up on the job”, you needed some level of instruction and qualifications beforehand as well as what you learned on the job. Hence going to college to get that shiny degree which was a guarantee that you were indeed qualified and enabled you to walk into that good job. And the inevitable creep from “left school but hard-working” to “need to have your high school diploma” to “need a certificate for training beyond high school” to “need a basic college degree” onwards.
The new jobs coming along are no longer on the shop floor, they’re the ones that require a high set of skills and abilities, and where there’s the continuing sifting of people so the less-skilled drop out at more and more levels of the process. That’s our problem. I don’t know the solution, because the demand for the higher skills/higher IQ workers is ongoing, and you can’t simply move your “used to be a coach driver working with horses, then cars replaced horses, now he works on the automobile assembly line” employees around like that anymore; it doesn’t work that way now. Now you’re requiring your “used to be a coach driver” employee to be able to design the engines for the automobiles at high performance specs for the new jobs.
First, the midwestern reference is just to the general de-industrialization of the US, which hit the Midwest particularly hard. I’m sure there were many skilled trades involved in steel, car manufacturing, sheet metal, etc. Some of those people were surely unskilled labor as well.
Second, I’m not confident automation is the major threat here. Sure, you can automate car production, but outsourcing seems by far the bigger threat there. Everything I hear about farming is bad (and farming is certainly high skill) but we haven’t outsourced or automated that; it’s just market consolidation there. There are several different reasons salaries have stalled, not just automation; look at the sorry lot of adjunct professors: no outsourcing, no automation, still miserable.
But for a more specific example that I have some familiarity with, take able-bodied seamen. A highly skilled physical trade requiring no more than a high school degree, paying ~$300/day or $54,000/year. Lots of machinery skills required, as well as the ability to work in a specialized environment. And it’s not like shipping doesn’t make money. But the merchant marine has been slowly dying for decades. The total fleet went from 2,926 ships to 182. And unlike self-driving cars, autonomous cargo ships are currently being built and tested. Able-bodied seaman jobs, and the merchant marine, were and still can be very profitable, but the work requires high degrees of visual identification and working with your hands and is likely to be gone within 10-20 years.
Are line workers in a factory tradespeople? Not by my definition, but what’s yours? If you mean people who operate or repair machinery that takes a long time to learn, their employment prospects are fine. It’s the people who used to do simple rote tasks that are having issues.
Farming is a special market because it’s so highly regulated and the government has been very unkind to farmers in order to extract cheap food from them. I’m not sure how the others have to do with skilled trades. As for your example of seamen, again you appear to have an extremely non-standard definition of skilled trade. More to the point, the US sailor has been dying due to cheap competition (it is, definitionally, a global market) and flags of convenience going to other countries.
I’m not sure this will go anywhere but I’d define skilled trades as anything requiring significant non-college experience primarily involving physical goods.
For reference, for immigration purposes, Canada defines skilled trades as including bakers, cooks, butchers, mechanical/technical maintenance, and agricultural supervisors.
You sure about those numbers? BLS has Welder salary at $40k/year. With a poor job outlook.
“Learn to weld” is probably good for diligent individuals on the left side of the IQ distribution, but diligent individuals on the right side probably have access to better-paying, more comfortable jobs than that. They should get different advice than just “pick a trade”. However, there’s more to it than picking a major, because you also need to know how to act on the job and boost your performance.
@A Definite Beta Guy says:
Yes.
I’m sure that BLS wage rate is for “There Be Dragons”, not Emperor Norton’s realm!
The BLS has Plumbers at $53,910 per year, when in my area it’s about $90K to $110K per year for a full-time union plumber.
The City and County of San Francisco usually pays less per hour than guys got in the private sector, but the scale for Welder (represented by the Electrical Workers union) is $81,692.00-$99,294.00 Yearly.
For a Pipe Welder (represented by the Plumbers and Steamfitters union) it’s $96,902.00-$117,832.00 Yearly.
For a Fusion Welder (represented by the Iron Workers union) it’s $95,472.00-$116,038.00 Yearly.
And for my job classification it’s $96,902.00-$117,832.00 Yearly.
In California (the last time I checked) at least one county’s union local rate was $60K per year, instead of the $95K that was my county’s rate. I suspect it’s the same for college grads, that different areas will have different pay scales (though I’ve read that public school teachers in Texas all get about the same rate, so living in a cheaper area of Texas can dramatically increase your savings compared to living in Austin).
Austin is a pretty reasonably priced city. The city center is very small and expensive but living 10 minutes away you can rent a two bedroom apartment for 1200 a month in a nice neighborhood and less than 1000 in as close as Austin gets to a not nice neighborhood. If you wanted to live in Odessa you might save 200 a month but nobody wants to live in Odessa.
This is another thing people miss: you shouldn’t compare outcomes to the national average because geography is important. Almost every job in SF is paid better than a comparable job in Des Moines. You need to compare trade labor in the area to unskilled labor in the area, not unskilled labor in the area to national average trade labor.
Is that for 1G (I think the americans call it PC?) MIG welding of mild steel or for underwater 6G TIG watertight welding on aluminium?
Learning specialist welding processes is a decent route to progress as a welder.
Is this starting pay? How easy is it to get these jobs?
It is insane. As someone with several degrees, no debt, and who is doing decently on a professional level at age 30, it was my experience that we were encouraged to look at college in an insane way. Doing an apprenticeship was looked down upon at my upper/middle class public school, and I’m told in my state there is a huge shortage of people doing those jobs. My parents rail about the foolishness of taking out huge student debt, but idk what they would have said had I not been able to rely on their support, because parents’ status was ridiculously caught up in the college game, and parents truly religiously believed an education at a good school would pay off. To me it seems foolish to have paid that much at all, loan or no loan. I declined Cornell Law because it would have required crazy loans, but I can definitely see how someone would be awed into paying that for Cornell. It took me way too long to question it, and I was lucky there wasn’t much damage done in my case.
I agree the caste thing mentioned below is useful—I’m from Massachusetts. Not a WASP, family has only been here a few generations, I’m a second-generation college graduate, but the ethos I grew up in was still Brahminy, and I work with actual WASPs all the time. Their kids *have* to be “successful,” and they’re desperately trying to have the same path work for their kids that did for them, but it doesn’t work anymore because there is too much competition and other issues. It’s not a goofy major thing, mainly. My parents go on about that too but my undergrad degrees (poli sci and communications) are not meaningfully different, IMO, than a philosophy or women’s studies degree. College wasn’t really about the major, but about learning the soft skills for jobs typically taken by people in that class, and for making connections. Nor is it really a find your passion thing–my experience was “find something you’re good at and you will have status if you follow the rules.” That meant something that worked with natural strengths and wasn’t unpleasant, but not some dream situation. In reality, there are limits to the number of those jobs, and industries like media and law have been pretty wrecked by changes. College costs are up and salaries are probably down in many of them, or don’t go as far.
We sent so many people to college, at such a high cost, and the message “good career” was more tied up in having a respectable sounding position than financial security as the end game, though I don’t think that was fully understood by anyone. Few parents absorbed the changing realities, and most of the kids didn’t know better. Scrambles to preserve the status of aspirational classes’ children are always dysfunctional over time and we’ve hit that point quickly because the last few decades have seen so much change and because the Boomer experience was so anomalous and based in unrealistic hope and symbolism.
ETA: It’s easy to mock these people or just see them as entitled whiners, but real damage was done to them by the social norms and adults around them. I’m talking about a certain class of young people. They hardly have the worst lives ever, but, speaking generally, the choices they made were heavily encouraged by the schools themselves, government, and their parents. These adults still don’t admit there was/is a problem, so it can be hard to come to terms with the situation. A huge part of this is people unable to disappoint or push back at their parents and face the reality that the path they’d hoped for was always illusory.
Conan review #18: “The Black Stranger”
This is a direct sequel to “Beyond the Black River”. It also has a strange history: it’s the only Conan story rejected in Howard’s lifetime after “The Frost-Giant’s Daughter” and “The God in the Bowl”, which were submitted to Weird Tales hot on the heels of the first one published, “The Phoenix on the Sword”. There’s no evidence that “The Vale of Lost Women” was submitted for publication, and it seems like Farnsworth Wright got in the habit of never rejecting a Conan story. So the fact that this one was found in a chest of unpublished papers along with a finished rewrite into an Age of Sail pirate story, “Swords of the Red Brotherhood” (Conan turns into 17th century Irishman Black Vulmea) is strange.
In 1953, L. Sprague de Camp edited it into “The Treasure of Tranicos” so it would end by linking up with the rebellion in Aquilonia that brought Conan to the throne.
Conan has been running west from Picts for a hundred miles. He takes cover from the arrows of 40 of them, and mysteriously the chief calls off the attack. Whatever refuge he’s using, they seem to have superstitious fear of it. Wolf-Picts “captured him, in a foray against the Aquilonian settlements along Thunder River, and they had given him to the Eagles in return for a captured Wolf chief”, so he’s even farther from his job in Aquilonia than the great distance he’s run.
He walks into a tunnel in his stone refuge and finds a heavy iron-bound oaken door. He’s amazed, because he’s at least 200 miles west of Thunder River and near the coast, where the Picts are too fierce for civilized people to come and build things. Then he finds iron-bound chests ranged along the walls. There are also silent figures at table. Have you guessed that he found a pirate hideout?
Elsewhere, Lady Belesa of Zingara has been living a year in a log fortress her exiled Count uncle has built on the Pictish coast, a thousand miles north of home. We’re also introduced to Tina, a freed child slave. A pirate ship appears on the horizon! Hustling inside, the Zingarans find the newcomers approaching under a flag of truce. Strom the pirate captain acts like Count Valenso has treasure and no ship to take it away in. Valenso has an archer shoot Strom, who responds by having his pirates surround the fort. They figure out how to defeat the defenders, but then another ship flying the royal Zingaran flag scares them into retreat!
This turns out to belong to Black Zarono, a buccaneer and another enemy of Valenso. The enemy is invited to table, with none of his crew inside the log wall, where he insinuates that the Count has built a log simulacrum of his castle here on the shore of wilderness to hunt for treasure, which he denies, saying it was meant to be a temporary stop on his way away from the corrupt stink of Zingara’s court, and he’d go somewhere else in civilization if he could, “to Vendhya, or Khitai—” Zarono presses him, but is shocked into believing him when he says his navigator let anchor here for reasons he had not time to reveal before getting beheaded by a Pict.
“Supposing you to have already secured the treasure, I meant to take this fort by strategy and cut all your throats. But circumstances have caused me to change my mind—” What’s this, a double-cross story where no one’s good at lying?
So change of plans, Zarono says: I need to stay here to actually find the treasure of Tranicos, famous pirate of 100 years ago who “stormed the island castle of the exiled prince Tothmekri of Stygia,” – times like this I feel Howard is daring the reader to stop suspending disbelief in his mashup of historical periods, but he seems to carry it with conviction.
So Zarono tries to strike a bargain where they split the treasure 50-50, lift anchor, and Valenso can have Zarono’s ship when he abandons it to settle down in Zingara with a noble wife – the non-consenting Belesa. Tina interrupts to report that a very tall black man showed up on the beach in a black boat alight with blue fire, which sends Valenso into violent terror. Now Belesa is motivated to get away from her uncle with the child he hurt.
When Chapter 5 rolls around, Zarono’s ship is destroyed in an unseasonable storm. The indefatigable pirate says the two groups have 260 men left between them and a vast forest, so let’s build another. This new plan is disrupted by the other pirate ship returning, and then Conan re-entering the story. A pirate from that ship was killed, and Strom blames one of the other two schemers, ignorant that Conan did it. He bursts into the negotiating room in 100-year-old pirate garb, which he put on back in the tunnel. Zarono says: “Three years ago the shattered hull of your ship was sighted off a reefy coast, and you were heard of on the Main no more.” (Would that be the Wastrel he stole in “The Pool of the Black One”?) We’re told that by this time in his life, he’s seen as “a legendary character in the flesh.”
The pirate captains telegraph that they’d kill Conan for the treasure map he now has, so he throws it in the fireplace. With the only map in his memory, he offers thus:
“We’ll split the treasure four ways. Strom and I will sail away with our shares aboard the Red Hand. You and Valenso take yours and remain lords of the wilderness, or build a ship out of tree trunks, as you wish.”
But Valenso is too terrified to stay that long. Belesa thinks all the negotiations are a farce, as all except her uncle are honorless pirates who won’t leave without the whole treasure and rival blood on their blades, and she no longer thinks much of her uncle either. For now, though, they need an intricate plan to carry the treasure out of the cave without one faction outnumbered and betrayed. Valenso is too scared to go, so they break it down as Conan, the two captains, and 15 bearers from each crew.
As they leave, Conan asks Valenso why he decapitated a Pict, or so he believes because he found the Count’s necklace at the scene of the murder. Then Galbro the Count’s seneschal tries to decipher what’s left of the map in the fireplace…
Then Belesa asks her uncle his thoughts on all the scheming. He says Strom would murder them all aboard ship for their share of the treasure. Zarono would be honest because he wants to marry her, but has no ship, so he’ll send fishermen in the dark to overwhelm the pirate ship’s skeleton crew. Then Zarono’s men will murder Strom and Conan on the beach, hoping the former’s death demoralizes his camping pirates, and sail away to share the treasure 50-50.
He goes on to explain who the black man is:
Conan has led four pirates to the treasure cave, where Tranicos and his captains sit dead. They also find Galbro dead. There’s bluish mist in part of the cave, which they guess is deadly – just before Conan shoves them into it! They recover and Conan kills one before jumping to a ledge as the rest of the pirate crews pour in. He goes prone on the crag outside, out of sight.
True, each captain had a plan to kill him, but Conan still comes across as a bad guy here. He sounds like he had a good job serving the King of Aquilonia on the frontier and wanted to go back to piracy because he was bored. The pirates reconstruct Conan’s plan to get the treasure out despite the mists, but he boasts that they won’t make it back alive without his woodcraft. To punctuate the point, Picts suddenly appear, enough to keep the pirates besieged in the taboo cave until they die of dehydration. They declare truce with Conan, who uses the Climb skill on the side opposite the cave mouth to help them slip around the semicircle of warriors to their west. Of course they run away unencumbered by heavy loot.
Soon, Picts attack them at the fort (the conspirators against Conan don’t shoot him on the way in, which I find unconvincing). That night they find a man of Strom’s dead, with no Pict either inside the wall or visibly running away. Strom blames Zarono. They fight. Then the Picts do break in. Strom and Valenso are dead by the time the chivalrous Conan reaches the girls, whom he finds being menaced by the smoky, horned, pointy-eared Black Man. He finds a piece of silver furniture to throw at it, knocking the thing back into the fireplace. He gets the girls to safety out on a headland, leaving only the Picts and the Dead behind.
Conan sends smoke signals to the few people who were out of the ship, surmising they’ll make him captain because none of them is a navigator.
He gives her a handful of rubies he looted.
Conan the Communist?
She asks what will become of him, and he says don’t worry, he’ll be fine because he’ll be a pirate again! Yo ho ho.
I like that we have three factions plus Conan planning to betray each other for treasure in a Treasure of the Sierra Madre moral fable. I like that the plot is complicated by people under the Count having agency and not following his plans (imagine how elaborate the plot would get if each pirate crew had been given such characters too). On the other hand, Conan’s morals are hard to sympathize with (even giving up his loot to help a woman has confusing motivation), there are lapses of logic, and the supernatural being seems superficial to the tale.
If I remember correctly, in de Camp’s edit, Conan is running from betrayal by the King of Aquilonia when the first group of Picts captures him and he ends up able to recover all the treasure. That’s tighter and more surprising, given the character’s pattern with treasure.
Your thoughts?
Next OT, we’ll be looking at two very early stories, in which Conan is a king.
it was meant to be a temporary stop on his way away from the corrupt stink of Zingara’s court, and he’d go somewhere else in civilization if he could, “to Vendhya, or Khitai—”
Did anyone point out that if he intended to head to Khitai, sailing off West to the coast of Pictland was the wrong direction?
I found the story confusing because of all the different factions plus everyone planning to knife everyone else in the back. Conan plotting to kill the pirates was nastily realistic, given that they were all hoping to do the same to him and to each other, but as you say it does leave a bad taste in the mouth. Pragmatic, ruthless and yeah what a real pirate was like, but we’re more used to Conan doing his killing face-to-face and being attacked first. Hanging around ‘civilised’ people has had a bad influence on his plain barbarian straightforwardness!
The ending with Belesa at least does address “so what happens to the girls when Conan and they fall out of lust and he moves on to his next adventure?” and that unlike him, unless they’re Belit or Valeria or Red Sonja, they will need someone to set them up with enough money to keep themselves for at least a while or to take them on and support them. This is why the princesses and queens either stole a private moment for the burning kisses or didn’t fall into his arms at all at the end of the story, because while it’s okay for an adventurer to love ’em and leave ’em, a woman has to maintain her social standing as respectable or else lose it all.
It works best probably as a fix-it-up explanation for how Conan went from having one ship under him to wandering around doing some jobbing soldiering to getting back into piracy again, but it’s not a favourite story of mine (and the dissonance between “This is supposed to be the Picts, who lived in what is broadly Scotland, plus Roman/Saxon/whatever European colonists, but it’s plainly a Western written in terms of North America and Native Americans” of the Black River stories gives me a headache every time).
Heh, no.
Totally agreed, except that I liked every faction trying to strike deals predicting what oversights could get them backstabbed, with two people in the Count’s faction having the agency to undermine him.
It’s a good issue to be honest about. These women who don’t have adventurer skills would end up in terrible circumstances in pre-modern civilization.
That’s fair. It seems there truly were times on the periphery of history when groups of white people were in the same position vis-a-vis civilization as 16th-19th century Native Americans* (save that they had the same disease resistances as everyone else), but calling such people “the Picts” and writing a Western about them can be headache-inducing.
*IIRC, proto-historic Spain was colonized for its silver and other metals, with land being seized and locals enslaved to increase mine productivity.
The fourth and final book I read about intelligence.
Inventing Intelligence (2012) By Elaine Castles
This is mostly an “Anti-IQ” book. She believes that IQ is a good clinical tool to help individuals understand their strengths and weaknesses, but not a good way to rank people on their intellectual abilities or to use to make education or employment decisions.
The first two-thirds of the book covers the history of IQ testing, starting with various kinds of testing in the 19th Century. It was somewhat interesting to read the history, although she often inserts snide remarks about bias in these old tests, comparing them to present-day IQ testers. She implies throughout this section that IQ testing and even merit itself are not good ways to measure people.
Finally in chapter 10 she talks in more detail about her own skeptical view of IQ testing. She sometimes includes the point of view of “pro-IQ” advocates. But she then states as gospel the results of one cherry-picked study that lines up with her own beliefs. It is true that both pro-IQ and anti-IQ sides cherry pick the studies they present, but it appears to me that the anti side does this far more prolifically than the pro side. In all the anti-IQ books and articles I’ve read, the same half-dozen studies show up in every one. I don’t see this effect in the pro-IQ books and articles; I think this is because there are a lot more studies on their side. Maybe this is my priors speaking, but that is what I see. And this book is very much in this direction.
Positions she states:
1) IQ tests do not measure creativity or out-of-the-box responses
2) Anxiety, depression, curiosity, impulsivity, distractibility, and motivation all affect IQ scores
3) IQs account for only 25% of grade variation
4) Self-discipline, ability to delay gratification, belief in utility of one’s efforts affect grades more than IQ
5) IQ explains only 4-18% of income
6) <4% of delinquency/crime explained by IQ
7) <2% of divorce and unemployment explained by IQ
8) The adaptive view of IQ is multi-faceted and a better measure of a person than psychometric testing
a) Reasoning abilities, social competence, creativity, problem solving do not correlate highly
b) One example of adaptive testing explained college grades as well as SATs
c) Although she admits these facets are hard to measure
9) She suggests that heredity might contribute only 30-40% to IQ
10) She believes White-Black gap is all environmental, for mostly the same reasons as Nisbett does, although Castles emphasizes discrimination more as the environmental explanation (Nisbett talks about the Black culture as a problem).
11) She doesn’t believe IQ selects well for college or job.
I think she is mostly wrong in these comments, such as for #6, #7, #8a, #9, #11, and maybe #3, #5. I think she uses a cherry picked study for each of her beliefs. She seems to be avoiding all the studies that show that many of these attributes do highly correlate with each other, and they all correlate with education and job performance. Of course what is considered a high correlation is a judgment call.
You’ve been substituting ‘heredity’ for ‘heritability’ since the very first post of this series, both when relaying estimated heritability percentages and in responding to commenters who themselves correctly used ‘heritability’.
I didn’t point this out when I saw the first post because there were enough knowledgeable others that I assumed someone else would. But while several people did try to explain what heritability is–the proportion of the variation in a trait within a given population explained by variation in genetics–it appears that no one ended up explicitly pointing out your mix-ups.
And, though it’s hard to say for sure since I can’t see inside your head, I think this confusion is likely more than just verbal. For instance, if one really understands what it means for something to be heritable, it’s hard to see how one could get the mistaken impression that heritability figures give an upper bound for what environmental changes can result in.
The mix-up here might be the same one that Ned Block pointed out in response to The Bell Curve originally, between two different senses of something being “genetically caused”.[1] Sentences like “She suggests that heredity might contribute only 30-40% to IQ” certainly suggest the reading of ‘genetically caused’ that is not captured by ‘heritability’.
[1] As usual of late, the spam filter won't let me get away with even a single link, but you should find a relevant article by googling "how heritability misleads about race".
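To make the verbal point concrete, here is a toy simulation (numbers invented purely for illustration, not anyone’s actual model): heritability is just the share of trait variance attributable to genetic variation within whatever environments currently exist, which is why it puts no ceiling on what a new, uniformly applied environmental change could do.

# Toy simulation, not anyone's actual model: "heritability" here is just the
# share of trait variance attributable to genetic variation, computed within
# whatever environments currently exist in the simulated population.
import random

random.seed(0)
N = 100_000

genetic = [random.gauss(0, 10) for _ in range(N)]      # genetic contribution
environment = [random.gauss(0, 7) for _ in range(N)]   # existing environmental variation
trait = [100 + g + e for g, e in zip(genetic, environment)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

h2 = variance(genetic) / variance(trait)
print(f"heritability ~ {h2:.2f}")   # roughly 10^2 / (10^2 + 7^2) ~ 0.67

# A new, uniform environmental intervention (+5 points for everyone) shifts the
# mean without adding any variance, so measured heritability stays the same even
# though the environment just "mattered" by 5 points for every single person.
boosted = [t + 5 for t in trait]
print(f"mean shift: {sum(boosted)/N - sum(trait)/N:.1f} points, "
      f"heritability still ~ {variance(genetic)/variance(boosted):.2f}")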
You need to explain this further. There was someone in a previous thread who stated pretty much this, except they explained it more fully. I responded that yes, technically there isn’t a cap if you can come up with some environment outside what exists today that increases intelligence greater than what we have now. But practically speaking there is a cap, until someone comes up with such an environment. Based on current environmental variation, the environmental % of IQ is the cap for how much we can increase IQs. Please explain why if you disagree with this.
Perhaps I should say heritability instead of heredity, but I think it is just semantics. At this point I don’t get your issue.
Yes, I am mainly talking about the issue you bring up here. But phrasing it as about “coming up with” a way to develop an “environment outside of what exists today” makes the issue sound far more sci-fi than it really is. For one thing, it calls to mind the image of creating a new environment in its entirety (say, placing kids in an experimental bio-dome), when in reality small additions to the current environment count too.
But even things that already exist in the measured environment can have an impact greater than heritability numbers suggest. As long as some IQ-affecting feature of the environment isn’t very common currently, it won’t affect variation in intelligence very much, so by the same token it won’t affect heritability measures much. But if the feature became more common it could have a much larger effect.
The Flynn Effect supports the idea that we have a limited understanding of many of the environmental factors that have been capable of increasing IQ. So I don’t think we should be surprised if future ones do so in ways we don’t have a clear sense of yet.
As a side note, looking back at my original comment I think I could have done more to blunt its harshness, especially considering that your series of posts exemplifies a type of content I’d appreciate seeing more of in these threads. So thanks for the effort you put into them.
First of all, thanks for the last comment. The original one was a bit harsh, but I mostly disliked it because I didn’t think I really understood what you meant, and I hate it when I am arguing something and there is no meeting of the minds. So I am very relieved that you meant what I guessed you meant. 🙂
Yeah I do think it is kind of sci-fi. Education theorists have been working intensively on the problem of increasing intelligence for decades and haven’t come up with much. I don’t think there are environmental solutions out there that will increase intelligence significantly. It appears to me that the best way to maximize the intelligence of a person is to immerse them in a highly intellectual society where parents, peers, and teachers all reward cognitive thought and problem solving. But this is the current environment of some people, so I think it does fall under the current cap of how much is environmental.
I believe that the only way to significantly increase intelligence will be to work on the heredity portion, that is directly improve the genetics of individuals. I guess this could be done by some kind of selective breeding (which makes me nervous if the government has control), or gene splicing (which makes me nervous in a different way). Neither one of these is foreseeable in the near future.
If we are to make a significant effect on the environment, someone has to come up with some radical new technique, so definitely what is now sci-fi.
Yes the Flynn effect is hard to explain, and the existence of it does somewhat decrease my confidence in any of my judgments. But my best guess is that most of this effect is NOT an increase in intelligence, but simply an increase in the ability to take IQ tests. IQ tests never have and never will test intelligence precisely. But I think as schooling increasingly reaches the poorest areas, it’s getting to be that almost everyone is used to taking tests. Thus test-taking skills have definitely increased over the decades. This is not an increase in intelligence itself, but does result in IQ test increases. And I think IQ tests are becoming more accurate as the population is becoming more equal in their experience in taking tests. The Flynn effect may reflect some increase in intelligence, but I think it is mostly test-taking skills.
I think there are a few environmental interventions we know that will actually raise average IQ, without either some kind of selective breeding of humans or genetic engineering of humans. Those mostly don’t apply to middle-class-and-up Americans, but they’re a big deal in much of the world:
a. Proper sanitation, including mosquito control. Kids that spend a lot of their childhood sick, or that are sharing their meager food supply and brain development budget with a bunch of parasites, are not going to develop their brains as well as kids without those handicaps.
b. Getting lead and other environmental toxins out of the environment. In the US, poorer people tend to be more exposed to lead than richer people, which probably explains some of the difference in average IQ between poor and rich kids. If I were king, we’d be spending about 10% of the military budget on lead abatement.
c. Nutritional supplements (stuff like iodized salt) prevent deficiency diseases that stunt brain development. It’s possible that vitamin D deficiency affects blacks more than whites and explains some of the IQ difference, so this isn’t 100% a third-world problem. But mostly, rich countries have already done this stuff and reaped the rewards.
d. Extra years of school seem to raise IQ a little bit later in life, so making sure everyone goes to school for many years probably helps. It seems plausible to my amateur mind that this actually represents an increase in intelligence (giving you an intellectually demanding environment for another couple years when your brain is developing might help), but it also seems plausible that this is just leading people to be better at taking tests and so raising IQ scores without raising actual intelligence.
I also suspect there are ways we can raise average IQ that may also have a big uneven effect. IQ becomes more heritable[1] as you get older, and it seems likely that this has to do with self-selected intellectually complex/simple environments. Smart people tend to spend more time in intellectually engaging pursuits, and this has some kind of effect on them once they get out into the world and their parents are no longer providing their environment. So offering more intellectually engaging/demanding things to do with your time, and more intellectually demanding environments, probably makes it easier for smart people to self-select into a more stimulating environment. Alice never cracks a book after high school and watches TV/plays on Facebook when she’s not at her job driving a school bus; her adoptive sibling Bob reads a couple books a week and plays internet chess for fun when he’s not at his programming job.
[1] That is, imagine looking at the set of 15 year olds and the set of 30 year olds, where they were all raised in adoptive homes. When you look at 15-year-olds, less of the variation in their IQs is explained by their parents’ IQs than for 30-year-olds.
@albatross.
My comments were about the US. There is clearly much potential for increasing intelligence in 3rd world countries by improving conditions there.
Improving conditions in the US can also increase intelligence, but that’s where we run into the cap, where the percent due to heredity cannot be changed without some dramatic new technology, whether in the environmental area or in improving genetics. And in fact I believe the US is continually doing this, which explains the Flynn effect (both the IQ-testing improvements and true intelligence improvements). Lead has been dramatically decreased (isn’t that the usual explanation for decreasing crime since the ’90s?). And schooling now reaches far into rural areas where it previously did not. Those are good things, but I’m not sure the current improvements can be accelerated.
Could you or someone else point me to your other reviews on this topic? Thanks. EDIT: I found them, thanks.
Your mission, should you choose to accept it, is to change the lyrics to Acadian Driftwood so that it’s about Akkadians.
Warning: it’s an automatic loss if you can’t keep “what went down on the Plains of Abraham” unchanged.
Oh, I wish I were more creative, because that could work brilliantly. All I can offer is this Tumblr post about Ea-Nasir, the Del Boy of Mesopotamia 🙂
— prestigious museum
This is so amazing.
It’s the British Museum.
Of course they have the world’s oldest extant formal letter of complaint.
Thought it was in A History of the World in 100 Objects but I was mistaken.
It’s the bones of 4,000 years but human nature never changes 🙂
“You stiffed me on the delivery and then the material you did send was crappy quality! Okay, so I still have an outstanding bill on account with you but that’s beside the point – this is terrible customer service and I will be writing a STRONGLY WORDED LETTER OF COMPLAINT!”
And thus, I really want a sitcom set in ancient Mesopotamia.
Sitcom guest star: Laban, Jacob’s father-in-law! “Oh, I’ve got this relative who’s come to me after running from his brother. And he volunteers to work for room and board for seven years to marry my more-beautiful daughter? Well, why don’t I give him the less-beautiful one instead and save the more-beautiful one for some other deal!”
Sitcom guest star: Laban, Jacob’s father-in-law!
The ongoing sub-plot over the series is these two both trying to out-manoeuvre the other, because they’ve got that family resemblance of being just that bit too clever for their own good.
You can imagine Laban: “Look, I’ve got two unmarried daughters, eldest girl is a lovely girl, lovely girl. Not a looker, no, that’s her sister, but a lovely girl all the same. Make a great wife for any man. Not her fault all the young guys are only interested in hot chicks, you know?
Now this nephew of mine – oh, what a trouble maker! His mother’s fault, she spoiled him. Well that’s my sister for you, always has to have her own way. Anyhow, he gets up to some shady business with his elder brother and she steers him my way to keep him out of trouble. Must think I’ve gone soft in my old age, eh?
Ha ha, no, I know a trick worth two of that! My nephew, such a smooth-talker, thinks he’s pulled the wool over old uncle’s eyes. Well, he wants to marry my daughter, I’ve got two lovely daughters like I said – one is as good as the other, even better, right?”
Relevant to those interested in psychotherapy –
The evidence for evidence-based therapy is not as clear as we thought
Disclaimer: I’m not a psychotherapist, nor do I play one on TV.
Overall I came away impressed and hope to see more research like this done. That said, I’m a little concerned that there was no mention of the idea that the same issue like depression or anxiety might have different root causes in different people, and different therapies might be appropriate depending on the cause.
A similar issue seems to hold just for psychiatric medications: there are a ton of antidepressants, and finding which one will work best for you with the fewest side-effects is basically trial and error.
If treatment A helps 80% of the population and treatment B helps 20%, giving everyone treatment A is certainly better than flipping a coin to decide which treatment to give. I worry, however, that making it a general rule “use A, not B, since it’s better for more people” could morph into “never, ever use B, even when A isn’t working.”
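To put toy numbers on that worry (all assumptions mine: suppose responders to A and B are disjoint groups, 80% and 20% of patients respectively), a blanket “A for everyone, never B” rule leaves the B-responders permanently unhelped, while “try A, then switch to B if A fails” helps everyone in this toy world:

import random

# Toy model, assumptions mine: each patient responds either to treatment A
# (80% of patients) or to treatment B (the other 20%), never to both.
random.seed(1)
N = 100_000
responds_to_A = [random.random() < 0.80 for _ in range(N)]

a_for_everyone = sum(responds_to_A)                      # policy: always give A
coin_flip = sum((r if random.random() < 0.5 else not r)  # policy: randomize A vs B
                for r in responds_to_A)
a_then_b = N                                             # policy: give A, switch to B on failure

print(f"A for everyone: {a_for_everyone / N:.0%} helped")   # ~80%
print(f"coin flip:      {coin_flip / N:.0%} helped")        # ~50%
print(f"A then B:       {a_then_b / N:.0%} helped")         # 100% under this toy assumption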
Why did this happen?
https://www.cnn.com/2020/03/05/us/seti-home-hibernation-alien-trnd-scn/index.html
Thanks to Moore’s Law, did the SETI network’s computers gain the upper hand on analyzing the huge trove of radio recordings faster than the trove could grow, and did the last significant chunk of data recently get analyzed?
I was going to help out with SETI back when, but I asked a friend who’s expert in astrophysics – “If aliens on a planet around Proxima Centauri were running SETI, could they detect us?” She said no.
The Wait for Extra-Terrestrial Intelligence’s programme of checking the news every now and then to see if ‘Aliens arrive’ is the top story is still going strong.
In a recent episode of Star Trek Picard, the heroes had to fight with an old, decommissioned Romulan space ship that a private warlord had somehow obtained. He and his crew used it to dominate several planets in a lawless and poor part of space.
Have there been any real-life examples of something like this from 1900 onward? I’m imagining someone like the leader of a powerful group of insurgents or a drug lord acquiring a retired warship and using it to control an island or a stretch of coastline.
I don’t know of anything 1900s onward, but in the 1830s James Brooke leveraged his control of a schooner with cannons into eventually becoming the King of a section of northwest Borneo. I mean he didn’t conquer it outright (he got the Sultan who controlled the area to give it to him as a vassal, and then eventually became independent), but without that boat and the cannons it probably wouldn’t have happened. That’s the best example of a random dude using a warship to outright rule an area which was poor and technologically behind.
There are other people who really should be the ones to answer this, but they haven’t, so I’ll just say the thought I had: something tells me that as you get into the era of steam-powered warships, the fuel demands and complexity of operation will mean that the ships are somewhat useless without the logistics of an actual navy behind them.
FLWAB already brought up the example of James Brooke, but note that he was semi-officially representing the British Empire at the time.
Really, if you were going to do this sort of thing entirely on your own account, you wanted to be done with it and joined up with a proper Empire by the end of the eighteenth century. See also the history of Belize, for how to do that right. Once the Napoleonic wars came to an end, the great colonial empires had large and capable navies and nothing better to do with them than to check up on pretty much every habitable place on earth, regularly enough that nobody was going to build a mini-empire without their notice. Also to suppress the slave trade, and just about anything you can do to build an empire on the basis of “I’ve got cannon and you wogs don’t” is going to look kind of like slavery.
So, it doesn’t work any more unless at least one major empire wants it to work and wants it strongly enough to lean on the other major empires to back off and not just claim the place for themselves.
Over the next few generations we get steamships, telegraphs, crusading journalists, and bureaucrats numerous and capable enough to directly administer every Empire’s territories without having to outsource the work to freebooters like Brooke.
There are a number of Soviet weapons systems floating around the Middle East in the hands of non-state actors. ISIS operated Scud missiles, a surface-to-air missile, and maybe some MiGs.
Nothing with the scale or complexity of a warship, though, and it’s not an asymmetric threat like you describe. Weapons, including very powerful ones, are just plentiful.
In the “News Articles I didn’t Expect to Read in 2020” category, the FDA has banned schools from using electric shocks as punishment.
I had no clue this was even a thing, but apparently there’s one school for the disabled in Massachusetts that uses small devices that provide painful shocks to their target as punishment. To be fair, all students who wear these devices are adults, it’s only done with the family’s consent, and the article cites at least one person who seems to have benefited from the use of these devices.
That being said, you know, it’s still a system whereby officials can press a button and shock people; if this was a prison or any other kind of non-disabled victim I’d be 100% against this device, so I see no reason to change my previously held “don’t shock people as punishment” bias.
Is there any reason I shouldn’t just chalk this up as an example of the FDA getting something correct?
Would you accept a general appeal to federalism – that this doesn’t need to be decided by the federal government specifically?
Maybe?
If the only place in this country that is considering or would ever consider using this practice is that one school in Mass, it does seem like overkill to use the power of the federal government to end it. On the other hand, if this is a practice that had the potential to spread AND is not medically sound, I’m perfectly fine with the FDA stepping in and nipping it in the bud.
I don’t know enough about the topic to decide which option is more accurate.
Sounds to me like the practice IS medically sound, the problem is that the school did the thing institutions do and started using the zapper at the first sign of hesitation to comply rather than only in the most extreme cases.
Wasn’t there some sociological study awhile back that demonstrated electric shock was a good way to train people?
/s
I thought that was a test to find out how willing people were to shock actors.
I’d make a pun here, but I think we all covered them already.
I’m going to take this opportunity to link my favorite comment of mine ever.
I think my favorite recent one was “Bingo.”
@Nick
I’m glad you reminded me of that. I read it at the time, and thought you were just applauding Conrad Honcho’s comment. Now I get it!
If there’s only one school doing it, and it’s done for a few specific people with major problems, then I suspect it’s going to be hard, from the outside, to tell whether this was:
a. A barbaric horrible practice that got accepted and normalized there and persisted way too long. In this case, the action is good and the students/patients will be better off as a result.
b. A genuinely helpful and humane practice that looks horrible and pattern-matches to cruelty to the helpless, and so it’s being shut down by uninformed do-gooders. In this case, the action is tragic and the students/patients will be worse off as a result.
I mean, my first guess is “barbaric horrible practice,” but I have to acknowledge that I don’t know enough to be entitled to a strong opinion here. And many things that look needlessly cruel end up actually being the best thing anyone knows to do–think of chemotherapy for cancer patients, or basic training for soldiers, or letting toddlers fall down and skin their knees and occasionally break a bone rather than keeping them pent up inside wrapped in cotton.
Is this the Judge Rotenberg centre?
Founded by a colleague of B. F. Skinner, so I’m not surprised they’re so big on operant conditioning.
The JRC deliberately modifies the electrodes to deliver more power for a longer time. (up to 45mA RMS for 2 seconds)
Multiple packs are often applied to the same pupil.
The shock units are used in the bath/shower, against manufacturer instructions.
Shocks are administered for very minor infractions.
From what I’ve read, the highest praise I can give to the staff is that they have the moral high ground compared to the SS-TV and NKVD.
My big question for you would be why you are against shocking people as a punishment or training method in the first place. If it were to turn out that you are just against physical punishment in a general sense whether it’s effective or not and no matter how it’s used, I’m not sure how you’d be convinced.
If we look at your second-to-last paragraph, the words “That being said, you know,” are meant to dismiss the parts you’ve already acknowledged about this system perhaps being effective and good. And after dismissing them, you then ask for OTHER reasons to convince you it should be kept, after having said you think electroshock is bad in a fundamental, inherent way.
That’s not a fair way to ask to be persuaded on this. It rephrases to something like “Effectiveness and examples of this helping people are assigned a near-zero value. You now have to convince me this is a net positive without using the most effective evidence available. Also, I have a visceral reaction against this particular practice”.
The most effective arguments for this are going to look something like “You yourself brought up a situation where this is helping people. Unless you fetishize pain avoidance or shocks-as-evil, your own words indicate this shouldn’t have been banned”. That argument may or may not be right and electroshock may or may not be possible to use in a net-beneficial way, but if you indicate in your question’s construction that the above argument is hand-waved from the get-go, I’m not sure you are going to get the vigorous defense you want.
I’m not the guy to provide that vigorous defense in the first place, but I know I wouldn’t try in this particular thread for the reasons above.
When someone starts talking about “video games,” which are you more likely to immediately think of?
1. A narrative experience (think single player, high amount of storyline, clear progression)
2. Repetitive entertainment (think multiplayer, high amounts of repetition, storyline/progression unimportant to nonexistent)
@Matt M says:
The Asteroids and Missile Command games I played in the ’80’s, or the Naruto and Need for Speed games my son played a few years ago.
I’m guessing those were all more like #2, but the video games I played in the ’80’s were seldom multi-player.
Hrm, what about single player sandbox games with storylines that emerge on their own?
That’s mostly what I play, e.g. EU4, HoI4, Civ VI.
I do play some Path of Exile, but again, I play it pretty single player sandboxy.
Those go in bucket 2.
I know what you mean, but I think saying that “storylines emerge” in Civ is putting it rather generously, as that relies pretty heavily on the player’s active imagination. Those games are definitely designed to be played multiple times.
Then definitely bucket 2.
And if you’ve never changed strategy to nuke Alexander’s stupid face because he said something you aren’t playing Civ right.
When I first tried to learn to play Civ, I was probably in elementary school, and for some reason I got it in my head that “irrigate the entire landmass of Eurasia” was a productive goal…
That reminds me of playing Simcity with the cheat code, where the first thing I do after getting unlimited funds is plop down a 9×9 grid of railroad across the entire map to minimize pollution and maximize efficiency.
I’m sure Nick will revoke my Trad card for that crime against organic expression of communal knowledge.
@Randy M
I’m updating the membership lists as we speak.
I’ve actually played lots of Simcity and other city sims, and believe me, much ink could be spilled at request on the ways they inculcate unrealistic assumptions about traffic and modern cities in general, with the result that they turn our kids into little traffic managers instead of artists. This abominable crime I lay at the feet of Will Wright.
I think I think of 1 more, but I play 2 more.
Eh, both about equally.
What I’m not thinking about is all the mobile or flash games that somehow seem to occupy a lot of market/mind share with the public at large.
Also, there are good single-player games with high repetition and low storyline: roguelikes and such. There have been multiplayer games with story and progression, but usually more so as an option on a single-player game, like Baldur’s Gate.
Those go in bucket 2.
Basically I’m trying to craft a categorization scheme that doesn’t depend on single/multi player (although will be highly correlated with it).
So what do you think goes in bucket 2 that clearly isn’t this? If nobody thought of Tetris until 2015, it would have been a free to play mobile game. Just saying.
Oh, say, Call of Duty, Eternal, Street Fighter, Super Smash Brothers.
Yeah, probably. Or else a minigame inside a sprawling open world sandbox that you master to get +2 to your loot box rolls.
I got ya covered, fam. (Do us young people still say “fam”?)
Linear–Gameplay is delivered according to a pre-programmed, sequential progression, usually tied to a narrative, however loosely. Upon completion, there will be no new content. Fairly light on emergent properties. Variations include the Sloped (gradually increasing difficulty) and the stair-step (new features unlocked periodically). Examples would be adventure games like Myst, platformers like Mario, etc.
Cyclical–Gameplay occurs in a series of matches. Content is open ended and may have many emergent aspects as various systems interact. Narrative likely exists on a back-ground level but does not really drive the gameplay in any meaningful way. See Real Time Strategy, digital trading card games, arena shooters, etc.
Hybrid: Daisy-Chain: Gameplay consists of a series of matches with increasing options and challenge, which may be distributed according to a narrative, culminating in some kind of final match. See the single player campaigns of real-time strategy games or shooters (if the missions aren’t too tightly scripted to make it essentially Linear), the tutorial mode of digital trading card games, Tactical RPG games, JRPG games like Final Fantasy.
Spiral: Gameplay is essentially a series of matches, but new options are gradually unlocked. However, matches are not chained together with a wider narrative and gameplay options may be added in a variety of ways. See Roguelikes, such as Slay the Spire or Risk of Rain.
Forked: Mostly Linear, but with enough branching choices to make consecutive play throughs required for seeing all the content.
Sandbox: Game is structured in independent matches with little or no overarching narrative, but the matches last arbitrarily long, allowing the player to continuously develop the world. Examples: Skyrim, Dwarf Fortress, Minecraft, Factorio.
Of note to this entire discussion. This article:
https://heterogenoustasks.wordpress.com/2015/01/26/standard-patterns-in-choice-based-games/
Created the taxonomy we in the interactive fiction world use to describe our games.
Personally – 1
Any time I’m hearing about it in pop culture – 2
I think of a big blob of stuff that includes both of those things.
I’m most likely to think of #2-specifically of the arcade-type games.
Is your categorization scheme intended to encompass computer games as well? Minesweeper, Jezzball, online chess and bridge, Angry Birds, 2048, the various balloon popping/space filling games…
Yes, and all of those go in bucket 2.
Another way of thinking about it is, “is a video game closer to an interactive movie, or a high-tech toy?”
In Bucket 1, Red Dead Redemption 2 suffices as a reasonable substitute for The Good, The Bad, and The Ugly.
In Bucket 2, Tetris suffices as a reasonable substitute for a cup with a ball attached to a string.
They are both video games, but the role they are serving and the sort of things they are replacing are quite different.
Also sandbox games, which substitute for sandboxes.
(Silica-based voxels?)
Personally, the only video game I ever played heavily was Minecraft, which is… 2, I guess.
In general, when someone says “video game,” a picture of a first-person shooter and the words “Call of Duty” are probably the most likely to be first in my head, and… I’m actually not knowledgeable enough to know which type that is, but I’m guessing 1.
CoD has both a single-player campaign and a storyless multiplayer mode, so it’s both, although the multiplayer is probably the main appeal.
I think of a song by a hot duck-faced woman. I don’t think I’ve ever referred to computer games as video games, and I’m not sure anyone I know has either.
I think of a FPS, so two. I would think of one if someone said “computer game”.
False dichotomy. The best of narrative gaming isn’t cinematic at all, it’s sneaking up on the player with its themes/story delivered via gameplay. Universal Paperclips, Braid, and Touhou blur between the categories. And then you have games that openly do both, such as LoZ Four Swords, GTA, or Don’t Starve.
This is the craziest market in my lifetime, and I am 40, so I sort of knew something about what was going on in 2000 and was following reasonably closely in 2008. The big thing right now, as far as I can tell, is how treasury yields are acting. In 2008 there was a massive drop in the 10-year yield from early November into December, from ~3.9% to ~2.1%, a 46% decline in yields. The drop right now, from December, is from ~1.9% to 0.7%, a 63% drop, and it might not be finished yet.
What is also interesting, verging on terrifying, is that the yield crash of 2008 came second: the S&P was down 33% from June by early November, and ~37% off the late-2007 highs (yields fell from over 5% in 2007 to that 3.9%; the overall decline from peak to trough was from ~5.1% to ~2.1%, a 59% decline over almost 18 months, which is still less than the 63% decline over the last 3 months*). But stocks are only 10-15% off their highs right now.
*Yes, bond yields aren’t great for comparisons across time, I’m just trying to highlight how large this move is in really any framework, and this comp is vs the GFC**.
** Speaking of the GFC, is this going to be like ‘The Great War’ that had to be renamed WW1 just a couple of decades later? The Great Financial Crisis might end up looking like trench warfare vs the blitz.
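For anyone who wants the comparison laid out explicitly, here is the arithmetic behind the yield figures above, shown both as absolute and as relative moves (approximate numbers as quoted; the debate below is partly about which framing is more meaningful):

# Quick arithmetic check of the yield moves quoted above (approximate figures),
# shown both as absolute drops and as relative (percentage) drops.
moves = [
    ("2008, early Nov to Dec, 10-year", 3.9, 2.1),
    ("2007-08, peak to trough",         5.1, 2.1),
    ("Dec 2019 to now, 10-year",        1.9, 0.7),
]
for label, start, end in moves:
    absolute = start - end
    relative = absolute / start
    print(f"{label}: {absolute:.1f} points absolute, {relative:.0%} relative")
# 2008: 1.8 pts / 46%; peak-to-trough: 3.0 pts / 59%; current move: 1.2 pts / 63%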
Great Financial Crisis 2: Coronavirus boogaloo
I closed out almost all my shorts at the open this morning. I really need to step back a bit, crazy emotional market.
Well that didn’t last long, back in (some of) my shorts!
Do you benchmark your performance on a risk adjusted basis?
If you can successfully day trade THIS market, I congratulate you. I wouldn’t have the stomach for it.
@ anon-e-moose
My shorts are a small portion (~25%) of our portfolio. I don’t really worry about risk-adjusted gains here; I’m more looking for outsized gains.
My big move in our primary portfolio was to go 100% T-bonds back in October, so if I counted risk adjusted I would be even happier.
@ The Nybbler-
I don’t consider this day trading; my average day the past two weeks has probably had 0.5 trades executed, and this is the most active I have been. I am trading options, which requires certain day-trading-like behavior.
In actual day-trading behavior I straddled the s&p at the close.
If you are comparing bond yields, I’m not sure that % change in yield is the most appropriate measure, particularly since we’ve seen that 0 is not a hard lower bound. I think raw change in yield is a little more representative of how much things are changing? Better is probably a blend somewhere between.
0 is still a soft lower bound, the negative rates were heavily subsidized with CBs granting concessions that made them close to zero in most scenarios for the actual bond holders.
Even if we are talking raw yields the 1.2% drop is ~ 2/3rds the size of the largest similar length drop during the great financial crisis.
> December with a drop from ~3.9% to ~2.1%, a 46% decline in yields. The drop right now is from December is ~ 1.9% to 0.7% a 63% drop, and it might not be finished yet.
I don’t think it makes much sense to look at fractions of interest rates like that. Interest rates are not like stock prices. Imagine what you would be saying if the rates went negative. An infinite decline? A more useful view is that they went down 1.8% before and they’ve gone down 1.1% now.
> Speaking of the GFC, is this going to be like ‘The Great War’ that had to be renamed WW1 just a couple of decades later? The Great Financial Crisis might end up looking like trench warfare vs the blitz
So far, there’s no reason to think this will be anywhere near as bad as the Panic of 2008.
On the market, long-term bonds act like stocks, with capital gains when yields move. If I have a bond yielding 2% and yields drop to 1%, my 2% bond has increased in value, and that increase will be (not linearly) proportionate to the change in yields.
Yes there are better formulas than straight % declines, and I also noted the absolute decline as well. The point is that I am comparing it to one of the steepest yield drops in history which highlights how insane this action is.
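On the bond-price point above, a minimal sketch with a made-up bond (standard present-value pricing, nothing fancier) shows why a yield drop produces a capital gain, and why the gain is bigger the longer the bond:

# Minimal sketch with a made-up bond: price a fixed-coupon bond as the present
# value of its cash flows, then see what happens when market yields fall.
def bond_price(face, coupon_rate, market_yield, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_yield) ** years
    return pv_coupons + pv_face

# A 10-year, 2%-coupon bond when market yields fall from 2% to 1%:
at_2 = bond_price(1000, 0.02, 0.02, 10)   # ~1000, priced at par
at_1 = bond_price(1000, 0.02, 0.01, 10)   # ~1095
print(f"{at_2:.0f} -> {at_1:.0f}, a gain of about {at_1/at_2 - 1:.1%}")

# The same yield move on a 30-year bond produces a much larger capital gain,
# which is the sense in which long-duration bonds trade like risk assets.
print(f"30y: gain of about {bond_price(1000, 0.02, 0.01, 30)/bond_price(1000, 0.02, 0.02, 30) - 1:.1%}")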
The market’s going utterly nuts responding to rumors and fears rather than data and reasoning. Even more so than it usually does. IMO trying to figure it out short term is an impossible task unless you have some incredible instincts.
The thing about 2008 is it wasn’t just crazy; there were fundamental economic problems it was responding to.
Are you arguing that there aren’t fundamental economic problems right now?
There are the sort of economic issues which have caused 10 out of the last 2 recessions, certainly. But the current market craziness has little to do with that.
The current market craziness has a lot to do with leverage/over liquidity. Economically there are major problems in bond spaces.
Have you seen what’s going on in China?
Epidemic (in China) no longer growing exponentially, factories reopening, that kind of thing?
It’s no longer growing exponentially in China because the factories are closed. Maybe they can reopen them and not have it grow exponentially. We haven’t seen that yet.
I’m more worried about what’s going on in Washington.
You know what really gets me?
We are in a supply shock and a demand shock. And central banks are cutting rates (Fed) and buying equities at a record pace (BoJ). PRINTING MONEY DOES NOT BOOST THE ECONOMY WHEN FACTORIES ARE IDLING AND PEOPLE AREN’T BUYING THINGS – I want to scream this into the void.
Central banks have completely surrendered to the markets and are targeting asset prices.
That’s how you get inflation. See John Law and the Mississippi Company.
Lower rates do help. Companies facing temporary revenue shortfalls due to (say) supply chain disruptions will use credit to maintain solvency, avoid layoffs, etc. until their business improves again.
Even if the shortfalls are temporary, they have to be temporary and transient. For this to work, demand has to pick back up not just to previous levels but also enough to fill in the gap from the shortfall, AND businesses have to not significantly adjust now that they have experience with supply chain disruptions.
There is also no real reason to avoid layoffs with the financing available. Cheaper to furlough and borrow than keep everyone on the payroll.
It’s probably true that layoff and rehire is cheaper for unskilled workers, but not for skilled ones. It takes a long time for a new hire engineer at someplace like NVIDIA to reach full productivity.
But anyway the more general point is that credit helps companies avoid being forced to take short-term actions to survive that are bad for the long-term health of the business. Layoffs of difficult-to-replace employees, sales of critical assets, closing down long-term projects that aren’t revenue-producing yet, bankruptcy, etc.
This is no different from bankruptcy due to a short term disruption. Your entire labor force isn’t getting fired on the day you file.
I don’t see your point. Bankruptcy is an example of a costly measure you’d want to avoid, right?
For a rate cut to positively impact employment it has to maintain more jobs than what would occur without the rate cut. Your reply above largely implies that the main way it would do this is by maintaining solvency, so the base case would be how many employees would be laid off under bankruptcy without rate cuts vs whatever happens with rate cuts.
My point is what I said, “credit helps companies avoid being forced to take short-term actions to survive that are bad for the long-term health of the business” and layoffs were an example. Bankruptcy is another example and it’s extremely costly.
Also, companies will definitely do some costly layoffs if it lets them avoid the costs of bankruptcy.
This is specific to an individual business, it has to expand to being good for the overall economy which includes those surviving businesses who would be able to buy the bankrupt ones cheaply and start expansion again.
There is little reason to think that an increase in liquidity via a rate cut will actually lead to maintaining employment levels in the face of a supply chain disruption. If the disruption is short enough to keep people on staff without them having productive work to do then a bridge loan really shouldn’t be needed. If the disruption is longer then the bridge loan won’t prevent layoffs. All that a rate cut should do is to increase the incentive to borrow under uncertainty about the outcomes, which means the Fed is encouraging leverage during a crisis, which is literally the exact opposite thing from what should be prescribed from a theory perspective.
No it is not specific to the company. Bankruptcy imposes real costs, it is not merely transfer of ownership.
You are assuming the conclusion. The longer that bankruptcy is put off, the more debt the bankrupt company owes, and the lower the rates it owes, the worse the bankruptcy is and the more costs it imposes. If the virus is increasing the likelihood of bankruptcy then borrowing companies should be facing higher, not lower, rates.
Therefore your position has to be that the lower rates will permanently reduce the likelihood of bankruptcy, which means that the companies will have to cut costs (as they have lower revenues), which will include layoffs, which (according to Keynesian/monetary theory) will cause less spending and require more rate cuts.
Bankruptcy costs are certainly not linear in debt; there is a massive step function as you declare bankruptcy, as everything about the company becomes more uncertain since it is being trusted to the court system.
In the long run it is certainly true that lower revenues mean that ultimately costs need to be cut relative to what they were (if we take an expansive view of “costs” to include things like dividends) but credit allows companies to better choose the nature and timing of those cuts. Cutting costs massively and hastily this quarter is much more damaging than spreading the cuts over the next two years.
I’m quitting from this conversation now, I’m not getting anything out of it anymore and I suspect you aren’t either.
Ok, this’ll be my last reply then.
You are still only working on one side of the equation. If you need loans to prevent bankruptcy, you need higher, not lower, interest rates to get people to actually make the loans. According to standard monetary theory (which I think is wrong, but it is what CBs act under more or less), you cut rates in the face of a demand shock and the lower rates act as a stimulus for demand by reducing the opportunity cost of consumption, making it more attractive. In the face of a supply shock the rate cut doesn’t improve the actual supply conditions, which means rates should be moving higher to increase effective liquidity, not lower. Lower rates should lower liquidity, making bankruptcy likelihood higher, not lower*.
*There actually is some evidence this past week of this happening, fed repo operations were oversubscribed by the highest ratio after the 50 basis point cut this past week.
I actually am. I need to refine my thinking on this while the markets are calm… which has pretty much only been on weekends recently. This is the biggest opportunity to make outsized gains for me* in my lifetime to date, and to have a chance at nailing it I am going to have to be sharp. These types of disagreements work for me for that.
*That is considering the capital I have to use and my experience and knowledge of markets as well as the market conditions.
I don’t see this as being particularly crazy. It’s easier to explain to a child than the typical economic crisis: there’s this thing called coronavirus [assuming that it’s the cause, and it’d be quite the coincidence if it weren’t], it’s causing factories to temporarily close, and that means the companies owning these factories are not able to make things and sell them and make money, so the people who own these companies are getting less money.
Generally agree. As far as I can tell, what’s been happening the last few days is that the market has been struggling to decide whether the virus is going to result in significant and sustained economic disruptions, or not.
One day, the “yes, it will” faction wins out and prices fall 3% or so, and the next day the “no it won’t” faction regains momentum and prices rise by 3% or so.
This ignores that CBs have stepped in with aggressive measures after the down days, and also that markets basically ignored the possibility of it being serious for a couple of months first before it became ‘unsure’.
Oh, and also that bond yields have been falling since early January and there have been many large down days for yields and almost no big up days, so the bond market is not saying ‘maybe its nothing, maybe its something’ it is consistently saying ‘yep, its something, maybe its huge and maybe its just kinda big’.
@baconbits
Are you saying the issue is that stock prices should have fallen as bond yields fell? There’s no contradiction there if investors expect the fed to act to lower interest rates regardless of whether a recession will occur.
No, I am saying the interpretation of ‘stocks up and down because people are changing their minds about how serious the coronavirus is daily’ doesn’t fit with how other markets are behaving. Your statement only works if the odds of Fed rate cuts are independent of the severity of the coronavirus, which implies the Fed is reacting to something else, which in turn makes the claim that the market is only reacting to the virus false/unlikely.
It’s easy to explain the housing crisis to a child too, if you explain it incorrectly.
The questions that this answer brings up are
1. Why were the US stock markets making all time highs while the virus was spreading? There was a 2-3 month lead in with the virus spreading and the markets were going up off already all time highs.
2. Why was the Fed injecting hundreds of billions in liquidity from October on, with unemployment at extremely low levels and stocks at or near all-time highs?
3. Why did the coronavirus cause the fastest 10% decline off an all time high in history? This is not just crazy price action, its historically crazy price action.
Good old-fashioned FOMO. In retrospect, TSLA punching through $600 should’ve been an obvious sign of a blow-off top.
That is a good question. A bunch of hedge funds and mortgage REITS borrow in the repo market for some reason; the repo market started blowing out in mid-September for some reason (rates spiked to 10%) so the Fed started buying T-bills to bail them out. They stopped in early February, AFAICT. I have no idea why the repo market started blowing out, or why it stopped. It’s too early to be COVID-19 related.
Options positioning. When long positive gamma persists for a long time, it leads to artificially suppressed volatility. When an exogenous shock (like COVID-19) disrupts that artificially suppressed volatility, it leads to more extreme (positive and negative) swings. This article explains the dynamic very well. Some of what happened was related to forced deleveraging, not just virus fear.
Still digging into special relativity.
First, I don’t think the way I am approaching the math is correct. I can sum up the issue. Suppose a ship at rest accelerates, reaches a point, accelerates back, then slows down approaching its starting position, where it is back at rest. Calling the starting time T0 and the ending time Tf, my understanding of the correct approach to figure out the difference in time was to integrate lambda over the course of the path and multiply this by Tf.
I am reasonably certain this is wrong. One of the substitutions that gives rise to lambda is t=x/c, which is true if and only if t’=x/c. Substituting t=x/c for Tf results in a different result – namely, no time dilation at all. Meaning I’m pretty sure the equation for lambda is invalid for at least part of the path.
Working through this issue now, uncertain of the outcome.
The other issue, which I am less certain of, is the choice of the positive root of the square root in the equation. It isn’t clear to me that the positive root is the correct choice for the entire integral. On this point I am simply confused; generally we say sqrt(x^2) is x, which is to say, the sign is preserved. I am less certain how sqrt(f(x^2)) should be handled, but it doesn’t seem quite correct to discard the sign of x altogether in favor of one root or another.
The correct way to get the time elapsed according to some object is to integrate that object’s proper time over the interval. The infinitesimal proper time is \sqrt{dt^2 - (dx^2+dy^2+dz^2)/c^2}. So if you know the object’s velocity as a function of time between t_0 and t_f then the amount of time that object experiences during that interval is:
\Delta t_{proper} = \int_{t_0}^{t_f} \sqrt{1-v(t)^2/c^2}\, dt
In the special case that v(t) is constant you get the standard time dilation result:
\Delta t_{proper} = \Delta t/\gamma
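If it helps to sanity-check that integral, here is a small numerical sketch (units where c = 1, and an invented round-trip velocity profile); note that only v^2 enters the integrand, so the sign of the velocity never matters:

import math

# Numerically evaluate the proper time
#   delta_tau = integral of sqrt(1 - v(t)^2 / c^2) dt
# for an arbitrary velocity profile, in units where c = 1.
def proper_time(v, t0, tf, steps=100_000):
    dt = (tf - t0) / steps
    total = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt              # midpoint rule
        total += math.sqrt(1.0 - v(t) ** 2) * dt
    return total

# An invented round trip: accelerate to 0.6c, coast, turn around, coast back,
# decelerate to rest at t = 10.  v is signed, but only v^2 enters the integrand.
def v(t):
    if t < 1:  return 0.6 * t
    if t < 4:  return 0.6
    if t < 6:  return 0.6 - 0.6 * (t - 4)
    if t < 9:  return -0.6
    return -0.6 + 0.6 * (t - 9)

print(proper_time(v, 0, 10))   # about 8.5 -- less than the 10 units of coordinate time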
Thank you! It took working through why dt^2 equals one to figure out where a prior approach went wildly wrong.
(Equation edited for typo)
Working through this with a trial equation (v(t) = -t + 5) to see which intuitions I can discard and which I can keep.
And indeed the sign of velocity does end up mattering to the square root, so that intuition is being smug at me.
The sign of the velocity at any given time shouldn’t matter because it gets squared.
For linear velocities such that u=at + b, where the units of the equation are expressed in terms of proportions of c, the equation should end up being a*integral(1/2 * (arcsin(u) + u*sqrt(1-u^2))) – the unit I chose for -t + 5 was c/6, for t from 0 to 10.
If the sign didn’t matter, the square root portion would have canceled out. Instead I got 2.3712149. (edit: This value is probably wrong, I forgot my unit conversion.)
Getting a weird result when I use constant velocity there, though. The change in proper time becomes 0. Trying to figure that out.
Edit: To be clear, trying to figure out why a constant velocity doesn’t work in the linear velocity equation. I see the integral I am doing is unnecessary, since the square root is a constant. This is leading me to suspect my integral is flawed.
Here’s a general expression you can check against. For an object with a velocity that changes at a constant rate, v(t) = v_i + at, over a time period \Delta t, and using \beta to mean v/c, the proper time is
\Delta t\times \frac{1}{2}\frac{\left[\beta_f\sqrt{1-\beta_f^2}-\beta_i\sqrt{1-\beta_i^2}+\arcsin(\beta_f)-\arcsin(\beta_i)\right]}{(\beta_f-\beta_i)}
(You can see this LaTeX rendered here)
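One quick way to convince yourself that closed form is right is to compare it against brute-force numerical integration for some made-up initial and final velocities:

import math

# Closed form quoted above for v(t) = v_i + a*t over coordinate time delta_t,
# with beta = v/c and units where c = 1.
def closed_form(beta_i, beta_f, delta_t):
    bracket = (beta_f * math.sqrt(1 - beta_f**2) - beta_i * math.sqrt(1 - beta_i**2)
               + math.asin(beta_f) - math.asin(beta_i))
    return delta_t * 0.5 * bracket / (beta_f - beta_i)

# Brute-force midpoint-rule integration of sqrt(1 - beta(t)^2).
def numeric(beta_i, beta_f, delta_t, steps=200_000):
    a = (beta_f - beta_i) / delta_t
    dt = delta_t / steps
    return sum(math.sqrt(1 - (beta_i + a * (i + 0.5) * dt) ** 2) * dt
               for i in range(steps))

print(closed_form(0.1, 0.8, 10.0))   # these two should agree
print(numeric(0.1, 0.8, 10.0))       # to several decimal places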
Hrm. Where did the denominator come from? My derivation lacks that.
ETA:
Ah! It is 1/a. I should have divided by a instead of multiplying.
I’d check if you forgot a factor in doing a u-substitution somewhere. The denominator may also show up as a factor of 1/a, where a is the acceleration.
Hrm. Alright. I am satisfied with the behavior of the equation when velocity varies over time.
The only question still remaining is about the behavior of the square root when velocity is constant, namely whether it is correct to take the positive root when velocity is negative (towards, rather than away from, the origin).
I need to think about that one.
ETA:
You were correct on the u substitution! That was the issue.
Hrm. I am now uncertain if I am evaluating some math correctly.
Consider two equations:
v(t) = (t)*c/6
And
v(t) = (t + .9)*c/6
If I am doing the math correctly, evaluating the change in proper time for t from 0 to 5, the second equation produces a smaller change in proper time than the first, even though the velocity is always higher.
Nice to know I’m not a physics teacher for nothing 😉
As for taking the positive vs negative square root, consider the way you derive the integral. IMO the most basic way is to ask the question “if I give the object a clock identical to my own clock and then let it travel its path, how much more often will I see my clock tick than the object’s clock while it is travelling?”
You can use the basic principles of relativity to answer that question, and you find that the amount of time the object’s clock will tick out is given by the integral. And if you pose the question in terms of clock ticks in this way, it only really makes sense to take the positive root.
If you want to start wondering whether the clock will start ticking backwards in time you can, but it’s a separate question and not really related to the sign of the square root.
In terms of clocks, it is more about what an observer watching both clocks would see.
Or, more particularly, given an observer traveling along with a clock, exactly when the observer sees the stationary clock tick faster. (It has to at some point on a round trip, since at the end it is ahead of the observer’s own clock; more, it has to tick faster in proportion to the Lorentz factor.)
I have ruled out it happening during the acceleration at the start and end of the trip, because we can have identical accelerations for trips of different distance, and it doesn’t make sense for the clock to tick faster if a trip was longer.
The two possibilities that remain are during the turnaround acceleration, or during the return trip itself. If the negative root should be taken, it is during the return voyage. If it shouldn’t, it has to be during the turnaround acceleration.
I can’t figure out the logic for it happening during the turnaround acceleration, since light is arriving at the traveling observer at the same rate no matter the distance, but that may just be something I am missing.
Hrm. The return trip can be made at a different velocity. I think it may have to be at the turnaround point.
The answer is in the integral. The ratio by which another clock ticks slower than your own is given by \sqrt{1-v^2/c^2} where v is the speed of the other clock relative to you. IF you are in an inertial reference frame.
If you are not in an inertial reference frame you need to use a different rule for calculating the ratio. (Just like how you can’t just use Newton’s 2nd law when in a non-inertial frame)
Alright:
Why does the integral return a smaller change in proper time when there is acceleration than when there is no acceleration, given the same time frame and the same base velocity?
I’m not ruling out “I am doing this math wrong.” But is this what you are referring to?
Ah, wait, less is expected.
But now I’m back to the same problem. More, the acceleration form of the equation doesn’t include position, so I don’t see how the integral resolves this.
Taking the negative root in the integral for negative velocities does resolve the issue – thought about it more and I realized my expectation that it had to be symmetric was in error – and is mathematically valid, but I’m pretty sure that’s not what you mean.
Actually, it may be helpful for me to iterate the understanding I have of how the usual thought process involving this goes:
Suppose there is a clock beacon one light hour away from a ship at rest with respect to the beacon. They are synchronized, so the ship’s clock says 1:00, and the beacon says 12:00, but the one hour delay means the ship should interpret this as 1:00 also.
The ship accelerates to a Lorentz Factor of .5 – meaning the distance from the ship’s perspective is half a light hour, meaning it interprets the beacon as being at 12:30. So when the ship moves to the beacon and decelerates, the half time rate means when it has arrived, the beacon shows 1:00 as expected, and the ship’s clock shows 12:30.
Which is fine if that is how the universe operates, but it implies something significant – two ships can tell who is in motion by comparing the distance both measure between each other. Which, as I understand relativity, is pretty much specifically forbidden. More, if they measure the same distance, or a distance that doesn’t correlate exactly with the Lorentz factor, they can find a true inertial rest frame by both decelerating symmetrically for the case of identical distances, or decelerating in a more complicated way if the discrepancy doesn’t match the Lorentz factor.
My suspicion is that this is just another case of “Teaching something wrong to get students used to thinking in a particular way before they learn the real way it is done.”
If this is really the way it is done, it looks… well, entirely wrong.
Er, that’s off by about ten minutes, so the final times are more like 1:09 and 12:34, setting aside the time dilation owing to acceleration, but close enough.
At any rate, I begin to suspect SR is the wrong framework to ask this question anyways, and I need to get back on learning tensors.
Maybe looking at getting latex support is a serious recommendation…
The standard displacement formula is d = v0*t + at^2/2, which comes from integrating v(t) = v0 + at over dt.
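Written out explicitly (with v_0 for the initial velocity and \tau as the dummy integration variable), that derivation is just:
d = \int_0^t (v_0 + a\tau)\, d\tau = v_0 t + \frac{1}{2} a t^2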
For your other issue, in general sqrt(f(x^2)) != f(x).
E.g., if f(x) = x+1, then sqrt(f(x^2)) = sqrt(x^2+1) != x+1.
“Spacetime Physics” by Taylor and Wheeler is a great SR book. Read it and work through the problems at the end of the chapters.
Note: slightly rambling. Exams, cognitive laziness, old LW rationality (thoughts on parts of sequences), small aside on the Memory Book
I recently took an exam that I had to spend more-than-normal effort on (short version: end-of-high-school exams are disproportionately more important for getting into colleges here than the rest of your high school performance), and I realized that I wasn’t really pushing myself, cognitively, in general. For this particular exam, since it was significant, I spent more time and energy than I otherwise would have; and it feels like (modulo the difficulty of the exam) I really did do well, which was a surprise.
It’s possible this is attributable to unconscious cost-benefit analysis, e.g. school exams in general not really mattering for me, but I have to discount that because it’s a wider pattern. Typically my learning process was to either superficially learn things, or try to learn things, fail, and pretty quickly give up because I was clearly ‘not smart enough’.
On one hand, this is probably linked to laziness/lack of self-confidence/lack of perseverance. But on the other hand, I think at least some part of it comes from failing to internalize old-style LW Rationality properly. My primary exposure to rationality was the sequences when I was young (mostly because I didn’t know of anything else at the time), and their focus on ‘quantify your uncertainty’ and ‘make your map match the territory’ and so on made me internalize ‘there’s a chance everything you think is totally wrong’, but not internalize the workings of the tools that would let me quantify that wrongness. So I was constantly self-questioning ‘could these thoughts be wrong?’, and lacking a way to find a proper answer, my brain always responded with ‘yes’. (I don’t think this is a failure mode of learning rationality in general, given I was trying to figure out how to think better and so on; if I were more innately curious, or perhaps smarter in general, I would most likely have had the confidence to maintain a strong belief in myself while using doubt as a reasonable tool.)
One thing I do like from the sequences, though, is the post on judging thoughts/actions on if they make you stronger or weaker. Relating to the exams thing earlier, I had a lot of guilt about not working as hard as I should sometimes, but since a few days ago I’ve been constantly asking myself the question ‘is what I’m thinking useful?’, which is very helpful when the guilt is, essentially, your brain extracting an emotional cost from you, so you don’t actually have to take any actions to fix the situation. (Maybe it’s something else, but that’s what I think is happening; also I’m not saying this always works, just sometimes.)
P.S. I’ve been reading The Memory Book by Harry Lorayne, and it’s really very good. I thought I wasn’t good at imagination, but apparently that was false. And using imagery to memorize really helps. I haven’t even gotten to using pegs/etc. to remember stuff out of order yet. If you can get your hands on it, I really recommend you try it out, especially if you struggle with memory like I do.
A paper preprint finds that people believe women are told white lies more often, and themselves tell white lies to women more often.
Study one told subjects that a manager gave feedback to an under-performing employee and then presented the subjects with one of 6 descriptions of the feedback, ranging from honest/harsh to very kind/white lies. The subjects were asked to guess the gender of the employee, which they guessed as female far more often when the feedback was kinder.
In study two, subjects were given a shitty essay and a good essay to read, with the essays attributed to Sarah/Andrew or vice versa. The subjects were asked to grade the essays for the researcher and also to grade the essays for the writers (the non-existent Sarah & Andrew). Subjects gave a 9% higher grade to Sarah when giving direct feedback, while they gave Andrew roughly the same grades. When the subjects were asked whether they had given a different evaluation to the researcher and the writers, 65% answered yes. Interestingly, when asked how truthful they were to Sarah and Andrew, they answered that they were about equally truthful to both, despite the evidence showing that they actually only lied to Sarah.
The General Discussion section offers up various theories, only to explain that their findings didn’t support them, such as the hypotheses:
– that women were lied to because they were considered less competent, which failed because the subjects rated the genders equally competent, yet still lied only to women (btw, how does considering a group less competent make it logical to tell white lies to that group?)
– that subjects considered women ‘warmer,’ which failed as both genders were rated equally warm, yet still lied to women (at least this hypothesis makes sense, by arguing that people don’t feel bad about hurting the feelings of ‘cold’ people).
– that men lied to women out of benevolent sexism, which the authors see as untrue as women lied to women more often than men did. The authors don’t seem to consider that women may exhibit benevolent sexism (or ingroup bias, for that matter).
– that people lied to women out of chivalry, which again proved false as women lied to women more often than men did.
– that people think women are incompetent, but rate them relative to their peers. Yet this doesn’t explain why people would only increase the rating when giving direct feedback.
The authors did find that subjects rated women as having less confidence. They argue that subjects may attribute women’s failure to low confidence, but not men’s failures, causing them to try to raise women’s confidence to make them perform better.
What I found to be conspicuously absent was the hypothesis that people give kind feedback because they see women as more easily hurt and/or less capable of handling feedback due to higher anxiety and/or sensitivity to criticism. In particular because they later actually cite research that suggests that women interpret feedback more negatively. Yet they fail to consider that people may want to shield women from harm.
—
The authors also discuss the possible impact of these white lies on women, but merely whether it may be motivating or demotivating. They completely ignore that a logical consequence is that women become misinformed about their performance. Is the narrative that the gender pay gap is caused by discrimination, or that discrimination exists in hiring and promotions, so believable to many because they see women get less pay and get hired/promoted less than men who get worse feedback? If so, they would conclude that those men get paid/hired/promoted unjustly, even though those men actually perform better, but are criticized more harshly than women who perform worse.
The study, which seems written from a severely female-centric point of view, also ignores how harsh feedback for men might partially explain bifurcation among men, who more often drop out and who more often get top positions. This merely requires that some men are motivated by harsh criticism and others are demotivated by it. If true, this hypothesis would also suggest that harsher criticism of women would result in more women getting into top positions, although at the expense of having more drop out (and given the higher anxiety in women, probably more so than for men).
If people phrase feedback towards women more kindly, aren’t women correct in interpreting the same feedback more harshly?
E.g. on a 1-10 scale, a woman’s 4 would be phrased like a man’s 6, so the woman interprets this type of feedback like a 4, while a man would interpret the same words as a 6.
So there’s not really a problem to be solved here, just a gendered difference in phrasing. In my experience, women generally have kinder and gentler norms of social interaction than men, and just use different words and gestures to convey the same meaning.
Yes, which is actually good to know if you’re dealing with someone who isn’t neurotypical and doesn’t understand that, even when we’re told to treat men and women equally, women interpret the same criticism more harshly.
So if someone criticizes both a man and a woman in the exact same way, which he thinks is a 5, the woman hears a 4 and the man hears a 6.
@Kaitian
That’s possible, although the study found (narcissistic?) hypocrisy in that people interpreted criticism by someone else as having a gender bias, but not their own criticism.
If women similarly have (narcissistic?) hypocrisy where they interpret criticism aimed at themselves as being unbiased, but criticism aimed at other women as being biased, then they would still feel badly treated.
Although that would then not explain a feeling that other women are mistreated, unless they project their own feelings of being mistreated on others, which does seem plausible.
I’d add another hypothesis–related to “more easily hurt”, but not the same.
I’m a manager: giving feedback to people who work for me is part of my job. My goal when giving critical feedback is to get better performance–but what kind of feedback is helpful is not the same across people. I might tell one person “that could have gone better” and another “that was a complete shitshow,” and expect their perception of how much improvement was needed to be about the same.
Sure, but managers may also have higher expectations of some employees than of others, where they criticize some employees more for a similar level of performance.
Men’s tendency to focus more on salary can create a feedback-loop where men are more often better at getting a slightly higher salary than their previous performance justifies, which makes managers judge them more harshly (are they worth that salary?), which causes them to work harder/longer/faster.
Were they told the sex of the manager in the first study? My guess would be that a male manager would be less harsh on a female employee than a male, but women would be less harsh in general.
My theory would be rather similar to yours, though there’s a little bit of daylight between them: It’s seen as less acceptable for men to be harsh on women than it is for men to be harsh on men. If you’re an authority and you’re harsh on a man and he’s hurt by it and gets angry, that’s on him. If he cries, that’s even worse on him. If you’re harsh on a woman and she cries, “OMG YOU MADE A WOMAN CRY!”
The paper didn’t address the gender of the manager. Whether this means that the gender was left unspecified is unclear. If it was, it might have been interesting if they had asked what the subject thought the gender of the manager was, but that is not in the paper.
The female subjects in the study were also nicer to the female essay writer than the male writer, so that doesn’t fit your theory. The paper doesn’t state whether male subjects had a significantly different gap in evaluations based on gender, which could fit or not fit your theory, but we don’t know.
Suppose you could control Donald Trump for an hour. What would you order him to do?
Write me a check? An hour should be plenty of time to get it in the mail.
I suppose, if I get to pick a specific hour to control him, I could try to make him veto a law I don’t like or something, but the timing on that would have to be really tight, and I’d have to go to the trouble of figuring out his schedule.
I feel like most Presidential decisions actually take more than an hour to complete, so making him nominate me to the Supreme Court or something wouldn’t be effective. I couldn’t make the decision stick long enough to succeed.
I think “write me a check” is the second-best answer. “Send cash” is the best, because when he regains his senses in an hour he could cancel the check.
Sending large sums of cash is tricky, though.
Maybe a cashier’s check. Can’t cancel those.
The check would bounce.
Use his Twitter account to promulgate my political views.
An hour should be plenty of time for some really juicy stuff.
…thus guaranteeing half the country and most of the newsmedia discovers they are vehemently opposed to your political views?
Good point, maybe I should use the hour to promulgate the opposite of my views.
And don’t use the word “promulgate” when you do. It would be a dead giveaway that Trump is not himself.
I’d start in the confessional style, e.g. “I have to say all this fast before the deep state lizardmen find me and take control again”
Remind me why being ruled by lizardmen is bad, again? Is it just that we’re humans so that’s undemocratic?
@Leafhopper, are you planning to literally talk about lizardmen? If you want to get Trump thrown out of office, that might be the way to do it.
If I’m in a bad mood, start WWIII. If I’m in a better mood, I have him order the FAA to lay off model airplanes, then I have him tweet about how great it is that he did that (to discourage backsliding)
If the stock market is open? Have him target some company, preferably a defense contractor (for maximum vulnerability), for an “investigation”, after I’ve shorted it.
@johan_larson says:
Only an hour?
1) Announce National Dokken Day.
2) Request less Bud and Coors at dive bars and more Samuel Adams “the patriots beer”.
3) Take a nap.
I don’t know, but I’d sure be interested to see Donald Trump’s answer to your question.
Kill Pence.
(My first thought was “kill himself” but that would give us Pence as President. Whereas him killing someone else – and getting caught doing so, which would be pretty much inevitable – would be equally effective at getting rid of Trump.)
That would be effective at getting Pelosi the Presidency, yes.
However, could someone in Trump’s body do it? Presumably, due to his age, he’d need to use a gun or sword or other weapon of the sort the Secret Service would frown on anyone else carrying in Trump or Pence’s presence. What would they say if the President himself brought a gun, or asked for one of their guns? Or is there a ceremonial sword on the office wall that Possessed!Trump could snatch down?
That last sentence brought me the image of Possessed!Trump committing seppuku.
Write a note about how he has a rare moment of clarity and is incredibly sorry for all the harm he has caused to society. Then pull some kind of stunt that will make sure he won’t be re-elected (run outside naked and shout weird stuff is the first thing I can come up with).
Depending on whether I have access to his memories and what they are, maybe use the opportunity to also harm the reputation of the worst guys in US politics, whoever they are.
You win!
I’ve never read Friedrich Hayek’s The Road to Serfdom but have seen summaries of the stages he laid out for the trip from liberal democracy to authoritarian socialism. Prior to 2016 my understanding was that no country had actually traversed those stages; actual authoritarian regimes were the result of violent revolutions and coups, not the kind of frog-boiling Hayek described.
Then Venezuela happened.
My questions to those more in the know than I: how accurately did Hayek’s model predict what happened in Venezuela? Were there any other examples of Hayek’s process in action beforehand that I didn’t know about?
Czechoslovakia after WW2 might fit Hayek’s model, as I remember it from reading his book many years ago. But I am not sure what stages you have in mind.
Surely there was the minor matter of Soviet troops.
Soviet troops arrived only in 1968. Czechoslovakia was one of the few countries in the world where the Communist party won relatively democratic elections – in 1946.
Wait, I’m confused. I get that the Communist Party of Czechoslovakia won in the 1946 elections, but weren’t they still a minority that had to resort to a coup in order to maintain power?
@Faza (TCM)
Well, sort of, although the “coup” had considerable popular support from their voters, who were a large minority. I think that fits Hayek’s model, though – communists came to power via elections, and then abolished them and created a totalitarian regime.
I’m hard pressed to think of a coup that didn’t enjoy some popular support. However, I find “power via elections” and “power via coup” to be essentially irreconcilable opposites.
The fact that the Communists won the previous elections isn’t helping their case much, because if they were popular enough to handily win the upcoming ones, they wouldn’t need to stage a coup.
Presumably, you had “elections” throughout the Communist era just like we did?
Well, in Czechoslovak case it is not quite so, since communists first won an election and then used power gained thusly to entrench themselves and abolish democratic competition.
Yeah. In our case, there was only one list of candidates.
To my mind, the important question is: was the communist takeover accomplished by ostensibly-legitimate means, up to and including patently absurd reinterpretations of the nation’s constitution, or did even the communists admit that they were breaking the rules?
@Iago the Yerfdog
It was largely within the rules. There were elections in 1946 to a body which was an acting parliament and a constitutional convention at the same time, tasked with drafting a new Czechoslovak constitution. It was called the Constitutional Assembly. The Communists won the elections, but fell short of an absolute majority. They then formed a coalition government.
Now I have to consult Czech Wikipedia on what happened next: In 1948, some of the coalition ministers resigned in an attempt to stop the communist takeover of the security apparatus, which was nevertheless legal, since it was initiated within the constitutional prerogative of the communist ministers. The resigning ministers hoped that the other noncommunist ministers would join them, since under the existing constitutional arrangements that would have meant new elections. This did not happen, however, and under threats of violence from communist paramilitary brigades, who demonstrated in the streets along with various pro- and anticommunist protesters, the president of the republic accepted the resignation of the noncommunist ministers and replaced them with communist figures (I am not sure whether the replacements were all party members).
Then the Constitutional Assembly finalised a new constitution, which formed the legal basis of the new, communist regime. The constitution provided for elections, which were formally held, but they were manifestations of loyalty, not a genuine competition; only communist-approved candidates could be elected.
@AlesZiegler
I actually think I’m willing to accept that. While it involved threats of violence, it sounds like it was on a level that I wouldn’t necessarily count as rendering the whole process undemocratic; if it had had any other purpose, I’d just call it “corruption.”
This lays out the stages I was thinking of:
https://cdn.mises.org/Road%20to%20Serfdom%20in%20Cartoons.pdf
That fits Czechoslovakia pretty well.
@Iago the Yerfdog says:
I feel for the guy in step 14 who says: “But I’m not a carpenter, I’m a plumber“, as telling that to my wife doesn’t get me out of doing the work she assigns for the house either!
I don’t know how you define “authoritarian” exactly, but the UK will throw you in jail for mean tweets.
Citation needed
This was for a Facebook post, but I don’t think the law makes a distinction for Twitter.
Oh, but that goes considerably beyond “mean” into outright approving murder. This is criminalized in many European countries, including my own. I do not know about Canada.
I am personally more sympathetic to the American approach to free speech, but laws of this kind are perfectly compatible with liberal democracy.
This fact does not make it any more or less authoritarian.
It is quite possible (and I would suggest reasonably likely) that at some point, the entire world will consist of highly authoritarian states.
Let me repeat: I do not think that a law which criminalizes public approval of murder makes a country an authoritarian regime.
Yes, it’s an extreme case. But it’s a slippery slope, and the guy was obviously an immature kid venting his anger. If you take his words at face value, you can say he’s approving murder of this particular teacher. But more likely, he’s saying “I hated her, not shedding any tears over this.” It’s not like he was waging a campaign telling people to go and commit murder.
If Trump was assassinated, and a bunch of people on facebook say “I’m glad he was assassinated”, would you want these people jailed also, or would you consider that they were expressing their dislike of Trump in a very crude way?
What about if someone gets cancer and you despise that person because they are very evil and you post “I’m glad he has cancer”, is that the same thing? Are you approving of cancer? What if you post “Im glad the marines shot Osama Bin Laden”?
I agree that posting that on social media is a terrible idea and should not be done. But jail is excessive by a large margin as a remedy. People naturally express themselves on social media the way they do in conversations. And if I heard what the kid said in a conversation, I would never construe it as approving of murder, even though that’s what it is on its face. I would construe it as an extreme and tasteless expression of dislike for the victim. I think others would also.
I do not approve of the criminalization of hate speech in the broad way it is done in most of Europe. But I also do not think that the existence of such laws makes those countries authoritarian regimes, as opposed to liberal democracies. There are plenty of laws in liberal democracies with which I personally disagree.
Well, those laws are authoritarian in nature, as they claim authority over something which is usually outside the purview of government. How many authoritarian laws do you need to have an authoritarian regime? I’m not sure. These laws certainly make these governments more authoritarian than not.
Do you think that UK kid was actually approving of murder, or just expressing his dislike and anger in an extreme fashion?
@jermo sapiens
I agree. But I also think that many laws which conservatives like (and I couldn’t help but notice that you are quite conservative) make countries more authoritarian than not. Perhaps a less loaded word should be used, like “less free”, with the recognition that some restrictions on freedom are necessary, although these particular restrictions aren’t.
I think that this distinction is nonsensical. He knowingly and publicly said that he approves of murder; whether he “really meant it in his heart” is irrelevant.
Authoritarian democracy is a thing.
Correct on both counts. That said, I don’t line up with the conservative position 100% of the time, and my preference is for not legislating around things that should be private, unless the legislation is necessary and effective. De minimis non curat lex is one of my favorite Latin phrases.
I don’t mean what was in his heart, I mean what he was objectively conveying by his post. If somebody says “Trump is literally worse than Hitler”, that person is not conveying the belief that building a wall is worse than the Holocaust, even though that’s what they said.
Quibble: I don’t think the person is approving of murder. I doubt hardly anyone approves of murder, except maybe Nietzscheans? The person probably meant that the killing was justified, which is a completely ordinary view even if in this particular case that’s outlandish and offensive. We don’t after all consider killing in self defense murder, and many do not consider capital punishment murder.
@jermo sapiens
I prefer in flagrante delicto, but de gustibus non est disputandum.
If it be approved of, none dare call it “murder”.
Murder is unlawful killing. Since I’m pretty sure no government ever put people in jail for approving of that nation’s soldiers lawfully killing enemy soldiers in wartime, it’s clearly the “unlawful” part that is at issue here, not the “killing” part. Jake Newsome was jailed for approving of a particular kind of unlawfulness.
I think I’m on pretty solid ground in saying that a nation which imprisons people for expressing disapproval of that nation’s laws, is properly called “authoritarian”. And that doesn’t change if they limit it to just the really important laws – as defined by that nation’s government and its supporters.
Also, Streisand Effect, guys?
I knew these phrases in French but never heard them in latin. Interesting how some expressions we use today are very old.
@John Schilling
I basically agree with that, especially about this coming down to “punishing speech that disapproves of our laws,” which was the point I was going to make before deciding to dial it back some. My only caveat is that I think people mean by murder unjust killing, not unlawful. After all, pro-life activists who say abortion is murder know perfectly well that it’s legal, they aren’t contradicting themselves. Same with people who say capital punishment is murder, which is obviously a legal killing.
This sort of boundary around the word murder where it’s always wrong is remarkably unhelpful a lot of the time, because it creates confusions just like this, and it can seem kind of contentless (why don’t we talk about why a particular killing is unjust instead?). But it’s been used that way for a very long time.
Quidquid latine dictum sit, altum videtur
@John Schilling
The way I think about it, there are various axes of authoritarianism. European countries are more authoritarian than the US with regard to speech, but the US is more authoritarian than Europe in some other respects. For example, US criminal law has longer prison sentences, and US police behave more violently towards civilians.
The thing is, places like the UK, Germany, France, Belgium, Canada, the Netherlands, Spain, etc., don’t actually look at all like police states or oppressive regimes. That’s true, even though I think they all have more speech restrictions than they should, they all impose a lot more control on gun ownership than I’d prefer, some of them have a formally recognized state church, all of them lock people up for some things I think ought to be legal, etc. I’d say all those things are potentially steps toward an oppressive regime, but we actually know what oppressive regimes look like, and modern-day Germany, Belgium, Canada, etc., aren’t it.
Having a tough time confirming whether jail/prison is ever used as a sentence for such behavior. But arrests and fines seem to be commonplace. And I have a hard time imagining that most people who are generally okay with arrests/fines for mean tweets would draw the line at prison time (and what happens to people who can’t/don’t pay the fines?)
Look at my link above for a case where a jail sentence was awarded.
Thank you, that looks genuinely bad, especially that 19-year-old girl being convicted for posting Snoop Dogg lyrics on Instagram. The UK clearly has overly strict hate speech laws.
What I had in mind was specifically a planned economy with forced labor. Not “work somewhere or you starve” kind of forced, but “work on what we tell you to or you go to jail” kind of forced.
So, while it’s hardly comparable to Venezuela, somewhere on the road between “completely free economy” and “full communism” there are a couple pit stops.
One such pit stop is something like “if you want to engage in commerce at all, there are certain protected groups you will be forced to serve, even if you don’t want to.”
I know that the whole “bake the cake” thing isn’t what you’re describing, but it’s certainly an intermediate step. The ability of people within a particular occupation to pick and choose what type of work they will do and what type of customers they will serve is heavily regulated, and becoming increasingly more so. And everyone is generally fine with that.
Does Venezuela even have this? My googling tells me that there was a proposal floated in 2016 that amounted to something along these lines, but I can’t find any indication it was actually implemented.
Most of the references to the proposal online are from libertarian websites using it to try to make a point, which makes me suspicious that it may not have ever materialized.
I found some sources, including Vice and Amnesty International, saying that it was a decree signed by Maduro in 2016.
Whether the decree was ever actually used, and why it seems to target only those with a job, would require more digging.
The research I did at the time revealed this to be vastly overblown. What it actually did was allow the government to requisition labor from the private sector. Thus, the government could go to a business and say, “Hello, we need X workers” and the business would have to provide them. At least in their comments to the UN Venezuela claims that people have to consent to be reassigned.
So, the reason they only target people with a job is that this policy reassigned people (with their consent?) from “whatever job they were doing previously” to “working on a farm.”
Now, this clearly sucks, and in a country wracked by economic catastrophe, I’m sure it’s pretty easy to get people to “consent” to farm labor if their other option is “being fired”, but it’s not as bad as the libertarian blogs were making it sound.
That changes my opinion considerably, from “Wow, Hayek’s model was right all along,” to “Yet another tragicomically corrupt nation, albeit an extreme case.”
Thank you.
That hasn’t got anything to do with socialism, though. It’s perfectly plausible to combine a dynamic free market economy with heavy-handed restrictions on speech. That’s pretty much how America was before the obscenity laws were loosened in the ’60s.
Turkey under Erdogan and India under Modi. China is also said to have become more authoritarian under Xi, though it was never a democracy.
And as noted by Matt M, in the UK, and in fact in quite a few Western countries, e.g. Germany or Canada, you can go to jail for mean tweets.
I wonder if this is due to modern IT technology enabling higher government surveillance and making it easier for people to carelessly expose themselves (e.g. your seditious tweets stay online for everybody to read, while the seditious words you used to speak at the pub would vanish unless there was an informant keen to write them down).
I should have been more clear in my initial comment. As I said to others in this thread, by “authoritarian regime” I meant “authoritarian socialist regime,” one with a planned economy and work-where-we-tell-you-or-else kind of forced labor.
Other forms of authoritarianism, such as curtailing certain kinds of speech, go hand-in-hand with this but are common to socialist, fascist, and theocratic forms of authoritarianism alike. I’m only interested here in the first one.
Here is the sort of summary of RtS that I’ve read: https://cdn.mises.org/Road%20to%20Serfdom%20in%20Cartoons.pdf
Broadly speaking, I’ll count any republican form of government where elected representatives turn it into a forced-labor regime through at least superficially legitimate means (including absurd reinterpretations of the constitution and legal precedent) as pattern-matching at least somewhat to this model, but my question is how well Venezuela matched the model and if anywhere else did the same sort of thing.
Are you sure you are not exaggerating a little bit when referencing Venezuela? Last I checked, it was estimated that 70% of productive activity was still privately owned there, and people were not being executed on the street there. I just had a friend come back from there, and his testimony was that it seemed like a remarkably normal country…like, Miami, FL but a lot less glitzy. Supermarkets looked normal. I saw pictures too. Even the slummy parts looked hardly worse than the slummy areas I see where I live in Missouri.
Also, it strikes me that the most questionable assumption in that “Road to Serfdom” cartoon is the certainty that, without a common unifying war effort, the population will not be able to peacefully agree on a common plan. The cartoon seems to not dispute that planning obviously works during wartime (which I find to be a surprising concession from Austrian economists…albeit completely justified by the evidence). But the next assumed step, the transition to a peacetime plan inevitably degenerating into confusion, doesn’t really ring true to me. Where does this inability to compromise and agree on a common plan during peacetime come from? Isn’t Britain after WW2 (with the NHS) a counter-example against this?
I could see Leninists agreeing, in a way, with this cartoon…in that they would predict that a peacetime “war on poverty” will inevitably be obstructed by the capitalist class because, for example, building lots more housing is liable to devalue the exchange-value of existing housing stock and harm the owners of those assets…or, better, more secure living conditions for workers will inevitably increase labor militancy, lead to increased wages, lower profits, etc. (See Kalecki’s article, “The Political Aspects of Full Employment”.)
But the Leninists would not blame the resulting dysfunction on the plan itself; rather, they would blame it on the attempt to find a common plan for society when there are fundamentally incompatible economic interests still splitting their class society into two opposing camps; the Leninists would thus blame the capitalist class, and also partly the less radical, non-Leninists socialists for NOT being willing to take vigorous measures to overcome the totally predictable “capital strike” from the capitalist class…just like the patriot revolutionaries in the American Revolution had to unleash a terror against “traitorous Tory” loyalists, and just like bourgeois revolutionaries in the French Revolution had to take vigorous measures against their feudal class enemies that were rallying around all of the foreign invaders. Hence, why Leninists will often blame Salvador Allende for not “arming the people” against the completely predictable capital strike that sought to “make the economy scream” and discredit his program.
It’s certainly possible that I read anti-socialist propaganda and uncritically accepted it. That said, as I mentioned in a comment to Guy in TN above, the forced-labor law is real and was a decree signed by Maduro.
As to current conditions, in searching for confirmation of the existence of the law, I ran across this article. I only skimmed it, but it seems to paint a picture of Venezuela transitioning, China-style, toward a kind of “capitalism with Venezuelan characteristics.”
End of an era: NYC’s last remaining sidewalk pay phones will be taken down this year and replaced with WiFi kiosks.
I can’t remember the last time I used a pay phone. When I was a nerdy kid, it was a bit interesting to see the variations from place to place: well after the Bell System broke up, all the regional Bells still used the old standard Western Electric phones, but the different company names let me know where I was in the country. GTE (the biggest non-Bell) had slightly fancier phones that took credit cards, as did the Canadian provinces I visited… and of course there were radically different phones in Europe and Asia, as chronicled on the back page of 2600 magazine. The differences gave you a sense of place within an interconnected system; nowadays with deregulation and homogenization, everyone is using one of the same few brands of smartphone to connect to a mobile network run by a telco that might be over 100 years old but now has a nonsensical name made up by a marketing consultant to shed it of all local connotations. It’s more convenient now, sure, but something’s missing… I don’t know if any of this makes sense, or what…
Anyway. In 2014, the franchises giving ten companies the right to put payphones on New York streets expired, replaced by a single contract to CityBridge for the new LinkNYC internet kiosks. Even before the expiration, the payphones were rapidly falling into disrepair, and are now totally useless as phones: most handsets have a big “no dial tone” sticker, if they haven’t been broken off entirely. Their main function is as billboards, many of them having been sold by Verizon (founded 1879, formerly Bell Atlantic) to billboard companies Titan Outdoor and Van Wagner. And now CityBridge (a joint venture including Titan Outdoor as the lead partner) runs its kiosks mainly as advertising screens. They do include built-in web tablets and VoIP speakerphones with free calling, which is certainly good enough to replace whatever you’d be using a payphone for, but I don’t think I’ve seen anyone using them as such. But at least they look a little nicer than the old phones did.
Last time I used one, most payphones required you to insert 50 cents, which meant anyone with two quarters could make one (at least local) phone call.
What’s the cost of a “minimum viable phone call” a person could make today within a few city blocks of any given urban or suburban-commercial location, without having to ask anyone else for a favor?
The cheapest burner flip phones are, I think, around $14 (or were last time I was on the market for one about 5 years ago), but there are initiation fees, the minimum number of minutes you need to purchase, etc.
You can use the kiosks to make phone calls for free.
If you already have a wi-fi enabled phone, you mean?
No, a personal phone is not required. The kiosks have both a touchpad and a touchscreen, and can be used to call any number in the US for free. There is no handset, but you can plug in earphones if you have them: https://www.link.nyc/faq.html#phone-call.
If you don’t have earphones it looks like your call is played on a speakerphone for passersby to hear. But you can buy $0.99 earbuds at many convenience stores, so I guess that’s a pretty good deal!
As far as I can tell, the only downside compared to payphones is the ads.
In addition, the handful of remaining enclosed phone booths (you can count them on one hand) are being refitted with free VoIP phones. This actually seems like it’d be useful to make more widespread now that TCP/IP is too cheap to meter, but physical services in the public square are vanishing. The concept just seems antiquated now.
We got rid of public bathhouses when the housing codes were updated to require running water in every house and apartment. Cell phones replaced pay phones. Internet cafes, where you could rent a computer and an internet connection for a few hours, now are reduced to a WiFi password on your receipt, because who doesn’t have a laptop or a smartphone these days?
Now a large part of this is the potential for abuse. The Link kiosks used to have web access on their built-in screens, but this was reduced to a few restricted apps after a few incidents of homeless people using them for pornography. Since we’re unwilling or unable to police abuse of public services, instead we just withdraw the services altogether. That’s the motto of the 21st century: We Can’t Have Nice Things.
My wife and I took a train across Canada last spring, and were surprised to see banks of pay phones in various public places (as I recall train stations and airports). I haven’t seen anything like this in the US for years. Although I never saw anyone using them. Is it that they just haven’t gotten around to removing them? Do they still work?
It may be a regulatory issue. The CRTC tends to be stricter than the FCC.
Based upon a conversation in the marriage thread:
Your mission, etc. etc., is to design an appropriate coming-of-age ritual for your country in the present day. It should give young adults a feeling of being full members of society, be unambiguous, and be broadly acceptable.
My first pass: At the end of the month in which you earn your first paycheck, you take your buddies out for a night on the town (whatever that looks like to your subculture, so long as no one ends up in a cell or hospital bed).
Then, the next morning, if you have enough sobriety and cash on hand to mail your rent check before noon, the postman will certify you as a functional adult, with all attendant rights and responsibilities. You are thenceforth no longer able to publicly wonder “How do I even adult?”
My standard for successful parenting is that one ought to be comfortable handing one’s newly-minted adults the keys to a sports car, a bottle of scotch, a pack of condoms, and a loaded revolver, saying “have a good time!”, and trust that it will work out OK. Seems to me we could properly ritualize this.
But we’d need another ritual for the “we regret to inform you that your son/daughter…” cases. The military used to be pretty good for that, so I think we’ll do OK there.
Sports cars, liquor, women, and weapons, eh?
What else do you think military enlistees spend their enlistment bonuses on?
Women, men, hermaphrodites, whatever floats your boat. And for that matter, feel free to substitute weed or speed or whatever for the booze.
But, yeah, the four things every American will have to deal with in early adulthood, which can irrevocably screw up their life if they do it wrong. Unfortunately, most American parents want to actively teach their children how to handle about half of these things responsibly, and play the three-monkeys “Just Say No” game with the other half, and they can’t even agree on which goes in which category.
So, if I’m designing the ritual, I’m designing it so that parents know they have to prepare their offspring for all four and then let go.
Military enlistment as a rite of passage to adulthood would in fact work very well, except that I’m not willing to make it anything close to mandatory. And, yes, enlistment – officers should spend a year or two in the ranks before getting their commission.
Do you feel that South Korea’s men have benefited from mandatory service? Or Israeli citizens? (Ooh, I wonder if there’s a study there, to see if Israel’s gender-neutral mandatory service has benefits above Korea’s, or are there too many confounding factors?)
I have a vague “yes, probably” on that, but only to the extent that I’d be willing to consider making it a cultural default after studying it some more. If nothing else, it’s probably better than making a four-year college education the cultural default. Legally mandatory conscription, no.
I hadn’t thought about the implications of mandatory service only for men. Does this mean that women are always more advanced in their educations or careers than men of the same age? Are the military-service-aged women generally in distance relationships with conscripts, or dating older men? Does it mean there’s usually a military-service-length age gap between the man and woman in a couple?
Seems like it would have some fascinating consequences to pull one population out of society for a few years while the other keeps going.
Yeah, we probably don’t want to do that. Unless hypothetically we’re going back to the old cultural default of women as wives and mothers and only occasionally pursuing professional careers, in which case sure, the girls can get their 2- or 4-year pre-wed degrees while the guys do two years of military service and then get their professional education, and the women can “marry up” in terms of socioeconomic status and age at the same time. But, however well this might have worked in the past, we’re probably not going there again.
If men and women are going to be socioeconomic equals, and the men are going to do two years of military service or the like, then women should do the same. Or something similar, at least. Fortunately, the military isn’t even mostly infantry or other front-line combat troops any more, so we can find roles for everyone if we’re doing the Israeli or Korean thing.
And if we expand the cultural default from “2 years military service” to “2 years military or other public service”, and it turns out the guys volunteer 80/20 for the military over the Public Health Corps or whatever and the girls do the reverse, that’s also fine.
For Korea, one factor not mentioned here is that hiring practices are not very meritocratic, meaning that men are favored in the job market through the connections they make during their time in the service. (And as John Schilling points out, the cultural assumption of women as wives and mothers.)
Whoops, I forgot tattoos – that’s another favorite!
That would actually be another good part of an adulthood rite of passage, that I’m not quite willing to make even quasi-mandatory.
Oh, and give everyone their first credit card on Adulthood Day, with a limit tailored to let them have plenty of fun if they’re careful and plenty of impending grief if they’re not.
I notice you provide 3 of the dangers, then a safety measure for the fourth. Is the test seeing if they can attract a mate using the booze & car?
For consistency, a gift certificate to the local brothel would work best, but one cultural engineering miracle at a time. And I like your interpretation even better.
No using the handgun to find a mate, though, on penalty of your parents getting the “we regret to inform you…” ritual. And any prospective mate should either have a handgun of their own, or be under the protection of a shotgun-wielding parent, so we may be OK there.
Your ritual catches the “hot” failure modes, but not the “cold”. You catch the exceptionally crazy drivers, but not those afraid to drive (or who drive so timidly as to be a problem). You catch the drunks, but not those who can’t loosen up at all. I guess the condoms mean you catch those who make poor decisions about sex, but you don’t catch those who can’t get any. You catch the reckless with weapons, but not the fearful of them.
Perhaps intentionally.
Bus-taking, unarmed, incel teetotalers can be, if not the most successful, at least productive members of modern society in a way a primitive man unable to hurl a spear could not – and modern hot-rodding, gun-blasting, promiscuous drunkards cannot either.
I remember reading that in some societies such a ritual required surviving for a certain time in the woods, alone or teamed up with other adolescents. We could go a similar route.
Young people are sent alone to another city far away, one they’ve (preferably) never visited before, and are blocked from communicating with anyone they knew before, with the possible exception of other such adults-to-be. They’re given a monthly allowance of money which is just above what’s needed to survive there, and some basic housing (maybe in the form of prearranged rent, which they pay out of their allowance or get kicked out == fail; maybe it’s just provided and the allowance is accordingly smaller). They are allowed, and perhaps facilitated, to team up with others being tested, and have some task they need to accomplish to pass the test. The task and allowance are chosen in such a way that they have some slack to trade off between comfort, fun, and risk of failure, but not too much, so there’s still almost no way of passing without learning basic things like keeping a budget, cooking, doing laundry, etc. And of course they’ll have to learn to work in a self-motivated manner to complete the task. I’m not sure about the appropriate timeframe, but probably something between a month and half a year. That’s a long time for a ritual, but you can compensate for it with the task being something actually useful.
Which of course sounds like, and is modelled on, the freshman year for many people, with the main differences being the communication ban and the absence of any fixed schedule. Which – together with the rest of college – kind of sort of serves a similar purpose anyway, so why not just decouple it from education.
ETA: I actually like John’s suggestion more, but muh expenses muh mortality rates..
I’m reminded of Kiki’s Delivery Service….
Meh, I feel much cynicism over how any proposed task would get gamed into easy mode by the upper class, so things quickly become YA Dystopia-Lite, and then warped by playground politics into a frat-hazing type ritual.
Mail your rent check? Mail your rent check?
I sometimes forget the small, specialised ways in which America is bizarrely backwards. I’m pretty sure in most Western countries this would be more appropriate for a senescence ceremony than a coming of age one. I haven’t had a chequebook for well over a decade (and I barely used it when I did have one). Even my dad doesn’t use cheques any more, and he’s 63.
I pay all my other bills automatically, but my landlord requires that I pay by check and deposit it in a mailbox.
Have you talked to your bank about having them automatically write and mail the check for you each month? A nice way to automate that process for particularly particular landlords.
Is that even possible?
My usual process was to stop at the bank on my way home, get a cashier’s check* and an envelope, and walk straight there to drop it off. It was all pretty convenient seeing as the bank is 10 minutes’ walk from my house and the landlord’s drop box is right on the way. But it would be much more convenient if I could have it taken out of my account automatically, of course….
*cashier’s checks normally have a fee, but they always gave them to me for free
It’s definitely possible. “Online bill pay” as offered by most banks (not the type offered by the people you are paying) is effectively this. You input information into a form and they literally generate (and in some cases mail) a printed copy of an actual check. It’s a lot less “high tech” than people think.
@Matt M
Good to know, thanks.
You got me, although like Nick this is the single instance of check writing we do as a household. Also usually the single instance of cursive writing as well.
FWIW, the few times I use paper checks I print everything except my signature. I have only had one check rejected due to handwriting issues, and that was due to a combination of a) the amount being very large (it was for the down payment on my current home), b) my handwriting being VERY bad, and c) my lining out a mistake and initialing the line-out rather than “wasting” a check by starting again with a new one. My lawyer’s paralegal wrote out the check for attempt #2 for me.
Mailing rent checks isn’t exactly rare, but I would say it is unusual at this point. Probably less than 30% of rentals.
I write a paper check and drop it off at the office in the apartment complex every month. The landlord recommended using Venmo, but I’m old fashioned and refuse to use it.
In Canada, almost 30 years ago, it was normal (and often required) to give your landlord a stack of checks, each dated for when they’d be due. US law would allow the landlord to cash them all right away, before the date on the check, so that wasn’t done there, as I discovered when I moved.
Here in the US, I still write an average of 1 check per week, handed to individual service providers who aren’t set up for credit cards, let alone something like paypal – either of which would charge them rather more than the bank charges for cashing my checks. (The bank probably requires that they have an account with some minimum balance, so there is a cost, but OTOH they need to have some bank account somewhere. Credit cards take a %, and I’m not sure what PayPal does.)
Most of my regular monthly payments are set up to be paid electronically from my checking account; that doesn’t work with fee for service.
I also use cash a couple of times in an average month.
I’m 62, living in the US.
None of this feels backwards to me. Except perhaps the inability to pay someone with a check they can’t cash right away, for service they’ve contracted to give you in the future.
I choose to pay my credit card bill with a paper check, as a firewall between random merchants and my bank account. Since I’m going to go through the trouble of reviewing the bill every month anyway, the added “difficulty” is trivial. Other than that, I’m with DinoNerd at one check a week for specialty merchants or service providers.
A technology being new and shiny does not necessarily make it better. Paper checks are very good for certain applications, and there’s really not much room to improve on them.
Why use a paper check rather than your bank’s bill-pay service, though? As you say, either way you have to review the bill and using the service saves the (small but not zero) trouble of filling out the check and mailing it.
The “trouble” of filling out the check and mailing it is really quite close to zero. It’s not literally zero, but neither is using my bank’s bill-pay service. Or did you find a bank that employs staff psychics to divine and implement your fiscal intent without so much as a keystroke on your part? Because I’d be concerned with the security implications of that.
I don’t understand how paying via check is any more of a firewall than paying your credit card bill via ACH or whatever electronic transfer initiated by the card company? They get your routing and account numbers either way, and in neither case is that information exposed to the merchants you paid with the credit card.
Several people have brought up the bank’s bill-pay service, and I’ll be honest…I would rather just write the check myself than use the bill pay.
Of course, I would rather just pay the bill electronically myself than do either.
It sounds like you are envisioning an automatic transfer of whatever your bill is from the checking account to the CC account? Probably because you use a CC through your bank?
I have a CC through a different institution. Each month (or more often) I check the balance, skim the transaction list, then click the link to tell it to transfer the amount from checking to CC.
Do you expect at some point that the money will be transferred without your notice if they have your account info?
@John Schilling
My bank is Wells Fargo, so the only thing their psychics can do is divine my intent to open an account that provides employees with the biggest bonuses. (Though they didn’t even do that with me, for some reason)
But I find it significantly easier to fill out a few boxes on a web page and hit “submit” rather than fill out a bunch of checks and put them in a bunch of envelopes and mail them. (It’s not “one envelope” and wouldn’t be even if I went down to one credit card, because there’s also power, water, and internet bills.)
@acymetric
Why is it better to pay the bill yourself (I assume you mean using the biller’s web site?) than using the bank’s bill pay? It’s an ACH either way. I use the bank because it collects them all in one spot.
I do the same as John Schilling, I think. Everyone is paid as anonymously as possible. Only the deepest intermediaries get a credit card, and only the credit card gets the check from the bank account.
@Tarpitz says:
The last time I paid rent was in 2012, and it was by check…
…but since my single biggest expense is California property tax, I suppose you could say I’m still paying rent, just to the State of California. Other than that, I use checks to pay other State and Federal taxes, the monthly bill to the gas and electric company, and the quarterly bill to the water and sewer agency (note: as a citizen/customer I notice almost no difference in service between the regulated “shareholder owned” power company and the municipalized water agency that was a private company decades ago; both seem fine to me, and I feel no urgency for the State to take over one or privatize the other).
A few other checks in a decade for classes and recertification tests, and sometimes my wife asks for one to supplement the cash I give her, but that’s about it. Mostly I pay cash, though my wife uses cards more than cash, and she buys stuff on-line, which I only do about twice a decade. The last time was for brake pads for a 1979 Raleigh Tourist I gave to my son, which he uses to get to his computer classes; I had to order them from a shop in Boston, Massachusetts, as none of the local shops had them or would order them. I have mixed feelings about this: on the one hand, my still being able to get the parts is great; on the other hand, in a more just, true, right, good, and beautiful world I should be able to walk into one of the two bicycle shops that are within a mile of my house and pay cash for the parts, and for that matter pay for a newly made full bicycle that is in every way identical to the then-new made-in-England Raleighs, the made-in-the-U.S.A. Schwinns of the ’70s, or even better a made-in-France René Herse of the 1940s. Also, more Chuck Berry and Creedence Clearwater Revival should be on the radio, and Doctor Who and Star Trek should be on broadcast television!
Something that’s struck me this week with the Democratic primaries is how uninfluential the “Blue Tribe” has been on the “Blue” Party in 2020.
In 2016, non-college graduates favored Sanders more than they did Hillary, and (just like then) he’s second in delegates. Maybe he’s favored more by grads now, but all I’ve seen is stuff about how he’s favored by younger, poorer Democrats, especially Latinos.
Biden has been favored by older Democrats, especially blacks, but also more by non-college graduates than by graduates.
A far distant third in delegates is Warren, who has been favored more by college graduates, as have Bloomberg, Buttigieg, and Klobuchar.
On Warren’s failure to win more delegates Matthew Yglesias of vox.com wrote:
I’ve banged on this drum before: the ‘Blue-Tribe’ is small, and the leading candidate for the Democratic nomination is ahead because of ‘Red-Tribe’ (and Red State) support within the ‘Blue’ Party.
About 45% of eligible American voters just don’t vote. Among voters, about 40% are Republicans or strongly Republican-leaning “independents”, slightly less than half lean or are Democrats, and slightly over 10% are ‘swing’ and third-party (the last time I checked). Only about one-third of 2016 voters had college degrees; the share of graduates among Hillary Clinton voters was higher, at 43 percent. So 55.7 (percentage 2016 turnout) × 48.2 (percentage who voted for Hillary) × 43 (percentage of Hillary voters with college diplomas) gives us just under 12% of eligible voters who are both “Blue-Tribe” and ‘blue voters’, which is just under 21% of actually-bother-to-vote voters (not counting college students, but many who attempt college don’t graduate, and the young vote far less).
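For anyone who wants to check that arithmetic, here’s a quick back-of-the-envelope sketch using just the percentages quoted above (the little Python script and its variable names are mine, not from any source, and the inputs are only as good as my memory of the turnout and exit-poll numbers):

```python
# Back-of-the-envelope check of the percentages quoted above.
turnout = 0.557        # share of eligible voters who turned out in 2016
clinton_share = 0.482  # share of the 2016 vote that went to Hillary Clinton
grad_share = 0.43      # share of Clinton voters with college degrees

eligible_share = turnout * clinton_share * grad_share  # ~0.115, "just under 12%"
voter_share = clinton_share * grad_share               # ~0.207, "just under 21%"

print(f"Blue-Tribe blue voters, as share of eligible voters: {eligible_share:.1%}")
print(f"Blue-Tribe blue voters, as share of actual voters:   {voter_share:.1%}")
```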
I don’t want to give the impression that “the blue tribe” isn’t influential; Hillary was almost our President and she’s who the Blue-Tribe supported. I just want to remind y’all how small the Blue Tribe is, and that they can’t win elections by themselves (outside of some cities, and maybe Massachusetts).
For “Red-Tribe” Democrats, I’d define them as “Democrats without college diplomas, especially rural ones”, i.e. a lot of the South Carolina primary electorate.
I’d define “Blue-Tribe” Republicans as “Republicans with college diplomas who live in cities” (a small percentage of the electorate).
“Red-Tribe” Republicans are, well, most Republicans.
“Blue-Tribe” Democrats are (see above) about 43% (and growing) of Democrats.
My line of thought on this is kinda spent now, but I invite others’ conclusions and comments on these matters.
The problem with being a core constituency for a party is the party tends to take you for granted. If the “Blue Tribe Democrats” aren’t in play, there’s no reason for the Democrats to pander to them and every reason for them to attempt to peel off possible Trump voters (such as the Red Tribe Trump voters who usually voted Democratic).
@The Nybbler says:
Yes, there’s been an argument among Democrats over whether to pursue more “suburban women” or win back “rust belt voters”. If “pursue suburban women” were the winning strategy, then Buttigieg, Klobuchar, or Warren would still be in the running. Sanders is running a hard “win the working class” campaign, but what he’s won is youngsters instead. Biden is a compromise candidate: he doesn’t scare older suburbanites as much as Sanders, but he doesn’t have as much anti-magnetism to working-class white voters as Hillary did (black women still turned out for her, black men not so much, and white men, especially non-collegiate white men, just said “Nope!” to Hillary and voted for Trump).
I think it’s not so much about raw numbers as about the identity of battleground voters. If the election is going to be decided by working class voters in the Rust Belt, you’d better appeal to them. Perhaps in the past, those voters were more reliably Democrat and a higher proportion of highly educated voters were moderates who could be swayed to either party, which incentivised a different kind of pitch.
@Tarpitz,
Yes, see my response to @The Nybbler. I’m trying to bite my tongue and not do any gloating in front of the lovely ladies of the Blue-Tribe (I just don’t care as much about the gents’ feelings), but it’s hard, as Biden #1, Sanders #2, the rest (especially Bloomberg) out, is almost the best that I could hope for (a Jim Webb/Sherrod Brown ticket in 2016 would’ve been even better though!).
Blue Tribe definitely can’t win elections by themselves, but I think you’re underestimating their strength somewhat. Pete and Klobuchar had some decent performances in NH and Iowa, and Warren still earned a hell of a lot of votes. There’s also Blue Tribe support lining up behind Biden, Bloomberg, and especially Bernie, particularly among younger generations. Where Bernie got clobbered in 2016 was with black voters older than Millennials, who are decidedly NOT Blue Tribe.
Blue Tribe is influential enough that they are pulling the entire party leftward, and this has been extremely obvious for the last 2 cycles, and visible for the last decade. 2008 Obama wouldn’t get the time of day in this primary, let alone 2004 and 2000 Kerry and Gore. Even 2004 Dean would be a moderate! And, yes, Biden will be the “consensus” and “moderate” candidate, but the platform is being pulled left.
@A Definite Beta Guy,
A quote from part of: Biden’s Rise Gives the Establishment One Last Chance
If he fouls this up, we’re doomed.
By David Brooks
March 5, 2020
Okay, if Brooks’ guess is correct, anti-Trump former Republicans are actually a thing instead of a rare few (my mental model had Trump being the candidate their base had been waiting for). Or, since Texas and Virginia hold “open primaries”, it could be that Republicans crossed over to vote for the less socialist Democratic candidate, but will still vote for a Republican in November.
The polling on black Democrats is clearer, though: they mostly just want the candidate with the best chance to win in the general election.
There are some echoes of 2008 and 2016, but this election cycle seems weird to me: first, the strongest generational divide in candidate preference that I’ve seen in decades (maybe since ’76? Hard to tell, I was eight years old then!); next, black Democrats trying to guess how white Americans will vote; and then older white Democrats taking their cue from black voters.
I was struck by the reports of the campaign in South Carolina, by how much the Democrats there seemed to be swayed the way old-fashioned Republicans were: endorsements from elder statesmen, church meetings instead of campus rallies. I’m now more convinced than ever of the numbers and importance of “Red Tribe Democrats”, but yeah, the Blue-Tribe is the tail that wags the dog; every Democratic candidate, including Biden, is now running to the Left of Obama and Bill Clinton.
Donald Trump is to the left of Bill Clinton, so that isn’t surprising.
@EchoChaos,
And Clinton was to the Right of Nixon, but that’s a fair point!
Minor point, but I wouldn’t actually expect “Blue Tribe Democrats” to be a group that’s growing all that much.
Perhaps I’m defining “Blue Tribe” too narrowly, but I always thought there was more to it than just “urban/college-educated”; especially going on Scott’s original definition, the more people (and the more diverse the people) that have college degrees, the smaller the proportion that hit his other characteristics (which were largely Stuff White People Like).
Plus that’s probably the least-likely to reproduce of the four categories you bring up, so they’re not going to grow the “old-fashioned way” either.
Or perhaps I misread, and you mean specifically that they’re growing as a proportion of Democrats, not simply growing. That makes more sense.
This is why I wish people would stop saying “blue tribe” and “red tribe” and say what they actually mean. The concepts seem to be uselessly vague.
The concepts are significantly worse than uselessly vague. Red tribe, blue tribe, and gray/grey tribe (maybe just the word tribe altogether) should be added to the banned words list for comments.
I’ll give the benefit of the doubt that the individuals that use the term a lot are using the term with a consistent definition rather than using it to mean different things when it serves whatever argument they are presenting, but collectively everyone means something different when they use those terms.
I don’t even think Scott did a terribly good job of articulating what he meant by each tribe in his original post (it was a good post in terms of promoting some thinking about how people group together and along what axes, but it certainly didn’t provide some clear framework for how to actually assign people to a tribe in discussion, even though that is how it is used in the comments here).
@acymetric says: “The concepts are significantly worse than uselessly vague. Red tribe, blue tribe, and gray/grey tribe (maybe just the word tribe altogether) should be added to the banned words list for comments.
I’ll give the benefit of the doubt that the individuals that use the term a lot are using the term with a consistent definition rather than using it to mean different things when it serves whatever argument they are presenting,”
Thanks
“but collectively everyone means something different when they use those terms.
I don’t even think Scott did a terribly good job of articulating what he meant by each tribe in his original post (it was a good post in terms of promoting some thinking about how people group together and along what axes, but it certainly didn’t provide some clear framework for how to actually assign people to a tribe in discussion, even though that is how it is used in the comments here).”
I think our host was clear in his initial post about what he meant; I also think he was wrong in a lot of assumptions. Some of his subsequent posts, especially his New Atheism: The Godlessness That Failed post, really confirmed for me that his model for what most Democrats and Republicans are like (if you include the bulk of his listed characteristics in his I Can Tolerate Anything Except The Outgroup post) is just wrong. He’s brilliant, and we all have blind spots (Lord knows I can’t follow a lot of the millennial/internet/pop-culture references discussed here), but he really seems to me to make the perception mistake described in The New York Times piece The Democratic Electorate on Twitter Is Not the Actual Democratic Electorate. He’s very internet/Millennial focused, and his model of “meat space” isn’t mine. He admits that he (like many, many people) has a unique “bubble”, and I give him credit for that, but damn, when basic assumptions about the world are so different, communication is difficult!
I agree the way the tribes are deployed is not great, but part of what they get at is class, and we need to talk more about class in the US, not less.
I’m all for talking about “working class whites” or “educated suburbanites” or whatever, but when people say red/blue tribe, it’s often unclear what they are actually referring to. Everyone seems to have a different idea of what it means.
@Loriot says:
That’s fair, my use of the terms is an artifact of our host’s use (but he seems inconsistent with them, so I’m not sure what he means either).
See my response to @profgerm for a longer expansion of my guess of what “Blue-Tribe” and “Red-Tribe” mean, but for a short hand:
“Blue-Tribe” = most of the urban, and much of the suburban professional class.
“Red-Tribe” = damn near everyone else.
@profgerm says:
My first response to our host’s list of Blue-Tribe characteristics was: “All of this list looks like some ladies I know (and a couple of gents in the public defenders office), most of the list looks like many ladies I know”, and my first response to @Scott Alexander’s Red-Tribe characteristics list was: “All of this list looks like some men I know (and one lady deputy, and one lady plumber), most of the list looks like most guys I know”.
Really, the “Blue-Tribe” is “Most of Scott’s family, former classmates, and co-workers”, and the “Red-Tribe” is “What @Scott Alexander imagines most Republicans are like”, or the folks that “Stuff White People Like” would call “the wrong sort of white people”.
For my use I have Blue-Tribe = urban professionals (academics, lawyers, physicians, public-school teachers in large cities), and Red-Tribe = the non-college graduate majority, especially those who don’t live in large cities.
There’s some wiggle room here. More college graduates are atheists than non-college graduates are, on average, but frequent church-goers are also more likely to be college grads (“the working class” in the sense the NYTimes uses the term tends more to be believers, but only intermittently goes to church). In our host’s “the tribes as cultures” sense, without knowing anything else about them, I’d guess a college graduate atheist, Methodist (probably), or Unitarian (very likely) who lives in a city is “Blue-Tribe”, but a college graduate suburban Baptist is “Red-Tribe”, as probably would be a non-college graduate Baptist, whether city, rural, or suburban. Catholics are a little harder, they’re so diverse in many ways; I’d still go city and collegiate = Blue-Tribe, non-collegiate and rural = Red-Tribe. But I’m sure there’s someone (likely a woman) who didn’t graduate from college, lives in the country, but buys “organic food”, drives a Subaru, and listens to Joan Baez, and I’m sure there’s a college graduate guy who lives in the city but drives an F-150, collects guns, and listens to Hank Williams Jr. (actually I know there is, his name is Gary).
Heh.
I’ll link (again) to The New York Times piece “The Age That Women Have Babies: How a Gap Divides America” (much of it repeated here).
That’s what I meant by “43% of Democrats and growing”, though I suppose with enough second-generation immigrants that may not be true anymore.
Here are two groups of people. Which one does Joe Biden instinctively fit into:
Group 1: Jimmy Carter, Ronald Reagan, Bill Clinton
Group 2: Mitt Romney, John Kerry, Bob Dole, Walter Mondale, George McGovern
Obviously these are all the candidates who ran against an incumbent President, and one thing that sticks out to me is the charisma deficit the second group has against the first.
Joe Biden seems more like the next member of group 2 than the next member of group 1, entirely agnostic of the actual underlying election mechanics.
This is probably based on my personal bias, of course, but I’d love feedback on this.
Was Jimmy Carter all that charismatic? More charismatic than Gerald Ford, certainly, but nothing like Reagan or Clinton.
Gotta say, my first thought was Group 1.
Group 1 is full of people who successfully presented (deserved or not), some sort of camaraderie with and/or affinity for the blue-collar working man.
Group 2 is full of people who may have tried to do that, but mostly failed at it, and came across as technocratic elitists (at least compared to their direct competition).
Biden is winning the primaries specifically because he’s closer to Group 1 than Group 2, at least as compared to his current competition. As compared to Trump, he’ll probably lose that particular battle, but still…
I too initially thought the lists were of “blue collar” vs “elitist” candidates before I got to the end and saw it was about perceived charisma.
Note that Biden is a long-time Washington insider and “elite” to me, since he’s been a Senator forever.
All the people on list 1 are governors, to use another point.
I am not saying there is a “right” or “wrong” choice for them. Charisma jumped out to me, so seeing “blue collar” as something else that jumped out is interesting.
I also think it’s possible that we’re all, collectively, as a society, post-fitting our models here.
Like, the fact that Person X wins an election over Person Y means they must have been more charismatic, right? Or that they must have done a better job appealing to blue-collar voters, right?
Because we all assume that blue-collar voters are the swing voters who decide elections, and that they always prefer the more charismatic candidate.
In a world where a thousand people in Florida vote differently in 2000, I definitely think popular history would record that Al Gore was considered a popular man of the people as opposed to George W Bush who was considered an entitled and spoiled child of an elitist political legacy family.
The most obvious distinction between Group 1 and Group 2 is that Group 1 consists of famous winners, Group 2 consists of famous losers. That’s probably the causative factor here…
@Matt M
I did exclude no-incumbent elections for that reason.
But let’s look at objective differences. Group 2 has no governors (except kind of Romney, though he wasn’t serving at the time), while Group 1 is only governors, for example.
I suspect that you can find pre-2004 election sources confirming that Kerry was perceived as more elite than Bush.
Bush had an elite background, but he did a good job of pretending to be “the guy you want to have a beer with”. And of course, the media was happy to play up his supposed stupidity, which made him seem less elite.
This is just hindsight talking. If John Kerry had won, we’d remember him as charismatic.
A fair argument. Not sure I’m 100% disagreeing.
Jimmy Carter strikes me as an outlier in group 1, mainly because he won not because of his own merit, but because of Watergate.
I don’t remember McGovern in ’72 (I was four years old and we didn’t have a television!), but I remember when he ran again in 1984. From that list, Biden most reminds me of Bob Dole, and he least reminds me of Kerry, with Romney a close second for “least seems like”. Of recent candidates for President, the ones Biden reminds me most of are Howard Dean and George W. Bush.
From list one, when Biden is being empathetic there are some Carter-like touches, but I absolutely can’t imagine Carter ever saying something like he’d like to “take Trump to the back of the gym”, though I can imagine both Reagan and Clinton saying something like that. Clinton was a chameleon with charisma to spare, and Reagan had literally been an actor; Biden just isn’t as persuasive as Clinton and Reagan were, so on balance I’d place Biden more with group two.
What is it about sitting on a toilet that makes your legs fall asleep vs. sitting on a chair?
No support from your sit-bones. You’re sitting on the soft tissue of your leg and thigh.
> sit-bones
Medically known as the ischium.
Force distribution on your buttocks.
On a chair, especially one that’s upholstered, you’re supported by a large area, centred on each buttcheek.
On a toilet seat, you only have support from a ring, and much of the force ends up on the upper thighs.
So if I were to lose weight, this would buy more time before they snooze?
I don’t understand how this is such a problem for people.
You sit down, lean forward, and shove it out. Takes 2-3 minutes tops. Why would someone sit on a toilet for 20-30 minutes doing nothing?
+1 I too have heard people complain about this and never had any idea what they are talking about.
Some people’s bowels sometimes don’t let it come out so smoothly, so there’s more pushing involved, waiting for it to settle, and then pushing more.
Congratulations on your body working perfectly, I guess?
Alternatively, you’ve never had stomach flu or anything else that temporarily replaces your rectum with a cloaca?
That is some haunting imagery…
Carl Sagan’s Pale Blue Dot speech and Charlie Chaplin’s speech from The Great Dictator are both well-known as inspirational speeches with a humanist bent. What are some examples of similar speeches?
Kaiki Deshuu’s big speech at the end of Koimonogatari
The I Have a Dream speech by MLK is probably the most well-known example among Americans.
The Oration on the Dignity of Man is a speech that was drafted (but not immediately given, because he was prevented by the Pope) by Pico della Mirandola in 1486. It is a bold-text item in what remains one of the standard American textbooks on the Renaissance, so while it’s not exactly well-known, it’s about as well-known as these things get. It’s not quite humanist, exactly, but it’s really quite something when you consider the time period.
Al Gore’s… I mean William Jennings Bryan’s Cross of Gold speech, 1896.
Possibly a bad example, but John Galt’s speech in Atlas Shrugged (a bad example because it’s a hundred pages long, so over an hour long, and I’ve never managed to make it through the whole thing).
I don’t think you can call it “humanist” if it is impossible for a neurotypical human to read it.
Go with the classic: Oration on the Dignity of Man.
Paternity Leave is ending soon, so I’ll chuck up a list of pros and cons.
Pros:
-Blood pressure down 15 points
-No heart palpitations!
-Weight down 6 pounds
-Sleep 6+ hours every day (you can always nap while baby naps)
-Dishes done every day
-Laundry done every day
-Nice home cooked meals every day (we made Sauerbraten yesterday: it was pretty sweet! Not the thing I would normally cook if I have to work constantly)
-Lots and lots of time to D&D Prep
-Take nice long walks during the day, when it’s warm and sunny
-More time to read books during the day
-More time to yell, er, comment on the internet
-Time to binge entire seasons of shows on Netflix or Amazon Prime while feeding baby
-Play with baby whenever I want
-Do not have to leave baby with strangers I barely know
Cons:
-Not currently sustainable due to $$$
-Simmering feeling of “what the fuck am I doing with my life” constantly is there, because you know you have to go back to work and are picking money over baby time. And you will continue doing this, until you are 65, when there is a solid chance you will be dead 5 years after that, and a large chunk of that time may be spent at an especially poor quality of life. 5 years is nothing. It is the time I’ve been married to my wife.
Things that were not cons:
-Not being able to talk to adults: I get to talk to adults a lot. I still have lots of friends that I text through the day. We hosted a dinner party and several D&D sessions over paternity leave, and I got to spend more time with both my parents and my in-laws. If anything, I get to spend more time dealing with adults that I actually want to see. The local library also has events 3-4 times a week for babies/parents where you can interact with other adults. Granted, it’s almost exclusively stay-at-home Moms, but there are adults.
-Not working on interesting problems: Guys, I don’t know what to tell you. I’m working a job that College ADBG would have dreamed about. It’s not fun. It sucks. It sucks less than most other jobs, but there’s no way on Earth I’d be doing it if I were doing it for free.
Ending conclusion:
-Keep working, make sure my kids do not graduate with student loans, so they have an easier time being stay-at-home parents, if they so choose.
-Revealed Preference: Not Having to Worry About Money and Having Nice Things is worth an awfffffffullllll lot, or else I just wouldn’t show up on Monday.
Congrats! Sounds like it was a great time.
I feel like I know you well enough that anything I could say on the policy or revealed preferences would be obvious and we’d probably agree. I guess one possibility is that you could start adjusting your remaining career towards working fewer hours or retiring sooner, but at a lower consumption level.
Already funding 529s given the conclusion then?
This reinforces a point I made in a recent discussion, that in general, stay at home moms are not especially burdened by the experience, compared to the likely alternative. Exceptions abound, but there is more variety, stimulation, and socializing available in that “career” than is often portrayed, and less in the average career than is often assumed.
Which doesn’t say it is for everybody, or even of course every woman.
But the dissatisfaction that it is assumed to bring is probably related to either burnout in the initial, extremely busy period, uncertainty, not taking advantage of options like meeting with others or reading, etc., or making comparisons to particularly interesting or high-status careers that will never be options for everyone.
Anyways, I’m glad you took advantage of the time to get acquainted with a new person.
Epistemic status: thinking out loud
We currently do a lot in our society in order to persuade, cajole, and push women into careers. I wonder how different the numbers would look if we weren’t doing that nearly so much. Sort of the way that in more egalitarian nations fewer women choose to be engineers not because of sexism, but because they just don’t want to be engineers.
we also do a lot to persuade, cajole, and push them to have children
The right way is for the woman to do the pushing, of course.
As a datapoint the other way, my wife stayed home with our three kids, and has found it quite hard to go back to work in her field–her experience and knowledge were too far out of date, and the “start from the bottom” kinds of jobs she could reasonably get weren’t too appealing for a mom in her 50s. She’s working again, and enjoying it, but in a completely different field.
That’s… not really at all a data point the other way. (Or rather, it is, but only because of the overly broad phrasing of my thesis, I suppose).
It’s true that homemaker skills might not transfer or that a particular career might not be able to be learned/advanced from the sidelines–that’s a very reasonable concern for a woman with long-term career goals, and an example of a possibly overlooked sacrifice parents might make for children.
The question is, while devoting herself to childcare, did she find it particularly miserable? (I’m open to the idea that I’m generalizing from unrepresentative samples)
IIRC, this explains a big chunk of the gender pay gap.
Childless women don’t earn much less than childless men.
@Randy M says:
FWIW, my wife has been stay at home since ’93, but after our youngest son was born in ’16 she’s expressed a lot more interest in working or being a student again.
How many kids do you have? Is there a 23-year age gap between youngest and oldest?
@DragonMilk says:
Our first son didn’t live long, and we were childless together, with my wife being stay at home, for over ten years; our second son will be sixteen years old in January, and our third son will be four in June, so about a 12-year gap between them.
Not to go off on a tangent (by which I mean I don’t think the following directly addresses your point) but I’d love to be a student again, too, assuming I didn’t have to also work. College was the most carefree, fascinating, socially engaged time of my life. My mom practically had to drag my dad away when they dropped me off for orientation.
People also often want a change in their life at some point; see the whole mid-life crisis cliche among career men in their, what, 40’s?
But I do take your point, I’m probably over-looking plenty of difficulties. Especially when you are nearing fifty with a toddler. (?)
My college days were not carefree, but I definitely miss the easy socializing.
Oh, there’s the difficult weekend writing a paper or cramming for finals here and there, and there was the time I blew up a beaker while dissolving a cat’s leg in Anatomy…lol… but it wasn’t like anyone’s life depended on anything I did.
A lot of that carefree feeling was deferring expenses via subsidized loans that later made the adult years more stressful… but now I am on a tangent.
@Randy M says: “…Especially when you are nearing fifty with a toddler.(?).”
Both my wife and I are now over 50, and yes, having our three-year-old required medical intervention.
@Plumber
Didn’t mean to pry, I was just thinking stamina, really. Even at forty there’s a difference between how we feel having been kept up all night now versus ten years ago.
Mid-20s for me, who is going back to school. Though to be fair this was pretty much always the plan.
Nor stay-at-home dads; my brother has been enjoying that experience more than he did his abbreviated professional career. Not suffering any lack of adult social contact, largely free to pursue his interests/hobbies, on top of the bit where he has two kids to play with.
There’s definitely a “grass is always greener” mentality, on both sides. For one, I am only taking care of a single infant. That’s a much different story than taking care of multiple toddlers, because toddlers tend to require a good deal more attention than infants.
However, most jobs involve a great deal of stress and responsibility, at least the ones that pay good money. I still have to do a bit of work pretty much every single day on my paternity leave, but it’s like 20-30 mins a day rather than my whole freakin’ day. Plus, a lot of the stuff you deal with is insanely boring or political.
To be fair, the politics will crop up once you have multiple kids.
The benefit is that in that polity you are the monarch.
Have the kids ever pulled a Magna Carta?
No, we’re pretty good at keeping them too disunited to effectively rebel.
Seriously I’m trying to think of a historical situation where the sub-states quarrelled more with each other than the emperor. I think the problem is with framing them as the aristocracy, concerned with their own political power, when they’re more like squabbling peasants, concerned over property rights or who farted in whose direction.
I thought you might say that.
I’m thinking now that teens may be more like the barons—the monarch does have to negotiate boundaries and the like at that age.
Good point.
How long are we talking here?
I was only able to take a couple weeks with each of my kids. For financial reasons.
I took three weeks of paid paternity leave for the birth of my most recent daughter about a year and a half ago, and I found that I did miss work: it gives me manufacturing challenges more severe than I would set for myself, automatically, and forces me to solve them.
I really like my job and don’t feel that it sucks.
I think there’s a really underestimated feature of work there. I have a life that’s a fair amount too easy – I have to goad myself to take on challenges that most jobs provide you with for free. And then people pay you to have the satisfaction of undertaking them!
Exactly.
And give you feedback on how well you did with a measurable score of how much they pay you as well!
I don’t know what your job is, but is there any possibility that you could get your boss to let you work from home one day per week?
I work in a manufacturing facility, so that’s a no-go. Technically, yes, I am allowed to do so per corporate policy. In reality, no, not allowed, because the factory feels it would impact performance (and would definitely impact morale if people see the factory controller getting to work from home while they need to come to work).
I’m not saying I’m imagining a metalworking lathe with each wheel attached to many miles of pulley, reaching all the way back to ADBG’s house, but…
No, no, follow Mao’s teachings and establish a tiny factory at ADBG’s house!
No, I’m busy establishing a tiny crappy factory at my own house.
Fun fact: you can use an arc-welder and a pair of carbon electrodes to melt down iron tools into useless slag far faster than they could in the 60s.
@A Definite Beta Guy says:
Happy for you that you had it!
Don’t do some Cat’s in the Cradle shit, man. Spend time with your kiddo, as much as you can. My parents always intended to pay for my education, but my mother dying of cancer put a bit of a damper on that. If my dad had thrown himself into work hard enough to pay my way, I probably wouldn’t have ended up nearly as happy or well-adjusted, or have nearly as good a relationship with him. It cost me a couple years of my life that I dedicated almost entirely to paying off loans, but I gained the kind of childhood my friends are jealous of. I wouldn’t trade that away for those years back.
What kind of childhood are your friends jealous of? I’m asking as a father of an infant who wants to make people jealous.
Congratulations on having an experience most men will never have!
Yes, definitely do this. It is tempting to use the “free time” when your baby naps, but it is better to get enough sleep whenever you can.
This week I’m working remotely from home. When I need to take a short break from work, instead of going to the company kitchen for some free coffee, I can do some dishes or exercise briefly instead. It frees my mind the same way, and at the end of my working time the dishes are done; it’s not something I need to do in the evening when I am tired. This small difference already improves my mood.
Cooking at home is great for health, and it saves money. When I am at work, I have lunch in the center of the city for 7 €. When I am at home, I make lunch for the entire family for 3 €, and I have control over how much salt and sugar goes in.
Yeah, if you can multitask something with child care, you get virtually unlimited amounts of time to do that. With small kids — learn to do things one-handed. With bigger kids — keep talking to them, or give them something interesting to do.
I have this feeling for the last 15 years. It’s like being at a playground, but not allowed to play. There are so many awesome things I would like to do. But instead I spend most of my day at job, and then I am too tired and frustrated to do anything meaningful; and there is a good chance it will go on exactly like this until I die. All the things that I dream about… will remain a dream.
My salary is big enough that I can support my wife staying at home. Unfortunately, her salary is not big enough that she could do the same for me. Not everyone is a software developer. (Perhaps me working half-time and her working full-time could pay our bills together. I mean later, when the kids are of school age.)
In but two hours NASA will officially reveal the name of the new Mars Rover. Previous names are Sojourner, Spirit, Opportunity, and everyone’s favorite little robot that could, Curiosity.
The rover names aren’t very… inventive. Or exciting. But they’re simple and cute and any name is better than “Mars 2020”
I nominate, the Corona
It will be the crowning achievement of NASA’s Mars missions.
Update: It’s called Perseverance.
I was expecting Marsy McMarsyface.
That name would have suited Spirit in hindsight.
Curiosity, Opportunity, and Spirit very much fit in with one of the Puritan trends of naming children (especially women) after virtues. It is not a big step to branch out to other Puritan naming conventions. Therefore I propose we call the next rover “If-Christ-had-not-died-for-thee-thou-hadst-been-damned.”
I’m totally up for a rover named “More Rocks Please”…
I’m down for it, or at least stealing that for my sci-fi game.
I’m not sure if Puritans would dislike Mars for being so brightly colored, or like it for being dour and monochrome.
I suppose that’s better than the Victorian habit of naming their daughters after ornamental vegetation (Rose, Violet, Daisy, Lily, Briar, Heather, etc).
And then there’s the Roman habit of just calling their daughters by the feminine form of their father’s family name, and numbering them if there’s more than one who needs to be distinguished. When I first started reading up on early-Empire Roman history, it struck me as odd that half the women seemed to be named “Julia”, and that turned out to be the reason for it (“Julia” = feminine form of “Julius”).
In fairness, Romans used fewer personal names in general, and the number seemed to decline over time.
Hey, I resemble that remark.
My impression based on looking at 19th Century baby name tables in the interests of writing research was that the Victorians mostly named their female children Mary.
Moscow?
Constable Visit the Infidel Planet With Explanatory Pamphlets?
They should have asked Randall Monroe to think of a name.
On the spot. At 3 AM.
We now present the new Mars Rover: Who Are You And What Are You Doing In My House?
Made me chuckle – thanks 🙏
How does parasitic load affect humans? Can we check parasitic load? Can we reduce it?
Is it useful to reify parasite load?
Why not talk about specific parasites?
We do things to reduce the load of polio and measles. Sanitation is a general tool for reducing infection. (Though there is a theory that sanitation delayed polio from infancy to childhood, to disastrous effect.)
Cochran-Ewald-Cochran argue that lots of diseases are the result of parasite load (e.g., schizophrenia), without identifying the parasites. This seems like a useful argument, but adding the numbers up and calling that parasite load doesn’t seem very useful to me.
I came across the term, Wikipedia’d it, and still don’t know much about it. Is there a list of the parasites that infect humans, with their effects, causes, tests, methods of invasion, and where they’re found?
Any other relevant info is appreciated
I vaguely remember hearing a theory that a lack of parasitic load might be part of why allergies are so much more common.
Disclaimer: I don’t remember the source and am not sure if allergies are actually more common nowadays. But if this sort of thing interests you, it might be something worth checking. Unfortunately, I currently don’t have the time to do so myself, and I hope going “I think I remember seeing something interesting this way” is OK with the appropriate disclaimer.
I’ve found I like Explore Cuisine edamame pasta– the flavor is a little odd, but I can tolerate it, and the texture is excellent. I need to check, but I think they’re a good bit cheaper at Sprouts than at amazon.
And I forgot to mention that it doesn’t seem to raise my blood sugar significantly.
Anyone know of a good forum for discussing qi gong in general?
The forums I’ve found are for specific systems, but I’d like to find a forum which permits comparing systems and possibly discussing side issues. I’m hoping this exists.
I realize the requirement for only discussing a specific system may be the result of bitter experience.
I’ve not come across any general forums. Perhaps there’ll be enough interest here to generate some useful discussions?
I regularly practice some chi gung (if you’ll forgive my spelling..) mostly as preparation for tai chi. Not a specific system, just some exercises recommended by my first tai chi teacher.
One thing that stands out to me is that what exercises I do is much less important than how I do them. Hence I always think of them as ‘silk-reeling’ exercises.
What detail has thrown you out of a story?
I snagged the question from a semi-private discussion elsewhere.
One thing mentioned was a society which has tailoring but not scissors, and I realized I have no idea whether tailoring could be done if all you had was exacto knives, leaving aside the question of the likelihood of having exacto knives (possibly with heavier blades) without having invented scissors.
My contribution was an X-Files (?) movie where an ancient virus was presented as especially dangerous. I *think* it not having evolved in the presence of modern immune systems would tend to make it less dangerous, but I suppose it could go either way.
The Movie Looper – the plot relied on the future mob sending people back in time to be eliminated by hitmen in the present day. The hitmen would kill these unwitting time travelers for a while and were paid in bricks of silver that were sent back in time with the victims.
Sooner or later, however, the hitman’s future version would be eliminated by the future mob (to get rid of the witnesses). They would be sent back in time to be killed by their past version who would know what was going on since the hitman would be paid in gold bars for this job.
I never understood why the mob just didn’t send the future version of the hitman to a different hitman. Why send someone you want to kill to the only person on earth with a vested interest in not killing them? Over the course of the movie we see this happen two times, and each time the past version wimps out and refuses to off their future version.
Also, why bother with a hitman at all? Send them back in time to just above an active volcano.
Hitmen are unionized, no automation allowed.
If hitmen were unionized, work rules would require 10 hitmen for each victim, and then you’d need 100 hitmen to off the 10 hitmen and it would just spiral out of control.
Why bother with time travel at all? Just drop them into an active volcano.
According to the premise, there are “tracking systems” that make it near impossible to dispose of a body in the future. How disposing of them into the past is a workaround for this I don’t know…wouldn’t the tracking system still know the last place the victim was before the time travel event? Since the victims are bound up and masked, you would think their last known location would still be “boss’s lair with the time machine.”
Also, I’m not sure why being able to track the bodies matters that much. Great, you’ve tracked the body to the volcano. Good luck gathering forensic evidence.
Oh, but as for why time travel is a good idea for active volcano disposal, we know when the volcano was active in the past, and we’re assuming it’s not active in the future/present. So you take the victim to the dormant volcano and then chuck them into the past when the volcano was active.
The idea was that a global monitoring system was so robust that it could detect the exact time and place of someone dying.
Which invites the immediate follow-up question of, “Wait, why can’t whoever runs this system ALSO just track incidents of time travel?”
or maybe,
“Wait, why can’t whoever runs this system tune it to track the gangsters and mafiosos who are implementing this very complex and illegal system?”
I’m glad other people find this as upsetting as I do. I’ve been furious about this movie for years.
You should have just done what I did. Say “that looks dumb” when they first started advertising it, and then never watch a second of the movie ever.
Thanks… if only there was some kind of technology that would let me convey this information to the past version of myself.
Anyone have any silver bars I can borrow?
Ok, maybe they had to use time travel, but then they could have strapped a bomb to their neck rather than a silver bar to pay a killer, or just teleported them above a volcano as Conrad says, or into the ocean, or into a mountain. Sloppy world-building.
I see a pattern here:
“So, to me the notion of what’s the entire galaxy or world that you are creating or something, I can’t imagine getting excited about creating that. To me what I’m excited about is creating a two hour long experience for an audience to have in the theater. And that means how they engage moment to moment with the story and the characters that are on the screen. And that doesn’t change in either one of those.”
Ok, Rian, this may work for comedies/murder mysteries like Knives Out, but please stay away from fantasy and sci-fi.
In an early James Bond film, the moment when one car chases another down a gravel road…and the sound effects for car-going-around-a-corner include squealing tires. The kind of squealing-tires sound that happens on pavement, and not on gravel.
I don’t remember which Bond film it is, but I know that it starred Sean Connery. At the time, I was trying to watch the Bond movies in sequence.
Trains driven by diesel and nuclear engines through an interstellar network of wormholes. As well as at least a dozen other details in Pandora’s Star by Hamilton, but that’s the most egregious. If you can open a wormhole to anywhere within N light-years, you can certainly open it a hundred or two hundred kilometers up, build as many solar panels there as you need, and lay a cable through it. In fact this possibility (going to space by just opening a wormhole there) is specifically mentioned in the book and used for a couple of plot-critical things. Hell, Earth in fact does have all its energy generation done by solar panels on the Moon! However, all other planets prefer to enjoy their smog and oil spills (also mentioned in the book a good number of times) or go primitive, except for those few that can afford 21st-century clean energy technologies.
I think it’s a recurrent theme of Hamilton’s: humanity reaches for the stars and then proceeds to make all the same mistakes as its forebears, often in exacting detail. One colony expedition, I believe, doesn’t even bother to supply the settlers with power tools, which is just insane given they got there via FTL spaceship.
See also the first man to receive rejuvenation, a visionary genius who takes advantage of the fresh start to… bang his teenage son’s girlfriend.
And, well, I wish I had enough faith in the general competence of humanity to be sure interstellar colonists would be issued with power tools, but…
That happens on the character level too. Ozzie Isaacs spent a few hundred years as the richest person and the most famous adventurer in the whole galaxy, and yet he doesn’t know how to console (or send off) a teenager, or how to hook up with a woman. Many others are also surprisingly naive about some things given their stated multi-lifetime experience.
I’ve recently been binging on old Doctor Who (specifically, early Jon Pertwee) and I’ve had a fair bit of that with the Ambassadors of Death storyline.
I know Doctor Who well enough to be aware that the show’s approach to science was always fast and loose, but this particular story’s take on radiation was giving me constant needle-scratch moments. For a start: if something is radioactive enough to kill you outright on touch, it’s probably radioactive enough to give you severe radiation poisoning if you stand anywhere near it, and to make everything in its immediate surroundings insanely radioactive, too. Oh, and let’s not forget “isotopes” being used to mean “radioactive substance” (there’s actually a crate labelled “Warning! Isotopes”, or some such, on screen).
Other than that, I never fail to be amazed how absolutely incompetent UNIT are. It’s almost like the British military took the absolute worst performers that, for whatever reason, cannot simply be sacked and said “put those lot in UNIT, they won’t do any harm there”. I get that they may be outmatched when faced with a hitherto unknown alien menace whose powers greatly exceed our own, but in Ambassadors they repeatedly get their asses handed to them by what are essentially criminals and it’s not because the criminals have superior information (they do) or incredibly cunning plans. UNIT – a military organization, with military-grade gear – is incapable of handling mobsters armed with pistols in a straight up firefight.
While on that subject, they could learn a thing or two about proper security protocols, because – apparently – if you’re guarding a place you know is under threat, you just let enemy agents come and go as they please and/or station solitary guards in key spots to be knocked out by said agents or otherwise overpowered.
Which reminds me of the preceding storyline (Doctor Who and the Silurians) that helpfully informs us – in these trying times – how not to perform a quarantine. Pro tip: if you’re in a sealed underground complex that your military controls and you learn that a highly infectious and deadly pathogen has been introduced for the specific purpose of culling the human race, you do not allow one of the people who were in the room with patient zero, just before he helpfully expires to demonstrate just how bad the disease is, to get on the early train to London before you seal off the place. The correct response is: nobody gets in or out starting right now!
Yup. One vivid example is the combined statistics from the two Demon Core incidents, where scientists running near-criticality experiments with a prototype core for a Fat Man style A-Bomb on two separate occasions (with the same core, hence the name) accidentally sent the core into a supercritical state (i.e. undergoing an uncontrolled fission chain reaction).
In each event, the person closest to the core (working directly with it, and in the second incident, physically touching it to knock the top off the core to take it out of critical) died of acute radiation poisoning, 25 and 9 days later respectively. About a third of the other people in the room at the time eventually died of long-term diseases that can be triggered by radiation poisoning (two cases of acute myeloid leukemia and one case of aplastic anemia; the former in particular hard to draw conclusions from since the lifetime base risk of cancer-related mortality is counterintuitively high, and AML in particular is fairly common and has other well-documented risk factors including smoking). Nobody died right away. I’m not even sure it’s possible to kill someone immediately from radiation alone, short of pumping enough energy into them to literally cook them.
Only for neutron radiation, which you usually only get from an unshielded/under-shielded active nuclear reaction. The forms of radiation you see from radioactive decay, alpha particles (high-energy helium nuclei), beta particles (high-energy electrons), and gamma rays, will ionize and denature molecules, but they don’t affect atomic nuclei and can’t make anything radioactive themselves. Well, technically matter heats up when it absorbs radiation, and it will re-radiate some of that heat energy as infrared or visible light, but that’s not what we mean by “radioactive” in this context.
You’re right, of course. However, the show makes a big deal out of being able to detect where said radioactive sources have been by residual radiation, complete with Geiger counters ticking like a metronome to a Dragonforce song.
Now, technically, this could be the result of radioactive matter being shed by the sources (despite the fact that there appears to be no way this could happen), rather than the environment itself becoming radioactive, but I’d venture that for the very definitely unshielded humans doing the investigating it’s a distinction without a difference.
The only way to deal with it is to accept that “radioactivity” in the context of the show is just another word for “magic”.
You’re right. I’d forgotten about that part, as it’s been years since I last watched that story.
As I recall, the same can be said of “reversing the polarity” in that era of Doctor Who. Kinda like how the later Star Trek series frequently used “quantum” or “neutrino” to mean “magic”.
I was wondering whether I was remembering it correctly, so I just checked and at some point the radiation being detected from the sources is quoted as being 2 million rads plus a bunch of other numbers that I can’t quite make out over the screaming. The screaming might be me.
Is there any equipment that detects rads? I suppose at Mrad levels you could probably do calorimetry. And a time base would be helpful.
Presumably, some form of dosimeter might do (as in: give a readout in rads). The show is from 1970, so it predates the adoption of the sievert.
Huh, even the rem wasn’t a thing till ’71.
I googled it and eww, what’s this crap? Reminds me of Adam West’s Batman, but at least that one was supposed to be funny. Anyway, it’s an episode from 50 years ago, I’m sure that in half a century our present shows will look equally idiotic.
Britain’s finest.
I actually like it for the old-school charm. For all my poking fun at it, the writing is generally clever enough to keep me reaching for the next episode, even though it’s way past bedtime.
The core of the story is actually a pretty solid thriller that could really shine if given a few “ok this is silly/doesn’t make sense” passes. Not much we can do about the production values, though, unless we assume a much later date and bigger budget.
In Ocean’s Twelve, as part of the heist, Julia Roberts’ character impersonates famous actress and celebrity… Julia Roberts. My family, with whom I was watching the movie, thought this was clever. I could barely get through the rest of the film.
It’s just set in an alternate world where there is a Julia Roberts and Bruce Willis, but not a Brad Pitt or George Clooney, etc.
“But she doesn’t look a thing like Julia Roberts!”
I agree with your family; I thought that was kind of a clever fourth-wall breach.
What bothered me much more was the hall of slowly moving lasers that you can get past by dancing.
Heist show Leverage’s showrunner ran fan Q&As on his blog during the run of the show.
Don’t know if it exactly counts as a detail or more a switch of genre – from supernatural horror story to plain old ‘eek it’s a monster’ horror story – but years ago I was enjoying having the living daylights frightened out of me by a book in which there was an (apparently) supernatural entity slaughtering everyone in a remote town. It seemed to be unkillable and able to go anywhere and get anyone it wished, and nobody had the faintest idea what they could do to stop it. On top of that were the apparent supernatural elements, as I said, which were freaking out the plucky band of ‘demon’s happy meals on legs’.
Then for no discernible reason it swerved off the tracks to be “Surprise, it’s a material animal monster!”, which totally killed the mood for me and stopped scaring me. Because if it’s material, it can be killed (and was, eventually, by our plucky gang). All you need is a Sufficiently Big Gun and/or bomb(s). You can’t shoot the Devil or a Lovecraftian cosmic horror, but a big ugly monster made out of flesh and blood? No problem (eventually). I did read on to the end, but I was so disappointed – the delicious scares had stopped because I was just turning the pages until our heroes accumulated enough artillery to blow the thing to kingdom come.
Just about any medical drama (or comedy, for that matter) seems to take place in an alternate universe where HIPAA privacy rules aren’t a thing.
There’s one episode of West Wing where John Larroquette’s character storms into the White House Chief of Staff’s office (right down the hall from the President’s office) brandishing a cricket bat and shouting about how he’s going to kill someone. And at no point in the scene does he get tackled by a Secret Service agent.
In the movie 300, the Persian cavalry is shown as having stirrups. This is about as anachronistic as it would be for a movie about Attila the Hun that depicts his warriors as armed with matchlock muskets.
It’s not like any of the other aspects of 300 were much more realistic.
Yeah, but most of those were defensible as artistic license, depicting a combination of how the Spartans themselves would have seen things and how the story would be told today as a fictional story set in a heroic fantasy universe.
For the former, one big example is minimizing the contribution of the Athenian navy and downplaying the role of the other Greek cities’ soldiers at Thermopylae, especially the Thespians and Thebans who stayed and died alongside the Spartans to cover the retreat of the rest of the defenders. Another is showing the Spartans going into battle wearing flashy red cloaks and budgie-smuggler loincloths instead of heavy armor and face-covering helmets: Spartan artistic depictions of themselves from that era often show their warriors fighting naked except for a flashy red cloak fluttering dramatically in the breeze, and the loincloths were no doubt added to keep the rating down to R instead of NC-17.
For the latter, the clearest example is probably the Persian army apparently taking their stylistic cues from Mordor.
What’s the excuse for the bit where Gerard Butler gives the speech about how Sparta has no use for individually capable warriors because it’s all about the cohesion of the phalanx, and then the movie offers one brief scene, less than a minute long, of something close to a proper phalanx before it’s all about the individually superb lone-wolf Spartan warriors swordfighting the Persian horde into oblivion?
That bothered me, too. Less than the stirrups, since they did at least give us the token scene of them doing it right-ish before switching over to flashy individual dance-fighting.
And even the latter is defensible by a combination of the two categories of artistic license I called out before: Spartan artistic depictions of their warriors in action that I’ve seen appear to be split about 50/50 between showing something like the tight formations they would actually use and showing warriors fighting individually. The individual-combat version also maps better to modern movie fight choreography tropes, so I understand them going with that instead of figuring out a way to keep tight-formation phalanx fighting exciting for two hours, even though I would have preferred the phalanx version if they could pull it off.
Another is showing the Spartans going into battle wearing flashy red cloaks and budgie-smuggler loincloths instead of heavy armor and face-covering helmets: Spartan artistic depictions of themselves from that era often show their warriors fighting naked except for a flashy red cloak fluttering dramatically in the breeze
As the French history painter Jacques-Louis David demonstrates, all a real Spartan warrior needs is a flower crown and to lace his sandals up right before going into battle. The Romans, being more practical, dispensed with the flower crowns 🙂
I think my favorite part of the second painting you linked is the strategically-aligned scabbard worn by the fellow in the foreground towards the left side of the frame.
The strategically-aligned scabbard
He needs a big scabbard ‘cos he’s got a big sword (if you know what I mean) 😀
Everything else about 300 was perfectly realistic! As soon as you remember that it’s being told within a frame story by the lone survivor trying to raise morale before the real battle. The enemy as a horde of inhuman monsters being carved apart by our superhuman fighters seems par for the course.
Of course none of this explains the stirrups.
Baphomet also makes a cameo in Xerxes’s field tent, which is a completely inexplicable bit of propaganda for the Spartan survivor to make up. More armor on the Persians, deformed giants, bomb-throwing magi, and a rhinoceros, sure, but not a demon first described in the High Middle Ages.
Link between the Spartans and the Knights Templar confirmed?
Baphomet also makes a cameo in Xerxes’s field tent, which is a completely inexplicable bit of propaganda for the Spartan survivor to make up.
Never mind Baphomet, I was highly disgruntled that they went over the top with the S&M rig-out, facial piercings and Jagannath-style chariot for Xerxes but mentioned nothing, nothing, about the plane tree?
Aelian can go whistle, I’m with Xerxes on this 🙂
Mostly, although I’m still left wondering why a Spartan would have portrayed the Ephors as a bunch of perverted, misshapen priests, when everyone in the audience would have known that they were actually a board of annually elected magistrates tasked with making sure the Kings didn’t try to go beyond their lawful powers.