David Newhoff’s latest post @ The Illusion of More seems like something right up the SSC-commentariat’s alley, so I thought I’d take the liberty to plug it. Social media and its problems are the topic of the day.
A couple of highlights:
Many years ago while still in college, I was on the train to New York City—a beautiful ride along the eastern banks of the Hudson River. Several rows from me sat a family of American tourists who caught my attention when I heard the dad say, “Look kids, there’s Alcatraz.”
Reasonably confident that Alcatraz sits on an island in San Francisco Bay, I glanced over to see the man pointing across the river and his two children gazing at the fortress of the Military Academy at West Point. The layers of incorrectness in this guy’s armchair tour-guiding are more or less the kind of “information age” social media has amplified at an unprecedented scale. And I remain unconvinced that there is a policy, either public or private, that can do much about it.
For instance, because it’s in my wheelhouse, I’ll note a recent blog post published by my friends at Creative Future on the topic that Google has funded academics who just happen to espouse anti-copyright views. When I scrolled by their post on Facebook yesterday morning, there were 260 comments, so I took a peek. I know. Never read the comments. But the problem with that rule of thumb is that the comments are us. Bots and trolls notwithstanding, they are an anthology of what we think and why we think it, except that we are perhaps just egomaniacal enough that we like to believe the peanut gallery is everybody else.
Linking in the hidden thread just in case – anything to do with social media has a higher-than-normal CW potential, I believe.
I do not understand at all how a guy being wrong about where Alcatraz is is anything like “information age” social media. People have been wrong about lots of things forever and I don’t think social media is any different. If anything it makes correcting people a little easier because when he posts the pic of “Alcatraz” somebody on his feed just might correct him. Something the author did not feel inclined to do in person.
I do not understand at all how a guy being wrong about where Alcatraz is is anything like “information age” social media. People have been wrong about lots of things forever and I don’t think social media is any different.
The difference is that before social media, only his kids would be misled.
Social media is an amplifier that amplifies cluelessness just as well as accurate and valuable information. Given that the former is more readily available than the latter, what you get is loads of people telling other people about things they don’t know.
And if someone does step in to correct them? Well, they must be full of shit, ‘coz Joe over at Facebook has already posted an infographic of the entire timeline of Alcatraz, from its founding by George Washington immediately after his victory against the French in Mexico.
ETA:
For a related, but much more problematic, issue, see xkcd: Citogenesis. Make sure to read the roll-over text.
Along the same lines as the Streisand effect (where a famous person’s attempts to silence some embarrassing thing just draws attention to it), I think there’s a similar effect we can see now. We might call it the Bret Weinstein effect, or the James Damore effect, or currently the Noah Carl effect.
Find someone in a woke-heavy field who’s broadly on the side of the left but slightly politically incorrect in some area. Decide to make an example of them by getting them fired, no-platforming them out of a job they love, making them a pariah, etc. Sometimes, the result is they crawl away into obscurity and you’ve made your point that questioning your ideology gets people fired. But other times, what happens is that you create someone who is pissed off and eloquent, and has nothing else to lose–you’ve already applied all the punishment you can, you’ve already driven them out of their dream job/blackened their name across the whole of their social circle. Their remaining job/friends/life are pretty-much immune to that crap.
But other times, what happens is that you create someone who is pissed off and eloquent, and has nothing else to lose–you’ve already applied all the punishment you can, you’ve already driven them out of their dream job/blackened their name across the whole of their social circle. Their remaining job/friends/life are pretty-much immune to that crap.
But the SJWs have successfully ejected those people from their woke industry; now they are witches and only associate with other witches at the margins of civilization (e.g. YouTube and Reddit). The people who remain in these industries are either woke or have learned from these examples to keep their mouths shut, therefore the policies in these industries will be dominated by the SJWs.
Note that Bret Weinstein and James Damore are way, way more widely known (and their arguments have been widely read, even though a lot of the media coverage of them was embarrassingly bad) than they would have been without the deplatforming/mobbing. And Weinstein in particular is a serious thinker who’s worth reading/listening to at length, whose ideas have gotten a lot wider hearing because of the deplatforming.
I have no proof one way or the other, but my view is that only those who are already part of the opposition are familiar with:
1. what the writer actually said
2. the fact that said writer was made an example of for having said it.
Those are the people for whom this isn’t necessarily news, but the fact that it happens and how it happens only serves to enrage them without necessarily reducing the number of true believers on the other side. It doesn’t pull the masses away from anti-heresy attitudes but it does polarize the heretics.
The people that did the purging in many cases don’t even read the material.
Purging someone for hatespeech is what, in my estimation, is being circulated to the normal newsreading public. If you find out an employee was fired for writing a misogynistic manifesto, do you really care that he lost his job? The deplatforming itself doesn’t mean anything, because one can be certain the most relevant detail, what had actually been said, isn’t going to be circulated. Goodthinkers know not to circulate hatespeech.
The only time there might be issues is if the person who is purged is able to win a lawsuit against their purgers. In the case of a typical employee working for an ‘at will’ employer this is unlikely.
I think “the opposition” is a fairly leaky group, both ways. Increasing the visibility of some person who’s said to be a Nazi but turns out to be a pretty sensible and decent person whose politics aren’t particularly radical in any direction undermines the credibility of the outrage mobs and the media outlets that take part in them. I had a conversation with my 14-year-old son a while back and mentioned reading the Wall Street Journal–the only thing he knew about them was that they’d run an article claiming PewDiePie was a Nazi–something that was obvious bullshit given his own knowledge[1]. Over time, fewer and fewer people buy the outrage mob’s story.
IMO, the interesting thing about the common deplatforming outrage mobs is that they are built on a Keynesian beauty contest kind of logic. When everyone else is joining the outcry about the racist white kid in the MAGA hat disrespecting a tribal elder, the pressure I feel to join in and add some performative outrage on Twitter depends on whether I think the outrage mob is going to carry the day. If it does, and the official narrative ends up being that the Covington kids were racist Nazi thugs who deserve whatever abuse a bunch of powerful adults can dish out[2], then I have an incentive to join in–not only will that increase my status, but it will also add some protection for me against being accused of insufficient wokeness. If it doesn’t, I’m going to look like a jerk and perhaps suffer some loss of reputation. The more the online mobbings visibly fail, the less incentive there is for anyone to either join in or capitulate. Visible cases where someone weathered the storm and is still out there talking probably weaken the power of the online mobbing types even more.
[1] Something similar happened to me at around the same age, with respect to a moral panic about D&D teaching children to worship the devil.
[2] Note that this more-or-less happened for some other stories. I suspect a majority of Americans still think George Zimmerman was a great big white guy who murdered a little black kid in cold blood, just as they think Saddam Hussein was involved in the 9/11 attacks.
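To put that beauty-contest logic in toy form (numbers entirely hypothetical, just to show the mechanics): suppose joining the pile-on gains me G in status if the mob carries the day and costs me L in reputation if it fizzles, and I estimate it will succeed with probability p. Then joining is worth it only when p*G > (1-p)*L, i.e. when p > L/(G+L). Every visible case of someone weathering the storm pushes everyone’s estimate of p down, which thins out the mob, which pushes p down further.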
The case of the famous YT’er might prove an exception, simply because you’re dealing with someone whose viewership rivals if not exceeds the viewership of the legacy print that is defaming him. Most of his viewers were probably apolitical, so this event may have been a formative moment for them. How many people who get defamed can say the same?
In the case of a Damore, or really any regular person, there does not exist a neutral, apolitical information organ that publishes the memo *of which* a substantial portion of the populace is aware and so can experience a narrative clash without writing the source off as illegitimate.
In the real world, only dissidents platform other dissidents, so unless there’s reason to suspect otherwise there should not be any narrative clash.
That makes sense. The end of the megaphone monopoly (YT, podcasts, blogs, even public speaking gigs) has led to people like Bret Weinstein being able to get his own views and message out. And in many ways, I think the woke wing of the broad left would have been better off keeping Bret and Eric Weinstein (and many others) inside the tent pissing out, instead of pushing them outside of the tent and getting them to piss in.
I had a conversation with my 14-year-old son a while back and mentioned reading the Wall Street Journal–the only thing he knew about them was that they’d run an article claiming PewDiePie was a Nazi–something that was obvious bullshit given his own knowledge[1]. Over time, fewer and fewer people buy the outrage mob’s story.
The legacy media going after PewDiePie was obviously idiotic: I think they hate him because they think he’s eating their lunch and they thought they could use their power to incite online mobs to destroy him. The problem is that they overplayed their hand because he has a platform with a larger audience than theirs so he can defend himself and he’s genuinely uncontroversial (the worst they could find on him was a silly Monty Pythonesque Nazi skit that he did 10 years ago).
But what about Bret Weinstein, James Damore, Noah Carl, Alessandro Strumia, and so on? These people were not public figures, the general public never heard of them before the media decided to go after them. And perhaps with the exception of Weinstein, they all expressed ideas that are genuinely heretical for the progressive mainstream.
They have become martyrs for the anti-SJW cause, and perhaps some of them might be able to get a career out of it, but their influence on the industries they were purged from has been destroyed. They might have gained some cultural power, but the institutional power remains firmly in the hands of the SJWs, and in fact has been reinforced because of the chilling effect of the purges.
It’s in Anglin’s interests to claim this. In fact it’s in Anglin’s interest to claim that anyone is secretly a Nazi, as it goads his political enemies into purging either bystanders or their own side by claiming that they are secretly working for him.
I can’t speak to the whole of Monty Python but John Cleese did complain about the demographic state of Britain, which is something only Nazis do, apparently.
If you call someone a Nazi, and it actually turns out that they’re a random guy who makes rude jokes, or a provocateur who likes trolling the mundanes, or even a person with out-there beliefs on the right who isn’t actually a Nazi, then I’m going to think you’re either dishonest or an idiot.
If you have evidence that PDP is actually a Nazi, please provide a link, and I’ll certainly share it with my son. Otherwise, I’ll probably join him in putting people who claim PDP’s a Nazi into the same bin as people who claim that D&D teaches children to worship Satan.
“Does your 14 y/o know Pewdiepie paid to have people film themselves with antisemitic signs?
I don’t know why people would go to bat continuously for pdp”
The reason given in this particular instance was an attempt to poke fun at Fiverr by seeing just what you could pay a person to do. If they subscribed to him then they would have seen that video, presumably.
People seem to be asking how Felix’s subscribers who watch a substantial portion of his videos on a regular basis could be unaware of the obvious hate-thought embedded in his videos which is apparent to non-subscribers who become aware of him through samples of his content procured by journalists who are also not subscribers.
There is a big difference between making fools of people by getting them to do dumb things and actually favoring those dumb things.
There is a video of a guy getting women’s studies students to sign an “End women’s suffrage” petition, where many do, seemingly because they confuse ‘suffrage’ with ‘suffering’. The logical conclusion of this video is not that the guy is actually against women’s suffrage, just like the logical conclusion is not that PewDiePie is an antisemite for pranking people in this way.
That’s different afaik. If the feminist study guy kept doing different things that mysteriously looked like promoting antifeminism, well, I’d have to conclude he is not really a feminist. But he seems to have a documented point.
But the thing with Pewdiepie is that he will keep doing suspiciously nazi/racist stuff.
Like promoting racist videos, or promoting holocaust denier webcomics.
At some point you gotta do a bayesian update or whatever and realize it is not a coincidence.
But somehow the right wing will keep excusing that kind of stuff as jokes no matter what.
@albatross
If you call someone a Nazi, and it actually turns out that they’re a random guy who makes rude jokes, or a provocateur who likes trolling the mundanes, or even a person with out-there beliefs on the right who isn’t actually a Nazi, then I’m going to think you’re either dishonest or an idiot.
did you see pdp linking to stonetoss, a holocaust denier?
not gonna link it. do your part
at the point where we get to the holocaust denying, I start thinking there is some intellectual dishonesty from his followers in continuing to justify it
@RalMirrorAd
The reason given in this particular instance was an attempt to poke fun at Fiverr by seeing just what you could pay a person to do. If they subscribed to him then they would have seen that video, presumably.
People seem to be asking how Felix’s subscribers who watch a substantial portion of his videos on a regular basis could be unaware of the obvious hate-thought embedded in his videos which is apparent to non-subscribers who become aware of him through samples of his content procured by journalists who are also not subscribers.
Again, there is always gonna be an explanation or an apology, but the linking to racist content seems pretty consistent.
I do not know why the subscribers do not think he is a nazi.
But given that the Christchurch shooter specifically said “Subscribe to Pewdiepie” before the shooting, I have my suspicions.
Here’s a prediction: PDP will keep promoting racist content during the upcoming year, and you all three will keep making excuses and not accepting PDP is at the very least racist because you are not being intellectually honest about this.
But given that the Christchurch shooter specifically said “Subscribe to Pewdiepie” before the shooting, I have my suspicions.
A blessing from the devil is not a curse from God. That line is neither evidence of Pewdiepie’s nazism, nor is it evidence against. He could have called out to the Pope or to David Duke, and it would make no difference.
At some point you gotta do a bayesian update or whatever and realize it is not a coincidence.
That doesn’t necessarily mean updating to the belief that he is antisemitic. Another explanation is that the modern, funny counterculture tends to be right-wing, because the left is both culturally dominant and got very serious, not daring to be sarcastic or cynical about mainstream beliefs. You know, the opposite of the 60’s/70’s.
Note that yesterday, John Cusack shared an antisemitic cartoon that he thought was pro-Palestine and critical of Israel. Katy Perry and Nicki Minaj have shared Pepe images in the past.
The margins between OK and (supposedly) antisemitic are thinner than ever.
—
As for PewDiePie linking to antisemites, the only incident I’m aware of is when he told people to subscribe to a channel because of their good pop culture videos, although that channel also shared some far-right videos. The good faith assumption is then that he liked the pop culture videos and didn’t notice the others. The bad faith assumption, where he pointed out the pop culture videos, but actually wanted people to see far-right videos, seems quite far-fetched.
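If we’re going to talk about Bayesian updates, it may help to spell the update out (a toy formulation, not a claim about anyone’s actual numbers): posterior odds that he’s an antisemite = prior odds × P(this pattern of links | antisemite) / P(this pattern of links | edgy entertainer with an enormous back catalogue). The whole disagreement is over that likelihood ratio. If someone who makes off-the-cuff recommendations across a huge number of videos would stumble into dodgy links about as often either way, each incident moves the posterior only a little; if the innocent explanation is rare, the incidents compound quickly.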
I didn’t say that proves anything about PDP himself, but it is evidence that part of the subscriber base is on the racist side, tho.
Pewdiepie claims just short of 100 million subscribers. It would be a surprise if none of them were on the racist side. But even then, the killer shouting “Subscribe to PewDiePie” doesn’t demonstrate it. It demonstrates only that he knew the meme and knew that PewDiePie was accused of racism, and wanted to stir up trouble. If he’d yelled “For the glory of Israel”, would that implicate Netanyahu, Israelis, or Jews generally?
But again, last time he linked to a webcomic artist who happens to be a holocaust denier. At some point you gotta accept that it is not a coincidence; regardless of the intentions -we cannot read PDP’s mind- I predict that this kind of linking to antisemitic/racist stuff will keep happening.
I don’t know about counterculture/humor being right wing now; I bet we could find humorists back in the 60s/70s who were making fun of the hippies for being against the Vietnam War. That doesn’t make the humorists counterculture. I also like how left wing culture is dominant with so many new right wing governments around the world. I dunno what is so counterculture about humor making fun of Trump opponents, for example, given that Trump is the actual guy sitting in the White House.
Similarly, during the American Civil War, humorists in the South made fun of the North and vice versa. That does not mean either side was counterculture. Just that both kinds of humor had a market.
If John Cusack keeps posting antisemitic comics, I will update my beliefs; if PDP had stopped around the fiverr incident, I would not be going against him. The difference is that PDP has kept going after that incident, and John Cusack has not (yet? dunno).
@TheNybbler
Again, if the guy yells for Israel, that doesn’t implicate Netanyahu. But it could imply that the shooter was a rabid Israel fan? Dunno, don’t live in the counterfactual world.
Remember that the Christchurch shooter was also dedicated enough to leave a manifesto, which was again, pretty racist. Dunno what “stir trouble” here means; obviously the shooting was also going to “stir trouble”. If the guy is calculating enough to leave a manifesto, what are the chances his mention of PDP, in the livestream he was doing (again, very calculated), was just a random meme?
Just this week there was another -failed- shooter and he posted right wing memes on facebook and stuff -no PDP tho-. That does not mean that right wing memes are counterculture or anything. Just that the right wing people like right wing humor and left wing people like left wing humor. But it is evidence that the shooter was probably right winger.
Remember that the Christchurch shooter was also dedicated enough to leave a manifesto, which was again, pretty racist. Dunno what “stir trouble” here means; obviously the shooting was also going to “stir trouble”. If the guy is calculating enough to leave a manifesto, what are the chances his mention of PDP, in the livestream he was doing (again, very calculated), was just a random meme?
The Christchurch shooter referenced the “Subscribe to PewDiePie” meme precisely because he knew that the media had been smearing PDP as a Nazi, so he poured gasoline on the fire in order to incite a witch hunt against PDP which would have made the mainstream media look even more irrational and unreasonable.
He explained his strategy in his manifesto: he is trying to accelerate the culture war by committing an outrageous act in the hope that it would incite an overreaction from the “ctrl-left”, which would in turn push the “normies” to the right and goad them into action.
Except that left wingers have largely failed at this whole terrorism stuff lately, so I will ask the right wingers here: please don’t terrorize people in an attempt to troll us into violence.
On the bait thing, I guess the Christchurch guy also predicted PDP promoting a holocaust denier, so he could see the future and PDP kept signalling to the nazis.
Speaking of intellectually dishonest, many of us draw a distinction between someone being a racist, and someone being a Nazi.
I don’t watch PDP, and don’t much care whether he’s on the side of the angels or the devils. But if you call him a Nazi, that has an actual meaning. The meaning is not “he has bad ideas” or “he links to bad people online” or even “he’s an unapologetic racist.”
Calling someone a Nazi when it turns out they’re not actually a Nazi causes me to update my priors, but in the direction of giving the false accuser less weight in the future.
This also means that the whole point of mentioning PDP in the shooting is to strawman calling PDP a nazi by making it seem like an alt right joke; either the alt right is trying to smear PDP, or they are trying to make calling him a nazi look like a joke, to give him plausible deniability.
Which, unsurprisingly, the right wingers here do: they just excuse PDP and make calling him a nazi seem like an exaggeration.
At some point you gotta accept that it is not a coincidence; regardless of the intentions -we cannot read PDP’s mind- I predict that this kind of linking to antisemitic/racist stuff will keep happening.
It still only means that he’s part of an entertainment community that has these people in it, not that he is personally antisemitic.
Currently on the left especially, there is a strong belief in cooties, where people who interact with wrongthinkers are themselves considered guilty of what the people they associate with do. I fundamentally reject this.
I still haven’t figured out what Pewdiepie said, so I’m running on an impression.
If some noticeable fraction of his jokes are based on the premise that disliking anti-Semitism is funny (people are ridiculous to be so sensitive), I think it’s reasonable to call him an anti-Semite, though not a Nazi. He’s getting fun and/or publicity out of making Jews feel worse.
Note that Bret Weinstein and James Damore are way, way more widely known (and their arguments have been widely read, even though a lot of the media coverage of them was embarrassingly bad) than they would have been without the deplatforming/mobbing.
I would like to know their opinion on whether they prefer this outcome, or would rather have kept their original careers undisturbed.
It is possible to profit from being attacked by the woke hate machine. A good example would be Jordan Peterson (although he is definitely not an example of a wokey outwoked by greater wokeys). You need to have a product ready to sell. Peterson has a book, and a self-improvement program (in my opinion both very good, so it is not a blatant attempt to milk the controversy), so when you feel sympathetic, there is a simple way to act on impulse and send him the money. But you must have the product ready now, not a few months later, because a few months later people’s attention will turn to something else. (Also, if you start writing a book after the controversy happened, it will feel more like an attempt to milk it.) This is for short term success. For long term success, Peterson has hundreds of hours of interesting lectures available free online. That means that even when people stop caring about the controversy, some will still be watching the lectures, and maybe sharing them with others. Peterson will not be merely a “controversy guy”, but also an “interesting lecture guy”.
Now compare this with James Damore…
Unless there is something I don’t know about, Damore has no strategy to convert “being widely known” into money. He is not selling anything his sympathizers could buy. His only income is his job, which he has lost. Even if someone else, sympathetic to his cause, hires him, he will likely receive his market value or less (because he now has less choice), so that can reduce his loss, but it doesn’t make a profit. I also suspect that being hired because someone sympathizes with your cause, doesn’t exactly feel good; it’s like a mirror image of being a diversity hire. I would rather know I am being paid because someone respects my skills, regardless of my political opinions. Ten years later, Damore will be old news, but his job opportunities will still be more limited than before the controversy, because a few HR employees will decide after short googling that he is a potential liability.
In short, the controversy was likely a profit for Peterson, but a loss for Damore. I don’t know much about Weinstein (whether he has a strategy to monetize being known), but I guess it is more likely to be a financial loss for him, too. Fame, unless it leads to sales, is overrated, IMHO.
I also suspect that being hired because someone sympathizes with your cause, doesn’t exactly feel good; it’s like a mirror image of being a diversity hire.
Only if you get hired because someone sympathizes with your cause in the sense that your politics is an advantage, not if you get hired because the company needs a programmer, and they, being sympathetic (or not hostile) to your cause, don’t consider your politics a disadvantage.
“…Bret Weinstein and James Damore are way, way more widely known (and their arguments have been widely read, even though a lot of the media coverage of them was embarrassingly bad) than they would have been without the deplatforming/mobbing…”
A mention by a SSC commenter reminded me of the sad tale of James Damore (which I had forgotten), but until now Bret Weinstein was unknown to me, and I wouldn’t regard either as particularly “well known”.
Reading The Secret of Our Success right now, and want to quickly add that the criticism I’ve seen here doesn’t stick. The author makes points sustained by a broad foundation, then peppers them with almost anecdotal illustrations, usually framed as such: anthropologist X is even suggesting that… It’s those I’ve seen contradicted here, at least so far.
Edit: got to the plant-eating infants study. It’s also second-hand knowledge clearly framed as such, and the expression is: “many infants”. If even a minority of infants show a difference between plants and objects and wait for cultural confirmation, this makes the point of a (still ongoing) evolutionary process.
Comment on the piece “1960: THE YEAR THE SINGULARITY WAS CANCELLED”
I find it strange that this piece advances a hypothesis concerning what explains the slowdown in economic doubling times around 1960, but then completely fails to compare this hypothesis with readily available data that can inform us about its plausibility. The hypothesis, as I understand it, is that growth slowed down because population growth failed to keep up (and what this hypothesis entails can, of course, be interpreted widely in precise quantitative terms). But how does this compare with the data? Not so well, it seems to me. Indeed, in terms of percentage change, growth in world population actually *peaked* around 1960, cf. https://en.wikipedia.org/wiki/Population_growth#/media/File:World_population_growth_rate_1950%E2%80%932050.svg
And that peak would only result in a peak in the growth of the productive work force around 20-30 years later, when these many new kids became adults. Thus, a naive claimed relationship between peak (productive) population growth and peak economic growth would actually place the peak of the latter around 1980-1990.
One may object that this is global data, and most of the world is not that relevant for most of economic growth. Which is true. So let’s look at the US in particular. In 1960, US had around six percent of the world population, yet accounted for 40 percent of global GDP, cf. https://www.forbes.com/sites/mikepatton/2016/02/29/u-s-role-in-global-economy-declines-nearly-50/#5f6822fb5e9e
Yet the growth rate in population per ten year period in 1960 in the US was roughly the same as in 1900-1930 (18.5 compared to 21.0, 21.0, 15.0, 16.2, cf. https://en.wikipedia.org/wiki/Demography_of_the_United_States); and we should again add 20-30 years, indeed probably a good deal more given that we are talking about a developed country, to have this growth reflected in the “productive workforce growth”. So again, not a great match either.
In sum, it does not seem to me that this “population decline hypothesis” explains the observed pattern particularly well. Perhaps it is worth exploring other hypotheses, especially since others in fact are on offer, such as diminishing returns due to low-hanging fruits/significant breakthroughs that can only be made once having already been found (e.g., once communication speed hits the speed of light, you cannot really improve that much more; for some other, similar examples, see: https://dothemath.ucsd.edu/2011/07/can-economic-growth-last/). This hypothesis is explored in great depth, and arguably supported, in Robert J. Gordon’s impressively data-dense The Rise and Fall of American Growth: https://www.amazon.com/Rise-Fall-American-Growth-Princeton-ebook/dp/B071W7JCKW/ (He’s also got a TED-talk on it, but, although it’s nice, it does the book absolutely no justice.)
A final note: If we currently believe growth will explode in the future, then confirmation bias is the friend of that belief. And that foe of ours is surely always worth challenging with data.
I read a review on Wirecutter about “flight crew luggage.” Apparently, some luggage companies will only sell their suitcases to people that can produce an airline employee ID. Why would they have this restriction? We’re talking about suitcases, not, say, chemicals that can be used to make meth. Would the companies not want as many sales as possible? Would flight crew members really think “Uh oh, I just saw a business traveler with a suitcase from ABC Company, better buy my next suitcase from XYZ Company!”? Or is Wirecutter wrong and this type of luggage has always been easily available from Amazon?
Off the top of my head, going by the Nick Cave rule that “people ain’t no good”:
(a) they give discounts to real flight crew, this means people who aren’t flight crew try and get discounts by swearing blind “oh no, I totally work for an airline cross my heart” so they had to institute “nice try but no ID no discount”
(b) ditto for above, but then people who got the discounted flight crew luggage turned around and sold it for full whack on eBay and the likes, maybe even more than full whack for “official flight crew luggage of Airline, cross my heart would I lie to you?”
(c) people trying to pass themselves off as flight crew in other situations (like, I dunno, trying to con their way onto airplanes?) with “look, I really am flight crew, I’ve got the proper luggage and everything”
Basically, if someone somewhere can see an opportunity to make a profit, they will try it, and naive businesses that go “sure I believe you, perfect stranger!” will soon go out of business.
(a) That would explain requiring an ID for the discount, but it doesn’t explain not selling the luggage at all to people without an ID.
(b) If regular passengers can’t buy it legitimately at all, wouldn’t that encourage eBay sales even more? And why would the manufacturer care if it means they can sell even more of them?
(c) There’s an easy way to stop people from thinking the luggage is only for airline personnel: sell it to everyone. Also, if some idiot accepts luggage as proof of airline employment the manufacturer can rightly blame the idiot.
Well, there could also be reasons of exclusivity of the brand, but as you say, why then make it available to anyone? I do think it may have something to do with this being, if you like, industrial work-wear style manufacture. Sure, you can buy a hard hat and safety boots if you’re not a construction worker, but why would you? (Although it seems that within the world of workwear there are brands analogous to fashion brands, so who knows?)
So the assumption may be “if you’ve heard of us, you must be within the industry, so that’s why you want to buy our goods”.
As to why people would want to buy ‘professional’ goods, I imagine it’s to look trendy and more unique than the mass-market luggage (snobbery over luggage sounds ridiculous but since people pay premium prices to wear the ‘right’ brand, it must exist), and I do imagine some will try to use it as a cheat even in small things. You say “if some idiot accepts luggage as proof of airline employment the manufacturer can rightly blame the idiot”, but have you ever worked retail, where you get very little to no initiative, ‘the customer is always right’, and if you question anyone with “okay, sorry sir, but I must ask you to prove you really are a flight attendant with Airline X before you get the flight crew discount on coffee/a meal/the parking space” you lay yourself open to being flayed by your boss for pissing off customers and getting the place a bad review online? So you just take it as read that “guy with the same brand of luggage as all the real flight attendants I see passing through here is also a flight attendant”.
That’s the reasoning after all behind airlines and emotional support animals and people abusing this to be able to bring their doggie-woggie on board with them and/or look for special treatment. If you, hapless cabin crew, stop a passenger and question “Do you really need this animal?” you get screaming abuse and a social media storm over “abusive airline denies me necessary psychological support” and people organising a boycott and demanding you be fired and the airline shut down because it’s easy to whip up online mobs.
@Deiseach,
Over here I’ve seen some new brands of work clothes come and go (and for some reason it’s electricians who seem to start them, or at least they advertise themselves that way), but I seldom see anyone in the trades switch for long from mostly wearing Ben Davis, Carhartt, Red Wing, and (a little rarer) Wolverine, which were the brands my father wore 50 years ago as well.
My money is on “just wrong”. From FlightAttendantShop.com’s FAQ it sounds like they offer a discount to people with an airline employee ID but sell at a higher price to the public. Airline-branded luggage might be restricted to employees, but that makes sense.
I don’t totally buy the arguments about the nature of web video, but I do agree that (a) it’s impossible to moderate such a large platform, (b) moderation is strictly necessary to avoid abuse and harassment, and therefore (c) we’re fucked. And that’s even before we get to the question of cracking down on extremist political content.
It makes me miss the decentralized internet, back when we were a bunch of disconnected PHPbb and vBulletin installations. Leakage from one community to another was limited. You might be aware that St*rmfr*nt was out there, but if you never visited their forum, you never had to care about them. Whereas now, everything is part of centralized social media platforms with algorithmic recommendations, so no matter how little you want to seek out the [witches], you might find them or they might find you. This is why we care so much more about “adjacency” now. If there’s no longer a bright line separating the extremists from the far edge of the Overton window, if it’s so easy to slip from Weinstein to Peterson to Molyneux to Anglin, well, we just need to treat Weinstein like Anglin to prevent further slippage. And then people read Weinstein and see how totally reasonable he sounds, and conclude there’s nothing wrong with anyone else who’s been deplatformed either…
I don’t like the notion of heavyhanded regulation and censorship, but I don’t think there’s any alternative to it. I used to believe “the answer to speech is more speech” but how would more speech have prevented the Christchurch massacre? And yeah, the centralized gatekeepers of the mainstream media cheered the Iraq invasion. But if YouTube was around, would that have stopped it, or just cheered it on like everyone else?
(Here’s where I get my digs in at Horrible Banned Discourse proponents by pointing out that The Atlantic – not even a right-wing rag like National Review, the frickin’ Atlantic! – published excerpts from The Bell Curve back when it was first published in the ’90s. So whatever you want to say about new perspectives that weren’t allowed in the mainstream back then, this ain’t it, chief.)
The diagnosis seems wrong to me. Youtube is right wing because left wing orthodoxy dominates the media, thus anything sufficiently right wing can be presented to a viewer as forbidden knowledge (because it sometimes is).
This is compounded by a problem that I’ve noticed having to do with what I’d describe as secular anti-racists. This subset includes SJWs and other activists, but also just your regular history teacher who is a moderate D/R. A huge majority of these people cannot state why racism is bad; they just know it is bad because they have been told so.
So let’s say a kid sees some video on youtube about why the white race is clearly superior. The vast majority of rebuttals will (as you point out) be sputtering, stammering exclamations along the lines of [witches]. Secular anti-racists who can successfully engage on this topic are few and far between. Religious anti-racists tend to be more persuasive (which is probably why both the anti-slavery and civil rights communities emerged from the churches), but I find people to be very reluctant to argue those points of view.
So I don’t think we need any sort of regulation or for tech companies to want to try to prevent Christchurch. Trying to do that is like trying to pin a wave upon the sand. Instead those that fear youtube radicalization need to merely get better. They have become soft.
Is secular anti-racism that difficult for people to defend? Seems to me that ‘not according to the colour of their skin but according to the content of their character’ encodes pretty much all you need – that people should be treated as individuals rather than as undifferentiated avatars of a homogenous outgroup because basic fairness demands that you should only be punished for the wrongs you actually do, not pre-emptively punished for membership of a demographic that someone else dislikes before they’ve even made the effort to find out if you actually embody the faults they are imputing to your demographic.
Is secular anti-racism that difficult for people to defend? Seems to me that ‘not according to the colour of their skin but according to the content of their character’ encodes pretty much all you need
So, your go-to example of “secular” anti-racism is a quote by the Reverend Martin Luther King Jr, Baptist minister and founder of the Southern Christian Leadership Conference?
The reason actual secular anti-racism is increasingly difficult to defend, is that it has increasingly distanced itself from Reverend King’s words and at times seems positively eager to judge people according to the color of their skin.
An addendum to your point, which pairs with my longer post below:
That is a secular anti-racist motto, but as you pointed out, it is cribbed from religious anti-racists. Many of the seculars likely never knew/know why it’s inherently true or where it derives from. They simply accept it because it sounds good, which is why they cannot defend the system when it is attacked.
Secular anti-racism as we see its current incarnation is a university student attempting to buy alcohol with a fake ID, then attempting to steal three bottles of wine when refused, then, when challenged and the cops are called, claiming this is motivated by racism; and the university officials falling right into line and claiming the store is small potatoes, racist and has a long policy of profiling students (nah, I think the only ‘profiling’ going on there is ‘students are more likely to try and steal from us because we’re on the doorstep’), giving aid and support to a howling mob, then emailing everyone about how the jurors who found against you are a bunch of redneck racist idiots, and then being all surprised when the punitive damages for you being a bunch of jackasses get hiked up accordingly.
Now, defend that to me if you can, but it’s got hardly anything to do with “you should only be punished for the wrongs you actually do, not pre-emptively punished for membership of a demographic that someone else dislikes” and all to do with “our university is the only reason your little town doesn’t dry up and blow away, so kneel before us, peasants!”
Define to me a conventional one, then, and I’ll be glad to listen. Until then, when the examples of “anti-racism” I see are all “screaming fits of entitled hysteria”, then I’ll take it as it comes.
Such as “guy wears heavy metal T-shirt on Canadian public transport; very woque black person goes off on a Twitter tempest in a teapot about it; despite it being pointed out that this is a heavy metal album T-shirt and not some Nazi white supremacist thing, several of the commenters persist in “how do I report this violent act of aggression which is making me feel unsafe at my very keyboard?” despite not being black themselves.”
The media is a distorting filter. You probably have no idea what most anti-racism on campus looks like, but the media (old, new, and social) will happily bring you all the outrageous infuriating details about how horribly someone’s acting on some campus in the name of anti-racism.
My guess is that the median modern version of anti-racism is some lukewarm sermon on how diversity is our strength given by the EEOC coordinator during mandatory diversity training once a year, with most of the audience tuning her out and reading their phones.
I do not agree with @Deiseach’s definition of secular anti-racism. I agree with your definition of its outward statements about the world. That is, modern secular anti-racism’s ‘not according to the colour of their skin but according to the content of their character’ is a decent mission statement (although some have certainly strayed from it). Remember, my version doesn’t only include SJWs, it’s also your average centrist Dem/Republican over the age of 30. Think schoolteachers, small business owners, franchisees, etc.
A mission statement, however, is not a defense of that mission statement. And the secular anti-racist in modern parlance does not defend this POV with phrases like “endowed by their creator”, “created equal”, etc. Those are inherently religious if you do not know the works of the classical liberals. If you want to see a man who blends classical liberal justifications with religious justifications for the elimination of slavery, the best example I know of is Abraham Lincoln. But many modern anti-racists would never use much of his rhetoric. So, I find, they are wholly unprepared.
Simply saying the motto doesn’t defend it, because kids notice things. They notice who the bullies at the school are, who are the smart kids, who are the dumb kids. Teenagers notice things, like who makes the football team, who is going to college, who got knocked up, where not to go. Then young adults also notice things, like who commits the most crimes, where the best schools are, etc. And a mission statement doesn’t rebut any of that. “Treat people fairly”, without a strong foundation is just a war against noticing.
There are two things I think secularists on youtube [and elsewhere] have trouble defending:
1. The total absence of any innate, civilization-influencing behavioural group differences (All men are created equal)
2. Arguing against the idea that a subspecies or extended family has a biological imperative to prioritize the well-being of those more genetically similar over those less genetically similar.
If you’re a Christian you can appeal to the equality of souls. If you’re a secularist you’re in a world where [especially if you’re stuck on #1] different, non-fungible subspecies of humans are in soft competition with each other, and modern anti-racism is maladaptive.
#2 is a moral issue, #1 is not.
______________
Arguing for extremely non-controversial rules of conduct between groups of people which are intended to reduce the likelihood of conflict is easy enough to justify on grounds of self-interest. But this is so far from what people care about today that talking about it almost makes you suspect in the eyes of others.
#2 is a factual issue–should we expect humans to have instincts in that direction thanks to evolution, and do we actually see that?
As best I can tell, there’s not a lot of evidence that humans are group genetic interest maximizers in any meaningful way, and it’s hard to see how that would have evolved in most of the environments our ancestors lived in. We see the evidence of some level of group selection (IMO) in instincts toward tribalism, but those are often turned against ethnic genetic interests in favor of some nationality, language, ideology or religion.
There’s also a moral issue of whether we should follow such instincts to the extent they exist (rape, stealing, and murdering romantic rivals are also instinctive, but we still lock people up for doing those things). ISTM that’s just the naturalistic fallacy rearing its ugly head.
Nationality, language, and religion tend to be soft-bounded geographically, and for obvious reasons the people you most often would procreate with would tend to share your language and your religion. Colonialism and missionary work of the last 2.5 centuries have blurred this somewhat.
There’s also the fact that a historic near-out-group may have more similarities than a far-group and so in the mind of the person is more of a threat even if the historic rivals share more DNA than the far group.
There are also going to be plenty of on-case overrides. Civic-mindedness can temper the effects of nepotism in society but that’s not the same as neutralizing them or even engaging in reverse nepotism.
But I’ll submit as counter-evidence:
1. The prevalence of racial gangs in prisons and schools
2. The tendency for people to be more trusting of individuals that look more similar to them [in experiments]
3. The tendency of people who are friends to share a higher portion of DNA than strangers do
Note how #2 does not use the word ‘Race’ here. In-group preference on the basis of some genetic similarity can be done at multiple levels of granularity. It’s not clear to me why drawing a donut around the ‘race’ level of granularity and marking it as bad, while saying that humanism outside it and ethnocentrism inside it are acceptable, makes any sense except as a matter of historic legacy.
there’s not a lot of evidence that humans are group genetic interest maximizers in any meaningful way
There’s a lot of evidence that humans are group interest maximizers, though, where people use clues to decide who is ingroup and outgroup (and intentionally send group signals to make identification easier).
Certain genetic differences do correlate with ethnic differences, which makes them usable proxies for in/out-group discrimination.
Yep. Tribalism due to group selection makes sense to me. Selection to favor your own race/distantly extended family without any particular tribal or other ties doesn’t. Most of human history was people interacting with third/fourth cousins, and very little of it was people interacting with members of other racial groups.
Most of human history was people interacting with third/fourth cousins, and very little of it was people interacting with members of other racial groups.
For modern definitions of “racial”, sure. But while everyone in your hunter-gatherer band for most of human history was probably a third or fourth cousin, those bands did interact with each other, and I don’t think you can make such strong statements about kinship there.
Admittedly this is where my ability to generalize ends — there’s a whole range of inter-band behaviors documented from the forager societies we know something about, from extreme hostility (whether you can call it “warfare” is a matter of definition, but plenty of killing and even more beatings and kidnappings) to obligate marriage exchanges (‘no marriage within band’ isn’t an uncommon norm in cultures like this).
that people should be treated as individuals rather than as undifferentiated avatars of a homogenous outgroup because basic fairness demands that you should only be punished for the wrongs you actually do, not pre-emptively punished for membership of a demographic that someone else dislikes before they’ve even made the effort to find out if you actually embody the faults they are imputing to your demographic
I would agree, but then even though I’m not religious I was raised in a Catholic family. I wonder if this concept of “basic fairness” isn’t basic at all, rather a direct consequence of Christian theology, which focuses on the moral equality of all humans before God.
While there were historical examples of explicitly racist Christian denominations (e.g. the antebellum Southern Baptists), they were unusual. Mainstream denominations sought to convert foreign people, but once they converted they were generally considered morally equivalent. Most other religions don’t do that; they are usually concerned only with a particular ethnicity or nation. (Islam is, in theory, ecumenical like Christianity, but in practice many Muslim societies tend to be multicultural and/or tribal).
Maybe as the Western civilization becomes more secular, the concept of “basic fairness” is bound to fade away and the only options left will be either pragmatic racism or tribal identity politics.
I don’t buy the conspiracy version of it, because impersonal mechanisms like evolution and capitalism are infinitely more powerful than any conspiracy, but the culture war is and always will be a sideshow distraction from the real forces at play. The big content platforms are not going away because they are wildly profitable. They will only ever be moderated in such a way as to avoid killing the goose that continues to lay golden eggs. These are brute facts of the matter. It makes no more sense to have an opinion on whether YouTube (or something very much like it) should cease to exist than it does to have an opinion about whether we should stop having earthquakes.
If you want to make a dent here start working on the problem of witch detection software.
They will only ever be moderated in such a way as to avoid killing the goose that continues to lay golden eggs.
I’m not sure about this because I don’t think the golden eggs are laid much by political provocateurs. I mostly use YouTube for video game content, and the game review / streaming channels I watch have many, many more subs than probably every right wing channel except for Crowder. And that’s just video game stuff. Now look at music videos and makeup tutorials and generic comedy stuff, etc.
Where this would go wrong is Witch Creep. You start with banning Alex Jones and a year later we’ve got Milton Friedman in the list of right-wing hate youtubers on the front of the NYT.
If you want to make a dent here start working on the problem of witch detection software.
Sounds an awful lot like China’s social credit system. This I find dystopian. See above with the Witch Creep.
I don’t necessarily disagree with you and I don’t necessarily disagree with BBA. My opinions on this one are complicated and unstable. My point with the above comment is that they don’t matter. It’s like being against the existence of nuclear weapons, completely pointless.
The problem is that a lot of what I’ve said about internet video here also applies to writing. Literature is also solitary, composed in silence, read in silence; it’s a fundamentally pathetic and asocial activity.
Yeah, it does feel a little like he’s noticed some uncomfortable implications from his argument and hastily tried to semi-acknowledge them at the end, but TBH the way things are with lefty CW stuff, it’s probably to his credit that he acknowledges them at all. (He does say literature rather than blogging though, and there’s an earlier reference to the uselessness of blogging against neoliberalism, so I’m not sure if it’s fully reducible to ingroup/outgroup dynamics. Who’s even in that guy’s ingroup these days?)
Having ploughed through that article, it seems to boil down to “We lefties can’t be as effective as the horrible righties because we’re just too cool and compassionate and nice” which may be flattering but doesn’t help his case.
Given that I had to look up who Sam Kriss was, and the results pointed to him having been embroiled in a little online campaign scandal of his own, I would have thought he’d be more sympathetic to alleged witches, but no – witch hunts happen because of all the horrible witches out there, the only solution is to burn the whole thing down because us nice guys are too nice to effectively fight back.
Also, I laughed at the obligatory “showing off my Eng Lit chops” bit at the end:
In writing, we also talk to the inscription-machine more than we do to any actual reader: as Derrida argues in his commentary on Lacan’s seminar on The Purloined Letter, a letter never reaches its destination.
I recall there was a study that found that conservatives are more attractive than lefties. Maybe it won’t replicate, but if it’s true then it might explain why conservatives come across better on video.
Anecdotally, I don’t find hard to see why Jordan Peterson or Lauren Southern get more views than, I dunno, ContraPoints?
Trying to read it charitably, but it takes a lot of trying. Without it, it sounds like “lonely people allowed to express themselves inevitably turn into Nazis”. I’m not sure why the author is confident in labeling any pathology coming out of YouTube “right wing” (have they never heard of tumblr? Different format, same outcome, mostly other wing) so it’s hard not to assume this is just a “boo outgroup” piece. Calling them the politics of “loneliness” a) feels awful damn rich coming from a blogger and b) feels like just another in a long line of insulting introverted nerds because it’s easy and feels good.
I don’t see any solutions in there, and stoking the fires of “everyone who disagrees with me must be stopped and deplatformed now before their minor thoughtcrimes become Christchurch massacres, which of course they inevitably will” feels really scary and counterproductive.
Did you read the same piece I did? I thought it was very explicitly against deplatforming right-wing YouTube. Re: “Different format, same outcome, mostly other wing,” isn’t that exactly what he’s getting at here?
The left that takes shape on YouTube and the various other social media platforms tends to be a gloss over something that remains fundamentally reactionary: bickering and resentment, cringiness and vituperation, a bitter identification with imagined national, cultural, racial, or political communities, a subject at war with the world around them and everything in it.
It’s a kinda weird definition of “right-wing”, but I think it tracks – in SSC terms, he’s just trying to look at the ideology rather than the movement.
Maybe we didn’t read the same post – the one I read was all about the problems with “right wing YouTube” and how YouTube was “always going to be ruled by the right”.
Kriss doesn’t seem to have any real issue with deplatforming right-wingers on YouTube except that he thinks it might be ineffective (the free speech argument is just a “squabble” and anyway the right wingers aren’t really engaging in speech, it’s just a “concentrated torrent of non-communication” to zero audience (again, quite rich coming from a guy blogging into his own personal void because more mainstream outlets dropped him for getting too aggressive with his girlfriend and then issuing a non-apology apology)). His problem with deplatforming, other than it not working, is that it catches up Antifascists. Left unexamined of course is whether the Internet black hole plays any part in the radicalization of Antifa. Kriss’ preferred solution?
It can’t be drowned out and it can’t be switched off. The only way to shut down the fascist creep on YouTube is to shut down YouTube itself.
As for the weird definition of right-wing, it sounds like he’s just taking shots at “the wrong sort of left-wing”. In SSC terms, he’s attacking his near-outgroup. You can’t defend that as an idiosyncratic definition of “right wing”. It’s a pure partisan attack. Leftism is “mass participation politics”. Rightism is “the politics of loneliness”. Again, it’s hard to separate Kriss’ argument from the usual “righties are gross lonely neckbeards in their parents’ basement”, just wrapped in better turns of phrase.
Charitably I think he’s noticed a real issue about the “black hole” effect of something like YouTube, but he doesn’t really say anything profound about it after that. The rest of the piece is all about defending the idea that the badness of YouTube is fundamentally right wing and carefully avoiding talking about similar phenomena elsewhere online that don’t feed the partisan slant of his narrative.
Speaking of people doing things together, perhaps Kriss’ friends should do an intervention to get him to confront his metaphor habit before it gets completely out of hand.
The Christchurch massacre killed 51 people. The Iraq War killed… well, we lost count, numbers vary wildly, the lower bound seems to be around 110,000.
You can’t just elide the fact that the mainstream forces who would be carrying out any “deplatforming” have multiple orders of magnitude more blood on their hands than the people they’re trying to silence, and a history of silencing people opposed to their wars of aggression. That’s arguably the #1 problem with the idea in the first place.
Wait, the Iraq War is the responsibility of the deplatformers of the world? Cheney and Rumsfeld and Wolfowitz are antifa, is that the idea?
EDIT: That comes across as much snarkier than I intended, I’m not trying to start a fight here, I’m legitimately confused as to what you’re trying to say. There were hundreds of thousands of people on the streets protesting the Iraq War, and I think most of those who are responsible for deplatformings were either there, or would have been if they’d been old enough. So what you’re saying is extremely confusing to me.
Deplatforming is carried out by a small number of massive tech monopolies, typically in response to outrage pieces at a small number of media outlets; protesters have little or nothing to do with it. Many of the media outlets trafficking today in said outrage pieces backed the Iraq War to the hilt at the time. Something like a third of Vox, for instance, is owned by NBC, which in 2003 fired (one might even say deplatformed) Phil Donahue for being antiwar.
If deplatforming hit the New York Times and Vox Medias of the world, instead of their competitors, I’d be far more sympathetic to the idea. It doesn’t and never will. Censorship exists to protect those who already have power, and media and tech barons alike want to preserve a status quo that’s keeping them wealthy and powerful.
If there’s no longer a bright line separating the extremists from the far edge of the Overton window, if it’s so easy to slip from Weinstein to Peterson to Molyneux to Anglin, well, we just need to treat Weinstein like Anglin to prevent further slippage. And then people read Weinstein and see how totally reasonable he sounds, and conclude there’s nothing wrong with anyone else who’s been deplatformed either…
I’m gonna be honest, to me this reads like an argument against deplatforming anyone.
I don’t like the notion of heavyhanded regulation and censorship, but I don’t think there’s any alternative to it. I used to believe “the answer to speech is more speech” but how would more speech have prevented the Christchurch massacre?
It’s impossible to keep the incidence of any crime (including murder) at precisely zero. The standard way of preventing crime is deterrence; it mostly works, though not always. If we’ve tried enhancing the usual ways of fighting crime, and tried any other plausible ways that don’t involve suspending fundamental liberties, and the crime rate is still excessive (say, terrorist bombings killing dozens of people every day), it may be reasonable to contemplate suspending fundamental liberties if it can be expected to help the situation significantly.
But, crucially, the threshold above which the rate of a crime is considered “excessive” enough to suspend fundamental liberties can’t be “anything above zero”. A standard that it’s OK to curtail fundamental liberties (such as speech) as long as they might indirectly lead to a slightly increased incidence of murder (or even terrorism) would lead to a system with no civil liberties at all — and it still wouldn’t achieve the goal of zero terrorism. Currently we have something like one ideologically motivated, non-Islamist murder a year throughout the Western world (there aren’t many more Islamist ones either), which is about the lowest possible rate above zero. In particular, IMO single events (such as a terrorist attack) are essentially never legitimate reasons for restricting basic rights.
All of the above also applies to censorship by private companies. While IMO private companies should have the right to censor their content, we should consider it undesirable for much the same reasons we consider it illegitimate if done by the government; we should discourage rather than encourage it. That’s if the platform has no effective alternative, and thus its censorship would have a significant effect on public discourse. Censorship by private companies is not much of a problem if the platform has major alternatives and thus the censorship has little effect on public discourse — but in that case, it can’t achieve the desired effect either.
In general, I tend to be very skeptical about using single events (such as a terrorist attack) as justification for policy change. Much of the time the policy change is only tangentially related to the event, and the “justification” is mostly to paint opponents of the policy change as insensitive, or not sufficiently opposed to the evil terrorists.
Currently we have something like one ideologically motivated, non-Islamist murder a year throughout the Western world
I think you mean terror attack, not murder, since the Christchurch attack alone killed 51 people.
there aren’t many more Islamist ones either
There were 4 deadly terror attacks by Islamists in the West* in 2018, 8 in 2017, 9 in 2016 and 8 in 2015. So that’s 4-9 times more than what you claim, for the previous 4 years.
* With a relatively strict definition of the West, excluding Israel, Bosnia, Russia, etc.
In particular, IMO single events (such as a terrorist attack) are essentially never legitimate reasons for restricting basic rights.
Terrorist attacks aren’t single events. They are part of a pattern.
Secondly, your entire argument is very weak, since a single incident can reveal a weakness. The Chernobyl incident was a good reason to reconsider how to build and run nuclear reactors.
I think you mean terror attack, not murder, since the Christchurch attack alone killed 51 people.
I counted a mass-murder as one.
There were 4 deadly terror attacks by Islamists in the West* in 2018, 8 in 2017, 9 in 2016 and 8 in 2015. So that’s 4-9 times more than what you claim, for the previous 4 years.
I didn’t check the details on Islamist attacks, as the post I replied to was about censorship of far-right views, and far-right terrorist attacks. In any case, when looking at orders of magnitude, Islamist terrorism in the West is still much closer to 1/year than to, say, the total number of murders, which is orders of magnitude more.
Terrorist attacks aren’t single events. They are part of a pattern.
In that case the pattern may be a reason for a policy change, not a single event.
Secondly, your entire argument is very weak, since a single incident can reveal a weakness.
A “weakness” of the form that we can’t prevent some particular event (e.g. a particular crime) with 100% certainty is something we have to live with, and not a reason to curtail basic rights. The only exceptions are single events of extreme scale, such as a major war.
Definitely no to the first. I’ve never heard of him before now. And googling does not reveal him to be well known.
Judging by this essay, I’d say no to the second. It’s possible he just wasn’t batting well that day, but he has not managed to convince me to read a second one. I’m curious what your judgement is.
@quanta413,
My reaction was much the same as yours, but other commenters seemed to indicate previous familiarity with the author and I was curious about why.
I’ve heard of him and read him before, but I don’t think he’s prominent, and you have nothing to be ashamed about. I know him for the Atlantic no-trees-on-flat-earth article and for the sexual harassment thing.
That said, three points in defence of my practice: writing is not embeddable within a concentrated technical platform; the materials of writing are not (necessarily) a global communications infrastructure but an emergent and mutually agreed-upon system of words; writing is removed from its object, and therefore involves a properly significatory aspect that video – which can only enframe, capture, and replicate – lacks. As such, it’s intersubjective in a way that video can not be, because words are not an exterior technology but the foundational stuff of subjectivity.
isn’t writing for you and me. It’s the verbal equivalent of contemporary art music.
Utter nonsense. The passage you quote is perfectly lucid. If you think otherwise, I suspect it’s because you’re pattern-matching to unrelated academic writing that happens to use some of the same words, and thus shutting down before you even reach the stage of trying to parse the sentence.
I’m reading that, after several tries, as “writing is different from video because it uses words instead of directly capturing images, and this is significant because [something].” There are lots of YouTube videos that are just people talking (with words), but I suppose that doesn’t count for some reason.
The bit about writing not being involved in global communications infrastructure eludes me entirely, unless you have a very finicky definition of what does and does not involve comm infrastructure. I’m sending this post over the same internet YouTube uses, and there are plenty of uploaded text content sites which are at least roughly analogous to YouTube.
EDIT: I went ahead and read the link, and this man appears to be dishonest, biased, and thoroughly pretentious to boot. He opens with a bizarre and unhelpful metaphor about slippery membranes, and goes on to define YouTube as right-wing because all the loud leftists on it are assholes and therefore “reactionary” in character.
It didn’t seem clear to me either, I’d just write that off as my being undereducated, but @brad seems very educated to me so I’m inclined to believe his word on the piece’s quality.
The essay is incoherent junk. There’s no attempt to quantify whether he’s getting an endless stream of “Nazis” because he’s looking for it or whether all roads on youtube inevitably lead to “Nazis”. And he seems to confuse many not-Nazi things with Nazis.
The truth is he found a bunch of right wing crap in his youtube recommendations because he is looking for it. Youtube fed his obsessive mind what he wanted. He cites a video that he fully admits he may have been the only viewer of.
On the other hand, youtube feeds me key and peele, k-pop music videos, videogame vlogs, and cooking shows. Because that’s what I watch on youtube. It’s not rocket science. Nazi videos? 0. Alt-right videos? Also 0. Jordan Peterson videos? Also 0. Not that there’s anything wrong with watching Jordan Peterson.
~million subscribers sounds like a lot but it’s actually not that big. You can find 18th century cooking series on youtube with a similar number of subscribers. Many cooking or eating vloggers have larger subscriber counts.
Stop it you. That’s slander. After all, everyone knows David Friedman is into medieval cooking so if he did a cooking series it would be set several centuries before the 18th century. He would never be so historically careless.
However, the series of medieval cooking tips would include information on sourcing the correct ingredients in the current day. By its very nature, this information would be supportive of markets and free trade, which the essayist would consider to be fundamentally right wing.
“YouTube was always going to end up being ruled by the right, because right-wing politics are a politics of loneliness.” Subtle.
Here’s my equally charitable counterpoint: “Mainstream media was always going to end up being ruled by the left, because left-wing politics are a politics of weakness.”
Snark aside, isn’t the right-wing associated with stronger communities and less loneliness? “Family values” and such?
I will ignore the left/right aspect of all this, because I think that the problem is advertising.
It would probably be possible to set up a large website in a way that makes most people happy, by keeping everyone in their bubble and pretending that the rest of the world does not exist. (I am not saying that would be a good thing to do, only that it would be possible.) But that is not what maximizes the number of ad views. People reading only stuff they agree with would get bored after a while, and leave. People who are pissed off will stay and keep “fighting”.
The greater the controversy, the more ad views, and the greater the profit… until some moment when things become too controversial, and some companies start having second thoughts about being associated with that kind of content. Then, a ritual sacrifice must be made to appease those companies. Maximizing the profit means riding this wave carefully… to be not too controversial, but also not too uncontroversial.
And yes, this entire thing can be manipulated by politically motivated people throwing a public hissy fit about something quite mild, because that is a weapon that works. But even without such people, there would always be someone angry at something, because if you are not making anyone angry, you fail at maximizing the ad views. There is an optimal amount of anger, and it is greater than zero.
Therefore, the large websites will keep making you angry.
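A toy illustration of that “optimal amount of anger is greater than zero” claim, in Python. Every functional form and constant below is invented purely for illustration (nothing here is measured from any real platform); the only point is that if engagement rises quickly at low controversy while advertiser backlash grows steadily, the revenue-maximizing level sits somewhere in the middle rather than at zero.

```python
# Toy model: expected ad revenue as a function of how much controversy the
# platform tolerates. Engagement rises with controversy, but advertiser
# backlash eventually outweighs it, so the optimum is interior.
# All shapes and numbers are made up for illustration.

def expected_revenue(controversy: float) -> float:
    engagement = controversy ** 0.5          # more outrage, more time on site
    advertiser_backlash = 0.3 * controversy  # brands start fleeing past a point
    return engagement - advertiser_backlash

if __name__ == "__main__":
    levels = [i / 100 for i in range(0, 301)]          # 0.00 .. 3.00
    best = max(levels, key=expected_revenue)
    print(f"revenue-maximizing controversy level: {best:.2f}")  # strictly between 0 and 3
```

The exact curve doesn’t matter; with these made-up numbers the maximum lands around 2.78, comfortably above zero and below the top of the range.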
Yeah, the Kriss essay is kind of a disaster. As am I.
I want to scream THIS HAS ALL GONE HORRIBLY WRONG, CAN’T YOU SEE THAT? NOW TURN THOSE MACHINES OFF BEFORE THEY KILL US ALL but nobody ever listens to the raving lunatic in the opening scenes of a disaster movie. I’m finding it impossible to articulate my deep existential dread that we’ve unleashed forces we won’t be able to defeat… or maybe, that human nature trends towards autocracy and xenophobia and all the progress we’ve made towards liberal democracy and multiculturalism could vanish in the blink of an eye.
(And no, this is not just Trump Derangement Syndrome. Trump is a joke. I’m worried about the next joke not being so funny.)
Or I could just be losing it. If I ever had it to begin with. Whatever. As Warren Zevon said, enjoy every sandwich.
All the more reason why enemies of the liberal order should be deplatformed with extreme prejudice.
Good God, I thought M*ldb*g was huffing glue when he argued for imposing totalitarianism in order to prevent totalitarianism, but it’s starting to make sense to me…in the abstract anyway. Object level I’d much rather be ruled by the people he wanted suppressed.
All the more reason why enemies of the liberal order should be deplatformed with extreme prejudice.
But deplatforming itself is illiberal, so you don’t have a liberal order if you are deplatforming people.
Furthermore, with human nature being what it is, you can’t just give some people very strong tools of oppression and then expect them to limit that to actually only oppressing the illiberal, rather than oppressing what they dislike, don’t understand, what puts a burden on them, etc.
Humans react badly to short-term crises. We think we’ll get over this specific tragedy right now with deplatforming, but what are we giving up?
Even the New York Times scare article about alt-right YouTube showed that the person could be walked back out of the rabbit-hole with differing views. Or the story a month ago in an OT about “my son joined the alt-right after being labeled a sexist, but left when my husband and I showed him some respect.” Or Daryl Davis.
I might be wrong, but it’s really easy to make mistakes in favor of “let’s crush our outgroup into dust.” If you think you can lose weight eating 30 pounds of jelly donuts a day, you need to realize there is a significant bias in your head. It doesn’t mean the jelly-donut-diet is wrong, but you need to slow down and think coolly.
There are heavy restrictions on thought and speech.
Restrictions on behavior are kept lax (or maybe fun is also mandatory?)
The censorship to combat xenophobia seems narrow to me though. Insofar as multiculturalism is a public policy, why should censorship be limited to defending one policy against critics? Why not all policies? Bad monetary and trade policies might be more damaging than immigration restrictionism.
I mean, you are going to have a very narrow discourse then. Every TV station is going to be Charles C.W. Cooke arguing with Victor Davis Hanson and Elizabeth Nolan Brown.
I can understand that there are people who would approve of such restricted access to information for the proles, but I don’t see why I would ever agree with them, especially when I’m a prole by their reasoning.
I’m finding it impossible to articulate my deep existential dread that we’ve unleashed forces we won’t be able to defeat… or maybe, that human nature trends towards autocracy and xenophobia
That’s exactly the position I would articulate as a conservative.
For 500,000 years humanity has been stuck in an iterated prisoner’s dilemma. For 500,000 years, any time a tribe has had the cultural high ground they’ve defected, and crushed their ideological opposition. When the tables turn (Every 40-100 years or so?) the new dominant group crushes the old.
For the last 75-150 years (in the US) the red tribe and blue tribe have been cooperating. Rights have steadily expanded, and political violence is extremely rare. There are people on the right (and left) who want to change that paradigm. But at least for now, neither side has defected. The best weapon the defectors have is to try to convince the cooperators that the other side is about to defect, so we should defect first.
In an iterated prisoner’s dilemma the best strategy is to keep cooperating until the other side defects, then punish them for their defection. But in our game “the tribe” can’t always control their defectors. So I propose the best strategy is to get as many people as possible to pre-commit that they’ll keep cooperating across tribal lines, even during years where enemy defectors seem to be taking control. Personally, I wouldn’t want to live in any other world.
TL;DR The best strategy to win our modified prisoner’s dilemma is to promise you’ll cooperate unconditionally, and hope the other side reciprocates.
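For anyone who wants the game theory spelled out, here is a minimal sketch of the iterated prisoner’s dilemma being invoked above, written in Python with the standard textbook payoffs (T=5, R=3, P=1, S=0). The strategies and numbers are the usual illustrative ones, not a model of actual politics.

```python
# Toy iterated prisoner's dilemma: unconditional cooperation vs. tit-for-tat
# vs. unconditional defection, over repeated rounds with textbook payoffs.

PAYOFF = {  # (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_cooperate(opponent_history):
    return 'C'

def always_defect(opponent_history):
    return 'D'

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else 'C'

def play(strategy_a, strategy_b, rounds=100):
    """Run the iterated game and return total payoffs for both players."""
    seen_by_a, seen_by_b = [], []   # what each player has observed the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, always_cooperate))   # (300, 300)
    print(play(always_cooperate, always_defect)) # (0, 500)
    print(play(tit_for_tat, always_defect))      # (99, 104)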
I see where you’re coming from. But in my mind, even if “my tribe” defects first and crushes out opposition, it’s just a matter of time before we get crushed in turn. I really believe that there is no guarantee that modern, liberal, egalitarianism will ever come back once we lose it.
I’m open to other ideas.. what kind of strategy would you recommend?
I really believe that there is no guarantee that modern, liberal, egalitarianism will ever come back once we lose it.
I’m open to other ideas.. what kind of strategy would you recommend?
Tit-for-tat. Make casualties from both sides until the survivors agree to a truce. How do you think modern, liberal, egalitarianism came into existence in the first place? Certainly not by one side rolling onto its back.
How do you think modern, liberal, egalitarianism came into existence in the first place?
I think (and may be wrong) that a lot of the great leap in egalitarianism comes from oppressed groups escaping the tit for tat in Europe, and moving over to America. It’s my impression the tit-for-tat cycle was largely broken at that point due to a combination of factors that historically didn’t exist.
Surplus of resources and space (due to the collapse of the indigenous population)
The influence of the Quakers.
Large communities were stronger communities, and the threat of an outgroup leaving to build their own community was enough to shift the balance of power.
Then later on
The need for culturally diverse states to unite against Britain forced some measure of egalitarianism.
Federalism creates an atmosphere where tit-for-tat isn’t necessary. If you don’t like your state, you can leave.
Slavery being a south centric practice allowed the northerners to be virtuous at no personal cost up until secession triggered a civil war.
After a bloody civil war fought in the name of outgroup rights, I think momentum carried us to where we are today, with WW2 spreading it to Europe.
I think that the tit-for-tat lasted for 500,000 years… and only stopped gradually over the past 400 years because we got a bunch of surplus resources. I imagine it’s possible we get another boost like that if we ever become an interstellar species… but I think that if we slip back into the old ways, it may be many hundreds of years before things go back to the way they are now.
(I’m a conservative but consider myself gray tribe, so I hope you’ll forgive me if I use red tribe’s perspective here)
I don’t imagine my tribe can oppress/kill the Left for a few years or decades, and then one day Chris Matthews or AOC or Bernie Sanders go on TV and say “Ok guys, you win, please stop oppressing us and we promise we won’t try to make you bake any cakes” and then the right, led by Trump/Sean Hannity/Paul Ryan say, “ok we accept your surrender, and because we are so magnanimous we will stop doing things you find oppressive like allowing discrimination based on gender identity”
There is nobody on either side who could negotiate a truce even if everyone wanted to! It seems to me a tit-for-tat strategy would result in everyone killing each other until there are so few people left that it doesn’t make any difference anyway.
The US didn’t leave tit-for-tat behind at all during the period you see. They explicitly built it right into the system. All that checks-and-balances stuff was them saying “look, if you push too hard here, those other guys are going to be able to push back there, so better keep yourselves in check.”
While everyone avoided titting for fear of being tatted, tatting was not actually necessary; an ideal tit-for-tat looks the same as unconditional cooperation, because there’s no actual defection to punish. We got so used to that that we forgot the need to tat, people got mixed up about why the previous period worked (cooperation is good, but only possible because of the willingness to punish defection), and so now the defectors were given free rein and it’s spiraling out of control.
There are always grifters who make money off of encouraging their fellow tribemates to defect. Saying that the outgroup is evil is a guaranteed audience. And it’s a rational strategy, as long as they never push things too far. There are feedback problems: they tend to get more money the more they push, and the only way of knowing when they’ve pushed too far is that a bunch of things tragically break.
I don’t know how exactly we got into this cooperate pattern, but it works, and keeping the stable world order going is so important that it only takes a small leap of faith to see that everyone else will keep it going. The best you can do is talk down the shit-stirrers in your own tribe, because your own tribe is where you will have the most effect.
@jaskologist I think separation of powers is important, and helps prevent any specific group from gaining too much power over another, but I’m not sure I see how tit-for-tat can be described as explicitly built right into the system; maybe you can elaborate on what you mean? I don’t really see any mechanisms in the government to make it easy to punish each tribe’s outgroup. In fact, I would say just the opposite: that our constitution and bill of rights were specifically designed to prevent people from oppressing their outgroups.
Opponents of stop and frisk, and opponents of coerced cake-baking are both pointing to the bill of rights to protect themselves from their outgroups.
The best weapon the defectors have is to try to convince the cooperators that the other side is about to defect, so we should defect first.
Uh, we’re long past this. Both sides have been trying (mostly successfully) to convince their own side that the other side already has defected and that we must respond in kind. And they’ve been doing it for years.
@souleater,
I understand where you’re coming from philosophically but from my perspective politically motivated violence in “the west” and especially the United States is far less now than in the first half of my lifetime, and I think the blood spilled by the bullets and bombs, assassinations and lynch mobs of that era is a far worse thing than the ink and pixels of the “character assassinations” and “Twitter mobs” of today.
The increasing incidents of mass shootings are worrying, but the U.S. murder rate is still less now than it was for most of my life, so what prompts your fears?
I think we actually agree Plumber, I would say that people born today, and in the west in general, are among the luckiest people in human history. There are, without a doubt, more people living longer, healthier, and more prosperous lives than ever before in human history. I’m just afraid tribalism is fracturing our society and that some people in both major tribes like to sell the fiction that the ingroup can maintain society while excising its outgroups.
I have a lot of concerns, but in the interest of brevity I would just point out that it has become very popular to talk about deplatforming the outgroup before the outgroup deplatforms you. IMHO it’s counterproductive and I think it does more to radicalize the given tribe’s outgroup than anything else.
@souleater,
True enough, but so far all the extra effort that’s going to “de-platform” and “out-tweet” the “other side” (whichever) seems to me to be sapping the energy that in earlier times went towards building actual bombs, so I’d call it a win overall.
People love to bemoan the “Millennials”, but this “Gen-X’er” is just old enough to remember the domestic terrorism of the ’70s, so I’d have expected something similar when we had a repeat of a great mass of 20-somethings, but nope!
“Generation Y” was incredibly peaceful, they mostly confined their vitriol to just pixels!
Charlottesville is an outlier, as mostly when politically motivated violence has occurred it’s been scheduled “Anti-Fa” vs “Alt-Right” fistfights!
if it’s so easy to slip from Weinstein to Peterson to Molyneux to Anglin
What’s the deal with Molyneux ranking right next to Anglin? I watched a couple of his videos back during the election (he had some good bits explaining how the media distorts stuff) but haven’t since. He is (or was) a libertarian.
Molyneux is also a nationalist who doesn’t shy away from making H*D arguments. Not just the IQ stuff (for which I think there is relatively solid scientific evidence), but also speculative stuff like claiming that people in the Global South have an average genetic preference towards collectivism while Westerners have an average genetic preference towards individualism. This is how he combines libertarianism with nationalism: immigration from the Global South to the West should be prevented in order to maintain a market economy.
I think any criterion you use for deciding that Molyneux isn’t allowed to express his views will be broad enough to suppress a lot of worthwhile discussion, true claims of fact, and so on. If you have to suppress such views to protect the liberal order, you’re not going to have much of a liberal order when you’re done!
I don’t like the notion of heavyhanded regulation and censorship, but I don’t think there’s any alternative to it. I used to believe “the answer to speech is more speech” but how would more speech have prevented the Christchurch massacre?
If discussion of the banned topics was allowed then perhaps the Christchurch shooter could have been talked out of doing what he did. It may not have worked, of course, some people are just crazy and looking for any excuse to commit a massacre, but by keeping all the discussion as forbidden knowledge whispered in the darkest corners of the Internet, it will attract maladjusted and dissatisfied people who will then radicalize each other in their echo chambers with zero chance of coming across serious intellectual counterpoints from the other side.
Btw, shall we ban all Muslim preaching in order to prevent the next Islamic attack? Shall we ban teaching of heliocentrism and the theory of evolution because it could lead to atheism which could in turn lead to communist revolutions?
(Here’s where I get my digs in at Horrible Banned Discourse proponents by pointing out that The Atlantic – not even a right-wing rag like National Review, the frickin’ Atlantic! – published excerpts from The Bell Curve back when it was first published in the ’90s. So whatever you want to say about new perspectives that weren’t allowed in the mainstream back then, this ain’t it, chief.)
Would The Atlantic publish it now?
Anyway, what do you propose? Shall we maintain a masquerade to suppress forever public knowledge of a true fact about the world which may have large practical implications, for instance, on the effects of migration policies?
Truth is a social construct. You have your facts and I have mine, we both have studies backing them up, whose facts are “true” is determined more by the election returns than anything else. The studies may not replicate, but nobody ever bothers trying to replicate them. Nobody cares. Nothing matters.
That’s all I have to say about any of this, for now.
Truth is a social construct. You have your facts and I have mine, we both have studies backing them up, whose facts are “true” is determined more by the election returns than anything else. The studies may not replicate, but nobody ever bothers trying to replicate them. Nobody cares. Nothing matters.
I sincerely hope the folks who build the bridges I drive over, the medicines I use when I’m sick, and the electrical system that I use to heat and cool my home think differently.
I can’t work out a way to interpret your comment here in a non-crazy way. I know you’re a smart person who has interesting things to say, so I assume there’s some rational meaning you’re getting at. But it’s hard for me to figure out what it is.
There’s some objective truth about whether or not, say, blacks and whites differ in average IQ, or whether men and women differ in average physical strength, or whether human CO2 emissions are causing changes to the climate, or whether leaded gasoline caused the 90s crime wave, or whatever. Even if we can’t always agree on what the truth is, we can agree that there *is* such a truth, and that truth does not ultimately depend on whose side wins the next election or whose side controls the New York Times.
At some point, we have to make decisions, at both a personal and societal level. “Truth is a social construct” seems to imply that we can choose our own reality when making those decisions and bearing the consequences. But that’s just not true–some decisions we can make will have catastrophic consequences even if everyone in the world thinks they’re right. If we convince ourselves that AIDS is caused by drug use and degeneracy instead of a virus, and convince *everyone* of that claim, we’ll just have more and more people with AIDS, unsafe blood supplies, and no development of the drugs that keep HIV from developing into AIDS. The universe doesn’t really care about our social constructs.
I think what vV_Vv is getting at is that any particular policy position will have an overwhelming amount of “evidence” both for and against it. The effect of the minimum wage is a great example (I think Scott mentioned this recently): I can give you 10 studies saying it’s good, and 10 saying it’s bad, and the truth remains opaque and may just be industry/experience level/region/time span specific in complicated ways.
I can prove, experimentally, Bernoulli’s equation. But I can’t really prove in a reproducible way the effects of migration policies.
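For reference, the relation being appealed to is Bernoulli’s principle for steady, incompressible, inviscid flow along a streamline, where p is pressure, ρ density, v flow speed, g gravitational acceleration, and h height:

```latex
p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{constant along a streamline}
```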
For any policy decision we might consider, for any scientific question we might ask, there are likely going to be ways we could be wrong. That’s a good reason for some epistemic humility, but it’s not a good reason to let anyone fuzz out some facts or claims of fact because nobody can be 100% absolutely certain they’re right.
Suppose I smoke two packs a day. You can point to the extensive literature on how bad smoking is for my health, but of course, I can find justifications for dismissing all that literature if I’m motivated enough. Correlation is not causation, the researchers had an anti-tobacco agenda, all the researchers are suffering from groupthink, biology is so complicated and messy almost anything might be going on, etc. I can keep making those explanations until I keel over from lung cancer, if I like. But reality isn’t actually going to be fooled by any of that nonsense.
We actually do have to make some decisions in this life, despite our imperfect certainty. Someone who wants to toss the best available (though imperfect) data on which we might make those decisions, with some discussion about how reality is socially constructed, is extremely unlikely to help us make those decisions well.
If the same person wants to toss the race/IQ data in the trash because the story is complicated and scientists have biases, but doesn’t want to do the same for (say) the lead/crime hypothesis, or the value of Headstart or universal pre-K, or the dangers of rising CO2 emissions w.r.t. climate change, the only way that makes sense is as an isolated demand for rigor.
Probably a better way to think about it is that people have a tendency to think they’re arguing over facts, when really we’re arguing over which facts matter.
“Probably a better way to think about it is that people have a tendency to think they’re arguing over facts, when really we’re arguing over which facts matter”
So much this.
I’ve been a little obsessive about reading polls that break down which demographic and economic groups tend to vote for whom, and often from the outside what looks like “voting against their own beliefs/interests/whatever” on closer inspection turns out to be people voting for and against specific things that aren’t the big newspaper headline “issues” of the day.
The same people who want to deplatform/suppress some discussions of fact also oppose any discussion of whether those forbidden questions-of-fact are, in fact, relevant to political policy or personal choices.
More to the point, the people who plan to lie to us for our own good, or decide which information we’re not fit to know, are also the ones who told us Iraq had WMDs and was a grave threat to US security, and more-or-less cheerlead for any proposed bombing or invasion anywhere ever. They’re the ones who, whenever they cover any technical story where I know the subject matter, get all the details wrong.
I see no reason to think that those people are either smart enough, wise enough, or moral enough to be trusted with the power to decide who’s allowed to speak or what ideas may be discussed.
I think that CNN at least has published some mea culpa, though it has probably been buried.
A question… are you American? Living in America at the time? Cause the rest of the world knew the whole WMD thing was bullshit.
I ask because it seems weird to me that you blame the media, but had not really realized the American media had supported this, only thought the Republican party and a sizeable part of the Democratic party had supported it, but assumed the media, like us, knew it was bullshit.
but assumed the media, like us, knew it was bullshit.
No, albatross11 said:
I see no reason to think that those people are either smart enough, wise enough, or moral enough to be trusted with the power to decide who’s allowed to speak or what ideas may be discussed.
i.e., they weren’t smart or wise enough to see through the government’s lies, but now do think they’re smart and wise enough to, for instance, “lie to us for our own good.”
I think the media at the time largely thought the same thing I did: “Why would Colin Powell lie to me?”
ETA: These days I think the media is mostly lying smear merchants. That is, they’re willfully doing the deceiving. For instance, NY Times headline: Alex Jones’s Legal Team Is Said to Have Sent Child Porn in Sandy Hook Hoax Case. This headline (and most people don’t read past the headlines) heavily implies that Alex Jones possesses or traffics in child pornography, which we all pretty much agree is universally evil. Except if you read further:
Norman Pattis, a lawyer for Mr. Jones, said that the allegation had already been investigated by the F.B.I. and that Mr. Jones had been cleared of any wrongdoing.
And yes, the emails were things sent to Jones by some malicious third party, and Jones was in fact the victim of…I don’t know exactly what you’d call it, an “attempted planting of criminal material.” The NY Times know this, but chooses a headline maximally damaging to Jones.
This headline (and most people don’t read past the headlines) heavily implies that Alex Jones possesses or traffics in child pornography, which we all pretty much agree is universally evil.
No it doesn’t. It implies that “Alex Jones’s legal team is said to have sent child porn”. If there were allegations that Alex Jones possessed child pornography, the headline would be “Alex Jones alleged to possess child porn”. The actual headline describes something different, so one can infer there are no allegations. If you think it’s misleading, how would you write a headline to describe the event “Alex Jones’ legal team sends child porn”?
No it doesn’t. It implies that “Alex Jones’s legal team is said to have sent child porn”. If there were allegations that Alex Jones possessed child pornography, the headline would be “Alex Jones alleged to possess child porn”. The actual headline describes something different, so one can infer there are no allegations. If you think it’s misleading, how would you write a headline to describe the event “Alex Jones’ legal team sends child porn”?
Those are some very nice trees, but how about the forest? If someone wrote an article about your conversation here titled Thisheavenlyconjugation’s Internet Friends Said To Have Defended Nazis, who is being tarred?
If you think it’s misleading, how would you write a headline to describe the event “Alex Jones’ legal team sends child porn”?
I would probably not write it in a way that makes most people who read it assume the victim of the crime is the perpetrator. I either 1) wouldn’t run the story because it’s not particularly newsworthy or 2) would say “Court Disclosure Reveals Alex Jones As Victim of Child Porn Smear Plot.” Probably sounds too passive voice, but at least no one is going to read it and think Jones was the one possessing/spreading child porn. I’d think that’s the really vital goal because, man, there are few things that make you want to go apeshit on somebody like the idea they’re diddling kids. Falsely leading people to believe somebody’s a kiddy diddler is like capital B Bad. But the NY Times has no problem doing that to their competition.
@Conrad Honcho
Relax, there’s no passive voice in your headline. But I’d probably go with Alex Jones’s Legal Team Cleared of False Allegation. Which, uh, is passive.
@Nick
@Edward Scizorhands
Read the article again, or preferably this more recent one which is clearer. You don’t understand the situation: “Alex Jones’s Legal Team Cleared of False Allegation” and “Alex Jones’s Legal Team Is Incorrectly Said to Have Sent Child Porn in Sandy Hook Hoax Case” are just wrong. What happened was that some unknown party sent InfoWars some emails with child porn attached, and then Alex Jones’ lawyers sent these emails to the plaintiffs in the ongoing case against Jones. The FBI say that Jones is innocent of deliberately possessing and spreading the images, but no-one is accusing him of that. The complaint from the plaintiffs is that his lawyers should’ve done due diligence and not accidentally sent them.
Then post-modernists go on to say that if someone in a different culture thinks that the sun is light glinting off the horns of the Sky Ox, that’s just as real as our own culture’s theory that the sun is a mass of incandescent gas, a great big nuclear furnace. If you challenge them, they’ll say that you’re denying reality is socially constructed, which means you’re clearly very naive and think you have perfect objectivity and the senses perceive reality directly.
Shall we ban teaching of heliocentrism and the theory of evolution because it could lead to atheism which could in turn lead to communist revolutions?
You don’t even need to bring up the prospect of communism to justify banning the theory of evolution: there was a direct link between evolutionary theory and the eugenics and scientific racism of the 19th and 20th centuries.
You know what I’ve noticed? Nobody panics when things go “according to plan.” Even if the plan is horrifying! If, tomorrow, I tell the press that, like, a gang banger will get shot, or a truckload of soldiers will be blown up, nobody panics, because it’s all “part of the plan”. But when I say that one little old mayor will die, well then everyone loses their minds!
There are other, less charitable, explanations but I’ve come to the conclusion that they’re too spicy for SSC.
I’m hoping I’m doing an honest job with these notes, but if you comment, could you mention whether you’ve listened to the podcast?
This is part of a long series about Evergreen University– the school Bret Weinstein was driven out of.
I’ve only listened to a few of them, but I was left curious about how things are at Evergreen these days, and behold, here’s a podcast.
A few points: Evergreen has quite a strong STEM side. Good professors, good students, and a good ratio between them. Perhaps relatedly, Evergreen doesn’t pay professors very well, which means a good working environment is crucial.
In the opinion of Belinda Bratsch (the interviewee) a lot of what’s wrong at Evergreen is the lack of a strong grievance process – students really didn’t (and don’t – nothing’s been fixed) have a good formal way to complain about professors, and that’s part of why things blew up.
There’s some general discussion about scientists not knowing how to talk to or write for the general public and not wanting to learn. This isn’t just a problem at Evergreen.
Also, good professors, good students, good studios for the ceramics department. (Casual observation by the STEM student taking a shortcut.)
Melinda Bratsch thinks things could blow up again, and worse. SJWism is still pretty strong there. However, it’s possible to speak against SJW and still be a student there. However, it’s a hostile environment for white cis male professors, and becoming more so.
Bratsch says that the National Science Foundation says that 80% of our thoughts are negative and 95% are repetitive. I can believe this from personal experience, but does anyone know of research on the subject? Good research?
Evergreen seems to select for students with initiative and a strong work ethic. [Me speaking: I guess you take your chances.]
Have not listened, have a question: is “However, it’s a hostile environment from white cis male professors, and becoming more so” correct, or is it “for” rather than “from”?
Did not listen, so take this with appropriate grains of salt.
it’s a hostile environment for white cis male professors, and becoming more so.
Speaking as a cis white male not-quite-professor, I don’t believe for a second this is true. I have spent most of my life in extremely left wing circles, and there’s never been a single moment when I felt hostility directed at my race, gender, or sexual preferences.
There have absolutely been instances, however, where I’ve seen someone attacked for their refusal to go along with left wing orthodoxy, including orthodoxy about race, gender, and sexual preference. But that’s not “being a white cis male”, that’s “vocally disagreeing with the fundamental axes of acceptable discourse in a given tribe”. Going to the Vatican and saying “Fuck the pope”, that kind of thing.
To be clear, I’m definitely not trying to argue that anyone who is attacked for these reasons deserves it. Frankly, I’m a big fan of saying “Fuck the pope”, even when it’s my pope. But it’s important to understand what the reasoning behind the attacks is. And like much discussion of left-wing thought I see here and elsewhere, the characterization you gave it seems so fundamentally wrong-headed I felt the need to point this out.
I’m less negative about SJWs than you, leaning towards their side myself, but I do think that over the past 20 years or so, the main brake on hostility – respect for freedom of speech – has greatly diminished, especially among the youngest and most vocal contingent. This isn’t precisely new – people were complaining about very similar behaviours from the left in the 80’s – but I do think it’s gotten worse.
I will say that I don’t think there’s anything terribly unusual or surprising about most of the issues I’ve heard about. People with power tend to do what they can to maintain it. Freedom of speech was never a massive concern for many of the activist contingent (see the discussion of the purges of black faculty in the late 60’s at Chicago in Bloom’s The Closing of the American Mind). I do think this is (mostly) a BAD THING, just not a surprising one (and absolutely something that many on the right are just as guilty of, see the response to the BDS movement for example).
I’m probably overdue for writing about where I agree with the SJWs, and what I think is true that I’ve learned from them. I’m saving it for the next CW thread.
In your experience of such environments, can a cis white male professor express neither support for nor criticism of left wing orthodoxy without problems, or is the tolerance only for those who appear to support?
I think sometimes a lack of overt support can get interpreted as hostility. This might be especially likely if one was in a situation where there’s an action we’re all supposed to take to signal allegiance and someone chooses not to. To be honest I can’t think of any cases I’ve witnessed like that, but it seems likely to me it happens.
I should perhaps note that despite the fact that I’m willing to criticize the shunning of those with opposing views (at least online and pseudonymously), for the most part I’m a fairly overtly orthodox member of the lefty tribe, so as I said the hostility doesn’t get directed towards me, and I may not notice it being directed towards others.
I just thought of a ceremony I took part in at the beginning of a work retreat, which was facilitated by a couple of Native American women. To begin with we went through this process where everyone was walking around on blankets, and they began removing some of the blankets and making people move to the side – this was intended to symbolize the destruction and fragmentation of Native cultures and the death of the majority of their people. Then everyone sat in a circle and my coworkers and I (none of whom were Native) each were asked to say a few words about what we’d learned through the ceremony, or to say something about our own experiences with Native culture.
Though people had the opportunity to pass, very few people did. I don’t think anyone thought anything negative of those who passed, but certainly if anything remotely critical had been voiced, there would have been whispers throughout the remainder of the retreat. So I’d think that anyone there who didn’t agree with some of what had been said would have felt very uncomfortable. But again, I think if they’d interpreted their own discomfort as stemming from some kind of threat towards their whiteness, maleness, or sex, they would have been missing the point.
Sorry for the garbled quality of the above, I shouldn’t write at 4 in the morning.
But that’s not “being a white cis male”, that’s “vocally disagreeing with the fundamental axes of acceptable discourse in a given tribe”.
The message seems to be that members of disfavored demographics don’t have a right to an opinion. As one person in such a place put it, “By being a white male you are in a privileged class that is actively harmful to others, whether you like it or not. So no, you really actually don’t get to complain about your right to an opinion.”
I’m not sure if the hair you’re splitting splits so fine. There’s an old joke about racism:
What is the difference between Northern and Southern racism?
A southern racist doesn’t mind blacks living nearby, as long as they don’t get “uppity.”
A northern racist doesn’t mind blacks getting “uppity” as long as they don’t live close.
What you seem to be saying is SJWs are like the southern racists; they don’t mind cis white males, as long as they don’t get “uppity.”
No, the point I’m trying to make is that white cis males are not disfavoured demographics. There’s just as much hostility directed towards women, Indians, whatever, who disagree with the fundamental principles of SJW-ism. In fact, in my experience of such cases, possibly more so. Consider how much hostility gets directed towards Laura Ingraham or Candace Owens and so on.
As it happens, I’m getting a ring side seat to a current ongoing incident. It’s taking place online, where no one really knows your race and gender – and it includes one contingent accusing everyone who disagrees with them of being (gender) abusers and gamer gaters. To listen to the (female) member of the other side who’s been bending my ear about it, the actual issue is transparency and lack of due process, likely part of a naked power grab, but possibly just a matter of the real offense being “came in conflict with high status person, so no need to look at things like evidence etc.”
My friend is irate that she’s been being accused of being a male chauvinist pig (to use outdated terminology).
I’ve vaguely been watching that same incident, from a much bigger distance than you, and the only thing I found perfectly predictable was “of course this superweapon was going to be used against you eventually.”
You can only speak out against a superweapon when your side is using it. If you say “well, the other side is bad, it would be really bad if they win, and they probably deserve it,” it’s too late.
I suppose the answer should be something along the lines of making it a very free place to do business and allowing automatic citizenship to anyone with skills. But I don’t really think it would work, because there’s already The Bahamas, The Cayman Islands and other nations in the region which have no corporate taxes and friendly banking rules, and the economy of those places is still dominated by tourism. What could make Gonave more appealing to businesses than those other places? The geography just seems bad, particularly if Haiti must be your enemy as well. Singapore wouldn’t be Singapore if it weren’t so well placed on the map.
I suppose my answer is that I’d try to get Paul Romer on the phone and ask him what he would do.
1. Hire advisors from rich, advanced, powerful countries. Become a protectorate of one of them. This should solve the continued independence issue, and give you a pipeline to foreign funding and technology access.
2. Start importing population from rich, advanced, powerful countries by a variety of means. Your patron should be interested in helping you do this, to staff all their investments.
3. Wait.
4. PROFIT!
That is a really tough question, as the island has little in the way of natural resources, and doesn’t really have a natural geographic advantage like Singapore, or a first-mover advantage like the Bahamas or Cayman Islands in regards to taxes. I’m not really sure there is a path for the island to become a rich, advanced, powerful country within a single lifetime. I think even bringing it up to the median will be a difficult job.
Members from my church have founded an organization focused on development of La Gonave called Starfysh that tries to improve the quality of life for people on the island. Things they sponsor include research farms, schools, and clean water projects. It’s really interesting to see some of the work they do, and how a lot of the time it seems like they try to use modern versions of really old techniques to plug the tech gap between what can be done in a modern economy, and what can be done on an island with limited tech and access to the rest of the world. (e.g. pushing the use of biochar to improve field yields due to lack of access to modern fertilizers.)
Gonave Island secedes from Haiti and elects you to be its dictator for life. How would you go about making it into a rich, advanced, powerful country?
Recruit just enough capital to start a business as a bank. Advertise the island as a suitable retirement destination, but you have to invest in the bank. Relax any business regulations that wouldn’t obviously lead to the destruction of the island. Diversify into solar energy technology (mostly parts). Wait a hundred years or so. Result: Mauritius.
Say I’ve got a manufacturing business with 1000 employees. I’ve made a ridiculous amount of money, and for some reason, maybe because I’m a little crazy, I’ve decided to found a new town somewhere in the USA where there’s currently nothing. I’ll move my factory and all my employees there. I hope that one day it will turn into a thriving city. Where should I found my town?
Bonus: Same question, but say I want to do this in a different country. What country and where precisely?
1000 employees doesn’t sound like enough critical mass to create enough demand for all the things you’d want out of a city, so you would want to be less than a few hours’ drive from somewhere at least large enough to support a Wal-Mart. If you want to allow for long-term growth into a large, thriving city, then you’d want somewhere flat enough to allow easy expansion, ample fresh water, at least two forms of inexpensive transportation, and most importantly, a permissive regulatory environment.
This basically rules out anywhere on the vertical coasts of the US, so you’re looking at inland rivers, the Great Lakes, or the Gulf Coast. The latter is perhaps less than ideal due to climate change, or at least the already-present risk of hurricanes. Flood insurance might make the whole thing a no-go outside of already established urban areas. The Great Lakes seem like they’d welcome the investment but there are reasons the Rust Belt is past its prime.
I’m going to go with a region I’m at least somewhat familiar with, and say somewhere around Huntsville, Alabama. There’s plenty of empty, cheap land, the Tennessee River for a water supply and shipping, access to both I-65 and the Norfolk Southern railway, an “international” airport, and a local concentration of labor for both manufacturing and research from the local auto plants and the Marshall Space Flight Center. All this in a state that’s hurting for economic development and abhors regulation. The flip side is that if you’re building a company town from nothing, you might need to provide your employees with private schools to convince them to relocate – Alabama isn’t exactly known for its high-achieving public school system.
Any hint why? Business reasons, legal, just plain eccentric? You probably want to pay attention to legislation and politics. Once you build a town you also have a city council elected by your employees, with a lot of power over rules and regulations that impact your business. Depending on how aligned your employees are to your goal this could be good, or could be a union on steroids.
Also a random tidbit: in a statistic of communities started from scratch, the overwhelming majority of those that succeeded were religious.
There was also a comment here about a month ago on how rust belt cities are dead for good because of… cars. Nobody wants to live in the small town where the factory is located, when they can live in a much bigger town with a 40 minute commute.
I don’t know how many employees/franchisees Domino’s Pizza had when Tom Monaghan sold out in 1998, but if you want to found your own town, this is how he did it 🙂
In the Great Lakes region, especially along the lakeshores, there are lots of small towns with open land nearby.
Most of the shoreline of the Lakes is already settled in some fashion, though the towns are typically small, with services that support the transient population of summer vacationers along the Lakes. Some of the towns have a used-to-be-more-important feel, depending on whether they were once a location for shipping lumber, iron ore, copper, corn, or grain.
If access to railroads or cargo docks is important, you’re going to be choosing a location near a current town/city that has such things.
One recommendation is to pick a town with a few hundred people, within 50-ish miles of the City of Marquette. Marquette is the largest city in the area, and has lots of the resources you’d expect of a city of ~20,000 people, plus the support network and student body of Northern Michigan University. (If you want to locate nearer a tech-oriented school, it may be possible to find a similar location within 50 miles of the city of Houghton, on the Keweenaw Peninsula… but Houghton has a population of ~7,000 or so, plus the student body of Michigan Tech.)
These are more along the lines of turning a sleepy town into a newer, thriving town, or introducing a new business into a town that used to be thriving for other reasons, than building a small town from nothing.
Stuff like that is why it is so easy to give contemporary pagan/Wiccan traditions a good kicking, which is why I generally don’t – so long as they leave me alone, I leave them alone, and doing “coloured candles and ribbons magick” is mostly harmless and well-intentioned. In the wake of Gerald Gardner’s alleged rediscovery of a hidden, continuous magical tradition in the ’40s, an awful lot of this kind of Golden Bough-lite ‘history’ of witchcraft got churned out (so you had a Romany granny who read the cards – from an ordinary deck, not Tarot – over tea for her neighbours? Let us tell you all about the secret esoteric wisdom that this really means!)
I only get on my high horse and start swinging the sword when we get the “Ackshually, all so-called Christian festivals are ripped off from Real True Authentic Pagan Traditions” type of looking down the nose about how the newly-fledged Wiccan is so much more authentic and genuine and chronologically superior in their ceremonies, especially when the persons celebrating Samhain (a) couldn’t pronounce it in the authentic native pronunciation to save their lives, (b) have no awareness of how authentic natives celebrated it, and (c) completely confuse various traditions and have no idea of the history of the Church feasts of All Saints and All Souls Days (so Americans tend to only think in terms of Día de los Muertos and assume that it was culturally appropriated from authentic natives, and have no idea of how the influence worked both ways – the original celebration moved to the Christian feast day and was heavily incorporated into it, and heavily incorporated the Christian tradition into itself).
Though to be fair to most pagans and Wiccans, mostly it’s idiot urban fantasy novelists who do the looking-down-the-nose thing: have their heroes roll their eyes over the notion that Christian practices or symbols could have any real power because c’mon all that stuff was only invented two thousand years ago, but uncritically accept that somebody prancing around waving a sprig of oak can perform Real Magick because, y’know, the Green Man goes back to prehistory. Or it’s canny magic-supplies-and-books shop owners hitching their wagon to the latest controversy du jour to publicise themselves and their businesses (“Now you can buy my latest SJ spellbook on Amazon!”)
According to Dakota Bracciale, the co-owner of a Brooklyn occult store responsible for organizing a recent public hexing against Brett Kavanaugh, using spells for political protection is nothing new. “[Witchcraft] was always practiced by the people who were the outliers, who were on the fringes,” Bracciale says. “Those people oftentimes had to also be the arbiter of their own justice.”
Yeah, that one didn’t work out so good, did it, Dakota? 😀
For what it’s worth, the neo-pagans I know (a fair number of them) believe that their religion isn’t strongly connected to ancient traditions. “We go to the same source our ancestors did– our imaginations”.
I have no idea what the proportion is of those who believe they’re following ancient ways.
I’d imagine the core of fraud triumphalists is fairly small, their activities mostly online. I have an old college friend I wound up unfollowing on FB after he kept posting the same hooey and ignoring my cited corrections. He wasn’t averse to debate–the opposite, really–he just somehow failed to assimilate even rigorously documented citations that Constantine did not compile the New Testament and Ishtar has nothing to do with Easter.
Deiseach, I’m just noting that the article was about problems with earlier folklore studies, though it isn’t surprising that bad research filtered out into people developing modern paganism.
Speaking vaguely of, I was a bit shocked to find that early pagans didn’t celebrate eight astronomically based holidays. (Solstices, equinoxes, and halfway between each pair.)
On reflection, it’s plausible that early pagans didn’t necessarily have a modern sense of symmetry about dividing the year, and possibly couldn’t afford eight holidays.
And while we’re sort of on the subject, I wish there were a modern paganism based on how we actually live rather than one built around primitive agriculturalism. The weather matters, but there should also be rituals built around the economy, a thing which behaves erratically and affects people’s quality of life a lot.
On the other hand, I seem to be the only one bothered by this, and I don’t seem to have it in me to invent modern paganism, especially since I don’t seem to have met anyone else who wants it.
On yet another hand, I’m impressed that neo-paganism has a ritual structure which is strong and flexible enough that a lot of people can improvise pretty good rituals to fit in it.
Speaking vaguely of, I was a bit shocked to find that early pagans didn’t celebrate eight astronomically based holidays. (Solstices, equinoxes, and halfway between each pair.)
Ah, you mean the famous Wheel of the Year? Pardon me a moment while I wipe the smirk off my face.
Yeah, it’s got mostly Irish Celtic festivals but they had to lump in some Welsh and at least one Norse (Yule) to make it come out according to Western European calendrical usage. I’m no expert on ancient Irish calendars, but the way the calendar is set up as Gaeilge it doesn’t handily map onto things like Equinoxes and Solstices (despite the fact that our ancient monuments are engineered to mark these) – so the important days on the calendar are Imbolc/St Bridget’s Day, 1st February and the start of Spring in Irish tradition (not astronomical or meteorological spring); Bealtaine/May Day, 1st May; Lúnasa/Lughnasadh/Lammastide, 1st August; and Samhain/1st November. So, for example, the importance of May Day is shown by the poem attributed to Fionn Mac Cumhail about it (Fionn is a legendary hero alleged to have lived in the 3rd century AD; earliest references to Fionn and the Fianna date from about the 7th century AD).
To get the “Wheel of the Year” you have to stick in Ostara (you will remember the controversy over this as a Real True Authentic Rotten Christians Stole It Off Us pagan festival from previous comments), Litha for Midsummer which is I don’t know what (apparently it’s another one of St Bede’s ‘what the Anglo-Saxons round here call the months’ list), Mabon which is Welsh, or at least derived from Welsh mythology, and Yule which is Germanic/Northern.
So it’s a syncretic list created by modern Wiccan-types to give them a proper handily organised list of Sabbats and y’know, okay for that, good luck to them. But it’s about as “authentically prehistoric real true enduring tradition passed down in secret through the Burning Times to our modern workings” as my left shoe.
“The weather matters, but there should also be rituals built around the economy, a thing which behaves erratically and affects people’s quality of life a lot”
I like this idea. Central bankers dress up in weird robes and perform mysterious chants about interest rates; politicians holding rituals in arcane, unnatural language about unemployment and inflation attended only by a secret, select group of initiates; wild-eyed wandering prophets denouncing cities for ritual impurity and predicting great woe having perceived dreadful portents in the flight of birds.
Arguably this already occurs.
Business suits should probably be viewed as ritual attire.
I am thinking of this as something that hoi polloi would be doing. We’re almost as subject to the winds and storms of the economy and politics as we are to the weather.
I just realized that we have a concept (the economy) for what’s happening on the large scale with money, but no comparable concept for the political state of things.
They seem to be mainly complaining that practicing mindfulness in its various flavors and incarnations is preventing people from becoming rock-throwing antifa types, which is what correctly thinking people should be doing instead, with a side helping of complaining that, as it turns out, people are willing to pay to be helped, and thus there are people willing to be paid to try to help.
Ah the Guardian, it never surprises, it never disappoints.
Downthread there is discussion about an asteroid that might hit Earth soon. The author at the link claimed it would have the energy of 50 Hiroshimas if it hit, although another commenter (who strikes me as more knowledgeable than the angry ranter behind the original link) claims it would likely not harm anyone even if it directly “hit” a major city.
I just want to make a geography game out of this scenario. Suppose you somehow had the unenviable power and responsibility of determining exactly where this rock struck the earth, but it has to be a city of at least 100k people. Assume everyone within a 50 mile radius is instantly destroyed.
If you wanted to cause the least amount of damage to the economy (local and global), which city on the globe would you choose?
I don’t know nearly enough about geography and economic networks to know a good answer. I’m just curious to see if others here know enough to hazard a guess.
Then there is the Beginner’s Level question: The destruction of which city would cause the most economic damage? I’d guess New York or Tokyo on that one.
I’m worried that perhaps it seems offensive to say “The destruction of X city in northern India would cause the least amount of damage to the global economy”… if so, you can blame me for asking the question.
although another commenter (who strikes me as more knowledgeable than the angry ranter behind the original link) claims it would likely not harm anyone even if it directly “hit” a major city.
Without jumping in on your question, I’ll suggest that this seems at least as improbable as the idea that it would have the energy of 50 Hiroshimas…is that commenter suggesting it would burn up in the atmosphere and not make impact?
The 50-Hiroshima-bombs figure isn’t too high. If anything it’s probably too low. Hiroshima was only about 15 kt. But yes, while I have absolutely no expert training in the area, it is very likely that the asteroid would explode as an airburst at an extremely high altitude, something like 50,000 – 100,000 feet. That is according to impact simulators and also in accordance with historical precedent like the Chelyabinsk meteor (close in size, high-altitude airburst) and the Tunguska event (no crater, so very likely a massive airburst, size of impactor estimated to be at least 2-3 times greater than the asteroid in question).
A nuclear blast also emits radiation in a way that an impact explosion doesn’t, so while I wouldn’t say it’s “safe” by any means, if it happens way up high, there’s somewhat less danger from the fallout. You’d still get chunks of falling asteroid that would do damage, but you wouldn’t have people on the ground being immediately vaporized like in the “Daisy” commercial when the blast is happening 10-20 miles up in the atmosphere, nor a lingering radiation poisoning fate.
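For anyone who wants to sanity-check that figure, here’s a back-of-the-envelope sketch in Python. The diameter, density, and entry speed are my assumptions (a ~30 m stony rock at a typical Earth-encounter speed), not numbers from the linked article:

```python
import math

# Assumed parameters (not from the article): a small stony asteroid.
DIAMETER_M = 30.0       # diameter in metres
DENSITY = 3000.0        # kg/m^3, typical for a stony body
VELOCITY = 17_000.0     # m/s, a typical Earth-encounter speed

KT_TNT_J = 4.184e12     # joules per kiloton of TNT
HIROSHIMA_KT = 15.0     # approximate Hiroshima yield, in kilotons

radius = DIAMETER_M / 2.0
mass_kg = DENSITY * (4.0 / 3.0) * math.pi * radius ** 3   # mass of a sphere
energy_j = 0.5 * mass_kg * VELOCITY ** 2                  # kinetic energy

energy_kt = energy_j / KT_TNT_J
print(f"~{energy_kt:,.0f} kt TNT, ~{energy_kt / HIROSHIMA_KT:.0f} Hiroshimas")
# With these assumptions: roughly 1,500 kt, i.e. ~100 Hiroshimas. So "50
# Hiroshimas" is, if anything, low -- but for a rock this small, most of
# that energy gets dumped high in the atmosphere as an airburst.
```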
Magadan, man. It’s 93k, but combined with Ola some 15 miles east should be exactly 100k. The world economy will never ever miss it. Most of the citizens too.
As for the maximum damage, I’d suggest picking the biggest nuclear waste dump and hitting there; having its contents spread all over the planet could cause more deaths and damage long-term than cleanly destroying even the most populous city.
Probably either Dunedin, New Zealand or Reykjavik, Iceland. Both are around 120,000 people, and not within 50 miles of other population centers. Dunedin is isolated at the southern tip of New Zealand, over twelve hundred miles from Australia, and much, much farther to anywhere else that isn’t Antarctica. A blast of that size on the coast would produce a substantial tsunami, but it would have to travel over 5,000 miles before hitting South America or Asia. Reykjavik is on the west coast of Iceland, pointed at Greenland, so any resulting tsunami would be largely blocked from northern Europe by Iceland’s mass, and Greenland and sparsely-populated arctic Canada, 2,000 miles away, would absorb most of the tsunami. The blast itself wouldn’t likely have major effects that far out from either chosen city.
If it’s “instantly destroying” everything within 50 miles, it’s got to be much larger than a sub-megaton blast, which also means it’s got to be a lot bigger than the 30 m asteroid previously under discussion. I input a model about 10-15 times larger than that one, which the online models suggested would have (very approximately) a 50-mile total destruction radius for thermal radiation and blast wave, per the hypothetical. If you get an impact of that size on a coast, just about half the blast is going to be in water, and half the crater. So yes, I think it would generate a tsunami.
Reykjavik is, admittedly, probably a worse choice. They also have a lot of banking (or did, before the 2008 crash, and I assume they do again).
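To gesture at the scaling behind that “10-15 times larger” guess: blast-damage radius grows roughly with the cube root of yield, and at fixed density and speed the yield grows with the cube of the impactor’s diameter. A crude sketch, where the ~1.6 km severe-damage radius for the 15 kt Hiroshima bomb is my assumed reference point and the ~1.5 Mt figure for the 30 m rock comes from the sketch above:

```python
# Cube-root blast scaling: destruction radius ~ yield ** (1/3).
REF_YIELD_KT = 15.0        # Hiroshima, approximate yield
REF_RADIUS_KM = 1.6        # assumed severe blast-damage radius at that yield
TARGET_RADIUS_KM = 80.0    # ~50 miles, per the hypothetical

required_yield_kt = REF_YIELD_KT * (TARGET_RADIUS_KM / REF_RADIUS_KM) ** 3
print(f"~{required_yield_kt / 1e6:.1f} gigatons of TNT")          # ~1.9 Gt

# Energy scales with diameter**3, so the impactor's diameter scales with
# the cube root of the yield ratio:
SMALL_ROCK_KT = 1_500.0    # the ~30 m asteroid from the earlier sketch
diameter_factor = (required_yield_kt / SMALL_ROCK_KT) ** (1.0 / 3.0)
print(f"~{diameter_factor:.0f}x the diameter of the 30 m rock")   # ~11x
```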
Right, my bad, I missed that “50 mile radius” part somehow; I thought just the city would be instantly destroyed. Then it’s surely much larger than the asteroid that was discussed, even if it’d detonated near the surface. My pick is still Magadan though, there’s nothing of any value for hundreds of miles around. There’s Japan a thousand miles south, but they don’t seem to have any major cities on the coast facing north, and they deal with tsunami all the time anyway, so whatever.
Tsunamis are in a whole different category, energy-wise. From wikipedia’s TNT equivalent page:
The energy released in the 2011 Tōhoku earthquake and tsunami was over 200,000 times the surface energy and was calculated by the USGS at 3.9×10²² joules,[27] slightly less than the 2004 Indian Ocean quake. This is equivalent to 9,320 gigatons of TNT, or approximately 600 million times the energy of the Hiroshima bomb.
A local impact megatsunami might not need as much energy input to achieve a much greater flooding height, while not affecting anything near as far away as a conventional seafloor fault tsunami.
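As a quick check on those quoted conversions, using the conventional 4.184×10⁹ joules per ton of TNT:

```python
TOHOKU_ENERGY_J = 3.9e22     # figure quoted above
J_PER_TON_TNT = 4.184e9      # conventional TNT equivalence
HIROSHIMA_TONS = 15_000.0    # ~15 kt

tons_tnt = TOHOKU_ENERGY_J / J_PER_TON_TNT
print(f"~{tons_tnt / 1e9:,.0f} gigatons of TNT")        # ~9,300 Gt
print(f"~{tons_tnt / HIROSHIMA_TONS:,.0f} Hiroshimas")  # ~620 million
```

which lines up with the quoted 9,320 gigatons and “approximately 600 million” Hiroshimas.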
Would people know that I was in control, or would everyone think it was an act of God?
I could take out Pyongyang, or some other hostile regime, without any retaliation. (Depending on a bunch of psychological factors that are hard to predict because I’m not a character in an Orson Scott Card novel . . . or am I?)
It’s a poor town in the most populous of the extremely poor countries. It has no paved roads, no running water, and no electricity. Baraka is utterly irrelevant to the global economy and fairly irrelevant to the national one.
Tabletop RPG thread!
What is the social contract of these games, and how do the rules of different systems support or undermine that?
Dungeons & Dragons has a history of mismatch between character survivability rules and player expectations. Gary Gygax was the original killer DM, and that was OK, because your next character could be ready in 5 minutes. Later player assumptions changed in the direction of expecting their first PC to survive to the end of the DM’s planned story. I had the bizarre experience of DMing 3.5 under these assumptions, which was surreal because the Rules As Written were much, much more lethal than B/X if you knew how to optimize. The DM’s job in the contract became to know the system well and never leverage it, instead graciously losing every fight scene in the story.
This was then hard-coded into D&D 4E, which nonetheless was a relative failure. 5E reverted to 3rd in many ways, but got rid of most of the damage and Armor Class acceleration that optimizers had. It also completely changed what happens at 0 Hit Points, from “unconscious, bleeding, instant death at -10” to “unconscious, you have to fail 3 death saves and it’s physically impossible for an enemy to kill your unconscious body with one blow.”
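For a sense of how survivable that 0 HP state is on its own, here’s a minimal Monte Carlo sketch of the death-save procedure as it’s usually summarized (d20 each round; 10+ is a success, a natural 1 counts as two failures, a natural 20 puts you back up at 1 HP; three failures kill, three successes stabilize), ignoring any further damage or help from allies:

```python
import random

def death_saves(rng: random.Random) -> str:
    """One unaided stint at 0 HP under the summarized 5E rules."""
    successes = failures = 0
    while True:
        roll = rng.randint(1, 20)
        if roll == 20:
            return "back up at 1 HP"
        if roll == 1:
            failures += 2
        elif roll >= 10:
            successes += 1
        else:
            failures += 1
        if failures >= 3:
            return "dead"
        if successes >= 3:
            return "stable"

rng = random.Random(0)
trials = 100_000
deaths = sum(death_saves(rng) == "dead" for _ in range(trials))
print(f"dies unaided in ~{deaths / trials:.0%} of trials")  # roughly 40%
```

Even unaided, the character survives more often than not, and any ally who can heal or stabilize them tilts the odds much further.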
I can only speak to the situation at my tables, but the disagreements I’ve seen aren’t over character death per se but about what the DM’s role is when playing monsters or other hostile NPCs.
My view has always been that the DM needs to be a fair referee, which means not fudging the dice or NPC stats, and that the DM should play NPCs in character whether that means going all out with Save-or-Die abilities or running away in fear. As such, character death is always a possibility in a fight. It doesn’t happen often, because my players are skilled and D&D has become much less lethal over time, but it has happened and will happen in the future.
I’ve had players who were horrified when they realized that the near-misses their characters had encountered were actually lucky rolls and not me injecting drama. The expectation there was more of a choose-your-own-adventure novel type story, where PC decisions shape the direction of the game’s plot but the big-ticket items like death only occur at pre-determined “dramatically appropriate” points in the story.
I don’t think there’s anything inherently wrong with a “narrativist” view of the DM’s role, but honestly at that point I would rather just boot up a Bioware game. To me the thing that separates tabletop RPGs from computer games, and makes them superior, is that you can figure out what would “actually happen” if you did X instead of Y without the inherent limitations of a computer simulation. The result probably won’t follow a traditional narrative structure, but it doesn’t need to.
I’ve been playing pathfinder with the same group of coworkers for about 5 years, and I’ve been DMing for about 18 months. Each DM in my group has had their own preferences on how kill happy they’re inclined to be. Personally I like a lot of character development in my games, so I avoid killing players unless they really force my hand. Even then I try to deter them.
DM: “You find a suspicious black glowing liquid in the demon sanctuary”
Player: “I immediately drink it!”
DM: “…”
DM: “Just to be clear… you found some millennia-old, possibly (probably) evil, magical liquid, and you want to ingest it… do you understand how that might end really badly for you?”
Player: “I’m pretty sure it will be fine”
DM: “I’m pretty sure it won’t… I mean.. you can do it if you want to… but… I’m just telling you people who go around drinking random chemicals usually die.”
I’m not trying to railroad their character.. but I can’t encourage my players to invest in their characters if I blindside them with character deaths.
My policy is
Death by DM fiat: bad DMing
Death by random chance: still bad DMing
Death by player stupidity: I’ll warn you, but I won’t protect you from yourself.
Death by noble sacrifice: cool! go for it!
When I’m a player it’s usually more like
Death by DM fiat: bad DMing
Death by random chance: sucks, but it happens
Death by player stupidity: sucks, but it happens
Death by noble sacrifice: cool! go for it!
DM: “I’m pretty sure it won’t… I mean.. you can do it if you want to… but… I’m just telling you people who go around drinking random chemicals usually die.”
The D&D arcade game from 1995 [1] had a sequence where the players had the choice to enter a cave, and the game said “Are you sure? You will die.” and then really asked them if they wanted to die. Then they died.
There was some special story that was gated behind making this decision so experienced players would choose it on purpose, having to feed in another quarter each.
I think the player-perspective version of your chart probably has to be the true one, or Gary Gygax was a bad DM.
Now I would say that if you’re making your poor, innocent players spend a week filling out a character sheet for your system of choice, you’re a bad GM: you’re breaking a very fundamental social contract of group leisure by putting them in a situation where random chance has any possibility of killing that character they had to study hard to make.
It’s the same principle by which player elimination is obsolete in board games.
I’m not sure there is a “true perspective” but I definitely agree that Gygax probably hewed closer to the player-perspective one. Even so, I prefer my way for a few reasons.
1. Pathfinder has a lot more customization than early D&D. So it very likely means they would spend a week filling out a new character sheet.
2. We only play a few times a month for about 3 hours a session. So if a character dies, it means the player sits quietly in the corner for the rest of the night and is reduced to a spectator for our bi-monthly event.
3. My campaign is homebrewed, so I’m basically deciding how strong the enemy is the night before. There is a fine line between “random chance” and “I screwed up the CRs.”
4. I only have 3 players, so it’s very easy for a single death to turn into a cascade failure and TPK.
I’m going to use this to soapbox my hate for FATE.
From the FATE SRD:
Characters in a game of Fate are good at things. They aren’t bumbling fools who routinely look ridiculous when they’re trying to get things done—they’re highly skilled, talented, or trained individuals who are capable of making visible change in the world they inhabit. They are the right people for the job, and they get involved in a crisis because they have a good chance of being able to resolve it for the better.
In practice, however, the FATE point economy undermines this pretty heavily. In terms of Jenna Moran’s (excellent and incomprehensible) Wisher, Theurgist, Fatalist, Aspects have weak truth, mechanical support, and valence unless an Aspect is Invoked. And that takes a FATE point. IME, this means that it’s very easy for players to spend most of their time buffeted around by their surroundings, taking meaningful actions only when they really care about something. And that stands in very, very strong tension with the position:
Characters in a game of Fate should be proactive. They have a variety of abilities that lend themselves to active problem solving, and they aren’t timid about using them. They don’t sit around waiting for the solution to a crisis to come to them—they go out and apply their energies, taking risks and overcoming obstacles to achieve their goals.
That’s been my thoughts about FATE as well (with regards to Aspects), but we didn’t find it to be too much of a problem in play. Characters still did stuff, and were more competent than not.
Lately I came around to the idea that I’d been playing it wrong, and that Aspects are meant to have normal truth, but mechanical support and valence are weak until a point is spent. But perhaps this isn’t a correct interpretation either.
I will join your hate for Fate. Aspects are… pretty bad. They’re directionally useful — they provide ways to quantize bonuses that have traditionally been very systems-heavy to quantize — but they’re a bad implementation of the idea.
That said, relative competence is very, very hard to build into any game system that’s not a complete straightjacket. The GM can always up the power of opposition until the players fail a lot, or lower the power of opposition until the players succeed a lot. I think that the passages that you excerpted are meant to be prescriptive, not descriptive.
Killing low-level PCs in D&D is something you should avoid unless you know the player in question won’t get too upset. Once Raise Dead becomes available, PC death is on the table again; if the party can’t get their casualties raised in time, that’s on them.
I think that something that Forged in the Dark games tend to do well is avoid a failure cascade. In D&D, it’s easy to fail a spot check, then get bitten by a giant spider, then fail a CON check and get paralyzed and abducted by the spider, then your friends fail their listen checks, then you die. Exaggerated, but you get the idea.
Forged in the Dark games almost work like fighting (video) games to some extent in that they encourage the DM to offer players a way to “reset to neutral” pretty regularly. Making narrative advantage state a mechanical aspect of the game avoids failure cascades by allowing players to react to things going badly; there can still be save-or-die moments, but only if things have already gone catastrophically wrong and players are aware of the stakes. I really, really like that. It succeeds in allowing danger and agency to coexist.
The social contract is something that is worth discussing with each group. A “session 0” is rightly a popular concept.
Some games do this well by having rules that encourage such a discussion. Others manage it well by having an explicit social contract and rules that support it. D&D has had problems by having an implicit social contract and/or not directing players to decide one.
In a Superheroes game, it should be understood that the player characters ought to be powerful. Likewise, in a horror game players should not be surprised if their characters start dying… one by one.
But D&D can and has encompassed a variety of styles and it’s hard to know what to expect without talking about it.
I like DCC’s character funnel. Paranoia’s clone tallies are also fun. Death and Dismemberment tables (in the style of WHFRPG) are cool. Character creation in Traveller is an adventure in itself.
It makes sense in general that a DM shouldn’t “optimize” adversaries…it’s neither realistic nor good narrative to have every ogre you encounter be the deadliest possible ogre. Ogres have other things going on, they can’t spend all of their time prepping to murder humans. And for narrative purposes D&D has things like “Encounter Levels”, which it seems like it’d be the DM’s job to honor the spirit of rather than subvert through munchkinry.
Back when I DMed, I generally had a target amount of failure and death depending on what kind of story I thought the players were expecting. If you were in danger of exceeding it by too much for whatever reason, there would be a deus ex machina, and if you fell too far below it I’d start making things harder next time.
It makes sense in general that a DM shouldn’t “optimize” adversaries…it’s neither realistic nor good narrative to have every ogre you encounter be the deadliest possible ogre. Ogres have other things going on, they can’t spend all of their time prepping to murder humans.
Me optimizing ogres was never an issue. The PCs cut down more than a thousand unoptimized humans and monsters. The issue was when I wanted there to be an adversary with any depth: I’d think up their personality, take a long time to make a character sheet… and then they’d die in Round 1 unless I minmaxed their AC, so the players would forget whatever snippets of personality I’d remembered to make them say in time.
The most-remembered antagonists from that campaign were 2 optimized liches and a couple of Epic-level puzzle monsters they encountered without using the combat rules. That’s how hard it was to keep any antagonist alive after Round 1 of combat.
And for narrative purposes D&D has things like “Encounter Levels”, which it seems like it’d be the DM’s job to honor the spirit of rather than subvert through munchkinry.
Well, yeah. That worked out OK for mooks, since the assumption underlying the CR system is “CR = average PC level is the level of enemies PCs can expect to murder to the last man by using 25% of their resources.” Having anyone who could survive a mild stink-eye from the PCs as an interesting ongoing character in the world was where it broke down.
I might be repeating someone else, as I haven’t read all the responses, but this is worth repeating: 3.5 is totally down to player skill, and good players are incredibly hard to stop RAW, assuming anything even vaguely like ‘fair play’ – and of course the DM doesn’t have to play fair. I think the last version of Punpun I read achieved godhood from a level 1 commoner with no character levels.
If I remember correctly, the original Punpun had to be a kobold who took a special dragon-ish ability to be rules-legal. To which I mentally responded “Oh, so the Gamemaster is God and you’re Satan.”
I haven’t played D&D since I was 12, but when I did I was a ruthless DM. Most characters died fast and hard. As a DM, I was just trying to interpret the rules of the game as objectively as I could. This turned out to work well, because when the players did survive an adventure, they relished their lives and their gold. When, after months, characters achieved higher levels, there was great excitement. Their power was something new in the world. The players cherished it.
I say be as ruthlessly objective as possible. The drama is in the dice.
Napoleon said that repetition is the only successful form of rhetoric*, so I’ll take the opportunity to bang the same drum I always do.
D&D’s Original Sin was that Gygax wasn’t very good at explaining his idea of the game (not forgetting Dave Arneson, it’s just that he was overshadowed by Gary since day one).
My approach to playing (old-school=TSR) D&D is as follows:
If you so much as touch the dice, you’ve already lost. **
The aim of the original game wasn’t “kill monsters, get treasure”. It was “get treasure”. Killing monsters was something to be avoided, if at all possible, because it could easily turn into “get killed by monsters”.
Player skill, back in the day, manifested in coming up with creative ways to avoid rolling dice. Setting off a trap often resulted in save or die, so you really didn’t want to set off a trap. Intelligent monsters can be reasoned with (I remember reading a story from Gary’s own table, where one of his players playing a demon – low-hit-die, of course – doused himself with oil and set himself on fire – to which he was naturally immune – in order to cow a bunch of goblins into obeying him; Gary allegedly loved it). Unintelligent monsters can be distracted by dropping food and typically shouldn’t be a fight-to-the-death encounter anyway, etc., etc.
Tomb of Horrors was written the way it was to show that you can’t simply rely on your character’s powers to succeed. Foolish player, meet Sphere of Annihilation.
The role of the DM, as I see it, is to be the interface for the players‘ clever plans. When designing encounters, it’s a good idea to write in at least one way to avoid danger through smart choices (and a couple of clues to drop into the description). If the players insist on rolling the dice, go with it and let them fall as they may.
It helped that characters were cheap back in the day.
WOTC took the game and completely failed to understand the premise. The result was insanely expensive characters (in terms of creation time) and “system mastery” over clever thinking during play. We’re meant to be rolling dice all the -ing time (otherwise all that time deciding how to spend your points would’ve been a waste, wouldn’t it?), so you get goblin dice – rolls that don’t actually mean anything, but we pretend that they do.
Make the die rolls actually meaningful and you’ve suddenly got a problem. Your player ain’t gonna be happy that the character they spent two hours preparing died five minutes into the game.
My absolute favourite example of how old-school D&D should go is the Misadventures in Randomly Generated Dungeons/Fellowship of the Bling thread on RPG.net. It’s a long read, but well worth it. Only after I read it did I really understand what D&D was, even though I’d been playing it for decades.
* He might not have, but that’s my story and I’m sticking to it.
** On consideration, this might warrant a clarification. It doesn’t mean that rolling dice should result in death/failure. Simply that if you’re rolling the dice, you’ve missed an opportunity to take randomness out of the equation.
So I’ve just been pouring cold water on nice aspirations, or being Cranky Grumpy Old Biddy on the Internet once more.
See, there was this nice vaguely motivational slogan on a Tumblr post. Very nice image as well, the kind of hip blackboard messaging (are blackboards hip? I’m always unsure what is and is not in fashion nowadays, particularly when it comes to “stuff in my childhood decades ago” – is that fusty old rubbish or so-old-it’s-back-in-style?).
Anyhow, it was “You will never look into the eyes of someone God does not love. Always be kind.” So the general sort of affirming niceness, that is sometimes (not in this case I think, but sometimes in other usages) used to rebuke the conservatives/backwards/-phobes and -ists of various stripes.
And y’know, that’s a nice soft gentle squishy message. It’s quite true as well, but here’s where the cold water pouring/grumpy old biddy bit comes in.
It is true. But true on a level that I don’t think (though I may be doing them an injustice) the ‘put up a nice inspiring reminder to be nice’ nice people who do this sort of things have necessarily thought about.
God does love everyone. That means God loves Hitler. God loves the BindTortureKill murderer. God loves Fascists and the Nazis you want to punch. God loves rapists, racists, murderers, paedophiles and the fat-cat big corporate climate-destroyers on the boards of multinationals that are ruining the world through short-sighted capitalism. God loves TERFs.
The people that you want to feel good about despising, because they’re on the wrong side of history and besides they are horrible mean nasty people who are all -phobes and -ists? The people that you would write smug little thinkpieces about how they’d go Nazi? God loves them.
Love is not nice, love is scary. Good Is Not Nice (to quote that time-sink site you all know and love). Lenny knew it, too: “Love is not a victory march, it’s a cold and it’s a broken hallelujah”. Love is the burning furnace of charity, and if you’ve ever been anywhere near a blast furnace or even a glassblower’s furnace, you’ll know how not-cuddly that is.
So yes, I’ve been crushing nice people’s nice little affirming messages on the Internet, what have you been doing today? 🙂
So yes, I’ve been crushing nice people’s nice little affirming messages on the Internet, what have you been doing today?
Working, mostly, but now I’m waiting for stuff to run so I can see why it doesn’t work. Crushing people’s affirmations is like stealing candy from babies; sure it’s easy and fun and the candy is quite tasty, but it makes the babies cry and then everyone else gets mad at you.
I’ve never actually had a chance to ask someone who believes what you just described w/r to divine love: what, exactly, do you think that kind of love means? What kind of behavior would you expect to see motivated by that feeling? How can you reconcile the idea of that kind of unconditional love with the Catholic doctrine of eternal damnation? If you don’t want to answer, you don’t have to.
How can you reconcile the idea of that kind of unconditional love with the Catholic doctrine of eternal damnation?
God will love you every step of the way as you march yourself into damnation. God will forgive you at any step, and thinking “Oh I’m too big and terrible of a sinner, even God can’t forgive this” is making a fool of yourself, you’re not that important in the universe. No, not even if you’re Adolf H.
But God is not “nice vaguely senile old Santa Claus gift-dispenser in the sky”. God is also just, and if you choose to the very end to say “Non serviam”, then you will go to Hell. For all eternity. And it will be terrible (whether we want to think of it in the old burning fire and torture sense, or the absence of God sense). Heine’s alleged deathbed aphorism “Of course God will forgive me, that’s His job” is a double-edged sword; there’s not necessarily any “of course” about it. You can’t slide right up to the very end not having a particle of contrition or intention to do anything but your own way every moment of your life, then expect “well God is supposed to forgive me, I don’t need to do anything about it”.
There’s a lot of trendy forgiveness about, I’ve seen some examples of it online since this is Pride Month, about what Christianity ‘really’ is or what Jesus ‘really’ meant. It’s ironic because it’s “don’t be judgemental, and I’m judging you for being judging”, but never mind that. Love, forgiveness and Hell are hard sayings and hard doctrines. People have been trying to fudge around it for a long time – either by downplaying the love (sinners in the hands of an angry God) or doing away with Hell: either nobody goes there since everyone is saved (so you can go on torturing eight year olds to death until you drop dead yourself and you’ll still be forgiven and saved), there is no Hell (so ditto), or the souls of bad people just go ‘poof!’ when they die so good people like us will eternally exist in The Nice Place but there won’t be any bad people (so we can feel nice and superior about not having eternal suffering, but we don’t have to deal with the problem of evil either).
God does love everyone – that’s the hard bit. Because for all the transgender lamb cartoons, God also loves the TERF flock and people doing the stone-throwing (that’s the bit that gets elided by the crowd going “Gee, I wish the conservative transphobic church congregations realised this is what true Christian compassion is and what Christ was really all about”, the compassion extends to the mean ole conventional cis people too). And Hell exists – that’s also the hard part; the parable of the lost sheep is that the sheep was lost and needed to be brought back; your transgender lamb will have to abide by the rules of the flock after all. And the non-straying flock can’t be any too sure that they won’t end up in Hell if they simply rely on the fact that ‘I’ve always kept the rules – well, the ones that were convenient and socially advantageous to keep’.
People don’t like either of those messages. They want unconditional love and mercy (for me and those like me) and punishment (for the bad people who aren’t like me and those like me). God loves everyone (even the Nazis whom you want to punch). There are consequences of our behaviour (even if we thought punching was okay because we were punching bad people after all).
so you can go on torturing eight year olds to death until you drop dead yourself and you’ll still be forgiven and saved
Just to check if I understand it correctly — the only problem with this behavior is if you keep doing it literally until you drop dead; not giving yourself at least five minutes to stop and repent.
And the kids you tortured will probably go to hell, because they likely hated you intensely until literally the moment they dropped dead. (So they didn’t repent one of the capital sins.)
But God is not “nice vaguely senile old Santa Claus gift-dispenser in the sky”. God is also just, and if you choose to the very end to say “Non serviam”, then you will go to Hell.
The problem I have with this concept is that I don’t believe that there is anyone who, faced with direct empirical evidence of the existence of God, Heaven, and Hell, would choose to go to Hell. If God gives people this choice face-to-face, so to speak, then it’s not really a choice; I expect that even a super-hard-core atheist would kneel. If God insists that people make this decision before dying, without empirical evidence, then he’s basically playing a prank on humanity, and that decision is incompatible with any reasonable definition of love.
I realize that this isn’t original thinking on my part; theologians have been debating the Problem of Hell for millennia. I just haven’t ever seen an apologia I considered adequate.
I can see a couple of ways out, but they’re heterodox at best, heretical at worst.
Possibly the least objectionable* observation is that God’s Word came down to us through people and is laden with those people’s misconceptions and misunderstandings of what they were being told. Layer these on top of one another and you’ll have generations of really smart people (theologians, rabbis, etc.) trying to come up with a coherent whole.
Suffice to say that God, if He exists, has some kind of plan for what to do with sinners and that plan need not be anything intuitively obvious to us – or consistent with the orthodox message – because the limited human mind cannot comprehend the divine. Mysterious ways, and all that.
My favourite bit of not-serious-but-perhaps-more-serious-than-you’d-think theology comes from Tolkien (a devout Catholic, as we all know), via Eru to Melkor: “there is nothing that doesn’t have its roots in me, and anything you do will ultimately contribute to the glory of my creation”.**
Seems an uplifting message to me.
* Other than to biblical literalists, at least, but if I were any kind of Christian, I’d be a Roman Catholic, so there.
** My chief beef with Tolkien was that he awoke Lewis’s faith and Lewis wasn’t the kind of person you want to think about theology. Ugh.
The problem I have with this concept is that I don’t believe that there is anyone who, faced with direct empirical evidence of the existence of God, Heaven, and Hell, would choose to go to Hell.
It’s quite simple, I think. God calls you before Him, then points at this statement:
If God insists that people make this decision before dying, without empirical evidence, then he’s basically playing a prank on humanity, and that decision is incompatible with any reasonable definition of love.
God says, “Admit this is wrong, and you can come to Heaven.”
The problem I have with this concept is that I don’t believe that there is anyone who, faced with direct empirical evidence of the existence of God, Heaven, and Hell, would choose to go to Hell.
Richard Dawkins (I believe it was him) once said that, even if he woke up one morning to discover that the Second Coming was in progress, he’d assume that he was dreaming, or hallucinating, or that some sort of mass hysteria was going on, or that technologically-advanced aliens were playing a practical joke on us, rather than that God actually existed and that Christianity was true. So there’s at least one person who claims that he wouldn’t accept direct, empirical evidence of the existence of God.
More generally, people do all sorts of things that are clearly making them miserable without giving up, and ignore all sorts of inconvenient truths if they don’t like the implications. So I don’t think it’s at all implausible that this sort of behaviour would continue after death.
As for the “no evidence” claim, in my experience lots of people don’t even know what would count as evidence in the first place. Often people say something along the lines of “If you can find something that’s inexplicable by science, I’ll take that as evidence,” and then when you point out such a thing — the existence of consciousness, for example, or the existence of the universe in general — they dismiss this on the grounds that it’s nothing but “God of the gaps” reasoning, and therefore inadmissible. Of course, if the only evidence you’ll accept is a gap, and you dismiss the use of any such gaps as inadmissible, it follows pretty trivially that you’ll never find any evidence to convince you, but it isn’t God’s fault that your own standards are self-contradictory.
Another option is just to say (with Bertrand Russell and Richard Dawkins, inter alios) that maybe the universe just exists, and there’s no explanation for it. Again, though, this is just a piece of wilfulness, not a proper argument, and is almost never applied consistently or in good faith (just try going up to Dawkins and telling him “The existence of transitional fossils isn’t evidence for evolution, because they might just be a brute fact with no explanation!” and see how seriously he takes you).
Another one that’s just occurred to me is to stack the deck in your favour so much that your naturalism becomes unfalsifiable, by declaring that any non-supernatural explanation is ipso facto superior, and hence there can be no evidence for theism provided that you can cobble together some naturalistic just-so story to oppose it. Sure, your account of the Resurrection requires ignoring the primary sources whenever they contradict your theory, positing a whole load of stuff with no justification other than “It would be more convenient for my theory if this happened”, and hypothesising a huge Dan Brown-esque conspiracy theory which somehow never got revealed despite the people involved having every motivation to squeal — but it doesn’t involve any reference to God, so checkmate, theists!
In short, what does or does not count as “evidence” is sufficiently indeterminate and subjective for you to arrange things beforehand (or even on the fly, if necessary) so that nothing can possibly count as evidence against your preconceived beliefs, and appeals to lack of evidence should accordingly be taken with rather a large grain of salt.
If God insists that people make this decision before dying, without empirical evidence, then he’s basically playing a prank on humanity, and that decision is incompatible with any reasonable definition of love.
God says, “Admit this is wrong, and you can come to Heaven.”
Now what?
Haha, explain to me why it is wrong (to my satisfaction) and I will admit that it is wrong. I can do no else, without making a mockery of the word “admit”.
(Incidentally, even without the proviso I would likely choose Hell, since Heaven implies worship (at least in its classical conception) and I will not worship an unjust God, which the God of the Bible appears to be.)
I’ve never actually had a chance to ask someone who believes what you just described w/r to divine love: what, exactly, do you think that kind of love means?
I was pondering this question, and remembered a story recounted in the Gospel of John.
To set the scene: a woman has been caught in the act of adultery, breaking the Law of Moses. [1] A set of religious leaders drag her in front of Jesus, requesting that He join in their judgement. [2]
Jesus looks at her, looks at them, and uses a bit of rhetorical judo on them. Let the one who is without sin cast the first stone. He pauses to let the words sink in, then He stoops to write something in the dirt. [3] One by one, the accusers leave as Jesus looks around.
Finally, Jesus talks to the woman. Who condemns you? She replies that no one is condemning her. He then says Neither do I condemn you. Go, and sin no more.
The love that Jesus showed to that woman was to not punish her in the way that the Law of Moses said that she deserved.
But it was also love to instruct her to leave the life of sin behind.
There are other parts of the Gospels that talk about love, forgiveness, and calls to righteousness. There are also teachings about eternal punishment of those who don’t put any effort into following the instructions to go and sin no more.
———-
[1] After five or six times reading this, I realized something that I had missed, and never heard much discussion of.
Where was her partner in this crime…er, sin?
I think the other person involved got away, somehow.
[2] Jesus was popular, or at least notorious. Maybe these leaders wanted to accuse Jesus of being too loose with the Law, or maybe they were genuinely interested in whatever response He would provide.
[3] Some part of that interaction between Jesus and the accusers is in this act of writing on the ground. Was the writing itself important? Were the things written important? Or was Jesus just waiting for the accusers to get the hint and walk away?
Whatever it is, it’s less important than the conclusion of the story.
[3] Some part of that interaction between Jesus and the accusers is in this act of writing on the ground. Was the writing itself important? Were the things written important? Or was Jesus just waiting for the accusers to get the hint and walk away?
Whatever it is, it’s less important than the conclusion of the story.
I’ve heard a theory that he was writing the names of the accusers and their sins in the dirt. I don’t know how much credence to give that, but it seems to make sense given their reaction.
The whole concept of divine love or charity is incredibly confusing because of the poor choice of names. It’s not anything like love as is normally understood, neither romantic nor platonic, and it’s certainly not like charity in the ordinary sense of the word.
Maybe it was clearer in Latin or Greek but it seems hard to fault people for interpreting the language of Christianity by its straightforward English meanings instead of through the lens of dead languages. Given that even the Catholics conduct mass in the vernacular, would it kill them to be a little more clear on what they’re talking about?
Maybe it was clearer in Latin or Greek but it seems hard to fault people for interpreting the language of Christianity by its straightforward English meanings instead of through the lens of dead languages.
I mean, I feel like Modern English has to share in the responsibility here. It’s not good that so much of the straightforward meaning of “love” became “I want to have sex in a respectful, socially-approved way.” 😛
it seems hard to fault people for interpreting the language of Christianity by its straightforward English meanings
The straightforward English meanings have been boiled down to “Love is what we celebrate on Valentine’s Day, and by celebrate we mean sell chocolates and flowers and hotel breaks for romantic weekends, because the only love that counts is sexual/romantic”.
The whole concept of divine love or charity is incredibly confusing because of the poor choice of names. It’s not anything like love as is normally understood, neither romantic nor platonic, and it’s certainly not like charity in the ordinary sense of the word.
It’s the original choice of names. It’s not St. Paul’s fault that we moderns have bastardised the meanings of common religious terms beyond all recognition.
Why is it a thing that geographical names get translated into different languages? Why is Brasil Brazil in English? What is the point? Why is Praha Prague? Again, what is the point? Why can’t we all call things what the natives call them, at least to the extent we can easily pronounce them?
Edit: A hypothesis I just came up with while noticing that getting the spelling mostly right doesn’t mean one gets the pronunciation right: Maybe, for some reason, it’s better and less offensive to intentionally call something different from what the natives call it, than to try to call it what the natives call it and fail.
Because language is spoken first and written second. People hear about places and they are later written down. Once it’s written down enough, you’re not going to change it because the natives spell it differently, particularly if their writing system doesn’t even have the same characters
Another one that changes is the Netherlands; in Spanish, it’s called either Países Bajos (Lower Countries) or Holanda (I was told by a Dutch guy this is offensive to some people, kind of like calling Spain Castille, because Holland is just a province, not the whole country).
Spain has a pretty consistent name AFAIK, derived from Hispania.
But Chinese names for many countries don’t sound at all like the names we use (from when I was studying Chinese). It’s the same with Korean; they don’t use easily recognizable country names, even for countries where contact was made relatively recently (and thus no mutation happened).
Alemania was the part of what the Romans called Germania inhabited by the Alemanni tribe after breaking through the Roman borders in the Crisis of the Third Century. Since it included the banks of the Rhine and the upper Danube River basin as far as the confluence with the Lech River, it’s not surprising that the Spanish took the name of the proximate part for the whole.
And Deutschland is the endonym, while some languages prefer the Latin exonym because it’s Latin.
Holland is actually two provinces now, although this wasn’t the case for much of Dutch history. From about 1101 to 1806, Holland was a single entity. During this period, the most intense interactions with Spain happened (including the Dutch War of Independence aka the Eighty Years’ War).
Louis Napoléon Bonaparte separated Holland into two provinces and then named the entire country the Kingdom of Holland. So at one point in time, Holland did refer to the entire nation. However, this only lasted 4 years, before older brother Napoléon Bonaparte got fed up with how decently his little brother tried to rule.
People who live outside of North and South Holland tend to dislike it when the entire country is referred to as Holland, which is part of a more general feeling of being overlooked.
Also, place names are subject to phonetic evolution over time just like other words. That’s why the city of “Florence” is called “Florence” in French and English, “Florencia” in Spanish, “Firenze” in Italian and “Fiorenza” in the local Tuscan dialect, all deriving more or less directly from the original Latin noun “Florentia”.
Seems like it would be the job of dictionary and atlas makers to be the authority on the correct names. In this case, the correct name should be determined from the top down.
I’ll note that in a few cases it seems like we have changed the name in English to match what the natives prefer. For instance, Iran instead of Persia. Also, I assume it’s Mumbai instead of Bombay now because that’s closer to the native term, but I don’t really know the history of that particular change.
Also, I assume it’s Mumbai instead of Bombay now because that’s closer to the native term, but I don’t really know the history of that particular change.
AIUI a lot of the natives actually dislike the name Mumbai, because it’s based on the name of a Hindu goddess and the Hindu nationalist party changed it as a sort of territory-marking exercise. As a rough analogy, imagine if a white nationalist party was voted into power in New York and renamed it Confederacytown.
Almost as if a bunch of Brits showed up and sneeringly told the local “Jankes” that they now live in New ENGLAND instead of New HOLLAND, and renamed New AMSTERDAM to instead be New (Some Rando Town in Middle England Somewhere).
Yankee comes from Jan Kees – Jan and Kees being two extremely common Dutch names (and still very common, but not as dominant as they were). So it derives from non-Dutch people interacting with Dutch people and noticing how often they had one of these two names. This became a generic term in the same way that peculiar names that are common in a subculture are sometimes used to refer to that subculture.
Kees itself is not directly linked to an English name, but it is a diminutive (hypocorism) of Cornelis, which is the Dutch version of Cornelius. Dutch (ethnic) Protestants regularly adopt diminutives as the legal name. Dutch (ethnic) Catholics more commonly have a legal name that they never actually use, favoring a ‘calling name’, although having a separate legal and daily name is a fairly common practice in the Netherlands. For example, Anne Frank’s legal first name was actually Annelies.
Quite a few birth announcements state both the legal name and the calling name, which sometimes is completely different from the legal name.
Cornelius is a Roman name that is rarely used in English. Chevy Chase is actually called Cornelius Chase. Ex-senator Robert Byrd was born as Cornelius Calvin Sale Jr, but his name was changed after adoption. Cornelius Oswald Fudge is a character in Harry Potter.
Or sometimes the exact opposite of that. Anglophones clearly call Paris “Pear-iss” instead of “Pah-ree” because more people saw it written down than heard a francophone speak it, and pronounced it according to English phonetic rules. There are towns in the US called Versailles (pronounced “vuhr-sales”) and so forth, too.
It doesn’t even need to cross languages. There’s a river in Connecticut called the Thames, named after the one in England but pronounced the way it looks.
My favorite is Houston Street in New York, which is correctly pronounced as house-ton. Anyone who pronounces it like Houston, Texas is outing themselves as being from out of town.
This raises a question I’ve long had. Is Houston Street in NYC named after Sam Houston or someone else? If someone else, how was that person’s name pronounced?
This raises a question I’ve long had. Is Houston Street in NYC named after Sam Houston or someone else? If someone else, how was that person’s name pronounced?
According to Wikipedia, it’s named after William Houstoun (with an alternate spelling). The street was apparently originally part of his father in law’s estate.
because more people saw it written down than heard a francophone speak it
Actually, in Old French, the “s” in “Paris” was pronounced. While in French, final “s” was later lost in pronunciation (but the spelling wasn’t updated to reflect the change), that phonological change never happened in English. So the “s” is pronounced in English because it’s always been pronounced in English.
Funny story, in Interwar America it seems to have been pronounced the French way, at least by some people. Presumably all the American boys sent over to fight in France adopted the French way of pronouncing Paris, and it stuck around for a while before going back to the standard English pronunciation.
The thing that blows me away is that we (ie, anglophones) call Firenze “Florence.” THERE’S NOT EVEN AN L IN THERE. And it’s not like this is some exotic far-away language with phonemes and alphabets we just can’t deal with.
I wouldn’t be surprised with a Paris-style thing where the real name was given an anglicized pronunciation. Like if we called it “Fire-ence,” okay, sure. But how did “Feer” get turned into “Flo”?
Also baffling: there is a city near me called Vallejo. It is universally pronounced “Val-ay-ho.” Why? Why not either “Vay-ay-ho” or “Val-eh-joe”?
I am pretty sure the J as H is more common knowledge than LL as Y for Americans trying to pronounce Spanish more authentically, these kinds of mistakes are pretty common where I am from, jajaja.
I wouldn’t be surprised with a Paris-style thing where the real name was given an anglicized pronunciation. Like if we called it “Fire-ence,” okay, sure. But how did “Feer” get turned into “Flo”?
It didn’t. “Florence” is from the Latin name, “Florentia”; it’s the Italians who changed the “Fl-” to a “Fi-”. It’s not our fault that those guys can’t speak proper Latin.
An example: in the case of Prague, the oldest written name of the city is Praga, in Latin. People started writing in Czech several hundred years after the city was founded. Praga was rendered in French as Prague, and English uses the French name instead of the Czech one. I think broadly similar processes account for other discrepancies between native and English names of places.
Worth mentioning is the fact that there was likely a consonant shift in the interim:
The spirantisation of Slavic /g/ to /h/ is an areal feature shared by Ukrainian (and some southern Russian dialects), Belarusian, Slovak, Czech, Sorbian (but not Polish) and minority of Slovene dialects. This innovation appears to have travelled from east to west, and is sometimes attributed to contact with Scytho-Sarmatian. It is approximately dated to the 12th century in Slovak, the 12th to 13th century in Czech and the 14th century in Upper Sorbian.
I wasn’t able to easily find out when the earliest record of Praga as a name is from, but I’m assuming it pre-dates the consonant shift. Given that most people in France or Britain wouldn’t have had any contact with spoken Czech for centuries, the people responsible for passing the name down would continue to do so completely oblivious of the change.
I believe there are often many steps involved: Marco Polo first hears of ‘Japan’ from a Chinese person, so there’s not much hope of the eventual English version being very accurate.
A substantial number of Native American tribes in North America are known by the name their enemies or neighbours gave them. Comanche means enemies in Ute. Ute derives from the Apache word for mountain people. Apache is thought to derive from the Zuni word for Navajos, which in turn means enemies.
The Zuni actually called themselves that, but Navajo is from the Tewa term for a large field. The Tewas call themselves that, but most people know them as the Pueblos, which literally just means “towns” in Spanish. Because unlike the other natives in the area, they lived in fortified towns.
In South-East Asia the Palaung call the Jingpo “khang” which means something like “mudblood”. The Jingpo use the same term for the Chin, and “yeren” meaning “wild men” for the Lisu.
The West Germanic word walhaz means “stranger” or “foreigner”, which is how we got Wallachia, Vlachs, Wallonia, Walloons, Cornwall, Wales, the towns of Wallasey and Welche, and Włochy the Polish name for Italy. In contrast the Slavic word for Germans is derived from a word that meant “mutes”, whereas Slav means “one who speaks”.
So while there’s not a lot of terms that mean “those bastards”, there sure seem to be a lot of variants on “those people”.
Another example: “Cologne” in French and English, “Köln” in Standard German, “Kölle” in the local dialect. All from the unreasonably long Latin name “Colonia Claudia Ara Agrippinensium”.
Based on these examples, it appears the usual explanation is that the locals change the name more than foreigners do.
Based on these examples, it appears the usual explanation is that the locals change the name more than foreigners do.
Wouldn’t surprise me, as frequently-used words seem to undergo change more than infrequently-used words. E.g., “to be” is irregular in most languages, AIUI.
You have it backwards; “to be” is irregular because it’s undergone less change. Take Latin, where do, dare, dedi, datum’s forms partly precede the conjugation family it’s grouped with—you can tell by the irregular second and third principal parts. Or sto, stare, steti, statum. You can see the resemblance to perfect reduplication in Greek and Sanskrit; these forms are relics of older conjugation. From Sihler:
482. The irregular verbs of L[atin] grammars are so designated because in large or small ways they do not conform to any of the four conjugations (elastic as that classification is). The usual reason for this is the survival of the athematic forms of the root class, such as est ‘is’; or of the athem. opt. in -i- (in L terms, the pres. subj.).
(emphasis mine)
About “to be” specifically:
491. … In PIE—unlike G[reek], L[atin], and most attested IE languages—there was very little out of the ordinary about the verb ‘be’. The odder traits of the verb in G and L are the result, paradoxically, of the extreme conservatism of these forms. For example the OL subj. siem 1sg., simus 1pl., sient 3pl., unique to this verb, is actually the sole survivor in L of what was once the normal way of forming the optative system to roots.
(emphasis mine)
If you want examples from English, the -en ending in perfect forms is older than -ed. But -ed has been steadily taking over, and even verbs whose perfect once ended -en are more often rendered with -ed now. Don’t be surprised if “beed” never catches on, though.
Here’s a fairly short explanation of the process:
1. An adventurous egghead travels to a faraway land and records his experiences. He tries to render the local names as well as his written language allows.
2. Subsequent generations of eggheads aren’t that adventurous, but that’s okay, ‘coz they’ve got Written Sources. Thus, they go back to the original work (or – more likely – to works that reference the original work, or even works that reference works that reference the original work and so forth) and faithfully copy the name as originally rendered. It’s all good, they know what they’re talking about.
3. All the while, the eggheads’ language is undergoing gradual changes, as languages do, so the pronunciation of a particular spelling subtly changes as well (or maybe the spelling changes to reflect how the name is pronounced). It’s all good, everyone knows what they’re talking about.
4. Meanwhile, the locals’ language is also undergoing gradual changes and with it, the commonly used names of places that are now pronounced a bit differently. It’s all good, everyone knows what they’re talking about.
5. Centuries later a representative of the eggheads’ culture and a representative of the local culture meet, use the name of the place that they’ve been taught and wonder if they’re talking about the same thing.
5. Centuries later a representative of the eggheads’ culture and a representative of the local culture meet, use the name of the place that they’ve been taught and wonder if they’re talking about the same thing.
Not quite the same thing, but I was involved, with my sister, in a conversation in Japan c. 1963 with some Japanese students. The name of one of the world’s most prominent political figures came up, and they couldn’t tell who we were talking about.
It turned out, as best I could tell, that the Japanese version of Mao Tse Tung was the pronunciation in Japanese of the Chinese symbols for his name. My understanding—someone who knows more about the language is welcome to correct it—is that an individual symbol in Kanji can represent either the sound of the Chinese word it represents or the Japanese word with the same meaning as the Chinese word it represents.
That’s always been my understanding of how Kanji is used, yes.
As I understand it, the reading is context dependent, so mere knowledge of the character does not ensure correct pronunciation. Furigana may provide disambiguation, but I presume it doesn’t appear in most cases where an educated reader is expected to know the correct reading.
Approximately, but the on-readings of Japanese characters don’t map well to any modern Chinese dialects, because they were derived over a period of several centuries, can date from as early as the 5th century AD, and were probably somewhat garbled even then. There are also mistaken readings that have been encoded into the Japanese language, and even some completely synthetic readings for native Japanese characters.
(Don’t ask me to translate any Japanese, I only took a couple semesters and everything I remember now relates to martial arts. But I remember that much.)
Many (all?) of the Slavic languages call the Germans something beginning ‘Nem-‘ followed by ‘-ets’ or ‘-ski’ or suchlike, apparently from an old Slavic root word meaning ‘mute’, i.e. people who can’t speak our language and therefore might as well be rounded off to people who can’t speak at all (though the word for Germany the country is more likely to be something we’d recognise from Western European languages). And the word ‘Slav’ apparently comes from a root meaning ‘word’ – i.e. Slavic people are the people who do speak our language.
Also, check out Finland (Suomi in Finnish): travel north and you cross the border into Norja. Okay, that one makes sense. But travel east and you reach Venäjä. Or travel west and you reach Ruotsi. WTF? Plus their name for Austria is a calque: Itävalta, literally something like ‘eastern dominion’.
Ruotsi is pretty clearly just Russia said with a weird accent, which is not as weird as you might think. The old name for the eastern coast of Sweden around Stockholm is Ros, now Roslagen. The people from there were thus also known as the Ros. After they went east to rule over the Slavs they were known as the Rus, and eventually the people they ruled started to call themselves the people of Rus, or Russians. And that’s why the Finns call Sweden Russia; it’s the OG Russia.
Hmmm. Makes sense. Just looks pretty weird nowadays that the Rus have changed position 🙂
Anyway, I looked up the etymology of Venäjä – apparently that’s from an old Germanic word for the Slavs that none of the Germanic languages have any more.
UBS has come under attack for comments from its chief global economist regarding inflation in China (here’s a Bloomberg article).
Can anyone explain what was offensive about what he said? I would quote it here, but I’m so far from understanding the reaction that I have no idea what the unintended consequences could be.
Even granted that… unless there is some missing context, it sure looks like he was using “Chinese pig” to mean “pigs that are in China” and not some weird reference to actual Chinese people. I agree it seems like a misunderstanding… and a simple enough one that it is hard to believe it has blown up like this.
Rival brokerages in Hong Kong stepped in, urging the bank to fire all people involved in the incident.
Yeah, I think there may have been some intentional fanning of the flames there, with rivals hoping to blacken UBS’ eye and gain market share at their expense and so making a big deal out of “Did you KNOW he called Chinese people PIGS???”.
I know no more than you, but a quick Google suggests to me that pigs are culturally important, high status animals in China and that 2019 is the Year of the Pig, so perhaps making light of a devastating porcine epidemic is more culturally insensitive than it would naturally seem?
Or perhaps China just wants an excuse to knock UBS in order to promote home-grown rivals.
Half-baked thought prompted by this thread: the distinction between the state “giving people free stuff” vs. other kinds of state spending is one of those things that feels more real than it is, and policy proposals can end up introducing inefficiencies by trying to game how it feels.
At one end of the spectrum, I’ve heard people sarcastically describe the existence of a state-funded Navy as “giving away free boats.” This of course doesn’t feel right at all, to anyone, because there’s no obvious free market interaction this is substituting for; nobody can buy a 1 in 300 million share in naval protection. At the other end of the spectrum is, say, Government Cheese, which anyone would have to describe as the state giving away free cheese (even though it also serves a secondary policy goal delightfully termed “quantitative cheesing”). But in the middle of the spectrum, you can change how much a policy feels like “giving away free stuff,” often by adding indirection or complexity. You can give people free subway rides, or you can allow tax-advantaged salary deductions for a special interest-earning account that can only be used for transportation. If giving away free stuff is off-brand for you, you might be tempted to propose the latter even if it’s a less efficient way of accomplishing the same thing, because routing it through taxes and employers and economic transactions feels more markety and less free stuffy. Conversely, if “free stuff” is currently on-brand for you, “we’ll pay off your unpaid student loans with a one time tax credit” can be more appealing than “we’ll give a one time tax credit to everyone who’s had student loans regardless of whether they paid them off” because the former is more like getting something for free.
Wait, are there people in favor of student loan forgiveness that would oppose a tax credit that also went to people who had paid off their student loans, or was that just a hypothetical? Both of those score as “free stuff” in the sense you’re using, to me though.
Wait, are there people in favor of student loan forgiveness that would oppose a tax credit that also went to people who had paid off their student loans
I’d be shocked if there weren’t.
If the credit applies to say, every living person who ever went to college (note: a lot of people with significant college loan debt didn’t graduate), then the vast majority of people collecting the credit will already have paid off their loans. Additionally, these people will, on net, be much richer, as a group, than the people not receiving the credit.
This would be a quite regressive tax, that could accurately (for once) be described as “tax cuts for the rich.”
Well I would think it would be limited to “every living person who took out a loan to go to college” based on the phrasing in the OP, which would exclude the rich people who paid for college out of pocket. It would seem like it would take a lot of mental gymnastics to frame it as regressive to credit back a 45 year old who had finally paid off their loans a couple years ago in addition to crediting the 25-35 year olds still paying them off.
Well I would think it would be limited to “every living person who took out a loan to go to college” based on the phrasing in the OP, which would exclude the rich people who paid for college out of pocket.
It would also exclude the people who worked part-time and summer jobs to pay for college even though that took up most of their partying time, and it would exclude the people who chose to go to state schools rather than elite private colleges so they could graduate debt-free. No, wait, it wouldn’t exclude those people. It would tax those people, to retroactively pay for the people who made the opposite life choices.
The problem is that the state of naval warfare has changed a bit since the rise of large state-sponsored navies, so any analogies are going to be imperfect.
Well, that’s one problem. The other is that naval power is hard to generate and requires the sort of actions that governments are good at and corporations aren’t. A modern warship is incredibly complex and sophisticated, with a lot of people, both in and out of uniform, supporting it. This is required if you want to compete with someone else who is working on the same level, and I can’t see a corporate-funded navy reaching it. If every navy on the planet was scrapped as part of the grand AnCap collective treaty, we might be able to get away with it. But that world is a very long way from the one we have.
I was thinking more “alternate history where there is no rise of large, state-sponsored navies” and what that would look like than how we would get there from here. Certainly would seem to be an impossible bell to unring.
What a strange hypothetical. Traditionally sea trade was high-risk/high-reward, which meant big rewards for groups of people living together near the sea that developed risk-sharing social “technologies.” This could take the form of a state monopoly on sea trade, where the society is run like the royal family business, or it could take the form of private contracts enforced by the king’s code of laws.
Then more sea trade -> more commerce raiding -> more reward for organizing a state navy. Hell, a big, enduring group of pirate ships basically becomes a small state, as St. Augustine reports the humble captain of one pirate ship telling Alexander the Great.
That seems unlikely. There were large state-sponsored navies in the Ancient World, and pretty much everywhere else that has reached that level of sophistication and used much water transport. There have been brief periods when converted merchantmen were good enough, particularly during times with weak states, but they didn’t last very long.
Before you have large state-sponsored navies, you get a merchant marine, which can get quite large and organised.
Then as you get big rich merchant vessels carrying valuable cargoes (and passengers), you get pirate fleets preying on them (and possibly having island bases because now it’s worth their while to co-operate rather than every captain with his own ship trying to supply and repair it and for mutual defence).
Then it becomes enough of a problem that either the merchants have to find some way of getting ships that are not merchant vessels but warships designed and built and maintained, or they dump the problem into the lap of the state because “hey, you’re supposed to be the law and the defenders of the citizens round here”.
For a long time the merchant marine basically was the navy. Until the development of line-of-battle ships in the mid-17th century, there were no significant design differences between military vessels and the largest civilian ones; they were similar in size and sail plan, and could be (and sometimes were) similarly armed. Building a naval force often consisted mostly of pressing civilian ships into service; only 28 out of 130 ships of the Spanish Armada, for example, were purpose-built warships.
But there were lots of dedicated warships before the invention of the ship of the line. They were just mostly galleys. I’m not as familiar with the 16th/17th centuries as I am with later eras, but I suspect that it was a combination of merchies being good enough and states too weak to afford large fleets.
Government spending is largely paid for by taxes. So one way to look at things you get from the government is that they are something you paid for, not much different from getting the good or service from a private company. You don’t call it ‘getting free stuff’ when getting a service or good you paid for, even for insurance, where the payout is need-based.
However, this point of view is very hard to defend from the individual perspective, when the payments are purely need-based and there may not be any period when the person pays into the system. For example, welfare for a handicapped person who can’t and will never have a job.
When government spending benefits both people who are net tax payers and those who are not, it’s a more hybrid situation. Something similar is true when people tend to get much more from the government than they pay in tax at one stage of their life, but tend to pay more than they use at other stages.
Of course, from a hyper-libertarian perspective, like David Friedman’s, people should always have the right to choose a provider of a service/good and to choose not to buy the service/good, so then all of it is coercive, making people pay for something that they don’t necessarily want or not from that provider, even if they do benefit.
I think one distinction between some things that feel like “free stuff” and other stuff that doesn’t is whether or not the recipient of “free stuff” can turn that benefit into liquid assets relatively easily. It doesn’t distinguish perfectly, but it distinguishes some things.
You can’t sell your “share” in protection by the U.S. navy and turn it into cash. You can’t sell your right to drive on the highway either, your driver’s license is nontransferable. On the other end of the scale tax benefits and government checks are just money. In-between is something like food stamps, it’s possible to resell some forms of them even if it’s illegal. But even if you can’t do that, you can buy food with the food stamps and resell it below list price to get cash.
Of course, you do gain wealth in the long run by being protected by the U.S. navy and having access to the highway, but it’s not fungible with cash.
Appeals for somewhat obscure expertise: any good, accessible books on the history of the family? I don’t mean of specific families, but of how family structure and perceptions of it changed over time–something in the vein of Gies’s Marriage and the Family in the Middle Ages, but more general, or at least for other eras. I found one book on Amazon with a search (The Family: A World History by Maynes), but the few reviews seem to agree it’s mostly about criticizing misogyny and not about surveying structure. There’s also a book about ancient Greek families, but that has no reviews and is illustrated with the cover of a completely different book so I’m a little leery of trying it. This is a subject I’m really interested in, but haven’t read about in any systematic way.
I do have an even more obscure recommendation: The Child in Christian Thought, edited by Marcia J. Bunge. It’s a collection of essays that looks at how the theological understanding of children has changed through the years – it starts with the New Testament, Augustine, Chrysostom, etc, continues through the middle ages and on to Barth and modern feminist theology. Bunge also has a book The Child in the Bible which does the same for various books of the Bible (and touches on some contemporary family practices in the ancient world).
The books are accessible in the sense that they’re collections of essays you can dip in and out of, but there’s no particular structure or intent to give a comprehensive overview of how the family changed over time. You also have to be interested in reading theology, not history.
If those sorts of things interest you, I could dig into my old essays from my Family and Ministry studies and come up with some more recommendations. If that’s not your interest, I understand!
Not exactly what I’m looking for. I’m asking because I’m unsure how many of our ideas about family life over the years are rubbish. For example, until fairly recently I believed that the nuclear family, in America at least, was this newfangled thing that postdated WWII. That was what my high school history textbook said. But Gies says nuclear was the norm in Catholic Europe, and my Oxford Dictionary of Byzantium says the same for the East. Heck, Little House on the Prairie shows a perfectly normal nuclear family in the nineteenth century. On the other hand, what little I’ve read of traditional China implies that extended was the norm, and ditto for early Islam. I mostly want a sharper picture of structure and norms.
In traditional society, women tend to live with their parents until marriage (not least because it is the man’s job to earn enough for a house/farm (or inherit it), whereupon he becomes marriageable). It also is the children’s job to take care of the elderly, if they can’t provide for themselves anymore. This seems strongly based on the lack of alternatives: providing home care or elderly homes is quite expensive.
The dynamic in these societies seems heavily dependent on the ability for men to earn or inherit enough money to become marriageable. The longer this takes, the longer men and women have to wait to start a family of their own & the longer that they tend to stay at home.
Of course, only one of the progeny has to (and can) take in the parents. So with high birth rates, many of the children would not have to take in the parents.
I think that the extended family narrative takes this dynamic and exaggerates it.
Does anyone have much experience with language learning?
I’ve been practicing Chinese on and off for about 5-6 years, and I feel like my vocabulary is very strong for my level (I know 800-1000 words), but my ability to communicate still feels very limited. I have trouble following books or movies, and can’t really speak with people other than my girlfriend.
I feel like I’m at A2 on the CEFRL normally, but at B2 with my girlfriend.
Is anyone else in this position? Any tips for how to break out of it?
Possible advice depends on what you are already doing.
I’m not sure if this matches your A2/B2 split, but I have two reasons why I have experienced that type of discrepancy:
(1) If I spend a lot of time with one person or small group, then both accommodations and internal references develop. For example, the native speaker might realize that I always mispronounce a particular word or incorrectly use a particular construction, but they have learned what I mean.
(2) Sometimes, I’ve really focused on one subject area for a particular language and then become relatively strong when the conversation is about that thing; however, I might be totally lost when the conversation is about something “easier” or more common. For example, I can (or could at one time) conduct business meetings regarding bank loans and credit risk in German and Thai, while not being able to talk about popular sports. I think hospitals/doctors observe a more widespread version of this when 2nd generation immigrants are asked to translate between the doctor and 1st generation patients. While the kids may even appear to be functionally fluent in both languages, they may completely lack the necessary medical vocabulary in either/both languages.
Reading general interest magazines or watching news can help with both of these issues. For some languages there are groups that produce this type of material that is simplified for language learners (vocab choices are more common variants, more background is given to understand context). Diving into material that was created for fluent speakers can be really frustrating if you aren’t already pretty strong.
In some respects, regular books/movies/tv aren’t good for learning language, because entertaining writing tends to have unrealistic dialogue. (And it being unrealistic is a feature, not a bug.)
Either watch/read kid’s media (which is designed to teach people things), or watch non-narrative Chinese media, particularly their variety shows. They’ll beat a joke to death, which means tons of repetition to learn complete phrases rather than words, but also getting you more used to actual conversational banter. They also like to throw extra captions on everything for comedic effect, which helps with referencing words you don’t know.
(The regular talk shows still have the issue of leaning towards more esoteric words, since they usually double as documentary/promotional bits.)
As usual when this topic comes up, I will recommend LingQ as a good site for building your listening and reading comprehension (or a free equivalent like Learning With Texts, though I think that there you need to import your own content), and just booking a load of tutoring sessions over iTalki or some other language teacher marketplace (or free exchanges with Chinese speakers who want to practice their English, depending on whether time or money is more of a constraint for you).
E: and is this an academic or functional interest? Because I’m not going to waste our time linking a primer on impedance or digital logic if you really care about building a robotic hatrack.
You’re going to want to look at basic circuits – circuit laws and passive components. Load balancing may be of use. Turbines are mostly thermo, not electrical, but if you want to understand the EE side of how they work a basic understanding of electric motors and rectifiers should do you good. Most EE courses will dive into semiconductor devices and amplifiers – I’m not sure if this will be interesting to you. Chances are that if circuit analysis makes you hungry for more they will.
If you want to get into grid design, there’s going to be a LOT more involved, and I’m not the one to look to for it.
Tangentially related to the downthread question of whether work is getting too intellectually intensive for people to do – is there a reason not to expect metic knowledge to develop around the kind of high-technology work the future seems likely to increasingly hold?
Arguments for:
1- Farming is really hard, and the fact that your average agrarian bear was capable of it is incredibly impressive. Collective knowledge is obviously good enough that it can overcome a lack of reasoning. There’s a wide history of success here to draw on.
2 – The benefits of individual epistemic (not like, epistemology, but in the sense of episteme) intelligence in occupations like engineering, programming, teaching, accounting, etc. seem limited. The sky may be the limit for the tippy top of the field, but there’s a lot of work in those fields you don’t need much reasoning for.
Arguments against:
1 – Metic knowledge works better when the landscape isn’t shifting under you. It’s possible that technological destruction is too fast for this sort of knowledge to develop.
2 – The kind of technological work that’s being done isn’t conducive to the production of metic knowledge.
3 – there’s too much atomization of society/the workforce for metis to condense
The first two objections seem fake as hell to me. At least in my field, experienced techs have way more of a clue than even experienced engineers about some things, and one of our biggest challenges is institutionalizing the knowledge they have. A tooling engineer is only half of what we need to build tools half the time. This isn’t surmountable with more training either; nothing but “ask the person who uses the tool” seems to be a satisfactory way to answer the question, “how should this tool work?” If the objection is that this wouldn’t apply to software, I’ll simply repeat mine and John Schilling’s statements from a few threads ago:
most of the software I use professionally has cross-platform problems, usability problems, capability problems, and modifiability problems that have been there for roughly a decade but don’t get fixed. To the extent that the tech industry is making software more useful, we benefit minimally. Meanwhile, the SAAS model comes across as “we’re going to fire the PhDs who wrote this shit that you used to be able to talk to and replace them with phone banks full of people who don’t know what a mode shape is.” Features I don’t need are added, features I do aren’t. And the whole industry seems liable to be driven by design/tech fads that infect non-technical people who do not understand the underlying causal relationships we have to deal with
whatever the internal culture is, the products they insist on saturating the market with have an order of magnitude too much “move fast and break things” and an order of magnitude too little “if it ain’t broke don’t fix it” in the mix. Stuff that works tolerably well but still needs a lot of post-release fixing is abandoned because hey, it’s time to move fast and create the next generation with more shiny features. And more bugs that nobody will ever fix because see above.
These failure modes aren’t something that can be reasoned out of. At least not easily. But I’m the sort of person who actually believes that “Big Data” is a dumb meme.
The third worries me more. If we’re engineering the vectors for metis out of society, we might be fucked. I consider it by far the most serious problem.
The obvious response to “The big corporation making a one-size-fits-all approach doesn’t actually fit our size” is “develop it in-house.” My current consulting job is banging out tiny little programs that solve some specific problem in a particular factory’s workflow, sometimes connecting to a big one-size-fits-all program’s database to slurp up some particular data they need. AFAIK, this is a fairly common thing for consulting companies to do.
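To make that concrete, here’s a minimal sketch of the kind of thing I mean, in Python, with an invented schema (the table and column names are made up for illustration; this isn’t any actual client’s system):

import csv
import sqlite3

# Hypothetical example: pull the open work orders out of the big
# one-size-fits-all system's database and dump them as a CSV the
# shop floor can actually use. Schema invented for illustration.
def export_open_work_orders(db_path, out_path):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT order_id, part_number, quantity, due_date "
        "FROM work_orders WHERE status = 'OPEN'"
    )
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "part_number", "quantity", "due_date"])
        writer.writerows(rows)

Trivial stuff, but it encodes exactly the site-specific knowledge (which fields matter, what “open” means here) that the vendor’s one-size-fits-all product never will.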
The downside to writing a program that solves one specific problem really well, incorporating all the metis your end users have developed, is that obviously you can’t then copy it to a thousand other sites that have slightly different workflows and expect it to work equally well. So you’re giving up one of the key advantages of software if you do that – the ability to copy it – which is why big companies providing 80% solutions continue to exist and make more money than dinky little consultants.
So I think that software has exactly the same issue that other fields do when it comes to building institutional knowledge and metis and improving on it. Which I guess puts me firmly in the camp of “metis will continue to exist.”
There is one thing that might be unique to software: Open-source software has the potential to let someone take the code you’ve developed for one thing, and then tweak it to fix whatever specific problem they have. This lets you get some of the benefits of the one-size-fits-all software while maintaining your metis. Of course, this means you introduce all sorts of maintainability issues once you fork the codebase… which means now your developers effectively have metis of their own that they need to maintain.
Really, when you think about it, software is just a giant pile of accumulated knowledge – every bugfix is someone saying “Actually, the obvious solution doesn’t work, because we didn’t know…”
The thing about in-house, and to a lesser extent any kind of niche software, is that it will be tailored very well to the problem it is addressing, but everything else about it will be bad. The UI will be clunky, the performance won’t be great, it’ll be buggy and those bugs will be fixed slowly or not at all, it won’t be well documented, there’ll likely be serious security concerns, and so on and so forth.
This is because it’s really hard to write AAA software (by analogy to AAA games). It takes a lot of people with many different skill sets and ongoing attention.
It depends very much on how much the company is willing to spend on it and how seriously they take it. There is some very well-documented in-house software where bugs get fixed fast. See NASA, Boeing, Airbus, etc.
The finding of bugs tends to correlate with user base and intensity of use.
The clunkiness of the interface tends to negatively correlate with freedom of use (much in house software has to be used as part of the job, so workers have no choice but to use it).
Complexity of the interface tends to correlate with intensity of use and the capabilities of the users. Power users tend to prefer software with advanced capabilities and high learning curves over software with limited capabilities, but an easy to learn interface.
Incentives for in-house software tend to be different than for off-the-shelf software, but so are the incentives of open-source software (which also tends to be poorly documented and to have more complex and less friendly interfaces).
The main advantage of open sourcing in-house software is to expand the user base & to share development effort with others*. It doesn’t suddenly create AAA software incentives/outcomes.
* Including features that are valuable to the company, but not valuable enough to implement themselves. If another company does want to spend the money, you get the feature for free.
Software quality is like airplane seats. Everyone likes to complain about it, but nobody is willing to pay even a little bit more money to make it better.
Programming is almost entirely reasoning and intelligence, *especially* at the bottom end of the field. At the top end of the field, you might actually need to have read some theory about compilers, operating systems, theorem proving or whatever. At the bottom end, you can just figure things out by trial and error without too much difficulty.
I do, in fact, pay a little bit more money to make my airline seats better. The airline industry almost always makes that option available, and I pay for it. The software industry, not so much.
Use open source, and then actually pay developers to fix what you need. If there isn’t an open source solution to replace your inhouse software, then open source your inhouse software, and then pay developers to fix what you need. If you can’t open source your inhouse solution because it belongs to some vendor who has you by the balls, then pay more developers to develop an open source solution to replace the vendor solution.
If you complain about paying developers, you lost the argument.
That sounds an awful lot like “If you don’t like economy-class airline seats, you should charter a private jet. If you complain about the cost of chartering private jets, you lose the argument”.
The airline industry at least offers a range of services between lowest-common-denominator crap and bespoke, because no, hiring someone to produce a custom solution for just one customer isn’t a sufficient alternative.
If you can’t open source your inhouse solution because it belongs to some vendor who has you by the balls, then pay more developers to develop an open source solution to replace the vendor solution.
The problem isn’t so much paying developers as it is paying dynamics PhDs for ~5 years of work. Take a look at open source multiphysics simulation or CAD software; it’s very, very bad.
If there isn’t an open source solution to replace your inhouse software, then open source your inhouse software, and then pay developers to fix what you need.
Much in-house software is so tailored to the company that there isn’t actually a market for it, even at zero cost. Open-sourcing just makes it easier for hackers to attack the company, but provides none of the benefits of successful open sourcing: having a larger user base, sharing development with others, etc.
An important difference is that it actually saves the airline money to squish people into an economy seat by default, rather than give them a business class seat. It’s not (just) artificial scarcity.
In software, it is often no (or only slightly) more costly to offer the full feature set. So differentiated pricing is then going to be purely to capture more of the consumer surplus.
Also note that the differentiated pricing often depends on there being a dichotomy in the market. For example, it seems to me that business class is only viable due to business travelers (hence the name), who don’t pay for their own travel.
I regularly see software try to capture more consumer surplus, when a dichotomy in the market exists. For example, Adobe has long tried to do this with Photoshop, trying to get lots of money from professionals and less from prosumers.
Often it’s not that people aren’t willing to pay, it’s that they aren’t really interested in quality improvements, they are interested in signalling to others that they care about quality improvements. Which means they end up paying for the wrong thing. A lot of time is spent building the airstrips, with profound disinterest in whether planes can actually land there.
I’m not sure that some things are about raw intelligence.
There’s nothing like working with MD’s constantly to make it clear they’re normal humans who can be idiots sometimes.
Guy with both an MD and a PhD who uses Excel every day of his working life: complains that he needs the data sorted… according to a column in the spreadsheet in front of him. To his credit he was embarrassed when someone pointed to the “sort” button.
A lot of things have a fairly small hurdle to understand them. A lot of very bright people never push themselves over such hurdles.
I’ve got a very strong case of “learned helplessness” for anything produced by Microsoft. They have a history of radical UI changes, requiring me to entirely relearn how to use whatever-it-is. Consequently, I never put in the effort needed to become an expert user of anything they produce, since it’ll all be thrown away in around 2 years.
Apple is less bad – often, the things I learned still work after their UI “improvements”, even though they are no longer discoverable (not visible on menus etc.). I have a 2 page list of things I do to a new Mac, to make it behave the way I expect (in many cases, the way Macs did at the time I learnt some particular feature), but at least it’s possible. So I’m a bit more of a power user of OSX than of Windows.
FWIW, I pay the premium to buy Apple because of the relative UI stability. Or I use Linux, but that has its own collection of problems, less relevant to this comment.
Consequently, I never put in the effort needed to become an expert user of anything they produce, since it’ll all be thrown away in around 2 years.
You said Linux wasn’t relevant, but this sentence is exactly me and Linux. It’s probably worse in the open-source world, because at least with the proprietary OS’s you have the alternative proven-to-work reward-system called “getting a paycheck.” Linux keeps on reinventing things that worked acceptably well because the reward for inventing a new subsystem is so much greater than marginally improving an existing subsystem.
OpenBSD seems okay, because it’s built for the people who write it so that they can use it, as opposed to being built for the kudos. (The fact that other people can use it is a lucky bonus.) When I come back to OpenBSD after a few years, I still recognize it, and my old tools typically work.
Most of my problems with linux are really problems with distros, which is why I didn’t want to go down that path. And those generally mess up the window manager, the packaging system, and (less commonly) the way processes are launched, while leaving almost everything else alone, at least at the level of a user.
At the level of a programmer, linux (both distros and base) change a lot more than this, and I’m unhappy that many/most distros make it all but impossible to e.g. get debuggable core dumps, even of programs you built yourself.
And to get really arcane, the arms race between kernel changes and the capability of the kernel core dump analyzer (crash) is insane – in any other environment, the 2 teams would coordinate.
But meanwhile, gnucash hasn’t changed drastically in the past decade; mutt still reads local email; postfix still handles my spool file; emacs still edits files with substantially the same UI as always; shell scripts are backwards compatible with the ancient /bin/sh etc. etc.
Yeah, MS has a real love for radically changing the interface and breaking the workflow of all its users every couple years. I don’t know if this is somehow helpful to their business model, or if it’s just something they’re big enough to get away with, but it sure is annoying.
My own response is to try to use open source tools as much as possible. In a pinch, I’ll settle for Apple tools, which seem less inclined to randomly change how they work in a way that requires a few weeks to get used to. I actively try to avoid using Word for anything substantial, though I often am forced into using it in collaborations with people who can’t or won’t use anything else.
Features I don’t need are added, features I do aren’t.
Having just received the latest Windows 10 update, very much this. Most of the faffing about (and that’s what they did, on a cursory look) is minor; some of it I’ll use, some of it I won’t. One particular feature is just about driving me up the frelling wall after only three days of it and I honestly think the only reason this was included is “well we have to seem like we’re doing something what with the subscription model we’re charging our customers”.
I see that all the time. Some task that took 1.5 months of constant, grinding work with major updates for some new feature: “Oh, that’ll be nice”. Something that took someone 2 hours “That’s amazing! This is so great, thank you!”
Effort need not correlate with usefulness. The effort Apple put in 2 years ago to *remove* labels from icons in the “dock” on their iPhones and iPads contributed only to making the devices harder to use. It’s possible they could fix this intentionally-introduced deficiency by changing a single line of code (turn it back on); it’s equally possible they’d have to rewrite the code entirely to work with a radically changed underlying system. Either way, the usability improvement would be the same.
Something that took someone 2 hours “That’s amazing! This is so great, thank you!”
It seems like a major failure of the organization if there are lots of two-hour changes that customers would really appreciate but that aren’t being made.
Wait, you expect appreciation from customers? I figure I’m doing good if the customers aren’t screaming and cursing my name. (But then again, I do infrastructure programming; if the customers are thinking of us, something is probably wrong)
It isn’t about expecting appreciation, it’s that what generates positive user feedback will drive to some extent what gets focused on, and users give positive feedback for dumb (and even wrong) reasons.
Then he put in a new color scheme — just a new color scheme, nothing else — and everyone raved about how fast the app was now.
Well I tell you this, my friend: if in the latest update they actually had put in a new colour scheme, I would be raving about it right now 🙂
At the moment, for the Office Theme in Word, I can have “Colorful (don’t get excited, that’s ‘light grey’), Dark Grey, Black, or White”.
If I use White, that’s retina-searing after a couple of hours with no contrast between the background and the onscreen page, so if I’m doing a full day’s work on the ol’ wordprocessing front, I pick Dark Grey.
A couple updates back, they let us have light blue as a choice and that was great, but in one of the updates they gave us the new improved “you can have black, slightly less black, or white” as Colourful! New! Themes! Ain’t You Glad! choices.
If they could manage to put in light blue, light green, etc. as backgrounds to save my poor aging eyes, I would be much, much, much happier than “No, I don’t actually need to be able to link straight to Wikipedia launched from my Word programme, thanks all the same”.
I observed back in college that professors were really bad at using our learning management system, which when I started was Blackboard. They complained that they could never find x or y and that z and a and b were confusing. Their solution? Get a new learning management system, of course! So professors pushed to switch to Canvas, and the exact same complaints ensued. The problem wasn’t with the software—the problem was that they didn’t know how to use the software, like really use it, no metis. But the time they had with any one system was too short to learn it, or they’d developed a learned helplessness from knowing it would be too short. The result was that they were all shit at using our LMS and classes would have constant, and I mean constant, problems with files “missing”, tests not appearing, grades not being entered, turned in homework being “inaccessible”. Which could of course be exploited by students saying, Why yes I did turn it in, gosh I don’t know what happened, has this ever happened before….
Now that I’m working I see this elsewhere too. There’s a related, or perhaps a more general, problem where folks think the solution to a problem is software, when it’s really (for lack of a better term) process. Consider a hypothetical: my boss wants us to use a task management program. But the program is useless so long as no manager is in the habit of entering and monitoring tasks and so long as no worker is in the habit of checking them off. And those habits don’t depend on the program, anyway—we could do this all with the whiteboard that’s in the room now.
This is our second, by the way. He didn’t like the first. He doesn’t know the latest one any better. He won’t know the third one six months from now. That which has been is that which shall be, and that which is done is that which shall be done, and there is nothing new under the sun.
Slightly OT: could someone define metis? I have a vague idea from context but I’d like more. Google isn’t giving it to me, and I suspect this is a rational-sphere-ism I don’t know.
I think it is a Seeing Like a State-ism (or at least that book popularized the term here), and it is just built-up local knowledge, like all the related tricks and information needed to farm in Papua New Guinea, that are hard to derive from first principles.
The problem wasn’t with the software — the problem was that they didn’t know how to use the software, like really use it, no metis.
While this is definitely a problem (shiny new software installed everywhere but no training in how to actually use it), in defence of ‘people interacting with shiny new systems’, I have to point out that sometimes the designers/coders don’t know the fine details of what the systems will be used for, so they unknowingly set up roadblocks in the way of the end users.
For instance, the housing database I was using that didn’t allow you to enter apostrophes in surnames. This in a country with O’Briens, O’Byrnes, O’Boyles, O’Mahoneys and O’Mahonys, O’Gormans, O’Callaghans, O’Sheas, O’Donnells (distinct from the McDonnells or indeed McDonalds), O’Neills, O’Reillys, O’Sullivans and several more.
Which meant there was no consistent system used for entering names, so everyone had their own way. And since the search function was case sensitive, this meant many happy hours trying variants on “Did the person who processed this application enter the name as O Brien, OBrien, 0Brien, O. Brien, Obrien or some other version?” before you could find the application in question, if you could find it.
(They did fix it in a later iteration, after every town, city and county council in the country yelled at them about it. But you see what I mean? They were used to thinking of apostrophes in the context of programming, and never considered at all “entering surnames onto the database” because it never occurred to them, and it never occurred to the people asking for the shiny new software to mention this, presumably because they assumed ‘ah shure, they’ll know about that anyway without having to be told!’).
Haha, that’s a fun case. With databases, apostrophes can get you into real trouble. The least your developers should have done is trim the apostrophes from input automatically instead of forcing users to remove them themselves; that way you at least consistently get name minus apostrophes as the result, instead of a bunch of 0Briens (who the hell does that?). But really those should have been parameterized, which escapes apostrophes for you, and is secure generally from injection. This isn’t 1965—a lot of work has been done so that developers don’t have to think about these things, and their code libraries handle edge cases like these gracefully. Of course, this isn’t the case everywhere; the software could be very old, the developers could be idiots, shared code can still have bugs, and the pretty abstractions you build on top of all the plumbing are just as susceptible to bugs, though at least they’re less insidious. Since I worked in IT, though, I had access to our learning management system as both a student and a teacher, and I can tell you there weren’t serious, experience-ruining bugs. Actually, only one that I can recall: I’d typed some code in a comment on an assignment I uploaded to my professor, and the code was breaking the page my professor used to download assignments, because the idiot developer never HTML-encoded my input. In retrospect, I should have used that for evil, but I just reported the bug instead.
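For anyone who hasn’t seen it, here’s roughly what the parameterized version looks like; this is a generic Python/sqlite3 sketch with a made-up table, not the housing system’s actual code:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applicants (surname TEXT)")

surname = "O'Brien"

# Wrong: splicing user input straight into the SQL string. The stray
# apostrophe breaks the statement (and invites injection):
#   conn.execute("INSERT INTO applicants VALUES ('%s')" % surname)

# Right: pass the value as a parameter; the driver treats it purely
# as data, so apostrophes in surnames are just characters.
conn.execute("INSERT INTO applicants (surname) VALUES (?)", (surname,))

# Searching works the same way.
rows = conn.execute(
    "SELECT surname FROM applicants WHERE surname = ?", (surname,)
).fetchall()
print(rows)  # [("O'Brien",)]

Same idea in any language or driver: the query text and the data travel separately.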
Ha ha ha ha ha (that sound you hear is hollow laughter from the memories).
People who tried inputting “O’Brien”, had the machine rear up and spit at them, and decided to try a different character to make sure it wouldn’t explode on them. If you’ve crashed the entire system just typing someone’s name in, and it’s going to take three days to fix (because the developers are all up in Dublin and all changes, requests, etc. have to be referred to them and they take their own sweet time answering), then you’re going to be very wary about anything that looks like it might make the system crash (thank whomever the patron saint of low-level clerical officers is that never happened to me, at least).
As I said, there was no consistent “okay everybody, make sure you do it this way” method, most likely because everybody involved on the data entry level bitched about it amongst themselves but nobody thought of asking “So can we get a consistent rule about this?”
The first time this happened to me, I went “Oh yeah, because apostrophes are used in programming” but then I went “Yeah, but nobody thought about that when designing a system that needs inputting names that have apostrophes in them, in a country that has lots of surnames with apostrophes in them? This does not seem like good design!”
the software could be very old, … and the pretty abstractions you build on top of all the plumbing are just as susceptible to bugs
To be fair, I think this was mostly the case. The original system was a pilot version done on a trial basis in limited areas, and when it seemed to work they decided to roll it out nationwide. But being government contract work, the time between “let’s pick a tender to build this”, the version that was delivered, and the version that went nationwide was a long(ish) time. So the original software was old, and then of course once the database started being used by everyone and not just the selected trial site, everyone wanted something different added, taken out, tweaked or solved, and that resulted in a creaky superstructure being tacked on top.
Like all top-down decisions, if they’d asked the people on the ground who were dealing with applications what they needed and how they did the job, then designed around that, it would have saved a lot of trouble because we could have told them “This is the paper form we use, this is the information we need, this is how we enter it, we need to be able to put six different addresses in for people and variant names because our clients change their names and dwellings more often than they change their socks” and so on.
But why ask the little people, when the top brass have had a Brilliant Idea and are full steam ahead on how this will be More Efficient and Less Costly? 🙂
With databases, apostrophes can get you into real trouble. The least your developers should have done is trim the apostrophes from input automatically instead of forcing users to remove them themselves
No. No. No!
But really those should have been parameterized, which escapes apostrophes for you, and is secure generally from injection.
The most basic rebuttal is that code acts on data. Without code interacting with data, you have no computer.
Modern computers use a Von Neumann architecture, where data and code are stored in the same memory and are transported over the same bus. So code and data meet in memory and meet on the bus.
At a higher level, most code written in programming languages is treated as data in order to produce actual CPU-level instructions. So code becomes data becomes code.
To truly have separation between code and data, you need a fixed-program machine (like ENIAC or Enigma, where the program is physically wired or set up in the hardware) rather than a stored-program computer.
PS. You probably mean that code and data should be delineated more clearly.
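A toy illustration of the "code becomes data becomes code" point, in Python (nothing to do with the database case above, just the stored-program idea in miniature):

    # Source code is just data: a string we can store, slice, or mangle.
    source = "def square(x):\n    return x * x\n"

    # The interpreter turns that data into executable code...
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    print(namespace["square"](7))  # 49

    # ...and the resulting code object is itself data we can poke at.
    print(namespace["square"].__code__.co_code[:8])  # raw bytecode bytes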
Like all top-down decisions, if they’d asked the people on the ground who were dealing with applications what they needed and how they did the job, then designed around that,
I work on the other side of this, trying to make sure that this kind of thing doesn't happen (to military software), and as a counterpoint, users are often terrible. For every time they talk to us and we come away with a clear idea of what they need, there's another time when we end up more confused and have to make a bunch of changes because they didn't communicate clearly, or one group tells us that what another group told us to do is idiotic. It sounds nice and simple to talk to the users, but different people use the system in different ways, so the system built from your advice would probably be different from the one built from the advice of the person who sits next to you.
(This doesn’t excuse the apostrophe thing, and it’s probable that the devs in question are idiots, but “talk to the users” isn’t a panacea. They need someone like me to sort it all out.)
There are architectures that mark some regions of memory as non-executable, or that design things so that the only code that can run is in ROM of some kind. This makes attacks harder, but not all *that* much harder. Google for “stack oriented programming.”
@albatross11:
Try as I might, I can’t make the connection between “mark some regions of memory as non-executable” and “stack oriented programming”. To me, stack-oriented is pretty much the epitome of mixing code and data in one bowl (the stack).
Eh. I do think you need to sanitize, or maybe normalize is a better way to put it, a name field, not because of SQL injection (which should never be handled that way) but for data cleanliness reasons. If your users are at all likely to try O'brian, O'Brian, Obrian, obrian, etc. for the same person (which is a SME question), then you want them to be considered one and the same in your application.
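A minimal sketch of the kind of normalization being described, in Python; the matching rules here are just illustrative assumptions (the real rules are exactly the SME question mentioned above): store the surname as entered, but compare on a canonical key.

    import unicodedata

    def match_key(name: str) -> str:
        """Canonical key for matching: strip accents, punctuation, and case."""
        decomposed = unicodedata.normalize("NFKD", name)
        ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
        return "".join(ch for ch in ascii_only.lower() if ch.isalpha())

    variants = ["O'Brien", "O'brien", "OBrien", "obrien", "Ó Briain"]
    print({v: match_key(v) for v in variants})
    # The first four all collapse to "obrien"; the Irish-spelling variant does not,
    # which is exactly the kind of judgement call a subject-matter expert has to make.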
I don't think your example really supports your points. I'm at a university that switched from Blackboard to Canvas. With Blackboard there were a lot of complaints and problems. (It was truly awful. Every task took so many clicks and was so unintuitive.) With Canvas there are also complaints, and people who are incapable of getting what they want done. But the number of complaints is lower, and the general unhappiness with the learning management system is less (according to random people I talk to and the IT support people). So the problem is perhaps to some extent process, but it's also, to a sizeable extent, software.
There is a 1 in 7,300 chance that a 30-meter-diameter asteroid will hit the Earth this September. Shouldn't we have tried to reduce the odds of impact?
I wonder, too: how did you come up with this number? Resources I was able to find say about 2.5-3% of landmass is covered by cities, which gives ~1% of total area (with oceans included), which gives a ~1/730,000 chance in total, or 1/100 conditional on hitting the Earth.
My first thought was "Goddamn, is humanity really that bad at reacting to small probabilities of huge risks?", but then I did the calculations, and in fact ignoring the asteroid is a surprisingly rational thing to do. The chance of it hitting a city is about 1 in 730,000 (per the calculation in my answer to Uribe). It's hard to guess how many casualties there would be if it did in fact hit a city, but I think it's safe to say it would be well below 1 million. So making 100% certain the asteroid doesn't hit the Earth is equivalent, in expectation, to saving roughly one person at most. Space launches come in the tens-of-millions-USD price range, and that doesn't include a nuclear warhead, probe, R&D, a premium for haste, and so on. We don't routinely spend anywhere near that much money to save a single person, so we shouldn't spend that amount to save proportionally more people from a proportionally smaller risk.
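Spelling out that back-of-the-envelope expected-value argument (the figures below are the thread's estimates and assumptions, not precise numbers):

    p_city_hit = 1 / 730_000        # chance the asteroid hits a city, per the estimate above
    max_casualties = 1_000_000      # assumed upper bound on deaths from a direct city hit
    mission_cost = 50_000_000       # assumed order-of-magnitude cost of a deflection launch, USD

    expected_lives_saved = p_city_hit * max_casualties
    print(f"~{expected_lives_saved:.2f} expected lives saved")
    print(f"~${mission_cost / expected_lives_saved:,.0f} per expected life saved")
    # Roughly 1.4 expected lives at best, i.e. tens of millions of dollars per life,
    # far above what is normally spent per statistical life saved.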
We might get some positive utility from attempting to reduce the odds, figuring out that something we think we can do we cannot, and then fixing the process so that we can do that quickly. Like, a dry run for the next asteroid that has a 1:20 chance.
OTOH, we might get some negative utility from building a process for “get a nuke into space fast.”
That's another matter; I agree we should try at some point, on something, before it becomes critical. I don't know, though; maybe there are more suitable candidates in the near future than this one.
What do you mean by a process for getting nukes into space fast, though? Regular nuclear warheads are more than capable of surviving a space launch, because they are in fact launched into space already, just not into orbit. Loading one or more onto a regular space rocket should be trivial. And in fact the technology to launch a warhead specifically into low Earth orbit has also been developed, and was even deployed briefly by the USSR before being specifically banned.
I am speaking from a meta-level here, not relying upon the exact process for delivering a nuke.
They are under some kind of guard and a hardened control process, and if someone (even the President) wants one put onto a rocket capable of leaving Earth orbit, it takes a certain amount of paperwork and safety procedure to make sure we are still keeping careful track of them. Maybe creating a process for being able to get a nuke onto a rocket quickly creates more risk from loose nukes than it reduces risk by stopping impacts.
How much damage it will do depends on where it hits (going by recent impacts, Russia and Australia seem to be the targets the Cosmic Gods are playing pitch and toss with). It also depends on whether you think "I can give you a selection of lotto numbers with a 1/7,300 chance they're the winning numbers, wanna pay me $100?" is a bargain you would take. Yes, that may be much better odds than randomly picking numbers yourself, but would you really spend $100 on it?
The Chelyabinsk meteor was of a comparable size: slightly smaller, estimated at around 20 m diameter. It was about a 400-500 kT airburst at 100,000 feet, with the heat and gas penetrating to about 85,000 feet. It did occur in or near a populated area, and the shockwave resulted in zero deaths and only minor injuries from broken glass etc. At 30 meters, 1.5 times the diameter, you would expect approximately 3-3.5 times the mass (1.5 cubed) and thus a yield of approximately 1.5 to 2 MT. I could be wrong, but eyeballing it and also running a few numbers through the impact simulator, I expect that even if it was directly above a city center, the airblast would be high enough up (the impact simulator suggests no penetration other than airblast/shockwave past about 50,000 feet) that it would be expected to cause few if any fatalities, and probably no major damage.
So my opinion is that it is not of sufficient concern even in the worst case to call for an attempted course alteration.
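For what it's worth, the cube-law scaling in that comment checks out; a quick sketch (the Chelyabinsk figures are the approximate published estimates, and energy is assumed to scale with mass):

    chelyabinsk_diameter_m = 20
    chelyabinsk_yield_kt = 450          # roughly the 400-500 kT range quoted above
    new_diameter_m = 30

    scale = (new_diameter_m / chelyabinsk_diameter_m) ** 3  # mass scales with volume
    estimated_yield_mt = chelyabinsk_yield_kt * scale / 1000
    print(f"scale factor {scale:.2f}, estimated yield ~{estimated_yield_mt:.1f} MT")
    # 1.5 cubed is about 3.4, giving roughly 1.5 MT, consistent with the 1.5-2 MT range above.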
A little downthread, @achenx brought up the release of Commander Keen as a mobile game, and it got me thinking. What are some of your favorite DOS/pre-Windows 95 games either for the experience or just the nostalgia?
For me (mostly in order):
Galactix (I absolutely adore this game…eat your heart out Space Invaders)
Nibbles (Qbasic)
Gorillas (Qbasic)
Dark Forces (am currently replaying this one)
Day of the Tentacle
Sam & Max Hit the Road
Indiana Jones and the Fate of Atlantis
Commander Keen
Duke Nukem
I am definitely forgetting some important ones, but those are the ones that come to mind. Any other favorite classics out there?
The three games I remember most fondly from this era are Command & Conquer (and Command & Conquer: Red Alert), the Secret of Monkey Island, and the Indiana Jones and the Last Crusade adventure game (to which Fate of Atlantis was a sequel).
Given that two of them are from the sadly defunct LucasArts (as are four games on acymetric's list), the studio obviously had something going for it at one point. While I can think of a handful of recent games with almost the same level of humor I remember from LucasArts at its prime, I can think of none as consistent.
For the Secret of Monkey Island, I remember the Insult Sword Fighting quips most of all; it seems unique in that it built the humor into the gameplay.
For the Indiana Jones and the Last Crusade adventure game, while some of the gags have lodged themselves firmly into my memory (“Hi, I’m selling these fine leather jackets…”), what stuck with me was the subversion of just about every other licensed game I can think of as the game rewarded you for thinking beyond the original story. For example, there’s a scene in the movie where Indiana Jones (in disguise as a German officer) sneaks into a Nazi book-burning rally to recover his father’s diary. On the way out, he runs into Hitler himself, who autographs the diary. In the game, you can play it straight, or if you’re a quick thinker you can hand him a copy of Mein Kampf (which you can then use to bribe your way past any guard) or the easily-missed Travel Authorization Form (which will then get you past EVERY guard).
LucasArts put out a ton of great games (including, obviously, a lot of Star Wars ones). I missed out on a bunch of them because I was young enough that I was reliant on my parents for game procurement.
Man, I was a huge LucasArts fanboy back in the day. I think three separate times I received collections of theirs as Christmas presents. I played almost every game they put out and really liked most of them.
That's exactly how I ended up with Day of the Tentacle, Indiana Jones, and Sam & Max. There were six disks; I can't remember for the life of me what the other three were. One was demos, I think.
Ok, looked it up. One was a 3 level demo of Star Wars: Rebel Assault (and I played the crap out of those three levels), one was a “Screen Entertainment Utility” (I assume backgrounds and screensavers or something), and the last one was demos like I thought (although I do not remember playing all those demos, particularly Tie Fighter…maybe my system couldn’t support it).
Full Throttle was just awesome. Rebel Assault was great, and if I’d had the maturity to play Tie Fighter and X-wing vs. Tie Fighter properly I would have enjoyed those even more.
Under a Killing Moon and The Pandora Directive were my favorites though. Only trouble was having to change CDs all the time.
I loved C&C and C&C: Red Alert, esp. the latter with its gonzo history.
Other than that, turn-based strategy was my genre back then. I had Civilization 2 and Master of Orion 2 by early 1997… might have been birthday and Christmas ’96 respectively. Civilization went on to bigger and better things, but MoO2 is still the peak of that franchise. Oh, and its fantasy sister game Master of Magic was never improved on, AFAIK.
There was also an obscure Space 4X that came out before MoO2, Ascendancy, which was leaps and bounds better aesthetically and as SF (they put a ton of thought into the species and technology, while MoO just copied tech from Star Trek/Wars and mostly used bipedal Earth animals as races). Unfortunately, the AI was skull-crushingly dumb, so it failed as an actual game.
Did Age of Empires require you to DOS Boot out of Windows? That was a great folding of Real-Time Strategy with its resource harvesting into historical 4X.
Dungeon Keeper! That also fits your definition, and DK1/2 were a wonderful way to experience a dungeon fantasy setting.
The original Civilization is a big one. And Caesar II, and Simcity 2000.
Also yeah all the Apogee (etc) platformers and shooters. Aside from Keen and Duke, I loved Cosmo’s Cosmic Adventure (same designer as Duke), and the early efforts of Pharaoh’s Tomb and Arctic Adventure. Galactix is great, yes!
ZZT. I read about Epic and Tim Sweeney earning billions of dollars from Fortnite or whatever, and I still think of them in terms of ZZT.
The thing about Galactix. When I was a kid, I could breeze through to the last stage easily, but no matter how many times I tried I could not beat that last big red ship. Fast forward to 5-10 years ago, I decided to find a copy of it so that I could play through it again. The big red ship was unbelievably easy. Mild disappointment, like going to the huge slide at your childhood playground and finding out it was only like 5 feet high.
Also, whenever I played it as a kid I had to restart it like 20 times because it would always start up running at like 10x speed or something. No idea what caused it, sometimes 10x speed and sometimes normal.
Some really ancient games measured time in CPU clock cycles instead of seconds, and so would run at different speeds depending on the CPU speed. Maybe Galactix also used CPU cycles, but had some buggy method of trying to figure out how fast the CPU was and compensate, which sometimes worked and sometimes didn’t.
I had a computer back in the day where you could press a button on the front to make the CPU run slower. Useful for those old 8086-era games that ran unplayably fast on a 486.
The Kroz games were open-sourced a few years ago, and since the programmers didn’t know how to (or couldn’t?) use accurate timing, they just ran an empty while loop in between cycles. You could specify that you had a “faster” computer to make the loops longer.
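A rough sketch of the two timing approaches being contrasted here, in Python (illustrative only, not the actual Kroz or Galactix code):

    import time

    def delay_busy_loop(iterations: int) -> None:
        """CPU-speed-dependent delay: finishes sooner on a faster machine."""
        for _ in range(iterations):
            pass

    def delay_wall_clock(seconds: float) -> None:
        """Wall-clock delay: takes the same real time on any machine."""
        end = time.perf_counter() + seconds
        while time.perf_counter() < end:
            pass

    # A game that paces each frame with delay_busy_loop(100_000) runs at whatever
    # speed the CPU allows (the empty-loop approach described above, with a "faster
    # computer" setting just meaning a larger iteration count), while one that uses
    # delay_wall_clock(1 / 60) runs at a steady rate regardless of hardware.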
Acymetric, the DOSBox emulator has a function that allows you to adjust the CPU speed of your emulation on the fly. I found it very handy for some older games.
I’m pretty sure getting Galactix to run was one of the first times I learned about the 640k memory barrier.
Also I have a distinct memory in a late elementary school class, having a writing assignment where you could write about anything you wanted, and I described everything about Galactix in extreme detail to meet the length requirement. I should have apologized to my teacher later.
Dune
Dune 2
The Wing Commander series
Warcraft 1 and 2
Full Throttle
The Kings Quest series
Star Wars: Rebel Assault
Leisure Suit Larry 2
Myst
Battle Chess
Theme Park
Master of Magic is an absolute classic that is still unmatched despite many efforts to imitate it.
Fantasy General was pretty much just Panzer General, but it had an amazing soundtrack.
Fantasy Empires was… weird, but a lot of fun.
And of course Dungeon Hack was one of the best D&D games
Master of Magic is an absolute classic that is still unmatched despite many efforts to imitate it.
What's up with this? It seems like an indie developer could put out a multi-racial Civ game that cut-and-pastes MoM's magic system, and its zooming in on blocks of troops when your units meet resistance, and at least match it, with HD graphics.
There have been several games that had parts of the MoM set. I can’t remember their names, since I tend to play for a few hours and just get disappointed in the bits that are missing. The things from MoM that I like to see:
1) A ton of spells from distinct sets, that include a wide range of effects (enchantments over the entire map, buffs for units, buffs for cities, creation/summoning, battle spells)
1a) While you can customize what your character is good at, nobody can get all of the spells
2) Races that have unique playstyles (Draconians all getting flight, dark elves all generating magic for example)
3) Cool and customizable heroes (I still love Warrax’s design, even if he looks kinda generic now)
4) Armies clashing at once (a lot of games now have turned to the 1 unit per hex system, which isn’t nearly as fun)
MoM also had a bunch of other features that were cool, but not vital to recreating it. The magic nodes lead to natural points to fight over outside of cities. The ruins/dungeons provided a fun early-mid game difficulty that encouraged you to never neglect your army, especially with how good the rewards could be. The two-world system with multiple points of transition between the two added some complexity to the strategic layer. And being able to design custom items for your heroes made them feel a lot more unique.
Have you tried Thea? The inventory system is super cumbersome, the random starts can be frustrating, but the storyline is worth completing once or twice
Unfortunately, I'm addicted to the broken trait system where you can create mana by creating and destroying items, and therefore get your heroes super well equipped early on (I forget which combo of attributes it is, but basically they mistakenly used arithmetic rather than geometric discounting), and have a horde of blood hounds roaming the map while doing so.
In any modern game this oversight would be immediately patched out.
*sigh* I don't currently have anything running any version of Windows. If it won't play in Wine, or DOSBox, or natively on Mac or Linux, I don't get to play it – and this won't; I just checked.
I recall I got Duke Nukem when I was like 8 years old, when my father bought me one of those airplane-style joysticks for the PC, and it came bundled with 3 games including that one. I somehow managed to beat the entire 1st episode while playing with the joystick, which was an absolutely atrocious way to control a sidescrolling shooter compared to just the keyboard. It was the very first video game that I ever beat, so it has a spot in my heart.
Indiana Jones and the Fate of Atlantis also has a spot due to it being the 1st point-and-click adventure game I ever beat, and I played it with my best friend at the time in 9th grade, talking and brainstorming with each other to figure out solutions to the various puzzles. Also, the line “is that a broken ship mast in your pocket, or are you just happy to see me?” had us giggling like 9th grade boys and shocked that such a line could make it into a video game.
Hey, I had Lemmings for the ZX Spectrum. Monochrome, no mouse control (cursor was controlled with the arrow keys), and you had to load every level from tape separately, and re-load if you failed, but it was pretty impressive that they managed to squeeze the game onto that machine at all.
(Also, the Dizzy games. Those were good fun and ate up probably an unreasonably large chunk of my childhood)
In the early FPS genre, DOOM was the Alexander the Great to Wolfenstein's Philip. Heady days, those were. Everyone was happily riding the Apogee Software "first episode is free" business-model bandwagon, and here comes id Software with all these promises that sound trivial today but were a big deal back then: full 360-degree motion, 3D (well, 2.5D), high FPS on a dinky old VGA card, full sound and music, bullet holes that stay on the wall, etc. And then it delivered on every single thing. Everyone thought John Carmack was an Einstein-level genius. (They thought John Romero was a rockstar, too, until Daikatana…)
Star Control 2 was great for the story. The gameplay (fly around, mine, upgrade, repeat) has been largely co-opted by later franchises (Mass Effect, Far Cry, Assassin’s Creed), with the exception of wacky ships with different abilities and playstyles. But the story was a great mash of epic and funny. Plus, the music was basically crowdsourced to a bunch of Finns from the demoscene. And they figured out how to play it out of a plain PC speaker.
Lemmings has a modern-day successor in the Dibbles series, playable on Kongregate. Roughly the same morbid whimsy, and as mindbendingly hard.
NetHack is still my favorite of the roguelikes – hack'n'slash on a randomized map where death is permanent. You find potions, scrolls, and wands, all unidentified, including the Identify scroll. You can try various experiments to figure out what's what, though – you can dip your weapon in a potion, try writing on the floor with a wand, drop a scroll on the floor and see if your pet will walk over it, and as a last resort, try it and hope. Monsters leave corpses. You can eat them. Sometimes this can help you. Sometimes not. Eating a dead cockatrice, for example, is not recommended. However, you can wield it as a weapon. (Doing so without gloves is a bad idea.) This is very powerful (unless you're fighting a xorn), but be careful; if you're carrying too much and descend stairs, you might fall, and will likely fall on that corpse. These are a few of the literally hundreds of interactions different objects have with you and each other. And "the devteam thought of everything". And it runs on a VT100 terminal – you don't even need a graphics card.
Play the first Diablo and you’ll notice what it borrowed from the roguelikes, as well as what it threw away.
Zork was among the first mass-market text adventures. Great story, and freaking hard. Later came the Spellcasting series, which was easier, but featured Steve Meretzky's spot-on humor.
Civilization and MOO were both Microprose games at the time, which had a reputation for VGA-era games that were massively complex yet fun. Not just 4X, but similar sims in general. One I haven’t seen mentioned here was Darklands, an RPG with serious attention paid to the history and mythology of medieval Germany. The monsters weren’t stock D&D stuff. Kobolds, for instance, weren’t wimpy dogmen, but rather house spirits. Your heroes didn’t have classes, but could specialize in skills, and could pray to dozens of Christian saints for favor. It felt like you were learning about medieval German life as you played. Darklands was hinted as the first in a series of such games set in various parts of the world, but I guess it didn’t sell well enough.
Dragonlance had a flying dragon combat simulator. That was pretty cool. Never got to play it all the way through though.
Ultima had a lot of games, but I only ever played the Underworld series. 3D, but the screen was tiny, and you got this nice feel of claustrophobia and fear of what was waiting out there in the dark. Meanwhile, I remember going through every possible combination of my runestones to discover new magical spells. I played those games to death.
King’s Quest was great. I only really played III and IV. I liked Space Quest even more, and I could go for a sequel today.
Riven was my favorite of the Myst series. (I never played the first.) Great graphics for the time, but mostly I enjoyed being able to solve puzzles by imagining how a device would work if it were made to be used by the inhabitants there on a routine basis. You could logic your way through. The latest game I’ve played in this subgenre is Obduction, just a few years ago. It’s not bad.
One I haven’t seen mentioned here was Darklands, an RPG with serious attention paid to the history and mythology of medieval Germany. The monsters weren’t stock D&D stuff. Kobolds, for instance, weren’t wimpy dogmen, but rather house spirits. Your heroes didn’t have classes, but could specialize in skills, and could pray to dozens of Christian saints for favor. It felt like you were learning about medieval German life as you played.
Oooh, I’m glad someone finally informed me!
Darklands was hinted as the first in a series of such games set in various parts of the world, but I guess it didn’t sell well enough.
I first played it long after Win95 came out, but Nethack is still the closest thing anyone’s written to an old-school D&D experience on PC, and it’s worth playing for that alone.
But that “old-school” includes things like “brutally difficult to the unprepared” and “highly reliant on memorizing the documentation” and “entirely possible to die to a falling rock trap on your first move”, so caveat emptor.
Incidental use-of-language note – I’ve only ever heard the expression ‘die to [x]’, as opposed to the more usual ‘die from [x]’ in the context of computer games, and it still sounds weird. Is it common these days? I guess prepositions are pretty arbitrary, but I’d have thought that ‘die from’ was well-enough established that it would crowd out any new forms even in computer game territory.
Star Wars Rebellion remains one of the best, and most underrated, strategy games of all time. My friends and I still play it, using multiple layers of emulation.
I mostly played shareware games in the DOS era. I found PTROOPER.EXE (one of the very first PC games) memorable, though I could never take out those damn jets.
At school we had a few computers from pre-DOS platforms. There was the Apple II series, of course, but also the TI-99/4A, which practically nobody remembers today. I was the one kid who didn’t put in a cartridge and wrote little programs in BASIC to amuse myself.
Apologies for the unwarranted nerdiness, but the “precede DOS” bit triggered me. 🙂
I knew that the only one that could possibly qualify was the original Ultima for the Apple II. Did it? Turns out that it very much hinges on what we mean by “precede” and “DOS”.
A bit of quick research tells us that Ultima was released in June 1981, but also that CPC (the publisher) registered a copyright for it in September 1980.
What about DOS?
Wiki says that the initial MS-DOS release was in August 1981 (presumably as PC-DOS, the IBM-branded version for the IBM PC), but MS-DOS was itself a re-branding of SCP’s 86-DOS that was released somewhere in mid-1980.
It would therefore seem that Ultima preceded DOS, if by “DOS” we mean the Microsoft/IBM-branded release for the IBM PC, but it may not have preceded DOS, if by “DOS” we mean QDOS/86-DOS prior to MS/IBM involvement (and the PC itself). “May not” because we should also specify whether we’re interested in the release date or the completion date for Ultima (if release date, then no; if completion, maybe).
This concludes our home computing trivia segment for the day.
No Apple ][ disk game preceded “DOS” if you’re being pedantic, because the first Apple disk drives were released with Apple’s Disk Operating System (DOS 3.1, I believe; they started with 3.0 but I don’t think it was released). There were some Apple ][ games which preceded DOS, notably Wozniak’s Little Brick Out. I believe Scott Adams Adventureland may have preceded it also. Of course there were also pre-DOS arcade games — Pong, Spacewar/Galaxy Game, and Space Invaders for instance.
I may be alone on this, but I always thought Zak McKracken and the Alien Mindbenders was much better than Maniac Mansion (which got a lot more attention).
I loved the hell out of that game. In part because it was the only point and click adventure game of that era which I actually owned myself, rather than playing bits and pieces of at somebody else’s house, but having played others now, I’ll still say that while it may not have been very well polished, it had a grander scope, sillier premise, and better sense of humor than its contemporaries, including Maniac Mansion.
That game sparked my love for weird Weekly World News-style fake tabloids, which still persists to this day, and I'll never look at a pair of Groucho glasses or a microwave the same way again. Plus, for some reason I always got the impression that if he were a game designer rather than a musician, this would have been the game Weird Al Yankovic would have designed. I don't know why, but the senses of humor always seemed remarkably similar.
The Super Solvers games: Gizmos and Gadgets, Operation Neptune, Ancient Empires, Midnight Rescue, Treasure Mathstorm. Ancient Empires and Operation Neptune are the standouts here – complicated and challenging even before they try to teach you math or history.
Also, Raptor: Call of the Shadows was the best shmup on DOS, while Duke Nukem and Cosmo are tied as my favorite DOS platformers. Really, anything by Apogee back in the day was a pretty good bet.
Also, one game that I didn’t really like as a kid, but revisited as an adult and found amazingly unique: Sid Meier’s Covert Action. So many spy games have been made, but nobody else has made one that’s really about looking for clues rather than just stealthing or shooting your way through a mission that has a clue at the end. It gave you freedom to investigate anywhere and choose how you gathered information, which meant that you had to think about where you wanted to go and where you’d be likely to find clues.
Sid Meier’s Pirates — Man, this game was fun. Sailing, sword-fighting, sun-sighting, treasure digging,…
Sid Meier’s Civilization — Just one more turn!
Wizardry 6/7 — I loved making uber-characters in these. I still had my saved game from 7 to import when 8 finally came out!
Might & Magic 4/5 (World of Xeen) — Still the best Might & Magic games.
Ultima Underworld — I remember being absolutely amazed by the graphics and my character just being able to walk in any direction
Quest for Glory (Hero’s Quest) — This whole series was great fun, and you could replay each game as the different character types, solving the puzzles in different ways each time.
Heroes of Might & Magic — My brother and I played this game against each other for hours at a time, winning the same areas back and forth.
Albion — One of the more original RPG worlds ever.
Star Control 2 — This was just so much fun exploring the galaxy, meeting the different aliens, and finally beating those Ur-Quan.
Wing Commander series — First person space combat and even good cut scenes.
AD&D Gold Box Games — I always preferred the low level ones, but they were all pretty good. I even liked the Buck Rogers ones.
Railroad Tycoon — Laying track and scheduling trains…
Out Of This World (a.k.a. Another World) — Accidentally transported to an alien world, you make an alien friend and escape danger.
System Shock — So creepy. SHODAN scared the crap out of me.
Jagged Alliance — Really fun combat, really interesting characters.
A whole bunch of Infocom games — I still have my hand-drawn maps!
Gabriel Knight — Man, I loved these games. They were so atmospheric.
Frederik Pohl's Gateway — I loved the book, and the game was good, too (with a completely different plot). Legend also made some other good games, like Eric the Unready.
Populous — The second one was better, too.
Betrayal at Krondor — Amazing game.
Prince of Persia — The original. Like Karateka, but much better.
Wasteland — Precursor to Fallout.
I never actually owned a Microsoft PC from that era — my first was a Win95 machine. The Atari ST had a pretty good stable, though. Some of my favorites on it were Bitmap Brothers releases: GODS, Magic Pockets, Cadaver. Other titles that stick in my mind include Blood Money (Psygnosis), Oids (FTL), and Archipelagos (Astral Software). And Lemmings, which also saw a lot of releases on other platforms.
The original Marathon (1994) just makes it in, which means that Pathways into Darkness also does. I played a lot of Warlords and its sequel on the early Macs, too.
I see I’m not the only one who’s mentioned Betrayal at Krondor. One of the few RPGs that made travelling all over the place actually feel like travelling: the need for food, travelling by night being a dumb idea, etc.
I still fire up Master of Orion from time to time. A game in a small map takes two hours, and it can be very brutal, so if I am in the mood of trying to conquer the galaxy, I get to experience a full game in one sitting.
It's my favorite game of the genre, surpassing things like Civilization because Civ adds way too much useless busywork. MoO just has some sliders to build what you want and one planet per system (Master of Orion 2 adds more than one planet per system and a ton of busywork with it), and that's all.
I wish MoO 1 would come to other platforms, untouched. The sequels and the recent reboot just add stuff on top of it for no good reason.
As a small child, I was obsessed with The Ancient Art of War. It was arguably the first RTS; essentially the 5 1⁄4-inch floppy version of the Total War series. You had both a strategic map where you maneuvered squads over varied terrain and dealt with attrition, and tactical battles where user-made formations of units (with three types) had a linear, mostly automatic battle (although you did have control over retreats/advancements). Still kind of amazing to me that they were able to program something that sophisticated in 1985.
That was a good game. Never got a chance to play the follow-ups (At Sea and In the Skies), but the original was one of my faves back in the day.
Wiki says there’s a new version out, and Moby Games has some additional info, but it’s not available through my usual sources (Steam and GOG), so I’ll be giving it a pass, it seems. Looking through the screenshots on MG, I’m not really sold on the graphical style. I like that they’re trying to keep it simple, but I feel that it just goes to show that a good pixel artist is worth every penny.
Turns out Archive.org has a playable version of the original. It might be time for the kingdom of Ch’u to put Wu back in its place again…
So I’ve seen Wing Commander mentioned repeatedly in this thread, and did some reading.
So it’s a MilSF flight sim where your carrier fights space kitties? And the first games made extensive use of pixel art cut scenes, then switched to digital movie sequences in Wing Commander III. Wow, remember when digitized graphics of actors were a thing? I know I’ve compared games in the standard AAA game template of “walk around a 3D world, fight, and experience in-engine cut scenes” to Hollywood blockbusters here before, but whatever happened to that earlier attempt to make video games movie-like?
Somewhat more seriously and kind of related (although not pre-Win95), the cut scene that got me the most hyped was the intro for Mechwarrior 4: Vengeance. Kind of shockingly well done for live action acting in a video game (it was admittedly brief).
Not only was Wing Commander III using live-action cutscenes, they featured Mark Hamill, Malcolm McDowell, John Rhys-Davies, Tom Wilson (Biff from the Back to the Future movies), and… Ginger Lynn Allen.
There were even thumbnail displays during missions of your fellow squadmates, many of whom I recognized as college classmates. (WC3 was made by Origin Systems, based in Austin. I was attending UT at the time.)
Not only was Wing Commander III using live-action cutscenes, they featured Mark Hamill, Malcolm McDowell, John Rhys-Davies, Tom Wilson (Biff from the Back to the Future movies)
Indeed. It would have been relevant to the point for me to mention that.
The Command & Conquer series had the more central examples of live-action cutscenes with non-actors, and when Red Alert 3 came out with a cut-scene cast of Hollywood actors, they defended it in the press as charmingly retro.
There were also games in that era that used in-engine 2D graphics of filmed stuntpeople. Think Mortal Kombat.
Wow, remember when digitized graphics of actors were a thing?
Heh, I mean, depending on what you mean by “digitized”, I may have a few upcoming titles like Death Stranding and Cyberpunk 2077 [WARNING: Some Violence, Blood, and Adult Language] to draw your attention to.
My personal “Golden Age” is probably more like 1995 through the early 00s due to games like Baldur’s Gate 1-2, Planescape: Torment, Fallout 1-2, and so on, but there are plenty of earlier games I quite like.
My list won’t be exhaustive because Littleskad has already hit so many of them that if not for the addition of strategy and sim games I never cared for I’d think he was my evil (good?) twin. So you can +1 pretty much everything he listed, but I will go into a bit more detail on a few:
Quest For Glory 1-4: You Got Your RPG in my Point-And-Click Adventure! No, you got your Point-And-Click Adventure in my RPG! This is admittedly sort of an acquired taste, but I thought that the traditional and amusingly mean-spirited Sierra Deaths meshed well with old school murderhoboing, and I actually liked the mix of silly jokes and puns with surprisingly interesting serious characters. Not to mention the basic conceit of a game that played very differently for different classes with unique content gave it a lot of replayability for the time. There are high-res remakes of some of the earlier titles, and even a new spiritual successor by the original creators in the form of Hero-U: Rogue To Redemption on Steam. Pro-Tip: If playing through the originals, import your QFG1 hero into the sequels and either go Paladin (which has its own, increasingly rich, set of story options as you play, and which the creators obviously favored), or multi-class into magic (which was sort of an unintended glitch) and as a fighter-mage or rogue-mage utterly BREAK the games over your knee in all sorts of amusing ways.
Betrayal At Krondor: I'm adding my voice to Acymetric and Dndnrsn here, because it's a massively underappreciated game. In addition to a surprisingly gripping story, a satisfying combat system, and the feel of travel that has already been mentioned, I loved the way the text was designed to read as if you were reading one of Raymond E. Feist's novels.
System Shock: The game that gave us the Audio Log, and arguably some of the best versions of it. This is the grandparent whose legacy gave birth to series like Bioshock and Deus Ex. Plus, if you’re a SSC poster, you’ll probably enjoy one of the great unfriendly AIs, up there with AM, Hal, and Durandal. Speaking of Durandal….
Marathon Trilogy: These came to Mac first, but they’re the spiritual parent to the Halo games and already display Bungie’s love of certain SF tropes: Supersoldiers in norse-themed power armor, complex multi-species alien empires, deep time, and AIs as both mission control and major character.
And now, one to add:
Buck Rogers: Matrix Cubed: A SF RPG from SSI using their Gold Box engine (The gold box games have been mentioned already, and I’ll second them as classics), managing to combine space combat, planetary exploration, and a surprisingly interesting setting for something based on Buck Rogers of all things. TSR’s Buck Rogers XXVc was a pretty solid tabletop RPG, and I always wanted to see more done with it.
System Shock was good, but crippled by the fact that nobody had invented mouselook yet. If you’re going to replay it you’ll definitely want the re-released version that adds it.
System Shock 2 was really good, but also a bit too recent for this question.
they’re the spiritual parent to the Halo games and already display Bungie’s love of certain SF tropes: Supersoldiers in norse-themed power armor, complex multi-species alien empires, deep time, and AIs as both mission control and major character.
I'd say Marathon and its sequels have in many ways the better story. Mostly because there's more room for it: Halo, shipping on DVD, told its story through cutscenes and mission dialogue. Marathon originally shipped on floppies, later on CD-ROM, and couldn't have fit that on disk, so it told its story through computer terminals scattered around the levels. They could go on for pages, and they ranged from straightforward to ominous to screamingly funny. You really got to know Leela and Tycho and especially Durandal, more so than anyone in the later games' cast.
It helps that Feist actually worked on the game, himself.
But yeah, even if you ignore the story and writing, Betrayal at Krondor was lightyears ahead of its time, (i.e. much like how lightyears measure distance and not time, what Betrayal at Krondor was doing and what other CRPGs of the time were doing couldn’t really be compared using the same unit of measurement :p ) and doesn’t get nearly the appreciation it deserves.
Frankly, I’m amazed that nobody has mentioned X-Com yet! It may have come near the end of this era, but it was definitely pre-Windows 95!
As much as I love the remakes (the Firaxis X-Com was the game that made me think, "damn, why has nobody made a 4th Edition D&D-based video game yet? It might suck as a tabletop RPG system, but as a turn-based strategy video game it would be amazing"), I still haven't seen a game in the genre that manages to capture what the original X-Com did, from the complexity of both the strategic and tactical layers, and how they complemented each other, to the general atmosphere and feeling of terror you get over what might be lurking in the fog of war. It may have been broken in some ways, but it's still one of the greatest games of all time, warts and all.
I'm looking for an in-depth, thorough, and rigorous defense of the idea of technological unemployment (that it is a credible risk we should be worried about), and, most importantly, one that is not really an argument about AGI, superintelligence, artificial conscious beings that have rights, etc. In other words, restricted entirely to advanced technology, without resorting to anything outside the realm of "prosaic" tech.
The reason I’m looking for this is mostly because 1) People like Eric Weinstein and Andrew Yang are convinced that it is or will be a problem very soon, and they are smart people, and 2) Classical economics basically concludes that, for many reasons, tech progress should not result in long term, chronically high unemployment within a free market society. 3) Also, because our discussions surrounding this issue as a rule involve arguments for or against certain policy initiatives, most of which, due to reasons in 2), would seem to be more harmful than helpful in the long term.
If you look at the data, it’s pretty clear that it’s not happening right now. It only appears that way because the effects of the recession took a really long time to recover from and more baby boomers are retiring.
The most steelmanned position I’ve seen is this: Long term technological unemployment is not really a thing. While some people disagree with this, they are mostly practicing incredibly heterodox economics and shouldn’t be taken too seriously.
However, short term technological unemployment is absolutely a thing and no serious person thinks otherwise. There is strong evidence this has a permanent, negative effect on the workers and communities it affects that they do not ever recover from. At best, their children do. At worst, it can lead to generational poverty, because even as society returns to full employment, the community or the descendants of the individual experiencing it still feel the ripple effects.
On top of that, unrest is a thing too. Even where the wages of technological progress are obvious, people who are displaced will suffer. They will object to this suffering even if it is in the service of their narrow interests at the expense of society.
This justifies policies that look like (but are not exactly equal to) technological unemployment remedies. Listen to Yang actually talk about the Freedom Dividend. He uses corporate language for a reason: by giving people a literal, dividend-paying share in America, he hopes people will take an interest in the overall performance of America. This is how he sells it to the rich and corporations: it will give people an interest in general economic performance and reduce pressure for (in his opinion destructive) policies like a $15-an-hour minimum wage. It will reduce things like Ludditism.
There are reasons to critique that position but it’s not obviously wrong.
To piggy back on this, long haul truckers currently number something like 3.5 million in the US. Autonomous self-driving trucks, even if they are only from and to the local “last mile post” hub, are going to really hurt that employment sector.
It was hard enough to get an experienced driver backed up properly in our too-small, poorly configured dock area (let alone just getting them backed up to the right dock). An automated truck would have been a nightmare.
Agreed; if anything you'd have an "automated" truck driven by a real driver until it reaches a truck station just before the freeway. The truckers would commute to the truck station each day, and a bus would take them to the docks, etc.
FWIW, as someone who worked at a major UPS hub, a substantial portion of the complications at our loading docks were very human in nature, including angry shifter drivers intentionally parking in obnoxious ways and then calling for a union rep if anyone but their direct report asked them to move their vehicle.
To piggy back on this, long haul truckers currently number something like 3.5 million in the US. Autonomous self-driving trucks, even if they are only from and to the local “last mile post” hub, are going to really hurt that employment sector.
The question is WHEN do they hurt that employment sector, and the answer is AFTER some untold number of engineering hours have been invested, and after production and retrofitting of self driving trucks is instituted.
Manufacturing as a sector stopped growing decades ago, largely due to automation, but the total number of jobs was roughly as high in 2000 as it was in the late 60s. The large declines prior to 2000 are associated with recessions, with employment picking back up afterward (at least in total number of employed), and not with the introduction of masses of labor-saving devices.
I think we’re going to see some extreme regulatory capture in this area, such as a requirement that there be a human in the cab at all times that can take over driving if “needed”.
However, short term technological unemployment is absolutely a thing and no serious person thinks otherwise
It depends on what you mean; typically people discussing long term technological UE are discussing net UE, while some people discuss specific UE (i.e. coal miners losing work and remaining unemployed for long stretches).
There is strong evidence this has a permanent, negative effect on the workers and communities it affects that they do not ever recover from.
It depends on what you mean; typically people discussing long term technological UE are discussing net UE, while some people discuss specific UE (i.e. coal miners losing work and remaining unemployed for long stretches).
To take HBC's example, no one seriously denies that automated trucks will cause truck drivers to become unemployed or a net increase in unemployment for at least some time period. Even for those who think that they will have a bunch of equally good jobs waiting for them (which I've never heard), there'd at least be frictional unemployment.
Or perhaps someone does and I’m unaware of them. Do you know of anyone?
To take HBC’s example, no one seriously denies that automated trucks will cause truck drivers to become unemployed or a net increase in unemployment for at least some time period.
Unemployment is not simply no longer working at a job; it is the loss of a job and the inability to find another. In the context of this discussion, the receipt of unemployment benefits would not by itself be a sufficient criterion, as preferring UE benefits to a job that is available is not out of the question; however, that is just a caveat I want in there from the get-go.
For a truck driver to become unemployed due to self-driving trucks, he will have to
1. Lose his job due to self-driving trucks
and
2. Be unable to find a new job.
There is functionally no reason, either empirical or theoretical, to believe that under market conditions these two things will be met for any substantial portion of the labor force. Empirically, there have been multiple transportation revolutions that greatly reduced the number of man-hours necessary to transport goods. Trains are a great example (thanks to, I think, John Schilling, who brought this up months ago in one of the open threads discussing driverless cars): a small number of operators could run a train that carried enormous quantities of goods much further and faster than was previously possible. There was no real UE associated with the expansion of train lines because, while expanding train lines cost some jobs, it opened up an enormous number of others. What you might expect to be a transition period, which is the claim of temporary net increases in UE, is unlikely to occur for structural reasons. The basic logic goes as follows:
1. Trains replace horses and carts.
2. Horses and carts do not stop working or being valuable until AFTER trains start running.
3. Trains require large amounts of capital investment which includes labor.
So to complete the circle you have to start out with HIGHER employment during the period in which people are working horse and buggy plus also designing, testing and building locomotives, train cars, signals, tracks etc, etc, etc. There is no particular reason to expect a discontinuity of work here, as every freight load requires up front labor while also opening up opportunities on both ends of the load.
Even for those who think that they will have a bunch of equally good jobs waiting for them (which I've never heard), there'd at least be frictional unemployment.
There is always frictional UE, but technology typically reduces rather than increases frictions, and that reduction is applied across the entire economy.
Care to elaborate?
The full quote
There is strong evidence this has a permanent, negative effect on the workers and communities it affects that they do not ever recover from. At best, their children do. At worst, it can lead to generational poverty, because even as society returns to full employment, the community or the descendants of the individual experiencing it still feel the ripple effects.
The areas where these effects are observed are typically one-factory towns/one-industry cities. Industry* brings with it many competitive benefits: it produces lots of infrastructure, allows for dense living, and opens up many other investment opportunities. Towns that boom from a single employer but fail to diversify do so because of some significant flaws, and these are the places that end up with the worst outcomes. Blaming these outcomes on the expansion of a new industry, or on a trade agreement, is like blaming them on the inequitable distribution of mineral wealth, or on competence in governance, or on luck. The shifts are real for the people experiencing them, but preventing the shift wouldn't reduce the number of people who do experience them.
*Some exceptions would be industries that produce a lot of on site pollution, but even these usually end up with net positive externalities (see stockyards in Chicago).
Unemployment is not simply no longer working at a job; it is the loss of a job and the inability to find another. In the context of this discussion, the receipt of unemployment benefits would not by itself be a sufficient criterion, as preferring UE benefits to a job that is available is not out of the question; however, that is just a caveat I want in there from the get-go.
We’re using two definitions of unemployment then. I mean something closer to the current federal definition. In order for you to prove my statement wrong, you would need to prove that automation will not lead to anyone getting fired and then spending some time not working while searching for a new job. No one, as far as I know, denies that will happen.
The areas where these effects are observed are typically one-factory towns/one-industry cities.
To the contrary, it’s not a limited phenomenon. Imagine, for example, someone who spends ten years working in a factory. They’ve invested a lot in factory worker skills. When they go into a new industry they have to (to some extent) start learning new skills and from the bottom of a career ladder. This depresses total lifetime earnings.
I’ll decline to comment on the rest. Your points are valid and I’m steelmanning someone else’s position.
We’re using two definitions of unemployment then. I mean something closer to the current federal definition. In order for you to prove my statement wrong, you would need to prove that automation will not lead to anyone getting fired and then spending some time not working while searching for a new job. No one, as far as I know, denies that will happen.
Those are two separate discussions, one is ‘what happens under our current system’ vs ‘what happens under hypothetical market capitalism’, but it was just a caveat I put in there so that I can refer to it later if I want to, none of my other points relied on it.
you would need to prove that automation will not lead to anyone getting fired and then spending some time not working while searching for a new job
No, because UE for truckers isn’t at zero. If there are 3.5 million truckers with 5% of them generally unemployed at any time then there are 175,000 unemployed truckers. The economic shift that creates driver less trucks could cause job shifts such that the total number of UE truckers was never more than 175,000 at any one time, which would refute the general claim even if some of those truckers on UE lost their jobs to driver-less trucks.
To the contrary, it’s not a limited phenomenon. Imagine, for example, someone who spends ten years working in a factory. They’ve invested a lot in factory worker skills. When they go into a new industry they have to (to some extent) start learning new skills and from the bottom of a career ladder. This depresses total lifetime earnings.
But you have ignored everything else. These shifts cause higher productivity and increased wealth; whatever caused their factory to close was related to the things that made cars better, air conditioning more accessible, general working conditions better, vaccines available for their kids, etc. If the only effect of technological growth were better sewing machines that made T-shirts 1% cheaper and cost you your job at the factory, then yes, that would hurt you on net, but that isn't how it goes.
Long term technological unemployment is not really a thing. While some people disagree with this, they are mostly practicing incredibly heterodox economics and shouldn’t be taken too seriously.
Can you elaborate? Why is it impossible that eventually the average human will be unemployable in the same way that a chimp or a severely disabled human (e.g. mentally retarded with IQ < 70) is currently unemployable?
Even if we had no statutory minimum wage, there’s a minimum amount a person must make to keep themselves alive. So what’s to keep automation from bringing the value (to employers) of average humans below this point?
Automation makes things cheaper; the cheaper things get, the less you need to earn to meet that minimum. Humans have been able to live above subsistence level with roughly zero modern technology helping them; it is astonishingly unlikely that they would fall below it with modern tech.
In any workplace, insufficiently competent humans are value-destroying not value-creating. (Surely you have encountered some of these.) If you automate away all the basic non-cognitive jobs it’s entirely possible that most people will be zero or negative marginal product in the remaining workplaces.
It seems like the decrease in costs is distributed across the whole population, while the decrease in jobs/pay is more isolated to the given industry, meaning that the cost decrease, while a net positive for society, does not compensate for the changes to the people in that industry. In other words, everyone's purchasing power goes up because goods are cheaper, but the people in the industry see their purchasing power go down by more than it went up, as a result of job loss/decreased pay.
Additionally, doesn’t it depend a bit on what is getting cheaper? I realize in the case of transportation that would appear to be “everything that gets transported”. In a hypothetical, automation makes Luxury Good X cheaper, increasing the purchasing power of the people who were buying it, making it accessible to the people who previously couldn’t afford it, but making the people who used to make it worse off because they still can’t afford it and now they don’t have a job making it.
On the other hand, making affordable clothes even cheaper, or food, would theoretically increase everyone’s purchasing power (we’ll briefly ignore that this likely comes at the expense of 12 year olds in China or whatever).
It seems like the decrease in costs is distributed across the whole population, while the decrease in jobs/pay is more isolated to the given industry, meaning that the cost decrease, while a net positive for society, does not compensate for the changes to the people in that industry
It does not seem this way at all to me, primarily because technological advancements are happening across the board and impacting all industries. If you isolate one advance like "driverless trucks" then you can create imaginary problems where truck drivers lose 100% of their pay while everyone else sees a 1% increase in theirs, but there is no specialized industry creating self-driving trucks while not impacting every other facet of the economy. The advancements that allow us to do more than dream of driverless vehicles will affect every corner of the economy. There will be some unevenness in the distribution, but that distribution will be net positive, and only an idiosyncratic minority will be on the negative end of things.
In any workplace, insufficiently competent humans are value-destroying not value-creating. (Surely you have encountered some of these.)
Competence is determined by level of responsibility. When I worked the lower end of legal US jobs, stuff like dishwashing and night-shift bakery work, the behavioral issues were what was value-destroying. I worked with several mentally handicapped dishwashers, and one was absolutely value-destroying: the alcoholic one who harassed all the female servers. The others (I remember 2) had a positive level of production (i.e. >$0 an hour worth), generally showing up on time, washing dishes and not breaking stuff.
The non-handicapped people I have known who were value-destroying all did so through behavior: stealing, not working, lying, coming into work high or drunk, or not coming into work at all.
My wife reports competence issues among programmers she hires, and it is value-destroying for her to sign a programmer who cannot (or will not) do, or learn to do, the things they were hired to do. These people are being hired for jobs at $80,000+, not remotely near subsistence wages.
So you see how people can be value destroying (i.e. negative value), and yet are puzzled by the idea that the value of an employee could possibly be lower than the cost of upkeep?
Imagine a world where anything that employs large numbers of people becomes a target for automation, for obvious reasons. No more dishwashing jobs or the movie-theater ticket people and the like. A lot of the rest of the jobs are going to be the sorts of things that don’t easily absorb unskilled labor. If any type of job *does* start to absorb lots of the surplus unskilled labor, then it suddenly becomes worth automating too, and those jobs go away again. Meanwhile, the jobs which are too difficult to automate are also the ones where unskilled labor has negative value.
I don’t find this terribly implausible.
(And of course in this scenario you can imagine that waterline for what counts as “skilled” labor will continue to rise, until we’re all unskilled workers contributing little or nothing to the work of the productive AIs.)
So you see how people can be value destroying (i.e. negative value), and yet are puzzled by the idea that the value of an employee could possibly be lower than the cost of upkeep?
No, I am not puzzled that some people through a combination of traits/actions/behaviors could be value destroying, I am puzzled by the claims that people whose combination of behaviors/work ethic/intelligence are currently value creating could suddenly become value destroying (or zero value).
To be more specific, in my view there are three basic qualities a worker can have: intelligence, industriousness, and good behavior; being near zero in any one of them doesn't disqualify you on its own. As two of the categories are at least partially in the control of most people, I don't see the difficulty of hiring 70 IQ people translating into the majority of people being unable to work.
As two of the categories are at least partially in the control of most people
I think I agree with your overall thesis here, but a minor nitpick.
I predict that science will eventually discover that no, you aren’t really “in control” of any of these things. Someone can no better “become a harder worker” than they can “become more intelligent.” Industriousness and/or agreeableness will eventually be discovered to be just as heritable as intelligence.
Machines beat any human in industriousness, and of course will not have bad behavior either. Meanwhile just about any job requiring intelligence is likely to have negative marginal product workers.
Machines beat any human in industriousness, and of course will not have bad behavior either.
Comparative advantage makes these types of statements irrelevant. The fact that someone or some class of people are better than you at anything or even everything doesn’t render you useless.
Meanwhile just about any job requiring intelligence is likely to have negative marginal product workers.
Again irrelevant, stemming from the misconception that jobs exist outside of people. Jobs are created to utilize human labor, not the other way around.
The fact that someone or some class of people are better than you at anything or even everything doesn’t render you useless.
Yes, absolute disadvantage with comparative advantage means you're not useless, *if* the thing you're absolutely disadvantaged against can't be cheaply reproduced. That assumption breaks down with automation. (Think it through – the marginal product gets driven down to zero for the absolute-advantaged producer, so the absolutely-disadvantaged producer will have negative marginal product.)
Again irrelevant, stemming from the misconception that jobs exist outside of people. Jobs are created to utilize human labor, not the other way around.
Jobs are created to maximize value produced, not to utilize human labor. It's a happy circumstance that maximizing value in almost all situations currently requires human brains, but if a superior alternative existed, jobs would be organized around that instead.
Yes, absolute disadvantage with comparative advantage means you're not useless, *if* the thing you're absolutely disadvantaged against can't be cheaply reproduced. That assumption breaks down with automation. (Think it through – the marginal product gets driven down to zero for the absolute-advantaged producer, so the absolutely-disadvantaged producer will have negative marginal product.)
No it doesn't, as the ability to cheaply reproduce labor drives down the cost of living toward zero. If marginal product got literally driven down to zero by automation, then prices would also be zero, and 'workers' would need to earn nothing to be able to afford anything. As long as the marginal product of automation is slightly above zero then comparative advantage still exists, and everything is still groovy; you are just driving up real wages by shoving down real prices, rather than by pushing up nominal wages faster than nominal prices.
No, the profit involved in selling to people with zero/negative marginal product is zero. If there are positive-profit opportunities elsewhere in the economy then resources will be redirected to those instead.
Meanwhile, cost of living never reaches zero. Irreducibly you still need 2000 calories a day; implicitly you’re always renting a fair bit of farmland and energy. You also need various amenities like shelter and a reasonably temperature-maintained environment.
Taking the far-future limit as hopefully illustrative – it’s really easy to imagine how a world dominated by Hansonian em-cities might be able to find much better uses for solar energy than growing a bunch of beans for you to eat, and thus to outbid you for it.
Again irrelevant, stemming from the misconception that jobs exist outside of people. Jobs are created to utilize human labor, not the other way around.
If we get AGI or something close (maybe not superintelligences, but just regular intelligence), a bunch of menial jobs could be automated in a short timeframe. Not sure the service economy can absorb all those humans.
I mean, it cannot absorb all available humans right now.
It may all be science fiction, or not happen in our children's lifetimes, though.
Then the question becomes: in a market full of hungry people with outdated skills/low IQs, all willing to work for not much money, why doesn’t someone find a way to employ all that very cheap labor to do something?
Well, we have Uber and all those gig economy apps.
But Uber’s endgame is using automated cars to get rid of their drivers, so they have built in the assumption that the whole gig economy is just a transitional state.
Maybe there will be a gig/service economy for humans in non-creative jobs, but if the AIs are cheaper than humans, there may not be after all.
You have only gotten here by starting from the assumption that people are zero or negative marginal product. If you define everyone that way then you get your dystopia, but you don’t get it through standard economic analysis.
At literally zero marginal cost to produce, producers are indifferent between producing something and giving it away, and not producing it at all. If you are not specifically at that point in the post-scarcity world, then comparative advantage still holds and there is no reason to believe that the population is filled with zero and negative marginal product workers.
Thesis 1. If some worker is positive-value today, they will be positive-value tomorrow.
I think this is right. The positive value might be small, but if you know how to do a favor for someone without breaking a piece of equipment or attempting to rape a coworker, you will always be positive-value.
Thesis 2. Given a positive-value worker, they will earn above minimum wage.
I think this is not true. The solution is obvious.
Thesis 3. Given a positive-value worker, they will earn above their subsistence level.
This could be true, but I don't think it necessarily follows. I think it's true if you have a "correct" level of redistribution. In the normal American-Overton-window of foreseeable market forces, I could imagine that class A (which does not include the worker) captures all the value of increased automation, and it does not show up as reduced costs to the worker paying for their subsistence.
Or maybe it does follow and I’m just not putting the pieces together. I could be convinced here to agree with Thesis 3.
You have only gotten here by starting from the assumption that people are zero or negative marginal product. If you define everyone that way then you get your dystopia, but you don’t get it through standard economic analysis.
I have not started from that assumption. I started from the assumption that things with absolute advantage over many humans could be produced fairly cheaply. Zero marginal product for those humans then follows.
You don’t get it from standard economic analysis because standard economic analysis makes the for-now-reasonable assumption that large amounts of labor supply with absolute advantage cannot be cheaply created.
If you can create large amounts of robots, then the cost of the goods that the robots/humans are producing will end up being set by the robots’ marginal cost of production, which is lower than the humans’ cost of production. Therefore, humans drop out of producing anything. If robots are just better at their jobs than humans, then this can be true even if humans’ wages are zero.
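Toy numbers for that price-setting claim (everything below is invented, purely to show the mechanism): once robot capacity can be expanded cheaply, competition pushes the price down to the robots' marginal cost, and anyone whose own cost of production sits above that price earns nothing by producing.

```python
# Illustrative only: price settles at the cheaply-reproducible producer's marginal cost.
robot_marginal_cost = 0.50   # assumed cost for a robot to make one widget
human_marginal_cost = 5.00   # assumed cost (time, upkeep) for a human to make one

market_price = robot_marginal_cost          # freely reproducible robots set the price
human_margin = market_price - human_marginal_cost

print(f"market price per widget: ${market_price:.2f}")
print(f"human margin per widget: ${human_margin:.2f}")   # negative, so humans drop out
```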
At literally zero marginal cost to produce
Food and other upkeep for humans is never going to be zero marginal cost to produce.
No it doesn’t, as the ability to cheaply reproduce labor drives down the cost of living toward zero.
The ability to cheaply reproduce absolutely all labor might drive the cost of living toward zero, but the ability to cheaply reproduce most labor drives the cost of living towards an asymptote defined by the non-automatable labor. If e.g. agriculture is 95% ditch-digging and 5% Ph.D. agronomists keeping one step ahead of the latest blights and pesticide-resistant bugs, then automation can drive the cost of food down by 95% while driving the market wages of ditch-diggers down by 100%.
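Rough arithmetic for that asymptote, reusing the 95/5 split from the example (the split itself is hypothetical):

```python
# Illustrative: cost of living falls toward the non-automatable share, not toward zero.
cost_of_food = 100.0                     # arbitrary starting price
automatable_share = 0.95                 # assumed ditch-digging share of the cost
residual_share = 1 - automatable_share   # the agronomists

price_after = cost_of_food * residual_share   # ditch-digging cost driven toward zero
print(f"food price: {cost_of_food:.0f} -> {price_after:.0f} (down 95%)")
print("ditch-digger market wage: down ~100%")
```

So the displaced workers' cost of living stops falling well before their market wage does.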
I have not started from that assumption. I started from the assumption that things with absolute advantage over many humans could be produced fairly cheaply. Zero marginal product for those humans then follows.
It does not follow, because of comparative advantage. This is literally the textbook insight that comparative advantage demonstrates: if you are better than I am at growing both apples and oranges, it is still best for you to grow one and trade for my production of the other.
As I have said before, you ONLY get this outcome (from a logical perspective) if all human wants are being met, which requires all humans being able to afford to pay for those goods and services. You cannot push comparative advantage down to zero without violating the laws of conservation of mass/energy. As long as there is some cost to production, there is potential comparative advantage.
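To put toy numbers on the textbook apples-and-oranges point (my own invented figures, not anything from the thread):

```python
# Standard comparative-advantage arithmetic with made-up productivities.
# Units each side can make per day if they spend the whole day on one good.
output = {
    "robot_owner": {"apples": 10, "oranges": 10},   # absolute advantage in both goods
    "worker":      {"apples": 3,  "oranges": 1},
}

for name, o in output.items():
    oranges_per_apple = o["oranges"] / o["apples"]  # oranges given up per apple made
    print(f"{name}: making 1 apple costs {oranges_per_apple:.2f} oranges")

# The robot owner gives up 1.00 orange per apple, the worker only 0.33, so the worker
# holds the comparative advantage in apples and both sides can gain from trade.
```

The dispute upthread is over the qualifier: this gains-from-trade logic holds so long as the absolute-advantaged labor cannot itself be reproduced for less than the worker's cost of upkeep.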
Can you elaborate? Why is it impossible that eventually the average human will be unemployable in the same way that a chimp or a severely disabled human (e.g. mentally retarded with IQ < 70) is currently unemployable?
So your contention is that the necessary IQ to do work is going up? Do you have any evidence that it is? I’m not aware that’s ever happened. The effect I’ve seen people concerned about is that intelligent people make increasingly more money than the average minimum wage type. But that’s not the same as being unemployable.
Anyway, the simple reason is that long term technological unemployment has never been observed and the trends today are not significantly different from the general trend of the last two centuries. The more complex reason is that so long as people are capable of producing some value with their labor, it makes sense for society to utilize that labor. And having worked with relatively low functioning individuals, behavioral problems are a much bigger problem than intelligence, especially for low paying jobs.
So your contention is that the necessary IQ to do work is going up? Do you have any evidence that it is?
Necessary IQ might not be going up, but I think it’s very plausible that necessary education/experience/know-how could get high enough that people just can’t retrain in a reasonable amount of time without income assistance.
Suppose that the difficulty of automating a job is correlated with the complexity of tasks in that job – and thus the difficulty of teaching a human to do it. The lowest-barrier jobs would be lost first (I know there are exceptions to this rule, such as engineering drafters or tax accountants). The jobs created by this economic shift would all be higher-skilled, possibly very high-skilled – the person who just lost his burger-flipping job isn't in any position to retrain as a technician for the BurgerFlipperX29. Someone else will get the technician job, freeing up a space for someone a level lower, who frees up another space, until the economic gains work all the way back to the former burger flipper. But that might just take too long, especially if automation happens in big fits and starts.
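A toy sketch of that vacancy chain, with made-up rungs and timing, just to show why the gains can take a while to trickle back down:

```python
# Hypothetical job ladder: automation deletes a rung-0 job and creates a rung-4 job.
# Each period, a worker one rung below the opening retrains and moves up, so the
# opening only reaches the displaced rung-0 worker after several periods.
RUNGS = 5                 # rung 0 = burger flipper, rung 4 = BurgerFlipperX29 technician
vacancy_at = RUNGS - 1    # the newly created high-skill job
period = 0

while vacancy_at > 0:
    period += 1
    vacancy_at -= 1
    print(f"period {period}: vacancy has moved down to rung {vacancy_at}")

print(f"former burger flipper re-employed after {period} periods")
```

If each period is a year or two of retraining, and automation arrives in bursts, the person at the bottom can be stranded for a long time even though the jobs exist on net.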
Then the question becomes: in a market full of hungry people with outdated skills/low IQs, all willing to work for not much money, why doesn’t someone find a way to employ all that very cheap labor to do something?
Perhaps this reflects my own personal bias, but I’m actually less concerned for those with low intelligence than I am for those who are introverted and have low social skills.
As the economy becomes more and more "service based" that means fewer jobs where you sit at a machine by yourself and press buttons and nobody bothers you, and more jobs where you have to interact with people. As you say, with prices low enough, all of us would consider hiring people to do something: clean our house, watch our kids, cook our meals, etc. But those jobs require a bit of human interaction and the ability to sell yourself as a desirable person to have around.
The programmer who’s kind of a jerk, but you keep him around because you need programming, is made completely obsolete by the invention of a low-cost programming bot. But to the extent that you enjoy talking to your cleaning lady, or having a human scan your items at the grocery store, or whatever, those people stay around.
Consider that right now, there is an entire class of e-girls who are capable of making a decent living for themselves talking to lonely men online. Some of them take their clothes off, but not all of them do. Some of them have even gotten quite rich in the process. And a whole lot of the really successful ones don’t have what you might think of as like, supermodel good looks. While a certain minimum baseline of attractiveness is required, success in this field seems much more highly correlated with social skills than with raw appearance.
So like, 20 years ago, if you said something like “In the future, nobody will leave their house. They’ll get their food delivered instead of going to Hooters. Strip clubs will be abandoned.” You might expect that would be a disaster for the employment of the “cute young female with decent social skills but low IQ” demographic. What will they possibly do once all of those jobs disappear? If you answered “They’ll sit at their computer and broadcast themselves talking to and doing silly things and wearing different outfits for a global audience of attention-starved men who will throw money at them for doing so” you would have been laughed out of the room. Nobody really saw that coming.
1. Previous eras of automation involved small-scale crafts being replaced by humans performing routine tasks with the aid of machines. Very often the labor being made obsolete was equal in complexity to, if not greater than, the work that replaced it. Other times it involved one form of unskilled labor being replaced by another.
2. Modern automation involves the programming of tasks for computers by technicians, which mostly or fully replaces the most automated/routine aspects of labor.
Having a higher IQ allows the person in question to work in fields where tasks are unsupervised, complex, and extremely difficult to automate. Lower IQ jobs tend to be routine and therefore the easiest to replace with some kind of computer program.
Modern automation disproportionately shrinks the jobs available to low IQ persons and increases demand for high IQ labor to a degree that previous automation did not. [So I believe]
Most defenders of the status quo don’t take this issue seriously because they believe that all job skills are a matter of training, some of them don’t even acknowledge that there exists such a thing as intelligence. New jobs will appear and people will simply retrain themselves to learn them, goods will be cheaper and so everyone will prosper.
It is possible if not likely, especially in a highly regulated economy, that low IQ job opportunities won't grow as fast as past low IQ jobs did. The combination of stagnant low end wages and programs like disability and unemployment benefits may paper over the unemployment rate at the low end and give the false impression that the economy is reabsorbing the layoffs at a reasonable pace.
The true unemployment aspect of this is perhaps over-emphasized. Functioning markets should be able, given enough time, to price labor such that you don't get unemployability. But that says nothing about the immiseration that will attend the necessary wage stagnation.
all willing to work for not much money, why doesn’t someone find a way to employ all that very cheap labor to do something
Any single thing that ends up employing a lot of people becomes a target for automation. Cheap labor can survive if it can do something difficult to automate, or if it can find a small enough niche for itself that no one finds it worth automating.
One reason it might resist automation is that many people like having it done by a human.
True, but that gets at Matt M’s point. There’s virtually nothing that people like having done by just any human. We like having stuff done for us by attractive, personable humans.
I think “attractive, personable” humans oversells it. I go to a Starbucks all the time. Most of the workers there aren’t “attractive” in anything other than the conventional sense that they aren’t burn victims or anything, and their personability is, like, average.
I do agree that it might be hard for people who are particularly unattractive or particularly socially awkward.
@Matt M That reminds me of the "silent Uber driver" thing; maybe awkward introverts would hire awkward introverts to perform their services because they don't want a housekeeper who talks to them?
That reminds me of the "silent Uber driver" thing; maybe awkward introverts would hire awkward introverts to perform their services because they don't want a housekeeper who talks to them?
It’s certainly possible.
Although when I say “people with good social skills”, you know, a huge part of having “good social skills” is being able to effectively read your audience.
The socially adept uber driver is able to very quickly and painlessly read his passengers to determine whether they’d like to engage in lively conversation, or whether they’d like to sit quietly. The loudmouth who never shuts up, even with introverted passengers, might seem more sociable, but doesn’t really have any “better” social skills than the driver who never says a word.
I go to a Starbucks all the time. Most of the workers there aren’t “attractive” in anything other than the conventional sense that they aren’t burn victims or anything, and their personability is, like, average.
Now imagine the people they don’t hire!
In all seriousness though, in the current state of the economy, there are tons of jobs available, the vast majority of which are far more desirable than “Starbucks barista”, and good social skills are desired in almost all of them. There’s no reason to expect that today, Starbucks would attract the people with the best social skills. Those people are working in pharmaceutical sales or something like that. And honestly, I’m not sure there’s any scientific/mathematical task requiring conventional intelligence that would be harder to automate than “convince this doctor to start prescribing your company’s overpriced, unnecessary new drug.”
I think “attractive, personable” humans oversells it. I go to a Starbucks all the time. Most of the workers there aren’t “attractive” in anything other than the conventional sense that they aren’t burn victims or anything, and their personability is, like, average.
Sure, but that job is definitely automatable in the not-too-distant future. Would you pay much of a premium to order your coffee from an average-looking barista as opposed to punching a button on a kiosk? If not, you don't really prefer the human for that service.
Similarly, if self-driving cars were common, would you pay a premium for a human-driven Uber? Probably not.
Maybe I would, and maybe I wouldn’t (I probably would’ve back when I just had one kid and wanted to knock around Starbucks for a while to give my wife a break, I wouldn’t now). But my point here is not Starbucks in particular, but any place where you actively want some human contact — and the more that the rest of the world is automated, the greater interest there will be in some places that do have human contact. Is that place particularly a coffee shop? I dunno. But it’s something.
My point is, human contact can be nice when the people are roughly median in terms of attractiveness/social skills, they don’t have to be top 20% or 10%. Now, bottom 20% or bottom 10% might be in trouble.
The actual automation going on at Starbucks involves placing an order online. The coffeeshop is still there with humans, and you pick up your order from a human, but this probably significantly cuts back on the need for cashiers. OTOH, lots of people like sitting in the Starbucks, and for that, having some humans working there is important.
On your point #2, I don't really think there's a fundamental economic reason why tech progress could never result in chronically high long-term unemployment. Rather, it's an empirical fact about the world that the vast majority of humans have historically been able to create significant value through their work, and so it's reasonable to assume that they will continue to be able to do so.
But doesn't short-term technological unemployment suggest that long-term technological unemployment is also possible and could in fact be under way?
Is there any cognitive reason that makes it difficult to learn a new profession at age 50?
I tried to google it but can't find convincing studies on IQ and aging: some early studies found that IQ peaked at 20-30, but these studies were confounded by the Flynn effect; more recent longitudinal studies that follow the same cohort over the years find that IQ decline only becomes significant in one's 60s, but these studies might be confounded by self-selection and survivor bias.
If IQ, or at least fluid intelligence, declines quickly, then we can expect short-term and medium-term unemployment but not necessarily long-term: the 50-year-old truck driver may never #LearnToCode, but his children might. If instead fluid intelligence stays nearly constant until retirement age, then it means that his children and further descendants are also going to have a hard time finding employment.
Sure, I agree that it’s possible. I don’t think there’s any real evidence that it’s ongoing right now, but I find it entirely plausible that it will happen within the next few decades. Real AI is the sort of seismic shift in the economy that could upend the historical pattern of humans being able to figure out ways to produce value.
But doesn't short-term technological unemployment suggest that long-term technological unemployment is also possible and could in fact be under way?
“Short term technological unemployment” is a little misleading. It’s certainly an economic shock that causes unemployment but you could see a similar effect from trade. Some people have a hard time adjusting and theoretically this could have long term ramifications that are hard to recover from but it’s a very different thing than what people usually mean when they talk about “technological unemployment”. Also, I think this is more controversial among economists than Erusian is letting on.
Also, I think this is more controversial among economists than Erusian is letting on.
Could you name the economists? I can name a few but they are mostly heterodox. Socialists are particularly fond of the idea. But people who subscribe to more mainstream views, from Keynesians to Austrians, tend not to believe that long term trends lead that way. At least in my experience. Again, happy to read new sources.
I was talking about this claim being controversial:
There is strong evidence this has a permanent, negative effect on the workers and communities it affects that they do not ever recover from.
Isn’t that based on just one study?
I wasn’t talking about the claim of technological unemployment in general. Honestly though, I think it’s very plausible that we get to a point where most jobs are so pointless, soul-sucking, degrading and low paying,(imagine getting paid to wipe someone’s ass) that it might as well be technological unemployment. In that situation, everyone would just rather live off welfare than do any of these jobs and it could easily break our system.
More than one. But I agree there are people who disagree with that. That’s why I said ‘strong evidence’ and not something like ‘it’s absolutely certain’. I just meant that a reasonable person might find the studies convincing.
Honestly though, I think it’s very plausible that we get to a point where most jobs are so pointless, soul-sucking, degrading and low paying,(imagine getting paid to wipe someone’s ass) that it might as well be technological unemployment. In that situation, everyone would just rather live off welfare than do any of these jobs and it could easily break our system.
Getting paid to wipe someone's ass was a real job, actually. And that presumes not working is an option. But more to the point, I think the future is likely to actually be the opposite. The tasks we're good at automating are precisely the ones that are soul-sucking, degrading, and repetitive. It's precisely the complicated jobs requiring judgment calls that are hard to automate.
I'm pretty sure it's still among the duties, just as when I did the job: it's just not something paraplegics can do on their own, nor (as far as I know) have machines yet been made to do it. If you want to meet someone who still does the task, try asking the staff at a nursing home, or ask for referrals at the Center for Independent Living.
The clients (typically) had to hire their own attendants with funds provided by the State of California In-Home Supportive Services (IHSS) program, which would be enough for minimum wage.
The irony of how crippling the back pain felt from lifting people out of and back into wheelchairs was noted by me at the time.
I recently visited a large* housing and care facility for the severely mentally handicapped, many of whom also have physical disabilities. They have a pretty neat setup in the central day care building, with a cuddle/sensory room, their own kitchen, different living rooms for different groups with permanently assigned staff (with the severely autistic having a room dedicated to their needs, the severely demented having a room, etc).
They have an (expensive) ceiling-mounted lift system in some rooms, where people tend to be most handicapped; as well as a movable lift system for other rooms.
There also is a swimming pool, a nice gym, etc.
If I become mentally handicapped, it seems like a nice place to live.
The tasks we’re good at automating are precisely the ones that are soul-sucking, degrading, and repetitive.
Historically the tasks that we are good at automating have been things that we can brute force. The more nuance involved, even if it's repetitive nuance, the harder it is to do so profitably.
Humans are very weird though, or very contextually dependent. The phrase ‘imagine wiping another person’s ass’ made me shudder a bit, but I am a stay at home parent with 3 small kids. I have literally been wiping another person’s ass as part of my job every day for 4.5 of the past 6 years.
Okay, but now imagine getting paid $100K/year to wipe someone else’s ass 20 hours a week. This doesn’t sound nearly so soul crushing. Better pay and better conditions take a lot of the sting out of otherwise-unpleasant jobs. And as baconbits pointed out, every one of us who is a parent has spent a fair bit of time wiping other peoples’ asses (and getting peed on, and cleaning up their puke, and….). We didn’t even get paid a cash wage for doing it!
Short-term technological unemployment is probably dependent on other forms of frictions, like location and reservation wages. If you are intelligent enough to, I don’t know, fix typewriters, you are smart enough to work at McDonald’s. You might have to take a pay cut, but that doesn’t mean machines made you redundant. You can still add value SOMEWHERE.
Given the dramatic aging of our population, there will likely be additional jobs in health care for generations, particularly if we are so rich that we can simply eliminate every other low-skill job out there.
Imagine you're a factory owner. Technology increases to the point where you can replace half your workforce with automated assembly lines (imported from Japan), for a 5% cost reduction. Now you enjoy the pleasure of 5% savings, but society at large still has to support the half of the workforce that you fired – unemployment, retraining, welfare, etc. Ergo, automation may be beneficial for individual businesses, but not for society (and in the end not for businesses either, since they support society with taxes). The feedback loop is, however, too long to actually affect business-owner behavior.
Another scenario: economists like the concept of comparative advantage – no matter how far behind you are technologically, you can still do something of value on the market. But what happens if you're priced out of using your time by minimum wages? It's a kind of competition employers face: make a profit while paying $8 per hour, or the government will cut in and replace you with welfare.
Technology increases to the point where you can replace half your workforce with automated assembly lines (imported from Japan), for a 5% cost reduction. Now you enjoy the pleasure of 5% savings, but society at large still has to support the half of the workforce that you fired
If the automated assembly lines cost so much that the total savings will be just 5%, the affected workers can then offer to work for 10% less than before, so you don't replace them. Or you may be able to demand that all your workers accept a 5% reduction, threatening to fire those who don't accept it. Your workers may not accept it and quit instead, but (assuming a free market) only if they have a better job available.
In practice, wages tend to be sticky (in nominal terms), mainly due to worker “protection” laws such as right to strike, collective bargaining requirements, restrictions on firing, or the minimum wage. However, the processes we are talking about are actually gradual. If automation is becoming available in a sector, then workers probably don’t have alternative opportunities that pay better, so a company can get away with not raising salaries which, in a few years’ time, translates into a real wage decrease due to price inflation.
As in this example, automation might change the distribution of income, in this case from the factory workers to the owners, to those who make the assembly lines, or to the consumers. However, the cost to the workers (or the society that will feed them) is at most as large as the benefit to whoever benefits; your comment makes it sound like the cost to the workers (or society) can be much bigger than the benefits.
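For concreteness, here is the back-of-the-envelope version of the 5%-savings case, with an assumed labor share (the share is my addition, not the original commenter's):

```python
# Illustrative: how big a wage cut matches the owner's 5% saving from automation.
total_cost = 100.0
labor_share = 0.50                       # assume wages are half of total costs
automation_saving = 0.05 * total_cost    # the 5% saving from the imported lines

wage_bill = total_cost * labor_share
cut_all_workers = automation_saving / wage_bill          # spread across everyone
cut_affected_half = automation_saving / (wage_bill / 2)  # borne only by those at risk

print(f"across-the-board cut matching the robots: {cut_all_workers:.0%}")    # 10%
print(f"cut if only the at-risk half absorbs it:  {cut_affected_half:.0%}")  # 20%
```

Which of those is feasible is exactly where the stickiness argument below comes in: the workers who know they are not at risk have little reason to accept any cut at all.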
But what happens if you’re priced out of using your time by minimum wages?
Sensible countries don’t raise the minimum wage to levels where it would cause a large amount of unemployment; an excessive minimum wage can be undone through inflation.
I generally disagree; wages are sticky for many reasons, one of which is that labor isn't homogeneous. If an employer is going to replace half his workforce with robots he isn't going to decide who to keep by random lottery, and the workers themselves have a general idea of their relative value, so while some people will be guessing whether they will be laid off, many employees will be fairly sure one way or the other, and that makes an across-the-board pay cut difficult to impossible, as the bottom end would have to absorb several times the average pay cut as a % of their salary.
The second issue is that if there is automation available now that will cause your pay to be cut, then in a few years you expect that it would have to be cut further to prevent the next generation from replacing you, etc. Given that choice workers with the most options will look for other work, and those workers are going to disproportionately be the best workers and the ones that the manager wants to keep around after the switch. A couple of attempts like this and you will have driven off your best employees, effectively ruining all the gains you were going to get from automating.
in a few years you expect that it would have to be cut further
Death, er, deteriorating quality of life by a thousand cuts.
Given that choice workers with the most options will look for other work, and those workers are going to disproportionately be the best workers and the ones that the manager wants to keep around after the switch.
I was present at an organization where employees had to reapply for their own (or changed/new) jobs at the company. They were not amused when some took the opportunity to apply for a job elsewhere.
This isn't in-depth or thorough, but the place I'd look first for long term technological unemployment would be a situation where the useful jobs all require abilities that some large percentage of otherwise healthy human beings don't have and can't learn.
That leaves jobs where the thing actually being produced is prestige – servants doing things that are more effectively done by other means, so that the person they do them for can display their high status on the human totem pole.
I’m not ‘normal’ enough to understand the demand for prestige markers of this kind. The only reason I don’t prefer to interact with ‘bots, signs, documents, ATMs etc. for all tasks is that they are too often incapable of doing what I want efficiently, or the cost to me of figuring out how to make them do what I want is higher than the cost of finding a human being to deal with the problem. (Well, I might enjoy the low grade social contact of saying “hi” to a doorman more than walking past the sensor to open the door, if I were, unusually, not in a state of human interaction overdose. But that’s not likely to happen as long as I’m employed in a world of open offices etc.)
So video game aficionados of SSC, what did you like at E3?
I thought Nintendo had the most things I was interested in.
Obviously, Breath of the Wild 2 (which better be called “Death of the Wild”). Apparently it’s going to use the same Hyrule, so I’m wondering if we get like a Light World / Twilight World thing going on? My dream is playable Zelda, where Link is trapped in the Twilight World and you switch back and forth between Link and Zelda. Or maybe even co-op…
We finally got to see Astral Chain gameplay and it looks really, really good. Very much looking forward to this.
Also, Fire Emblem: Three Houses gameplay, and confirmation that there’s **spoilers** a time skip. I had been on the fence about this one because I wasn’t sold on the whole “Fire Emblem: Hogwarts” thing, but it turns out that’s basically the prologue, and then you get a real war.
Ubisoft had absolutely nothing of interest besides Gods & Monsters, which looks like Ubisoft Breath of the Wild, maybe? And it’s from the same people who did Assassin’s Creed: Odyssey, which was my favorite game of 2018. Speaking of AC: Odyssey, they released a new community quest builder for it, so now fans can make whatever missions and stories and full game expansions or whatever they want. I imagine it will be mostly junk, but I’m sure somebody’s going to recreate the entire main quest line of Skyrim or something, so that could be very cool. If there’s a game award for “Best Post-Launch Support,” Ubisoft deserves it for AC: Odyssey. Every game post-launch should be like that, with the constant QoL improvements, new features, free missions, on-time paid DLC, and all the rest of this. Great job.
Keanu Reeves in Cyberpunk 2077 and Star Wars Jedi: Fallen Order, obviously, but kind of a let-down when MS’s big reveals are third party games. No Halo: Infinite footage. Nothing but multiplayer demos for Gears 5. The content-free “announcement” of a new Xbox.
GhostWire: Tokyo was intriguing. Weird Japanese horror/mystery stuff. But it’s hard to get worked up over a cinematic trailer. It’s too easy to make an amazing cinematic trailer for some boring-as-hell microtransaction mobile game.
I really enjoy the tactical gameplay in Ghost Recon: Wildlands, and for a couple of years it has been my go-to for screwing around when bored. I'm cautiously optimistic for the sequel; it looks like there are a lot of interesting new features. My one big concern is the change in setting. The change from spec ops in Bolivia fighting cartels to spec ops in a fictional archipelago fighting drones takes a lot away from the atmosphere.
I'm afraid it's going to go from realistic-ish tactical sim to sci-fi/fantasy.
Yeah, Call of Duty: Black Ops was great when it was about sneaking through the jungles of Vietnam and that kind of thing. And then it turned into future cyber soldiers with robot hands fighting…more robots by Black Ops III. Completely ruined the atmosphere that made the first game unique.
I would love to play Forza again. I had a racing wheel for my 360 and played Forza 4 and Horizons. I’m building a new gaming PC soon and when I do my plan is to invest in a high-quality racing wheel for that so I don’t have to worry about replacing it every console generation and then dive into the back catalog.
And yes, Watch Dogs 3 was interesting with that “play as anyone” bit with the murder grandma. That might be an interesting enough gimmick to make it worthwhile.
I don’t know what to say about it. I got FF7 when it came out 20 years ago, and the Cloud t-shirt I got free with my preorder eventually disintegrated in the wash about 6 years ago. I’ve kind of played it, so…meh?
looks to be several times longer (so presumably more depth to each part of the story).
I actually doubt this. I predict they will greatly lengthen the Midgar specific sections, while greatly reducing everything else.
It seems that they want to make this thing into a cool-looking, marginally interactive modern action movie. That means the parts of the game where you race motorcycles through the city while fighting the corrupt evil corporation are highly desirable compared to the parts of the game where you wander through the countryside battling random imps for no real purpose other than getting stronger, and stay at a series of small town inns for the express purpose of having flashbacks.
That means the parts of the game where you race motorcycles through the city while fighting the corrupt evil corporation are highly desirable compared to the parts of the game where you wander through the countryside battling random imps for no real purpose other than getting stronger, and stay at a series of small town inns for the express purpose of having flashbacks.
“Saigon, I can’t believe I’m back in a Saigon bed and breakfast.”
“Charlie was close. I could smell his breakfast.”
Well, that's how we got The Force Awakens. The Last Jedi was more like leading the nostalgia cow behind the shed and shooting it. Not that I care about the remake, since I can play the original whenever I want (it's $12 on Steam).
Banjo in Smash is great news. The Animal Crossing delay and a lack of Metroid content made me sad. Astral Chain looks neat but I was really hoping for a mainline Atlus game – Persona or SMT on Switch would have been very cool. Not too excited about the Zelda content we're getting, but BOTW was not too fun for me and I'm really not sold on the Link's Awakening remake art style.
Bethesda’s conference was breathtaking in its stupidity, but was salvaged by Arkane and id. A whopping TWO (and a half, for Wolfenstein) games to be excited about would win them E3 from me if it weren’t for the reanimated corpse of Todd Howard grinning madly at me from the stage. And their 3 mobile games.
The lack of CroTeam projects at Devolver was disappointing but not unexpected. Cyberpunk was, I think, literally the only neat thing in the MS conference. I gave up on EA and Ubisoft years ago (sorry Conrad, but I'm bored to tears every time I see more than a minute of Ubisoft gameplay).
The Final Fantasy remake and Death Stranding are making me seriously consider picking up a cheap PlayStation. But I have no faith subsequent FFVII “episodes” will maintain PS4 compatibility. I’ll wait until I know.
Also Shenmue 3 on Epic Games Store is nominally disappointing, but Shenmue is a meme anyway.
E: award for “most WTF” goes to The Dark Crystal: Age of Resistance Tactics. Like, what?
Including for some reason bringing back the Commander Keen franchise?
The Venn diagram of “people who remember MS-DOS Commander Keen games” and “people who are interested in a F2P mobile game with derivative gameplay” is essentially two separate circles, right? Why bother calling it “Commander Keen” at that point?
Probably not as small as you think, mostly because “people who are interested in a F2P mobile game with derivative gameplay” is a much larger circle than you think. I might give it a look, although I probably won’t actually end up playing it.
The circle might also include people who remember Commander Keen, and have kids who they think ought to play Cmdr Keen-type games, and whose kids are interested in F2P mobile and are too young to tell derivative gameplay when they see it.
That said, I suspect the real reasoning was some variation of “we have this derivative F2P gameplay app, and we have this old IP lying around, and we have an art department that isn’t doing anything at the moment other than drawing paychecks, so let’s have them reskin this app in Keen art and ship it”.
Eh, the only Ubisoft property I like is Assassin's Creed. And Mario + Rabbids. I've never played a Far Cry or Watch Dogs. I was kind of hoping they'd do a reveal of the setting of the next AC game. There's a writer for Kotaku who correctly leaked the last 5 AC games' settings and he says it's Vikings, but it would be neat to get the official reveal.
And agreed, Bethesda was just embarrassing. “Hey, remember that game that last year I said ‘just worked?’ And it turned out to be a completely broken mess that destroyed our already terrible reputation? Totes fixing it now ha ha! Now on to a whole new presentation with sixteen times the lies!”
Also agreed about Dark Crystal. “Who wants this…?”
I burst out laughing when they said Watch Dogs 3 takes place in “post-Brexit” London, which has turned into a police state. Right, right, it would be terrible if the delightful place with cameras on every street corner, where you can’t buy a butter knife without a license, where the police come visit you if you say something naughty about foreigners on the internet turned into a police state of all things!
Eh, I’m not even dealing with the political angle. I just mean that when I played the first Watch Dogs, it struck me as “This is what GTA would be like if it took itself super seriously.”
Which is fine. Was an OK game. Didn’t hate it. But I’ll still take the cartoonish super-violence over an angsty protagonist with family drama and musings on the philosophical nature of modern surveillance programs.
There are certain genres in which I think realism and introspection work very well. I’m just not sure “sandbox shooter” is one of them.
they said Watch Dogs 3 takes place in “post-Brexit” London, which has turned into a police state.
Aaaaaaand I’ve lost all interest in that game. There are very few things that I find as irritating as media that is based in an alternate reality that “proves” their preferred policy is correct.
It would be trivial to instead set it in an alternate-reality EU that has turned into a police state, and it would be just as silly.
I think somebody made a comment in an earlier thread about "editorials from the future" that went, paraphrasing, "It's easy to win an argument when you get to decide all the facts."
Right, right, it would be terrible if the delightful place with cameras on every street corner, where you can’t buy a butter knife without a license, where the police come visit you if you say something naughty about foreigners on the internet turned into a police state of all things!
Funnily enough, I’ve lived in Britain all my life, and neither I nor anybody I know has ever had to get a licence before buying a butter knife. Maybe you should try finding better sources for what life’s like in Britain.
Funnily enough, I’ve lived in Britain all my life, and neither I nor anybody I know has ever had to get a licence before buying a butter knife. Maybe you should try finding better sources for what life’s like in Britain.
I’m pretty sure Honcho was referring to the infamous British knife ban from a few years ago, and exaggerating for humor. No licenses were ever mentioned; just a ban. I doubt it’s gone very far, and indeed, a lot of people were going squinty-eyed and muttering about parodies and Poe’s Law, but apparently it really is or was a thing.
I’m pretty sure Honcho was referring to the infamous British knife ban from a few years ago, and exaggerating for humor. No licenses were ever mentioned; just a ban. I doubt it’s gone very far, and indeed, a lot of people were going squinty-eyed and muttering about parodies and Poe’s Law, but apparently it really is or was a thing.
The closest to a “knife ban” mentioned there is an article calling for some types of knives to be banned. You can find articles calling for all sorts of things, most of which end up being ignored; as indeed the call for a knife ban was ignored, by everyone except ignorant Americans.
British knife law is genuinely quite restrictive, compared to other countries.
Only folding, non-locking knives under 3″ are legal EDC.
For everything else, you need a good reason to have it in a public place.
In practice, the only real inconvenience I find is the worry that a non-locking knife will close on me.
But it's the principle of the thing: walking out of your front door with a butter knife in your pocket for no reason is a criminal offence.
For everything else, you need a good reason to have it in a public place.
Yeah, but "good reason" is interpreted pretty broadly, AFAIK. Also, the police only ever do knife searches in places that already have high rates of knife crime; at any rate, neither I nor anybody else I know has ever been stopped and searched for illegal knife carrying.
But it's the principle of the thing: walking out of your front door with a butter knife in your pocket for no reason is a criminal offence.
I don’t deny that there are some silly consequences of the laws as written (although that particular one strikes me more as an accidental loophole than an attempt to assert the state’s dominance over citizens), but it’s not like Britain is alone in this: pretty much every modern country has a system of laws so enormous and labyrinthine that you have unnoticed absurdities slip in, or that it’s often impossible to be sure that you aren’t committing a crime (is it still the case that the average US citizen commits three felonies a day without knowing it?). I do not think that Britain’s chances of becoming a police state are noticeably different than those of any other western country.
The UK is fairly well known for arresting and convicting people who carry small knives with insufficient excuse. I remember one a few years ago about a guy caught with a boxcutter in his car. What’s he use it for? Opening boxes at work. So why can’t you leave it at work? Guilty, next case.
This one ended in acquittal, but the process was still costly:
The UK is fairly well known for arresting and convicting people who carry small knives with insufficient excuse.
And America is fairly well-known for being full of fascist cops who gun down unarmed black children with no repercussions. What is well-known isn’t always accurate, and listing a few anecdotes is too vulnerable to the Chinese robber problem to tell you much of use.
This isn't "Chinese robbers" at all. No group besides the police in the UK is arresting people for violations of the UK knife laws, of course; the analogy makes no sense.
I like Muppets. I like tactics RPGs. And this Dark Crystal thing is, um, wat.
I certainly don’t blame the Henson Company for wanting to monetize their older properties, especially considering how lousy their new ideas are. Dark Crystal may not be as beloved as Fraggle Rock (let alone the main Muppets, who were sold off to Disney and Sesame Workshop a while ago) but it has its fans, and making a game tie-in to the upcoming Netflix prequel makes sense. It’s just, why a grid tactics game? Who decided that was a good fit?
Between this and the Commander Keen thing discussed elsewhere, I wonder if there’s some sort of marketplace (or app) where game developers can get matched up with aging, dead IPs…
Also Shenmue 3 on Epic Games Store is nominally disappointing, but Shenmue is a meme anyway.
I did not realize until just now that Shenmue 3 was a kickstarter project. So fans backed it for a Steam key, it blew up, attracted publishers, they took that Epic money and made it an EGS exclusive. And aren’t giving backers refunds. That is a dick move. Wow.
David Newhoff’s latest post @ The Illusion of More seems like something right up the SSC-commentariat’s alley, so I thought I’d take the liberty to plug it. Social media and its problems are the topic of the day.
A couple of highlights:
Linking in the hidden thread just in case – anything to do with social media has a higher-than-normal CW potential, I believe.
I do not understand at all how a guy being wrong about where Alcatraz is is anything like “information age” social media. People have been wrong about lots of things forever and I don’t think social media is any different. If anything it makes correcting people a little easier because when he posts the pic of “Alcatraz” somebody on his feed just might correct him. Something the author did not feel inclined to do in person.
If the story happened at all, it sounds like pretty standard dad-trolling-his-kids, which has been around since before the Internet.
The difference is that before social media, only his kids would be misled.
Social media is an amplifier that amplifies cluelessness just as well as accurate and valuable information. Given that the former is more readily available than the latter, what you get is loads of people telling other people about things they don’t know.
And if someone does step in to correct them? Well, they must be full of shit, ‘coz Joe over at Facebook has already posted an infographic of the entire timeline of Alcatraz, from its founding by George Washington immediately after his victory against the French in Mexico.
ETA:
For a related, but much more problematic, issue, see xkcd: Citogenesis. Make sure to read the roll-over text.
Along the same lines as the Streisand effect (where a famous person’s attempts to silence some embarrassing thing just draws attention to it), I think there’s a similar effect we can see now. We might call it the Bret Weinstein effect, or the James Damore effect, or currently the Noah Carl effect.
Find someone in a woke-heavy field who’s broadly on the side of the left but slightly politically incorrect in some area. Decide to make an example of them by getting them fired, no-platforming them out of a job they love, making them a pariah, etc. Sometimes, the result is they crawl away into obscurity and you’ve made your point that questioning your ideology gets people fired. But other times, what happens is that you create someone who is pissed off and eloquent, and has nothing else to lose–you’ve already applied all the punishment you can, you’ve already driven them out of their dream job/blackened their name across the whole of their social circle. Their remaining job/friends/life are pretty-much immune to that crap.
But the SJWs have successfully ejected those people from their woke industry, now they are witches and only associate with other witches at the margins of civilization (e.g. Youtube and Reddit). The people who remain in these industries are either woke or have learned from these examples to keep their mouths shut, therefore the policies in these industries will be dominated by the SJWs.
Note that Bret Weinstein and James Damore are way, way more widely known (and their arguments have been widely read, even though a lot of the media coverage of them was embarrassingly bad) than they would have been without the deplatforming/mobbing. And Weinstein in particular is a serious thinker who’s worth reading/listening to at length, whose ideas have gotten a lot wider hearing because of the deplatforming.
I have no proof one way or the other, but my view is that only those who are already part of the opposition are familiar with:
1. what the writer actually said
2. the fact that said writer was made an example of for having said it.
Those are the people for whom this isn’t necessarily news, but the fact that it happens and how it happens only serves to enrage them without necessarily reducing the number of true believers on the other side. It doesn’t pull the masses away from anti-heresy attitudes but it does polarize the heretics.
The people that did the purging in many cases don’t even read the material.
What gets circulated to the normal news-reading public, in my estimation, is that someone was purged for hatespeech. If you find out an employee was fired for writing a misogynistic manifesto, do you really care that he lost his job? The deplatforming itself doesn't mean anything, because one can be certain that the most relevant detail, what had actually been said, isn't going to be circulated. Goodthinkers know not to circulate hatespeech.
The only time there might be issues is if the person who is purged is able to win a lawsuit against their purgers. In the case of a typical employee working for an ‘at will’ employer this is unlikely.
I think “the opposition” is a fairly leaky group, both ways. Increasing the visibility of some person who’s said to be a Nazi but turns out to be a pretty sensible and decent person whose politics aren’t particularly radical in any direction undermines the credibility of the outrage mobs and the media outlets that take part in them. I had a conversation with my 14 year old son awhile back and mentioned reading the Wall Street Journal–the only thing he knew about them was that they’d run an article claiming PewDiePie was a Nazi–something that was obvious bullshit given his own knowledge[1]. Over time, fewer and fewer people buy the outrage mob’s story.
IMO, the interesting thing about the common deplatforming outrage mobs is that they are built on a Keynesian beauty contest kind of logic. When everyone else is joining the outcry about the racist white kid in the MAGA hat disrespecting a tribal elder, the pressure I feel to join in and add some performative outrage on Twitter depends on whether I think the outrage mob is going to carry the day. If it does, and the official narrative ends up being that the Covington kids were racist Nazi thugs who deserve whatever abuse a bunch of powerful adults can dish out[2], then I have an incentive to join in–not only will that increase my status, but it will also add some protection for me against being accused of insufficient wokeness. If it doesn’t, I’m going to look like a jerk and perhaps suffer some loss of reputation. The more the online mobbings visibly fail, the less incentive there is for anyone to either join in or capitulate. Visible cases where someone weathered the storm and is still out there talking probably weaken the power of the online mobbing types even more.
[1] Something similar happened to me at around the same age, with respect to a moral panic about D&D teaching children to worship the devil.
[2] Note that this more-or-less happened for some other stories. I suspect a majority of Americans still think George Zimmerman was a great big white guy who murdered a little black kid in cold blood, just as they think Saddam Hussein was involved in the 9/11 attacks.
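To make the Keynesian-beauty-contest point above concrete, here is a minimal sketch of the join-the-mob payoff calculation. The status gain, reputation loss, and probabilities are entirely made-up numbers, only meant to show the shape of the incentive:

```python
# Toy model of the "join the outrage mob?" decision described above.
# All numbers are illustrative assumptions, not measurements.

def expected_payoff_of_joining(p_mob_wins,
                               status_gain=2.0,       # payoff if the mob's narrative sticks
                               reputation_loss=3.0):  # cost if the mob visibly fails
    """Expected payoff of joining, given the probability the mob carries the day."""
    return p_mob_wins * status_gain - (1 - p_mob_wins) * reputation_loss

for p in (0.2, 0.5, 0.8):
    print(f"P(mob wins) = {p:.1f}: expected payoff of joining = {expected_payoff_of_joining(p):+.2f}")

# With these numbers the payoff crosses zero at p = 0.6: joining only pays if you
# expect the mob to win, which is the beauty-contest part -- everyone is betting
# on what everyone else will do.
```

With those particular numbers, joining only pays once you think the mob has better than a 60% chance of carrying the day; the exact threshold doesn’t matter, only that it shifts with everyone’s estimate of everyone else’s behavior.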
The case of the famous YT’er might prove an exception, simply because you’re dealing with someone whose viewership rivals, if not exceeds, that of the legacy print media defaming him. Most of his viewers were probably apolitical, so this event may have been a formative moment for them. How many people who get defamed can say the same?
In the case of a Damore, or really any regular person, there does not exist a neutral, apolitical information organ that both publishes the memo and reaches a substantial portion of the populace, so that readers could experience a narrative clash without writing the source off as illegitimate.
In the real world, only dissidents platform other dissidents, so unless there’s reason to suspect otherwise there should not be any narrative clash.
That makes sense. The end of the megaphone monopoly (YT, podcasts, blogs, even public speaking gigs) has led to people like Bret Weinstein being able to get their own views and message out. And in many ways, I think the woke wing of the broad left would have been better off keeping Bret and Eric Weinstein (and many others) inside the tent pissing out, instead of pushing them outside of the tent and getting them to piss in.
The legacy media going after PewDiePie was obviously idiotic: I think they hate him because they think he’s eating their lunch and they thought they could use their power to incite online mobs to destroy him. The problem is that they overplayed their hand because he has a platform with a larger audience than theirs so he can defend himself and he’s genuinely uncontroversial (the worst they could find on him was a silly Monty Pythonesque Nazi skit that he did 10 years ago).
But what about Bret Weinstein, James Damore, Noah Carl, Alessandro Strumia, and so on? These people were not public figures, the general public never heard of them before the media decided to go after them. And perhaps with the exception of Weinstein, they all expressed ideas that are genuinely heretical for the progressive mainstream.
They have become martyrs for the anti-SJW cause, and perhaps some of them might be able to get a career out of it, but their influence on the industries they were purged from has been destroyed. They might have gained some cultural power, but the institutional power remains firmly in the hands of the SJWs, and in fact has been reinforced because of the chilling effect of the purges.
Comparing Pewdiepie to Python is a grave insult to the latter (who are funny). Also, note that the Nazis claim him as one of their own.
@thisheavenlyconjugation
It’s in Anglin’s interests to claim this. In fact it’s in Anglin’s interest to claim that anyone is secretly a Nazi, as it goads his political enemies into purging either bystanders or their own side by claiming that they are secretly working for him.
I can’t speak to the whole of Monty Python but John Cleese did complain about the demographic state of Britain, which is something only Nazis do, apparently.
Does your 14 y/o know Pewdiepie paid to have people film themselves with antisemitic signs?
I don’t know why people would go to bat continuously for pdp, given his frequent racist and antisemitic antics.
JP Nunez:
If you call someone a Nazi, and it actually turns out that they’re a random guy who makes rude jokes, or a provocateur who likes trolling the mundanes, or even a person with out-there beliefs on the right who isn’t actually a Nazi, then I’m going to think you’re either dishonest or an idiot.
If you have evidence that PDP is actually a Nazi, please provide a link, and I’ll certainly share it with my son. Otherwise, I’ll probably join him in putting people who claim PDP’s a Nazi into the same bin as people who claim that D&D teaches children to worship Satan.
“Does your 14 y/o know Pewdiepie paid to have people film themselves with antisemitic signs?
I don’t know why people would go to bat continuously for pdp”
The reason given in this particular instance was an attempt to poke fun at Fiverr by seeing just what you could pay a person to do. If they subscribed to him then they would have seen that video, presumably.
People seem to be asking how Felix’s subscribers who watch a substantial portion of his videos on a regular basis could be unaware of the obvious hate-thought embedded in his videos which is apparent to non-subscribers who become aware of him through samples of his content procured by journalists who are also not subscribers.
@JPNunez
There is a big difference between making fools of people by getting them to do dumb things and actually favoring those dumb things.
There is a video of a guy getting women’s studies students to sign an “End women’s suffrage” petition, where many do, seemingly because they confuse ‘suffrage’ with ‘suffering’. The logical conclusion of this video is not that the guy is actually against women’s suffrage, just like the logical conclusion is not that PewDiePie is an antisemite for pranking people in this way.
@aapje
That’s different afaik. If the women’s studies guy kept doing different things that mysteriously looked like promoting antifeminism, well, I’d have to conclude he is not really a feminist. But he seems to have a documented point.
But the thing with Pewdiepie is that he will keep doing suspiciously nazi/racist stuff.
Like promoting racist videos, or promoting holocaust denier webcomics.
At some point you gotta do a bayesian update or whatever and realize it is not a coincidence.
But somehow the right wing will keep excusing that kind of stuff as jokes no matter what.
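For what it’s worth, the “bayesian update or whatever” mentioned above can be made literal. A minimal sketch, with invented likelihoods; the real disagreement in this thread is about those likelihoods, not the arithmetic:

```python
# A literal version of the Bayesian update, with invented likelihoods.
# The arithmetic is trivial; the whole argument is over the likelihood estimates.

def update(prior, p_incident_if_signalling=0.5, p_incident_if_careless=0.1):
    """Posterior P(deliberate signalling) after one more dubious-link incident."""
    numerator = prior * p_incident_if_signalling
    denominator = numerator + (1 - prior) * p_incident_if_careless
    return numerator / denominator

belief = 0.05  # skeptical prior
for incident in range(1, 5):
    belief = update(belief)
    print(f"after incident {incident}: P(deliberate) = {belief:.2f}")

# With these likelihoods the posterior climbs quickly (roughly 0.21, 0.57, 0.87, 0.97);
# with likelihoods close together it barely moves. That gap is the real dispute.
```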
@albatross
did you see pdp linking to stonetoss, a holocaust denier?
not gonna link it. do your part
at the point where we get to the holocaust denying, I start thinking there is some intellectual dishonesty from his followers in continuing to justify it
@RalMirrorAd
Again, there is always gonna be an explanation or an apology, but the linking to racist content seems pretty consistent.
I do not know why the subscribers do not think he is a nazi.
But given that the Christchurch shooter specifically said “Subscribe to Pewdiepie” before the shooting, I have my suspicions.
Here’s a prediction: PDP will keep promoting racist content during the upcoming year, and you all three will keep making excuses and not accepting PDP is at the very least racist because you are not being intellectually honest about this.
A blessing from the devil is not a curse from God. That line is neither evidence of Pewdiepie’s nazism, nor is it evidence against. He could have called out to the Pope or to David Duke, and it would make no difference.
I didn’t say that proves anything about PDP himself, but it is evidence that part of the subscriber base is on the racist side, tho.
When shooter and bombers say “Allah Akbar!” instead, what suspicions does that lead you to?
That they are islamist extremists?
@JPNunez
That doesn’t necessarily mean updating to the belief that he is antisemitic. Another explanation is that the modern, funny counterculture tends to be right-wing, because the left is both culturally dominant and got very serious, not daring to be sarcastic or cynical about mainstream beliefs. You know, the opposite of the 60’s/70’s.
Note that yesterday, John Cusack shared an antisemitic cartoon that he thought was pro-Palestine and critical of Israel. Katy Perry and Nicki Minaj have shared Pepe images in the past.
The margins between OK and (supposedly) antisemitic are thinner than ever.
—
As for PewDiePie linking to antisemites, the only incident I’m aware of is when he told people to subscribe to a channel because of their good pop culture videos, although that channel also shared some far-right videos. The good faith assumption is then that he liked the pop culture videos and didn’t notice the others. The bad faith assumption, where he pointed out the pop culture videos, but actually wanted people to see far-right videos, seems quite far-fetched.
Pewdiepie claims just short of 100 million subscribers. It would be a surprise if none of them were on the racist side. But even then, the killer shouting “Subscribe to PewDiePie” doesn’t demonstrate it. It demonstrates only that he knew the meme and knew that PewDiePie was accused of racism, and wanted to stir up trouble. If he’d yelled “For the glory of Israel”, would that implicate Netanyahu, Israelis, or Jews generally?
@aapje
But again, last time he linked to a webcomic artist who happens to be a holocaust denier. At some point you gotta accept that it is not coincidence; regardless of the intentions (we cannot read PDP’s mind), I predict that this kind of linking to antisemitic/racist stuff will keep happening.
I don’t know about counterculture/humor being right wing now; I bet we could find humorists back in the 60s/70s who were making fun of the hippies for being against the vietnam war. That doesn’t make the humorists counterculture. I also like how left wing culture is dominant with so many new right wing governments around the world. I dunno what is so counterculture about humor making fun of Trump opponents, for example, given that Trump is the actual guy sitting on the White House.
Similarly, during the american civil war, humorists in the South made fun of the North and vice versa. That does not mean either side was counterculture. Just that both kinds of humor had a market.
If John Cusack keeps posting antisemitic comics, I will update my beliefs; if PDP had stopped around the fiverr incident, I would not be going against him. The difference is that PDP has kept going after that incident, and John Cusack has not (yet? dunno).
@TheNybbler
Again, if the guy yells for Israel, that doesn’t implicate Netanyahu. But it could imply that the shooter was a rabid Israel fan? Dunno, don’t live in the counterfactual world.
Remember that the Christchurch shooter was also dedicated enough to leave a manifesto, which was, again, pretty racist. Dunno what “stir trouble” here means; obviously the shooting was also going to “stir trouble”. If the guy is calculating enough to leave a manifesto, what are the chances his mention of PDP, in the livestream he was doing (again, very calculated), was just a random meme?
Just this week there was another (failed) shooter, and he posted right wing memes on facebook and stuff (no PDP tho). That does not mean that right wing memes are counterculture or anything. Just that right wing people like right wing humor and left wing people like left wing humor. But it is evidence that the shooter was probably a right winger.
@JPNunez
The Christchurch shooter referenced the “Subscribe to PewDiePie” meme precisely because he knew that the media had been smearing PDP as a Nazi, so he poured gasoline on the fire in order to incite a witch hunt against PDP which would have made the mainstream media look even more irrational and unreasonable.
Same reason he flashed the “white power” OK hand sign: the media had been obsessing with it for two years after 4chan trolled them.
He explained his strategy in his manifesto: he is trying to accelerate the culture war by committing an outrageous act in the hope that it would incite an overreaction from the “ctrl-left”, which would in turn push the “normies” to the right and goad them into action.
And you’ve completely fallen for his bait.
Except that left wingers have failed largely at this whole terrorism stuff lately, so I will ask the right wingers here that please don’t terrorize people in an attempt to troll us into violence.
On the bait thing, I guess the Christchurch guy also predicted that PDP would go on to promote a holocaust denier; so either he could see the future, or PDP just keeps signalling to the nazis.
Speaking of intellectually dishonest, many of us draw a distinction between someone being a racist, and someone being a Nazi.
I don’t watch PDP, and don’t much care whether he’s on the side of the angels or the devils. But if you call him a Nazi, that has an actual meaning. The meaning is not “he has bad ideas” or “he links to bad people online” or even “he’s an unapologetic racist.”
Calling someone a Nazi when it turns out they’re not actually a Nazi causes me to update my priors, but in the direction of giving the false accuser less weight in the future.
Bah could not edit.
This also means that the whole point of mentioning PDP during the shooting was to strawman the people calling PDP a nazi, by making the accusation look like an alt right joke; either the alt right is trying to smear PDP, or they are trying to turn calling him a nazi into a joke, to give him plausible deniability.
Which, unsurprisingly, the right wingers here do, they just excuse PDP and make calling him a nazi seem like an exaggeration.
and make calling him a nazi seem like an exaggeration.
If it’s not an exaggeration, then you are calling him a Nazi.
@JPNunez
It still only means that he’s part of an entertainment community that has these people in it, not that he is personally antisemitic.
Currently on the left especially, there is a strong belief in cooties, where people who interact with wrongthinkers are themselves considered guilty of whatever the people they associate with do. I fundamentally reject this.
I still haven’t figured out what Pewdiepie said, so I’m running on an impression.
If some noticeable fraction of his jokes are based on the premise that disliking anti-Semitism is funny (people are ridiculous to be so sensitive), I think it’s reasonable to call him an anti-Semite, though not a Nazi. He’s getting fun and/or publicity out of making Jews feel worse.
I would like to know their opinion on whether they prefer this outcome, or would rather have kept their original careers undisturbed.
It is possible to profit from being attacked by the woke hate machine. A good example would be Jordan Peterson (although he is definitely not an example of a wokey outwoked by greater wokeys). You need to have a product ready to sell. Peterson has a book, and a self-improvement program (in my opinion both very good, so it is not a blatant attempt to milk the controversy), so when you feel sympathetic, there is a simple way to act on impulse and send him the money. But you must have the product ready now, not a few months later, because a few months later people’s attention will turn to something else. (Also, if you start writing a book after the controversy happened, it will feel more like an attempt to milk it.) This is for short term success. For long term success, Peterson has hundreds of hours of interesting lectures available free online. That means that even when people stop caring about the controversy, some will still be watching the lectures, and maybe sharing them with others. Peterson will not be merely a “controversy guy”, but also an “interesting lecture guy”.
Now compare this with James Damore…
Unless there is something I don’t know about, Damore has no strategy to convert “being widely known” into money. He is not selling anything his sympathizers could buy. His only income is his job, which he has lost. Even if someone else, sympathetic to his cause, hires him, he will likely receive his market value or less (because he now has less choice), so that can reduce his loss, but it doesn’t make a profit. I also suspect that being hired because someone sympathizes with your cause, doesn’t exactly feel good; it’s like a mirror image of being a diversity hire. I would rather know I am being paid because someone respects my skills, regardless of my political opinions. Ten years later, Damore will be old news, but his job opportunities will still be more limited than before the controversy, because a few HR employees will decide after short googling that he is a potential liability.
In short, controversy was likely a profit for Peterson, but a loss for Damore. I don’t know much about Weinstein (whether he has a strategy to monetize being known), but I guess it is more likely to be a financial loss for him, too. Fame, unless it leads to sales, is overrated, IMHO.
Only if you get hired because someone sympathizes with your cause in the sense that your politics is an advantage, not if you get hired because the company needs a programmer, and they, being sympathetic (or not hostile) to your cause, don’t consider your politics a disadvantage.
@albatross11 >
A mention by a SSC commenter reminded me of the sad tale of James Damore (which I had forgotten), but until now Bret Weinstein was unknown to me, and I wouldn’t regard either as particularly “well known”.
Reading The Secret of our Success right now, and want to quickly add that the criticism I’ve seen here doesn’t stick. The author makes points sustained by a broad foundation, then peppers them with almost anecdotal illustrations, usually framed as such: anthropologist X is even suggesting that… It’s those I’ve seen contradicted here, at least so far.
Edit: got to the plant-eating infants study. It’s also second hand knowledge clearly framed as such, and the expression is: “many infants”. If even a minority of infants show a difference between plants and objects and wait for cultural confirmation, that makes the point of a (still ongoing) evolutionary process.
Comment on the piece “1960: THE YEAR THE SINGULARITY WAS CANCELLED”
I find it strange that this piece advances a hypothesis concerning what explains the slowdown in economic doubling times around 1960, but then completely fails to compare this hypothesis with readily available data that can inform us about its plausibility. The hypothesis, as I understand it, is that growth slowed down because population growth failed to keep up (and what this hypothesis entails can, of course, be interpreted widely in precise quantitative terms). But how does this compare with the data? Not so well, it seems to me. Indeed, in terms of percentage change, growth in world population actually *peaked* around 1960, cf. https://en.wikipedia.org/wiki/Population_growth#/media/File:World_population_growth_rate_1950%E2%80%932050.svg
And that peak would only result in a peak in the growth of the productive work force around 20-30 years later, when these many new kids became adults. Thus, a naive claimed relationship between peak (productive) population growth and peak economic growth would actually place the peak of the latter around 1980-1990.
One may object that this is global data, and most of the world is not that relevant for most of economic growth. Which is true. So let’s look at the US in particular. In 1960, the US had around six percent of the world population, yet accounted for 40 percent of global GDP, cf. https://www.forbes.com/sites/mikepatton/2016/02/29/u-s-role-in-global-economy-declines-nearly-50/#5f6822fb5e9e
Yet the growth rate in population per ten year period in 1960 in the US was roughly the same as in 1900-1930 (18.5 compared to 21.0, 21.0, 15.0, 16.2, cf. https://en.wikipedia.org/wiki/Demography_of_the_United_States); and we should again add 20-30 years, indeed probably a good deal more given that we are talking about a developed country, to have this growth reflected in the “productive workforce growth”. So again, not a great match either.
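Here is a rough back-of-the-envelope sketch of the lag argument, using illustrative world-population growth rates; the figures are approximate and only meant to show the shape of the naive hypothesis, not to stand in for a careful dataset:

```python
# Back-of-the-envelope version of the lag argument above.
# Growth rates are rough illustrative values, not a careful dataset.

world_pop_growth = {   # approximate % growth per year of world population
    1940: 1.0, 1950: 1.8, 1960: 2.1, 1970: 2.0,
    1980: 1.8, 1990: 1.5, 2000: 1.3,
}

WORKFORCE_LAG_YEARS = 25  # assumed delay before a birth cohort becomes productive

# Under the naive hypothesis, output growth tracks growth of the productive
# workforce, which is the population-growth curve shifted forward by the lag.
implied_workforce_growth = {year + WORKFORCE_LAG_YEARS: rate
                            for year, rate in world_pop_growth.items()}

peak_pop_year = max(world_pop_growth, key=world_pop_growth.get)
peak_workforce_year = max(implied_workforce_growth, key=implied_workforce_growth.get)

print(f"population growth peaks around {peak_pop_year}")               # 1960
print(f"implied workforce growth peaks around {peak_workforce_year}")  # 1985
```

Under the naive hypothesis, the 1960 population-growth peak should show up as a growth peak in output only around the 1980s, which is exactly the mismatch with the observed 1960 slowdown.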
In sum, it does not seem to me that this “population decline hypothesis” explains the observed pattern particularly well. Perhaps it is worth exploring other hypotheses, especially since others in fact are on offer, such as diminishing returns due to low-hanging fruits/significant breakthroughs that can only be made once having already been found (e.g., once communication speed hits the speed of light, you cannot really improve that much more; for some other, similar examples, see: https://dothemath.ucsd.edu/2011/07/can-economic-growth-last/). This hypothesis is explored in great depth, and arguably supported, in Robert J. Gordon’s impressively data-dense The Rise and Fall of American Growth: https://www.amazon.com/Rise-Fall-American-Growth-Princeton-ebook/dp/B071W7JCKW/ (He’s also got a TED-talk on it, but, although it’s nice, it does the book absolutely no justice.)
A final note: If we currently believe growth will explode in the future, then confirmation bias is the friend of that belief. And that foe of ours is surely always worth challenging with data.
I read a review on Wirecutter about “flight crew luggage.” Apparently, some luggage companies will only sell their suitcases to people that can produce an airline employee ID. Why would they have this restriction? We’re talking about suitcases, not, say, chemicals that can be used to make meth. Would the companies not want as many sales as possible? Would flight crew members really think “Uh oh, I just saw a business traveler with a suitcase from ABC Company, better buy my next suitcase from XYZ Company!”? Or is Wirecutter wrong and this type of luggage has always been easily available from Amazon?
Why would they have this restriction?
Off the top of my head, going by the Nick Cave rule that “people ain’t no good”:
(a) they give discounts to real flight crew, this means people who aren’t flight crew try and get discounts by swearing blind “oh no, I totally work for an airline cross my heart” so they had to institute “nice try but no ID no discount”
(b) ditto for above, but then people who got the discounted flight crew luggage turned around and sold it for full whack on eBay and the likes, maybe even more than full whack for “official flight crew luggage of Airline, cross my heart would I lie to you?”
(c) people trying to pass themselves off as flight crew in other situations (like, I dunno, trying to con their way onto airplanes?) with “look, I really am flight crew, I’ve got the proper luggage and everything”
Basically, if someone somewhere can see an opportunity to make a profit, they will try it, and naive businesses that go “sure I believe you, perfect stranger!” will soon go out of business.
(a) That would explain requiring an ID for the discount, but it doesn’t explain not selling the luggage at all to people without an ID.
(b) If regular passengers can’t buy it legitimately at all, wouldn’t that encourage eBay sales even more? And why would the manufacturer care if it means they can sell even more of them?
(c) There’s an easy way to stop people from thinking the luggage is only for airline personnel: sell it to everyone. Also, if some idiot accepts luggage as proof of airline employment the manufacturer can rightly blame the idiot.
Well, there could also be reasons of exclusivity of the brand, but as you say, why then make it available to anyone? I do think it may have something to do with this being, if you like, industrial work-wear style manufacture. Sure, you can buy a hard hat and safety boots if you’re not a construction worker, but why would you? (Although it seems that within the world of workwear there are brands analogous to fashion brands, so who knows?)
So the assumption may be “if you’ve heard of us, you must be within the industry, so that’s why you want to buy our goods”.
As to why people would want to buy ‘professional’ goods, I imagine to look trendy and more unique than the mass-market luggage (snobbery over luggage sounds ridiculous but since people pay premium prices to wear the ‘right’ brand, it must exist) and I do imagine some will try to use it as a cheat even in small things – you say “if some idiot accepts luggage as proof of airline employment the manufacturer can rightly blame the idiot”, but have you ever worked retail where you get very little to no initiative, ‘the customer is always right’ and if you question anyone about “okay, sorry sir, but I must ask you to prove you really are a flight attendant with Airline X before you get the flight crew discount on coffee/a meal/the parking space” you lay yourself open to being flayed by your boss for pissing off customers and getting the place a bad review online, so you just take it as read that “guy with the same brand of luggage as all the real flight attendants I see passing through here is also a flight attendant”.
That’s the reasoning after all behind airlines and emotional support animals and people abusing this to be able to bring their doggie-woggie on board with them and/or look for special treatment. If you, hapless cabin crew, stop a passenger and question “Do you really need this animal?” you get screaming abuse and a social media storm over “abusive airline denies me necessary psychological support” and people organising a boycott and demanding you be fired and the airline shut down because it’s easy to whip up online mobs.
@Deiseach,
Over here I’ve seen some new brands of work clothes come and go (and for some reason it’s electricians who seem to start them, or at least they advertise themselves that way), but I seldom see anyone in the trades switch for long from mostly wearing Ben Davis, Carhartt, Red Wing, and (a little rarer) Wolverine, which were the brands my father wore 50 years ago as well.
I’d say price discrimination, but I think it’s the wrong way around.
Or a tax/expenses issue, but there are plenty of shops that show both prices without VAT for trade, and with it for non-trade.
My money is on “just wrong”. From FlightAttendantShop.com’s FAQ it sounds like they offer a discount to people with an airline employee ID but sell at a higher price to the public. Airline-branded luggage might be restricted to employees, but that makes sense.
[CW: CW]
Here’s a deeply depressing piece from Sam Kriss on how YouTube is innately a [witch] platform and the only way to get rid of the [witches] is to shut it down entirely.
I don’t totally buy the arguments about the nature of web video, but I do agree that (a) it’s impossible to moderate such a large platform, (b) moderation is strictly necessary to avoid abuse and harassment, and therefore (c) we’re fucked. And that’s even before we get to the question of cracking down on extremist political content.
It makes me miss the decentralized internet, back when we were a bunch of disconnected PHPbb and vBulletin installations. Leakage from one community to another was limited. You might be aware that St*rmfr*nt was out there, but if you never visited their forum, you never had to care about them. Whereas now, everything is part of centralized social media platforms with algorithmic recommendations, so no matter how little you want to seek out the [witches], you might find them or they might find you. This is why we care so much more about “adjacency” now. If there’s no longer a bright line separating the extremists from the far edge of the Overton window, if it’s so easy to slip from Weinstein to Peterson to Molyneux to Anglin, well, we just need to treat Weinstein like Anglin to prevent further slippage. And then people read Weinstein and see how totally reasonable he sounds, and conclude there’s nothing wrong with anyone else who’s been deplatformed either…
I don’t like the notion of heavyhanded regulation and censorship, but I don’t think there’s any alternative to it. I used to believe “the answer to speech is more speech” but how would more speech have prevented the Christchurch massacre? And yeah, the centralized gatekeepers of the mainstream media cheered the Iraq invasion. But if YouTube was around, would that have stopped it, or just cheered it on like everyone else?
(Here’s where I get my digs in at Horrible Banned Discourse proponents by pointing out that The Atlantic – not even a right-wing rag like National Review, the frickin’ Atlantic! – published excerpts from The Bell Curve back when it was first published in the ’90s. So whatever you want to say about new perspectives that weren’t allowed in the mainstream back then, this ain’t it, chief.)
If I seem like a confused wreck, it’s because I am. I have no idea what to do, or if anything can be done. All I know is: I don’t want to live like this, but I don’t want to die.
The diagnosis seems wrong to me. Youtube is right wing because left wing orthodoxy dominates the media, thus anything sufficiently right wing can be presented to a viewer as forbidden knowledge (because it sometimes is).
This is compounded by a problem that I’ve noticed having to do with what I’d describe as secular anti-racists. This subset includes SJWs, and other activists, but also just your regular history teacher who is a moderate D/R. A huge majority of these people cannot state why racism is bad, they just know it is bad because they have been told so.
So let’s say a kid sees some video on youtube about why the white race is clearly superior. The vast majority of rebuttals will (as you point out) be sputtering, stammering exclamations along the lines of [witches]. Secular anti-racists who can successfully engage on this topic are few and far between. Religious anti-racists tend to be more persuasive (which is probably why both the anti-slavery and civil rights communities emerged from the churches), but I find people to be very reluctant to argue those points of view.
So I don’t think we need any sort of regulation or for tech companies to want to try to prevent Christchurch. Trying to do that is like trying to pin a wave upon the sand. Instead those that fear youtube radicalization need to merely get better. They have become soft.
Is secular anti-racism that difficult for people to defend? Seems to me that ‘not according to the colour of their skin but according to the content of their character’ encodes pretty much all you need – that people should be treated as individuals rather than as undifferentiated avatars of a homogenous outgroup because basic fairness demands that you should only be punished for the wrongs you actually do, not pre-emptively punished for membership of a demographic that someone else dislikes before they’ve even made the effort to find out if you actually embody the faults they are imputing to your demographic.
So, your go-to example of “secular” anti-racism is a quote by the Reverend Martin Luther King Jr, Baptist minister and founder of the Southern Christian Leadership Conference?
The reason actual secular anti-racism is increasingly difficult to defend, is that it has increasingly distanced itself from Reverend King’s words and at times seems positively eager to judge people according to the color of their skin.
An addendum to your point, which pairs with my longer post below:
That is a secular anti-racist motto, but as you pointed out, it is cribbed from religious anti-racists. Many of the seculars likely never knew/know why it’s inherently true or where it derives from. They simply accept it because it sounds good, which is why they cannot defend the system when it is attacked.
Secular anti-racism in its current incarnation is a university student attempting to buy alcohol with a fake ID, then attempting to steal three bottles of wine when refused, then claiming racism when challenged and the cops are called; it’s the university officials falling right into line, claiming the store is small potatoes, racist, and has a long policy of profiling students (nah, I think the only ‘profiling’ going on there is ‘students are more likely to try and steal from us because we’re on the doorstep’), giving aid and support to a howling mob, then emailing everyone about how the jurors who found against them are a bunch of redneck racist idiots, and then being all surprised when the punitive damages for being a bunch of jackasses get hiked up accordingly.
Now, defend that to me if you can, but it’s got hardly anything to do with “you should only be punished for the wrongs you actually do, not pre-emptively punished for membership of a demographic that someone else dislikes” and all to do with “our university is the only reason your little town doesn’t dry up and blow away, so kneel before us, peasants!”
That’s an unconventional definition of “is”.
That’s an unconventional definition of “is”.
Define to me a conventional one, then, and I’ll be glad to listen. Until then, when the examples of “anti-racism” I see are all “screaming fits of entitled hysteria”, then I’ll take it as it comes.
Such as: guy wears a heavy metal T-shirt on Canadian public transport; a very woque black person goes off on a Twitter tempest in a teapot about it; and despite it being pointed out that this is a heavy metal album T-shirt and not some Nazi white supremacist symbol, several of the commenters (not black themselves) persist in “how do I report this violent act of aggression which is making me feel unsafe at my very keyboard?”
Deiseach:
The media is a distorting filter. You probably have no idea what most anti-racism on campus looks like, but the media (old, new, and social) will happily bring you all the outrageous infuriating details about how horribly someone’s acting on some campus in the name of anti-racism.
My guess is that the median modern version of anti-racism is some lukewarm sermon on how diversity is our strength given by the EEOC coordinator during mandatory diversity training once a year, with most of the audience tuning her out and reading their phones.
I do not agree with @Deiseach’s definition of secular anti-racism. I agree with your definition of its outward statements about the world. That is, modern secular anti-racism’s ‘not according to the colour of their skin but according to the content of their character’ is a decent mission statement (although some have certainly strayed from it). Remember, my version doesn’t only include SJWs; it’s also your average centrist Dem/Republican over the age of 30. Think schoolteachers, small business owners, franchisees, etc.
A mission statement, however, is not a defense of that mission statement. And the secular anti-racist in modern parlance does not defend this POV with phrases like “endowed by their creator”, “created equal”, etc. Those are inherently religious if you do not know the works of the classical liberals. If you want to see a man who blends classical liberal justifications with religious justifications for the elimination of slavery, the best example I know of is Abraham Lincoln. But many modern anti-racists would never use much of his rhetoric. So, I find, they are wholly unprepared.
Simply saying the motto doesn’t defend it, because kids notice things. They notice who the bullies at the school are, who are the smart kids, who are the dumb kids. Teenagers notice things, like who makes the football team, who is going to college, who got knocked up, where not to go. Then young adults also notice things, like who commits the most crimes, where the best schools are, etc. And a mission statement doesn’t rebut any of that. “Treat people fairly”, without a strong foundation, is just a war against noticing.
There are two things I think secularists on youtube [and elsewhere] have trouble defending:
1. The total absence of any innate, civilization-influencing behavioural group differences (All men are created equal)
2. Arguing against the idea that a subspecies or extended family has a biological imperative to prioritize the well-being of those more genetically similar over those less genetically similar.
If you’re a christian you can appeal to the equality of souls. If you’re a secularist you’re in a world where [especially if you’re stuck on #1] different, non-fungible subspecies of humans are in soft competition with each other, and modern anti-racism is maladaptive.
#2 is a moral issue, #1 is not.
______________
Arguing for extremely non-controversial rules of conduct between groups of people, intended to reduce the likelihood of conflict, is easy enough to justify on grounds of self-interest. But this is so far from what people care about today that talking about it almost makes you suspect in the eyes of others.
#2 is a factual issue–should we expect humans to have instincts in that direction thanks to evolution, and do we actually see that?
As best I can tell, there’s not a lot of evidence that humans are group genetic interest maximizers in any meaningful way, and it’s hard to see how that would have evolved in most of the environments our ancestors lived in. We see the evidence of some level of group selection (IMO) in instincts toward tribalism, but those are often turned against ethnic genetic interests in favor of some nationality, language, ideology or religion.
There’s also a moral issue of whether we should follow such instincts to the extent they exist (rape, stealing, and murdering romantic rivals are also instinctive, but we still lock people up for doing those things). ISTM that’s just the naturalistic fallacy rearing its ugly head.
@Albatross
Nationality, language, and religion tend to be soft-bounded geographically, and for obvious reasons the people you most often would procreate with would tend to share your language and your religion. Colonialism and missionary work of the last 2.5 centuries have blurred this somewhat.
There’s also the fact that a historic near-out-group may have more similarities than a far-group and so in the mind of the person is more of a threat even if the historic rivals share more DNA than the far group.
There are also going to be plenty of case-by-case overrides. Civic-mindedness can temper the effects of nepotism in society but that’s not the same as neutralizing them or even engaging in reverse nepotism.
But I’ll submit as counter-evidence:
1. The prevalence of racial gangs in prisons and schools
2. The tendency for people to be more trusting of individuals that look more similar to them [in experiments]
3. The tendency of people who are friends to share a higher portion of DNA than strangers do
Note how #2 does not use the word ‘Race’ here. In-group preference on the basis of some genetic similarity can operate at multiple levels of granularity. It’s not clear to me why drawing a donut around the ‘race’ level of granularity, calling that bad, and then saying that humanism outside it and ethnocentrism inside it are acceptable makes any sense, except as historic legacy.
There’s a lot of evidence that humans are group interest maximizers, though, where people use clues to decide who is ingroup and outgroup (and intentionally send group signals to make identification easier).
Certain genetic differences do correlate with ethnic differences, which makes them usable proxies for in/out-group discrimination.
Yep. Tribalism due to group selection makes sense to me. Selection to favor your own race/distantly extended family without any particular tribal or other ties doesn’t. Most of human history was people interacting with third/fourth cousins, and very little of it was people interacting with members of other racial groups.
Yeah, but people are very eager to detect patterns, often erring on the side of seeing patterns where there are none. So an absence of a specific pattern during human evolution doesn’t prohibit people from seizing on it quickly.
For modern definitions of “racial”, sure. But while everyone in your hunter-gatherer band for most of human history was probably a third or fourth cousin, those bands did interact with each other, and I don’t think you can make such strong statements about kinship there.
Admittedly this is where my ability to generalize ends — there’s a whole range of inter-band behaviors documented from the forager societies we know something about, from extreme hostility (whether you can call it “warfare” is a matter of definition, but plenty of killing and even more beatings and kidnappings) to obligate marriage exchanges (‘no marriage within band’ isn’t an uncommon norm in cultures like this).
I would agree, but then even though I’m not religious I was raised in a Catholic family. I wonder if this concept of “basic fairness” isn’t basic at all, rather a direct consequence of Christian theology, which focuses on the moral equality of all humans before God.
While there were historical examples of explicitly racist Christian denominations (e.g. the antebellum Southern Baptists), they were unusual. Mainstream denominations sought to convert foreign people, but once they converted they were generally considered morally equivalent. Most other religions don’t do that, they are usually concerned only with a particular ethnicity or nation. (Islam is, in theory, ecumenical like Christianity, but in practice many Muslim societies tend to be multicultural and/or tribal).
Maybe as the Western civilization becomes more secular, the concept of “basic fairness” is bound to fade away and the only options left will be either pragmatic racism or tribal identity politics.
I don’t buy the conspiracy version of it, because impersonal mechanisms like evolution and capitalism are infinitely more powerful than any conspiracy, but the culture war is and always will be a sideshow distraction from the real forces at play. The big content platforms are not going away because they are wildly profitable. They will only ever be moderated in such a way as to avoid killing the goose that continues to lay golden eggs. These are brute facts of the matter. It makes no more sense to have an opinion on whether YouTube (or something very much like it) should cease to exist than it does to have an opinion about whether we should stop having earthquakes.
If you want to make a dent here start working on the problem of witch detection software.
I’m not sure about this because I don’t think the golden eggs are laid much by political provocateurs. I mostly use YouTube for video game content, and the game review / streaming channels I watch have many, many more subs than probably every right wing channel except for Crowder. And that’s just video game stuff. Now look at music videos and makeup tutorials and generic comedy stuff, etc.
Where this would go wrong is Witch Creep. You start with banning Alex Jones and a year later we’ve got Milton Friedman in the list of right-wing hate youtubers on the front of the NYT.
Sounds an awful lot like China’s social credit system. This I find dystopian. See above with the Witch Creep.
I don’t necessarily disagree with you and I don’t necessarily disagree with BBA. My opinions on this one are complicated and unstable. My points with the above comment is that they don’t matter. It’s like being against the existence of nuclear weapons, completely pointless.
From the article:
So are books.
Left wing ideas got expressed quite effectively by people writing books, however.
From the end of the piece:
Well yeah. That note is pretty weak though. Feels like a narrow distinction just because Sam Kriss’ outgroup chooses YouTube over blogging.
Yeah, it does feel a little like he’s noticed some uncomfortable implications from his argument and hastily tried to semi-acknowledge them at the end, but TBH the way things are with lefty CW stuff, it’s probably to his credit that he acknowledges them at all. (He does say literature rather than blogging though, and there’s an earlier reference to the uselessness of blogging against neoliberalism, so I’m not sure if it’s fully reducible to ingroup/outgroup dynamics. Who’s even in that guy’s ingroup these days?)
Having ploughed through that article, it seems to boil down to “We lefties can’t be as effective as the horrible righties because we’re just too cool and compassionate and nice” which may be flattering but doesn’t help his case.
Given that I had to look up who Sam Kriss was, and the results pointed to him having been embroiled in a little online campaign scandal of his own, I would have thought he’d be more sympathetic to alleged witches, but no – witch hunts happen because of all the horrible witches out there, the only solution is to burn the whole thing down because us nice guys are too nice to effectively fight back.
Also, I laughed at the obligatory “showing off my Eng Lit chops” bit at the end:
Anticipating a counterargument is pretty much the same thing as refuting it, isn’t it?
Thanks. I didn’t make it that far.
Yes, it doesn’t make lots of sense.
I recall there was a study that found that conservatives are more attractive than lefties. Maybe it won’t replicate, but if it’s true then it might explain why conservatives come across better on video.
Anecdotally, I don’t find hard to see why Jordan Peterson or Lauren Southern get more views than, I dunno, ContraPoints?
I think we best put that to the test @vV_Vv, and compare one Catholic with one Communist:
Guy Fawkes vs Che Guevara
Who won?
Jordan Peterson’s channel has more subscribers than ContraPoints, but the trend in view numbers is the opposite direction: 50k-800k vs 500k-2.5m.
Trying to read it charitably, but it takes a lot of trying. Without it, it sounds like “lonely people allowed to express themselves inevitably turn into Nazis”. I’m not sure why the author is confident in labeling any pathology coming out of YouTube “right wing” (have they never heard of tumblr? Different format, same outcome, mostly other wing) so it’s hard not to assume this is just a “boo outgroup” piece. Calling them the politics of “loneliness” a) feels awful damn rich coming from a blogger and b) feels like just another in a long line of insulting introverted nerds because it’s easy and feels good.
I don’t see any solutions in there, and stoking the fires of “everyone who disagrees with me must be stopped and deplatformed now before their minor thoughtcrimes become Christchurch massacres, which of course they inevitably will” feels really scary and counterproductive.
Did you read the same piece I did? I thought it was very explicitly against deplatforming right-wing YouTube. Re: “Different format, same outcome, mostly other wing,” isn’t that exactly what he’s getting at here?
It’s a kinda weird definition of “right-wing”, but I think it tracks – in SSC terms, he’s just trying to look at the ideology rather than the movement.
Maybe we didn’t read the same post – the one I read was all about the problems with “right wing YouTube” and how YouTube was “always going to be ruled by the right”.
Kriss doesn’t seem to have any real issue with deplatforming right-wingers on YouTube except that he thinks it might be ineffective (the free speech argument is just a “squabble” and anyway the right wingers aren’t really engaging in speech, it’s just a “concentrated torrent of non-communication” to zero audience (again, quite rich coming from a guy blogging into his own personal void because more mainstream outlets dropped him for getting too aggressive with his girlfriend and then issuing a non-apology apology)). His problem with deplatforming, other than it not working, is that it catches up Antifascists. Left unexamined of course is whether the Internet black hole plays any part in the radicalization of Antifa. Kriss’ preferred solution?
As for the weird definition of right-wing, it sounds like he’s just taking shots at “the wrong sort of left-wing”. In SSC terms, he’s attacking his near-outgroup. You can’t defend that as an idiosyncratic definition of “right wing”. It’s a pure partisan attack. Leftism is “mass participation politics”. Rightism is “the politics of loneliness”. Again, it’s hard to separate Kriss’ argument from the usual “righties are gross lonely neckbeards in their parents’ basement”, just wrapped in better turns of phrase.
Charitably I think he’s noticed a real issue about the “black hole” effect of something like YouTube, but he doesn’t really say anything profound about it after that. The rest of the piece is all about defending the idea that the badness of YouTube is fundamentally right wing and carefully avoiding talking about similar phenomena elsewhere online that don’t feed the partisan slant of his narrative.
Speaking of people doing things together, perhaps Kriss’ friends should do an intervention to get him to confront his metaphor habit before it gets completely out of hand.
The Christchurch massacre killed 51 people. The Iraq War killed… well, we lost count, numbers vary wildly, the low bound seems to be around 110,000.
You can’t just elide around the fact that the mainstream forces who would be carrying out any “deplatforming” have multiple orders of magnitude more blood on their hands than the people they’re trying to silence, and a history of silencing people opposed to their wars of aggression. That’s arguably the #1 problem with the idea in the first place.
Wait, the Iraq War is the responsibility of the deplatformers of the world? Cheney and Rumsfeld and Wolfowitz are antifa, is that the idea?
EDIT: That comes across as much snarkier than I intended, I’m not trying to start a fight here, I’m legitimately confused as to what you’re trying to say. There were hundreds of thousands of people on the streets protesting the Iraq War, and I think most of those who are responsible for deplatformings were either there, or would have been if they’d been old enough. So what you’re saying is extremely confusing to me.
Deplatforming is carried out by a small number of massive tech monopolies, typically in response to outrage pieces at a small number of media outlets; protesters have little or nothing to do with it. Many of the media outlets trafficking today in said outrage pieces backed the Iraq War to the hilt at the time. Something like a third of Vox, for instance, is owned by NBC, which in 2003 fired (one might even say deplatformed) Phil Donahue for being antiwar.
If deplatforming hit the New York Times and Vox Medias of the world, instead of their competitors, I’d be far more sympathetic to the idea. It doesn’t and never will. Censorship exists to protect those who already have power, and media and tech barons alike want to preserve a status quo that’s keeping them wealthy and powerful.
I’m gonna be honest, to me this reads like an argument against deplatforming anyone.
It’s impossible to keep the incidence of any crime (including murder) at precisely zero. The standard way of preventing crime is deterrence; it mostly works, though not always. If we’ve tried enhancing the usual ways of fighting crime, and tried any other plausible ways that don’t involve suspending fundamental liberties, and the crime rate is still excessive (say, terrorist bombings killing dozens of people every day), it may be reasonable to contemplate suspending fundamental liberties if it can be expected to help the situation significantly.
But, crucially, the threshold above which the rate of a crime is considered “excessive” enough to suspend fundamental liberties can’t be “anything above zero”. A standard that it’s OK to curtail fundamental liberties (such as speech) as long as they might indirectly lead to a slightly increased incidence of murder (or even terrorism) would lead to a system with no civil liberties at all — and it still wouldn’t achieve the goal of zero terrorism. Currently we have something like one ideologically motivated, non-Islamist murder a year throughout the Western world (there aren’t many more Islamist ones either), which is about the lowest possible rate above zero. In particular, IMO single events (such as a terrorist attack) are essentially never legitimate reasons for restricting basic rights.
All of the above also applies to censorship by private companies. While IMO private companies should have the right to censor their content, we should consider it undesirable for many of the same reasons we consider it illegitimate if done by the government; we should discourage rather than encourage it. That’s if the platform has no effective alternative, and thus its censorship would have a significant effect on public discourse. Censorship by private companies is not much of a problem if the platform has major alternatives and thus the censorship has little effect on public discourse — but in that case, it can’t achieve the desired effect either.
In general, I tend to be very skeptical about using single events (such as a terrorist attack) as justification for policy change. Much of the time the policy change is only tangentially related to the event, and the “justification” is mostly to paint opponents of the policy change as insensitive, or not sufficiently opposed to the evil terrorists.
I think you mean terror attack, not murder, since the Christchurch attack alone killed 51 people.
There were 4 deadly terror attacks by Islamists in the West* in 2018, 8 in 2017, 9 in 2016 and 8 in 2015. So that’s 4-9 times more than what you claim, for the previous 4 years.
* With a relatively strict definition of the West, excluding Israel, Bosnia, Russia, etc.
Terrorist attacks aren’t single events. They are part of a pattern.
Secondly, your entire argument is very weak, since a single incident can reveal a weakness. The Chernobyl incident was a good reason to reconsider how to build and run nuclear reactors.
I counted a mass-murder as one.
I didn’t check the details on Islamist attacks, as the post I replied to was about censorship of far-right views, and far-right terrorist attacks. In any case, when looking at orders of magnitude, Islamist terrorism in the West is still much closer to 1/year than to, say, the total number of murders, which is orders of magnitude more.
In that case the pattern may be a reason for a policy change, not a single event.
A “weakness” of the form that we can’t prevent some particular event (e.g. a particular crime) with 100% certainty is something we have to live with, and not a reason to curtail basic rights. The only exceptions are single events of extreme scale, such as a major war.
@BBA says:
A bit off topic, but should I feel ashamed that before this post I’ve never heard of this “Sam Kriss”?
Is he worth learning more about and reading more of?
Definitely no to the first. I’ve never heard of him before now. And googling does not reveal him to be well known.
Judging by this essay, I’d say no to the second. It’s possible he just wasn’t batting well that day, but he has not managed to convince me to read a second one. I’m curious what your judgement is.
@quanta413,
My reaction was much the same as yours, but other commenters seemed to indicate previous familiarity with the author and I was curious about why.
I’ve heard of him and read him before, but I don’t think he’s prominent, and you have nothing to be ashamed about. I know him for the Atlantic no-trees-on-flat-earth article and for the sexual harassment thing.
Someone that writes like this:
isn’t writing for you and me. It’s the verbal equivalent of contemporary art music.
We should respect his wishes and not read him.
> the verbal equivalent of contemporary art music
Utter nonsense. The passage you quote is perfectly lucid. If you think otherwise, I suspect it’s because you’re pattern-matching to unrelated academic writing that happens to use some of the same words, and thus shutting down before you even reach the stage of trying to parse the sentence.
I’m reading that, after several tries, as “writing is different from video because it uses words instead of directly capturing images, and this is significant because [something].” There are lots of YouTube videos that are just people talking (with words), but I suppose that doesn’t count for some reason.
The bit about writing not being involved in global communications infrastructure eludes me entirely, unless you have a very finicky definition of what does and does not involve comm infrastructure. I’m sending this post over the same internet YouTube uses, and there are plenty of uploaded text content sites which are at least roughly analogous to YouTube.
EDIT: I went ahead and read the link, and this man appears to be dishonest, biased, and thoroughly pretentious to boot. He opens with a bizarre and unhelpful metaphor about slippery membranes, and goes on to define YouTube as right-wing because all the loud leftists on it are assholes and therefore “reactionary” in character.
@The Pachyderminator,
It didn’t seem clear to me either, I’d just write that off as my being undereducated, but @brad seems very educated to me so I’m inclined to believe his word on the piece’s quality.
The sentence before that name drops Derrida and Lacan. If I’m pattern matching it’s to some pretty reliable signals.
The essay is incoherent junk. There’s no attempt to quantify whether he’s getting an endless stream of “Nazis” because he’s looking for it or whether all roads on youtube inevitably lead to “Nazis”. And he seems to confuse many not-Nazi things with Nazis.
The truth is he found a bunch of right wing crap in his youtube recommendations because he is looking for it. Youtube fed his obsessive mind what he wanted. He cites a video that he fully admits he may have been the only viewer of.
On the other hand, youtube feeds me key and peele, k-pop music videos, videogame vlogs, and cooking shows. Because that’s what I watch on youtube. It’s not rocket science. Nazi videos? 0. Alt-right videos? Also 0. Jordan Peterson videos? Also 0. Not that there’s anything wrong with watching Jordan Peterson.
~million subscribers sounds like a lot but it’s actually not that big. You can find 18th century cooking series on youtube with a similar number of subscribers. Many cooking or eating vloggers have larger subscriber counts.
To be fair, the Friedman family cooking series will lead to right-wing extremist Milton Friedman as recommended videos.
Stop it you. That’s slander. After all, everyone knows David Friedman is into medieval cooking so if he did a cooking series it would be set several centuries before the 18th century. He would never be so historically careless.
However, the series of medieval cooking tips would include information on sourcing the correct ingredients in the current day. By its very nature, this information would be supportive of markets and free trade, which the essayist would consider to be fundamentally right wing.
“YouTube was always going to end up being ruled by the right, because right-wing politics are a politics of loneliness.” Subtle.
Here’s my equally charitable counterpoint: “Mainstream media was always going to end up being ruled by the left, because left-wing politics are a politics of weakness.”
Snark aside, isn’t the right-wing associated with stronger communities and less loneliness? “Family values” and such?
I will ignore the left/right aspect of all this, because I think that the problem is advertising.
It would probably be possible to set up a large website in a way that makes most people happy, by keeping everyone in their bubble and pretending that the rest of the world does not exist. (I am not saying that would be a good thing to do, only that it would be possible.) But that is not what maximizes the number of ad views. People reading only stuff they agree with would get bored after a while, and leave. People who are pissed off will stay and keep “fighting”.
The greater the controversy, the more ad views, and the greater the profit… until some moment when things become too controversial, and some companies start having second thoughts about being associated with that kind of content. Then, a ritual sacrifice must be made to appease those companies. Maximizing the profit means riding this wave carefully… to be not too controversial, but also not too uncontroversial.
And yes, this entire thing can be manipulated by politically motivated people throwing a public hissy fit about something quite mild, because that is a weapon that works. But even without such people, there would be always someone angry at something, because if you are not making anyone angry, you fail at maximizing the ad views. There is an optimal amount of anger, and it is greater than zero.
Therefore, the large websites will keep making you angry.
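A toy sketch of that “optimal anger” argument, with completely made-up numbers (engagement rising with controversy, advertiser pullback rising faster at the extreme), just to show why the profit-maximizing point sits strictly between zero and maximum controversy:

```python
import numpy as np

# Invented curves: angrier users engage more, but sponsors flee extreme content.
controversy = np.linspace(0, 1, 101)
ad_views = 1.0 + 4.0 * controversy             # engagement grows with controversy
advertiser_pullback = 3.0 * controversy ** 3   # pullback grows much faster near the top
profit = ad_views - advertiser_pullback

best = controversy[np.argmax(profit)]
print(f"profit-maximizing controversy level: {best:.2f} (interior, neither 0 nor 1)")
```

With these made-up curves the maximum lands around 0.67: too bland and you lose views, too toxic and you lose sponsors.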
@all
Yeah, the Kriss essay is kind of a disaster. As am I.
I want to scream THIS HAS ALL GONE HORRIBLY WRONG, CAN’T YOU SEE THAT? NOW TURN THOSE MACHINES OFF BEFORE THEY KILL US ALL but nobody ever listens to the raving lunatic in the opening scenes of a disaster movie. I’m finding it impossible to articulate my deep existential dread that we’ve unleashed forces we won’t be able to defeat… or maybe, that human nature trends towards autocracy and xenophobia and all the progress we’ve made towards liberal democracy and multiculturalism could vanish in the blink of an eye.
(And no, this is not just Trump Derangement Syndrome. Trump is a joke. I’m worried about the next joke not being so funny.)
Or I could just be losing it. If I ever had it to begin with. Whatever. As Warren Zevon said, enjoy every sandwich.
Well, obviously, that is what the conservatives and classical liberals have said all along.
Hopefully I’m not being too snarky. But it’s right there in the Federalist Papers, Locke, Montesquieu, and Tocqueville.
All the more reason why enemies of the liberal order should be deplatformed with extreme prejudice.
Good God, I thought M*ldb*g was huffing glue when he argued for imposing totalitarianism in order to prevent totalitarianism, but it’s starting to make sense to me…in the abstract anyway. Object level I’d much rather be ruled by the people he wanted suppressed.
But deplatforming itself is illiberal, so you don’t have a liberal order if you are deplatforming people.
Furthermore, with human nature being what it is, you can’t just give some people very strong tools of oppression and then expect them to limit that to actually only oppressing the illiberal, rather than oppressing what they dislike, don’t understand, what puts a burden on them, etc.
Humans react badly to short-term crises. We think we’ll get over this specific tragedy right now with deplatforming, but what are we giving up?
Even the New York Times scare article about alt-right YouTube showed that the person could be walked back out of the rabbit-hole with differing views. Or the story a month ago in an OT about “my son joined the alt-right after being labeled a sexist, but left when my husband and I showed him some respect.” Or Daryl Davis.
I might be wrong, but it’s really easy to make mistakes in favor of “let’s crush our outgroup into dust.” If you think you can lose weight eating 30 pounds of jelly donuts a day, you need to realize there is a significant bias in your head. It doesn’t mean the jelly-donut-diet is wrong, but you need to slow down and think coolly.
– worries about autocracy
– promotes heavy-handed censorship and deplatforming with extreme prejudice for the enemies of the liberal order.
You do seem really confused.
“At that point, it became necessary to destroy the village in order to save it.”
To steelman this somewhat…
There are heavy restrictions on thought and speech.
Restrictions on behavior are kept lax (or maybe fun is also mandatory?)
The censorship to combat xenophobia seems narrow to me though. Insofar as multiculturalism is a public policy, why should censorship be limited to defending one policy against critics? Why not all policies? Bad monetary and trade policies might be more damaging than immigration restrictionism.
I mean, you are going to have a very narrow discourse then. Every TV station is going to be Charles C.W. Cooke arguing with Victor Davis Hanson and Elizabeth Nolan Brown.
I can understand that there are people who would approve of such restricted access to information for the proles, but I don’t see why I would ever agree with them, especially when I’m a prole by their reasoning.
That’s exactly the position I would articulate as a conservative.
For 500,000 years humanity has been stuck in an iterated prisoner’s dilemma. For 500,000 years, any time a tribe has had the cultural high ground they’ve defected, and crushed their ideological opposition. When the tables turn (Every 40-100 years or so?) the new dominant group crushes the old.
For the last 75-150 years (in the US) the red tribe and blue tribe have been cooperating. Rights have steadily expanded, and political violence is extremely rare. There are people on the right (and left) who want to change that paradigm. But at least for now neither side has defected. The best weapon the defectors have is to try to convince the cooperators that the other side is about to defect, so we should defect first.
In an iterated prisoner’s dilemma the best strategy is to keep cooperating until the other side defects, then punish them for their defection. But in our game “the tribe” can’t always control its defectors. So I propose the best strategy is to get as many people as possible to pre-commit that they’ll keep cooperating across tribal lines, even during years where enemy defectors seem to be taking control. Personally, I wouldn’t want to live in any other world.
TL;DR The best strategy to win our modified prisoner’s dilemma is to promise you’ll cooperate unconditionally, and hope the other side reciprocates.
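For what it’s worth, here’s a minimal simulation of that scenario, with invented payoffs and an invented 5% “rogue defector” rate. It only shows that under these specific assumptions (the other tribe is fundamentally cooperative, retaliates against defection, and can’t control its rogues) unconditional cooperation comes out ahead of tit-for-tat, because it never feeds a retaliation spiral; change the assumptions (e.g. an opponent who actually exploits cooperation, or a forgiving variant of tit-for-tat) and the ranking changes.

```python
import random

random.seed(0)

# Standard prisoner's dilemma payoffs: (my move, their move) -> my score per round.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not their_hist else their_hist[-1]

def always_cooperate(my_hist, their_hist):
    # The "pre-commit to cooperating no matter what" strategy.
    return "C"

def noisy_tit_for_tat(my_hist, their_hist, rogue_rate=0.05):
    # A tribe that wants to cooperate and retaliates against defection,
    # but can't control the occasional rogue defector in its own ranks.
    # Note the noise only ever ADDS defections; there is no forgiveness.
    if random.random() < rogue_rate:
        return "D"
    return "C" if not their_hist else their_hist[-1]

def avg_payoff(my_strategy, rounds=100_000):
    my_hist, their_hist, score = [], [], 0
    for _ in range(rounds):
        mine = my_strategy(my_hist, their_hist)
        theirs = noisy_tit_for_tat(their_hist, my_hist)
        score += PAYOFF[(mine, theirs)]
        my_hist.append(mine)
        their_hist.append(theirs)
    return score / rounds

print("tit-for-tat vs. a noisy tit-for-tat tribe:   ", round(avg_payoff(tit_for_tat), 2))
print("unconditional cooperation vs. the same tribe:", round(avg_payoff(always_cooperate), 2))
```

On these particular assumptions the retaliation echo never heals (neither side ever forgives, and the noise only adds defections), so plain tit-for-tat eventually collapses into permanent mutual defection while the unconditional cooperator keeps a steady near-full-cooperation payoff. It’s a toy, not a proof.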
Looks like a recipe to obtain the worst possible result.
I see where you’re coming from. But in my mind, even if “my tribe” defects first and crushes out opposition, it’s just a matter of time before we get crushed in turn. I really believe that there is no guarantee that modern, liberal egalitarianism will ever come back once we lose it.
I’m open to other ideas. What kind of strategy would you recommend?
Tit-for-tat. Take casualties on both sides until the survivors agree to a truce. How do you think modern, liberal egalitarianism came into existence in the first place? Certainly not by one side rolling onto its back.
I think (and may be wrong) that a lot of the great leap in egalitarianism comes from oppressed groups escaping the tit-for-tat in Europe and moving over to America. It’s my impression the tit-for-tat cycle was largely broken at that point due to a combination of factors that historically didn’t exist.
Surplus of resources and space (due to the collapse of the indigenous population)
The influence of the Quakers.
Large communities were stronger communities, and the threat of an outgroup leaving to build their own community was enough to shift the balance of power.
Then later on
The need for culturally diverse states to unite against Britain forced some measure of egalitarianism.
Federalism creates an atmosphere where tit-for-tat isn’t necessary. If you don’t like your state, you can leave.
Slavery being a South-centric practice allowed the Northerners to be virtuous at no personal cost, up until secession triggered a civil war.
After a bloody civil war that was fought in the name of outgroup rights, I think momentum carried us to where we are today, with WW2 spreading it to Europe.
I think that the tit-for-tat lasted for 500,000 years, and only stopped gradually over the past 400 years because we got a bunch of surplus resources. I imagine it’s possible we get another boost like that if we ever become an interstellar species, but I think that if we slip back into the old ways, it may be many hundreds of years before things go back to the way they are now.
(I’m a conservative but consider myself gray tribe, so I hope you’ll forgive me if I use red tribe’s perspective here)
I don’t imagine my tribe can oppress/kill the Left for a few years (or rather, decades), and then one day Chris Matthews or AOC or Bernie Sanders goes on TV and says “Ok guys, you win, please stop oppressing us and we promise we won’t try to make you bake any cakes,” and then the right, led by Trump/Sean Hannity/Paul Ryan, says, “ok we accept your surrender, and because we are so magnanimous we will stop doing things you find oppressive like allowing discrimination based on gender identity.” There is nobody on either side who could negotiate a truce even if everyone wanted to! It seems to me a tit-for-tat strategy would result in everyone killing each other until there are so few people left that it doesn’t make any difference anyway.
The US didn’t leave tit-for-tat behind at all during the period you see. They explicitly built it right into the system. All that checks-and-balances stuff was them saying “look, if you push too hard here, those other guys are going to be able to push back there, so better keep yourselves in check.”
While everyone avoided titting for fear of being tatted, tatting was not actually necessary; an ideal tit-for-tat looks the same as unconditional cooperation, because there’s no actual defection to punish. We got so used to that that we forgot the need to tat, people got mixed up about why the previous period worked (cooperation is good, but only possible because of the willingness to punish defection), and so now the defectors were given free rein and it’s spiraling out of control.
There are always grifters who make money off of encouraging their fellow tribemates to defect. Saying that the outgroup is evil is a guaranteed audience. And it’s a rational strategy, as long as they never push things too far. There are feedback problems: they tend to get more money the more they push, and the only way of knowing when they’ve pushed too far is that a bunch of things tragically break.
I don’t know how exactly we got into this cooperate pattern, but it works, and keeping the stable world order going is so important that it only takes a small leap of faith to see that everyone else will keep it going. The best you can do is talk down the shit-stirrers in your own tribe, because your own tribe is where you will have the most effect.
@jaskologist I think separation of powers is important, and helps prevent any specific group from gaining too much power over another, but I’m not sure I see how tit-for-tat can be described as explicitly built right into the system; maybe you can elaborate on what you mean? I don’t really see any mechanisms in the government to make it easy to punish each tribe’s outgroup. In fact, I would say just the opposite: our constitution and bill of rights were specifically designed to prevent people from oppressing their outgroups.
Opponents of stop and frisk, and opponents of coerced cake-baking are both pointing to the bill of rights to protect themselves from their outgroups.
Uh, we’re long past this. Both sides have been trying (mostly successfully) to convince their own side that the other side already has defected and that we must respond in kind. And they’ve been doing it for years.
@souleater,
I understand where you’re coming from philosophically, but from my perspective politically motivated violence in “the west”, and especially the United States, is far less now than in the first half of my lifetime, and I think the blood spilled by the bullets, bombs, assassinations, and lynch mobs of that era was a far worse thing than the ink and pixels of the “character assassinations” and “Twitter mobs” of today.
The increasing incidents of mass shootings are worrying, but the U.S. murder rate is still less now than it was for most of my life, so what prompts your fears?
I think we actually agree Plumber, I would say that people born today, and in the west in general, are among the luckiest people in human history. There are, without a doubt, more people living longer, healthier, and more prosperous lives than ever before in human history. I’m just afraid tribalism is fracturing our society and that some people in both major tribes like to sell the fiction that the ingroup can maintain society while excising its outgroups.
I have a lot of concerns, but in the interest of brevity I would just point out that it has become very popular to talk about deplatforming the outgroup before the outgroup deplatforms you. IMHO it’s counterproductive, and I think it does more to radicalize the given tribe’s outgroup than anything else.
@souleater,
True enough, but so far all the extra effort that’s going into “de-platforming” and “out-tweeting” the “other side” (whichever) seems to me to be sapping the energy that in earlier times went towards building actual bombs, so I’d call it a win overall.
People love to bemoan the “Millennials”, but this “Gen-X’er” is just old enough to remember the domestic terrorism of the ’70’s, so I’d have expected something similar when we had a repeat of a great mass of 20-somethings, but nope!
“Generation Y” was incredibly peaceful, they mostly confined their vitriol to just pixels!
Charlottesville is an outlier, as mostly when politically motivated violence has occurred it’s been scheduled “Anti-Fa” vs “Alt-Right” fistfights!
It could be so much worse.
@BBA
Hugs. Everything will be all right.
What’s the deal with Molyneux ranking right next to Anglin? I watched a couple of his videos back during the election (he had some good bits explaining how the media distorts stuff) but haven’t since. He is (or was) a libertarian.
Molyneux is also a nationalist who doesn’t shy away from making H*D arguments. Not just the IQ stuff (for which I think there is relatively solid scientific evidence), but also speculative stuff like claiming that people in the Global South have an average genetic preference towards collectivism while Westerners have an average genetic preference towards individualism. This is how he combines libertarianism with nationalism: immigration from the Global South to the West should be prevented in order to maintain a market economy.
I think any criterion you use for deciding that Molyneux isn’t allowed to express his views will be broad enough to suppress a lot of worthwhile discussion, true claims of fact, and so on. If you have to suppress such views to protect the liberal order, you’re not going to have much of a liberal order when you’re done!
Indeed, I’m fine with Molyneux expressing his views; I was just explaining why he is considered adjacent to the neo-Nazis.
If discussion of the banned topics was allowed, then perhaps the Christchurch shooter could have been talked out of doing what he did. It might not have worked, of course; some people are just crazy and looking for any excuse to commit a massacre. But by keeping all the discussion as forbidden knowledge whispered in the darkest corners of the Internet, it will attract maladjusted and dissatisfied people who will then radicalize each other in their echo chambers with zero chance of coming across serious intellectual counterpoints from the other side.
Btw, shall we ban all Muslim preaching in order to prevent the next Islamic attack? Shall we ban teaching of heliocentrism and the theory of evolution because it could lead to atheism which could in turn lead to communist revolutions?
Would The Atlantic publish it now?
Anyway, what do you propose? Shall we maintain a masquerade to suppress forever public knowledge of a true fact about the world which may have large practical implications, for instance, on the effects of migration policies?
Truth is a social construct. You have your facts and I have mine, we both have studies backing them up, whose facts are “true” is determined more by the election returns than anything else. The studies may not replicate, but nobody ever bothers trying to replicate them. Nobody cares. Nothing matters.
That’s all I have to say about any of this, for now.
Ignorance is Strength
BBA:
Truth is a social construct.
I sincerely hope the folks who build the bridges I drive over, the medicines I use when I’m sick, and the electrical system that I use to heat and cool my home think differently.
Remember Scott’s post on the ad-overloaded woke APA conference?
[with less snark]
BBA:
I can’t work out a way to interpret your comment here in a non-crazy way. I know you’re a smart person who has interesting things to say, so I assume there’s some rational meaning you’re getting at. But it’s hard for me to figure out what it is.
There’s some objective truth about whether or not, say, blacks and whites differ in average IQ, or whether men and women differ in average physical strength, or whether human CO2 emissions are causing changes to the climate, or whether leaded gasoline caused the 90s crime wave, or whatever. Even if we can’t always agree on what the truth is, we can agree that there *is* such a truth, and that truth does not ultimately depend on whose side wins the next election or whose side controls the New York Times.
At some point, we have to make decisions, at both a personal and societal level. “Truth is a social construct” seems to imply that we can choose our own reality when making those decisions and yielding the consequences. But that’s just not true–some decisions we can make will have catastrophic consequences even if everyone in the world thinks they’re right. If we convince ourselves that AIDS is caused by drug use and degeneracy instead of a virus, and convince *everyone* of that claim, we’ll just have more and more people with AIDS, unsafe blood supplies, and no development of the drugs that keep HIV from developing into AIDS. The universe doesn’t really care about our social constructs.
So what am I missing?
I think what vV_Vv is getting at is that any particular policy position will have an overwhelming amount of “evidence” both for and against it. The effect of the minimum wage is a great example (I think Scott mentioned this recently): I can give you 10 studies saying it’s good, and 10 saying it’s bad, and the truth remains opaque and may just be industry/experience level/region/time span specific in complicated ways.
I can prove, experimentally, Bernoulli’s equation. But I can’t really prove, in a reproducible way, the effects of migration policies.
Souleater:
This seems like an isolated demand for rigor.
For any policy decision we might consider, for any scientific question we might ask, there are likely going to be ways we could be wrong. That’s a good reason for some epistemic humility, but it’s not a good reason to let anyone fuzz out some facts or claims of fact because nobody can be 100% absolutely certain they’re right.
Suppose I smoke two packs a day. You can point to the extensive literature on how bad smoking is for my health, but of course, I can find justifications for dismissing all that literature if I’m motivated enough. Correlation is not causation, the researchers had an anti-tobacco agenda, all the researchers are suffering from groupthink, biology is so complicated and messy almost anything might be going on, etc. I can keep making those explanations until I keel over from lung cancer, if I like. But reality isn’t actually going to be fooled by any of that nonsense.
We actually do have to make some decisions in this life, despite our imperfect certainty. Someone who wants to toss the best available (though imperfect) data on which we might make those decisions, with some discussion about how reality is socially constructed, is extremely unlikely to help us make those decisions well.
If the same person wants to toss the race/IQ data in the trash because the story is complicated and scientists have biases, but doesn’t want to do the same for (say) the lead/crime hypothesis, or the value of Headstart or universal pre-K, or the dangers of rising CO2 emissions w.r.t. climate change, the only way that makes sense is as an isolated demand for rigor.
Probably a better way to think about it is that people have a tendency to think they’re arguing over facts, when really we’re arguing over which facts matter.
@Conrad Honcho says:
So much this.
I’ve been a little obsessive about reading polls that break down which demographic and economic groups tend to vote for whom, and often what from the outside looks like “voting against their own beliefs/interests/whatever” on closer inspection turns out to be people voting for and against specific things that aren’t the big newspaper headline “issues” of the day.
The same people who want to deplatform/suppress some discussions of fact also oppose any discussion of whether those forbidden questions-of-fact are, in fact, relevant to political policy or personal choices.
More to the point, the people who plan to lie to us for our own good, or decide which information we’re not fit to know, are also the ones who told us Iraq had WMDs and was a grave threat to US security, and more-or-less cheerlead for any proposed bombing or invasion anywhere ever. They’re the ones who, whenever they cover any technical story where I know the subject matter, get all the details wrong.
I see no reason to think that those people are either smart enough, wise enough, or moral enough to be trusted with the power to decide who’s allowed to speak or what ideas may be discussed.
I did not know that GWBush and Rumsfeld were trying to deplatform *checks notes* James Damore.
I was thinking of these guys and these guys.
I think that CNN at least has published some mea culpa, tho it has probably been buried.
A question… are you American? Living in America at the time? ’Cause the rest of the world knew the whole WMD thing was bullshit.
I ask because it seems weird to me that you blame the media; I hadn’t really realized the American media had supported this. I thought only the Republican Party and a sizeable part of the Democratic Party had supported it, and assumed the media, like us, knew it was bullshit.
No, albatross11 said:
i.e., they weren’t smart or wise enough to see through the government’s lies, but now do think they’re smart and wise enough to, for instance, “lie to us for our own good.”
I think the media at the time largely thought the same thing I did: “Why would Colin Powell lie to me?”
ETA: These days I think the media is mostly lying smear merchants. That is, they’re willfully doing the deceiving. For instance, NY Times headline: Alex Jones’s Legal Team Is Said to Have Sent Child Porn in Sandy Hook Hoax Case. This headline (and most people don’t read past the headlines) heavily implies that Alex Jones possesses or traffics in child pornography, which we all pretty much agree is universally evil. Except if you read further:
And yes, the emails were things sent to Jones by some malicious third party, and Jones was in fact the victim of…I don’t know exactly what you’d call it, an “attempted planting of criminal material.” The NY Times know this, but chooses a headline maximally damaging to Jones.
No it doesn’t. It implies that “Alex Jones’s legal team is said to have sent child porn”. If there were allegations that Alex Jones possessed child pornography, the headline would be “Alex Jones alleged to possess child porn”. The actual headline describes something different, so one can infer there are no allegations. If you think it’s misleading, how would you write a headline to describe the event “Alex Jones’ legal team sends child porn”?
Those are some very nice trees, but how about the forest? If someone wrote an article about your conversation here titled Thisheavenlyconjugation’s Internet Friends Said To Have Defended Nazis, who is being tarred?
I would probably not write it in a way that makes most people who read it assume the victim of the crime is the perpetrator. I either 1) wouldn’t run the story because it’s not particularly newsworthy or 2) would say “Court Disclosure Reveals Alex Jones As Victim of Child Porn Smear Plot.” Probably sounds too passive voice, but at least no one is going to read it and think Jones was the one possessing/spreading child porn. I’d think that’s the really vital goal because, man, there are few things that make you want to go apeshit on somebody like the idea they’re diddling kids. Falsely leading people to believe somebody’s a kiddy diddler is like capital B Bad. But the NY Times has no problem doing that to their competition.
@Conrad Honcho
Relax, there’s no passive voice in your headline. But I’d probably go with Alex Jones’s Legal Team Cleared of False Allegation. Which, uh, is passive.
Alex Jones’s Legal Team Is Incorrectly Said to Have Sent Child Porn in Sandy Hook Hoax Case
But any headline that becomes more accurate by pointing out that it’s wrong should not exist in the first place.
So maybe:
Trolls Send Child Porn to Alex Jones Judge
Court Investigates Child Porn Submitted in Alex Jones Case
Investigators in Alex Jones Court Case Hunt For Party Responsible For Entering Child Porn
@Nick
@Edward Scizorhands
Read the article again, or preferably this more recent one which is clearer. You don’t understand the situation: “Alex Jones’s Legal Team Cleared of False Allegation” and “Alex Jones’s Legal Team Is Incorrectly Said to Have Sent Child Porn in Sandy Hook Hoax Case” are just wrong. What happened was that some unknown party sent InfoWars some emails with child porn attached, and then Alex Jones’ lawyers sent these emails to the plaintiffs in the ongoing case against Jones. The FBI says that Jones is innocent of deliberately possessing and spreading the images, but no-one is accusing him of that. The complaint from the plaintiffs is that his lawyers should’ve done due diligence and not accidentally sent them.
Post-modernists sometimes say things like “reality is socially constructed”, and there’s an uncontroversially correct meaning there. We don’t experience the world directly, but through the categories and prejudices implicit to our society; for example, I might view a certain shade of bluish-green as blue, and someone raised in a different culture might view it as green. Okay.
Then post-modernists go on to say that if someone in a different culture thinks that the sun is light glinting off the horns of the Sky Ox, that’s just as real as our own culture’s theory that the sun is a mass of incandescent gas a great big nuclear furnace. If you challenge them, they’ll say that you’re denying reality is socially constructed, which means you’re clearly very naive and think you have perfect objectivity and the senses perceive reality directly.
Shall we ban teaching of heliocentrism and the theory of evolution because it could lead to atheism which could in turn lead to communist revolutions?
You don’t even need to bring up the prospect of communism to justify banning the theory of evolution: there was a direct link between evolutionary theory and the eugenics and scientific racism of the 19th and 20th centuries.
Bah, on one hand Darwinism leads us to eugenics, but on the other creationism will lead us to religious fundamentalists.
It’s time to give good old Lamarckism a try.
Why no interest in preventing the Sri Lankan massacres or the Sutherland Springs massacre? What makes Christchurch so special?
Charitably, I think that the Joker said it best:
There are other, less charitable, explanations but I’ve come to the conclusion that they’re too spicy for SSC.
You choose which atrocity to focus on, depending on what policy argument you want to support?
https://www.youtube.com/watch?v=hGTaSIoQWis
I’m hoping I’m doing an honest job with these notes, but if you comment, could you mention whether you’ve listened to the podcast?
This is part of a long series about Evergreen University– the school Bret Weinstein was driven out of.
I’ve only listened to a few of them, but I was left curious about how things are at Evergreen these days, and behold, here’s a podcast.
A few points: Evergreen has quite a strong STEM side. Good professors, good students, and a good ratio between them. Perhaps relatedly, Evergreen doesn’t pay professors very well, which means a good working environment is crucial.
In the opinion of Belinda Bratsch (the interviewee) a lot of what’s wrong at Evergreen is the lack of a strong grievance process– students really didn’t (and don’t– nothing’s been fixed) have a good formal way to complain about professors, and that’s part of why things blew up.
There’s some general discussion about scientists not knowing how to talk to or write for the general public and not wanting to learn. This isn’t just a problem at Evergreen.
Also, good professors, good students, good studios for the ceramics department. (Casual observation by the STEM student taking a shortcut.)
Melinda Bratsch thinks things could blow up again, and worse. SJWism is still pretty strong there. However, it’s possible to speak against SJWism and still be a student there. However, it’s a hostile environment for white cis male professors, and becoming more so.
Bratsch says that the National Science Foundation says that 80% of our thoughts are negative and 95% are repetitive. I can believe this from personal experience, but does anyone know of research on the subject? Good research?
Evergreen seems to select for students with initiative and a strong work ethic. [Me speaking: I guess you take your chances.]
Have not listened, have a question: is “However, it’s a hostile environment from white cis male professors, and becoming more so” correct, or is it “for” rather than “from”?
Thanks. Typo corrected.
Did not listen, so take this with appropriate grains of salt.
Speaking as a cis white male not-quite-professor, I don’t believe for a second this is true. I have spent most of my life in extremely left wing circles, and there’s never been a single moment when I felt hostility directed at my race, gender, or sexual preferences.
There have absolutely been instances, however, where I’ve seen someone attacked for their refusal to go along with left wing orthodoxy, including orthodoxy about race, gender, and sexual preference. But that’s not “being a white cis male”, that’s “vocally disagreeing with the fundamental axes of acceptable discourse in a given tribe”. Going to the Vatican and saying “Fuck the pope”, that kind of thing.
To be clear, I’m definitely not trying to argue that anyone who is attacked for these reasons deserves it. Frankly, I’m a big fan of saying “Fuck the pope”, even when it’s my pope. But it’s important to understand what the reasoning behind the attacks is. And like much discussion of left-wing thought I see here and elsewhere, the characterization you gave it seems so fundamentally wrong-headed I felt the need to point this out.
I’m willing to believe that the SJW culture at Evergreen is unusually toxic.
Fair enough, and thanks for the restrained response.
You’re welcome.
The other thing I believe is that there’s nothing in SJW ideology to stop people from being as bad as the worst of Evergreen, and worse than that.
What stops them is personal and group decency.
If there are brakes on hostility in the ideology, please tell me.
I should probably listen to the podcast.
I’m less negative about SJWs than you, leaning towards their side myself, but I do think that over the past 20 years or so, the main brake on hostility – respect for freedom of speech – has greatly diminished, especially among the youngest and most vocal contingent. This isn’t precisely new – people were complaining about very similar behaviours from the left in the 80’s – but I do think it’s gotten worse.
I will say that I don’t think there’s anything terribly unusual or surprising about most of the issues I’ve heard about. People with power tend to do what they can to maintain it. Freedom of speech was never a massive concern for many of the activist contingent (see the discussion of the purges of black faculty in the late 60’s at Chicago in Bloom’s The Closing of the American Mind). I do think this is (mostly) a BAD THING, just not a surprising one (and absolutely something that many on the right are just as guilty of, see the response to the BDS movement, for example).
The pattern of left wing students shouting down speakers they disapproved of goes back at least to the 60’s.
I’m probably overdue for writing about where I agree with the SJWs, and what I think is true that I’ve learned from them. I’m saving it for the next CW thread.
Fair point.
In your experience of such environments, can a cis white male professor express neither support for nor criticism of left wing orthodoxy without problems, or is the tolerance only for those who appear to support?
I think sometimes a lack of overt support can get interpreted as hostility. This might be especially likely if one was in a situation where there’s an action we’re all supposed to take to signal allegiance and someone chooses not to. To be honest I can’t think of any cases I’ve witnessed like that, but it seems likely to me it happens.
I should perhaps note that despite the fact that I’m willing to criticize the shunning of those with opposing views (at least online and pseudonymously), for the most part I’m a fairly overtly orthodox member of the lefty tribe, so as I said the hostility doesn’t get directed towards me, and I may not notice it being directed towards others.
I just thought of a ceremony I took part in at the beginning of a work retreat, which was facilitated by a couple of Native American women. To begin with we went through this process where everyone was walking around on blankets, and they began removing some of the blankets and making people move to the side – this was intended to symbolize the destruction and fragmentation of Native cultures and the death of the majority of their people. Then everyone sat in a circle and my coworkers and I (none of whom were Native) each were asked to say a few words about what we’d learned through the ceremony, or to say something about our own experiences with Native culture.
Though people had the opportunity to pass, very few people did. I don’t think anyone thought anything negative of those who passed, but certainly if anything remotely critical had been voiced, there would have been whispers throughout the remainder of the retreat. So I’d think that anyone there who didn’t agree with some of what had been said would have felt very uncomfortable. But again, I think if they’d interpreted their own discomfort as stemming from some kind of threat towards their whiteness, maleness, or sex, they would have been missing the point.
Sorry for the garbled quality of the above, I shouldn’t write at 4 in the morning.
The message seems to be that members of disfavored demographics don’t have a right to an opinion. As one person in such a place put it, “By being a white male you are in a privileged class that is actively harmful to others, whether you like it or not. So no, you really actually don’t get to complain about your right to an opinion.”
I’m not sure if the hair you’re splitting splits so fine. There’s an old joke about racism
What you seem to be saying is SJWs are like the southern racists; they don’t mind cis white males, as long as they don’t get “uppity.”
No, the point I’m trying to make is that white cis males are not disfavoured demographics. There’s just as much hostility directed towards women, Indians, whatever, who disagree with the fundamental principles of SJW-ism. In fact, in my experience of such cases, possibly more so. Consider how much hostility gets directed towards Laura Ingraham or Candace Owens and so on.
As it happens, I’m getting a ring side seat to a current ongoing incident. It’s taking place online, where no one really knows your race and gender – and it includes one contingent accusing everyone who disagrees with them of being (gender) abusers and gamer gaters. To listen to the (female) member of the other side who’s been bending my ear about it, the actual issue is transparency and lack of due process, likely part of a naked power grab, but possibly just a matter of the real offense being “came in conflict with high status person, so no need to look at things like evidence etc.”
My friend is irate that she’s been being accused of being a male chauvinist pig (to use outdated terminology).
I’ve vaguely been watching that same incident, from a much bigger distance than you, and the only thing I found perfectly predictable was “of course this superweapon was going to be used against you eventually.”
You can only speak out against a superweapon when your side is using it. If you say “well, the other side is bad, it would be really bad if they win, and they probably deserve it,” it’s too late.
Gonave Island secedes from Haiti and elects you to be its dictator for life. How would you go about making it into a rich, advanced, powerful country?
Note: Rump Haiti is your mortal enemy, refuses to recognize your country or to trade with you, and forswears to someday reconquer your island.
I suppose the answer should be something along the lines of making it a very free place to do business and allowing automatic citizenship to anyone with skills. But I don’t really think it would work, because there’s already The Bahamas, The Cayman Islands and other nations in the region which have no corporate taxes and friendly banking rules, and the economy of those places is still dominated by tourism. What could make Gonave more appealing to businesses than those other places? The geography just seems bad, particularly if Haiti must be your enemy as well. Singapore wouldn’t be Singapore if it weren’t so well placed on the map.
I suppose my answer is that I’d try to get Paul Romer on the phone and ask him what he would do.
Step 1. Be the Singapore of the Western Hemisphere.
Step 2. See step 1, really.
Step 3. Profit!
1. Hire advisors from rich, advanced, powerful countries. Become a protectorate of one of them. This should solve the continued independence issue, and give you a pipeline to foreign funding and technology access.
2. Start importing population from rich, advanced, powerful countries by a variety of means. Your patron should be interested in helping you do this, to staff all their investments.
3. Wait.
4. PROFIT!
That is a really tough question, as the island has little in the way of natural resources, and doesn’t really have a natural geographic advantage like Singapore, or a first-mover advantage like the Bahamas or Cayman Islands in regards to taxes. I’m not really sure there is a path for the island to become a rich, advanced, powerful country within a single lifetime. I think even bringing it up to the median will be a difficult job.
Members from my church have founded an organization focused on development of La Gonave called Starfysh that tries to improve the quality of life for people on the island. Things they sponsor include research farms, schools, and clean water projects. It’s really interesting to see some of the work they do, and how a lot of the time it seems like they try to use modern versions of really old techniques to plug the tech gap between what can be done in a modern economy, and what can be done on an island with limited tech and access to the rest of the world. (e.g. pushing the use of biochar to improve field yields due to lack of access to modern fertilizers.)
Recruit just enough capital to start a business as a bank. Advertise the island as a suitable retirement destination, but you have to invest in the bank. Relax any business regulations that wouldn’t obviously lead to the destruction of the island. Diversify into solar energy technology (mostly parts). Wait a hundred years or so. Result: Mauritius.
1. Ally with the US.
2. Build infrastructure. (clean electricity, desalinization, sewers, roads, airport, seaport, etc.)
Did you mean “forswears”?
Say I’ve got a manufacturing business with 1000 employees. I’ve made a ridiculous amount of money, and for some reason, maybe because I’m a little crazy, I’ve decided to found a new town somewhere in the USA where there’s currently nothing. I’ll move my factory and all my employees there. I hope that one day it will turn into a thriving city. Where should i found my town?
Bonus: Same question, but say I want to do this in a different country. What country and where precisely?
1000 employees doesn’t sound like enough critical mass to create enough demand for all the things you’d want out of a city, so you would want to be less than a few hours’ drive from somewhere at least large enough to support a Wal-Mart. If you want to allow for long-term growth into a large, thriving city, then you’d want somewhere flat enough to allow easy expansion, ample fresh water, at least two forms of inexpensive transportation, and most importantly, a permissive regulatory environment.
This basically rules out anywhere on the vertical coasts of the US, so you’re looking at inland rivers, the Great Lakes, or the Gulf Coast. The latter is perhaps less than ideal due to climate change, or at least the already-present risk of hurricanes. Flood insurance might make the whole thing a no-go outside of already established urban areas. The Great Lakes seem like they’d welcome the investment but there are reasons the Rust Belt is past its prime.
I’m going to go with a region I’m at least somewhat familiar with, and say somewhere around Huntsville, Alabama. There’s plenty of empty, cheap land, the Tennessee River for a water supply and shipping, access to both I-65 and the Norfolk Southern railway, an “international” airport, and a local concentration of labor for both manufacturing and research from the local auto plants and the Marshall Space Flight Center. All this in a state that’s hurting for economic development and abhors regulation. The flip side is that if you’re building a company town from nothing, you might need to provide your employees with private schools to convince them to relocate – Alabama isn’t exactly known for its high-achieving public school system.
Any hint why? Business reasons, legal, just plain eccentric? You probably want to pay attention to legislation and politics. Once you build a town you also have a city council voted in by your employees, with a lot of power over rules and regulations that impact your business. Depending on how aligned your employees are to your goal this could be good, or could be a union on steroids.
Also a random tidbit: in a statistic of communities started from scratch, the overwhelming majority of those that succeeded were religious.
There was also a comment here about a month ago on how rust belt cities are dead for good because of… cars. Nobody wants to live in the small town where the factory is located, when they can live in a much bigger town with a 40-minute commute.
I don’t know how many employees/franchisees Domino’s Pizza had when Tom Monaghan sold out in 1998, but if you want to found your own town, this is how he did it 🙂
In the Great Lakes region, especially along the lakeshores, there are lots of small towns with open land nearby.
Most of the shoreline of the Lakes is already settled in some fashion, though the towns are typically small and have services to support transient populations of people who do summer vacations along the Lakes. Some of the towns have a used-to-be-more-important feel, depending on whether it used to be a location for shipping lumber/iron-ore/copper/corn/grain.
If access to railroads or cargo docks is important, you’re going to be choosing a location near a current town/city that has such things.
One recommendation is to pick a town with a few hundred people, within 50ish miles of the City of Marquette. Marquette is the largest city in the area, and has lots of the resources typical of a city of ~20,000 people, plus the support network and student body of Northern Michigan University. (If you want to locate nearer a tech-oriented school, it may be possible to find a similar location within 50 miles of the city of Houghton, on the Keweenaw peninsula…but Houghton has a population of ~7000 or so, plus the student body of Michigan Tech.)
These are more along the lines of turning a sleepy town into a newer, thriving town, or introducing a new business into a town that used to be thriving for other reasons, than building a small town from nothing.
The Green Man as a relatively modern invention. The sense of humor is a lot like Scott’s.
Stuff like that is why it is so easy to give contemporary pagan/Wiccan traditions a good kicking, which is why I generally don’t – so long as they leave me alone, I leave them alone, and doing “coloured candles and ribbons magick” is mostly harmless and well-intentioned. In the wake of Gerald Gardner’s alleged rediscovery of hidden continuous magical tradition in the 40s, an awful lot of this kind of Golden Bough-lite ‘history’ of witchcraft got churned out (so you had a Romany granny who read the cards – from an ordinary deck not Tarot – over tea for her neighbours? Let us tell you all about the secret esoteric wisdom that really means!)
I only get on my high horse and start swinging the sword when we get the “Ackshually, all so-called Christian festivals are ripped off from Real True Authentic Pagan Traditions” type of looking down the nose about how the newly-fledged Wiccan is so much more authentic and genuine and chronologically superior in their ceremonies, especially when the persons celebrating Samhain (a) couldn’t pronounce it in the authentic native pronunciation to save their lives (b) have no awareness of how authentic natives celebrated it and (c) completely confuse various traditions and have no idea of the history of the Church feast of All Saints and All Souls Days (so Americans tend to only think in terms of Día de los Muertos and assume that it was culturally appropriated from authentic natives, and have no idea of how the influence worked both ways and that the original celebration moved to the Christian feast day and was heavily incorporated into it, and heavily incorporated the Christian tradition into itself).
Though to be fair to most pagans and Wiccans, mostly it’s idiot urban fantasy novelists who do the looking-down-the-nose thing: have their heroes roll their eyes over the notion that Christian practices or symbols could have any real power because c’mon all that stuff was only invented two thousand years ago, but uncritically accept that somebody prancing around waving a sprig of oak can perform Real Magick because, y’know, the Green Man goes back to prehistory. Or it’s canny magic-supplies-and-books shop owners hitching their wagon to the latest controversy du jour to publicise themselves and their businesses (“Now you can buy my latest SJ spellbook on Amazon!”)
Yeah, that one didn’t work out so good, did it, Dakota? 😀
For what it’s worth, the neo-pagans I know (a fair number of them) believe that their religion isn’t strongly connected to ancient traditions. “We go to the same source our ancestors did– our imaginations”.
I have no idea what the proportion of those who believe they’re following ancient ways is.
I’d imagine the core of fraud triumphalists is fairly small, their activities mostly online. I have an old college friend I wound up unfollowing on FB after he kept posting the same hooey and ignoring my cited corrections. He wasn’t averse to debate–the opposite, really–he just somehow failed to assimilate even rigorously documented citations that Constantine did not compile the New Testament and Ishtar has nothing to do with Easter.
Deiseach, I’m just noting that the article was about problems with earlier folklore studies, though it isn’t surprising that bad research filtered out into people developing modern paganism.
Speaking vaguely of, I was a bit shocked to find that early pagans didn’t celebrate eight astronomically based holidays. (Solstices, equinoxes, and halfway between each pair.)
On reflection, it’s plausible that early pagans didn’t necessarily have a modern sense of symmetry about dividing the year, and possibly couldn’t afford eight holidays.
And while we’re sort of on the subject, I wish there were a modern paganism based on how we actually live rather than one built around primitive agriculturalism. The weather matters, but there should also be rituals built around the economy, a thing which behaves erratically and affects people’s quality of life a lot.
On the other hand, I seem to be the only one bothered by this, and I don’t seem to have it in me to invent modern paganism, especially since I don’t seem to have met anyone else who wants it.
On yet another hand, I’m impressed that neo-paganism has a ritual structure which is strong and flexible enough that a lot of people can improvise pretty good rituals to fit in it.
Speaking vaguely of, I was a bit shocked to find that early pagans didn’t celebrate eight astronomically based holidays. (Solstices, equinoxes, and halfway between each pair.)
Ah, you mean the famous Wheel of the Year? Pardon me a moment while I wipe the smirk off my face.
Yeah, it’s got mostly Irish Celtic festivals but they had to lump in some Welsh and at least one Norse (Yule) to make it come out according to Western European calendrical usage. I’m no expert on ancient Irish calendars, but the way the calendar is set up as Gaeilge it doesn’t handily map onto things like Equinoxes and Solstices (despite the fact that our ancient monuments are engineered to mark these) – so the important days on the calendar are Imbolc/St Bridget’s Day, 1st February and the start of Spring in Irish tradition (not astronomical or meteorological spring); Bealtaine/May Day, 1st May; Lúnasa/Lughnasadh/Lammastide, 1st August; and Samhain/1st November. So, for example, the importance of May Day is shown by the poem attributed to Fionn Mac Cumhail about it (Fionn is a legendary hero alleged to have lived in the 3rd century AD; earliest references to Fionn and the Fianna date from about the 7th century AD).
To get the “Wheel of the Year” you have to stick in Ostara (you will remember the controversy over this as a Real True Authentic Rotten Christians Stole It Off Us pagan festival from previous comments), Litha for Midsummer which is I don’t know what (apparently it’s another one of St Bede’s ‘what the Anglo-Saxons round here call the months’ list), Mabon which is Welsh, or at least derived from Welsh mythology, and Yule which is Germanic/Northern.
So it’s a syncretic list created by modern Wiccan-types to give them a proper handily organised list of Sabbats and y’know, okay for that, good luck to them. But it’s about as “authentically prehistoric real true enduring tradition passed down in secret through the Burning Times to our modern workings” as my left shoe.
“The weather matters, but there should also be rituals built around the economy, a thing which behaves erratically and affects people’s quality of life a lot”
I like this idea. Central bankers dress up in weird robes and perform mysterious chants about interest rates; politicians holding rituals in arcane, unnatural language about unemployment and inflation attended only by a secret, select group of initiates; wild-eyed wandering prophets denouncing cities for ritual impurity and predicting great woe having perceived dreadful portents in the flight of birds.
Arguably this already occurs.
I think you’re describing Westminster.
Most of the weird ritual there is about the balance of power between the Queen and parliament.
Bankers already wear weird robes?
Business suits should probably be viewed as ritual attire.
I am thinking of this as something that hoi polloi would be doing. We’re almost as subject to the winds and storms of the economy and politics as we are to the weather.
I just realized that we have a concept (the economy) for what’s happening on the large scale with money, but no comparable concept for the political state of things.
The Guardian comes out against mindfulness and meditation.
Nice try Grauniad, but reverse psychology doesn’t work on me.
They seem to be mainly complaining that practicing mindfulness in its various flavors and incarnations is preventing people from becoming rock-throwing antifa types, which is what correctly thinking people should be doing instead, with a side helping of complaining that, as it turns out, people are willing to pay to be helped and thus there are people willing to be paid to try to help.
Ah the Guardian, it never surprises, it never disappoints.
Seems like the usual “worse is better”. Anything that improves people’s lives delays the coming of the Revolution, therefore it is a bad thing.
Downthread there is discussion about an asteroid that might hit Earth soon. The author at the link claimed it would have the energy of 50 Hiroshimas if it hit, although another commenter (who strikes me as more knowledgeable than the angry ranter behind the original link) claims it would likely not harm anyone even it directly “hit” a major city.
I just want to make a geography game out of this scenario. Suppose you somehow had the unenviable power and responsibility of determining exactly where this rock struck the earth, but it has to be a city of at least 100k people. Assume everyone within a 50 mile radius is instantly destroyed.
If you wanted to cause the least amount of damage to the economy (local and global), which city on the globe would you choose?
I don’t know nearly enough about geography and economic networks to know a good answer. I’m just curious to see if others here know enough to hazard a guess.
Then there is the Beginner’s Level question: The destruction of which city would cause the most economic damage? I’d guess New York or Tokyo on that one.
I’m worried that perhaps it seems offensive to say “The destruction of X city in northern India would cause the least amount of damage to the global economy”… if so, you can blame me for asking the question.
Without jumping in on your question, I’ll suggest that this seems at least as improbable as the idea that it would have the energy of 50 Hiroshimas…is that commenter suggesting it would burn up in the atmosphere and not make impact?
The 50 Hiroshima bombs figure isn’t too high. If anything it’s probably too low. Hiroshima was only about 15 kt. But yes, while I have absolutely no expert training in the area, it is very likely that the asteroid would explode as an airburst at an extremely high altitude, something like 50,000 – 100,000 feet. That is according to impact simulators and also in accordance with historical precedent like the Chelyabinsk meteor (close in size, high-altitude airburst) and the Tunguska event (no crater so very likely massive airburst, size of impactor estimated to be at least 2-3 times greater than the asteroid in question).
A nuclear blast also emits radiation in a way that an impact explosion doesn’t, so while I wouldn’t say it’s “safe” by any means, if it happens way up high, there’s somewhat less danger from the fallout. You’d still get chunks of falling asteroid that would do damage, but you wouldn’t have people on the ground being immediately vaporized like in the “Daisy” commercial when the blast is happening 10-20 miles up in the atmosphere, nor a lingering radiation poisoning fate.
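For a rough sanity check, here’s a back-of-the-envelope kinetic-energy calculation; the diameter, density, and speed are my own generic assumptions for a small stony asteroid, not figures from the linked article:

```python
import math

# Assumed parameters (rough guesses for a ~30 m stony asteroid).
diameter_m = 30.0
density_kg_m3 = 3000.0
speed_m_s = 19_000.0          # a typical asteroid entry speed
hiroshima_kt = 15.0           # Hiroshima yield, ~15 kt
joules_per_kt = 4.184e12

radius = diameter_m / 2
mass = density_kg_m3 * (4 / 3) * math.pi * radius ** 3
kinetic_energy = 0.5 * mass * speed_m_s ** 2

kt = kinetic_energy / joules_per_kt
print(f"~{kt:,.0f} kt TNT, i.e. roughly {kt / hiroshima_kt:.0f} Hiroshimas")
```

On those assumptions you get on the order of 1-2 Mt, i.e. roughly a hundred Hiroshimas, which is why “50 Hiroshimas” reads as low rather than high; but as noted, most of that energy would likely be dumped in a high-altitude airburst rather than at ground level.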
Yes. That commenter suggests there would only be heat and p-waves at ground level.
But my hypothetical is to assume an object of a mass that would not burn up in the atmosphere.
Magadan, man. It’s 93k, but combined with Ola, some 15 miles east, it should be right about 100k. The world economy will never ever miss it. Most of the citizens too.
As for the maximum damage, I’d suggest picking the biggest nuclear waste dump and hitting there; having its contents spread all over the planet could cause more deaths and damage long-term than cleanly destroying even the most populous city.
Close enough to Megiddo to get nominative Determinism points, too.
Probably either Dunedin, New Zealand or Reykjavik, Iceland. Both are around 120,000 people, and not within 50 miles of other population centers. Dunedin is isolated at the southern tip of New Zealand, over twelve hundred miles from Australia, and much, much farther to anywhere else that isn’t Antarctica. A blast of that size on the coast would produce a substantial tsunami, but it would have to travel over 5,000 miles before hitting South America or Asia. Reykjavik is on the west coast of Iceland, pointed at Greenland, so any resulting tsunami would be largely blocked from northern Europe by Iceland’s mass, and Greenland and sparsely-populated arctic Canada, 2,000 miles away, would absorb most of the tsunami. The blast itself wouldn’t likely have major effects that far out from either chosen city.
Reykjavik is a nation’s capital and has a major airport nearby. Also, are you sure a sub-megaton blast on land near a shore produces a tsunami?
If it’s “instantly destroying” everything within 50 miles, it’s got to be much larger than a sub-megaton blast, which also means it’s got to be a lot bigger than the 30 m asteroid previously under discussion. I input a model about 10-15 times larger than that one, which the online models suggested would have (very approximately) a 50-mile total destruction radius for thermal radiation and blast wave, per the hypothetical. If you get an impact of that size on a coast, just about half the blast is going to be in water, and half the crater. So yes, I think it would generate a tsunami.
Reykjavik is, admittedly, probably a worse choice. They also have a lot of banking (or did, before the 2008 crash, and I assume are again).
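If anyone wants to check the arithmetic on that, here’s a rough sketch using simple cube-root blast scaling (severe-destruction radius roughly proportional to the cube root of yield). The Hiroshima reference radius and the ~1.5 Mt figure for the 30 m rock are assumptions on my part, not outputs of the online models:

```python
# Cube-root scaling: severe-destruction radius ~ yield**(1/3).
HIROSHIMA_KT = 15.0
HIROSHIMA_DESTRUCTION_KM = 1.6   # assumed severe-destruction radius at 15 kt

target_radius_km = 50 * 1.609    # the hypothetical's 50-mile radius
needed_kt = HIROSHIMA_KT * (target_radius_km / HIROSHIMA_DESTRUCTION_KM) ** 3

# Impact energy scales with the cube of diameter (at fixed density and velocity),
# so compare against a rough ~1.5 Mt figure for the 30 m rock discussed above.
reference_kt = 1.5e3
diameter_factor = (needed_kt / reference_kt) ** (1 / 3)

print(f"yield for a 50-mile destruction radius = {needed_kt / 1e6:.1f} Gt TNT")
print(f"that is roughly {diameter_factor:.0f}x the diameter of the 30 m rock")
```

Under those (very crude) assumptions you land in the low gigatons and an impactor on the order of ten times the diameter, which is about where the online calculators put me.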
Right, my bad, I missed that “50 mile radius” part somehow; I thought just the city would be instantly destroyed. Then it’s surely much larger than the asteroid that was discussed, even if it detonated near the surface. My pick is still Magadan though; there’s nothing of any value for hundreds of miles around. There’s Japan a thousand miles south, but they don’t seem to have any major cities on the coast facing north, and they deal with tsunamis all the time anyway, so whatever.
Tsunamis are in a whole different category, energy-wise. From wikipedia’s TNT equivalent page:
A local impact megatsunami might not need as much energy input to achieve a much greater flooding height, while not affecting anything near as far away as a conventional seafloor fault tsunami.
Would people know that I was in control, or would everyone think it was an act of God?
I could take out Pyongyang, or some other hostile regime, without any retaliation. (Depending on a bunch of psychological factors that are hard to predict because I’m not a character in an Orson Scott Card novel . . . or am I?)
Baraka, Democratic Republic of the Congo
It’s a poor town in the most populous of the extremely poor countries. It has no paved roads, no running water, and no electricity. Baraka is utterly irrelevant to the global economy and fairly irrelevant to the national one.
Tabletop RPG thread!
What is the social contract of these games, and how do the rules of different systems support or undermine that?
Dungeons & Dragons has a history of mismatch between character survivability rules and player expectations. Gary Gygax was the original killer DM, and that was OK, because your next character could be ready in 5 minutes. Later player assumptions changed in the direction of expecting their first PC to survive to the end of the DM’s planned story. I had the bizarre experience of DMing 3.5 under these assumptions, which was surreal because the Rules As Written were much, much more lethal than B/X if you knew how to optimize. The DM’s job in the contract became to know the system well and never leverage it, instead graciously losing every fight scene in the story.
This was then hard-coded into D&D 4E, which nonetheless was a relative failure. 5E reverted to 3rd in many ways, but got rid of most of the damage and Armor Class acceleration that optimizers had. It also completely changed what happens at 0 Hit Points, from “unconscious, bleeding, instant death at -10” to “unconscious, you have to fail 3 death saves and it’s physically impossible for an enemy to kill your unconscious body with one blow.”
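For a sense of how forgiving that is on its own, here’s a minimal sketch (Python, and just my reading of the rule, not official text) of the bare death-save roll, ignoring healing, attacks on the downed character, and instant death from massive damage:

```python
import random

def left_alone_at_zero_hp() -> bool:
    """One character at 0 HP, nobody intervening. True = lives, False = dies."""
    successes = failures = 0
    while successes < 3 and failures < 3:
        roll = random.randint(1, 20)
        if roll == 20:
            return True        # natural 20: back up with 1 HP
        if roll == 1:
            failures += 2      # natural 1: counts as two failed saves
        elif roll >= 10:
            successes += 1     # DC 10 to succeed
        else:
            failures += 1
    return successes >= 3      # three successes stabilise, three failures kill

trials = 100_000
survived = sum(left_alone_at_zero_hp() for _ in range(trials))
print(f"chance of surviving at 0 HP with no help = {survived / trials:.0%}")
```

Even with nobody lifting a finger, the downed character lives more often than not under those assumptions, which is a long way from “-10 and you’re dead.”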
Thoughts on this in D&D or other tabletop RPGs?
I can only speak to the situation at my tables, but the disagreements I’ve seen aren’t over character death per se but about what the DM’s role is when playing monsters or other hostile NPCs.
My view has always been that the DM needs to be a fair referee, which means not fudging the dice or NPC stats, and that the DM should play NPCs in character whether that means going all out with Save-or-Die abilities or running away in fear. As such, character death is always a possibility in a fight. It doesn’t happen often, because my players are skilled and D&D has become much less lethal over time, but it has happened and will happen in the future.
I’ve had players who were horrified when they realized that the near-misses their characters had encountered were actually lucky rolls and not me injecting drama. The expectation there was more of a choose-your-own-adventure novel type story, where PC decisions shape the direction of the game’s plot but the big-ticket items like death only occur at pre-determined “dramatically appropriate” points in the story.
I don’t think there’s anything inherently wrong with a “narrativist” view of the DM’s role, but honestly at that point I would rather just boot up a Bioware game. To me the thing that separates tabletop RPGs from computer games, and makes them superior, is that you can figure out what would “actually happen” if you did X instead of Y without the inherent limitations of a computer simulation. The result probably won’t follow a traditional narrative structure, but it doesn’t need to.
I’ve been playing pathfinder with the same group of coworkers for about 5 years, and I’ve been DMing for about 18 months. Each DM in my group has had their own preferences on how kill happy they’re inclined to be. Personally I like a lot of character development in my games, so I avoid killing players unless they really force my hand. Even then I try to deter them.
DM: “You find a suspicious black glowing liquid in the demon sanctuary”
Player: “I immediately drink it!”
DM: “…”
DM: “Just to be clear… you found some millennia-old, possibly (probably) evil, magical liquid, and you want to ingest it.. do you understand how that might end really badly for you?”
Player: “I’m pretty sure it will be fine”
DM: “I’m pretty sure it won’t… I mean.. you can do it if you want to… but… I’m just telling you people who go around drinking random chemicals usually die.”
I’m not trying to railroad their character.. but I can’t encourage my players to invest in their characters if I blindside them with character deaths.
My policy is
Death by DM fiat: bad DMing
Death by random chance: still bad DMing
Death by player stupidity: I’ll warn you, but I won’t protect you from yourself.
Death by noble sacrifice: cool! go for it!
when I’m a player it’s usually more like
Death by DM fiat: bad DMing
Death by random chance: sucks, but it happens
Death by player stupidity: sucks, but it happens
Death by noble sacrifice: cool! go for it!
DM: “I’m pretty sure it won’t… I mean.. you can do it if you want to… but… I’m just telling you people who go around drinking random chemicals usually die.”
The D&D arcade game from 1995 [1] had a sequence where the players had the choice to enter a cave, and the game said “Are you sure? You will die.” and then really asked them if they wanted to die. Then they died.
There was some special story that was gated behind making this decision so experienced players would choose it on purpose, having to feed in another quarter each.
[1] maybe https://en.wikipedia.org/wiki/Dungeons_%26_Dragons:_Shadow_over_Mystara
I think the player-perspective version of your chart probably has to be the true one, or Gary Gygax was a bad DM.
Now I would say that if you’re making your poor, innocent players spend a week filling out a character sheet for your system of choice, you’re a bad GM: you’re breaking a very fundamental social contract of group leisure by putting them in a situation where random chance has any possibility of killing that character they had to study hard to make.
It’s the same principle by which player elimination is obsolete in board games.
I’m not sure there is a “true perspective” but I definitely agree that Gygax probably hewed closer to the player-perspective. Even so, I prefer my way for a few reasons.
1. Pathfinder has a lot more customization than early D&D. So it very likely means they would spend a week filling out a new character sheet.
2. We only play a few times a month for about 3 hours a session. So if a character dies, it means the player sits quietly in the corner for the rest of the night and is reduced to a spectator for our bi-monthly event.
3. My campaign is homebrewed, so I’m basically deciding how strong the enemy is the night before. There is a fine line between “random chance” and “I screwed up the CRs.”
4. I only have 3 players, so it’s very easy for a single death to turn into a cascade failure and TPK.
I’m going to use this to soapbox my hate for FATE.
From the FATE SRD:
In practice, however, the FATE point economy undermines this pretty heavily. In terms of Jenna Moran’s (excellent and incomprehensible) Wisher, Theurgist, Fatalist, Aspects have weak truth, mechanical support, and valence unless an Aspect is Invoked. And that takes a FATE point. IME, this means that it’s very easy for players to spend most of their time buffeted around by their surroundings, taking meaningful actions only when they really care about something. And that stands in very, very strong tension with the position:
That’s been my thoughts about FATE as well (with regards to Aspects), but we didn’t find it to be too much of a problem in play. Characters still did stuff, and were more competent than not.
Lately I came around to the idea that I’d been playing it wrong, and that Aspects are meant to have normal truth, but mechanical support and valence are weak until a point is spent. But perhaps this isn’t a correct interpretation either.
I will join your hate for Fate. Aspects are… pretty bad. They’re directionally useful — they provide ways to quantize bonuses that have traditionally been very systems-heavy to quantize — but they’re a bad implementation of the idea.
That said, relative competence is very, very hard to build into any game system that’s not a complete straightjacket. The GM can always up the power of opposition until the players fail a lot, or lower the power of opposition until the players succeed a lot. I think that the passages that you excerpted are meant to be prescriptive, not descriptive.
Killing low-level PCs in D&D is something you should avoid unless you know the player in question won’t get too upset. Once Raise Dead becomes available, PC death is on the table again; if the party can’t get their casualties raised in time, that’s on them.
I think that something that Forged in the Dark games tend to do well is avoid a failure cascade. In D&D, it’s easy to fail a spot check, then get bitten by a giant spider, then fail a CON check and get paralyzed and abducted by the spider, then your friends fail their listen checks, then you die. Exaggerated, but you get the idea.
Forged in the Dark games almost work like fighting (video) games to some extent in that they encourage the DM to offer players a way to “reset to neutral” pretty regularly. Making narrative advantage state a mechanical aspect of the game avoids failure cascades by allowing players to react to things going badly; there can still be save-or-die moments, but only if things have already gone catastrophically wrong and players are aware of the stakes. I really, really like that. It succeeds in allowing danger and agency to coexist.
The social contract is something that is worth discussing with each group. A “session 0” is rightly a popular concept.
Some games do this well by having rules that encourage such a discussion. Others manage it well by having an explicit social contract and rules that support it. D&D has had problems by having an implicit social contract and/or not directing players to decide one.
In a Superheroes game, it should be understood that the player characters ought to be powerful. Likewise, in a horror game players should not be surprised if their characters start dying… one by one.
But D&D can and has encompassed a variety of styles and it’s hard to know what to expect without talking about it.
I like DCC’s character funnel. Paranoia’s clone tallies are also fun. Death and Dismemberment tables (in the style of WHFRPG) are cool. Character creation in Traveller is an adventure in itself.
It makes sense in general that a DM shouldn’t “optimize” adversaries…it’s neither realistic nor good narrative to have every ogre you encounter be the deadliest possible ogre. Ogres have other things going on, they can’t spend all of their time prepping to murder humans. And for narrative purposes D&D has things like “Encounter Levels”, which it seems like it’d be the DM’s job to honor the spirit of rather than subvert through munchkinry.
Back when I DMed, I generally had a target amount of failure and death depending on what kind of story I thought the players were expecting. If you were in danger of exceeding it by too much for whatever reason, there would be a deus ex machina, and if you fell too far below it I’d start making things harder next time.
Me optimizing ogres was never an issue. The PCs cut down more than a thousand unoptimized humans and monsters. The issue was when I wanted there to be an adversary with any depth: I’d think up their personality, take a long time to make a character sheet… and then they’d die in Round 1 unless I minmaxed their AC, so the players would forget whatever snippets of personality I remembered to make them say in time.
The most-remembered antagonists from that campaign were 2 optimized liches and a couple of Epic-level puzzle monsters they encountered without using the combat rules. That’s how hard it was to keep any antagonist alive after Round 1 of combat.
Well, yeah. That worked out OK for mooks, since the assumption underlying the CR system is “CR = average PC level is the level of enemies PCs can expect to murder to the last man by using 25% of their resources.” Having anyone who could survive a mild stink-eye from the PCs as an interesting ongoing character in the world was where it broke down.
I might be repeating someone else as I haven’t read all the responses, but this is worth repeating: 3.5 is totally down to player skill, and good players are incredibly hard to stop RAW, assuming anything even vaguely like ‘fair play’ (and of course all bets are off if the DM doesn’t like to play fair). I think the last version of Punpun I read achieved godhood from a level 1 commoner with no character levels.
If I remember correctly, the original Punpun had to be a kobold who took a special dragon-ish ability to be rules-legal. To which I mentally responded “Oh, so the Gamemaster is God and you’re Satan.”
Yes originally, but easier means of achieving the same effect were later discovered.
I haven’t played D&D since I was 12, but when I did I was a ruthless DM. Most characters died fast and hard. As a DM, I was just trying to interpret the rules of the game as objectively as I could. This turned out to work well, because when the players did survive an adventure, they relished their lives and their gold. When, after months, characters achieved higher levels, there was great excitement. Their power was something new in the world. The players cherished it.
I say be as ruthlessly objective as possible. The drama is in the dice.
Napoleon said that repetition is the only successful form of rhetoric*, so I’ll take the opportunity to bang the same drum I always do.
D&D’s Original Sin was that Gygax wasn’t very good at explaining his idea of the game (not forgetting Dave Arneson, it’s just that he was overshadowed by Gary since day one).
My approach to playing (old-school=TSR) D&D is as follows:
The aim of the original game wasn’t “kill monsters, get treasure”. It was “get treasure”. Killing monsters was something to be avoided, if at all possible, because it could easily turn into “get killed by monsters”.
Player skill, back in the day, manifested in coming up with creative ways to avoid rolling dice. Setting off a trap often resulted in save or die, so you really don’t want to set off a trap. Intelligent monsters can be reasoned with (I remember reading a story from Gary’s own table, where one of his players playing a demon – low-hit-die, of course – doused himself with oil and set himself on fire – to which he was naturally immune – in order to cow a bunch of goblins to obey him; Gary allegedly loved it). Unintelligent monsters can be distracted by dropping food and typically shouldn’t be a fight-to-the-death encounter anyway, etc., etc.
Tomb of Horrors was written the way it was to show that you can’t simply rely on your character’s powers to succeed. Foolish player, meet Sphere of Annihilation.
The role of the DM, as I see it, is to be the interface for the players’ clever plans. When designing encounters, it’s a good idea to write in at least one way to avoid danger through smart choices (and a couple of clues to drop into the description). If the players insist on rolling the dice, go with it and let them fall as they may.**
It helped that characters were cheap back in the day.
WOTC took the game and completely failed to understand the premise. The result was insanely expensive characters (in terms of creation time) and “system mastery” over clever thinking during play. We’re meant to be rolling dice all the -ing time (otherwise all that time deciding how to spend your points would’ve been a waste, wouldn’t it?), so you get goblin dice – rolls that don’t actually mean anything, but we pretend that they do.
Make the die rolls actually meaningful and you’ve suddenly got a problem. Your player ain’t gonna be happy that the character they spent two hours preparing died five minutes into the game.
My absolute favourite example of how old-school D&D should go is the Misadventures in Randomly Generated Dungeons/Fellowship of the Bling thread on RPG.net. It’s a long read, but well worth it. Only after I read it did I really understand what D&D was, even though I’d been playing it for decades.
* He might not have, but that’s my story and I’m sticking to it.
** On consideration, this might warrant a clarification. It doesn’t mean that rolling dice should result in death/failure. Simply that if you’re rolling the dice, you’ve missed an opportunity to take randomness out of the equation.
So I’ve just been pouring cold water on nice aspirations, or being Cranky Grumpy Old Biddy on the Internet once more.
See, there was this nice vaguely motivational slogan on a Tumblr post. Very nice image as well, the kind of hip blackboard messaging (are blackboards hip? I’m always unsure what is and is not in fashion nowadays, particularly when it comes to “stuff in my childhood decades ago” – is that fusty old rubbish or so-old-it’s-back-in-style?).
Anyhow, it was “You will never look into the eyes of someone God does not love. Always be kind.” So the general sort of affirming niceness, that is sometimes (not in this case I think, but sometimes in other usages) used to rebuke the conservatives/backwards/-phobes and -ists of various stripes.
And y’know, that’s a nice soft gentle squishy message. It’s quite true as well, but here’s where the cold water pouring/grumpy old biddy bit comes in.
It is true. But true on a level that I don’t think (though I may be doing them an injustice) the ‘put up a nice inspiring reminder to be nice’ nice people who do this sort of thing have necessarily thought about.
God does love everyone. That means God loves Hitler. God loves the BindTortureKill murderer. God loves Fascists and the Nazis you want to punch. God loves rapists, racists, murderers, paedophiles and the fat-cat big corporate climate-destroyers on the boards of multinationals that are ruining the world through short-sighted capitalism. God loves TERFs.
The people that you want to feel good about despising, because they’re on the wrong side of history and besides they are horrible mean nasty people who are all -phobes and -ists? The people that you would write smug little thinkpieces about how they’d go Nazi? God loves them.
Love is not nice, love is scary. Good Is Not Nice (to quote that time-sink site you all know and love). Lenny knew it, too: “Love is not a victory march, it’s a cold and it’s a broken hallelujah”. Love is the burning furnace of charity, and if you’ve ever been anywhere near a blast furnace or even a glassblower’s furnace, you’ll know how not-cuddly that is.
So yes, I’ve been crushing nice people’s nice little affirming messages on the Internet, what have you been doing today? 🙂
I like this. True and literally charitable, though unbelievers would deny “necessary.”
Working, mostly, but now I’m waiting for stuff to run so I can see why it doesn’t work. Crushing people’s affirmations is like stealing candy from babies; sure it’s easy and fun and the candy is quite tasty, but it makes the babies cry and then everyone else gets mad at you.
I’ve been making a tri-tip roast for my cousins.
I’ve never actually had a chance to ask someone who believes what you just described w/r to divine love: what, exactly, do you think that kind of love means? What kind of behavior would you expect to see motivated by that feeling? How can you reconcile the idea of that kind of unconditional love with the Catholic doctrine of eternal damnation? If you don’t want to answer, you don’t have to.
How can you reconcile the idea of that kind of unconditional love with the Catholic doctrine of eternal damnation?
God will love you every step of the way as you march yourself into damnation. God will forgive you at any step, and thinking “Oh I’m too big and terrible of a sinner, even God can’t forgive this” is making a fool of yourself, you’re not that important in the universe. No, not even if you’re Adolf H.
But God is not “nice vaguely senile old Santa Claus gift-dispenser in the sky”. God is also just, and if you choose to the very end to say “Non serviam”, then you will go to Hell. For all eternity. And it will be terrible (whether we want to think of it in the old burning fire and torture sense, or the absence of God sense). Heine’s alleged deathbed aphorism “Of course God will forgive me, that’s His job” is a double-edged sword; there’s not necessarily any “of course” about it. You can’t slide right up to the very end not having a particle of contrition or intention to do anything but your own way every moment of your life, then expect “well God is supposed to forgive me, I don’t need to do anything about it”.
There’s a lot of trendy forgiveness about, I’ve seen some examples of it online since this is Pride Month, about what Christianity ‘really’ is or what Jesus ‘really’ meant. It’s ironic because it’s “don’t be judgemental, and I’m judging you for being judging”, but never mind that. Love, forgiveness and Hell are hard sayings and hard doctrines. People have been trying to fudge around it for a long time – either by downplaying the love (sinners in the hands of an angry God) or doing away with Hell: either nobody goes there since everyone is saved (so you can go on torturing eight year olds to death until you drop dead yourself and you’ll still be forgiven and saved), there is no Hell (so ditto), or the souls of bad people just go ‘poof!’ when they die so good people like us will eternally exist in The Nice Place but there won’t be any bad people (so we can feel nice and superior about not having eternal suffering, but we don’t have to deal with the problem of evil either).
God does love everyone – that’s the hard bit. Because for all the transgender lamb cartoons, God also loves the TERF flock and people doing the stone-throwing (that’s the bit that gets elided by the crowd going “Gee, I wish the conservative transphobic church congregations realised this is what true Christian compassion is and what Christ was really all about”, the compassion extends to the mean ole conventional cis people too). And Hell exists – that’s also the hard part; the parable of the lost sheep is that the sheep was lost and needed to be brought back; your transgender lamb will have to abide by the rules of the flock after all. And the non-straying flock can’t be any too sure that they won’t end up in Hell if they simply rely on the fact that ‘I’ve always kept the rules – well, the ones that were convenient and socially advantageous to keep’.
People don’t like either of those messages. They want unconditional love and mercy (for me and those like me) and punishment (for the bad people who aren’t like me and those like me). God loves everyone (even the Nazis whom you want to punch). There are consequences of our behaviour (even if we thought punching was okay because we were punching bad people after all).
Just to check if I understand it correctly — the only problem with this behavior is if you keep doing it literally until you drop dead; not giving yourself at least five minutes to stop and repent.
And the kids you tortured will probably go to hell, because they likely hated you intensely until literally the moment they dropped dead. (So they didn’t repent one of the capital sins.)
The problem I have with this concept is that I don’t believe that there is anyone who, faced with direct empirical evidence of the existence of God, Heaven, and Hell, would choose to go to Hell. If God gives people this choice face-to-face, so to speak, then it’s not really a choice; I expect that even a super-hard-core atheist would kneel. If God insists that people make this decision before dying, without empirical evidence, then he’s basically playing a prank on humanity, and that decision is incompatible with any reasonable definition of love.
I realize that this isn’t original thinking on my part; theologians have been debating the Problem of Hell for millennia. I just haven’t ever seen an apologia I considered adequate.
I can see a couple of ways out, but they’re heterodox at best, heretical at worst.
Possibly the least objectionable* observation is that God’s Word came down to us through people and is laden with those people’s misconceptions and misunderstandings of what they were being told. Layer these on top of one another and you’ll have generations of really smart people (theologians, rabbis, etc.) trying to come up with a coherent whole.
Suffice to say that God, if He exists, has some kind of plan for what to do with sinners and that plan need not be anything intuitively obvious to us – or consistent with the orthodox message – because the limited human mind cannot comprehend the divine. Mysterious ways, and all that.
My favourite bit of not-serious-but-perhaps-more-serious-than-you’d-think theology comes from Tolkien (a devout Catholic, as we all know), via Eru to Melkor: “there is nothing that doesn’t have its roots in me, and anything you do will ultimately contribute to the glory of my creation”.**
Seems an uplifting message to me.
* Other than to biblical literalists, at least, but if I were any kind of Christian, I’d be a Roman Catholic, so there.
** My chief beef with Tolkien was that he awoke Lewis’s faith and Lewis wasn’t the kind of person you want to think about theology. Ugh.
It’s quite simple, I think. God calls you before Him, then points at this statement:
God says, “Admit this is wrong, and you can come to Heaven.”
Now what?
The problem I have with this concept is that I don’t believe that there is anyone who, faced with direct empirical evidence of the existence of God, Heaven, and Hell, would choose to go to Hell.
Richard Dawkins (I believe it was him) once said that, even if he woke up one morning to discover that the Second Coming was in progress, he’d assume that he was dreaming, or hallucinating, or that some sort of mass hysteria was going on, or that technologically-advanced aliens were playing a practical joke on us, rather than that God actually existed and that Christianity was true. So there’s at least one person who claims that he wouldn’t accept direct, empirical evidence of the existence of God.
More generally, people do all sorts of things that are clearly making them miserable without giving up, and ignore all sorts of inconvenient truths if they don’t like the implications. So I don’t think it’s at all implausible that this sort of behaviour would continue after death.
As for the “no evidence” claim, in my experience lots of people don’t even know what would count as evidence in the first place. Often people say something along the lines of “If you can find something that’s inexplicable by science, I’ll take that as evidence,” and then when you point out such a thing — the existence of consciousness, for example, or the existence of the universe in general — they dismiss this on the grounds that it’s nothing but “God of the gaps” reasoning, and therefore inadmissible. Of course, if the only evidence you’ll accept is a gap, and you dismiss the use of any such gaps as inadmissible, it follows pretty trivially that you’ll never find any evidence to convince you, but it isn’t God’s fault that your own standards are self-contradictory.
Another option is just to say (with Bertrand Russell and Richard Dawkins, inter alios) that maybe the universe just exists, and there’s no explanation for it. Again, though, this is just a piece of wilfulness, not a proper argument, and is almost never applied consistently or in good faith (just try going up to Dawkins and telling him “The existence of transitional fossils isn’t evidence for evolution, because they might just be a brute fact with no explanation!” and see how seriously he takes you).
Another one that’s just occurred to me is to stack the deck in your favour so much that your naturalism becomes unfalsifiable, by declaring that any non-supernatural explanation is ipso facto superior, and hence there can be no evidence for theism provided that you can cobble together some naturalistic just-so story to oppose it. Sure, your account of the Resurrection requires ignoring the primary sources whenever they contradict your theory, positing a whole load of stuff with no justification other than “It would be more convenient for my theory if this happened”, and hypothesising a huge Dan Brown-esque conspiracy theory which somehow never got revealed despite the people being involved having every motivation to squeal — but it doesn’t involve any reference to God, so checkmate, theists!
In short, what does or does not count as “evidence” is sufficiently indeterminate and subjective for you to arrange things beforehand (or even on the fly, if necessary) so that nothing can possibly count as evidence against your preconceived beliefs, and appeals to lack of evidence should accordingly be taken with rather a large grain of salt.
Haha, explain to me why it is wrong (to my satisfaction) and I will admit that it is wrong. I can do no else, without making a mockery of the word “admit”.
(Incidentally, even without the proviso I would likely choose Hell, since Heaven implies worship (at least in its classical conception) and I will not worship an unjust God, which the God of the Bible appears to be.)
I was wondering this question, and remembered a story recounted in the Gospel of John.
To set the scene: a woman has been caught in the act of adultery, breaking the Law of Moses. [1] A set of religious leaders drag her in front of Jesus, requesting that He join in their judgement. [2]
Jesus looks at her, looks at them, and uses a bit of rhetorical judo on them. Let the one who is without sin cast the first stone. He pauses to let the words sink in, then He stoops to write something in the dirt. [3] One by one, the accusers leave as Jesus looks around.
Finally, Jesus talks to the woman. Who condemns you? She replies that no one is condemning her. He then says Neither do I condemn you. Go, and sin no more.
The love that Jesus showed to that woman was to not punish her in the way that the Law of Moses said that she deserved.
But it was also love to instruct her to leave the life of sin behind.
There are other parts of the Gospels that talk about love, forgiveness, and calls to righteousness. There are also teachings about eternal punishment of those who don’t put any effort into following the instructions to go and sin no more.
———-
[1] After five or six times reading this, I realized something that I had missed, and never heard much discussion of.
Where was her partner in this crime…er, sin?
I think the other person involved got away, somehow.
[2] Jesus was popular, or at least notorious. Maybe these leaders wanted to accuse Jesus of being too loose with the Law, or maybe they were genuinely interested in whatever response He would provide.
[3] Some part of that interaction between Jesus and the accusers is in this act of writing on the ground. Was the writing itself important? Were the things written important? Or was Jesus just waiting for the accusers to get the hint and walk away?
Whatever it is, it’s less important than the conclusion of the story.
I’ve heard a theory that he was writing the names of the accusers and their sins in the dirt. I don’t know how much credence to give that, but it seems to make sense given their reaction.
what, exactly, do you think that kind of love means? What kind of behavior would you expect to see motivated by that feeling?
Oh, that’s easy — just go read a few biographies of saints and see what sort of things they did.
The whole concept of divine love or charity is incredibly confusing because of the poor choice of names. It’s not anything like love as is normally understood, neither romantic nor platonic, and it’s certainly not like charity in the ordinary sense of the word.
Maybe it was clearer in Latin or Greek but it seems hard to fault people for interpreting the language of Christianity by its straightforward English meanings instead of through the lens of dead languages. Given that even the Catholics conduct mass in the vernacular, would it kill them to be a little more clear on what they’re talking about?
I mean, I feel like Modern English has to share in the responsibility here. It’s not good that so much of the straightforward meaning of “love” became “I want to have sex in a respectful, socially-approved way.” 😛
it seems hard to fault people for interpreting the language of Christianity by its straightforward English meanings
The straightforward English meanings have been boiled down to “Love is what we celebrate on Valentine’s Day, and by celebrate we mean sell chocolates and flowers and hotel breaks for romantic weekends, because the only love that counts is sexual/romantic”.
Personally, I blame the Romantics for this 🙂
The whole concept of divine love or charity is incredibly confusing because of the poor choice of names. It’s not anything like love as is normally understood, neither romantic nor platonic, and it’s certainly not like charity in the ordinary sense of the word.
It’s the original choice of names. It’s not St. Paul’s fault that we moderns have bastardised the meanings of common religious terms beyond all recognition.
Do you think you bring them closer to God by crushing them? Is this God’s work, or yours?
Since this is already a silly thread
I figured the link was going to this.
Silly? No I don’t find it to be silly.
I’ve been converting a draft for kindle of my price theory text into a draft for print.
Which should be easier than it is.
God loves the fascists
God loves the IRA
God loves Herr Fritzl
God loves King Leopold
God loves the whole world
It’s such a brilliant place
Boom dee a dah, boom dee a dah, boom dee a dah, boom dee a dah…
Why is it a thing that geographical names get translated into different languages? Why is Brasil Brazil in English? What is the point? Why is Praha Prague? Again, what is the point? Why can’t we all call things what the natives call them, at least to the extent we can easily pronounce them?
Edit: A hypothesis I just came up with while noticing that getting the spelling mostly right doesn’t mean one gets the pronunciation right: Maybe, for some reason, it’s better and less offensive to intentionally call something different from what the natives call it, than to try to call it what the natives call it and fail.
Because language is spoken first and written second. People hear about places and they are later written down. Once it’s written down enough, you’re not going to change it because the natives spell it differently, particularly if their writing system doesn’t even have the same characters
But in at least some cases it isn’t just spelled differently, it is pronounced differently.
E.g. Deutschland — Germany
In Spanish, it’s Alemania.
Another one that changes is the Netherlands; in Spanish, it’s called either Paises Bajos (Lower Countries) or Holanda (I was told by a Dutch guy this is offensive to some people, kind of like calling Spain Castille, because Holland is just a province, not the whole country).
Spain has a pretty consistent name AFAIK, derived from Hispania.
But Chinese names to many countries don’t sound at all like the names we use (from when I was studying Chinese). It’s the same with Korean; they don’t use easily recognizable country names, even for countries where contact was made relatively recently (and thus no mutation happened).
Alemania was the part of what the Romans called Germania inhabited by the Alemanni tribe after breaking through the Roman borders in the Crisis of the Third Century. Since it included the banks of the Rhine and the upper Danube River basin as far as the confluence with the Lech River, it’s not surprising that the Spanish took the name of the proximate part for the whole.
And Deutschland is the endonym, while some languages prefer the Latin exonym because it’s Latin.
Holland is actually two provinces now, although this wasn’t the case for much of Dutch history. From about 1101 to 1806, Holland was a single entity. During this period, the most intense interactions with Spain happened (including the Dutch War of Independence aka the Eighty Years’ War).
Louis Napoléon Bonaparte separated Holland into two provinces and then named the entire country the Kingdom of Holland. So at one point in time, Holland did refer to the entire nation. However, this only lasted 4 years, before older brother Napoléon Bonaparte got fed up with how decently his little brother tried to rule.
People who live outside of North and South Holland tend to dislike it when the entire country is referred to as Holland, which is part of a more general feeling of being overlooked.
Also place names are subject to phonetic evolution overtime just like other words. That’s why the city of “Florence” is called “Florence” in French and English, “Florencia” in Spanish, “Firenze” in Italian and “Fiorenza” in the local Tuscan dialect, all deriving more or less directly from the original Latin noun “Florentia”.
Seems like it would be the job of dictionary and atlas makers to be the authority on the correct names. In this case, the correct name should be determined from the top down.
I’ll note that in a few cases it seems like we have changed the name in English to match what the natives prefer. For instance, Iran instead of Persia. Also, I assume it’s Mumbai instead of Bombay now because that’s closer to the native term, but I don’t really know the history of that particular change.
Also, I assume it’s Mumbai instead of Bombay now because that’s closer to the native term, but I don’t really know the history of that particular change.
AIUI a lot of the natives actually dislike the name Mumbai, because it’s based on the name of a Hindu goddess and the Hindu nationalist party changed it as a sort of territory-marking exercise. As a rough analogy, imagine if a white nationalist party was voted into power in New York and renamed it Confederacytown.
As a rough analogy, imagine … New York
Almost as if a bunch of Brits showed up and sneeringly told the local “Janke”‘s that they now live in New ENGLAND instead of New HOLLAND, and renamed New AMSTERDAM to instead be New (Some Rando Town in Middle England Somewhere).
Yankee comes from Jan Kees, which were two extremely common Dutch names (and still very common, but not as dominant as they were). So it derives from non-Dutch people interacting with Dutch people and noticing how often they had one of these two names. This became a generic term in the same way that peculiar names that are common in a subculture are sometimes used to refer to that subculture.
Billy-bob Dumbass
Drove t’ New York City
In a rented Tesla
Popped the collar of his shirt
And called it Gucci Armani
Billy-bob Dumbass
You do you
Billy-bob Dumbass
Dressed up so pretty
Bust a move at the club
And maybe you’ll get lucky
Aapje:
Jan is presumably a variant of “John”, but is “Kees” related to any common English name?
The name itself is not directly linked to an English name, but it is a diminutive (hypocorism) of Cornelis, which is the Dutch version of Cornelius. Dutch (ethnic) protestants regularly adopt diminutives as the legal name. Dutch (ethnic) Catholics more commonly have a legal name that they never actually use, favoring a ‘calling name’, although having a separate legal and daily name is a fairly common practice in The Netherlands. For example, Anne Frank’s legal first name was actually Annelies.
Quite a few birth announcements state both the legal name and the calling name, which sometimes is completely different from the legal name.
Cornelius is a Roman name that is rarely used in English. Chevy Chase is actually called Cornelius Chase. Ex-senator Robert Byrd was born as Cornelius Calvin Sale Jr, but his name was changed after adoption. Cornelius Oswald Fudge is a character in Harry Potter.
Or sometimes the exact opposite of that. Anglophones clearly call Paris “Pear-iss” instead of “Pah-ree” because more people saw it written down than heard a francophone speak it, and pronounced it according to English phonetic rules. There are towns in the US called Versailles (pronounced “vuhr-sales”) and so forth, too.
It doesn’t even need to cross languages. There’s a river in Connecticut called the Thames, named after the one in England but pronounced the way it looks.
My favorite is Houston Street in New York, which is correctly pronounced as house-ton. Anyone who pronounces it like Houston, Texas is outing themselves as being from out of town.
This raises a question I’ve long had. Is Houston Street in NYC named after Sam Houston or someone else? If someone else, how was that person’s name pronounced?
According to Wikipedia, it’s named after William Houstoun (with an alternate spelling). The street was apparently originally part of his father in law’s estate.
because more people saw it written down than heard a francophone speak it
Actually, in Old French, the “s” in “Paris” was pronounced. While in French, final “s” was later lost in pronunciation (but the spelling wasn’t updated to reflect the change), that phonological change never happened in English. So the “s” is pronounced in English because it’s always been pronounced in English.
Funny story, in Interwar America it seems to have been pronounced the French way, at least by some people. Presumably all the American boys sent over to fight in France adopted the French way of pronouncing Paris, and it stuck around for a while before going back to the standard English pronunciation.
This is typical of English–I noticed it when studying French.
In Polish, the capital of Poland is “Warszawa” – var-shah-vah
In English, it’s Warsaw — war-saw
In French, it’s Varsovie – var-so -vee
And so on–English keeps the spelling close, French keeps the pronunciation close.
The thing that blows me away is that we (ie, anglophones) call Firenze “Florence.” THERE’S NOT EVEN AN L IN THERE. And it’s not like this is some exotic far-away language with phonemes and alphabets we just can’t deal with.
I wouldn’t be surprised with a Paris-style thing where the real name was given an anglicized pronunciation. Like if we called it “Fire-ence,” okay, sure. But how did “Feer” get turned into “Flo”?
Also baffling: there is a city near me called Vallejo. It is universally pronounced “Val-ay-ho.” Why? Why not either “Vay-ay-ho” or “Val-eh-joe”?
Just addressed that one post above.
I am pretty sure the J as H is more common knowledge than LL as Y for Americans trying to pronounce Spanish more authentically, these kinds of mistakes are pretty common where I am from, jajaja.
It didn’t. “Florence” is from the Latin name, “Florentia”; it’s the Italians who changed the “Fl-” to a “Fi-“. It’s not our fault that those guys can’t speak proper Latin.
Another fun one: how many English speakers pronounce the ‘j’ in ‘jalapeño’ as an ‘h’, but don’t pronounce the ‘ñ’ as a ‘ñ’?
I’ve heard “halapeeno”, but it sounds odd to my ears. Not as odd as “dzhalapeeno” would, though.
I’d never heard it mispronounced before, but the characters in Trailer Park Boys pronounce it “dzha-la-pa-no”.
I’ve only heard the j pronounced ironically.
An example: In the case of Prague, the oldest written name of the city is Praga, in Latin. People started writing in Czech several hundred years after the city was founded. Praga was rendered in French as Prague, and English uses the French name instead of the Czech one. I think broadly similar processes account for other discrepancies between native and English names of places.
Worth mentioning is the fact that there was likely a consonant shift in the interim:
I wasn’t able to easily find out when the earliest record of Praga as a name is from, but I’m assuming it pre-dates the consonant shift. Given that most people in France or Britain wouldn’t have had any contact with spoken Czech for centuries, the people responsible for passing the name down would continue to do so completely oblivious of the change.
I believe there are often many steps involved, Marco Polo first hears of ‘Japan’ from a Chinese person, not much hope of the eventual English version being very correct.
It’s a little surprising that there aren’t more demonyms which translate as “Those bastards”.
A substantial number of Native American tribes in North America are known by the name their enemies or neighbours gave them. Comanche means enemies in Ute. Ute derives from the Apache word for mountain people. Apache is thought to derive from the Zuni word for Navajos, which in turn means enemies.
The Zuni actually called themselves that, but Navajo is from the Tewa term for a large field. The Tewas call themselves that, but most people know them as the Pueblos, which literally just means “towns” in Spanish. Because unlike the other natives in the area, they lived in fortified towns.
In South-East Asia the Palaung call the Jingpo “khang” which means something like “mudblood”. The Jingpo use the same term for the Chin, and “yeren” meaning “wild men” for the Lisu.
The West Germanic word walhaz means “stranger” or “foreigner”, which is how we got Wallachia, Vlachs, Wallonia, Walloons, Cornwall, Wales, the towns of Wallasey and Welche, and Włochy the Polish name for Italy. In contrast the Slavic word for Germans is derived from a word that meant “mutes”, whereas Slav means “one who speaks”.
So while there’s not a lot of terms that mean “those bastards”, there sure seem to be a lot of variants on “those people”.
I would rather use “the others”, but that seems to be pretty much the long and short of it.
The terms “barbarian” and “hottentot” are both of imitative origin, and close to what you’re probably going for.
If we were going to rudely name them after how they talk, why wasn’t it Clikclik?
Another example: “Cologne” in French and English, “Köln” in Standard German, “Kölle” in the local dialect. All from the unreasonably long Latin name “Colonia Claudia Ara Agrippinensium”.
Based on these examples, it appears the usual explanation is that the locals change the name more than foreigners do.
Played with in Futurama.
Worm in Fry’s gut: I am the Lord Mayor of Cologne!
Fry: You mean Colon?
Worm: … state your business.
Based on these examples, it appears the usual explanation is that the locals change the name more than foreigners do.
Wouldn’t surprise me, as frequently-used words seem to undergo change more than infrequently-used words. E.g., “to be” is irregular in most languages, AIUI.
You have it backwards; “to be” is irregular because it’s undergone less change. Take Latin, where do, dare, dedi, datum‘s forms partly precede the conjugation family it’s grouped with—you can tell by the irregular second and third principal parts. Or sto, stare, steti, statum. You can see the resemblance to perfect reduplication in Greek and Sanskrit; these forms are relics of older conjugation. From Sihler:
(emphasis mine)
About “to be” specifically:
(emphasis mine)
If you want examples from English, the -en ending in perfect forms is older than -ed. But -ed has been steadily taking over, and even verbs whose perfect once ended -en are more often rendered with -ed now. Don’t be surprised if “beed” never catches on, though.
Here’s a fairly short explanation of the process:
1. An adventurous egghead travels to a faraway land and records his experiences. He tries to render the local names as well as his written language allows.
2. Subsequent generations of eggheads aren’t that adventurous, but that’s okay, ‘coz they’ve got Written Sources. Thus, they go back to the original work (or – more likely – to works that reference the original work, or even works that reference works that reference the original work and so forth) and faithfully copy the name as originally rendered. It’s all good, they know what they’re talking about.
3. All the while, the eggheads’ language is undergoing gradual changes, as languages do, so the pronunciation of a particular spelling subtly changes as well (or maybe the spelling changes to reflect how the name is pronounced). It’s all good, everyone knows what they’re talking about.
4. Meanwhile, the locals’ language is also undergoing gradual changes and with it, the commonly used names of places that are now pronounced a bit differently. It’s all good, everyone knows what they’re talking about.
5. Centuries later a representative of the eggheads’ culture and a representative of the local culture meet, use the name of the place that they’ve been taught and wonder if they’re talking about the same thing.
Not quite the same thing, but I was involved, with my sister, in a conversation in Japan c. 1963 with some Japanese students. The name of one of the world’s most prominent political figures came up, and they couldn’t tell who we were talking about.
It turned out, as best I could tell, that the Japanese version of Mao Tse Tung was the pronunciation in Japanese of the Chinese symbols for his name. My understanding—someone who knows more about the language is welcome to correct it—is that an individual symbol in Kanji can represent either the sound of the Chinese word it represents or the Japanese word with the same meaning as the Chinese word it represents.
That’s always been my understanding of how Kanji is used, yes.
As I understand it, the reading is context dependent, so mere knowledge of the character does not ensure correct pronunciation. Furigana may provide disambiguation, but I presume it doesn’t appear in most cases where an educated reader is expected to know the correct reading.
Approximately, but the on-readings of Japanese characters don’t map well to any modern Chinese dialects, because they were derived over a period of several centuries, can date from as early as the 5th century AD, and were probably somewhat garbled even then. There are also mistaken readings that have been encoded into the Japanese language, and even some completely synthetic readings for native Japanese characters.
(Don’t ask me to translate any Japanese, I only took a couple semesters and everything I remember now relates to martial arts. But I remember that much.)
Many (all?) of the Slavic languages call the Germans something beginning ‘Nem-‘ followed by ‘-ets’ or ‘-ski’ or suchlike, apparently from an old Slavic root word meaning ‘mute’, i.e. people who can’t speak our language and therefore might as well be rounded off to people who can’t speak at all (though the word for Germany the country is more likely to be something we’d recognise from Western European languages). And the word ‘Slav’ apparently comes from a root meaning ‘word’ – i.e. Slavic people are the people who do speak our language.
Also, check out Finland (Suomi in Finnish): travel north and you cross the border into Norja. Okay, that one makes sense. But travel east and you reach Venäjä. Or travel west and you reach Ruotsi. WTF? Plus their name for Austria is a calque: Itävalta, literally something like ‘eastern dominion’.
Ruotsi is pretty clearly just Russia said with a weird accent, which is not as weird as you might think. The old name for the eastern coast of Sweden around Stockholm is Ros, now Roslagen. The people from there were thus also known as the Ros. After they went east to rule over the Slavs they were known as the Rus, and eventually the people they ruled started to call themselves the people of Rus, or Russians. And that’s why the Finns call Sweden Russia; it’s the OG Russia.
Hmmm. Makes sense. Just looks pretty weird nowadays that the Rus have changed position 🙂
Anyway, I looked up the etymology of Venäjä – apparently that’s from an old Germanic word for the Slavs that none of the Germanic languages have any more.
Austria is Österreich in German – Öster – Eastern, Reich – Dominion
That’s exactly what I meant – it’s a calque of Austria’s endonym.
Sorry, my mistake.
Having a standardized spelling for everything is a relatively new idea that wasn’t even feasible until dictionaries were made.
A more detailed explanation of the Chernobyl problems by Manly Scott Manley (21 minutes video).
UBS has come under attack for comments from its chief global economist regarding inflation in China (here’s a bloomberg article).
Can anyone explain what was offensive about what he said? I would quote it here, but I’m so far from understanding the reaction that I have no idea what the unintended consequences could be.
The insult formation [nationality]+[animal] seems very common and taken more seriously in Asia than my western sensibilities can fully grok.
This seems like a misunderstanding/cultural confusion/translation error situation.
Even granted that…unless there is some missing context it sure looks like he was using “Chinese pig” to mean “pigs that are in China” and not some weird reference to actual Chinese people. I agree it seems like a misunderstanding…and a simple enough one that it is hard to believe it has blown up like this.
Maybe I’m wrong though.
I think innocuous things that look vaguely controversial blowing up on social media is par for the course.
Granted, I am not actually on social media and only really interact with it through SSC comments.
Rival brokerages in Hong Kong stepped in, urging the bank to fire all people involved in the incident.
Yeah, I think there may have been some intentional fanning of the flames there, with rivals hoping to blacken UBS’ eye and gain market share at their expense and so making a big deal out of “Did you KNOW he called Chinese people PIGS???”.
I know no more than you, but a quick Google suggests to me that pigs are culturally important, high status animals in China and that 2019 is the Year of the Pig, so perhaps making light of a devastating porcine epidemic is more culturally insensitive than it would naturally seem?
Or perhaps China just wants an excuse to knock UBS in order to promote home-grown rivals.
Surely no ulterior motives there! 😉
Yeah… not sure I’d be taking HR advice from my competitors…
Half-baked thought prompted by this thread: the distinction between the state “giving people free stuff” vs. other kinds of state spending is one of those things that feels more real than it is, and policy proposals can end up introducing inefficiencies by trying to game how it feels.
At one end of the spectrum, I’ve heard people sarcastically describe the existence of a state-funded Navy as “giving away free boats.” This of course doesn’t feel right at all, to anyone, because there’s no obvious free market interaction this is substituting for; nobody can buy a 1 in 300 million share in naval protection. At the other end of the spectrum is, say, Government Cheese, which anyone would have to describe as the state giving away free cheese (even though it also serves a secondary policy goal delightfully termed “quantitative cheesing”). But in the middle of the spectrum, you can change how much a policy feels like “giving away free stuff,” often by adding indirection or complexity. You can give people free subway rides, or you can allow tax-advantaged salary deductions for a special interest-earning account that can only be used for transportation. If giving away free stuff is off-brand for you, you might be tempted to propose the latter even if it’s a less efficient way of accomplishing the same thing, because routing it through taxes and employers and economic transactions feels more markety and less free stuffy. Conversely, if “free stuff” is currently on-brand for you, “we’ll pay off your unpaid student loans with a one-time tax credit” can be more appealing than “we’ll give a one-time tax credit to everyone who’s had student loans regardless of whether they paid them off” because the former is more like getting something for free.
Wait, are there people in favor of student loan forgiveness who would oppose a tax credit that also went to people who had already paid off their student loans, or was that just a hypothetical? Both of those score as “free stuff” in the sense you’re using, though, at least to me.
I’d be shocked if there weren’t.
If the credit applies to, say, every living person who ever went to college (note: a lot of people with significant college loan debt didn’t graduate), then the vast majority of people collecting the credit will already have paid off their loans. Additionally, these people will, on net, be much richer, as a group, than the people not receiving the credit.
This would be quite a regressive measure, one that could accurately (for once) be described as “tax cuts for the rich.”
Well I would think it would be limited to “every living person who took out a loan to go to college” based on the phrasing in the OP, which would exclude the rich people who paid for college out of pocket. It would seem to take a lot of mental gymnastics to frame it as regressive to credit back a 45-year-old who finally paid off their loans a couple of years ago in addition to crediting the 25-to-35-year-olds still paying them off.
It would also exclude the people who worked part-time and summer jobs to pay for college even though that took up most of their partying time, and it would exclude the people who chose to go to state schools rather than elite private colleges so they could graduate debt-free. No, wait, it wouldn’t exclude those people. It would tax those people, to retroactively pay for the people who made the opposite life choices.
I mean, there might be, if the state didn’t make it illegal to compete with it…
I’ll go a step further…can’t we be pretty sure there would be based on the state of things before large state-sponsored navies? Someone go find @bean.
You called?
The problem is that the state of naval warfare has changed a bit since the rise of large state-sponsored navies, so any analogies are going to be imperfect.
Well, that’s one problem. The other is that naval power is hard to generate and requires the sort of actions that governments are good at and corporations aren’t. A modern warship is incredibly complex and sophisticated, with a lot of people, both in and out of uniform, supporting it. This is required if you want to compete with someone else who is working on the same level, and I can’t see a corporate-funded navy reaching it. If every navy on the planet was scrapped as part of the grand AnCap collective treaty, we might be able to get away with it. But that world is a very long way from the one we have.
I was thinking more “alternate history where there is no rise of large, state-sponsored navies” and what that would look like than how we would get there from here. Certainly would seem to be an impossible bell to unring.
If it comes to that, didn’t the Athenians have to soak the rich for “special public services” just to get triremes built?
What a strange hypothetical. Traditionally sea trade was high-risk/high-reward, which meant big rewards for groups of people living together near the sea that developed risk-sharing social “technologies.” This could take the form of a state monopoly on sea trade, where the society is run like the royal family business, or it could take the form of private contracts enforced by the king’s code of laws.
Then more sea trade -> more commerce raiding -> more reward for organizing a state navy. Hell, a big, enduring group of pirate ships basically becomes a small state, as St. Augustine reports the humble captain of one pirate ship telling Alexander the Great.
@theredsheep: Indeed!
That seems unlikely. There were large state-sponsored navies in the Ancient World, and pretty much everywhere else that has reached that level of sophistication and used much water transport. There have been brief periods when converted merchantmen were good enough, particularly during times with weak states, but they didn’t last very long.
Before you have large state-sponsored navies, you get merchant marine which can get quite large and organised.
Then as you get big rich merchant vessels carrying valuable cargoes (and passengers), you get pirate fleets preying on them (and possibly having island bases because now it’s worth their while to co-operate rather than every captain with his own ship trying to supply and repair it and for mutual defence).
Then it becomes enough of a problem that either the merchants have to find some way of getting ships that are not merchant vessels but warships designed and built and maintained, or they dump the problem into the lap of the state because “hey, you’re supposed to be the law and the defenders of the citizens round here”.
For a long time the merchant marine basically was the navy. Until the development of line-of-battle ships in the mid-17th century, there were no significant design differences between military vessels and the largest civilian ones; they were similar in size and sail plan, and could be (and sometimes were) similarly armed. Building a naval force often consisted mostly of pressing civilian ships into service; only 28 out of 130 ships of the Spanish Armada, for example, were purpose-built warships.
@Nornagest
But there were lots of dedicated warships before the invention of the ship of the line. They were just mostly galleys. I’m not as familiar with the 16th/17th centuries as I am with later eras, but I suspect that it was a combination of merchies being good enough and states too weak to afford large fleets.
@honoredb
Government spending is largely paid for by taxes. So one way to look at things you get from the government is that they are something you paid for, not much different from getting the good or service from a private company. You don’t call it ‘getting free stuff’ when you get a service or good you paid for, even for insurance, where the payout is need-based.
However, this point of view is very hard to defend from the individual perspective, when the payments are purely need-based and there may not be any period when the person pays into the system. For example, welfare for a handicapped person who can’t and will never have a job.
When government spending benefits both people who are net taxpayers and those who are not, it’s a more hybrid situation. Something similar is true when people tend to get much more from the government than they pay in tax at one stage of their life, but pay more than they use at other stages.
Of course, from a hyper-libertarian perspective, like David Friedman’s, people should always have the right to choose a provider of a service/good and to choose not to buy the service/good at all, so then all of it is coercive: making people pay for something that they don’t necessarily want, or don’t want from that provider, even if they do benefit.
I think one distinction between some things that feel like “free stuff” and other things that don’t is whether the recipient of the “free stuff” can turn that benefit into liquid assets relatively easily. It doesn’t distinguish perfectly, but it distinguishes some things.
You can’t sell your “share” in protection by the U.S. navy and turn it into cash. You can’t sell your right to drive on the highway either, your driver’s license is nontransferable. On the other end of the scale tax benefits and government checks are just money. In-between is something like food stamps, it’s possible to resell some forms of them even if it’s illegal. But even if you can’t do that, you can buy food with the food stamps and resell it below list price to get cash.
Of course, you do gain wealth in the long run by being protected by the U.S. navy and having access to the highway, but it’s not fungible with cash.
Appeals for somewhat obscure expertise: any good, accessible books on the history of the family? I don’t mean of specific families, but of how family structure and perceptions of it changed over time–something in the vein of Gies’s Marriage and the Family in the Middle Ages, but more general, or at least for other eras. I found one book on Amazon with a search (The Family: A World History by Maynes), but the few reviews seem to agree it’s mostly about criticizing misogyny and not about surveying structure. There’s also a book about ancient Greek families, but that has no reviews and is illustrated with the cover of a completely different book so I’m a little leery of trying it. This is a subject I’m really interested in, but haven’t read about in any systematic way.
I do have an even more obscure recommendation: The Child in Christian Thought, edited by Marcia J. Bunge. It’s a collection of essays that looks at how theological understanding of children has changed throughout the years – it starts with the New Testament, Augustine, Chrysostom, etc, continues through the Middle Ages and on to Barth and modern feminist theology. Bunge also has a book The Child in the Bible which does the same for various books of the Bible (and touches on some contemporary family practices in the ancient world).
The books are accessible in the sense that they’re collections of essays you can dip in and out of, but there’s no particular structure or intent to give a comprehensive overview of how the family changed over time. You also have to be interested in reading theology, not history.
If those sorts of things interest you, I could dig into my old essays from my Family and Ministry studies and come up with some more recommendations. If that’s not your interest, I understand!
Not exactly what I’m looking for. I’m asking because I’m unsure how many of our ideas about family life over the years are rubbish. For example, until fairly recently I believed that the nuclear family, in America at least, was this newfangled thing that postdated WWII. That was what my high school history textbook said. But Gies says nuclear was the norm in Catholic Europe, and my Oxford Dictionary of Byzantium says the same for the East. Heck, Little House on the Prairie shows a perfectly normal nuclear family in the nineteenth century. On the other hand, what little I’ve read of traditional China implies that extended was the norm, and ditto for early Islam. I mostly want a sharper picture of structure and norms.
(but thank you!)
@theredsheep
In traditional society, women tend to live with their parents until marriage (not least because it is a man’s job to earn enough for a house/farm (or inherit one), whereupon he becomes marriageable). It also is the children’s job to take care of the elderly, if they can’t provide for themselves anymore. This seems strongly based on the lack of alternatives: providing home care or elderly homes is quite expensive.
The dynamic in these societies seems heavily dependent on the ability for men to earn or inherit enough money to become marriageable. The longer this takes, the longer men and women have to wait to start a family of their own & the longer that they tend to stay at home.
Of course, only one of the progeny has to (and can) take in the parents. So with high birth rates, many of the children would not have to take in the parents.
I think that the extended family narrative takes this dynamic and exaggerates it.
Does anyone have much experience with language learning?
I’ve been practicing Chinese on and off for about 5-6 years, and I feel like my vocabulary is very strong for my level (I know 800-1000 words), but my ability to communicate still feels very limited. I have trouble following books or movies, and can’t really speak with people other than my girlfriend.
I feel like I’m an A2 CEFRL normally, but at B2 with my girlfriend.
Is anyone else in this position? Any tips for how to break out of it?
Possible advice depends on what you are already doing.
I’m not sure if this matches your A2/B2 split, but I have two reasons why I have experienced that type of discrepancy:
(1) If I spend a lot of time with one person or small group, then both accommodations and internal references develop. For example, the native speaker might realize that I always mispronounce a particular word or incorrectly use a particular construction, but they have learned what I mean.
(2) Sometimes, I’ve really focused on one subject area for a particular language and then become relatively strong when the conversation is about that thing; however, I might be totally lost when the conversation is about something “easier” or more common. For example, I can (or could at one time) conduct business meetings regarding bank loans and credit risk in German and Thai, while not being able to talk about popular sports. I think hospitals/doctors observe a more widespread version of this when 2nd generation immigrants are asked to translate between the doctor and 1st generation patients. While the kids may even appear functionally fluent in both languages, they may completely lack the necessary medical vocabulary in either/both languages.
Reading general interest magazines or watching news can help with both of these issues. For some languages there are groups that produce this type of material that is simplified for language learners (vocab choices are more common variants, more background is given to understand context). Diving into material that was created for fluent speakers can be really frustrating if you aren’t already pretty strong.
In some respects, regular books/movies/tv aren’t good for learning language, because entertaining writing tends to have unrealistic dialogue. (And it being unrealistic is a feature, not a bug.)
Either watch/read kid’s media (which is designed to teach people things), or watch non-narrative Chinese media, particularly their variety shows. They’ll beat a joke to death, which means tons of repetition to learn complete phrases rather than words, but also getting you more used to actual conversational banter. They also like to throw extra captions on everything for comedic effect, which helps with referencing words you don’t know.
(The regular talk shows still have the issue of leaning towards more esoteric words, since they usually double as documentary/promotional bits.)
As usual when this topic comes up, I will recommend LingQ as a good site for building your listening and reading comprehension (or a free equivalent like Learning With Texts, though I think that there you need to import your own content), and just booking a load of tutoring sessions over iTalki or some other language teacher marketplace (or free exchanges with Chinese speakers who want to practice their English, depending on whether time or money is more of a constraint for you).
Does anyone know of resources to get a basic understanding of electrical engineering? Thanks!
Digital or analog?
E: and is this an academic or functional interest? Because I’m not going to waste our time linking a primer on impedance or digital logic if you really care about building a robotic hatrack.
Either, I guess.
Edit: functional. My interest is in how electric power is generated and delivered to consumers, not so much robots or computers.
Sorry, see edit.
OK, then this is analog EE almost exclusively.
You’re going to want to look at basic circuits – circuit laws and passive components. Load balancing may be of use. Turbines are mostly thermo, not electrical, but if you want to understand the EE side of how they work a basic understanding of electric motors and rectifiers should do you good. Most EE courses will dive into semiconductor devices and amplifiers – I’m not sure if this will be interesting to you. Chances are that if circuit analysis makes you hungry for more they will.
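If it helps motivate the circuit-laws part, here’s a tiny back-of-the-envelope sketch (illustrative numbers only, nothing authoritative) of why transmission lines run at high voltage: for a fixed delivered power, raising the voltage lowers the current, and resistive losses go with the square of the current.

```python
# Why grids transmit at high voltage: I = P / V, and P_loss = I^2 * R.
# All numbers below are made up for illustration.
P = 10e6   # 10 MW delivered to consumers
R = 5.0    # 5 ohm total line resistance

for V in (10e3, 100e3):   # compare 10 kV vs 100 kV transmission
    I = P / V             # line current needed to deliver P at voltage V
    loss = I**2 * R       # power burned in the line itself
    print(f"{V/1e3:>5.0f} kV: {I:>6.0f} A, loss {loss/1e6:.2f} MW ({100*loss/P:.1f}% of P)")
```

At 10 kV the line eats half the power; at 100 kV it’s half a percent. That one relation explains a surprising amount of how delivery networks are laid out.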
If you want to get into grid design, there’s going to be a LOT more involved, and I’m not the one to look to for it.
I’d start with the first two courses here: https://www.khanacademy.org/science/electrical-engineering
After that, come check back – I’ll try and track down some texts that have the other things in them.
Thank you! Will do.
Are you looking to start from the basics and really undertand electrical engineering, or do you just want a primer on power generation and delivery? If the latter, this seems like it might be a decent starting place (as far as I can tell… I haven’t actually read it): http://lnx01.ee.polyu.edu.hk/~eewlchan/EE1D01/ebook/Pages%20from%20Electric%20Power%20Basics.pdf
Tangentially related to the downthread question of whether work is getting too intellectually intensive for people to do – is there a reason not to expect metic knowledge to develop around the kind of high-technology work the future seems likely to increasingly hold?
Arguments for:
1- Farming is really hard, and the fact that your average agrarian bear was capable of it is incredibly impressive. Collective knowledge is obviously good enough that it can overcome a lack of reasoning. There’s a wide history of success here to draw on.
2 – The benefits of individual epistemic (not like, epistemology, but in the sense of episteme) intelligence in occupations like engineering, programming, teaching, accounting, etc. seem limited. The sky may be the limit for the tippy top of the field, but there’s a lot of work in those fields you don’t need much reasoning for.
Arguments against:
1 – Metic knowledge works better when the landscape isn’t shifting under you. It’s possible that technological destruction is too fast for this sort of knowledge to develop.
2 – The kind of technological work that’s being done isn’t conducive to the production of metic knowledge.
3 – there’s too much atomization of society/the workforce for metis to condense
The first two objections seem fake as hell to me. At least in my field, experienced techs have way more of a clue than even experienced engineers about some things, and one of our biggest challenges is institutionalizing the knowledge they have. A tooling engineer is only half of what we need to build tools half the time. This isn’t surmountable with more training either; nothing but “ask the person who uses the tool” seems to be a satisfactory way to answer the question, “how should this tool work?” If the objection is that this wouldn’t apply to software, I’ll simply repeat John Schilling’s statements and mine from a few threads ago:
These failure modes aren’t something that can be reasoned out of. At least not easily. But I’m the sort of person who actually believes that “Big Data” is a dumb meme.
The third worries me more. If we’re engineering the vectors for metis out of society, we might be fucked. I consider it by far the most serious problem.
The obvious response to “The big corporation making a one-size-fits-all approach doesn’t actually fit our size” is “develop it in-house.” My current consulting job is banging out tiny little programs that solve some specific problem in a particular factory’s workflow, sometimes connecting to a big one-size-fits-all program’s database to slurp up some particular data they need. AFAIK, this is a fairly common thing for consulting companies to do.
The downside to writing a program that solves one specific problem really well, incorporating all the metis your end users have developed, is that obviously you can’t then copy it to a thousand other sites that have slightly different workflows and expect it to work equally well. So you’re giving up one of the key advantages of software if you do that – the ability to copy it – which is why big companies providing 80% solutions continue to exist and make more money than dinky little consultants.
So I think that software has exactly the same issue that other fields do when it comes to building institutional knowledge and metis and improving on it. Which I guess puts me firmly in the camp of “metis will continue to exist.”
There is one thing that might be unique to software: Open-source software has the potential to let someone take the code you’ve developed for one thing, and then tweak it to fix whatever specific problem they have. This lets you get some of the benefits of the one-size-fits-all software while maintaining your metis. Of course, this means you introduce all sorts of maintainability issues once you fork the codebase… which means now your developers effectively have metis of their own that they need to maintain.
Really, when you think about it, software is just a giant pile of accumulated knowledge – every bugfix is someone saying “Actually, the obvious solution doesn’t work, because we didn’t know…”
The thing about in-house, and to a lesser extent any kind of niche software, is that it will be tailored very well to the problem it is addressing, but everything else about it will be bad. The UI will be clunky, the performance won’t be great, it’ll be buggy and those bugs will be fixed slowly or not at all, it won’t be well documented, there will likely be serious security concerns, and so on and so forth.
This is because it’s really hard to write AAA software (by analogy to AAA games). It takes a lot of people with many different skill sets and ongoing attention.
It depends very much on how much the company is willing to spend on it and how seriously they take it. There is some very well-documented in-house software where bugs get fixed fast. See NASA, Boeing, Airbus, etc.
The finding of bugs tends to correlate with user base and intensity of use.
The clunkiness of the interface tends to negatively correlate with freedom of use (much in house software has to be used as part of the job, so workers have no choice but to use it).
Complexity of the interface tends to correlate with intensity of use and the capabilities of the users. Power users tend to prefer software with advanced capabilities and high learning curves over software with limited capabilities, but an easy to learn interface.
Incentives of in-house software tend to be different than those of off-the-shelf software, but so are the incentives of open-source software (which also tends to be poorly documented and to have more complex and less friendly interfaces).
The main advantage of open sourcing in-house software is to expand the user base & to share development effort with others*. It doesn’t suddenly create AAA software incentives/outcomes.
* Including features that are valuable to the company, but not valuable enough to implement themselves. If another company does want to spend the money, you get the feature for free.
Software quality is like airplane seats. Everyone likes to complain about it, but nobody is willing to pay even a little bit more money to make it better.
Programming is almost entirely reasoning and intelligence, *especially* at the bottom end of the field. At the top end of the field, you might actually need to have read some theory about compilers, operating systems, theorem proving or whatever. At the bottom end, you can just figure things out by trial and error without too much difficulty.
I do, in fact, pay a little bit more money to make my airline seats better. The airline industry almost always makes that option available, and I pay for it. The software industry, not so much.
The software industry, not so much.
Yes, we do.
Use open source, and then actually pay developers to fix what you need. If there isn’t an open source solution to replace your inhouse software, then open source your inhouse software, and then pay developers to fix what you need. If you can’t open source your inhouse solution because it belongs to some vendor who has you by the balls, then pay more developers to develop an open source solution to replace the vendor solution.
If you complain about paying developers, you lost the argument.
That sounds an awful lot like “If you don’t like economy-class airline seats, you should charter a private jet. If you complain about the cost of chartering private jets, you lose the argument”.
The airline industry at least offers a range of services between lowest-common-denominator crap and bespoke, because no, hiring someone to produce a custom solution for just one customer isn’t a sufficient alternative.
The problem isn’t so much paying developers as it is paying dynamics PhDs for ~5 years of work. Take a look at open source multiphysics simulation or CAD software; it’s very, very bad.
@Mark Atwood
Much in-house software is so tailored to the company that there isn’t actually a market for it, even at zero cost. Open-sourcing just makes it easier for hackers to attack the company, but provides none of the benefits of successful open sourcing: having a larger user base, sharing development with others, etc.
Open sourcing things is not a panacea.
@John Schilling
An important difference is that it actually saves the airline money to squish people into an economy seat by default, rather than give them a business class seat. It’s not (just) artificial scarcity.
In software, it is often no (or only slightly) more costly to offer the full feature set. So differentiated pricing is then going to be purely to capture more of the consumer surplus.
Also note that the differentiated pricing often depends on there being a dichotomy in the market. For example, it seems to me that business class is only viable due to business travelers (hence the name), who don’t pay for their own travel.
I regularly see software try to capture more consumer surplus, when a dichotomy in the market exists. For example, Adobe has long tried to do this with Photoshop, trying to get lots of money from professionals and less from prosumers.
Often it’s not that people aren’t willing to pay, it’s that they aren’t really interested in quality improvements, they are interested in signalling to others that they care about quality improvements. Which means they end up paying for the wrong thing. A lot of time is spent building the airstrips, with profound disinterest in whether planes can actually land there.
I’m not sure that some things are about raw intelligence.
There’s nothing like working with MDs constantly to make it clear they’re normal humans who can be idiots sometimes.
Guy with both an MD and a PhD who uses Excel every day of his working life complains that he needs the data sorted… according to a column in the spreadsheet in front of him. To his credit he was embarrassed when someone pointed to the “sort” button.
A lot of things have a fairly small hurdle to understand them. A lot of very bright people never push themselves over such hurdles.
I’ve got a very strong case of “learned helplessness” for anything produced by Microsoft. They have a history of radical UI changes, requiring me to entirely relearn how to use whatever-it-is. Consequently, I never put in the effort needed to become an expert user of anything they produce, since it’ll all be thrown away in around 2 years.
Apple is less bad – often, the things I learned still work after their UI “improvements”, even though they are no longer discoverable (not visible on menus etc.). I have a 2 page list of things I do to a new Mac, to make it behave the way I expect (in many cases, the way Macs did at the time I learnt some particular feature), but at least it’s possible. So I’m a bit more of a power user of OSX than of Windows.
FWIW, I pay the premium to buy Apple because of the relative UI stability. Or I use Linux, but that has its own collection of problems, less relevant to this comment.
Consequently, I never put in the effort needed to become an expert user of anything they produce, since it’ll all be thrown away in around 2 years.
You said Linux wasn’t relevant, but this sentence is exactly me and Linux. It’s probably worse in the open-source world, because at least with the proprietary OS’s you have the alternative proven-to-work reward-system called “getting a paycheck.” Linux keeps on reinventing things that worked acceptably well because the reward for inventing a new subsystem is so much greater than marginally improving an existing subsystem.
OpenBSD seems okay, because it’s built for the people who write it so that they can use it, as opposed to being built for the kudos. (The fact that other people can use it is a lucky bonus.) When I come back to OpenBSD after a few years, I still recognize it, and my old tools typically work.
Most of my problems with linux are really problems with distros, which is why I didn’t want to go down that path. And those generally mess up the window manager, the packaging system, and (less commonly) the way processes are launched, while leaving almost everything else alone, at least at the level of a user.
At the level of a programmer, linux (both distros and base) change a lot more than this, and I’m unhappy that many/most distros make it all but impossible to e.g. get debuggable core dumps, even of programs you built yourself.
And to get really arcane, the arms race between kernel changes and the capability of the kernel core dump analyzer (crash) is insane – in any other environment, the 2 teams would coordinate.
But meanwhile, gnucash hasn’t changed drastically in the past decade; mutt still reads local email; postfix still handles my spool file; emacs still edits files with substantially the same UI as always; shell scripts are backwards compatible with the ancient /bin/sh etc. etc.
Yeah, MS has a real love for radically changing the interface and breaking the workflow of all its users every couple years. I don’t know if this is somehow helpful to their business model, or if it’s just something they’re big enough to get away with, but it sure is annoying.
My own response is to try to use open source tools as much as possible. In a pinch, I’ll settle for Apple tools, which seem less inclined to randomly change how they work in a way that requires a few weeks to get used to. I actively try to avoid using Word for anything substantial, though I often am forced into using it in collaborations with people who can’t or won’t use anything else.
Features I don’t need are added, features I do aren’t.
Having just received the latest Windows 10 update, very much this. Most of the faffing about (and that’s what they did, on a cursory look) is minor; some of it I’ll use, some of it I won’t. One particular feature is just about driving me up the frelling wall after only three days of it and I honestly think the only reason this was included is “well we have to seem like we’re doing something what with the subscription model we’re charging our customers”.
In some ways the customers drive this.
A colleague told the story of how he delivered a bunch of nice usability updates, and never heard anything about it.
Then he put in a new color scheme — just a new color scheme, nothing else — and everyone raved about how fast the app was now.
I see that all the time. Some task that took 1.5 months of constant, grinding work with major updates for some new feature: “Oh, that’ll be nice”. Something that took someone 2 hours “That’s amazing! This is so great, thank you!”
“People get the software they deserve.”
The corollary to this (coming from people on the non-tech side):
“That’ll just be like a few lines of code, right?”
Effort need not correlate with usefulness. The effort Apple put in 2 years ago to *remove* labels from icons in the “dock” on their iPhones and iPads contributed only to making the devices harder to use. It’s possible they could fix this intentionally-introduced deficiency by changing a single line of code (turn it back on); it’s equally possible they’d have to rewrite the code entirely to work with a radically changed underlying system. Either way, the usability improvement would be the same.
Something that took someone 2 hours “That’s amazing! This is so great, thank you!”
It seems like a major failure of the organization if there are lots of two-hour changes that customers would really appreciate but that aren’t being made.
Well, yeah. I mean, we already knew the labor theory of value was false… didn’t we?
Wait, you expect appreciation from customers? I figure I’m doing good if the customers aren’t screaming and cursing my name. (But then again, I do infrastructure programming; if the customers are thinking of us, something is probably wrong)
It isn’t about expecting appreciation, it’s that what generates positive user feedback will drive to some extent what gets focused on, and users give positive feedback for dumb (and even wrong) reasons.
Did he paint it red?
Then he put in a new color scheme — just a new color scheme, nothing else — and everyone raved about how fast the app was now.
Well I tell you this, my friend: if in the latest update they actually had put in a new colour scheme, I would be raving about it right now 🙂
At the moment, for the Office Theme in Word, I can have “Colorful (don’t get excited, that’s ‘light grey’), Dark Grey, Black, or White”.
If I use White, that’s retina-searing after a couple of hours with no contrast between the background and the onscreen page, so if I’m doing a full day’s work on the ol’ wordprocessing front, I pick Dark Grey.
A couple updates back, they let us have light blue as a choice and that was great, but in one of the updates they gave us the new improved “you can have black, slightly less black, or white” as Colourful! New! Themes! Ain’t You Glad! choices.
If they could manage to put in light blue, light green, etc. as backgrounds to save my poor aging eyes, I would be much, much, much happier than “No, I don’t actually need to be able to link straight to Wikipedia launched from my Word programme, thanks all the same”.
I observed back in college that professors were really bad at using our learning management system, which when I started was Blackboard. They complained that they could never find x or y and that z and a and b were confusing. Their solution? Get a new learning management system, of course! So professors pushed to switch to Canvas, and the exact same complaints ensued. The problem wasn’t with the software—the problem was that they didn’t know how to use the software, like really use it, no metis. But the time they spent with any one system was too short to really learn it, or they’d developed a learned helplessness from knowing it would be too short. The result was that they were all shit at using our LMS, and classes would have constant, and I mean constant, problems with files “missing”, tests not appearing, grades not being entered, turned-in homework being “inaccessible”. Which could of course be exploited by students saying, Why yes I did turn it in, gosh I don’t know what happened, has this ever happened before….
Now that I’m working I see this elsewhere too. There’s a related, or perhaps a more general, problem where folks think the solution to a problem is software, when it’s really (for lack of a better term) process. Consider a hypothetical: my boss wants us to use a task management program. But the program is useless so long as no manager is in the habit of entering and monitoring tasks and so long as no worker is in the habit of checking them off. And those habits don’t depend on the program, anyway—we could do this all with the whiteboard that’s in the room now.
This is our second, by the way. He didn’t like the first. He doesn’t know the latest one any better. He won’t know the third one six months from now. That which has been is that which shall be, and that which is done is that which shall be done, and there is nothing new under the sun.
Slightly OT: could someone define metis? I have a vague idea from context but I’d like more. Google isn’t giving it to me, and I suspect this is a rational-sphere-ism I don’t know.
“Seeing Like A State summarizes the sort of on-the-ground ultra-empirical knowledge that citizens have of city design and peasants of farming as metis, a Greek term meaning “practical wisdom”.“
I think it is a seeing-like-a-state-ism (or at least that popularized the term here), and it is just built up local knowledge, like all the related tricks and information needed to farm in Papua New Guinea, that are hard to derive from first principles.
Edit: I’m slow
The problem wasn’t with the software — the problem was that they didn’t know how to use the software, like really use it, no metis.
While this is definitely a problem (shiny new software installed everywhere but no training in how to actually use it), in defence of ‘people interacting with shiny new systems’, I have to point out that sometimes the designers/coders don’t know the fine details of what the systems will be used for, so they unknowingly set up roadblocks in the way of the end users.
For instance, the housing database I was using that didn’t allow you to enter apostrophes in surnames. This in a country with O’Briens, O’Byrnes, O’Boyles, O’Mahoneys and O’Mahonys, O’Gormans, O’Callaghans, O’Sheas, O’Donnells (distinct from the McDonnells or indeed McDonalds), O’Neills, O’Reillys, O’Sullivans and several more.
Which meant there was no consistent system used for entering names, so everyone had their own way. And since the search function was case sensitive, this meant many happy hours trying variants on “Did the person who processed this application enter the name as O Brien, OBrien, 0Brien, O. Brien, Obrien or some other version?” before you could find the application in question, if you could find it.
(They did fix it in a later iteration, after every town, city and county council in the country yelled at them about it. But you see what I mean? They were used to thinking of apostrophes in the context of programming, and never considered at all “entering surnames onto the database” because it never occurred to them, and it never occurred to the people asking for the shiny new software to mention this, presumably because they assumed ‘ah shure, they’ll know about that anyway without having to be told!’).
A failure to handle apostrophes probably means an SQL injection attack that has been hastily patched.
Haha, that’s a fun case. With databases, apostrophes can get you into real trouble. The least your developers should have done is trim the apostrophes from input automatically instead of forcing users to remove them themselves; that way you at least consistently get name-minus-apostrophes as the result, instead of a bunch of 0Briens (who the hell does that?). But really those queries should have been parameterized, which escapes apostrophes for you and is secure from injection generally. This isn’t 1965—a lot of work has been done so that developers don’t have to think about these things, and their code libraries handle edge cases like these gracefully. Of course, this isn’t the case everywhere; the software could be very old, the developers could be idiots, shared code can still have bugs, and the pretty abstractions you build on top of all the plumbing are just as susceptible to bugs, though at least they’re less insidious. Since I worked in IT, I had access to our learning management system as both a student and a teacher, and I can tell you there weren’t serious, experience-ruining bugs. Actually, only one that I can recall: I’d typed some code in a comment on an assignment I uploaded to my professor, and the code was breaking the page my professor used to download assignments, because the idiot developer never HTML-encoded my input. In retrospect, I should have used that for evil, but I just reported the bug instead.
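In case anyone wants to see what “parameterized” actually looks like, here’s a minimal sketch in Python using the standard-library sqlite3 module (a stand-in only; whatever the housing database or LMS actually ran on, it wasn’t this):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applicants (surname TEXT)")

surname = "O'Brien"  # user-supplied input, apostrophe and all

# Bad: splicing the value into the SQL string chokes on the apostrophe
# (and is the classic injection hole):
#   conn.execute("INSERT INTO applicants VALUES ('" + surname + "')")

# Good: a parameterized query, where the driver handles the quoting and
# the apostrophe is just data:
conn.execute("INSERT INTO applicants (surname) VALUES (?)", (surname,))

print(conn.execute("SELECT surname FROM applicants").fetchall())
# -> [("O'Brien",)]
```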
(who the hell does that?).
Ha ha ha ha ha (that sound you hear is hollow laughter from the memories).
People who tried inputting “O’Brien”, had the machine rear up and spit at them, and decided to try a different character to make sure it wouldn’t explode on them. If you’ve crashed the entire system just typing someone’s name in, and it’s going to take three days to fix (because the developers are all up in Dublin and all changes, requests, etc. have to be referred to them and they take their own sweet time answering), then you’re going to be very wary about anything that looks like it might make the system crash (thank whomever the patron saint of low-level clerical officers is that never happened to me, at least).
As I said, there was no consistent “okay everybody, make sure you do it this way” method, most likely because everybody involved on the data entry level bitched about it amongst themselves but nobody thought of asking “So can we get a consistent rule about this?”
The first time this happened to me, I went “Oh yeah, because apostrophes are used in programming” but then I went “Yeah, but nobody thought about that when designing a system that needs inputting names that have apostrophes in them, in a country that has lots of surnames with apostrophes in them? This does not seem like good design!”
To be fair, I think this was mostly the case. The original system was a pilot version done on a trial basis in limited areas, and when it seemed to work they decided to roll it out nationwide. But being government contract work, the time between “let’s pick a tender to build this”, the version that was delivered, and the version that went nationwide was a long(ish) time. So the original software was old, and then of course once the database started being used by everyone and not just the selected trial site, everyone wanted something different added, taken out, tweaked or solved, and that resulted in a creaky superstructure being tacked on top.
Like all top-down decisions, if they’d asked the people on the ground who were dealing with applications what they needed and how they did the job, then designed around that, it would have saved a lot of trouble because we could have told them “This is the paper form we use, this is the information we need, this is how we enter it, we need to be able to put six different addresses in for people and variant names because our clients change their names and dwellings more often than they change their socks” and so on.
But why ask the little people, when the top brass have had a Brilliant Idea and are full steam ahead on how this will be More Efficient and Less Costly? 🙂
No. No. No!
Yes. Yes. Yes!
Catherine of Alexandria, or Cassian of Imola, perhaps, in case you’re looking for an Icon to hang from your monitor.
Also,
https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/
I still see the first one occasionally in production code. It’s bad, yes, but not nearly as bad as whatever Deiseach was experiencing.
I don’t find it surprising, but then again – I read The Daily WTF.
As far as I’m concerned, it’s one of those things that really deserve a work item whenever the current sprint touches that particular piece of code.
The correct answer to “how do I sanitize my inputs?” is “you don’t”.
@Faza (TCM)
This makes no sense, at least in the context of how modern computers work.
@Aapje:
There’s only so far I’m willing to go in defending a piece of levity, but please elaborate.
The most basic rebuttal is that code acts on data. Without code interacting with data, you have no computer.
Modern computers use a Von Neumann architecture, where data and code are stored in the same memory and are transported over the same bus. So code and data meet in memory and meet on the bus.
At a higher level, most code written in programming languages is treated as data to produce actual CPU-level instructions. So code becomes data becomes code.
To truly have separation between code and data, you need a hardware-programmed computer (like ENIAC, Enigma, etc.), rather than a stored-program computer.
PS. You probably mean that code and data should be delineated more clearly.
It’s simpler than that: I meant that unless you can guarantee you’ll be in complete control of all your inputs, “eval” is a four-letter word.
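To make the quip concrete, a tiny Python illustration (Python purely for convenience): the moment you eval untrusted text, your “data” becomes code; a restricted parser keeps it as data.

```python
import ast

user_input = "__import__('os').system('echo pwned')"

# eval(user_input) would execute this string as code and run a shell command.

# ast.literal_eval only accepts plain literals (numbers, strings, lists,
# dicts, ...), so the same string is rejected instead of executed:
try:
    ast.literal_eval(user_input)
except (ValueError, SyntaxError):
    print("rejected: not a literal")
```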
I work on the other side of this, trying to make sure that this kind of thing doesn’t happen (to military software) and as a counterpoint, users are often terrible. For every time they talk to us and we come away with a clear idea of what they need, there’s another time when we end up more confused and end up having to make a bunch of changes because they didn’t communicate clearly, or one group tells us that what another group told us to do is idiotic. It sounds nice and simple to talk to the users, but different people use the system in different ways, so the system they build on your advice would probably be different than the one they build from the person who sits next to you.
(This doesn’t excuse the apostrophe thing, and it’s probable that the devs in question are idiots, but “talk to the users” isn’t a panacea. They need someone like me to sort it all out.)
There are architectures that mark some regions of memory as non-executable, or that design things so that the only code that can run is in ROM of some kind. This makes attacks harder, but not all *that* much harder. Google for “stack oriented programming.”
@albatross11:
Try as I might, I can’t make the connection between “mark some regions of memory as non-executable” and “stack oriented programming”. To me, stack-oriented is pretty much the epitome of mixing code and data in one bowl (the stack).
May I ask you to unpack it a bit?
Sorry, my brain is getting old. I was thinking of return-oriented programming.
I can see why apostrophes could be troublesome.
But case-sensitive searches, and no easy toggle for “search case-insensitive”?
What were the developers thinking? Or were they thinking about the end-users at all?
More likely a DB migration from a case insensitive configuration to a case sensitive one.
Eh. I do think you need to sanitize, or maybe normalize is a better way to put it, a name field not because of sql injection–which should never be handled that way–but for data cleanliness reasons. If your users are at all likely to try O’brian, O’Brian, Obrian, obrian, etc. for the same person (which is a SME question) then you want them to be considered one and the same in your application.
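Something like this hypothetical helper is the kind of thing I have in mind: keep the surname as typed for display, but match on a normalized key.

```python
import re

def search_key(surname: str) -> str:
    """Normalized form used only for matching: lowercase, letters only."""
    return re.sub(r"[^a-z]", "", surname.lower())

# All of these collapse to the same key, 'obrien':
for name in ["O'Brien", "O Brien", "OBrien", "o'brien", "O. Brien"]:
    print(f"{name!r} -> {search_key(name)!r}")
```

(A real system would also want to think about accents, multi-word surnames, and so on, which is exactly the SME question.)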
I don’t think your example really supports your points. I’m at a university that switched from Blackboard to Canvas. With Blackboard there were a lot of complaints and problems. (It was truly awful. Every task took so many clicks, and was so unintuitive.) With Canvas there are also complaints, and people who are incapable of getting what they want done. But the number of complaints is less, and the general unhappiness with the learning management system is less. (According to random people I talk to and the IT support people.) So the problem is perhaps to some extent process, but it’s also, to a sizeable extent, software.
I didn’t notice a difference in number of complaints (and I did pay some attention to this), but it’s possible my experience is an outlier.
There is a 1 in 7,300 chance that a 30-meter-diameter asteroid will hit the Earth this September. Shouldn’t we have tried to reduce the odds of impact?
Sounds like it would only cause major damage if it hits a city. Given the odds of it hitting Earth are 1/7,300, what are the odds it hits a city?
Edit: Appears to be about 1/219,000. Better odds than I would have guessed.
Conditional on it hitting Earth, I presume?
I also wonder how you came up with that number. Resources I was able to find say about 2.5-3% of landmass is covered by cities, which gives ~1% of total area (with oceans included), which gives a ~1/730,000 chance overall, or 1/100 conditional on hitting the Earth.
Wouldn’t it also cause major damage if it hit the ocean not too far from a city?
Wikipedia seems to indicate that it’d result in an airburst (not hit the ground) with a force of around a megaton. https://en.wikipedia.org/wiki/Impact_event#Airbursts
In the rough direction that 2006 QV89 is coming from, what point on Earth’s surface is at the center of the Earth’s disc from its view at 7am (presumably UTC) on September 9th?
Also, ESA (linked as a source for Wikipedia) says it is 40 meters, while Wikipedia says it is 30 meters.
My first thought on that was “Goddamn, is humanity really that bad at reacting to small probabilities of huge risks?”, but then I did the calculations, and in fact ignoring the asteroid is a surprisingly rational thing to do. The chance of it hitting a city is 1 in 790,000 (per the calculations in my answer to Uribe). It’s hard to guess how many casualties there would be if it did in fact hit a city, but I think it’s safe to say it would be well below 1 million. So making 100% certain the asteroid doesn’t hit the Earth is equivalent to certainly saving one person at most. Space launches come in the tens-of-millions-USD price range, and that doesn’t include a nuclear warhead, probe, R&D, premium for haste and so on. We don’t routinely spend anywhere near that much money to save a single person, so we shouldn’t spend that amount to save proportionally more people from a proportionally smaller risk.
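Spelling the arithmetic out with round numbers (the casualty cap is the guess above; the mission cost is my own rough placeholder, not a quote):

```python
p_city_hit    = 1 / 790_000   # chance of a city hit, from the estimate above
deaths_if_hit = 1_000_000     # generous upper bound on casualties
mission_cost  = 50e6          # placeholder cost of a deflection launch, USD

expected_deaths = p_city_hit * deaths_if_hit      # ~1.3 lives in expectation
cost_per_life   = mission_cost / expected_deaths  # ~$40 million per expected life saved
print(expected_deaths, cost_per_life)
```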
We might get some positive utility from attempting to reduce the odds, figuring out that something we think we can do we cannot, and then fixing the process so that we can do that quickly. Like, a dry run for the next asteroid that has a 1:20 chance.
OTOH, we might get some negative utility from building a process for “get a nuke into space fast.”
That’s another matter; I agree we should try at some point on something, before it becomes critical. I don’t know, though, maybe there are more suitable candidates in the near future than this one.
What do you mean by a process of getting nukes to space fast, though? Regular nuclear warheads are more than capable of surviving a space launch, because they are in fact launched into space, just not into orbit. Loading one or more onto a regular space rocket should be trivial. And in fact the technology to launch a warhead specifically into low Earth orbit has also been developed and even deployed briefly by the USSR, before being specifically banned.
I am speaking from a meta-level here, not relying upon the exact process for delivering a nuke.
They are under some kind of guard and a hardened control process, and if someone (even the President) wants one put onto a rocket capable of leaving Earth orbit, it takes a certain amount of paperwork and safety to make sure we are still keeping careful track of them. Maybe creating the process for being able to get a nuke onto a rocket quickly creates more risk from loose nukes than it reduces risk from stopping impacts.
Maybe we should try to increase them instead.
Depends where it hits as to how much damage it will do (as of recent impacts, Russia and Australia seem to be the targets the Cosmic Gods are playing pitch and toss with). Also depends if you think “I can give you a selection of lotto numbers with a 1/7,300 chance they’re the winning numbers, wanna pay me $100?” is a bargain you would take. Yes, that may be much better odds than randomly picking numbers yourself, but would you really spend $100 on it?
The Chelyabinsk meteor was of a comparable size. Slightly smaller, estimated around 20 m diameter. It was about a 400-500 kT airburst at 100,000 feet, with the heat and gas penetrating to about 85,000 feet. It did occur in or near a populated area, and the shockwave resulted in zero deaths and only minor injuries from broken glass etc. At 30 meters, 1.5 times bigger, you would expect approximately 3-3.5 times higher mass (1.5 cubed) and thus (approximately) yield 1.5 to 2 MT. I could be wrong, but eyeballing it and also running a few numbers through the impact simulator, I expect that even if it was directly above a city center, the airblast would be high enough up (impact simulator suggests no penetration other than airblast/shockwave past about 50,000 feet) that it would be expected to cause few if any fatalities, probably no major damage.
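The cube-law scaling, spelled out (this assumes a similar density and entry speed to Chelyabinsk, which is a real assumption):

```python
d_chelyabinsk = 20.0   # m, approximate Chelyabinsk diameter
d_2006qv89    = 30.0   # m (ESA's 40 m figure would roughly double the result)
chelyabinsk_yield_mt = 0.45  # midpoint of the 400-500 kT estimate, in MT

mass_ratio = (d_2006qv89 / d_chelyabinsk) ** 3          # 1.5^3 = 3.375
estimated_yield_mt = chelyabinsk_yield_mt * mass_ratio  # ~1.5 MT
print(mass_ratio, estimated_yield_mt)
```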
So my opinion is that it is not of sufficient concern even in the worst case to call for an attempted course alteration.
Is there anything I can personally do to increase the odds?
A little downthread, @achenx brought up the release of Commander Keen as a mobile game, and it got me thinking. What are some of your favorite DOS/pre-Windows 95 games either for the experience or just the nostalgia?
For me (mostly in order):
Galactix (I absolutely adore this game…eat your heart out Space Invaders)
Nibbles (Qbasic)
Gorillas (Qbasic)
Dark Forces (am currently replaying this one)
Day of the Tentacle
Sam & Max Hit the Road
Indiana Jones and the Fate of Atlantis
Commander Keen
Duke Nukem
I am definitely forgetting some important ones, but those are the ones that come to mind. Any other favorite classics out there?
Wolfenstein 3D
DOOM
The three games I remember most fondly from this era are Command & Conquer (and Command & Conquer Red Alert), the Secret of Monkey Island and the Indiana Jones and the Last Crusade adventure game (for which Fate of Atlantis was a sequel).
Given that two of them are from the sadly defunct LucasArts (as are four games on acymetric’s list), obviously at one point the studio had something going for it. While I can think of a handful of recent games that have almost the same level of humor I remember from LucasArts at its prime, I can think of none as consistent.
For the Secret of Monkey Island, I remember the Insult Sword Fighting quips most of all; it seems unique in that it built the humor into the gameplay.
For the Indiana Jones and the Last Crusade adventure game, while some of the gags have lodged themselves firmly into my memory (“Hi, I’m selling these fine leather jackets…”), what stuck with me was the subversion of just about every other licensed game I can think of as the game rewarded you for thinking beyond the original story. For example, there’s a scene in the movie where Indiana Jones (in disguise as a German officer) sneaks into a Nazi book-burning rally to recover his father’s diary. On the way out, he runs into Hitler himself, who autographs the diary. In the game, you can play it straight, or if you’re a quick thinker you can hand him a copy of Mein Kampf (which you can then use to bribe your way past any guard) or the easily-missed Travel Authorization Form (which will then get you past EVERY guard).
LucasArts put out a ton of great games (including, obviously, a lot of Star Wars ones). I missed out on a bunch of them because I was young enough that I was reliant on my parents for game procurement.
Man, I was a huge LucasArts fanboy back in the day. I think three separate times I received collections of theirs as Christmas presents. I played almost every game they put out and really liked most of them.
That’s exactly how I ended up with Day of the Tentacle, Indiana Jones, and Sam & Max. There were six disks; I can’t remember for the life of me what the other three were. One was demos, I think.
Ok, looked it up. One was a 3 level demo of Star Wars: Rebel Assault (and I played the crap out of those three levels), one was a “Screen Entertainment Utility” (I assume backgrounds and screensavers or something), and the last one was demos like I thought (although I do not remember playing all those demos, particularly Tie Fighter…maybe my system couldn’t support it).
Vol. 1
Full Throttle was just awesome. Rebel Assault was great, and if I’d had the maturity to play Tie Fighter and X-wing vs. Tie Fighter properly I would have enjoyed those even more.
Under a Killing Moon and The Pandora Directive were my favorites though. Only trouble was having to change CDs all the time.
I loved C&C and C&C: Red Alert, esp. the latter with its gonzo history.
Other than that, turn-based strategy was my genre back then. I had Civilization 2 and Master of Orion 2 by early 1997… might have been birthday and Christmas ’96 respectively. Civilization went on to bigger and better things, but MoO2 is still the peak of that franchise. Oh, and its fantasy sister game Master of Magic was never improved on, AFAIK.
There was also an obscure Space 4X that came out before MoO2, Ascendancy, which was leaps and bounds better aesthetically and as SF (they put a ton of thought into the species and technology, while MoO just copied tech from Star Trek/Wars and mostly used bipedal Earth animals as races). Unfortunately, the AI was skull-crushingly dumb, so it failed as an actual game.
Did Age of Empires require you to boot into DOS out of Windows? That was a great folding of real-time strategy, with its resource harvesting, into historical 4X.
Dungeon Keeper! That also fits your definition, and DK1/2 were a wonderful way to experience a dungeon fantasy setting.
Descent (playing that over Kali back in ’95)
Airborne Ranger
Karateka
The original Civilization is a big one. And Caesar II, and Simcity 2000.
Also yeah all the Apogee (etc) platformers and shooters. Aside from Keen and Duke, I loved Cosmo’s Cosmic Adventure (same designer as Duke), and the early efforts of Pharaoh’s Tomb and Arctic Adventure. Galactix is great, yes!
ZZT. I read about Epic and Tim Sweeney earning billions of dollars from Fortnite or whatever, and I still think of them in terms of ZZT.
Star Control II.
The thing about Galactix. When I was a kid, I could breeze through to the last stage easily, but no matter how many times I tried I could not beat that last big red ship. Fast forward to 5-10 years ago, I decided to find a copy of it so that I could play through it again. The big red ship was unbelievably easy. Mild disappointment, like going to the huge slide at your childhood playground and finding out it was only like 5 feet high.
Also, whenever I played it as a kid I had to restart it like 20 times because it would often start up running at like 10x speed or something. No idea what caused it; sometimes it ran at 10x speed and sometimes at normal speed.
Some really ancient games measured time in CPU clock cycles instead of seconds, and so would run at different speeds depending on the CPU speed. Maybe Galactix also used CPU cycles, but had some buggy method of trying to figure out how fast the CPU was and compensate, which sometimes worked and sometimes didn’t.
That sounds plausible.
I had a computer back in the day where you could press a button on the front to make the CPU run slower. Useful for those old 8086-era games that ran unplayably fast on a 486.
The Kroz games were open-sourced a few years ago, and since the programmers didn’t know how to (or couldn’t?) use accurate timing, they just ran an empty while loop in between cycles. You could specify that you had a “faster” computer to make the loops longer.
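For anyone curious, here's a rough sketch of the difference (in Python rather than the Pascal of the era, and purely illustrative, not the actual Kroz source): a loop-count delay speeds up as the CPU gets faster, while a clock-based delay doesn't.

```python
import time

# Old-school "delay" in the spirit of what's described above: burn a fixed
# number of empty loop iterations between frames. The wall-clock time this
# takes depends entirely on how fast the CPU runs the loop, so the same
# game runs faster on a faster machine.
def delay_by_loop(iterations):
    for _ in range(iterations):
        pass  # busy-wait; duration scales with CPU speed

# The later fix: pace each frame against a real clock, so the frame rate
# stays the same regardless of CPU speed.
def delay_by_clock(frame_start, frame_seconds=1 / 30):
    remaining = frame_seconds - (time.monotonic() - frame_start)
    if remaining > 0:
        time.sleep(remaining)
```

Telling the game you had a "faster" computer presumably just bumped the iteration count, which is why it only ever roughly worked.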
Err…if it works!
Acymetric, the DOSBox emulator has a feature that lets you adjust the emulated CPU speed on the fly. I found it very handy for some older games.
I’m pretty sure getting Galactix to run was one of the first times I learned about the 640k memory barrier.
Also I have a distinct memory in a late elementary school class, having a writing assignment where you could write about anything you wanted, and I described everything about Galactix in extreme detail to meet the length requirement. I should have apologized to my teacher later.
Dune
Dune 2
The Wing Commander series
Warcraft 1 and 2
Full Throttle
The King's Quest series
Star Wars: Rebel Assault
Leisure Suit Larry 2
Myst
Battle Chess
Theme Park
Master of Magic is an absolute classic that is still unmatched despite many efforts to imitate it.
Fantasy General was pretty much just Panzer General, but it had an amazing soundtrack.
Fantasy Empires was… weird, but a lot of fun.
And of course Dungeon Hack was one of the best D&D games
What's up with this? It seems like an indie developer could put out a multi-racial Civ-style game that cuts and pastes MoM's magic system and its zooming in on blocks of troops when your units meet resistance, and that would at least match it, with HD graphics.
There have been several games that had parts of the MoM formula. I can't remember their names, since I tend to play for a few hours and just get disappointed in the bits that are missing. The things from MoM that I'd like to see:
1) A ton of spells from distinct sets, that include a wide range of effects (enchantments over the entire map, buffs for units, buffs for cities, creation/summoning, battle spells)
1a) While you can customize what your character is good at, nobody can get all of the spells
2) Races that have unique playstyles (Draconians all getting flight, or dark elves all generating magic, for example)
3) Cool and customizable heroes (I still love Warrax’s design, even if he looks kinda generic now)
4) Armies clashing at once (a lot of games now have turned to the 1 unit per hex system, which isn’t nearly as fun)
MoM also had a bunch of other features that were cool, but not vital to recreating it. The magic nodes led to natural points to fight over outside of cities. The ruins/dungeons provided a fun early-to-mid-game difficulty that encouraged you to never neglect your army, especially with how good the rewards could be. The two-world system with multiple points of transition between the two added some complexity to the strategic layer. And being able to design custom items for your heroes made them feel a lot more unique.
Have you tried Thea? The inventory system is super cumbersome and the random starts can be frustrating, but the storyline is worth completing once or twice.
Unfortunately, I'm addicted to the broken trait system where you can create mana by creating and destroying items, and therefore get your heroes super well equipped early on (I forget which combo of attributes it is, but basically they made the mistake of using arithmetic rather than geometric discounting), and have a horde of blood hounds roaming the map while doing so.
In any modern game this oversight would be immediately patched out.
I've heard people say Age of Wonders: Shadow Magic is a respectable successor to MOM.
I’ve never played MOM so I can’t verify for sure, but it ticks all of the boxes on your checklist below (edit: er, checklist above).
AoW:SM is currently $1.99 on GOG right now. It won’t cost much in money if you’re willing to dare disappointment one more time.
Thanks… I should still have the CD-ROM for the first Age of Wonders around somewhere. I wonder how much the series improved!
*sigh* I don't currently have anything running any version of Windows. If it won't play in Wine, or DOSBox, or natively on Mac or Linux, I don't get to play it – and this won't; I just checked.
These guys talk about getting it running under Wine, but it doesn’t sound like it was easy:
http://aow.heavengames.com/cgi-bin/forums/display.cgi?action=ct&f=31,3339,,60
I recall I got Duke Nukem when I was like 8 years old, when my father bought me one of those airplane-style joysticks for the PC, and it came bundled with 3 games including that one. I somehow managed to beat the entire 1st episode while playing with the joystick, which was an absolutely atrocious way to control a sidescrolling shooter compared to just the keyboard. It was the very first video game that I ever beat, so it has a spot in my heart.
Indiana Jones and the Fate of Atlantis also has a spot due to it being the 1st point-and-click adventure game I ever beat, and I played it with my best friend at the time in 9th grade, talking and brainstorming with each other to figure out solutions to the various puzzles. Also, the line “is that a broken ship mast in your pocket, or are you just happy to see me?” had us giggling like 9th grade boys and shocked that such a line could make it into a video game.
Lemmings.
Hey, I had Lemmings for the ZX Spectrum. Monochrome, no mouse control (cursor was controlled with the arrow keys), and you had to load every level from tape separately, and re-load if you failed, but it was pretty impressive that they managed to squeeze the game onto that machine at all.
(Also, the Dizzy games. Those were good fun and ate up probably an unreasonably large chunk of my childhood)
The Incredible Machine. Every few years I’ll remember it, spend an hour trying to find a working version, then give up.
To be fair, trying to get an old game working can feel an awful lot like playing The Incredible Machine.
The Incredible Machine Mega Pack (which includes the whole series, I think) is currently available on GOG for $2.50.
You can also play it here. Although I’ve never had GoG fail to work on a modern computer and it’s likely worth the $2.50 to be able to save your game.
The Incredible Toon Machine for me. Basically the same concept, but with more anvils.
Tele-Arena
King's Bounty
In the early FPS genre, DOOM was the Alexander the Great to Wolfenstein's Philip. Heady days, those were. Everyone was happily riding the Apogee Software "first episode is free" business-model bandwagon, and here comes id Software with all these promises that sound trivial today, but were a big deal back then: full 360-degree motion, 3D (well, 2.5D), high FPS on a dinky old VGA card, full sound and music, bullet holes stay on the wall, etc. And then it delivered on every single thing. Everyone thought John Carmack was an Einstein-level genius. (They thought John Romero was a rockstar, too, until Daikatana…)
Star Control 2 was great for the story. The gameplay (fly around, mine, upgrade, repeat) has been largely co-opted by later franchises (Mass Effect, Far Cry, Assassin’s Creed), with the exception of wacky ships with different abilities and playstyles. But the story was a great mash of epic and funny. Plus, the music was basically crowdsourced to a bunch of Finns from the demoscene. And they figured out how to play it out of a plain PC speaker.
Lemmings has a modern-day successor in the Dibbles series, playable on Kongregate. Roughly the same morbid whimsy, and as mindbendingly hard.
NetHack is still my favorite of the roguelikes – hack'n'slash on a randomized map where death is permanent. You find potions, scrolls, and wands, all unidentified, including the Identify scroll. You can try various experiments to figure out what's what, though – you can dip your weapon in a potion, try writing on the floor with a wand, drop a scroll on the floor and see if your pet will walk over it, or, as a last resort, just try it and hope. Monsters leave corpses. You can eat them. Sometimes this can help you. Sometimes not. Eating a dead cockatrice, for example, is not recommended. However, you can wield it as a weapon. (Doing so without gloves is a bad idea.) This is very powerful (unless you're fighting a xorn), but be careful; if you're carrying too much and descend stairs, you might fall, and will likely fall on that corpse. These are a few of the literally hundreds of interactions different objects have with you and each other. And "the devteam thought of everything". And it runs on a VT100 terminal – you don't even need a graphics card.
Play the first Diablo and you’ll notice what it borrowed from the roguelikes, as well as what it threw away.
Zork was among the first mass-market text adventures. Great story, and freaking hard. Later came the Spellcasting series, which was easier, but featured Steve Meretzky's spot-on humor.
Civilization and MOO were both MicroProse games at the time, and MicroProse had a reputation for VGA-era games that were massively complex yet fun. Not just 4X, but similar sims in general. One I haven't seen mentioned here was Darklands, an RPG with serious attention paid to the history and mythology of medieval Germany. The monsters weren't stock D&D stuff. Kobolds, for instance, weren't wimpy dogmen, but rather house spirits. Your heroes didn't have classes, but could specialize in skills, and could pray to dozens of Christian saints for favor. It felt like you were learning about medieval German life as you played. Darklands was hinted to be the first in a series of such games set in various parts of the world, but I guess it didn't sell well enough.
Dragonlance had a flying dragon combat simulator. That was pretty cool. Never got to play it all the way through though.
Ultima had a lot of games, but I only ever played the Underworld series. 3D, but the screen was tiny, and you got this nice feel of claustrophobia and fear of what was waiting out there in the dark. Meanwhile, I remember going through every possible combination of my runestones to discover new magical spells. I played those games to death.
King’s Quest was great. I only really played III and IV. I liked Space Quest even more, and I could go for a sequel today.
Riven was my favorite of the Myst series. (I never played the first.) Great graphics for the time, but mostly I enjoyed being able to solve puzzles by imagining how a device would work if it were made to be used by the inhabitants there on a routine basis. You could logic your way through. The latest game I’ve played in this subgenre is Obduction, just a few years ago. It’s not bad.
Oooh, I’m glad someone finally informed me!
Aw, geez.
I first played it long after Win95 came out, but Nethack is still the closest thing anyone’s written to an old-school D&D experience on PC, and it’s worth playing for that alone.
But that “old-school” includes things like “brutally difficult to the unprepared” and “highly reliant on memorizing the documentation” and “entirely possible to die to a falling rock trap on your first move”, so caveat emptor.
Incidental use-of-language note – I’ve only ever heard the expression ‘die to [x]’, as opposed to the more usual ‘die from [x]’ in the context of computer games, and it still sounds weird. Is it common these days? I guess prepositions are pretty arbitrary, but I’d have thought that ‘die from’ was well-enough established that it would crowd out any new forms even in computer game territory.
Star Wars Rebellion remains one of the best, and most underrated, strategy games of all time. My friends and I still play it, using multiple layers of emulation.
I played a ton of Rebellion (Supremacy here in the UK) and still occasionally do, but that was Windows 95, no?
I mostly played shareware games in the DOS era. I found PTROOPER.EXE (one of the very first PC games) memorable, though I could never take out those damn jets.
At school we had a few computers from pre-DOS platforms. There was the Apple II series, of course, but also the TI-99/4A, which practically nobody remembers today. I was the one kid who didn’t put in a cartridge and wrote little programs in BASIC to amuse myself.
BBA has died of dysentery. What do you want on your tombstone?
A very educational game, teaching the lesson that everyone who tried to go to Oregon ended up dying a horrible death. So don’t go to Oregon!
some of these may precede DOS . . .
Gold Box D & D games
Ultimas
Wasteland
Maniac Mansion
DOOM
Apologies for the unwarranted nerdiness, but the “precede DOS” bit triggered me. 🙂
I knew that the only one that could possibly qualify was the original Ultima for the Apple II. Did it? Turns out that it very much hinges on what we mean by “precede” and “DOS”.
A bit of quick research tells us that Ultima was released in June 1981, but also that CPC (the publisher) registered a copyright for it in September 1980.
What about DOS?
Wiki says that the initial MS-DOS release was in August 1981 (presumably as PC-DOS, the IBM-branded version for the IBM PC), but MS-DOS was itself a re-branding of SCP’s 86-DOS that was released somewhere in mid-1980.
It would therefore seem that Ultima preceded DOS, if by “DOS” we mean the Microsoft/IBM-branded release for the IBM PC, but it may not have preceded DOS, if by “DOS” we mean QDOS/86-DOS prior to MS/IBM involvement (and the PC itself). “May not” because we should also specify whether we’re interested in the release date or the completion date for Ultima (if release date, then no; if completion, maybe).
This concludes our home computing trivia segment for the day.
No Apple ][ disk game preceded “DOS” if you’re being pedantic, because the first Apple disk drives were released with Apple’s Disk Operating System (DOS 3.1, I believe; they started with 3.0 but I don’t think it was released). There were some Apple ][ games which preceded DOS, notably Wozniak’s Little Brick Out. I believe Scott Adams Adventureland may have preceded it also. Of course there were also pre-DOS arcade games — Pong, Spacewar/Galaxy Game, and Space Invaders for instance.
You’re absolutely right, of course.
In the context of the thread, I took “DOS” to mean “DOS on the PC”, meaning MS-DOS/PC-DOS and derivatives.
I may be alone on this, but I always thought Zak McKracken and the Alien Mindbenders was much better than Maniac Mansion (which got a lot more attention).
I’ll check that out, thanks.
I’ll gladly second that!
I loved the hell out of that game. In part because it was the only point and click adventure game of that era which I actually owned myself, rather than playing bits and pieces of at somebody else’s house, but having played others now, I’ll still say that while it may not have been very well polished, it had a grander scope, sillier premise, and better sense of humor than its contemporaries, including Maniac Mansion.
That game sparked my love for weird Weekly World News-style fake tabloids, which still persists to this day, and I'll never look at a pair of Groucho glasses or a microwave the same way again. Plus, for some reason I always got the impression that if Weird Al Yankovic were a game designer rather than a musician, this is the game he would have designed. I don't know why, but the senses of humor always seemed remarkably similar.
The Super Solvers games: Gizmos and Gadgets, Operation Neptune, Ancient Empires, Midnight Rescue, Treasure Mathstorm. Ancient Empires and Operation Neptune are the standouts here – complicated and challenging even before they try to teach you math or history.
Also, Raptor: Call of the Shadows was the best shmup on DOS, while Duke Nukem and Cosmo are tied as my favorite DOS platformers. Really, anything by Apogee back in the day was a pretty good bet.
Also, one game that I didn’t really like as a kid, but revisited as an adult and found amazingly unique: Sid Meier’s Covert Action. So many spy games have been made, but nobody else has made one that’s really about looking for clues rather than just stealthing or shooting your way through a mission that has a clue at the end. It gave you freedom to investigate anywhere and choose how you gathered information, which meant that you had to think about where you wanted to go and where you’d be likely to find clues.
Ooooh man I forgot about some of the educational titles. Treasure MathStorm, various MathBlasters were solid.
My personal favorites: Troggle Trouble Math and Number Crunchers. Treasure MathStorm was right up there.
Sid Meier’s Pirates — Man, this game was fun. Sailing, sword-fighting, sun-sighting, treasure digging,…
Sid Meier’s Civilization — Just one more turn!
Wizardry 6/7 — I loved making uber-characters in these. I still had my saved game from 7 to import when 8 finally came out!
Might & Magic 4/5 (World of Xeen) — Still the best Might & Magic games.
Ultima Underworld — I remember being absolutely amazed by the graphics and my character just being able to walk in any direction
Quest for Glory (Hero’s Quest) — This whole series was great fun, and you could replay each game as the different character types, solving the puzzles in different ways each time.
Heroes of Might & Magic — My brother and I played this game against each other for hours at a time, winning the same areas back and forth.
Albion — One of the more original RPG worlds ever.
Star Control 2 — This was just so much fun exploring the galaxy, meeting the different aliens, and finally beating those Ur-Quan.
Wing Commander series — First person space combat and even good cut scenes.
AD&D Gold Box Games — I always preferred the low level ones, but they were all pretty good. I even liked the Buck Rogers ones.
Railroad Tycoon — Laying track and scheduling trains…
Out Of This World (a.k.a. Another World) — Accidentally transported to an alien world, you make an alien friend and escape danger.
System Shock — So creepy. SHODAN scared the crap out of me.
Jagged Alliance — Really fun combat, really interesting characters.
A whole bunch of Infocom games — I still have my hand-drawn maps!
Gabriel Knight — Man, I loved these games. They were so atmospheric.
Frederik Pohl's Gateway — I loved the book, and the game was good, too (with a completely different plot). Legend also made some other good games, like Eric the Unready.
Populous — The second one was better, too.
Betrayal at Krondor — Amazing game.
Prince of Persia — The original. Like Karateka, but much better.
Wasteland — Precursor to Fallout.
I never actually owned a Microsoft PC from that era — my first was a Win95 machine. The Atari ST had a pretty good stable, though. Some of my favorites on it were Bitmap Brothers releases: GODS, Magic Pockets, Cadaver. Other titles that stick in my mind include Blood Money (Psygnosis), Oids (FTL), and Archipelagos (Astral Software). And Lemmings, which also saw a lot of releases on other platforms.
The original Marathon (1994) just makes it in, which means that Pathways into Darkness also does. I played a lot of Warlords and its sequel on the early Macs, too.
I see I’m not the only one who’s mentioned Betrayal at Krondor. One of the few RPGs that made travelling all over the place actually feel like travelling: the need for food, travelling by night being a dumb idea, etc.
I still fire up Master of Orion from time to time. A game on a small map takes two hours, and it can be very brutal, so if I am in the mood to try to conquer the galaxy, I get to experience a full game in one sitting.
It's my favorite game of the genre, surpassing things like Civilization because Civ adds way too much useless busywork. MoO just has some sliders to build what you want, one planet per system (Master of Orion 2 adds more than one planet per system and a ton of busywork with it), and that's all.
I wish MoO 1 would come to other platforms, untouched. The sequels and the recent reboot just add stuff on top of it for no good reason.
As a small child, I was obsessed with The Ancient Art of War. It was arguably the first RTS; essentially the 5 1⁄4-inch floppy version of the Total War series. You had both a strategic map, where you maneuvered squads over varied terrain and dealt with attrition, and tactical battles, where user-made formations of units (of three types) fought out a linear, mostly automatic engagement (although you did have control over retreats and advances). Still kind of amazing to me that they were able to program something that sophisticated in 1985.
That was a good game. Never got a chance to play the follow-ups (At Sea and In the Skies), but the original was one of my faves back in the day.
Wiki says there’s a new version out, and Moby Games has some additional info, but it’s not available through my usual sources (Steam and GOG), so I’ll be giving it a pass, it seems. Looking through the screenshots on MG, I’m not really sold on the graphical style. I like that they’re trying to keep it simple, but I feel that it just goes to show that a good pixel artist is worth every penny.
Turns out Archive.org has a playable version of the original. It might be time for the kingdom of Ch’u to put Wu back in its place again…
Microprose made a game called The Ancient Art of War in the Skies.
It’s set in the 20th century.
Not sure what to make of this… 😛
Wing Commander
C&C
Duke
Doom
Great Naval Battles of the North Atlantic, 1939-43
Silent Service 2
Panzer General
Ancient Battles
Fields of Glory
Worms
So I’ve seen Wing Commander mentioned repeatedly in this thread, and did some reading.
So it’s a MilSF flight sim where your carrier fights space kitties? And the first games made extensive use of pixel art cut scenes, then switched to digital movie sequences in Wing Commander III. Wow, remember when digitized graphics of actors were a thing? I know I’ve compared games in the standard AAA game template of “walk around a 3D world, fight, and experience in-engine cut scenes” to Hollywood blockbusters here before, but whatever happened to that earlier attempt to make video games movie-like?
They used it all up on Xenosaga.
It can be a thing again!
Somewhat more seriously and kind of related (although not pre-Win95), the cut scene that got me the most hyped was the intro for Mechwarrior 4: Vengeance. Kind of shockingly well done for live action acting in a video game (it was admittedly brief).
Not only did Wing Commander III use live-action cutscenes, it featured Mark Hamill, Malcolm McDowell, John Rhys-Davies, Tom Wilson (Biff from the Back to the Future movies), and… Ginger Lynn Allen.
There were even thumbnail displays during missions of your fellow squadmates, many of whom I recognized as college classmates. (WC3 was made by Origin Systems, based in Austin. I was attending UT at the time.)
Indeed. It would have been relevant to the point for me to mention that.
The Command & Conquer series had the more central examples of live-action cutscenes with non-actors, and when Red Alert 3 came out with a cut-scene cast of Hollywood actors, they defended it in the press as charmingly retro.
There were also games in that era that used in-engine 2D graphics of filmed stuntpeople. Think Mortal Kombat.
Heh, I mean, depending on what you mean by “digitized”, I may have a few upcoming titles like Death Stranding and Cyberpunk 2077 [WARNING: Some Violence, Blood, and Adult Language] to draw your attention to.
My personal “Golden Age” is probably more like 1995 through the early 00s due to games like Baldur’s Gate 1-2, Planescape: Torment, Fallout 1-2, and so on, but there are plenty of earlier games I quite like.
My list won’t be exhaustive because Littleskad has already hit so many of them that if not for the addition of strategy and sim games I never cared for I’d think he was my evil (good?) twin. So you can +1 pretty much everything he listed, but I will go into a bit more detail on a few:
Quest For Glory 1-4: You Got Your RPG in my Point-And-Click Adventure! No, you got your Point-And-Click Adventure in my RPG! This is admittedly sort of an acquired taste, but I thought that the traditional and amusingly mean-spirited Sierra Deaths meshed well with old school murderhoboing, and I actually liked the mix of silly jokes and puns with surprisingly interesting serious characters. Not to mention the basic conceit of a game that played very differently for different classes with unique content gave it a lot of replayability for the time. There are high-res remakes of some of the earlier titles, and even a new spiritual successor by the original creators in the form of Hero-U: Rogue To Redemption on Steam. Pro-Tip: If playing through the originals, import your QFG1 hero into the sequels and either go Paladin (which has its own, increasingly rich, set of story options as you play, and which the creators obviously favored), or multi-class into magic (which was sort of an unintended glitch) and as a fighter-mage or rogue-mage utterly BREAK the games over your knee in all sorts of amusing ways.
Betrayal at Krondor: I'm adding my voice to Acymetric and Dndnrsn here, because it's a massively underappreciated game. In addition to a story that's surprisingly gripping at times, a satisfying combat system, and the feel of travel that has already been mentioned, I loved the way the text was designed to read as if you were reading through one of Raymond E. Feist's novels.
System Shock: The game that gave us the Audio Log, and arguably some of the best versions of it. This is the grandparent whose legacy gave birth to series like Bioshock and Deus Ex. Plus, if you're an SSC poster, you'll probably enjoy one of the great unfriendly AIs, up there with AM, HAL, and Durandal. Speaking of Durandal…
Marathon Trilogy: These came to Mac first, but they're the spiritual parent to the Halo games and already display Bungie's love of certain SF tropes: supersoldiers in Norse-themed power armor, complex multi-species alien empires, deep time, and AIs as both mission control and major characters.
And now, one to add:
Buck Rogers: Matrix Cubed: An SF RPG from SSI using their Gold Box engine (the Gold Box games have been mentioned already, and I'll second them as classics), managing to combine space combat, planetary exploration, and a surprisingly interesting setting for something based on Buck Rogers of all things. TSR's Buck Rogers XXVc was a pretty solid tabletop RPG, and I always wanted to see more done with it.
Buck Rogers: Matrix Cubed was a sequel to the also excellent Buck Rogers: Countdown to Doomsday (if I’m remembering the order correctly).
System Shock was good, but crippled by the fact that nobody had invented mouselook yet. If you’re going to replay it you’ll definitely want the re-released version that adds it.
System Shock 2 was really good, but also a bit too recent for this question.
I'd say Marathon and its sequels have in many ways the better story. Mostly because there's more room for it: Halo, shipping on DVD, told its story through cutscenes and mission dialogue. Marathon originally shipped on floppies, later on CD-ROM, and couldn't have fit that on disk, so it told its story through computer terminals scattered around the levels. They could go on for pages, and they ranged from straightforward to ominous to screamingly funny. You really got to know Leela and Tycho and especially Durandal, more so than anyone in the later games' cast.
It helps that Feist actually worked on the game, himself.
But yeah, even if you ignore the story and writing, Betrayal at Krondor was lightyears ahead of its time, (i.e. much like how lightyears measure distance and not time, what Betrayal at Krondor was doing and what other CRPGs of the time were doing couldn’t really be compared using the same unit of measurement :p ) and doesn’t get nearly the appreciation it deserves.
Frankly, I’m amazed that nobody has mentioned X-Com yet! It may have come near the end of this era, but it was definitely pre-Windows 95!
As much as I love the remakes (the Firaxis X-Com was the game that made me think, "damn, why has nobody made a 4th Edition D&D-based video game yet? It might suck as a tabletop RPG system, but as a turn-based strategy video game it would be amazing"), I still haven't seen a game in the genre that manages to capture what the original X-Com did, from the complexity of both the strategic and tactical layers, and how they complemented each other, to the general atmosphere and feeling of terror you get over what might be lurking in the fog of war. It may have been broken in some ways, but it's still one of the greatest games of all time, warts and all.
I'm looking for an in-depth, thorough, and rigorous defense of the idea of technological unemployment (that it is a credible risk we should be worried about), and most importantly, that it is not really an argument about AGI, superintelligence, artificial conscious beings that have rights, etc. In other words, restricted entirely to advanced technology without resorting to anything outside of the realm of "prosaic" tech.
The reason I’m looking for this is mostly because 1) People like Eric Weinstein and Andrew Yang are convinced that it is or will be a problem very soon, and they are smart people, and 2) Classical economics basically concludes that, for many reasons, tech progress should not result in long term, chronically high unemployment within a free market society. 3) Also, because our discussions surrounding this issue as a rule involve arguments for or against certain policy initiatives, most of which, due to reasons in 2), would seem to be more harmful than helpful in the long term.
If you look at the data, it’s pretty clear that it’s not happening right now. It only appears that way because the effects of the recession took a really long time to recover from and more baby boomers are retiring.
The most steelmanned position I’ve seen is this: Long term technological unemployment is not really a thing. While some people disagree with this, they are mostly practicing incredibly heterodox economics and shouldn’t be taken too seriously.
However, short-term technological unemployment is absolutely a thing and no serious person thinks otherwise. There is strong evidence this has a permanent, negative effect on the workers and communities it affects, one that they do not ever recover from. At best, their children do. At worst, it can lead to generational poverty, because even as society returns to full employment, the community or the descendants of the individual who experienced it still feel ripple effects.
On top of that, unrest is a thing too. Even where the wages of technological progress are obvious, people who are displaced will suffer. They will object to this suffering, even if doing so is in the service of their narrow interests at the expense of society.
This justifies policies that look like (but are not exactly equal to) technological unemployment remedies. Listen to Yang actually talk about the Freedom Dividend. He uses corporate language for a reason: by giving people a literal, dividend-paying share in America, he hopes people will have an interest in the overall performance of America. This is how he sells it to the rich and corporations: it will give people an interest in general economic performance and reduce pressure for (in his opinion destructive) policies like a $15-an-hour minimum wage. It will reduce things like Ludditism.
There are reasons to critique that position but it’s not obviously wrong.
To piggyback on this, long-haul truckers currently number something like 3.5 million in the US. Autonomous self-driving trucks, even if they only run to and from the local "last mile post" hub, are going to really hurt that employment sector.
This actually seems much less likely to me than the inverse. Loading docks are complicated, busy, and require responsive drivers.
It was hard enough to get an experienced driver backed up properly in our too-small, poorly configured dock area (let alone just getting them backed up to the right dock). An automated truck would have been a nightmare.
I think that’s what HBC said. We’ll get automated trucks going from a-mile-away-from-me to a-mile-away-from-you.
Agreed; if anything you'd have an "automated" truck driven by a real driver until it reaches a truck station near the freeway, where the automation takes over. The truckers would commute to the truck station each day, with a bus taking them to the docks, etc.
So basically like UPS drivers except one way
I’d have thought that driving in traffic would be more challenging for an automated truck than dealing with the loading dock.
FWIW, having worked at a major UPS hub, I can say a substantial portion of the complications at our loading docks were very human in nature, including angry shifter drivers intentionally parking in obnoxious ways and then calling for a union rep if anyone but their direct supervisor asked them to move their vehicle.
The question is WHEN do they hurt that employment sector, and the answer is AFTER some untold number of engineering hours have been invested, and after production and retrofitting of self driving trucks is instituted.
Manufacturing as a sector stopped growing decades ago, largely due to automation, but the total number of jobs was roughly as high in 2000 as it was in the late '60s. The large declines prior to 2000 are associated with recessions, with employment picking back up afterwards (at least in total number of employed), and not with the introduction of masses of labor-saving devices.
I think we’re going to see some extreme regulatory capture in this area, such as a requirement that there be a human in the cab at all times that can take over driving if “needed”.
I think everyone agrees that having a driver ready to take over doesn’t really work.
Regulatory capture isn’t about having regulations that make sense.
It depends on what you mean. Typically people discussing long-term technological UE are discussing net UE; some people discuss specific UE (i.e. coal miners losing work and remaining unemployed for long stretches).
Kind of, but also kind of not.
To take HBC's example, no one seriously denies that automated trucks will cause truck drivers to become unemployed, or cause a net increase in unemployment for at least some time period. Even if they had a bunch of equally good jobs waiting for them (a claim I've never heard anyone make), there'd at least be frictional unemployment.
Or perhaps someone does and I’m unaware of them. Do you know of anyone?
Care to elaborate?
Unemployment is not simply no longer working at a job; it is the loss of a job and the inability to find another. In the context of this discussion, the receipt of unemployment benefits would not by itself be a sufficient criterion, as preferring UE benefits to a job that is available is not out of the question. However, that is just a caveat I want in there from the get-go.
For a truck driver to become unemployed due to self-driving trucks he will have to:
1. Lose his job to self-driving trucks
and
2. Be unable to find a new job.
There is functionally no reason, either empirically or theoretically, to believe that under market conditions these two things will be met for any substantial portion of the labor force. Empirically, there have been multiple transportation revolutions that greatly reduced the number of man-hours necessary to transport goods. Trains are a great example (thanks to, I think, John Schilling, who brought this up months ago in one of the open threads discussing driverless cars): a small number of operators could run a train that carried enormous quantities of goods much further and faster than was previously possible. There was no real UE associated with the expansion of train lines because, while expanding train lines cost some jobs, it opened up an enormous number of others. What you might expect to be a transition period, which is the claim of temporary net increases in UE, is unlikely to occur for structural reasons. The basic logic goes as follows:
1. Trains replace horses and carts.
2. Horses and carts do not stop working or being valuable until AFTER trains start running.
3. Trains require large amounts of capital investment which includes labor.
So to complete the circle you have to start out with HIGHER employment during the period in which people are working horse and buggy plus also designing, testing and building locomotives, train cars, signals, tracks etc, etc, etc. There is no particular reason to expect a discontinuity of work here, as every freight load requires up front labor while also opening up opportunities on both ends of the load.
There is always frictional UE, but technology typically reduces rather than increases frictions, and that reduction is applied across the entire economy.
The full quote
The areas where these effects are observed are typically one-factory towns/one-industry cities. Industry* brings with it many competitive benefits: it produces lots of infrastructure, allows for dense living, and opens up many other investment opportunities. Towns that boom from a single employer but fail to diversify do so because of some significant flaws, and these are the places that end up with the worst outcomes. Blaming the expansion of a new industry, or a trade agreement, for these outcomes is like blaming them for the inequitable distribution of mineral wealth, or for the competence of governance, or for luck. The shifts are real for the people experiencing them, but preventing the shift wouldn't reduce the number of people who do experience them.
*Some exceptions would be industries that produce a lot of on site pollution, but even these usually end up with net positive externalities (see stockyards in Chicago).
We’re using two definitions of unemployment then. I mean something closer to the current federal definition. In order for you to prove my statement wrong, you would need to prove that automation will not lead to anyone getting fired and then spending some time not working while searching for a new job. No one, as far as I know, denies that will happen.
To the contrary, it's not a limited phenomenon. Imagine, for example, someone who spends ten years working in a factory. They've invested a lot in factory worker skills. When they go into a new industry they have to (to some extent) start learning new skills and start from the bottom of a career ladder. This depresses total lifetime earnings.
I’ll decline to comment on the rest. Your points are valid and I’m steelmanning someone else’s position.
Those are two separate discussions: "what happens under our current system" vs. "what happens under hypothetical market capitalism". But it was just a caveat I put in there so that I can refer to it later if I want to; none of my other points relied on it.
No, because UE for truckers isn't at zero. If there are 3.5 million truckers with 5% of them generally unemployed at any time, then there are 175,000 unemployed truckers. The economic shift that creates driverless trucks could cause job shifts such that the total number of UE truckers was never more than 175,000 at any one time, which would refute the general claim even if some of those truckers on UE lost their jobs to driverless trucks.
But you have ignored everything else. These shifts cause higher productivity and increased wealth; whatever caused their factory to close was related to the things that made cars better, air conditioning more accessible, working conditions generally better, vaccines available for their kids, etc., etc. If the only effect of technological growth were better sewing machines that made T-shirts 1% cheaper and cost you your job at the factory, then yes, that hurt you on net, but that isn't how it goes.
Can you elaborate? Why is it impossible that eventually the average human will be unemployable in the same way that a chimp or a severely disabled human (e.g. mentally retarded with IQ < 70) is currently unemployable?
I would say that my experience is that low IQ people are unemployable due to behavioral issues and the minimum wage.
Even if we had no statutory minimum wage, there’s a minimum amount a person must make to keep themselves alive. So what’s to keep automation from bringing the value (to employers) of average humans below this point?
Automation makes things cheaper, and the cheaper things get, the less you need to earn to meet that minimum. Humans have been able to live above subsistence level with roughly zero modern technology helping them; it is astonishingly unlikely that they would fall below it with modern tech.
In any workplace, insufficiently competent humans are value-destroying not value-creating. (Surely you have encountered some of these.) If you automate away all the basic non-cognitive jobs it’s entirely possible that most people will be zero or negative marginal product in the remaining workplaces.
@baconbits9
It seems like the decrease in costs is distributed across the whole population, while the decrease in jobs/pay is more isolated to the given industry, meaning that the cost decrease, while a net positive for society, does not compensate the people in that industry for the changes. In other words, everyone's purchasing power goes up because goods are cheaper, but the people in the industry see their purchasing power go down by more than it went up as a result of job loss/decreased pay.
Additionally, doesn’t it depend a bit on what is getting cheaper? I realize in the case of transportation that would appear to be “everything that gets transported”. In a hypothetical, automation makes Luxury Good X cheaper, increasing the purchasing power of the people who were buying it, making it accessible to the people who previously couldn’t afford it, but making the people who used to make it worse off because they still can’t afford it and now they don’t have a job making it.
On the other hand, making affordable clothes even cheaper, or food, would theoretically increase everyone’s purchasing power (we’ll briefly ignore that this likely comes at the expense of 12 year olds in China or whatever).
I don't see any reason that automation couldn't push the value of an average person as an employee below even the reduced cost of upkeep.
It’s true that average humans used to survive with no automation. But a lot fewer of them. And most of them died relatively young.
It does not seem this way at all to me, primarily because technological advancements are happening across the board and impacting all industries. If you isolate one advance like "driverless trucks" then you can create imaginary problems where truck drivers lose 100% of their pay while everyone else sees a 1% increase in theirs, but there is no specialized industry creating self-driving trucks without impacting every other facet of the economy. The advancements that allow us to do more than dream of driverless vehicles will affect every corner of the economy. There will be some unevenness in the distribution, but that distribution will be net positive, and only an idiosyncratic minority will be on the negative end of things.
How could this possibly happen? What is the point of production?
Competence is relative to the level of responsibility. When I worked the lower end of legal US jobs (stuff like dishwashing, night-shift bakery work, etc.), the behavioral issues were what was value-destroying. I worked with several mentally handicapped dishwashers, and one was absolutely value-destroying: the alcoholic one who harassed all the female servers. The others (I remember two) had a positive level of production (i.e. >$0 an hour worth), generally showing up on time, washing dishes, and not breaking stuff.
The non-handicapped people I have known who were value-destroying were all so through behavior: stealing, not working, lying, coming into work high or drunk, or not coming into work at all.
My wife reports competence issues with programmers she hires, and it is value-destroying for her to sign a programmer who cannot (or will not) do, or learn to do, the things they were hired to do. These people are being hired for jobs at $80,000+, not remotely near subsistence wages.
So you see how people can be value destroying (e.g. negative value), and yet are puzzled by the idea that the value of an employee could possibly be lower than the cost of upkeep?
Imagine a world where anything that employs large numbers of people becomes a target for automation, for obvious reasons. No more dishwashing jobs or the movie-theater ticket people and the like. A lot of the rest of the jobs are going to be the sorts of things that don’t easily absorb unskilled labor. If any type of job *does* start to absorb lots of the surplus unskilled labor, then it suddenly becomes worth automating too, and those jobs go away again. Meanwhile, the jobs which are too difficult to automate are also the ones where unskilled labor has negative value.
I don’t find this terribly implausible.
(And of course in this scenario you can imagine that waterline for what counts as “skilled” labor will continue to rise, until we’re all unskilled workers contributing little or nothing to the work of the productive AIs.)
No, I am not puzzled that some people, through a combination of traits/actions/behaviors, could be value-destroying; I am puzzled by the claims that people whose combination of behaviors/work ethic/intelligence is currently value-creating could suddenly become value-destroying (or zero-value).
To be more specific, in my view there are three basic qualities a worker can have: intelligence, industriousness, and good behavior. Being a near zero in any one of them doesn't disqualify you on its own. As two of the categories are at least partially in the control of most people, I don't see the difficulty of hiring 70 IQ people translating into the majority of people being unable to work.
I think I agree with your overall thesis here, but a minor nitpick.
I predict that science will eventually discover that no, you aren’t really “in control” of any of these things. Someone can no better “become a harder worker” than they can “become more intelligent.” Industriousness and/or agreeableness will eventually be discovered to be just as heritable as intelligence.
Machines beat any human in industriousness, and of course will not have bad behavior either. Meanwhile just about any job requiring intelligence is likely to have negative marginal product workers.
Comparative advantage makes these types of statements irrelevant. The fact that someone or some class of people are better than you at anything or even everything doesn’t render you useless.
Again irrelevant, stemming from the misconception that jobs exist outside of people. Jobs are created to utilize human labor, not the other way around.
The fact that someone or some class of people are better than you at anything or even everything doesn’t render you useless.
Yes, absolute disadvantage with comparative advantage means you're not useless, *if* the thing you're absolutely disadvantaged against can't be cheaply reproduced. That assumption breaks down with automation. (Think it through – the marginal product gets driven down to zero for the absolutely-advantaged producer, so the absolutely-disadvantaged producer will have negative marginal product.)
Again irrelevant, stemming from the misconception that jobs exist outside of people. Jobs are created to utilize human labor, not the other way around.
Jobs are created to maximize value produced, not to utilize human labor. It’s a happy circumstance that maximizing value in almost all situations currently requires human brains, but if a superior alternative existed jobs will be organized around that instead.
No it doesn't, as the ability to cheaply reproduce labor drives down the cost of living toward zero. If marginal product were literally driven down to zero by automation, then "workers" would only need to earn zero to be able to afford literally anything. As long as the marginal product of automation is slightly above zero, comparative advantage still exists and everything is still groovy; you are just driving up real wages by shoving down real prices, rather than by pushing up nominal wages faster than nominal prices.
No, the profit involved in selling to people with zero/negative marginal product is zero. If there are positive-profit opportunities elsewhere in the economy then resources will be redirected to those instead.
Meanwhile, cost of living never reaches zero. Irreducibly you still need 2000 calories a day; implicitly you’re always renting a fair bit of farmland and energy. You also need various amenities like shelter and a reasonably temperature-maintained environment.
Taking the far-future limit as hopefully illustrative – it’s really easy to imagine how a world dominated by Hansonian em-cities might be able to find much better uses for solar energy than growing a bunch of beans for you to eat, and thus to outbid you for it.
If we get AGI or something close (maybe not superintelligences, just regular intelligence), a bunch of menial jobs could be automated in a short timeframe. Not sure the service economy can absorb all those humans.
I mean, it cannot absorb all available humans right now.
It may all be science fiction, or not happen in our children’s lifetimes tho.
Well, we have Uber and all those gig economy apps.
But Uber’s endgame is using automated cars to get rid of their drivers, so they have built in the assumption that the whole gig economy is just a transitional state.
Maybe there will be a gig/service economy for humans in non-creative jobs, but if the AIs are cheaper than humans, there may not be after all.
You have only gotten here by starting from the assumption that people are zero or negative marginal product. If you define everyone that way then you get your dystopia, but you don’t get it through standard economic analysis.
At literally zero marginal cost to produce, producers are indifferent between producing something and giving it away, and not producing it at all. If you are not specifically at that point of a post-scarcity world, then comparative advantage still holds, and there is no reason to believe that the population is filled with zero- and negative-marginal-product workers.
Thesis 1. If some worker is positive-value today, they will be positive-value tomorrow.
I think this is right. The positive value might be small, but if you know how to do a favor for someone without breaking a piece of equipment or attempting to rape a coworker, you will always be positive-value.
Thesis 2. Given a positive-value worker, they will earn above minimum wage.
I think this is not true. The solution is obvious.
Thesis 3. Given a positive-value worker, they will earn above their subsistence level.
This could be true, but I don't think it necessarily follows. I think it's true if you have a "correct" level of redistribution. In the normal American-Overton-window of foreseeable market forces, I could imagine that class A (which does not include the worker) captures all the value of increased automation, and it does not show up in reduced costs to the worker paying for their subsistence.
Or maybe it does follow and I’m just not putting the pieces together. I could be convinced here to agree with Thesis 3.
You have only gotten here by starting from the assumption that people are zero or negative marginal product. If you define everyone that way then you get your dystopia, but you don’t get it through standard economic analysis.
I have not started from that assumption. I started from the assumption that things with absolute advantage over many humans could be produced fairly cheaply. Zero marginal product for those humans then follows.
You don’t get it from standard economic analysis because standard economic analysis makes the for-now-reasonable assumption that large amounts of labor supply with absolute advantage cannot be cheaply created.
If you can create large amounts of robots, then the cost of the goods that the robots/humans are producing will end up being set by the robots’ marginal cost of production, which is lower than the humans’ cost of production. Therefore, humans drop out of producing anything. If robots are just better at their jobs than humans, then this can be true even if humans’ wages are zero.
At literally zero marginal cost to produce
Food and other upkeep for humans is never going to be zero marginal cost to produce.
The ability to cheaply replace absolutely all labor might drive the cost of living toward zero, but the ability to cheaply replace most labor drives the cost of living towards an asymptote defined by the non-automatable labor. If, e.g., agriculture is 95% ditch-digging and 5% Ph.D. agronomists keeping one step ahead of the latest blights and pesticide-resistant bugs, then automation can drive the cost of food down by 95% while driving the market wages of ditch-diggers down by 100%.
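Plugging made-up numbers into that example (purely hypothetical, just to make the asymptote visible):

```python
# Hypothetical cost breakdown for food: 95% automatable ditch-digging labor,
# 5% hard-to-automate agronomist labor (numbers invented for illustration).
ditch_digging = 95.0
agronomists = 5.0

cost_before = ditch_digging + agronomists   # 100.0
cost_after = 0.0 + agronomists              # ditch-digging automated to ~free

drop = 100 * (1 - cost_after / cost_before)
print(f"food cost falls by about {drop:.0f}%")    # ~95%
print("ditch-digger market wages fall by 100%")   # their labor is no longer bought
```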
It does not follow, because of comparative advantage. This is literally the textbook insight that comparative advantage demonstrates: if you are better than I am at growing both apples and oranges, it is still best for you to grow one and trade for my production of the other.
As I have said before you ONLY get this outcome (from a logical perspective) if all human wants are being met, which requires all humans being able to afford to pay for those goods and services. You cannot push comparative advantage down to zero without violating the laws of conservation of mass/energy. As long as there is some cost to production then there is potential comparative advantage.
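To spell out the textbook logic being invoked here, a toy Ricardian calculation with made-up numbers (a sketch of the standard argument, not anyone's actual estimates):

```python
# Hourly output for two goods (hypothetical numbers). The robot has an
# absolute advantage in both, but the opportunity costs differ, which is
# where comparative advantage comes from.
robot = {"apples": 10, "oranges": 10}
human = {"apples": 2, "oranges": 1}

def oranges_forgone_per_apple(producer):
    # opportunity cost of one apple, measured in oranges not produced
    return producer["oranges"] / producer["apples"]

print(oranges_forgone_per_apple(robot))  # 1.0
print(oranges_forgone_per_apple(human))  # 0.5

# The human is the lower-opportunity-cost apple grower, so total output is
# maximized if the human specializes in apples and trades for oranges; that
# is the sense in which absolute disadvantage alone doesn't imply uselessness.
```

Whether the human's share of those gains covers their cost of upkeep is the separate question being argued back and forth above.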
@baconbits9
…if transaction costs and such are 0.
So your contention is that the necessary IQ to do work is going up? Do you have any evidence that it is? I’m not aware that’s ever happened. The effect I’ve seen people concerned about is that intelligent people make increasingly more money than the average minimum wage type. But that’s not the same as being unemployable.
Anyway, the simple reason is that long-term technological unemployment has never been observed, and the trends today are not significantly different from the general trend of the last two centuries. The more complex reason is that so long as people are capable of producing some value with their labor, it makes sense for society to utilize that labor. And having worked with relatively low-functioning individuals, I'd say behavioral problems are a much bigger problem than intelligence, especially for low-paying jobs.
Necessary IQ might not be going up, but I think it’s very plausible that necessary education/experience/know-how could get high enough that people just can’t retrain in a reasonable amount of time without income assistance.
Suppose that the difficulty of automating a job is correlated with the complexity of tasks in that job – and thus the difficulty of teaching a human to do it. The lowest-barrier jobs would be lost first (I know there are exceptions to this rule, such as engineering drafters or tax accountants). The jobs created by this economic shift would all be higher-skilled, possibly very high-skilled – the person who just lost his burger-flipping job isn’t in any position to retrain as a technician for the BurgerFlipperX29. Someone else will get the technician job, freeing up a space for someone a level lower, who frees up another space, until the economic gains work all the way back to the former burger flipper. But that might just take too long, especially if automation happens in big fits and starts.
Then the question becomes: in a market full of hungry people with outdated skills/low IQs, all willing to work for not much money, why doesn’t someone find a way to employ all that very cheap labor to do something?
Perhaps this reflects my own personal bias, but I’m actually less concerned for those with low intelligence than I am for those who are introverted and have low social skills.
As the economy becomes more and more “service based” that means fewer jobs where you sit at a machine by yourself and press buttons and nobody bothers you, and more jobs where you have to interact with people. As you say, with prices low enough, all of us would consider hiring people to do something. Clean our house, watch our kids, cook our meals, etc. But those jobs require a bit of human interaction and the ability to sell yourself as a desirable person to have around.
The programmer who’s kind of a jerk, but you keep him around because you need programming, is made completely obsolete by the invention of a low-cost programming bot. But to the extent that you enjoy talking to your cleaning lady, or having a human scan your items at the grocery store, or whatever, those people stay around.
Consider that right now, there is an entire class of e-girls who are capable of making a decent living for themselves talking to lonely men online. Some of them take their clothes off, but not all of them do. Some of them have even gotten quite rich in the process. And a whole lot of the really successful ones don’t have what you might think of as like, supermodel good looks. While a certain minimum baseline of attractiveness is required, success in this field seems much more highly correlated with social skills than with raw appearance.
So like, 20 years ago, if you said something like “In the future, nobody will leave their house. They’ll get their food delivered instead of going to Hooters. Strip clubs will be abandoned.” You might expect that would be a disaster for the employment of the “cute young female with decent social skills but low IQ” demographic. What will they possibly do once all of those jobs disappear? If you answered “They’ll sit at their computer and broadcast themselves talking to and doing silly things and wearing different outfits for a global audience of attention-starved men who will throw money at them for doing so” you would have been laughed out of the room. Nobody really saw that coming.
And yet, here we are…
Here is how I view this issue:
1. Previous eras of automation involved small scale crafts being replaced by humans performing routine tasks with the aid of machines. Very often the labor being made obsolete was equal to if not greater in complexity than the work that replaced it. Other times it involved one form of unskilled labor being replaced by another.
2. Modern automation involves the programming of tasks for computers by technicians, which mostly or fully replaces the most automated/routine aspects of labor.
Having a higher IQ allows the person in question to work in fields where tasks are unsupervised, complex, and extremely difficult to automate. Lower IQ jobs tend to be routine and therefore the easiest to replace with some kind of computer program.
Modern automation disproportionately shrinks the jobs available to low IQ persons and increases demand for high IQ labor to a degree that previous automation did not. [So I believe]
Most defenders of the status quo don’t take this issue seriously because they believe that all job skills are a matter of training, some of them don’t even acknowledge that there exists such a thing as intelligence. New jobs will appear and people will simply retrain themselves to learn them, goods will be cheaper and so everyone will prosper.
It is possible if not likely, especially in a highly regulated economy, that low IQ job opportunities won’t grow as fast as they did after past waves of automation. The combination of stagnant low end wages and programs like disability and unemployment benefits may paper over the unemployment rate at the low end and give the false impression that the economy is reabsorbing the layoffs at a reasonable pace.
The true unemployment aspect of this is perhaps over-emphasized. Functioning markets should be able, given enough time, to price labor such that you don’t get unemployability. But that says nothing about the immiseration that will attend the necessary wage stagnation.
all willing to work for not much money, why doesn’t someone find a way to employ all that very cheap labor to do something
Any single thing that ends up employing a lot of people becomes a target for automation. Cheap labor can survive if it can do something difficult to automate, or if it can find a small enough niche for itself that no one finds it worth automating.
One reason it might resist automation is that many people like having it done by a human.
One reason it might resist automation is that many people like having it done by a human.
True, but that gets at Matt M’s point. There’s virtually nothing that people like having done by just any human. We like having stuff done for us by attractive, personable humans.
I think “attractive, personable” humans oversells it. I go to a Starbucks all the time. Most of the workers there aren’t “attractive” in anything other than the conventional sense that they aren’t burn victims or anything, and their personability is, like, average.
I do agree that it might be hard for people who are particularly unattractive or particularly socially awkward.
@Matt M That reminds me of the silent Uber driver thing; maybe awkward introverts would hire awkward introverts to perform their services because they don’t want a housekeeper who talks to them?
It’s certainly possible.
Although when I say “people with good social skills”, you know, a huge part of having “good social skills” is being able to effectively read your audience.
The socially adept uber driver is able to very quickly and painlessly read his passengers to determine whether they’d like to engage in lively conversation, or whether they’d like to sit quietly. The loudmouth who never shuts up, even with introverted passengers, might seem more sociable, but doesn’t really have any “better” social skills than the driver who never says a word.
Now imagine the people they don’t hire!
In all seriousness though, in the current state of the economy, there are tons of jobs available, the vast majority of which are far more desirable than “Starbucks barista”, and good social skills are desired in almost all of them. There’s no reason to expect that today, Starbucks would attract the people with the best social skills. Those people are working in pharmaceutical sales or something like that. And honestly, I’m not sure there’s any scientific/mathematical task requiring conventional intelligence that would be harder to automate than “convince this doctor to start prescribing your company’s overpriced, unnecessary new drug.”
I think “attractive, personable” humans oversells it. I go to a Starbucks all the time. Most of the workers there aren’t “attractive” in anything other than the conventional sense that they aren’t burn victims or anything, and their personability is, like, average.
Sure, but that job is definitely automatable in the not-too-distant future. Would you pay much of a premium to order your coffee from an average-looking barista as opposed to punching a button on a kiosk? If not, you don’t really prefer the human for that service.
Similarly, if self-driving cars were common, would you pay a premium for a human-driven Uber? Probably not.
@Chalid
Maybe I would, and maybe I wouldn’t (I probably would’ve back when I just had one kid and wanted to knock around Starbucks for a while to give my wife a break; I wouldn’t now). But my point here is not Starbucks in particular, but any place where you actively want some human contact — and the more that the rest of the world is automated, the greater interest there will be in some places that do have human contact. Is that place particularly a coffee shop? I dunno. But it’s something.
My point is, human contact can be nice when the people are roughly median in terms of attractiveness/social skills, they don’t have to be top 20% or 10%. Now, bottom 20% or bottom 10% might be in trouble.
The actual automation going on at Starbucks involves placing an order online. The coffeeshop is still there with humans, and you pick up your order from a human, but this probably significantly cuts back on the need for cashiers. OTOH, lots of people like sitting in the Starbucks, and for that, having some humans working there is important.
On your point #2, I don’t really think there’s a fundamental economic reason why tech progress could never result in chronically high long-term unemployment. Rather, it’s an empirical fact about the world that the vast majority of humans have historically been able to create significant value through their work, and so it’s reasonable to assume that they will continue to be able to do so.
But doesn’t short-term technological unemployment suggest that long-term technological unemployment is also possible and could in fact be underway?
Is there any cognitive reason that makes it difficult to learn a new profession at age 50?
I tried to google it but can’t find convincing studies on IQ and aging: some early studies found that IQ peaked at 20-30, but these were confounded by the Flynn effect; more recent longitudinal studies that follow the same cohort over the years find that IQ decline only becomes significant in one’s 60s, but these might be confounded by self-selection and survivor bias.
If IQ, or at least fluid intelligence, declines quickly, then we can expect short-term and medium-term unemployment but not necessarily long-term: the 50-year-old truck driver may never #LearnToCode, but his children might. If instead fluid intelligence stays nearly constant until retirement age, then it means that his children and further descendants are also going to have a hard time finding employment.
Sure, I agree that it’s possible. I don’t think there’s any real evidence that it’s ongoing right now, but I find it entirely plausible that it will happen within the next few decades. Real AI is the sort of seismic shift in the economy that could upend the historical pattern of humans being able to figure out ways to produce value.
“Short term technological unemployment” is a little misleading. It’s certainly an economic shock that causes unemployment but you could see a similar effect from trade. Some people have a hard time adjusting and theoretically this could have long term ramifications that are hard to recover from but it’s a very different thing than what people usually mean when they talk about “technological unemployment”. Also, I think this is more controversial among economists than Erusian is letting on.
Could you name the economists? I can name a few but they are mostly heterodox. Socialists are particularly fond of the idea. But people who subscribe to more mainstream views, from Keynesians to Austrians, tend not to believe that long term trends lead that way. At least in my experience. Again, happy to read new sources.
I was talking about this claim being controversial:
Isn’t that based on just one study?
I wasn’t talking about the claim of technological unemployment in general. Honestly though, I think it’s very plausible that we get to a point where most jobs are so pointless, soul-sucking, degrading and low paying (imagine getting paid to wipe someone’s ass) that it might as well be technological unemployment. In that situation, everyone would just rather live off welfare than do any of these jobs and it could easily break our system.
More than one. But I agree there are people who disagree with that. That’s why I said ‘strong evidence’ and not something like ‘it’s absolutely certain’. I just meant that a reasonable person might find the studies convincing.
Getting paid to wipe someone’s ass was a real job, actually. And that presumes not working is an option. But more to the point, I think the future is likely to actually be the opposite. The tasks we’re good at automating are precisely the ones that are soul-sucking, degrading, and repetitive. It’s precisely complicated, judgment call requiring jobs that are hard to automate.
Yes, but those are also much harder to train people for. Most people just don’t have the capacity to be high value computer programmers.
@Wrong Species >
Don’t have to imagine it; it was among my duties as an “Attendant for the handicapped” from 1988 to 1992.
@Plumber
really?? do you have a link or a source I could follow for that? I’d be fascinated.
@yodelyak,
I’m pretty sure it’s still among the duties, just like when I did the job, as it’s just not something paraplegics can do on their own, nor (as far as I know) have machines yet been made to do it. If you want to meet someone who still does the task, try asking the staff at a nursing home, or ask for referrals at the Center for Independent Living.
The clients (typically) had to hire their own attendants from funds provided by the State of California In-Home Supportive Services (IHSS) Program, which would be enough for minimum wage.
I noted the irony at the time: how crippling the back pain from lifting people out of and back into wheelchairs felt.
@Plumber et al
They have lifting systems for that.
I recently visited a large* housing and care facility for the severely mentally handicapped, many of whom also have physical disabilities. They have a pretty neat setup in the central day care building, with a cuddle/sensory room, their own kitchen, different living rooms for different groups with permanently assigned staff (with the severely autistic having a room dedicated to their needs, the severely demented having a room, etc).
They have an (expensive) ceiling-mounted lift system in some rooms, where people tend to be most handicapped; as well as a movable lift system for other rooms.
There also is a swimming pool, a nice gym, etc.
If I become mentally handicapped, it seems like a nice place to live.
* Encompassing 100+ buildings
Historically the tasks that we are good at automating have been things that we can brute force. The more nuance, even if it’s repetitive nuance, the harder it is to do so profitably.
Humans are very weird though, or very contextually dependent. The phrase ‘imagine wiping another person’s ass’ made me shudder a bit, but I am a stay at home parent with 3 small kids. I have literally been wiping another person’s ass as part of my job every day for 4.5 of the past 6 years.
Okay, but now imagine getting paid $100K/year to wipe someone else’s ass 20 hours a week. This doesn’t sound nearly so soul crushing. Better pay and better conditions take a lot of the sting out of otherwise-unpleasant jobs. And as baconbits pointed out, every one of us who is a parent has spent a fair bit of time wiping other peoples’ asses (and getting peed on, and cleaning up their puke, and….). We didn’t even get paid a cash wage for doing it!
If you want a vision of the future, imagine a person wiping a human ass – forever.
@Hoopyfreud,
When folks say: “Don’t worry, they’ll be plenty of health care/service jobs in the future”, that’s exactly what I imagine.
No, the post scarcity future where every physical good is automated means wiping a single ass sets you up for life.
Short-term technological unemployment is probably dependent on other forms of frictions, like location and reservation wages. If you are intelligent enough to, I don’t know, fix typewriters, you are smart enough to work at McDonald’s. You might have to take a pay cut, but that doesn’t mean machines made you redundant. You can still add value SOMEWHERE.
Given the dramatic aging of our population, there will likely be additional jobs in health care for generations, particularly if we are so rich that we can simply eliminate every other low-skill job out there.
Imagine you’re a factory owner. Technology increases to the point where you can replace half your workforce with automated assembly lines (imported from Japan), for a 5% cost reduction. Now you enjoy the pleasure of 5% savings, but society at large still has to support the half of the workforce that you fired – unemployment, retraining, welfare etc. Ergo, automation may be beneficial for individual businesses, but not for society (and in the end for businesses as well since they support society with taxes). The feedback is however too long to actually affect business owner behavior.
Another scenario: economists like the concept of Comparative Advantage – no matter how far behind you are technologically, you can still do something of value on the market. But what happens if you’re priced out of using your time by minimum wages? It’s a kind of competition between employers: make a profit by paying $8 per hour, or the government will cut in and replace you with welfare.
If the automated assembly lines cost so much that the total savings would be just 5%, the affected workers can then offer to work for 10% less than before, so you don’t replace them. Or you may be able to demand that all your workers accept a 5% pay cut, threatening to fire those who don’t. Your workers may not accept it and quit instead, but (assuming a free market) only if they have a better job available.
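Rough numbers for that trade-off (the wage bill, other costs, and the size of the cut are all invented), just to show when a pay cut can undercut the automation option:

```python
# Toy comparison for the factory owner: automate half the workforce for a 5%
# total cost saving, or keep everyone at a reduced wage? Numbers are invented.

labor_cost = 1_000_000   # current annual wage bill (hypothetical)
other_costs = 1_000_000  # materials, energy, etc. (hypothetical)
current_total = labor_cost + other_costs

automation_total = current_total * 0.95              # the 5% saving from the example
wage_cut = 0.10                                       # the workers' counter-offer
wage_cut_total = labor_cost * (1 - wage_cut) + other_costs

print(f"status quo:    {current_total:,.0f}")
print(f"automate half: {automation_total:,.0f}")
print(f"10% pay cut:   {wage_cut_total:,.0f}")
```

With a wage bill that is half of total costs, a 10% pay cut delivers exactly the same 5% total saving as the automation, so the owner has no cost reason to switch; whether workers can actually coordinate on that offer is a different question.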
In practice, wages tend to be sticky (in nominal terms), mainly due to worker “protection” laws such as right to strike, collective bargaining requirements, restrictions on firing, or the minimum wage. However, the processes we are talking about are actually gradual. If automation is becoming available in a sector, then workers probably don’t have alternative opportunities that pay better, so a company can get away with not raising salaries which, in a few years’ time, translates into a real wage decrease due to price inflation.
As in this example, automation might change the distribution of income, in this case from the factory workers to either the owners, or to those who make the assembly lines, or the consumers. However, the cost to the workers (or the society that will feed them) is as much or less than the benefit to whoever benefits; your comment makes it sound like the cost to the workers (or society) can be much bigger than the benefits.
Sensible countries don’t raise the minimum wage to levels where it would cause a large amount of unemployment; an excessive minimum wage can be undone through inflation.
I generally disagree; wages are sticky for many reasons, one of which is that labor isn’t homogeneous. If an employer is going to replace half his workforce with robots, he isn’t going to decide who to keep by random lottery, and the workers themselves have a general idea of their relative value. So while some people will be guessing whether they will be laid off, many employees will be fairly sure one way or the other, and that makes an across-the-board pay cut difficult to impossible, as the bottom end would have to absorb several times the average pay cut as a percentage of their salary.
The second issue is that if there is automation available now that will cause your pay to be cut, then in a few years you expect that it would have to be cut further to prevent the next generation from replacing you, etc., etc. Given that choice, workers with the most options will look for other work, and those workers are going to disproportionately be the best workers and the ones that the manager wants to keep around after the switch. A couple of attempts like this and you will have driven off your best employees, effectively ruining all the gains you were going to get from automating.
Death (or rather, deteriorating quality of life) by a thousand cuts. I was present at an organization where employees had to go through an application procedure for their own (or changed/new) jobs at the company. They were not amused when some took the opportunity to apply for a job elsewhere instead.
The word “sensible” is doing a lot of work, there.
This isn’t in-depth or thorough, but the first place I’d look for long term technological unemployment would be a situation where the remaining useful jobs all require abilities that some large percentage of otherwise healthy human beings don’t have and can’t learn.
That leaves jobs where the thing actually being produced is prestige – servants doing things that are more effectively done by other means, so that the person they do them for can display their high status on the human totem pole.
I’m not ‘normal’ enough to understand the demand for prestige markers of this kind. The only reason I don’t prefer to interact with ‘bots, signs, documents, ATMs etc. for all tasks is that they are too often incapable of doing what I want efficiently, or the cost to me of figuring out how to make them do what I want is higher than the cost of finding a human being to deal with the problem. (Well, I might enjoy the low grade social contact of saying “hi” to a doorman more than walking past the sensor to open the door, if I were, unusually, not in a state of human interaction overdose. But that’s not likely to happen as long as I’m employed in a world of open offices etc.)
So video game aficionados of SSC, what did you like at E3?
I thought Nintendo had the most things I was interested in.
Obviously, Breath of the Wild 2 (which better be called “Death of the Wild”). Apparently it’s going to use the same Hyrule, so I’m wondering if we get like a Light World / Twilight World thing going on? My dream is playable Zelda, where Link is trapped in the Twilight World and you switch back and forth between Link and Zelda. Or maybe even co-op…
We finally got to see Astral Chain gameplay and it looks really, really good. Very much looking forward to this.
Also, Fire Emblem: Three Houses gameplay, and confirmation that there’s **spoilers** a time skip. I had been on the fence about this one because I wasn’t sold on the whole “Fire Emblem: Hogwarts” thing, but it turns out that’s basically the prologue, and then you get a real war.
Not Nintendo exclusive, but the Trials of Mana remake looks promising.
Ubisoft had absolutely nothing of interest besides Gods & Monsters, which looks like Ubisoft Breath of the Wild, maybe? And it’s from the same people who did Assassin’s Creed: Odyssey, which was my favorite game of 2018. Speaking of AC: Odyssey, they released a new community quest builder for it, so now fans can make whatever missions and stories and full game expansions or whatever they want. I imagine it will be mostly junk, but I’m sure somebody’s going to recreate the entire main quest line of Skyrim or something, so that could be very cool. If there’s a game award for “Best Post-Launch Support,” Ubisoft deserves it for AC: Odyssey. Every game post-launch should be like that, with the constant QoL improvements, new features, free missions, on-time paid DLC, and all the rest of this. Great job.
Keanu Reeves in Cyberpunk 2077 and Star Wars Jedi: Fallen Order, obviously, but kind of a let-down when MS’s big reveals are third party games. No Halo: Infinite footage. Nothing but multiplayer demos for Gears 5. The content-free “announcement” of a new Xbox.
GhostWire: Tokyo was intriguing. Weird Japanese horror/mystery stuff. But it’s hard to get worked up over a cinematic trailer. It’s too easy to make an amazing cinematic trailer for some boring-as-hell microtransaction mobile game.
What did you like?
I really enjoy the tactical gameplay in Ghost Recon: Wildlands, and for a couple of years it has been my go-to for screwing around when bored. I’m cautiously optimistic about the sequel; it looks like there are a lot of interesting new features. My one big concern is the change in setting. The change from spec ops in Bolivia fighting cartels to spec ops in a fictional archipelago fighting drones takes a lot away from the atmosphere.
I’m afraid it’s going to go from realistic-ish tactical-sim to sci-fi/fantasy.
Yeah, Call of Duty: Black Ops was great when it was about sneaking through the jungles of Vietnam and that kind of thing. And then it turned into future cyber soldiers with robot hands fighting…more robots by Black Ops III. Completely ruined the atmosphere that made the first game unique.
Nintendo in general.
Would probably pick Watch Dogs Legion if it was on Switch, where I do most of my gaming. May grab on steam for cheap later.
Microsoft continues to tempt me with Forza. Would love if the Lego vehicles were buildable, but I doubt that’s the case.
I would love to play Forza again. I had a racing wheel for my 360 and played Forza 4 and Horizons. I’m building a new gaming PC soon and when I do my plan is to invest in a high-quality racing wheel for that so I don’t have to worry about replacing it every console generation and then dive into the back catalog.
And yes, Watch Dogs 3 was interesting with that “play as anyone” bit with the murder grandma. That might be an interesting enough gimmick to make it worthwhile.
No comments on the FF7 reboot (which is incorrectly being marketed as a remake, even though it won’t be)?
I don’t know what to say about it. I got FF7 when it came out 20 years ago, and the Cloud t-shirt I got free with my preorder eventually disintegrated in the wash about 6 years ago. I’ve kind of played it, so…meh?
As Matt noted, it isn’t really a remake. Total reboot, looks to be several times longer (so presumably more depth to each part of the story).
I actually doubt this. I predict they will greatly lengthen the Midgar specific sections, while greatly reducing everything else.
It seems that they want to make this thing a cool-looking, marginally interactive modern action movie. That means the parts of the game where you race motorcycles through the city while fighting the corrupt evil corporation are highly desirable compared to the parts of the game where you wander through the countryside battling random imps for no real purpose other than getting stronger, and stay at a series of small town inns for the express purpose of having flashbacks.
“Saigon, I can’t believe I’m back in a Saigon bed and breakfast.”
“Charlie was close. I could smell his breakfast.”
What’s the point?
I mean, I get what the actual point is: milking the nostalgia cow, but this is precisely how we got The Force Awakens and The Last Jedi.
@vV_Vv
Well, that’s how we got The Force Awakens. The Last Jedi was more like leading the nostalgia cow behind the shed and shooting it. Not that I care about the remake, since I can play the original whenever I want (it’s $12 on Steam).
People who have played FF7r came out very impressed with it.
Also seems everyone is down on Square’s Avengers game.
Banjo in Smash is great news. The Animal Crossing delay and a lack of Metroid content made me sad. Astral Chain looks neat but I was really hoping for a mainline Atlus game – Persona or SMT on Switch would have been very cool. Not too excited about the Zelda content we’re getting, but BOTW was not too fun for me and I’m really not sold on the Link’s Awakening remake artstyle.
Bethesda’s conference was breathtaking in its stupidity, but was salvaged by Arkane and id. A whopping TWO (and a half, for Wolfenstein) games to be excited about would win them E3 from me if it weren’t for the reanimated corpse of Todd Howard grinning madly at me from the stage. And their 3 mobile games.
The lack of CroTeam projects at Devolver was disappointing but not unexpected. Cyberpunk was I think the literal only thing neat in the MS conference. I gave up on EA and Ubisoft years ago (sorry Conrad, but I’m bored to tears every time I see more than a minute of Ubisoft gameplay).
The Final Fantasy remake and Death Stranding are making me seriously consider picking up a cheap PlayStation. But I have no faith subsequent FFVII “episodes” will maintain PS4 compatibility. I’ll wait until I know.
Also Shenmue 3 on Epic Games Store is nominally disappointing, but Shenmue is a meme anyway.
E: award for “most WTF” goes to The Dark Crystal: Age of Resistance Tactics. Like, what?
Including for some reason bringing back the Commander Keen franchise?
The Venn diagram of “people who remember MS-DOS Commander Keen games” and “people who are interested in a F2P mobile game with derivative gameplay” is essentially two separate circles, right? Why bother calling it “Commander Keen” at that point?
Probably not as small as you think, mostly because “people who are interested in a F2P mobile game with derivative gameplay” is a much larger circle than you think. I might give it a look, although I probably won’t actually end up playing it.
The circle might also include people who remember Commander Keen, and have kids who they think ought to play Cmdr Keen-type games, and whose kids are interested in F2P mobile and are too young to tell derivative gameplay when they see it.
That said, I suspect the real reasoning was some variation of “we have this derivative F2P gameplay app, and we have this old IP lying around, and we have an art department that isn’t doing anything at the moment other than drawing paychecks, so let’s have them reskin this app in Keen art and ship it”.
Eh, the only Ubisoft property I like is Assassin’s Creed. And Mario + Rabbids. I’ve never played a Far Cry or Watch Dogs. I was kind of hoping they’d do a reveal of the setting of the next AC game. There’s a writer for Kotaku who correctly leaked the last 5 AC games’ settings and he says it’s Vikings, but it would be neat to get the official reveal.
And agreed, Bethesda was just embarrassing. “Hey, remember that game that last year I said ‘just worked?’ And it turned out to be a completely broken mess that destroyed our already terrible reputation? Totes fixing it now ha ha! Now on to a whole new presentation with sixteen times the lies!”
Also agreed about Dark Crystal. “Who wants this…?”
Watchdogs is just hipster GTA. (Change my mind)
I burst out laughing when they said Watch Dogs 3 takes place in “post-Brexit” London, which has turned into a police state. Right, right, it would be terrible if the delightful place with cameras on every street corner, where you can’t buy a butter knife without a license, where the police come visit you if you say something naughty about foreigners on the internet turned into a police state of all things!
Eh, I’m not even dealing with the political angle. I just mean that when I played the first Watch Dogs, it struck me as “This is what GTA would be like if it took itself super seriously.”
Which is fine. Was an OK game. Didn’t hate it. But I’ll still take the cartoonish super-violence over an angsty protagonist with family drama and musings on the philosophical nature of modern surveillance programs.
There are certain genres in which I think realism and introspection work very well. I’m just not sure “sandbox shooter” is one of them.
Why when you’re right?
Aaaaaaand I’ve lost all interest in that game. There are very few things that I find as irritating as media that is based in an alternate reality that “proves” their preferred policy is correct.
It would be trivial to instead set it in an alternate-reality EU that has turned into a police state, and it would be just as silly.
I think somebody made a comment in an earlier thread about “editorials from the future” that went, paraphrasing, “It’s easy to win an argument when you get to decide all the facts.”
@Conrad Honcho
That was particularly hilarious considering Ubisoft continues to insist that their games aren’t making political statements.
Funnily enough, I’ve lived in Britain all my life, and neither I nor anybody I know has ever had to get a licence before buying a butter knife. Maybe you should try finding better sources for what life’s like in Britain.
I’m pretty sure Honcho was referring to the infamous British knife ban from a few years ago, and exaggerating for humor. No licenses were ever mentioned; just a ban. I doubt it’s gone very far, and indeed, a lot of people were going squinty-eyed and muttering about parodies and Poe’s Law, but apparently it really is or was a thing.
The closest to a “knife ban” mentioned there is an article calling for some types of knives to be banned. You can find articles calling for all sorts of things, most of which end up being ignored; as indeed the call for a knife ban was ignored, by everyone except ignorant Americans.
British knife law is genuinely quite restrictive, compared to other countries.
Only folding, non-locking knives under 3″ are legal EDC.
For everything else, you need a good reason to have it in a public place.
In practise, the only real inconvenience I find is worry that a non-locking knife will close on me.
But it’s the principle of the thing: walking out of your front door with a butter knife in your pocket for no reason is a criminal offence.
Yeah, but “good reason” is interpreted pretty broadly, AFAIK. Also, the police only ever do knife-searches in places that already have high rates of knife crime; at any rate, neither I nor anybody else I know has even been stopped and searched for illegal knife carrying.
I don’t deny that there are some silly consequences of the laws as written (although that particular one strikes me more as an accidental loophole than an attempt to assert the state’s dominance over citizens), but it’s not like Britain is alone in this: pretty much every modern country has a system of laws so enormous and labyrinthine that you have unnoticed absurdities slip in, or that it’s often impossible to be sure that you aren’t committing a crime (is it still the case that the average US citizen commits three felonies a day without knowing it?). I do not think that Britain’s chances of becoming a police state are noticeably different than those of any other western country.
The UK is fairly well known for arresting and convicting people who carry small knives with insufficient excuse. I remember one a few years ago about a guy caught with a boxcutter in his car. What’s he use it for? Opening boxes at work. So why can’t you leave it at work? Guilty, next case.
This one ended in acquittal, but the process was still costly:
https://www.thetimes.co.uk/article/ex-officer-cleared-over-knife-in-bag-rl2p7wqtfv2
Here’s the infamous butter knife case
https://www.telegraph.co.uk/news/uknews/1487762/Butter-knife-an-offensive-weapon.html
Here’s one about a Swiss Army Knife
https://www.telegraph.co.uk/news/uknews/crime/7593039/Disabled-caravanner-given-criminal-record-for-penknife-in-car.html
How about a potato peeler?
https://www.dunfermlinepress.com/news/16197023.man-in-court-for-having-potato-peeler-in-public-place/#mntab2
And America is fairly well-known for being full of fascist cops who gun down unarmed black children with no repercussions. What is well-known isn’t always accurate, and listing a few anecdotes is too vulnerable to the Chinese robber problem to tell you much of use.
If a group says “please give me this special power, I promise I won’t abuse it,” then every single instance of that abused power is relevant.
This isn’t “Chinese robbers” at all. No group besides police in the UK are arresting people for violation of the UK knife laws, of course; the analogy makes no sense.
“Bethesda teased fans by announcing a release date of April 2020 without any further clarification.”
–The Onion
I like Muppets. I like tactics RPGs. And this Dark Crystal thing is, um, wat.
I certainly don’t blame the Henson Company for wanting to monetize their older properties, especially considering how lousy their new ideas are. Dark Crystal may not be as beloved as Fraggle Rock (let alone the main Muppets, who were sold off to Disney and Sesame Workshop a while ago) but it has its fans, and making a game tie-in to the upcoming Netflix prequel makes sense. It’s just, why a grid tactics game? Who decided that was a good fit?
Between this and the Commander Keen thing discussed elsewhere, I wonder if there’s some sort of marketplace (or app) where game developers can get matched up with aging, dead IPs…
I did not realize until just now that Shenmue 3 was a kickstarter project. So fans backed it for a Steam key, it blew up, attracted publishers, they took that Epic money and made it an EGS exclusive. And aren’t giving backers refunds. That is a dick move. Wow.