This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:
1. Remember, the due date for the Adversarial Collaboration Contest is November 1. Can someone from each team please fill in this form letting me know your status?
2. Comment of the week is penpractice on the lack of obesity among US Orthodox Jews.
3. Thanks to everyone who attended SSC meetups over the past few weeks. Upcoming meetups that I’ll be attending are Fairbanks (Sunday ie today), Berkeley (Thursday), and Irvine (Friday). See the list for more information now, but I’ll announce them closer to the date.
Glenlivet is now selling booze in single-serving “capsules” that look like liquid-filled pillows.
How long before the inevitable youtube video of someone throwing these things in with their laundry?
As a fan of both whiskey and Tide Pod memes, I was disappointed to hear that these are only going to be available for one week in London.
Putting aside the meme-value, a glassless shot is a really fun concept and something that I and probably a lot of people would pay for even if just for the novelty. It’s playful and inventive.
A: That’s what Jello is for and B: what sort of heathen does single-malt Scotch Whisky as “shots”?
At least my Highland Park is still pure and uncorrupted. And it better stay that way, or I might round up some old Scots Borderers to revisit the old ways on select parts of the old country.
Borderers are the wrong end of the country for a campaign about an Orkney Whisky though. Indeed, the Borders don’t have distilleries (or didn’t a few years back), so borderers are perhaps the wrong Scots to recruit here.
But they’ve got the right attitude and they’ve learned to drive cars over the past few centuries. They’ll do.
Cars aren’t likely to be that helpful against the Orcadians, though. (Yes, that is the correct demonym.)
There are a few in Dumfries and Galloway, some of which probably count as Borders in the broader sense – though the closest to England, AFAICT, is Annandale, which closed in 1921 and didn’t reopen until 2014.
In the links thread, there’s mention of a battle between ancient China and ancient Greece.
The only sf I know of which sets them against each other is Celestial Matters by Richard Garfinkle. It has one of the coolest premises in the history of sf. Ancient Greek metaphysics are true. So is feng shui. So there’s war between the two great powers. The Greeks are sending an expedition to the sun to get primordial fire. Unfortunately, the feng shui isn’t as well worked out as the metaphysics, but it’s still a good book.
Garfinkle wrote another novel, All of an Instant. It’s quite a wild novel– it turns out that time travel is a fairly easily discovered personal ability, and changing the past means changing the time stream. The book loses interest in what is happening in the time stream because it’s getting changed so much. Instead, it’s about the conflict between different sorts of time travelers.
I’ve never made it through the book. Has anyone finished it? What did you think of it?
Warren Ellis’ six issue run on Moon Knight is probably my favorite comic of the last ten years.
Matt Fraction’s run on Hawkeye is a close second.
I’ll second Planet Hulk, it’s pretty great.
The 2012 Thor series was also very good.
Have you read the classics? 1960s Lee/Ditko Doctor Strange run are some of my favorite comics ever. Hellboy is consistently great. Then there’s all the various Frank Miller and Alan Moore comics from the 80s that should be read (eg, Dark Knight Returns, Born Again, V for Vendetta, Watchmen, etc.)
Interesting confluence of stories regarding China in the news right now:
Following an episode poking fun at China’s practice of banning media they don’t agree with, China bans South Park.
An NBA team’s GM tweeted support for Hong Kong. In response, the NBA tried to come down against the GM, then following backlash, walked back their position and tried to walk a wimpy middle line of “everyone has a right to say what they believe.” China has responded by halting NBA broadcasts.
A popular Hearthstone player from Hong Kong who won a tournament ended his victory interview by expressing support for the people of Hong Kong. In response, Blizzard, the company that owns the game, suspended the player, took back his tournament prize winnings, and fired the two casters who were interviewing him. The response has been so negative that the company switched its subreddit to private.
This seems super culture war to me, so I’ll refrain from commenting until the .25 thread.
But I think it’s really interesting.
This discussion seems like it could easily lead to CW
Well, let’s see. We’re talking about an effort by the government of a large sovereign nation to engineer a global culture shift by means that include, in some regions, the organized use of violence by that government. So, yeah, that might lead to Culture War…
Is Culture War a superset of Cultural Revolution?
I would love to share my thoughts on this, but I don’t want to be banned from China.
Exactly. Don’t be a bull in a China shop; be a sheep.
So are private citizens no longer allowed to even have opinions on geopolitical matters and still be included in public life? I haven’t followed the HK protests closely, and don’t know if they code left or right in the US, but this seems seriously draconian of Blizzard, and I wonder if it doesn’t violate whatever contracts they had.
I suppose every corporate contract these days includes a “if you embarrass us in any way, you get nothing” clause?
There is literally a clause that says that in the Blizzard contract. The relevant passage (bolding mine)
I would like that to be challenged in court and a judge to rule “Nobody thinks his response reflects on you, get serious.”
But it is in the interest of, at the least, the Chinese government to attest that, yes, they do think it reflects on Blizzard, and that they are willing to react punitively if Blizzard doesn’t promptly demonstrate the proper amount of disrespect for the notion that their players have autonomy.
By the terms of the contract/rules (not sure of the exact legal status of this document; IANAL), anything that so much as offends anybody could be an excuse to remove the person and confiscate their winnings.
That’s true, I glossed over the sole discretion clause.
I guess the only way to fight the wanton misuse of the heckler’s veto is to employ it.
edit: And I’d prefer people keep politics out of ostensibly non-political areas of life, but I’m not happy to see this kind of punishment for it, either.
They really just should have had the commentator walk up after he concluded, clear his throat, and pronounce into the microphone “Less of this, please.”
Unfortunately, as you yourself point out, Blizzard failing to crack down on such speech is at least somewhat likely to get them banned from the Chinese market (as in Aftagley’s other examples), allowing them to demonstrate actual damage before the court.
Essentially, the only possibility I see to counteract this is for the blowback in Blizzard’s other markets to be more damaging than any actions that the Chinese government may choose to take. If it’s a choice between “don’t suspend the player and potentially lose China” and “suspend the player and potentially lose not-China”, it’s a very different calculation.
Sadly, I don’t think the “not-China” market cares enough to put sufficient pressure on the company.
We have established the precedent that many US companies will fire employees for off-the-clock political or social opinions that offend the advertisers/management/randos on Twitter. Having established that precedent, it’s pretty damned hard to argue that this applies when the employee supports Prop 8, but not when he supports Hong Kong demonstrators.
If we had maintained the norm that companies never fired people for such things, it would be a hell of a lot easier to refuse pressure from China to fire someone–this just isn’t something we do, we’d probably run afoul of employment law, etc. But since it’s a standard thing most US companies do, there’s basically no way to refuse a Chinese demand to fire someone for off-the-clock opinions offensive to the Chinese government.
Anyone who complains about a pattern of US corporate employees not daring to criticize China for fear of getting fired should, of course, be referred to this explanation of the matter.
Regardless of whether the ship of cancelling people for opinions sailed, is that contract clause even enforceable/legal in the US?
I doubt it would be upheld in court here (not that that stops a lot of companies from weaseling and trying to wear people down – I just had an experience trying to return a product and faced a kafkaesque wall in response).
It may not matter if it is enforceable in the US. My understanding is that the player is based in Hong Kong and the interviewers were from the Taiwanese stream (I don’t know where the actual tournament was held or if that would matter).
Not in California (when off the clock): https://www.shouselaw.com/employment/political-retaliation.html
Blizzard has given back his winnings and shortened his suspension. They claim they were responding not to what he was supporting but to his using the opportunity to take a political position.
I don’t play Hearthstone or know much about it. Are there previous cases of people in a similar situation saying something political but not anti-Chinese–say about global warming, or Afghanistan, or …–and not getting punished?
@DavidFriedman So far I haven’t seen confirmation from Blitzchung that Blizzard returned his prize money, or a direct quote that they intend to or have. The closest I’ve seen is a statement that they “now believe he should receive his prizing”, which, combined with a distinct lack of any statement that he will receive it, seems weird.
Several people above me are saying that this is heavily CW, and I have no interest in violating that rule. However, I would like to conduct a very brief survey sufficiently fuzzed to avoid any CW arguments.
I want to differentiate between the claims “I am living in a bubble, and there are very good reasons for non-Chinese actors to oppose the Hong Kong protests that I have just not heard due to the bubble” and “While this topic resembles a CW topic superficially, there is such overwhelming support for Hong Kong that in practice it is a non-CW topic”.
If anyone here is willing to honestly state “I am a non-Chinese actor who opposes the Hong Kong protests” I’ll consider the matter settled and await the CW-allowed thread to hear the discussion. If not, then this is probably not actually CW and is safe to discuss. No explanations or arguments on either side for now please, I just want to see if opposing sides actually exist rather than wage a potential CW.
This was my assessment when I posted this.
Even if there are Chinese actors active in this thread, that would probably be enough to kick it over to CW.
The more I think about it, the less well this topic fits in a non CW thread. I’m going to stop making top level posts on the .0 threads moving forward until I get better at ensuring they fit the thread rules.
Seconded. Bring this up again in .25, not here. Suffice to say there are understandable reasons for Beijing to push this that can’t be discussed here.
Maybe the topic itself isn’t that controversial, but pretty much all the adjacent topics are CW in such an egregious way that it can’t be safe to discuss it without straying into CW territory.
The object level issue isn’t “Do you support the HK protests?”, which is overwhelmingly popular in the West, although not universally.
The object level issue is “how much deference do companies need to give to the policy preferences and free speech norms of other governments, especially repressive ones?”
And THAT issue is wildly culture war.
I think it’s safer to wait until the next CW thread, anyway. I don’t think anyone wants the distinction between fractional and integral threads to erode. Besides, there’s a good chance Trump will come up in this conversation.
I’m a non-Chinese actor who is sympathetic to the Chinese government’s case re: the current nature of the HK protests. And at least some of the arguments I would ~~marshall~~ ~~treat as soldiers~~ employ would be analogous to heavily CW topics, so I agree with the assessment that it is wise to avoid getting into the weeds on the overall issue. But just discussing what happened and the reaction in the news seems fine to me.
This is an issue that I’ve felt very strangely about, as a westerner. I’m not normally taken to being heavily offended by things, but something about this whole issue sickens me.
As a society we may disagree on a lot of different things, but I felt like the idea that freedom and democracy are good things was above reproach. The idea that companies would stifle speech in support of democracy and self-determination in order to placate a government with millions of ethnic minorities in internment camps triggers a sense of deep offence I didn’t realise I had.
Going any further would probably get into CW topics, but I’ll keep an eye on the .25 thread.
It’s an interesting topic, not unrelated to other issues we’ve seen. Basically, power relations are transitive: if X has power over Y, and Y has power over Z, then X has power over Z. One solution is to bifurcate relations: either someone trades with China, or they have social control in the West, but not both. This is the general solution; I wouldn’t trust a politician who has interests in China. Another way might be a sort of insurance scheme: I, along with some group of other businesses, would make regular payments into a pot; if China cuts one of us off, that business would receive its lost profit from the pot. Another scheme would be to have the Chinese intermediary operate under a charter of some sort, so that they have leeway to make decisions but aren’t allowed to blatantly act in China’s interest.
The more general question is if such a scheme is really necessary. Events like Hong Kong are, whatever else you think of them, relatively rare. Furthermore, it isn’t clear that commentary by western authorities can make any meaningful difference over there. What is clear is that one should avoid engaging in such commentary without having a scheme in place; there could be drastic economic consequences if you don’t.
What’s happening in Hong Kong is a papercut compared to what’s happening with the Uighurs. The difference is that in HK, it’s westernized victims and there’s a lot of media showing it happening, whereas Uighurs are not westernized and Chinese state-run media don’t cover it, so there aren’t lots of iconic photos of protests.
To be fair, it’s harder to get to NW China compared to HK, and population density is much greater in the latter than the former so there’s more to capture and it’s easier to capture.
Also IME Westerners are much more likely to know people from Hong Kong than Uighurs.
I will say that I’m quitting Hearthstone until Blizzard corrects their behavior.
I don’t see how this is too culture war: there’s no race or gender or even SJWs involved, and it isn’t a clear red-vs-blue-tribe issue either. I guess I’ll wait for the next thread to see if it blows up.
“Typical capitalist enterprise putting money above values”
“Yeah, even Trump is using HK as a bargaining chip in trade talks so it’s not surprising”
“Have you seen the percentage of US companies Chinese companies own?”
“Odd, I don’t remember companies showing this kind of deference to governments when they were all boycotting Indiana over its transgender bathrooms bill.”
I started a subthread on this in the new CW-permitting thread. I asked four questions to start the conversation, but you’re all welcome to take it in different directions, including discussing Blizzard or South Park or whatever else.
Anyone here familiar with Bromantane? Almost everything I’ve read about it seems positive. Only negative study I’ve found about it was this, and the doses used on the mice were ridiculously high. From what I’ve researched, it seems alarmingly under the radar for how good it potentially could be for people. Thoughts?
I tried it for anxiety for a couple of weeks. It didn’t seem to work for me.
There is a drug called “Bro-Man-tane” and it isn’t a male enhancement drug?
I don’t know who, but someone made a terrible mistake naming this thing.
I suppose they can try again with Bromandudetol.
It’s actually the name of the molecule, not the product. The name of the product (at least in Russia) is Ladasten, which is still pretty good. We can make up a few more, though; how about Swolerex? Or Maximallus?
Yeah I know, but life only sets up so many T-balls for you.
fixed this one for you
Could there have been life on Venus? Could it have traveled to Earth? Could we find evidence of it?
The answer is a low-probability maybe, but there’s lots of cool science along the way.
In honor of Yom Kippur, I just made my regular-ish donation to the Against Malaria Foundation, Doctors Without Borders, and my hometown food bank. Does anyone want to share what kind of charities they donate to? It’s been a while since we talked about altruism.
Mostly just GiveWell for me (I switched to letting them allocate to their recommendations as they see fit).
Most of my charitable giving goes to the Salvation Army, although I also support a local orphanage and school.
Haitian orphanage / clean water project; local church; local animal shelter; and also a local foundation that funds other various charities (so obliquely those, but not me directly).
The only charity I regularly donate to is the Institute for Justice, a libertarian public interest law firm that litigates cases such as Kelo, and against restrictions on people being hair braiders or making caskets or … .
I just give 100% to Givewell. Their values are close enough to mine, and I trust them to do a much better job on evaluation than I would.
Monthly automatic donation of 10% of my pretax salary to the Against Malaria Foundation, as per the Giving What We Can pledge.
Small microloans through Kiva so that I can feel like I’m helping people with faces.
Intermittent small political donations so that causes I like can say they got support from small donors.
Small regular donations to free media I consume (including an annual contribution to Wikimedia and a monthly Patreon budget) because ethics.
Unfortunately I’ve started to feel like I can’t actually munchkin away the need to feel like a Real Altruist. I might be able to patch it for a while if I can find a way to be smugger about the AMF donations (I’m thinking of making a tracker of estimated lives saved with error bars and displaying it somewhere I have to see it), but probably my only psychologically sustainable solution is to get a more unambiguously prosocial job and/or start (shudders) volunteering. Or increase my utilitarian donations until they really start hurting.
I’ve heard that Wikipedia really, really doesn’t need donations. Am I misinformed? Or do you donate just for using them out of deontological concerns, not out of utilitarian ethics?
Deontological, yeah. (Or Scott’s contractualism-as-timeless-consequentialism thing). I don’t know why they wouldn’t need donations, though. Even if all their hosting costs are covered and their non-volunteer staff is well-paid, I can think of plenty of useful ways they could grow (more news coverage, advanced interactive articles, better automated moderation, better APIs…)
At this point, it’s almost all going to specific friends and family members who need a bit of extra help and aren’t getting it through the existing “social safety net”.
I normally donate to a biosecurity charity identified by the Open Philanthropy Project. Last year, however, no biosecurity charity was listed, so I defaulted to the 2017 list, which was not optimal. X-risk is my top priority, and I work in a hospital setting, so biorisk is a more real threat to me psychologically, and donating to those causes helps me gain more career prestige than an AI charity would. I’m hoping one day to be in a high enough role to help improve biorisk at a regional level, so the career prestige probably tips the balance toward biorisk over AI, though self-interest could be the most important factor under the hood.
I donate intermittently to: local homeless shelters because that compensates the guilt I feel for walking past beggars; a local charity that provides mentors for young boys without father figures because I know some people who benefited; and Catholic Charities because I trust them to spend money effectively.
Like honoredb, I also donate to Wikimedia and free media creators on Patreon, but that’s not really “charity” since I’m definitely getting something from them in return.
I give to the schistosomiasis relief fund through Givewell, though I haven’t done it yet this year…
I mostly give to Malaria Consortium.
The only charities I still donate to are the Alameda County Food bank, the Salvation Army, and the occasional beggar.
In the past I’ve donated money and/or labor to a couple of radio stations and a strike fund for another union, did door-to-door political “precinct walking”, walked a picket line, gave a few dollars to a church, and was named as one of the volunteers in the song “Gilman Street”.
Request: A while ago in one of the links posts I think I remember one of the links being about how student performance was NOT improved by compulsory class attendance. I haven’t been able to find it and my keyword searches don’t get me there.
Anyone know what I’m talking about? Help would be appreciated.
Domesticating moose. Or possibly taming them, since it looks like there’s no effort at controlling their breeding.
This story can be viewed as cute, but as a long term sf fan, I immediately went to the horrifying notion of aliens using such methods on humans. Admittedly, the moose situation isn’t *that* horrifying because moose need to do their own foraging and are too smart to keep coming back if they’re going to be killed. No, this is all to get moose milk and make hideously expensive moose cheese.
Anyway, there are some stories about aliens as a sexual superstimulus for humans.
And I Awoke and Found Me Here on the Cold Hill’s Side by James Tiptree. This is a classic. Humans become a servant/slave class in a galactic civilization because of being too besotted to bargain.
Fifty Shades of Gray by Steven Barnes. I don’t remember the details, but as I recall humans are taken over because of nominally consensual bargains which end up costing the ability to enjoy sex with humans.
Parthen by R.A. Lafferty. Aliens which look like beautiful women cause men to be so fascinated with them (but without the desire to approach them) that they happily let themselves die of neglect. Women will be servants to the aliens.
There’s been at least a little sf on the subject, but I can’t think of any stories where aliens capitalize on parent-child imprinting.
A cuckoo’s egg story (aliens are so delightful that children and pets are abandoned) almost writes itself. If we want it to be serious science fiction horror rather than just plain horror, we need some limits on the proportion of aliens so that the human race isn’t wiped out.
Now that I think about it, how do literal cuckoos fail to wipe out the bird species they prey on?
I guess for the same reason the foxes don’t just eat all the rabbits and then die of starvation. Fewer hosts mean fewer parasitic opportunities, which means fewer cuckoos, which means more hosts. At some point there has to be either a stable equilibrium or a cycle of booms and busts.
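The boom-bust intuition above is the classic Lotka–Volterra predator–prey dynamic. Here is a minimal sketch in Python; all parameter values are illustrative choices, not fitted to real cuckoo/host data.

```python
# Minimal Lotka-Volterra sketch: hosts (prey) vs. cuckoos (parasites).
# All parameters are illustrative; the point is the cycle, not the numbers.
def step(hosts, cuckoos, dt=0.01,
         host_growth=1.0, parasitism=0.1,
         cuckoo_death=1.5, conversion=0.075):
    dh = (host_growth * hosts - parasitism * hosts * cuckoos) * dt
    dc = (conversion * hosts * cuckoos - cuckoo_death * cuckoos) * dt
    return hosts + dh, cuckoos + dc

h, c = 10.0, 5.0
history = [(h, c)]
for _ in range(2000):          # simple Euler integration out to t = 20
    h, c = step(h, c)
    history.append((h, c))

# Populations cycle around the equilibrium (hosts* = 20, cuckoos* = 10):
# fewer hosts -> fewer cuckoos -> hosts recover -> cuckoos recover.
peak = max(x for x, _ in history)
trough = min(x for x, _ in history)
print(round(trough, 1), round(peak, 1))
```

With these parameters the host population neither crashes to zero nor explodes; it oscillates, which is the "cycle of booms and busts" option.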
I am a simple man, and I want a simple thing: one million human skulls. Put them right over there, cleaned and presentable, on shelves in my warehouse. How will you do this for me?
And to spell out what should be obvious: I don’t want to end up in jail or fielding years of lawsuits or living as a social outcast because you did this.
Oh no, I’m not falling for this twice, Khorne.
Skulls for the skull god!
When you say human skulls, it’s important to specify:
Did they have to, at one time, encase the actual brain of a living human, or are you solely satisfied with the aesthetic?
Because human skulls can be quickly mocked up with a high degree of verisimilitude.
Actual human skulls, please.
Social outcast is doing a lot of work here, but I am going to assume that working with the Chinese won’t make you an outcast.
Start a medical research company and contract with the Chinese government for a million skulls for importation.
That’s going to be by far the cheapest and easiest way to get them, but you will have contributed to some of the worst horror on the planet.
1. Obtain a database of people with <X years to live (depending on whatever timeframe you consider acceptable) who've agreed to donate their bodies to science (or powers of attorney of such persons).
2. Make them a better offer.
3. Sell the bodies to science, keep the skulls. Use the cash to pay someone to clean them.
4. Delivery to your warehouse.
There is no cash; see point 2.
Unless you are planning on making up for it in volume.
A warehouse? How big a warehouse?
If we figure you can fit five skulls per horizontal metre, and can stack them ten high, that’s 50 skulls in a shelving unit one metre wide, yielding 20,000 m of shelves. If we figure each unit needs one square metre of space for the shelf and access aisles, that’s 20,000 m^2 of space. So the Larson Skullorium could be a building measuring 100 m by 200 m, roughly the size of four American football fields.
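For anyone who wants to check the sizing, here it is as a few lines of Python. The ~5,350 m² figure for an American football field (end zones included) is my assumption, not from the comment.

```python
# Back-of-envelope check of the Skullorium sizing above.
SKULLS = 1_000_000
PER_METRE = 5        # skulls per horizontal metre of shelf
TIERS = 10           # shelves stacked ten high
per_unit = PER_METRE * TIERS            # skulls per 1 m-wide shelving unit
units = SKULLS // per_unit              # one-metre shelving units needed
floor_area_m2 = units * 1               # 1 m^2 of floor (shelf + aisle) each
print(per_unit, units, floor_area_m2)   # 50 20000 20000

# A 100 m x 200 m building gives exactly that floor area; at ~5,350 m^2
# per football field (my assumption), that's a bit under four fields.
assert 100 * 200 == floor_area_m2
print(round(floor_area_m2 / 5350, 1))   # ~3.7 fields
```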
Like everything else, I’ll just import from China.
Shinshin Heavy Industries always did have a high rate of on-the-job fatalities. And now it can turn that into a secondary income stream. Flip a negative into a positive. Win-win!
If you buy a few Cambodian concentration camps, you get a good beginning. They have a lot of skulls there.
Make museums in extermination camps, and you’ll get to own a lot of skulls. They’d be in a museum, obviously.
Indiana Jones meets Khorne.
Set up logistics and payment for the contents of the catacombs of Rome. Paris may have an offer, too, but Berlusconi country will be more open to financial side-channels.
Try to find a way in which you’re doing someone a favor. That’s always better.
Focus on skulls already removed from the bodies. It’s faster.
Cemetery reform? Find countries / municipalities that have various issues, and strike deals that involve some form of reorganization of their cemeteries. Either for space, or as a side-effect. Offer to keep remains in better conditions, in case next of kin want to re-bury them. Charge a small sum for storage.
I wonder, is there any place in the world where we could find a lot of buried bodies that no one is particularly attached to? It would be easy enough to dig up remains.
If it’s old enough nobody has relatives there, it’s probably history, and people are very attached to historical remains.
Question – are you amenaBLe tOO Disguising, Faking OR concealing THE warehouse as a museum? Because Likely if yOu sO Did, GOsh, we’D get this done easy!
Best Halloween sleep-overs ever!
Say you want to do the world’s biggest simultaneous performance of Hamlet.
Wait for people to sign up as Yorick in their wills.
This one may get the Thinking Outside the Box award.
Indeed, André Tchaikowsky (1935–82) co-starred with David ‘please don’t typecast me as Dr Who’ Tennant back in ’08.
There are already places, usually called ossuaries, with a thousand human skulls collected together. I knew that these existed, but the recent photo challenge “https://commons.wikimedia.org/wiki/Commons:Photo_challenge/2018_-_November_-_Bones” showed me just how common and accessible these are. I don’t know how many of them are for sale, but at least a few of them must be. Researching these and buying ten thousand human skulls in large batches from them sounds possible if you have money.
A million skulls sounds much harder. You might have to become the beloved ruler of a large country or prophet for a large religion for that, to convince people to help your agenda.
Well, Futurama has answers for everything:
What does Planned Parenthood do with the skulls?
New poster. I’ve noticed that Scott has stopped taking on controversial issues.
Is this the result of the cost of discussing these issues increasing? Declining interest in these issues? Something else?
Personally I find the blog has become less interesting as a result of the absence of controversial topics.
What do you mean by controversial issues? (Note that this is the CW-free thread so you might want to not do much more than specify what they are.)
There are some things he’s openly said he is extremely uncomfortable discussing due to potential blowback from online mobs. But looking over the past few months of posts, I see several I would call controversial.
EDIT: Also, this is kind of a rude post, IMHO.
Can you elaborate on why you think it’s rude?
By culture war issues I would say either politics or “Things I will regret writing”
I removed a hastily made, poorly formatted chart
So there’s been a fair number of both those posts over recent months, not sure if the frequency has gone down, but it doesn’t feel to me like he’s avoiding saying anything controversial.
[Given that he posts 1-2 times a week, I think people really underestimate how likely it is for there to be long runs without a particular topic, just from chance. (Protip: if you ever want to discriminate manually-faked lists of coin flip results from actually random ones, the ones with strings of 10 heads/tails in a row are the random ones.)]
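The parenthetical protip is easy to check with a quick simulation. The 200 flips and 1,000 trials below are arbitrary choices on my part; the classic classroom version of this demo looks for runs of six or more.

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes."""
    best = cur = 1
    for a, b in zip(flips, flips[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
N_FLIPS, TRIALS = 200, 1000
runs = [longest_run([random.randrange(2) for _ in range(N_FLIPS)])
        for _ in range(TRIALS)]
avg = sum(runs) / TRIALS
# The expected longest run in n fair flips grows roughly like log2(n),
# so around 7-8 for 200 flips -- far longer than the runs people tend
# to include when faking a "random" list by hand.
print(round(avg, 1))
```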
I think it’s kind of rude to come here and say “Your blog has become less interesting”, even if you preface it with a “Personally”. It just feels like a lazy criticism. But I guess I wouldn’t want to remove the ability to criticize our host.
I don’t want to signal boost anything I shouldn’t, so I won’t.
I do think there has been a change in what gets discussed over the last little while, even on the Discord and r/slatestarcodex. However, ~~personally~~ in my opinion, I don’t think it’s necessarily rude to say “This is fine, but more of that, please.”
That’s fair, I was probably overreacting.
Frankly, I find the pharmacology/neurology/economics posts here to be far more interesting and informative than the politics-heavy hot takes.
Yes, culture war topics, while fun occasionally, are only good when there is something really new to say. Rehashing them over and over gets old pretty quickly.
Agreed that the OP is rude.
I second this. Pharmacology/neurology/economics are basically my favorite subject areas, so I love when Scott (or anyone smart and interesting) talks about them.
The main answer is that the culture-war-ish topics have moved to the fractional open threads, not the visible-on-the-front-page integer ones.
But that’s not Scott taking on controversial issues, which I think is what OP was getting at
I can’t find it, but didn’t Scott write a post comparing his page views on controversial posts and noncontroversial posts and stated he’s trying to aim towards the latter?
I remember that post. He said the controversial ones get the most clicks, which he felt was unfortunate.
Scott surrendered to the SJWs. See “Kolmogorov Complicity And The Parable Of Lightning” and “RIP Culture War Thread”. He has been terrorized out of politics because he used to tell too much truth.
And, yes, the blog is boring now. I don’t even bother reading most new posts. To the extent that I’m still here, it’s to see what the people who are still interesting have to say in the Open Threads. But there are also fewer of those left; a lot of them got banned or neutered under threat of banning.
You know, there’s already a visible memeplex growing that thinks CW is not a good use of one’s time, and you might actually consider it a parasite. Not in a “I give up, I won’t even vote” way, but in a “CW is dessert – a small part of a balanced intellectual diet” way. Eating nothing but controversial topics will make you fat and slow.
I like this memeplex, and I haven’t heard of it before. Thanks for sharing.
You may want to consider who that memeplex benefits before adopting it.
Can you elaborate on your thoughts here?
As Lord Voldemort said, the sovereign is the one who selects the null hypothesis. If you’re only thinking about “controversial” ideas in small doses, you’re implicitly accepting the Overton window the rest of the time. But the Overton window is anything but neutral; indeed, it is easy to see that this cannot be the case because, as Voldemort pointed out, the Overton window of 1907 is completely different from the Overton window of 2007. Why would the Overton window of 2019 be any better?
@jaimeastorga2000; I appreciate the detailed reply.
So are you saying that the memeplex would benefit
a) The current culture?
b) The sovereign?
c) Your outgroup?
Because whether you think it benefits A, B, or C, I would say that none of those answers change the “truthiness” of the idea itself.
I would say, more than anything or anyone else, the memeplex would primarily benefit
d) The person who adopted it.
Good links, thanks.
I’ve commented a few open threads ago on that – I’m starting to think “centrism” is just a cognitive bias. Sure, most cognitive biases make evolutionary sense, and ignoring centrism all the time will probably get you killed before 25, but still – using the Overton window as reference is a crutch. At some point you have to cast it away and trust whatever wisdom you’ve managed to acquire, and be ready to say that your position is X, whether this turns out to be in the middle of the window or way outside.
But I don’t think actually participating in the CW helps you with that. On the contrary, I think it anchors you in the CW and allows it to set the pace. The more you talk, read, and debate, the more you end up accepting the hidden premises of the CW, and the harder it is to come up with a genuinely original thought. Which, almost by definition, has to come from consuming and thinking about topics outside the CW.
Culture war topics tend to be topics where the conflict-theorists have mostly taken over. Thus:
a. People get really mad about disagreements or even hints of disagreement, even when they’re places where the facts or moral reasoning isn’t really conclusive and the disagreeing idea is consistent with reality.
b. Making pretty normal and measured statements often gets pushback from enraged people.
c. A large fraction of the people discussing the matter are working from tribal affiliations instead of actually working from the question of fact at hand.
This means that CW topics tend to be places where you can waste a lot of time and not get any kind of useful discussion or learn anything new. It’s very important to be able to think clearly about some of those topics–sometimes it drives important life decisions. (For example, there are all kinds of CW issues surrounding dating and marriage and children, but figuring out how you want to approach those activities is a pretty big deal.)
Now, the wonderful thing about SSC is that we can sometimes have actual reasoned discussions of CW issues. There aren’t so many places this is possible, and many of them end up being places where you can discuss the less-favored side of a CW issue, but you’ll still get outraged pushback if you take the other side. Which isn’t a great way of thinking things through!
The problem with this is that there are a whole lot of people, here and in the big wide world, who are absolutely outraged by calm, reasoned discussions of CW issues. There is only one way that anyone should think about these issues, and having a calm, reasoned discussion with someone disagreeing with the right way of thinking about these issues is simply giving a platform to evil, and makes you complicit in their evil.
That reaction seems crazy to me, but it’s very common in the big wide world. It’s why people got mad at Sam Harris for interviewing Charles Murray, and why people are screaming about Quillette all the time. And the thing to understand is that the reaction seems crazy from a mistake-theory point of view, but is perfectly reasonable from a conflict-theory point of view. The question isn’t whether Charles Murray is right in a narrow factual sense, it’s which side he’s on and which side his arguments support, and he’s on the *wrong* side so he shouldn’t be given a platform.
ETA: One problem with accepting a little narrowing of the window of acceptable beliefs for discussion is that this tends to encourage further narrowing. Once I can declare that X is out of bounds, I have every incentive to push for declaring that X+epsilon is also out of bounds. And the longer nobody is allowed to hear X, the more X+epsilon will also seem like an extreme belief.
He who defines the range of acceptable opinions has a hell of a lot of control over what final decisions will be made and what issues will be raised. As a mostly non-CW example (at least here): If we decide that any discussion of corruption or brutality among the police must not be given a platform (lest we undermine support for law and order), then there are many issues that basically can’t be discussed. We won’t ever get to a policy of independent investigations of police shootings or mandatory use of bodycams when we aren’t allowed to bring up police misconduct. So if I get to decide that police misconduct is outside the Overton window and mustn’t be discussed (or must be discussed with great care and gentleness), then I can probably prevent those policies from ever coming into existence.
@albatross11 well said
Most things are not part of the memeplex and thus are unrelated to the Overton Window. My opinions about 13c nominalists, telomeres, and Benjamin Franklin’s politics circa 1755 are not informed by the Overton Window, and thus not controlled by it.
IDK, “essentialism” is one of the big boo-words in much of modern academia.
I think my biggest problem with Scott not posting on the culture war is that intellectually, Scott usually gives the fairest and most charitable deep dives into the culture war topics everyone is arguing about. More than once I have been in a CW-type discussion that was about to turn extremely toxic – and I was able to link a Scott post instead of getting inflamed , saying mean stuff, and potentially making things worse. Scott’s posting on the culture war is a pretty effective inoculation against the worst parts of the culture war. So even on the meta-level, even if “thinking about the culture war” is probably a waste of one’s time – having at least one impartial and fair observer writing big effortposts about it was a net good.
I thought I saw somewhere that we weren’t supposed to signal boost the “RIP culture war thread”?
Also, these comments don’t strike me as either Kind, or True. Less of this please
While they may be verboten in this open thread, they are certainly “True”.
Scott has averaged 3.44 “things I will regret writing” posts per year over the history of this blog, standard deviation 1.63. With two so far this year, on track for 2.9 by year’s end, the alleged truth of this claim seems far from certain. And, yes, unkind and unnecessary.
Is it really boosting if it’s done on his own blog? Like, it’s right there in the archives, probably easier to find than this post.
You might be right, I’m genuinely not sure what the policy is.
FWIW, Scott recently moved to the Bay Area and has started his formal medical practice there. Between setting up personal life, professional life, and doing the travelling he’s doing, it’s a surprise he has time to do any posting at all.
Which is, amusingly, a frequent topic of discussion with the friend who introduced me to SSC: “how little do these psychiatrists work if they have time to write all that, and why did nobody tell me when I was considering career choices?”
It’s simple–when listening to people talk about their problems, instead of doodling on his note pad Stick figure man = cuckoo bird, like most psychiatrists, he’s scribbling “Moloch, whose mind is pure machinery! Moloch, whose blood is running money!”
(My knowledge of psychiatry may derive a bit too much from cartoons)
One of the (possible) ways that psychiatrists manage to have a healthier work/life balance in comparison to other types of doctors is that they may be able to chart while talking with the patient. A lot of the medical side of stuff may be answered in a few questions, and then writing can be done while the patient rambles a bit. This likely only applies to low-acuity outpatient cases. But it’s a lot better than most others where charting is effectively done separately.
There are complaints about doctors typing rather than listening, and some ER physicians hiring scribes to try and work around that. My experience is also that outpatient psychiatry is more open to using paper charts which are less obtrusive. Maybe Scott could chime in with “a day in the life of a blogging rationalist psychiatrist”?
+1. I would find this very interesting.
The obvious explanation, as I realized some years back, is that Scott has discovered the secret of the thirty-six hour day. I don’t know why he hasn’t shared it with the rest of us, but I expect he has his reasons.
You are educated stupid.
Can anyone recommend any good sources for quantified evaluation of lifestyle externalities? In other words, anyone putting numbers on the value of things like recycling or saving electricity.
Not totally clear on what you mean by ‘value’, but David MacKay’s “Sustainable Energy Without the Hot Air” does a bunch of lifestyle quantification of this sort. https://www.withouthotair.com
Take a look at what Bjørn Lomborg did. Don’t have specific links, but most of his work is in quantifying which actions have most impact and which don’t. He’s pretty controversial because, well, he actually did that.
Runs that I have nostalgia for, but feel like may not actually be good:
The Death and Return of Superman
JLA: Obsidian Age (but you should probably read Tower of Babel instead)
Green Lantern: Blackest Night
The books where Dick Grayson is Batman with Damian Robin
I liked the Buffy continuation comics, particularly the “Angel and Faith” line.
Some believe that we’ll mine the asteroids of our solar system, and break them all up into little bits that are processed to build space ships, space stations, and other objects. What about doing the same thing to moons? For instance, would it be feasible to pulverize one of Jupiter’s moons, and to turn the pieces into manufactured objects or giant ingots of pure elements, and to move each piece out of Jupiter’s orbit until nothing was left of that moon? Or is Jupiter’s gravity too strong to allow this?
It’s a matter of energy. There’s no physical reason you couldn’t scale up from breaking asteroids and comets into bulk components of O’Neill cylinders to breaking up moons. If a small spaceship could enter Jupiter’s orbit and leave again, so could a bulk hauler. The energy of the propulsion would just have to be many orders of magnitude greater. But even with basically modern technology, you could do minimum-energy transfer orbits from Jupiter to Earth orbit with hydrogen bombs (i.e. an Orion freighter).
That doesn’t sit well with me. An asteroid is like a mistress, fine, if your moon doesn’t have what you need, you can go elsewhere. But another planet’s moon? We gotta draw the line somewhere.
Or, the gravity thing you mention, that’s probably more relevant. But if you pulverized the moon, the fragments should be easier to remove from the gravity well.
(Though, as LMC alludes to, the reduction in energy needed might not make up for the energy needed to do the pulverizing).
I always wondered how asteroid mining made any sense since smelting required lots of oxygen, which is rather rare in you know…space. Someone informed me asteroid ores can be so pure that no smelting is needed.
Moons aren’t the same, based on the moondust recovered and such. It’s the difference between picking up chunks of gold from a river bed and having to process sand into gold. Much more effort!
Smelting requires oxygen? Isn’t it the opposite, that smelting produces oxygen since most ores are oxides?
Unless you mean the big energy requirements but it’s a given that this is going to be either solar or nuclear.
You only have to smelt non-native metals, e.g. oxidized metals. Most asteroids are pure metals – not oxidized.
Correct, I thought that was the point I was making. I imagined asteroids as giant lava rocks, but someone told me they are pure metal like you said.
Our moon, at least, has plenty of oxides, so you’d need actual refining, and I suspect many moons to be like ours.
I’m pointing out the two concepts are totally different and it’s much more efficient to mine an asteroid.
Though it would potentially require transporting more matter, you could just take the raw ore back from the asteroid and process it on Earth. It might be more efficient than taking a lot of processing equipment into space. Actually, I think it probably would be much more efficient.
Some even believe we can do it with planets and stars at some point, so at the very least it’s not a settled issue.
Jupiter’s got an awful lot of moons; which one(s) are you interested in mining?
And gravity wells are a mixed blessing. They’re inconvenient to climb out of, yes, but actually quite handy if you can exploit the Oberth Effect in your maneuvers to or from some more distant location. So, let’s look at the numbers.
For the most energetically convenient near-earth asteroids, the “closest” members of the Aten group, it requires about 5.1 km/s of delta-V to return a payload (say, a chunk of ore or processed metal) to Low Earth Orbit. From the surface of Ceres, out in the main belt and down a small well of its own, 9.8 km/s.
From the surface of Jupiter XIX “Megaclite”, the outermost of Jupiter’s moons, 8.2 km/s. And while little Megaclite is only about 10 km across, that’s probably bigger than all the Aten-group asteroids combined.
On the other hand, to return a payload from Callisto, the outermost of the Galilean moons, you’d need 13.8 km/s of delta-V. Io, the innermost Galilean moon, comes in at 19.2 km/s. Sorry, Sean. But some degree of Jovian-moon mining will probably be plausible.
And in case anyone is wondering, strip-mining the rings of Saturn would require about 17.6 km/s. I think we’ll be able to get a conservation easement in place without too much fuss.
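The gap between the Galilean moons comes mostly from Jupiter’s own gravity well. A crude vis-viva sketch makes the point; this ignores each moon’s own gravity, Oberth maneuvers, and the transfer back to Earth, so the figures come out much smaller than the full delta-V budgets above, but the ordering (Io far costlier than Callisto) is the same. The gravitational parameter and orbital radii are standard published values, rounded:

```python
import math

GM_JUPITER = 1.26687e17   # m^3 / s^2, Jupiter's gravitational parameter

# Orbital radii of two Galilean moons in meters (rounded published values)
RADII = {"Io": 4.217e8, "Callisto": 1.883e9}

escape_burn = {}
for moon, r in RADII.items():
    v_circ = math.sqrt(GM_JUPITER / r)               # circular orbit speed around Jupiter
    escape_burn[moon] = (math.sqrt(2) - 1) * v_circ  # burn from circular orbit to Jupiter escape
    print(f"{moon}: {escape_burn[moon] / 1000:.1f} km/s just to escape Jupiter")
```

This gives roughly 7.2 km/s for Io versus 3.4 km/s for Callisto, before even counting the moon’s own well or the trip home.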
This just sparked a thought, so as a spin-off: is this the most likely explanation for the Fermi Paradox, i.e. that it takes so long to use up a solar system materially that the expansion rate is vastly slower than previous estimates?
“Even at the slow pace of currently envisioned interstellar travel, the Milky Way galaxy could be completely traversed in a few million years”
I don’t know where you can read the original paper, so I don’t know how this component of the paradox is treated, but since the word “traversed” is used in the Wikipedia summary it’s possible that the “few million years” estimate refers purely to travel time, and we need to add on thousands of years of use of a system before there is significant population and resource pressure to prompt colonization of the next nearby system. That could take several orders of magnitude more time; then just add on politics and war and you could get to a billion years to colonize a whole galaxy completely.
This leaves the question of why they haven’t at least sent probes to us, but if non-intelligent life is enormously plentiful, then out of billions of potentially habitable candidates, the last probe cycle could have missed the small window of time between the birth of human civilization and modern times. Especially if we are on the other side of the galaxy. If you only have scattered probes and not a vast wave of expanding civilization, that seems plausible on its face.
Unlikely. Civilizations which don’t grow at all, by definition don’t settle the galaxy. Civilizations which do grow, well, at a historic 2% growth rate it would take the human race all of 2,130 years to convert the entire mass of the Solar System into writhing, squirming man-flesh. Yes, including the Sun, so we’re imagining mass transmutation of elements here. 21,300 years at a 0.2% growth rate, and add on an additional 10 kY if you envision starting from a single spaceship carrying an astronautical Adam and Eve to a new world. Subtract about 2 kY if you want to leave the stars themselves intact and/or can’t do the mass transmutation of elements thing.
If we assume expansionist civilizations never exceed 0.2% growth, and wait until they have fully consumed one star system before colonizing the next, and their colony ships are limited to 0.1% of lightspeed and 10 light-years range while carrying only two people, then you still get the Milky Way(*) converted to nothing but flesh in 413.6 million years. But the neighbors will notice the stars going out long before then.
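The wavefront arithmetic behind that figure can be sketched in a few lines. The constants here are the ones stated in the comment (10-light-year hops at 0.1% of lightspeed, roughly 32,500 years to fill a system before the next launch, a ~100,000-light-year galactic span); with them the total lands within a few percent of the 413.6-million-year figure, with the gap down to geometry and fill-time details:

```python
# Back-of-envelope colonization wavefront (a sketch, constants as assumed above)
HOP_LY = 10                # light-years per colony-ship hop
SHIP_SPEED_C = 0.001       # ship speed as a fraction of lightspeed
FILL_YEARS = 32_500        # ~growth time from 2 people at 0.2%/yr
GALAXY_SPAN_LY = 100_000   # rough diameter of the Milky Way

travel_per_hop = HOP_LY / SHIP_SPEED_C        # 10,000 years in transit per hop
years_per_hop = travel_per_hop + FILL_YEARS   # ~42,500 years per hop cycle
hops = GALAXY_SPAN_LY / HOP_LY                # 10,000 hops to cross the galaxy
total_years = hops * years_per_hop
print(f"{total_years / 1e6:.0f} million years")   # ~425 million years
```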
More likely, expansionist civilizations would undergo boom-bust cycles, perhaps separately in every star system. But if they can send out colony ships during Peak Boom, the impending Bust will give them extra incentive to do so and they probably won’t wait 30,000 years each time.
* False advertising; the end state of the “Milky Way” in this hypothetical will be only 0.0025% milk. But that’s still close to four million solar masses of milk.
Hang on. This doesn’t seem prima facie plausible. For a start, the human race has been around for considerably longer than 2,130 years and I don’t see much in the way of writhing, squirming man-flesh here on Earth, unless I visit specialist websites.
I think a “show your work” is called for here.
Most of the existence of the human race we haven’t grown at anything like 2%.
And at the other end of the graph, the laws of physics will squash our growth once more. The exponential seeming bit is the bit in the middle.
ln(1.991E30 / (7.7E9 × 62)) / 0.02 ≈ 2,133 years
ln(1.991E30 / (2 × 62)) / 0.002 ≈ 32,473 years
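A quick Python sketch reproduces the arithmetic (assuming a solar-system mass of 1.991e30 kg, 62 kg per person, and today’s population of 7.7 billion; with these exact constants the fast case comes out a little above the ~2,130 quoted, at roughly 2,144 years):

```python
import math

def years_to_consume(mass_kg, seed_people, growth_rate, kg_per_person=62):
    """Years for exponential population growth at `growth_rate` to convert
    `mass_kg` of matter into people, starting from `seed_people`."""
    return math.log(mass_kg / (seed_people * kg_per_person)) / growth_rate

SOLAR_SYSTEM_MASS = 1.991e30  # kg, dominated by the Sun

# 2%/year growth from today's ~7.7 billion people
t_fast = years_to_consume(SOLAR_SYSTEM_MASS, 7.7e9, 0.02)

# 0.2%/year growth from a two-person "Adam and Eve" colony ship
t_slow = years_to_consume(SOLAR_SYSTEM_MASS, 2, 0.002)

print(f"{t_fast:,.0f} years at 2% from 7.7B people")   # ~2,144 years
print(f"{t_slow:,.0f} years at 0.2% from 2 people")    # ~32,473 years
```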
The actual human population has, over the past 6,000 or so years, undergone numerous boom-bust cycles. That’s a different expansion-limiting mechanism than “it will take so long to use up all the local resources that nobody will bother moving outwards”, which we have seen people historically do long before they used up the local resources.
Which @John Schilling certainly agrees with. He is pointing out that long-term expansion at the rate that would be visible to us is a very small number.
Why isn’t this enough time?
Across those smaller booms and bust we’ve gone from slow population growth to high population growth and now we’re headed back towards slow growth in the Western world. Humans don’t want to tile the universe it turns out. We’re not actually maximizers. We only appear to be in the environment we evolved in. When you stick us in weird environments we do things like not breed above replacement rate, and we also start passing laws like the “one child policy”.
Where does the 2% growth rate come from? Is this the historical trend for GDP growth or what? (I remember Scott’s Picketty article cited something like 1 to 1.5%). Percentage growth in raw production can’t compound forever because the laws of physics start getting in the way, forcing your exponential to be a logistic curve, so I wouldn’t be surprised if average GDP growth 1000 years from now was vastly lower than even 0.3%.
It’s likely impossible to do things like “consume the sun” anyway since no physically possible materials can survive a trip to the surface.
EDIT: You were literally talking about population expansion. D’oh.
2% is the approximate historic growth rate in human populations that aren’t undergoing collapse or pushed up against hard resource limits. It has lately been replaced by populations whose fertile women are persuaded not to reproduce quite so much by satisfying them with a 2%/year growth rate in consumer goods. Either way leads to approximately the same end result.
You are welcome to believe that the human race will decide to adopt a wholly zero or negligible-growth policy forever onward, and you might even be right. But that’s just another way of restating the tautology that non-expansionist civilizations will not colonize galaxies, and it doesn’t explain the Fermi paradox unless every single civilization everywhere does the same thing. “Explanations” for the Fermi paradox where all civlizations do the same one thing and absolutely none of them do one of the physically possible and historically precedented other things, are not all that plausible.
Also, star lifting is absolutely a thing, even if it is a thing we don’t know how to do yet.
Thank you, John Schilling. That phrasing explains why people find the Fermi paradox actually interesting or paradoxical. The explanations that I’ve read before weren’t satisfying.
But isn’t the current human population declining almost everywhere besides Africa, which will surely follow the same pattern?
Or because they follow the same laws of nature and are compelled to behave in the same way under the same circumstance of high technological society?
I didn’t know of this idea. Is there good science behind “heating small regions of a star’s atmosphere increases the solar wind”?
Possibly you could rekindle my interest, as the originator of this subthread, by showing some of your own work, as you asked of me. How long do you think it will take for the human race to “use up the solar system materially”? How long for a three-sigma resource hog of an inhuman alien civilization? What “laws of nature” place an absolute bound on the time required to use up a solar system, and what is that bound?
And, since emigration can occur from a solar system that has not been “used up” and for that matter a civilization confined to a single solar system can become conspicuously visible across galactic distances without that system being first “used up”, why is this even relevant to the Fermi Paradox?
I’m not that smart. I mostly come here because it’s the only place where I can propose things and get smart people to explain why they won’t work and then I learn a little.
The population growth might be a lot smaller. The important thing to remember is: even a very small growth rate (let’s say 0.001% on average, allowing for boom and bust cycles) will use up all the available resources of a star system in a timeframe that is rather short on a stellar scale.
Might the fact that there is a lot of everything in a star system to use up have anything to do with the answer to the question “where the hell is everyone?” Yes, it might. But it can’t be The Answer™ because of the absolutely mind-boggling amounts of time we’re talking about. (E.g. if everyone else were still content with exploiting their home system, we would still have to be in the first wave.)
Well, this guy thinks it might be plausible given life extension.
I’ve been burned often enough that it’s hard to avoid being cynical, but I find myself getting a little bit excited about Picard, given the new trailer. [Youtube Link]
I’m feeling the same. I will still be waiting for positive reviews from here and elsewhere before I give it a watch, though.
Were it not for Patrick Stewart, I’d be giving it a pass. But on the strength of e.g. Logan, I’m up for seeing him revisit an old classic and hope the network suits don’t screw it up.
But their timing sucks, because Amazon found this ratty old concept in a dumpster behind Paramount and decided to polish it up.
Space. The final frontier. These are the voyages of the starship Rocinante. Her five-year mission: To explore strange new worlds. To seek out new life and new civilizations. To boldly go where no man has gone before.
Or, you know, what the other guy said. But these are the people who understand what the old words mean, here and now, and so that’s where my excitement is focused.
It’s made by a lot of the same people who made Discovery, Stewart’s influence on the TNG movies was generally not good, the first trailer was problematic, and there have been some serious production issues, which never bodes well.
That said, I really, really want to like this show, and this trailer is a bit better. So maybe there’s hope.
this time the guy at the helm is someone with at least some demonstrated skill with genre work, instead of a guy who started in famously schlocky TV and whose subsequent career is mostly based on being JJ Abrams’s friend, so I’m willing to give it a chance
Oh, I’ll give it a chance. I was desperate enough to sit through a season and a half of discovery, after all. I’m aware that Kurtzman is gone and I’m glad, I just worry that his influence will linger.
Identify the UFO story you consider most plausible (or least implausible), and briefly describe it. Then, assign relative confidence levels to the following explanations:
1. It is a real artifact, created/piloted by entities from somewhere outside our solar system.
2. It is a real artifact, created/piloted by the US government, built from technology obtained from extraterrestrial life forms.
3. It is a real artifact, created/piloted by the US government, built from technology created entirely by humans.
4. It is some kind of trick or decoy artifact, created/piloted by the US government to make people think they saw a UFO, probably for psy-ops reasons (or something along those lines).
5. It is an anomaly resulting from an equipment glitch/optical illusion/etc.
6. It is a complete lie; even the original people reporting the incident were lying, and any video or photographic evidence was fabricated.
(7. Add another if you’ve got one…)
Mine is the UFO footage released by the Navy a few years ago, of the 2004 Nimitz incident.
1. 0.00[lots more zeroes]01%
2. Same as above but maybe a couple fewer zeroes.
You think it’s slightly more likely that the government is deliberately creating UFO sightings than that the sightings are optical illusions or equipment glitches?
Please explain, I want to hear why. I would think that the latter is far more likely than the former.
7. It’s showing some interesting natural phenomenon / effect that has nothing to do with either USG psyops or alien technology, but would still probably be interesting to know about.
I gave “government-perpetrated hoax” slightly more confidence than “optical illusions/natural phenomenon/equipment glitches” because the incident I mention was corroborated by multiple witnesses, some of whom had eyeballs on the tic-tac and some of whom were tracking it on various kinds of equipment. I think it’s easier to create a single hoax that fools both our eyeballs and our tools than to directly fool eyeballs and tools in a synchronous way. (Though maybe that just means the answer is that it’s a mix of both and I should just give them a single confidence level.)
Correct me if I’m wrong, but it was my impression that nobody ever had “eyeballs” on the tic tac. As in, saw it with the naked eye.
My understanding is that one Superhornet stayed up and was watching it with IR/radar/etc. while the other one swooped down to get a closer look and the people in the latter plane had eyeballs on it.
Did you watch the recent Joe Rogan interview with the pilot? (https://m.youtube.com/watch?v=Eco2s3-0zsQ) If you find the pilot’s account trustworthy as I do, it really changes the estimates. Of the non-negligible probabilities I’d now say:
To be a glitch/optical illusion would mean many highly trained pilots plus high tech equipment to all be confused yet in agreement. And it still wouldn’t explain the active radar jamming. (Thus 10%)
Aliens reaching earth of course has a very low prior, but I would consider this strong Bayesian evidence in that direction due to the highly implausible level of tech that was seemingly observed. (So 10%)
The government pulling something sneaky seems surely most likely, but a secret this big staying secret for 15+ years seems hard to swallow. But perhaps this is “black ball” technology (potentially world ending) so everyone who knows the secret is terrified to leak it. Psy ops seems plausible but like a magic trick I can’t explain how they’d pull it off. (40+40%)
Yeah, watching that was what inspired me to write the OP.
Is that a 2-hour interview? It’s a bit long… can you describe it somewhat?
On a complete side note – this video has been seen 1.5 million times. Of course not everyone watched it all, but even then, that’s a staggering amount of time…
Most of the content is in the first 30-60 minutes, and watching the commander speak affected my feelings on the case. But here are the main points.
A commander was one of the people in a plane in 2004 that got within 1/2-mile of a tic-tac-shaped aircraft. He attests to visually observing:
– Hovering with no visible propulsion mechanism (no engines, no rotors)
– Rapid back and forth movement
– Instantaneous disappearance and reappearance elsewhere
– Movement at ~60 times the fastest known aircraft at that time
This is backed up by video footage from the aircraft in infrared (available online and shown in the podcast) that shows the aircraft with no heat coming out. This is further evidence that the propulsion mechanism was beyond anything we know of. The footage also shows the ~60x too-fast movement (though I’m taking the commander’s word on this interpretation of the footage).
Other radar also observed the tic tac descending smoothly from 80k to 20k feet (80k = space) before the interaction took place.
The combination of infrared footage and a high-level navy officer who had a visual of it (in addition to ~6 other people) is what makes me take this pretty seriously. Even assuming no aliens were involved, the other possibility is that some government has had revolutionary tech for years and remained completely silent about it.
I haven’t watched the video yet but this pilot was featured on the 10-episode PBS documentary “Carrier” which is excellent by the way.
The best episode was “Rites of Passage” which depicts some pitching deck training which scares the hell out of the pilots.
edit: better link to the whole episode: Carrier
No probabilities, but a fairly well-mannered(?) video laying out some of the historic examples that are not so easily dismissed.
There is however, a strong point at the end that should be emphasized – if these events are more or less random/accidents, there really should be MORE UFO sightings by the public (and not just Navy pilots) as population increases and handheld cameras have become ubiquitous.
There is always the conflict between the past ones being fakes and the recent ones being dismissed as fakes due to increased skepticism and photoshop abilities, but is this enough to explain the entirety of the trends?
I wonder if there are more sightings of this kind among commercial pilots and private pilots who never report them for fear of being thought crazy/ill and losing their pilot’s license.
I wonder if it might go the other way. The FAA requires very stringent medical examinations annually, and basically any kind of psychiatric diagnosis will disqualify a commercial pilot. Considering Scott’s recent posts about how a lot of people are just constantly experiencing low-level hallucinations, maybe intermittent hallucinations in otherwise totally healthy individuals are far more common than we think. If a pilot has developed habits that lead him to trust his senses in all circumstances, sees some weird shit, and reports it, there is a very strong motivation to refuse to accept the possibility that it’s not real. Pilots are being held up as inherently trustworthy because of their training and stringent qualifications, but it’s possible the same set of circumstances makes them uniquely motivated to refuse to accept evidence against their initial impression of events.
If I know that reporting having seen X will lead to my being no longer able to do my job, which is how I make a living and also is a source of a lot of my self-image, I’m probably not reporting X. If I know that reporting having seen X will lead to my no longer being able to fly my private plane, which has been a source of joy for me for many years, similarly, I’m probably not reporting X.
Greg Cochran gave an example of something like that here.
Speaking as a pilot, nope. Approximately none of our training or certification is about “trusting our senses in all circumstances”, and indeed a great deal of it is about not trusting our senses in the fairly common circumstances when our senses will get us killed. We cultivate the habit of being able to say “nope, not trusting my senses on this” and make it stick when our senses are screaming otherwise.
And the bit where we are or ought to be considered particularly credible observers of outside events, by ourselves or anyone else, basically not that either. We’re pretty good at flying airplanes safely, and that’s where our training and experience goes. UFO observations are wholly orthogonal to that, so mostly don’t expect more from us than you would from a guy driving a car on a country road. Except for the bit where, as albatross11 notes, we have a stronger incentive to keep our mouth shut.
I don’t know how to compare the plausibilities of things that are all at very low levels of plausibility. I would honestly have no idea which is more “plausible” of the theories “broccoli was introduced by elves” and “broccoli was introduced by gnomes”. And this question is basically asking for that.
So, the least implausible UFO story you know of (which you might still consider totally implausible) can’t be accounted for by any of the explanations I provided? Not even “everyone was lying”?
Surely “action by a terrestrial government other than the US” deserves consideration.
This is important so listen up.
1. Foods that always improve after overnighting in the fridge:
– Most Indian dishes I’ve tried
– Most Chinese take-out dishes I’ve tried
– Split-pea soup
2. Foods that are always best when eaten right after preparation:
– French fries
– Grilled/pan-seared meat (burgers, dogs, brats, steaks, etc.)
What else should be added to these lists?
Along with French fries you can add most potato things, like tots.
There are two kinds of pizza: some are great as leftovers and some turn to plastic after an hour. I think it depends on the quality of the cheese.
Generally, Italian is great reheated, Mexican is terrible reheated. I think this is due to Italian food generally being all the same temperature, but Mexican (or at least Californianized Mexican) often involves cold condiments on hot food. It might be due to the nature of tortillas as well, though.
My typical choice for Mexican is a crispy beef taco and a beef enchilada with rice. Usually I eat the taco and some of the rice and take home the enchilada and the rest of the rice to take to work for lunch the next day. Enchiladas and rice reheat just fine.
It wouldn’t work the other way, of course.
Yeah, that’s been my experience as well: enchiladas reheat very well (debatable whether they’re actually consistently better the next day rather than directly after preparation; I tend to say the latter), but everything else Mexican (anything Americans would order, at least) is a pain to reheat and/or does not reheat as well.
I think the cheese thing is about the degree of crosslinking between the milk proteins.
I’m going to stop you right there, because cold pizza is amazing, often better than hot.
No, I considered this. I will grant that cold pizza is sometimes a pleasant thing to find in the fridge the next day, but I will not grant that it is better than fresh pizza, and easily not “consistently better”.
I know we’ve had our disagreements, but this is the point where I write you off as an irredeemable monster.
I wear it with pride and no apologies.
The cheaper the pizza, the better it is cold. Costco pizza cold is a literal delight.
In general anything which is cooked at a low temperature for a long time (soups, stews, etc) is improved by reheating. Fried or grilled food isn’t.
Right. Thus the interesting exceptions like Chinese food — e.g. pork lo mein, which is cooked fairly rapidly but even better the next day.
Will have to disagree, but that’s because I like my veggies less limp.
Meanwhile, my suspicion on why it is for you is that the meats are processed, from beating the shit out of them with a hammer (broccoli beef), to marinading overnight (char siu/BBQ pork), to simmering over an hour (hong shao/red cooking). That means that the reheating isn’t toughening the meat up (the proteins have been forcibly detangled already), while the flavors have been diffusing in the leftovers.
I think this is true unless the soup has noodles in it. Rice in soups does fine the next day, but the noodles just keep absorbing broth and get way too soft.
I think this is a pretty good rule of thumb.
There is also a subset of things that are really good reheated, but in a different way than when they are fresh.
Leftover pulled (smoked) pork is amazing cold or reheated, but I would stop short of saying it is better than when it is fresh/hot.
I’ve found that most if not all dishes that can be described as “throw a bunch of things in a pot and heat it” (soups, stews, etc.) tend to be better upon reheating. My guess is that this is partly the result of simply cooking it longer (this is true for stews especially) and partly due to getting a more even mix of flavourful substances from the various ingredients.
ETA: I also concur with what AlphaGamma says about slow cooked v. fried/grilled.
Strong disagree about Chinese takeout – almost all Chinese food that typically gets sold in restaurants is meant to be consumed right off the burner. I do like frying dumplings for breakfast the day after we make them, but that’s basically a second stage of cooking, rather than just reheating (just microwaving or steaming them after they’ve been cooked and refrigerated is definitely worse).
Strong agree about lasagna, this is why a lot of restaurant lasagnas are not cake-y enough.
Homemade bolognese sauce should ideally be cooked for 6+ hours the first time, cooled down completely, and reheated.
Day after – tiramisu
Straight away – souffle
Basically all pastas besides lasagna taste bad after reheating. Annoying growing up since Mom would cook a whole crapload of spaghetti and expect us to eat it for 2 days, despite being garbage after cooling down.
Reheated pasta is bad.
Reheated sauce on new pasta is good.
I’ll agree with this with a slight amendment. All pastas except those that are part of baked dishes are bad after reheating. A baked penne dish, baked stuffed shells, or baked mac and cheese (as examples) all reheat wonderfully.
I’ll second Lambert’s point about the difference between reheated pasta and reheated sauce (but please heat the sauce on the stove).
@A Definite Beta Guy wrote:
You’re doing it wrong. Reheated plain spaghetti is great if you reheat it the correct way, which is in a skillet, with oil, a bit of garlic powder, pepper and salt, and some grated Romano or Parmesan. Make it oily and garlicky and a tiny bit crunchy. Indeed, that’s what stir-fried noodles are: you take something much like boiled spaghetti (which is bland) and then make it tasty by frying it in a wok with seasonings!
Does anyone know why chilis and stews taste better after a night in the fridge? This is an area where my culinary knowledge fails me.
More absorption of the cooking liquid in the veggies, breakdown of starches into simpler sugars. Really creates a smooth, easy, mellow taste.
Do chilis taste better? I don’t really think so, because I want my chilis to be brighter than your typical stew. However, that’s easy enough to fix when you reheat. I throw in apple cider vinegar when I reheat chili, add in a diced jalapeno with seeds with every bowl, and have a bit of hot sauce on top to boot.
Chili tastes way better fresh, IMO.
In my experience, pumpkin pie is reliably much better after a night or two in the fridge.
Indeed, I’d say it’s about as good after it’s cooled. I was surprised to see pie on the list.
Agreed on pumpkin, but apple is way better hot.
I was thinking mainly because of crusts. Berry and apple pies are especially delicious still warm from the oven. But I can see creamier pies (pumpkin, actual cream pies, etc.) being better cooled down and/or reheated.
That sounds right to me.
Actually, chocolate chip cookies are best frozen. It makes them crunchier.
When I see Cookie Monster eating those chocolate chip cookies and they crumble everywhere and make that crunchy noise because they’re bone dry, I know this creates a line that can divide the world, between those who think “Ah, that’s just how I like my chocolate chip cookies” and those who think “Yuch, those cookies look way too dry”. Now we know which category you fall into!
Why would you want crunchy chocolate chip cookies? They’re best when they’re soft and warm but still holding together.
I think I’ve had one too many cookies so tough I could crack a tooth.
Chocolate cookies can be evaluated along milestones in their lifecycle.
Ingredients separate, not yet mixed: no good. 0/10
Ingredients mixed together in bowl (raw dough): 10/10
Fresh from oven: great, but just a bit too hot and fragile. 9/10
Allowed to cool for a few minutes: 10/10 again.
Subtract 1 point for every half-hour the cookies sit at room temperature, until…
Cookies have reached room temperature and chocolate chips are no longer gooey: 6/10
Day-old cookies: 4/10
Cookies have become harder and drier: 2/10
Cookies resemble hockey pucks: 0/10
Seriously. I don’t mind freezing cookies, but that’s because you can pop them in the microwave for 45 seconds and they taste like they came right out of the oven.
I don’t really care for frozen chocolate chip cookies (I love them fresh or at a normal temperature), but I’ll put frozen M&M cookies at a similar level as someone else put frozen pizza. Not better than fresh, but very enjoyable in a different kind of way.
Fresh bread is great. Day-old white bread is nearly only fit for feeding horses or ducks; darker breads age better, but after a week they too are hard, dry, and stale.
I agree, but ducks should be fed rice or whole grains, not bread, certainly not white bread.
Ducks (at least in parks) shouldn’t be fed at all. I just wanted to point out that I would have to be very hungry before I would eat day-old white bread.
Just to be clear, you are talking about white bread produced using traditional recipes? Not the square stuff that comes in plastic bags and lasts for two weeks?
Is delicious when dipped in either soup or in milk.
Yes, I mean stuff like a French baguette, flatbread, or German bread rolls. The square stuff in plastic bags keeps fresh longer.
Well, okay, maybe I was a little hyperbolic. But it loses a lot of the taste it had when it was fresh.
People with a strong preference for very fresh bread may want to look at the book Artisan Bread in Five Minutes a Day. The title exaggerates how easy it is, but it is a way of having fresh baked bread whenever you want it at a modest cost in time.
You are basically making a rather wet leavened dough, keeping it in the refrigerator, and pulling out a loaf sized lump to bake whenever you want to.
Anyone have a good recipe for bread pudding? That’s the traditional thing to do with stale bread.
My father used to break dry bread into little pieces, soak it in milk, mix it with custard powder and a little sugar, put it in a pudding mold, and bake it for about 15 minutes at low heat.
Another way was to mix a coating of milk and egg, add a little vanilla and sugar, cut the bread into slices, soak them in the coating, and then fry them in a pan until brown on both sides. It’s called Armer Ritter (“poor knight”) in Germany.
French toast in the US.
And pain perdu (lost bread) in France.
You can vacuum seal and refrigerate freshly baked bread to dramatically increase its shelf life. Would recommend.
Cheesecake gets much better after a day or two in the fridge.
I find that homemade ice cream is better right out of the machine. But this is more controversial.
I disagree about Pizza, and about Pie (subtype: creme).
As a supremely proficient reheater of things, I seriously object to this list!
First, gummy gooey things are the DEVIL. All chewy cookies and pies are immensely improved by a second bake, low and slow, to fully caramelize the sugars and make them gloriously crispy and crunchy. A slice of pie can be beautifully reheated in an oven to get back the tender, flaky crust. It just takes patience.
Next, the two main principles for effective reheating are:
1. No bread in the goddamned microwave!
2. The reheating method should mimic as closely as possible the original method of cooking.
Deep-fried foods can be just as good (if not better) when reheated in a deep fryer. Since most people don’t have access to a fryer, a preheated very hot toaster-oven or oven-oven (NEVER THE GODDAMNED MICROWAVE!) can be almost as good at physically introducing the heat in a way that will re-crisp breading or skin. Just don’t walk away during this process; 15 seconds of inattention can turn reheating fried food into an inedible burned mess.
Pizza can be impressively reheated on a stone in a very hot oven, or on a cast iron skillet on the stove.
Grilled items can be re-grilled or pan-seared, especially if you order them rare or medium rare with the intention of having some leeway for reheating.
Food with a lot of water content, like stews or casseroles, can of course be reheated in a microwave with no texture issues whatsoever.
Crusty breads and pastries that have gone soft and stale can be sprinkled with a little bit of water and heated in an oven. As the water evaporates off the crust, the crusty texture will come right back.
The real key to reheating meals is to keep different kinds of foods requiring different reheating methods as segregated as possible, so that you can reheat according to the “first cooking” principle. So if you’re taking french fries or tater tots home, don’t dump a cold condiment or dressing (like ketchup or ranch) directly on the potatoes; you’ll screw up the surface area, which requires an application of intense outside-in heat to bounce back.
And when in doubt: Just Google “reheating + your thing.” Usually, the advice is dead-nuts on, even for seeming impossibilities like a Bloomin’ Onion.
Speaking of impossible to reheat things… I have managed to reheat a baguette.
In a microwave.
(steps over TheStoryGirl’s shuddering form)
Seriously. Just sprinkle some water or olive oil over it. The baguette normally has a hard time absorbing the microwaves efficiently. I got very unreliable results reheating it dry, even in the oven, and usually just ended up with something about as edible as a stick of pumice. The water or oil gives the microwaves something to work with, heating the baguette through mere conduction. And it’s even a lot faster than an oven. Butter, obviously, works as well, but that’s only if you’re making buttered bread.
Granted, it’s not what I’d call great. But it beats pumice.
Yeah, it’s basically how you can roast squash in the microwave, too. Or, at least, spaghetti squash.
The day I realized I could reheat pizza in the oven was a glorious day indeed.
Eggs are absolutely inedible once they have gone cold (if you keep them hot by keeping them on the stove at very low heat or use some other warming method they remain passable, although quality still degrades quickly). There is no way to properly reheat them. People who do that “cook an egg in the microwave” thing are monsters.
I make an exception for fast food egg products because they are far enough removed from real eggs that they are just kind of a different thing.
I would like to amend my statement about eggs to exclude quiche and egg casseroles. Incidentally also baked in the oven (like the baked pastas).
Does packaging tape stickiness ever wear off?
I ask because, not finding super glue, I used a bit of packaging tape to stick four pieces of PVC trim together as a square on an MDF board. This is being used as a turtle topper.
My fear is that one day the sides will fall apart as the adhesive fails, and the turtles will fall off.
I don’t know about packaging tape, but screws are pretty cheap.
Is PVC trim easy to drill into?
Low rpm on plastic, or it will melt and stick to the drill.
> Does packaging tape stickiness ever wear off?
Yes. The glue layer can become dry and brittle, and separate from the plastic tape. There may be tapes that hold forever, but I’ve not seen them yet.
What is a turtle topper?
Something like this
I misread price on my 8ft pvc trim (thought it was $4, turned out to be over $10), but even so, combined with my $2 MDF board (2×2), it was a $13 or so setup (assuming I source glue eventually)
Oh, OK. When I had pet turtles I always used to just find a rock or two that would permit them to crawl out of the water to bask. I’d just set that in the middle of the tank and they were set.
Tape adhesives will likely break down with time, but might remain sticky and get increasingly ugly. I’d say remove the tape residue with something like Goo Gone, which is basically nicer-smelling turpentine. Test on an inconspicuous spot first to see how it affects your PVC. Then I’d recommend gluing your project together with a better adhesive. Your local hardware store has stuff especially for PVC, but one of the better grades of Gorilla Glue would probably do the trick. You could also screw this together, but you need to pre-drill your holes for PVC and MDF or it will crack.
Do you have any clamps? A nice long set of bar clamps, 18″ or 24″ would make this much easier. Something like Quick-grip bar clamps are a good investment- once you have some, you find all kinds of uses for them.
I figure aquarium tanks themselves are glued together – any cheap glue on amazon you’d recommend?
Also, how much time before the packaging tape setup falls apart?
Edit: I’m not visualizing how clamps would work in this setup
Knife sharpening: have we addressed it here on SSC?
We have a mix of medium quality kitchen knives (best are Wusthof classic) that I need to maintain. What are your recommendations?
In the past, we had ok results with this type of electric sharpening gadget, but I mostly used it on lower quality knives. That meant that I wasn’t so concerned about ruining the knives and the best-use results were probably not especially sharp or lasting edges.
In my experience, the gadgets all suck. At best they put a few barbs on your knife edge that actually damage the knife but provide the short-term experience of being sharper (because the barbs are actually ripping through the material).
Thus to sharpen your own knives you really need to buy a bunch of abrasive materials of varying fineness (all the way up to a strip of leather) and something to hold the knives at the proper angle.
Since I don’t feel like buying all that stuff right now, my solution has been to take all my knives to get them professionally sharpened. I’ve done this once and I’m going to start doing it yearly or semi-annually, if not more often. Depending on where you live you can usually find places (or better yet, crafty retirees who hang up their shingle on Craigslist) that will do this for you. We had four or five of our knives sharpened — two big chef’s knives and several paring knives of different lengths — and it cost about $20. It was like getting a brand new set of knives. Also, we dropped the knives off the day we left for vacation, and picked them up the day we got back, so we never had to be without knives in the house.
An edge pro type sharpener will give a flawless edge. It basically just holds the knife and sharpening stone at a precise angle to each other (you set whatever angle you want). It eliminates the skill needed for sharpening, however it is still work.
There are plenty of cheap generic versions
The similar electric sharpener I have works fine, except that it scratches the sides of the knives. The Wusthof (except the Asian-style) are at a 14 degree angle, but 15 degrees is probably close enough if you’re not a purist. Looks like they do make a 14 degree one. If you want a perfect edge with no cosmetic issues, I think you’re stuck with more manual methods.
How long the edge lasts seems to depend more on the knife than the sharpener. When I sharpen my Henckels, they stay sharp for a while. My no-name knives cut just fine for one job, but that’s about it.
I recently had some knives sharpened professionally, which is possibly the fastest solution depending on where you live. Our guy did every knife in our house in about an hour. I learned something from him about wheel grinding: he had “hollow ground” our knives and told me not to sharpen them at home, but to instead get a honing steel (one of those sticks you see them use at steak houses that looks like a round sword). I started researching this to figure out what had been done to our knives, which are probably about the same quality level as yours. There’s a good thread on cheftalk.com about “hollow edge vs regular edge”: link text
I have a Wusthof sharpening block at home, which I thought worked well but was time-consuming. And apparently it won’t work on my newly hollow-ground knives? I kind of wish I’d known that before the grinder put a hollow edge on them, but honestly getting them sharpened was pretty affordable and saved so much time I’ll probably just keep doing that.
I think I found the cheftalk thread and this diagram was very helpful: knife edges.
However, it would seem that even Wusthof is unclear about the definition of hollow edge vs granton edge: nakiri description.
You and some other replies here talk about getting knives sharpened professionally. How do you package large kitchen knives safely when you take them on a trip to the professional sharpener?
Get a cheap canvas knife roll from the local chefs’ supply shop?
I usually put my knives in the knife sleeve they came with and/or rolled up in a dish towel and kept in a bag. When I pick them up from the knife sharpener, they usually hand them back in new sleeves, rubber-banded up.
Thank you for the info.
Easiest option is probably to get them sharpened at your neighborhood farmer’s market or something. A good quality knife will hold on to the edge for quite a while.
I sharpen mine at home once every 6 months with a generic water stone someone bought me from Amazon. It takes me about an hour to do 3 knives. The meh-quality Macy’s knives start dulling after 2 months, but my Wusthof generally holds up pretty well in that time period.
There’s a guy who sharpens knives at our farmers market, and sometimes the meat department at grocery stores will do it for you.
I have a set of waterstones of more or less abrasive grit, which I use with a drop of dish soap and then a honing rod. With them I sharpen the knives my wife uses for cooking when she asks me to (about two or three times a year). Every few years I chop an onion and notice the knife is dull before my wife does, and I’ll sharpen it, but usually she asks first.
I also have a bench grinder in the garage left by the previous homeowners, but it’s extremely rare that I’ve had a knife so far gone that I’ve had to use it; I think once in eight years.
Sharpening knives is, in my experience, very difficult. It can take a long time to learn to do it correctly, and even longer to master. If they are just kitchen knives, then I would concur with Well… and just bring them to a professional for sharpening once a year or so.
I recently took up wood carving, and unfortunately the main obstacle has not been, you know, actually learning to carve but instead learning to sharpen my knife. You have to sharpen it frequently, usually every hour at least, as wood quickly dulls a blade and you need it as sharp as possible to carve properly. So far all my attempts with whetstones have failed, and I’ve resorted to using something like this which works better than my attempts but is far from ideal for wood carving.
Meditative or tedious, but not that hard, if all you’re trying for is “cut the onion/meat/whatever better than a dull knife.” I can imagine getting wood carving tools extra-sharp and useful may take more practice, but:
1) soak the stones
2) add a drop of dish soap on the first stone
3) hold the knife at a steady angle (the angle depends on whether you want it sharper initially but dulling faster, or holding its edge longer but not as sharp; I just use whatever angle I reflexively hold the knife at without thinking about it much), and run the knife over the stone a couple of dozen times
4) do the other side, replace the stone with a less abrasive one
5) repeat a few times depending on how sharp you want it
6) finish with a honing rod if you really feel like it.
Even a half-assed job makes the knife cut better than before you started.
I do it basically the same way except for the soap (what is it good for?), and instead of a honing rod I use polishing paste on a leather strop (the paste is needed only once; it dries into the leather).
Also, the repeat loop appears wrong to me: I never go back from a fine stone to the coarse stone. I only repeat sides on the same stone, applying less pressure in each repetition of both sides (unless some serious abrasion is called for on the coarsest stone).
For learning, I had made a small wooden wedge of (IIRC) 15° to get a feeling of the desired angle, nowadays I just go by ‘that feels and looks right-ish’.
Edit: I learned from some youtube expert/enthusiast and a few dedicated websites, and got good enough for household purposes.
The soap is a substitute for oil, but it’s easily dispensed with, and I only ever use a drop (I forget where I learned that practice from).
By “repeat” I meant “keep going to less coarse stones,” but if I’m in a hurry I just won’t use the finest stones to finish (two or three instead of four). As for leather strops: I only ever used those when I shaved with a straight razor. (Now I use an electric razor at work; with a three-year-old son again, plus a teenager, we’d need a second bathroom in our house before I took that much time to shave, otherwise my wife might decide to cut my throat with it!)
Soap, or any surfactant, reduces clogging of the pores of the stone. Oil is traditional on oilstones, but is messy and, in my opinion, better avoided. I can’t really speak to waterstones, never having used them. I use Simple Green on India stones, followed by a Spyderco ceramic stone.
Wet/dry abrasive paper on a granite tile works, but is more expensive in the long run than using stones. In a real pinch you can use dried mud on a piece of wood.
I would add to step 3, check for the wire edge with a finger. This is a small ridge of metal rolled over from one side of the blade to the other. Chase that wire edge from one side to the other once or twice with a given grit, and you’re ready to go to the next finer step.
Sharpening carving tools — gouges or veiners, is a bit tricky. Sharpening other woodworking tools, like chisels or plane irons, requires a bit of attention. Sharpening kitchen knives is pretty simple, and very hard to screw up so thoroughly that no progress is made. At worst you’ll spend more time on it than necessary.
I mostly use a Chef’s Choice “gadget” as you describe, and think it works great. If you follow the directions, I wouldn’t have any fear about using it on your best knives. A few years ago, we did an interesting side-by-side test while butchering a wild pig. My uncle is really good with a whetstone, and we tested his hand sharpening against the Chef’s Choice. Both edges were equally good on the actual animal, although the Chef’s Choice won when cutting paper.
I can get slightly sharper with a Gatco clamping sharpener I have, but it’s a lot slower to use and the edge doesn’t hold up any longer. I’m sure there are even fancier systems that can get an even sharper edge, but the best sharpener is one that you actually use regularly to keep your knives sharp. Knives that would theoretically be sharper if you were to bother sharpening them lose every time to knives that are actually sharp because it was so easy to sharpen them.
So my advice would be to read or re-read the directions for the multi-stage electric Chef’s Choice you already have, and use it often. For others who don’t already have one, note that thrift stores like Goodwill have them fairly frequently. I bought my first one new (which I still use) for about $100, but have since bought two extras for about $10, one as a gift to my uncle and one for some friends who used to have particularly dull knives.
Which I imagine was no knock on your uncle’s skill, but just an artifact of the Chef’s Choice making a hollow edge whereas a whetstone will make a flat one.
An interesting thought, but I think all the Chef’s Choice sharpeners produce a “standard” edge, the same as would be made with a whetstone. They sharpen one edge at a time by keeping the knife at a constant angle against a series of rapidly spinning thin, flat, diamond-coated disks. Some of them can do a bevelled edge by changing the angle with the finer grits, but I don’t think any of their models have a convex surface as would be necessary to do a hollow grind. Unless I misunderstand what the term means and how it’s done?
As you might guess, he was a little embarrassed that the “gadget” seemed to sharpen as well as his many years of expertise. I don’t think he’s switched over (it’s a lot easier to resharpen in the field with a stone) but I think he does appreciate that the mechanical sharpener has its place when you are working in a kitchen and resharpening often.
Oh, you’re right; I thought they ground against the edge of the wheels, but I just looked and they actually grind against the flat, so it is just a standard edge.
I use a Wicked Edge knife sharpener. Hand operated, with a jig to maintain angle and a set of diamond units of varying fineness. An expensive unit, but it works very well. I have no experience with the electric equivalents, which, if they work as well, are probably a good deal less work.
Wow, you guys put a lot more effort into your kitchen knives than me.
I just buy cheap-ass knives and then buy new ones when they become so dull that I get annoyed by it.
Biweekly Naval Gazing links post:
First, I’ve decided to cut back from posting 3 times a week to twice a week to avoid burnout and give me more time for other things.
Riverine warfare continues with the first two parts of a look at the subject in China.
I’ve talked about HMS Warrior a lot, but never before taken a close look at this remarkable ship, which revolutionized naval warfare when she was built in 1860.
Lastly, I’ve started a series on modern aircraft weapons, with a look at dumb bombs and their laser-guided descendants.
Interesting! Thanks for the link; I always enjoy your stuff.
Question for you, though: I noticed a recurring theme in your piece of these smart-bomb projects coming together quickly and cheaply. PAVEWAY took only 6 months and $100,000, and the GBU-28 was operational in only 3 weeks.
Is this kind of efficiency a common theme in the development of smart weapons (and if so, what led to that?), or am I just picking up a signal from the noise?
I can think of several other smart weapon programs which also came in fast and cheap by military standards, if not that fast and cheap. But those were both essentially wartime cases. For Paveway, TI got some cash and very little in the way of constraints. This gave the team a lot of freedom. For GBU-28, it was a program to fix a specific problem in wartime, and those are usually given unlimited budget and minimal oversight. And there have been smart weapon programs that have gone horribly over budget and behind schedule, too.
This is a little out of your wheelhouse, but have you ever thought about writing a post about galley warfare in the Mediterranean and Baltic Seas?
The galley and variants were the warship for millennia, and some European nations maintained sizable galley fleets well into the 18th century. They’re not battleships but they had a similar mystique in their era.
The problem is that it’s too far out of my wheelhouse. I don’t have the background to even really know where to start, although I did recently buy a book about galley warfare. So who knows.
John Francis Guilmartin’s “Galleons and Galleys” and “Gunpowder and Galleys” are the essential works on this subject. The latter is specifically about Mediterranean galley warfare in the 1500s and is one of the best pieces of military history I’ve ever read, because it not only explains what happened but why, and proves the theory with very robust evidence. The former is more general both geographically and temporally, and while it necessarily lacks the “proved it with math” aspect of Gunpowder and Galleys, it remains an excellent book.
Mine is The Age of the Galley, from Conway’s History of the Ship series, all of which have been excellent. Those are on my Amazon list, but it’s long, and one is quite expensive.
I’ve heard that series is very good, but I do most of my reading on my phone, and they aren’t available digitally, as far as I know.
I’ve read through the last three warship volumes (Steam, Steel and Shellfire, The Eclipse of the Big Gun and Navies in the Nuclear Age) cover to cover, and all three were first-rate. I’ve got three others and all have looked very interesting, even if I haven’t given them the same level of scrutiny.
You have the name of the French Gloire spelled incorrectly as ‘Glorie’ on the Warrior post – autocorrect hates French in particular, I find.
Fixed. I thought I’d checked that, but I guess not. Thanks for the catch.
So, I’ve been drafting Magic at a local store. Since I’m so inexperienced it’s a bit overwhelming; there’s a lot to pay attention to and not all that much time. Right now, it’s all I can do to build a credible army of fighters at a range of power levels, and make sure I have enough removal. And it sort of works. My decks don’t often win, but they can put up a fight.
As these basics become second nature, freeing up brain-cycles for more advanced concerns, what should I start paying attention to next? My guess is synergies between cards, such as cards like Wildwood Tracker that give bonuses to creatures, but only to specific types of creatures.
Your next topic should be researching and learning about draft archetypes.
WOTC normally designs sets with certain archetypal decks in mind; these are usually (but not always) centered around some combination of color pairs. These will be decks that either do a certain thing very well or reward you for doing a certain thing repeatedly. While your strategy of drafting value and removal is acceptable, it will eventually get worn down by a deck that’s focused on one very specific strategy.
The current ones in Eldraine are:
1. Red/White, White/Black, Red/Black – Knights.
2. Blue/Black – control and mill
3. Green/Black – food
4. White/Green – Adventures
5. Blue/Red – draw a second card.
6. Red/Green – Nonhuman tribal
7. Blue/Green – Ramp
8. White/Blue – fliers and artifacts
9. Mono-X – Adamant
Knowing these archetypes is important, because even if you’re firmly in a color, certain cards in that color might not be beneficial to your overall gameplan. For example: Wildwood Tracker is pretty good – a 1-mana 1/1 that will attack as a 2/2 for most of the game is pretty valuable – but it’s VERY good in the Red/Green “almost everything in my deck isn’t human” deck and less good in the Green/White adventures deck, just because there’s a lower density of nonhumans in there.
Does this make sense? I can go into more detail if that’d be helpful.
Once you master the basics of a limited deck – curve, prioritizing cards that affect the board, adjusting pick order for frequency (i.e., you don’t want all removal, but you probably always take good removal in your colors immediately, just because it goes fast), and the particular synergies of a format, like Aftagley says – you can try to get into the meta strategy of considering the other players around the table. You get two packs passed from your right, first and third, so if you figure out that the first couple of drafters in that direction are not taking red and green cards (because you get some very good red and green cards a few picks in), then you may decide that despite a pretty neat p1p1 rare in white, you are better off going for a RG deck full of consistently above-average cards than a Wx deck full of leftovers.
This is called reading signals, and it’s generally not nearly as important as the basics, but it is a next level thing you can consider at some point.
… and then the NEXT level is learning how to effectively signal (i.e., weighing the value of letting people “downstream” of you know what color slice you’re in when considering which cards to take).
This would mean if you’ve got two cards that are in a vacuum equal, you’d take the one that’s more firmly in your archetype in order to potentially close that archetype off from any competition.
That being said, I agree with Randy M above. Signaling is a secondary skill that you can consider investing time in once you feel comfortable with everything else.
And mentioning signaling is probably a bit of a trap, because it strikes me as just the thing to tickle the fancy of the kind of people who post here: a very game-theoretic, meta-game-analytical way of approaching it.
Which is counter-intuitive, because usually when playing you want to communicate as little as possible to other players. But bear in mind that when drafting with eight players, you might not even play against the players beside you, and if you improve your deck quality at the cost of also improving 2/7 of the competition, you probably come out ahead.
Sending clear signals is not terribly important. Reading signals is hugely important. Finding the open colours/archetype is indispensable to consistently drafting good decks.
It’s important, but paying attention to it is also likely to make you a worse player before it makes you a better player.
It’s one of those things where a little knowledge does a lot of damage. Misreading signals (or reacting to real signals too aggressively) is way worse than ignoring signals.
One piece of advice when it comes to playing limited in general: block more than you think you should.
I’m not talking about chump blocking, which is usually wrong, but most players play too scared of combat tricks. Trading your creature for a trick is just not usually that bad. If you can take the hit once and then hold up instant speed removal to blow them out next turn, great. But otherwise, err on the side of blocking.
A related point – always play your cards in the 2nd main phase. Even if you don’t have any combat tricks. Hell, even if you don’t have any instant speed effects at all, there’s still no reason to play your cards before attacking if playing that card doesn’t directly help you make attacks.
I’m not advocating bluffing as a reliable strategy in magic, but as Tarpitz points out above, people are afraid of combat tricks more than is probably reasonable and will let damage through when you’re holding up cards and have untapped mana.
Honestly knowing when to chump block (and when to chump attack) is itself an important limited skill. Even though it’s normally a bad idea, in a race (which isn’t all that uncommon in limited) it makes all the difference to do it at the right time.
You should be reading The Immortal Hulk, by Al Ewing. Trust me on this.
IFComp is in full swing! This is the world’s largest interactive fiction competition, as well as one of the longest-running online contests. They’re in their 25th year!
You can play the games, and vote for them, here:
I’ll stress, as I did last year, that these are really good games. Sure some are rough, but the best games in the comp offer an experience very like a major studio game. What are the best games this year? I can’t tell you, because I’m a competitor!
…seriously though, if you just scroll through the list I promise you’ll catch a few titles that look interesting. Give them a try. Anyone can judge based entirely on their own tastes and opinions (but you need to rate 5 games for your score to count!)
And on the other end of the spectrum, GPT-2 writes a text-based dungeon crawler!
I’m playing “Saints and Sinners: A Story of Crime, Justice, and Magic Fists” right now, mainly for the sheer silliness.
And the similes, of course.
As a former competitor, I second this. IFComp has some really great games, and the rules of the comp are that they should be judged based on 2 hours play maximum, so they’re generally short enough to fully explore in a quick timeframe.
There’s a lot of good writing and clever mechanics in these games, and the sheer variety of themes, topics, and writing and implementation styles means that there’s almost always something that will strongly appeal to any given person.
Theodidactus, I’ll be sure to check your entry out!
The real genius of IFComp is the graduated prize structure. Even if you’re in 20th or 30th place, you still win *some* money, and that causes a rush of good feelings even if you don’t crack the upper echelons.
Last year there were… maybe 10 bad games out of 70-something. That’s 60-odd good games in a huge variety of genres. This year we’ve got even more entries!
Does it have to be in mainline continuity? Is it okay if it’s not in the mainline continuity of the DC or Marvel universe as long as it’s not a reboot/origin story? Here’s my list of comics I’m pretty sure aren’t excluded by your criteria:
Grant Morrison’s run on Batman, which you can find a reading order for online. Starts with Batman and Son and goes until the end of Batman Incorporated v2.
Grant Morrison’s run on New X-Men
Planet Hulk is a great sci-fi adventure, but it’s diminished by the fact that its sequel, World War Hulk, is bad.
Alan Moore’s runs on Swamp Thing and WildCATS
I’ll see if I can think of anything else.
Peter David’s second run on X-Factor (known as Vol. 3, 2005–2013, starts with a limited series). His first run on it from the ’90s, with a largely different set of characters, and his original 100+ issue run on Hulk are also great. I consider David one of the best writers of long-running superhero stories out there.
Planet Hulk and World War Hulk, also the Incredible Hercules (where Hercules takes Hulk’s place) are all good (heavy influences on Thor Ragnarok, which is largely adapted from Planet Hulk).
The 2010s Journey Into Mystery version of Thor/Loki, where Loki gets reincarnated as a teenager who is something of a hero – also a huge influence on the movies’ Loki.
For some reason I’m only thinking of Marvel comics, but there are a bunch of great DC things like Hush. In fact, if anything, DC tends to be better at self-contained stories; Marvel has historically been better at the soap-opera kind of comics.
I don’t know what is available on Comixology, but look for JLA: A League of One for a great Wonder Woman story that fills those criteria.
It sits right next to Hush on my bookshelves.
I was watching a show on Marine sniper units, and I really had to wonder: how long before technology takes over the “sniper” job – at least the actual shooting aspect of it? It’s hard for me to believe that a mix of computer vision and a robotic arm couldn’t beat a human for accuracy in, say, the next 20 years.
Here’s my proposed system. As it stands right now, for long distances the sniper needs a “spotter” to help calculate distance, wind, humidity, and other factors. It’s pretty tough to believe that a computerized system couldn’t assess all of those variables better than a human. Instead, in my system a human soldier would get into position, aim the rifle at the target, and indicate to the computer that “X person in my sights is the target.” Sensors gauge distance, wind, etc., and the rifle sits in what, for lack of a better word, I’d call a “cradle.” Once the human has pointed and indicated the target, the cradle makes the micro-adjustments that are now done by hand/proprioception/unconscious human skill (up .27mm, to the right .134mm, etc.), calculates the wind and distance variables, and ultimately takes the shot.
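As a rough illustration of the micro-adjustment math such a “cradle” would do, here’s a toy flat-fire hold-over calculation in Python. Everything here is illustrative: drag is ignored, the wind-drift formula is a naive first-order stand-in (real drift depends on lag time, not flight time), and the function name and constants are made up. A real ballistic computer uses drag tables and live atmospheric sensors.

```python
# Toy hold-over calculator for a hypothetical sniper 'cradle'.
# Drag-free physics and made-up constants; illustrative only.
def aim_corrections(range_m, muzzle_velocity=850.0, crosswind=3.0, g=9.81):
    """Return (elevation, windage) corrections in milliradians."""
    t = range_m / muzzle_velocity           # time of flight, seconds
    drop = 0.5 * g * t ** 2                 # gravity drop over the flight, meters
    drift = crosswind * t                   # naive crosswind drift, meters
    elevation_mrad = 1000 * drop / range_m  # angular correction: up
    windage_mrad = 1000 * drift / range_m   # angular correction: into the wind
    return elevation_mrad, windage_mrad

e, w = aim_corrections(800)
print(round(e, 1), round(w, 1))  # 5.4 3.5
```

The point is that the computation itself is trivial; the hard parts are sensing (especially the wind along the whole bullet path) and building a rugged, precise actuator.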
While it may seem a bit fanciful, there have been published advances in automated rifle fire, mostly on drones. But there is definitely research into improving accuracy with a human in the loop, for example here and here.
A sniper system would combine those two technologies: computer vision target acquisition, and the aiming/precision servo-motors from a drone or platform. I don’t think it’s particularly controversial that machines beat/replace humans in physical tasks, and with advances in robotic surgery I think super-precise small motor control is definitely within reach. We’d just need a “cradle” to hold and adjust the rifle that’s field-rugged and can stand up to mud, dust, etc. Seems a decade or two off, but not implausible at all.
I remember reading about the guns you linked years ago and coming to the same conclusion- more automation should be able to do this a lot better, and it seems straightforward. I suspect the problem is harder than we think because the target is often moving, obscured, or presenting different angles to the sniper.
Practically, solid proficiency and a laser rangefinder get you about 90% of the way to what a hypothetical Terminator-like perfect robot sniper could do. A rifle-sized optic with an integrated rangefinder and ballistic computer would probably raise that to 98%, and the first generation is already here. The TrackingPoint device, by the way, is explicitly designed not to require a power-stabilized platform like your “cradle”.
What automation can do is make this process faster and require less training. I absolutely expect to see it in the hands of every rifleman eventually. We’ll still have two-man sniper teams for training and tactical reasons though.
There is no automated way to detect wind, though, so for long distance shots the shooter still needs to judge and input the wind.
Also, vision of the future:
Sniper to base: “The terrorist is leaving his hiding place, do I have permission to fire?”
Base: “Permission granted”
Sniper: “WTF. System rebooting???”
Sniper: “Reboot done, terrorist escaped”
Isn’t your vision precisely why many armed forces were still using Windows 98 until quite recently, for exactly the same reason that a lot of neuroscientists (not me) still do? Because we know exactly what makes it break.
No one knows what makes Windows break, IMO. If you want high predictability, you need a stripped down Unix-based system.
System failures can happen, just like a gun can jam. It just matters whether the gain is worthwhile.
For example if a retreating force didn’t need to leave behind a sacrificial human sniper and instead could leave a few machines.
I have the vague impression that a lot of the job of a sniper is getting into a position from which to take a shot, and this is probably something that’s easier for a human (able to climb, crawl, run, swim, etc.) than for a robot in the near future. Though yeah, eventually the robot will be better at moving around undetectably, too, assuming technology doesn’t stop advancing.
Another advantage I can think of: once it’s automated, you can probably site a bunch of them around a region and, when a target enters it, have one of the half-dozen machines take the shot without risk to a human soldier.
So I could imagine the job of a sniper subtly changing to getting into position, getting the equipment in place and then getting out.
Though that opens some new grim possibilities.
I could imagine sites similar to current minefields, with various autonomous devices that, for whatever reason, aren’t required to call home, targeting people fitting some criteria for decades to come.
The sniper becomes the manager for long-distance anti-personnel mines.
1) If they aren’t EMP-hardened, an EMP can take them out.
2) As long as they aren’t mobile optimizers they will always be ammunition limited (even solar or nuclear powered lasers would need time to recharge capacitors). Robotic dummy targets can get them to use up this ammunition.
3) Have you seen the resolution of US spy satellites? Drones are even better and can take them out from miles above.
1: Sure, but nobody fires EMPs at warzones, and military equipment will probably get built to be fairly tough.
2: “Ammunition limited” can still mean enough ammunition to ruin a bunch of people’s days.
3: This currently applies to human snipers too.
I’m responding to “for decades to come.” Not active battlefield conditions.
They’re unlikely to be this kind of long-term issue. Clearing a mine isn’t actually much of a problem. If there’s an anti-handling device on it, you need to be even more careful, but it’s doable: uncover the mine, attach a string to it, and pull it out of position. Or, even safer, put a pop-and-drop C4 charge next to it to sympathetically detonate it.
Mines (plural) are a real problem because there’s usually like a thousand of the damn things scattered over a relatively small area, each of which is difficult to find, and the difficulty of finding one increases the difficulty of finding the others because you need to be so careful moving around and searching. You also can never be sure you got them all–even if you have maps and an inventory giving you the number of each type, if you come up short you don’t know if it was because of a recordkeeping inaccuracy (i.e., it was never there), because the mine moved (they do that, especially on sloped terrain), somebody else already took the mine so they could use it themselves (common in Afghanistan, and according to the stories told by deminers from Bosnia who were soldiers in the war, common there as well), or because somebody stepped on it 20 years ago. Plus, AHDs mean that you need to be really careful when trying to clear the ones you have found, and what is a relatively small risk for a single mine becomes a very big risk to you personally when you multiply the small risk × 1000 mines you need to clear.
Automated turrets will have none of these problems. It will take very little time to locate a turret, since shooting will give away its position. There are likely only a small single digit number of them, and their fields of fire can be determined easily. Depending on the situation, you can just do what people who are really pissed at snipers already do, which is to hang mortar rounds on their position; if you can’t do that, have somebody on one side distract the thing from a protected position while you approach it from the rear and turn it off.
The cost to put a Marine in the field is probably at least $45k, not counting ongoing salary costs, heavy equipment, food, housing, etc, etc.
Let’s say that an automated turret costs $10k in mass production and far less than a soldier in ongoing costs. Then if 4 turrets kill one enemy soldier, you are ahead.
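The break-even arithmetic in that claim can be made explicit. Both figures are the hypothetical ones from this thread (the $45k Marine estimate and the assumed $10k turret), not real procurement numbers:

```python
soldier_cost = 45_000  # rough cost to put one Marine in the field (figure from above)
turret_cost = 10_000   # hypothetical mass-production cost per turret

# How many turrets you can expend per enemy casualty before the
# turret approach costs more than fielding a soldier.
break_even = soldier_cost // turret_cost
print(break_even)  # 4
```

As the replies below note, the real dispute is whether $10k per fielded turret is remotely achievable, which moves the break-even point dramatically.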
The issue is more that the turret lacks flexibility and adaptability, for now. Where do you place them? On the border? Then people can go over or around them. You need to mix them with heavier stuff to stop tanks and such. Are those going to be manned by humans, and if so, how will that work with the turrets? Can they do friend-or-foe identification well enough?
Of course, once technology is advanced enough, the distinction between robot and soldier will be moot.
Let’s not say that. That’s an extremely optimistic number for an automated turret at the factory loading dock; I think you are misjudging based on the costs of mass-produced consumer goods, and there are several good reasons why the cost structure of mass-produced consumer goods is not going to apply to sniper-bots.
Then there’s the small matter that we don’t need sniper-bots on a factory loading dock in Ohio, and the cost multiplier for hardware in good working order at a firebase in Afghanistan is rather enormous.
If you’re just trying to set up a defensive perimeter in your own territory in peacetime, say maybe seventy miles outside your largest city, the economics may be somewhat more favorable.
A remote-control weapons turret (CROWS) costs about $100,000 IIRC. The first-generation version was more like $700,000 though.
Admittedly most of that is probably thermals but you’re not going to get the cost down to $10,000. Even just a high-end rifle like the M2010 costs more than that.
I used to work for a company that sold electronics to various government agencies, both foreign and domestic. The electronics in question were basically glorified cellphones, with some commodity hardware attached that’d be unusual in a cellphone but still basically COTS, in a fancy case with a funky form factor: so maybe $500 in parts. The exact sticker price was and is confidential, and I wasn’t privy to marketing information anyway, but you can tell from publicly disclosed terms that it’d be well north of ten times that per unit. Part of that is support, but a lot of it is just the amortized development cost. Surprisingly little is margin.
Judging anything that involves a lot of R&D by its parts cost is never a good idea, and it’s an especially bad idea when you’re dealing with government contractors.
What’s the US army’s costing on a single dead or seriously wounded soldier?
They seem delighted to substitute drones costing millions for humans.
Politically the cost may be immeasurable. Especially when one considers the fact that the soldier is also (usually) a citizen who votes.
Plus there’s the psychological cost of knowing you’ve sent someone to their death (when you aren’t a sociopath of some sort).
How much would you pay to know you haven’t killed someone whose life was entrusted to you?
Emotionally immeasurable, but this is the military, and I’m sure they have a spreadsheet somewhere listing approximately how many military assets a soldier is worth in wartime under different circumstances.
Is there a name for this argument/concept in evolutionary biology?
One of the common objections slightly thoughtful creationists have to evolution is that it’s very difficult to see how evolution gets a head start on developing something useful, or at least to imagine how this happens. The trouble is that any relatively complex function or mechanism depends on lots of sub-components, so it’s hard to see how a species could move from not having said function/mechanism to having it: there are lots of required intermediate steps that are not themselves the function or mechanism, so why would they be selected for?
Now, one frequently given (and correct) answer is that your lack of imagination is not an argument. So with the eye, which is the usual example given in these cases: not only can we imagine fitness-enhancing ways that each of the many subcomponents might develop, we have extant examples of virtually every one of these stages in different species living today.
However, there’s another response I don’t think I’ve seen discussed that I’ve thought about for a fairly long time, that I started thinking about again (and hinting at) last week when we were discussing weird features of languages. Namely, structure begets structure, even when it’s not precisely selected for. So one place where interesting proto-mechanisms might be able to develop is precisely in a space (“space” is meant metaphorically) where there isn’t much selective pressure at all.
If you have some weird baroque chunk of DNA (possibly even junk DNA) that doesn’t have much of a direct effect on the phenotype, it can quite happily mutate in all sorts of weird ways. So basically the idea is that the lack of selection pressure on a particular region of the genome allows the buildup of a larger set of mutations. 99.999% of the time, these will be useless, but every now and then they might allow for a larger leap in a fitness landscape than would be possible if every single mutation had to be immediately beneficial.
The idea is something like open-ended vs profit-driven research, I suppose.
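The intuition above can be demonstrated with a toy Wright-Fisher simulation. Everything here is made up for illustration (the population size, mutation rate, selection coefficients, and the three-allele “wild type → intermediate → adaptation” ladder); the point is only that when the intermediate step is selectively neutral, mutations can drift around long enough for a second mutation to stack on top, whereas a deleterious intermediate keeps getting purged:

```python
import random

def generations_to_innovation(pop_size=200, mu=1e-3, s_mid=0.0,
                              s_top=0.1, max_gen=200_000, seed=0):
    """Toy Wright-Fisher model with a three-step allele ladder 0 -> 1 -> 2.
    Allele 1 is the intermediate (fitness 1 + s_mid); allele 2 is the
    'complex adaptation' (fitness 1 + s_top). Returns the generation at
    which allele 2 first appears (or max_gen if it never does)."""
    rng = random.Random(seed)
    pop = [0] * pop_size
    for gen in range(max_gen):
        # Fitness-weighted resampling of the next generation.
        weights = [1.0 + (s_mid if a == 1 else s_top if a == 2 else 0.0)
                   for a in pop]
        pop = rng.choices(pop, weights=weights, k=pop_size)
        # Each individual may mutate one rung up the ladder.
        pop = [a + 1 if a < 2 and rng.random() < mu else a for a in pop]
        if 2 in pop:
            return gen
    return max_gen

# Neutral intermediate vs. a deleterious one. Single runs are noisy;
# averaging over many seeds shows the neutral case reaching allele 2 sooner.
print(generations_to_innovation(s_mid=0.0, seed=1),
      generations_to_innovation(s_mid=-0.05, seed=1))
```

This is only a sketch of the “no selection pressure lets mutations accumulate” argument, not a model of any real genome.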
If I read you correctly, there’s actually an example of such a thing: treehoppers.
Modern insects, with some exceptions, have two pairs of wings. However, the first flying insects had more variety, with some having three or more pairs of wings. What’s interesting here is that the genes for extra pairs of wings can still be found in modern insects, but they have been silenced for hundreds of millions of years.
Treehoppers are a group of insects with a very particular anatomical feature: an elaborate “helmet”, or pronotum, that covers most of their upper body and, depending on the species, has adopted elaborate shapes serving the purposes of camouflage, mimicry, or display.
A genetic study of these insects revealed something surprising: the genes coding for their pronotum are in fact the same genes that in other insects are the dormant genes once coding for a third pair of wings. It is especially surprising because, if I remember correctly, the genes responsible for silencing these wing genes are still present in treehoppers, but have somehow been overcome.
Hmmmm… I think this counts as an example of exaptation, but I’m not sure it is exactly what I was thinking of.
But maybe it is close enough? I guess the idea is that the extra wings get silenced, but the genes are still there, with a hell of a lot of structure already built up. And so mutations to them can happen freely (since the wings aren’t expressed), and then you somehow silence the silencers, and you have a cool new wing that’s a proto-helmet, without having to go through the intermediate stages.
I was thinking more of stuff being created ex nihilo, but this seems close enough to be a similar idea.
I suspect ex nihilo creation is rare for anything large.
But there’s other elements.
Genes regularly get copied. When there’s 2 copies of a gene one is then free to mutate without losing the original function.
Chunks of DNA get copied inside other chunks. The odds of one chunk of functional protein-coding DNA, when copied inside another, yielding something of value are much better than the odds of getting the same thing from random noise.
Part of the issue is keeping the overall structure functional: remove all selective pressure and a region can decay back to simple noise.
But throw some pseudogenes in there – which are under some weak selective pressure but have some freedom to mutate without killing the host – and you’ve got better odds.
Then throw in non-random noise. Things like viruses copying themselves or self-copying chunks of not-actually-viral DNA copy-pasting themselves in.
There’s even real-life genestealers: organisms that steal fragments of DNA from other things in their environment when under stress on the off chance that they’ll grab a chunk of DNA that helps them cope in that environment.
Still likely to be a net negative, of course. Making half of your copies of this protein less functional is a cost, just not one as likely to be crippling/fatal. Might be worth an occasional roll of the dice.
If I understand @anonymousskinner’s reference to neofunctionalization through gene duplication correctly, that’s exactly what is claimed to be true, and it has apparently been discussed since at least 1936.
In the specific case of gene duplication events, a mutation may actually be beneficial. Your physiology likely doesn’t want those extra identical copies of the duplicated protein. The reason why Down’s syndrome is the most survivable of the non-sex-chromosome trisomies, and the reason why all but one of the X chromosomes in most of a genetic woman’s (or Klinefelter man’s) cells are silenced, is gene dosage considerations. (This gets more complicated if the mutated protein is part of a protein complex, or even a homodimer/tetramer, but the general idea holds – you usually don’t want more of the protein.)
Yeah, there is a lot of waste and redundancy in living organisms, and any waste from the vast majority of mutations is going to be on the order of fractions of a percent of general energy usage (unless the mutation targets highly expressed genes or housekeeping genes). A little extra waste generally isn’t enough to completely eliminate a slightly detrimental mutation as long as there is enough elbow room to move to.
I don’t know if at this point in our history as a species Down’s syndrome and the deadly effects of chromosome duplication are good support for the idea that shutting off genes is going to be beneficial in any but the rare case.
By this point, we’ve had diploid chromosomes for millions of years, far longer than we’ve been human. I think we’re pretty thoroughly adapted to the extra copies by now, though I’ll be interested in evidence otherwise.
Certainly granted that going higher is going to cause problems.
Yes, we are adapted to the expected extra copies from a diploid genome. We generally are not adapted to additional copies beyond that. X chromosome silencing is theoretically universalizable to all other chromosomes, but naturally it only works on the X chromosome.
Not to say it will be lethal in the case of a single gene. But it will be a net burden biologically, if only due to the need to upregulate repressors. In this respect mutation of the copied gene may be beneficial.
While plants can typically go polyploid more or less forever, Xenopus-like polyploid species in animals are pretty rare.
(As an aside: It’s pretty frequent that genes for particular pathways or functions are found together in the genome. This aids regulation of the entire pathway. Depending on where in the genome a particular gene is duplicated, it could start being co-expressed with a particular pathway or function, and could start having novel activity thanks solely to this co-expression.)
Pulling a current list of known human pseudogenes from BioMart, I get over 20,000.
Considering there are only about 30,000 genes in the human genome…
Even removing duplicates of the same gene, there are still over 17,000.
So more than half of the functional human genome has imperfect copies floating around in normal healthy adults.
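The arithmetic behind “more than half,” using the rough counts quoted here (both are the commenter’s approximations, not authoritative figures):

```python
coding_genes = 30_000        # approximate human gene count (figure from above)
pseudogene_parents = 17_000  # distinct genes with at least one pseudogene copy

fraction = pseudogene_parents / coding_genes
print(f"{fraction:.0%}")  # 57%
```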
Sudden duplication of entire chromosomes has a much larger deleterious effect.
That’s exactly my point.
edit: Re: the fact these are pseudogenes, not identically functional copies.
The common evolutionary skeptic insists on a distinction between microevolution and macroevolution; in some cases this is a bogus distinction, but let’s run with it and debate on these terms. Generally, I can get people to agree on the evolutionary mechanisms of speciation. It’s the meta-category of the origin of entire classes, families, and phyla that these skeptics object to.
I have found the best answers to these questions in *On the Origin of Phyla* by Valentine and in the very helpful SEP article on macroevolution. The basic idea is one which many theists will be happy with (for some not entirely clear reason): gradual change through natural selection and genetic drift isn’t the only process at work. There is also ecosystem equilibrium. Most of the time, species are unable to change much genetically, because they are locked into their ecological niche. But when an enormous change happens in the environment and kills most of the energy consumers, the resulting scene has much free energy and no available species to consume it. In these scenarios genetic deviation has a far higher chance of being selectively useful. The origin of this idea comes from Stephen Jay Gould, among others, and it is called punctuated equilibrium. The SEP article claims that this account of the evolutionary process has held up fairly well against modern genome and dating techniques.
Creationists really love them some Gould, which is just bizarre. But thanks, I’ve never heard punctuated equilibrium described quite in that way. Karl Friston must be right, free energy is everywhere.
John Maynard Smith wrote in 1995 that:
“Gould occupies a rather curious position, particularly on his side of the Atlantic. Because of the excellence of his essays, he has come to be seen by non-biologists as the preeminent evolutionary theorist. In contrast, the evolutionary biologists with whom I have discussed his work tend to see him as a man whose ideas are so confused as to be hardly worth bothering with, but as one who should not be publicly criticized because he is at least on our side against the creationists. All this would not matter, were it not that he is giving non-biologists a largely false picture of the state of evolutionary theory.”
If it is true that Gould is now often used by creationists (which would not be surprising, since incompetents and frauds with undeserved high status* make very good opponents, if you want to advance your own position), then this could serve as a warning against shielding ‘allies’ from criticism to present a unified front against the ‘enemy.’ Doing so doesn’t merely discredit you in the eyes of many when the mighty fall, but also leaves many people misinformed while confident that their beliefs are scientific truths.
* Even more so, if their status is in decline.
Obviously I’m arguing from a position of non-expertise, but I was under the impression that Gould was viewed as someone whose arguments against others in the field were at best over-stated and at worst dishonest, but that the idea of punctuated equilibrium is still a viable one?
I am definitely NOT claiming Gould should be immune to criticism. But my understanding is that punctuated equilibrium is still a viable, though originally ill-defined, hypothesis.
From the SEP article:
This mildly positive appraisal, combined with modern critiques of the theory, makes for an interesting set of scientific questions and explorations: questions which I think awaken wonder and interest in an honest interlocutor, as opposed to fear of some nonexistent establishment dogma.
Right. I really haven’t read much of this stuff since the days of my philosophy degree, so I was approaching it from that perspective, but my reading was always that
(a) Gould fundamentally didn’t get what Dawkins, Maynard Smith, etc. were trying to argue, or at least ended up misrepresenting aspects of their program.
(b) His general insistence on alternatives to pure adaptationism was nevertheless an important one that many of his opponents never took seriously enough (Dennett being a notable exception).
(c) Punctuated equilibrium in specific (to the extent it was specified at all) is/was an important concept to grapple with, and one that presents a challenge to a straightforward pure adaptationist story (and the idea I waffled on about in OP might be one way that kind of punctuation might occur).
I think you were right to bring up PE, as it’s probably reading about it twenty years ago that started the process of me thinking about this stuff.
One idea that I think came from Gould, and that seems to my amateur mind to be useful in thinking about evolution, is the idea of spandrels: basically structures that evolved for one purpose, got superseded and became unnecessary, but then were lying around and got built on by evolution.
Another important idea that I (again, as an amateur with an interest in evolution) take from my reading: a lot of important ideas in evolution (kin selection, group selection, evolutionary game theory) come from looking at a simpler version of the theory of evolution and noticing holes–things we can observe in nature that don’t seem like they should exist given the theory of evolution as previously understood.
This is a pretty common pattern in science. And it highlights why it’s really bad for science to get taken over by conflict theory types (let’s spread the correct dogma and stamp out the incorrect dogma) instead of mistake theory types. The conflict theory types attack dissent from the existing theory even when it’s pointing to a real hole that needs explaining.
Now, that doesn’t mean that young earth creationists are likely to be the source of a lot of useful critiques of evolutionary theory. But it does mean that it’s *really* important not to jump on people to silence their *accurate* (or even plausible) criticisms of evolutionary theory, and to acknowledge places where the current theory doesn’t explain the world all that well.
I don’t think it makes much sense to say that punctuated equilibrium is a challenge to the adaptationist story unless you hold the hypothesis that there is minimal variation in how well adapted an organism is to its environment (which tends to drive how fast adaptation occurs).
But that variation was studied from an adaptationist point of view before Gould. Lab evolution experiments and breeding have often relied upon being able to rapidly increase the speed of adaptation by changing the environment. For plant and animal breeding, it’s pretty much the entire game. You select for something unnatural like ridiculous coloration, super small size, or hairlessness, and get it in a geological blink of an eye. Darwin wrote about animal breeding in The Origin of Species, so the connection was known.
Mid-20th-century evolutionary geneticists appear to have been aware of the question too. I don’t know if early-20th-century biologists like Dobzhansky/Fisher/Haldane/Wright thought about the speed of adaptation in their work, but I’d be very surprised if none of them considered it. Levins and Kimura did consider the effect of environmental fluctuations on the optimal mutation rate, and the implications this might have for adaptation, in papers in the 1960s, which shows that evolutionary geneticists had already considered the issue of varying rates of adaptation before Gould published his 1972 paper.
To be fair, quantitatively linking things all the way from the molecular level to the long-term adaptation of morphology is a very tall order. I think people working closer to the genetic level have generally been more interested in the more immediate, smaller questions they could plausibly answer, which is why there isn’t a strong basis of theory tied to experiment and natural history that lets you build up from understanding evolution on the generational level to understanding punctuated equilibrium in morphological evolution.
@quanta413 (I don’t think I’m arguing with you here, just trying to clarify a couple of things for myself as much as anything else)
By “pure adaptationism” I meant to say the view (which I think is strongly hinted at, at the very least, by Dawkins et al) that the only important mechanism underlying the creation of new phenotypical traits is mutation + selection.
What the various pieces of evidence people have presented in response to me suggest (to my uneducated mind) is that there’s something of a consensus that this is not the case – an important aspect of the generation of new phenotypical traits is precisely the removal of certain chunks of the genome from selective pressures.
And this is the kind of thing that underlies the population variability you were discussing, maybe?
(Epistemic status: I had literally forgotten about Gould when I made OP, despite having read an awful lot of this debate twenty years ago, so you probably shouldn’t trust anything I’m saying.)
I feel like it both is and isn’t right to say that the only important mechanisms are mutation and selection.
Saying that selection and mutation are the only important mechanisms is a bit like saying if you really want to understand thermodynamics, all you need to know is that a closed system with fixed energy has maximum entropy.
Strictly speaking it’s true that entropy is maximized (or for various cases some free energy is minimized). But it actually doesn’t get you far in any specific case without knowing physical details about the system under study.
As a matter of luck, populations often vary along traits that humans would like to improve meaning we can apply some artificial selection to get stuff we want without understanding the underlying genetic or biochemical/biophysical structure of whatever plant or animal we’re breeding. The “mutation and selection” point of view has contributed useful practical knowledge in how to do this sort of thing better, etc.
The underlying mechanisms of why some traits vary a lot and others don’t (sort of related to the bauplan) is some combination of historical accident, physical/genetic structure, etc. I’d agree that those things are much less well understood than the processes of mutation and selection themselves.
Brian Goodwin sticks out in my mind as someone who did a lot of theoretical investigations into how biological processes create form and structure.
I remember going to a surprisingly interesting talk on heat shock proteins about a similar idea. Mutations are a lot more likely to matter in regions of the genome that already encode protein and are being expressed. The problem is that in any sequence of proteins on a pathway from the original protein to a useful mutant, you’re likely to pass through an intermediate state where the protein is misfolded. Misfolded proteins are damaging over and above the loss of function of the original protein: they promote aggregation and generally muck things up. The defense cells have against misfolded proteins is heat shock proteins, which act as chaperones for misfolded proteins so they don’t mess everything up (as the name suggests, they are also useful for binding to proteins denatured by heat and other stresses). So having more heat shock proteins around lets you make more mistakes without killing the cell, and these researchers showed that organisms where HSP is up-regulated are more capable of evolving in response to adverse conditions.
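The buffering intuition can be put in rough quantitative terms with the textbook diffusion-approximation formula for the fixation probability of a new mutant (this is standard population genetics, not something from the talk, and the numbers are illustrative): shrinking the fitness cost of a misfolded intermediate dramatically raises its chance of drifting through the population rather than being purged.

```python
import math

def fixation_prob(s: float, N: int) -> float:
    """Fixation probability of a single new mutant with selection
    coefficient s in a haploid population of size N (diffusion
    approximation; reduces to the neutral value 1/N as s -> 0)."""
    if s == 0:
        return 1.0 / N
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-2 * N * s))

N = 1000
harsh = fixation_prob(-0.01, N)      # costly misfolded intermediate
buffered = fixation_prob(-0.001, N)  # cost reduced 10x, e.g. by chaperoning
# buffered comes out many orders of magnitude larger than harsh
```

A tenfold reduction in the cost of the intermediate doesn’t buy a tenfold improvement; it buys many orders of magnitude, which is why buffering mechanisms like HSPs could plausibly matter so much for evolvability.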
Huh, yes, this sounds like empirical evidence of exactly the kind of thing I’m talking about. Thanks!
Neofunctionalization through gene duplication events.
Can you dumb that down?
Yes, but I was eating at the time and only had one hand free.
Heh, guess I could have googled it myself. Thanks though!
I’m sure you’ve considered this, but if you have a largely useless chunk of DNA, sure, it can mutate frequently without causing problems, but due to it being, well, largely useless, the range of possible improvements is also much smaller.
Like, if it’s junk DNA because the promoter region (is that the right term? It’s been ages) is mutated away from recognition, that DNA can do whatever to no effect. If it somehow experienced enough mutations to make something useful, it needs an additional mutation to turn back on.
So while there’s less downside, there’s also less upside, or at least upside that is even harder to achieve.
Yeah, that’s an important qualifier. Which is probably why several of the examples that people have mentioned upthread are of things which were functional but due to duplication or something else stop being selected for, and so you’ve got a pre-existing useful structure which can now be played with.
The heat shock proteins mentioned by @zzzzort, on the other hand, seem to be pretty close to allowing truly useless mutations for their own sake.
It is widely recognised that selection reduces diversity. It essentially has to be that way, when you think about it. Nonetheless:
The canonical depiction of gene evolution is the duplication of a gene – after which one of the two copies may acquire a new function.
Eukaryotic genomes quite often have multiple versions of essentially the same gene, typically each used in different tissues or at different times, in a variety of combinations – and regulated in even more convoluted manners. They have even more ‘junk DNA’ which mostly sits around doing nothing. So there’s a lot of scope for random recombination or mutation events which might do something useful.
I thought I remembered reports in the mainstream news from a few years ago- about the discovery of a gene which had gained novel function through acquisition of a piece of hitherto non-coding DNA in a particular species of fruit flies.
Unfortunately I couldn’t find any obvious references to that with a brief googling session, but I did find a relevant paper. They claim that spontaneous, ‘truly’ new genes do occur, and are most likely to arise in the testes.
The paper has references to prior examples in the literature.
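The duplication-then-drift picture (one copy held in place by purifying selection, the spare free to wander) is easy to caricature in a toy simulation. Everything here is made up for illustration: the “gene” is an arbitrary string, and strict purifying selection is modeled as rejecting every change outright.

```python
import random

def diverge(seq, attempts, constrained, rng):
    """Apply random point mutations to a sequence. A constrained copy
    rejects every change (a caricature of strict purifying selection);
    a spare copy keeps them all. Returns the final sequence."""
    seq = list(seq)
    for _ in range(attempts):
        i = rng.randrange(len(seq))
        new_base = rng.choice([b for b in "ACGT" if b != seq[i]])
        if not constrained:
            seq[i] = new_base
    return "".join(seq)

rng = random.Random(0)
gene = "ATGGCGTTACCGGATTACGCTAAGGCTTAA"  # arbitrary 30-base toy "gene"
original = diverge(gene, 50, constrained=True, rng=rng)
spare = diverge(gene, 50, constrained=False, rng=rng)

hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
# The constrained copy stays identical to the ancestor;
# the unconstrained spare drifts away from it.
```

The spare copy’s accumulated differences are the raw material that neofunctionalization (mentioned upthread) can later act on.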
Hmmm… I mean Charles Darwin wrote The Origin of Species purely as a means of explaining the diversity of species via selection (that’s essentially the subtitle).
But I think what you’re saying is that the action of selection is to reduce local diversity in a population, selecting a particular genotype going forward? Which is consistent with it being responsible for the overall diversity we see in life.
The act of selection is to winnow down the options.
It can introduce diversity at the phenotype level, if you separate a population and they are under different selection pressure.*
Like if you select the dinner entree from a well stocked kitchen, the resulting meal will have less diversity at the ingredient level than the kitchen as a whole, but it will be distinct from another meal made from the same kitchen.
Mutations add diversity in the same way wandering drunk through a market stall with a basket adds ingredients to your dinner. And the market specializes in poison.
*I realize it can also serve to keep some alleles around at low relative frequency in a single population if they have multiple functions or the environment is unstable.
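The “winnowing” point is mechanical enough to demonstrate in a toy Wright-Fisher-style resampling loop. This is a sketch under deliberately simple assumptions (fixed population size, no mutation, made-up fitness values): with no mutation, resampling can only preserve or lose alleles, never create them, so the count of distinct alleles can only fall or stay flat.

```python
import random

def generation(pop, fitness, rng):
    """One round of fitness-weighted resampling: selection plus drift,
    with no mutation, so no new alleles can ever appear."""
    weights = [fitness[a] for a in pop]
    return rng.choices(pop, weights=weights, k=len(pop))

rng = random.Random(42)
pop = list("AABBCCDDEE") * 10  # 5 alleles, 100 individuals
fitness = {"A": 1.1, "B": 1.0, "C": 1.0, "D": 0.9, "E": 0.8}

counts = [len(set(pop))]  # distinct alleles each generation
for _ in range(200):
    pop = generation(pop, fitness, rng)
    counts.append(len(set(pop)))
# counts is non-increasing: selection (and drift) only winnow the menu.
```

Separating the population and giving each half different fitness tables is exactly the “two meals from one kitchen” case: each winnows toward a different allele.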
It tends to make for an interesting meal though!
Reminds me of that ancient Chinese curse: “May you consume interesting meals.”
$5.99 with egg roll!
When it comes down to it, selection is about removing some variation from the population. For evolution to work though, heritable changes (i.e. mutations) also need to occur.
This is all in line with the speculation in your original post.
I suppose the other thing to consider is that some mutations aren’t immediately beneficial or deleterious. These “neutral mutations” can spread in a population basically at random, which can matter later, when they set up a subsequent event.
Two generic examples:
1) Suppose there’s a codon in a gene encoding one particular amino acid, which works fine, but it would be better if it were some completely different amino acid. It’s not possible to switch directly from one particular sequence to every other amino acid by changing a single base (which is a common form of mutation). But it may be possible to do it via two (or more) steps, with the intermediate sequences having the same fitness, or at least not being very much worse. A bit like that game where you go from one word to another, changing only a single letter at a time.
2) Gene duplications are often neutral in that it doesn’t matter much that there’s a bit more of whatever protein. The second copy is then free to accrete random changes, which may confer a new and beneficial function.
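The word-ladder analogy in (1) can be made literal with a toy breadth-first search over codons. The codon-to-amino-acid table below is a small, accurate slice of the standard genetic code (just the codons needed here); the simplification is that any codon in the table counts as an “acceptable” intermediate, standing in for “not much worse fitness.”

```python
from collections import deque

# Small slice of the standard genetic code (DNA codons, first column).
CODE = {
    "TTT": "Phe", "TTC": "Phe",
    "TTA": "Leu", "TTG": "Leu", "CTT": "Leu", "CTC": "Leu",
    "CTA": "Leu", "CTG": "Leu",
    "ATT": "Ile", "ATC": "Ile", "ATA": "Ile",
    "ATG": "Met",
    "GTT": "Val", "GTC": "Val", "GTA": "Val", "GTG": "Val",
}

def neighbors(codon):
    """All codons one point mutation away (restricted to our table)."""
    for i in range(3):
        for b in "ACGT":
            if b != codon[i]:
                c = codon[:i] + b + codon[i + 1:]
                if c in CODE:
                    yield c

def ladder(start, goal):
    """Shortest single-mutation path from start to goal (BFS)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Phe (TTT) and Met (ATG) differ at two positions, so no single
# mutation connects them; a two-step ladder through a viable
# intermediate codon does.
path = ladder("TTT", "ATG")
```

The search finds a two-mutation path through an intermediate codon that still encodes a normal amino acid, which is exactly the “same fitness, or at least not much worse” stepping-stone in the comment above.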
Yes, this is exactly the kind of thing I was thinking about, thanks again.
I wasn’t going to comment, but it kept bugging me.
The intelligent design idea you’re paraphrasing is usually called “irreducible complexity”: the component parts of a useful adaptation are not themselves useful, so how could they have been selected for long enough for all the other, individually useless, adaptations to also develop? The eye has been used as an example in the past, and no doubt many laymen still use it today. But the term “irreducible complexity” was coined by Michael Behe, who popularized it in his book Darwin’s Black Box. I bring this up because in that very book, which came out in 1996, he notes that the eye has been used as an example of complexity that would be difficult to evolve, and then he points out exactly what you point out. He states straightforwardly that, as you say, we can “imagine fitness-enhancing ways that each of the many subcomponents might develop, we have extant examples of virtually every one of these stages in different species living today.” That is not an argument he makes.
Instead he points out that even though you can draw a hypothetical line of evolution from a primitive light-sensitive patch to a modern eye, and even though you can see those stages in different living species today, that doesn’t solve the problem, because even the simplest light-sensitive patch is ridiculously complicated. It requires complex cascades of chemical chain reactions with a variety of proteins, all of which need to be available, and in a form and place that allows them to work.
I’m not here to say he was right. I’m just a nitpicker who happened to have read the book and wanted the argument stated correctly. I want to note that the serious advocates of irreducible complexity as an obstacle to certain models of evolution do not cite the eye, as an anatomical feature, as an argument, but rather the foundational biochemistry that makes even the simplest element of an eye possible. Advocates of irreducible complexity spend a lot of time talking about blood clotting (a process that involves reactions between at least 10 proteins, and one that will probably kill you if it goes slightly wrong), the immune system, and cellular proteins, and less time talking about eyes or wings or gills. Whether those arguments stand, I don’t know.
Thanks for the clarification. I appreciate the importance of being fair to one’s opponents (one of the reasons I hang out here).
I haven’t read Behe, but he is a favourite punching bag of some of the anti-creationists I used to read.
Honestly, I find myself almost completely uninterested in seriously engaging with creationist ideas any longer. That ship sailed for me long ago.
I’m not surprised Behe is a favorite punching bag: Darwin’s Black Box made a big splash in the discourse when it came out, and is still influential in intelligent design and creationist circles today. If I recall correctly, it’s a pretty good book. I have no idea if the criticisms pan out scientifically, but Behe struck me as a scientist who honestly became concerned that current evolutionary theories couldn’t explain the biochemistry he worked with in his own field. Unfortunately he found a lot of creationists who were willing to boost him, and as a result became a target for anti-creationists. At the time he published his book he didn’t even consider himself a creationist, preferring intelligent design as a label, but I don’t think most people on the outside care about the distinction. He’s made it clear he believes in common descent of species and accepts the standard age of the Earth, but he’s convinced that many biochemical processes could not have evolved from scratch, and it’s made him a pariah. I mean, he still has his teaching job, and he got a fellowship at the Discovery Institute for his trouble, but his own department issued a statement saying that they disavow his ideas, and he hasn’t written any books since 2007.
Did the ship sail for you because of particularly egregious bad faith arguments, or just fatigue? I stopped following the fight years ago because it was obvious nobody was going to change their minds or be “proven” right or wrong. With enough epicycles you can make any model work.
The two go hand in hand in my experience.
But really it’s just that I am utterly convinced of the truth of something reasonably close to the modern neo-Darwinian synthesis. I understand enough of the ideas and the evidence for them at a level which satisfies me that I will never come across anything that causes me to doubt them in any serious way. There is certainly a great deal we don’t understand, and there’s plenty of arguments within the field, but these are all within the same basic framework that I do not believe will ever be superseded (except in the way that Einstein supersedes Newton, which is to say that Newtonian physics is simply a special case of Einsteinian physics).
If someone doesn’t accept it, they’re just wrong. They should stop being wrong, but if they’re convinced by the kinds of arguments I’ve come across against it, I don’t see much that I can do to help. Some fights just aren’t worth having.
What’s this about cdesign proponentsists, you say?
And even if you have a simple way of detecting photons, you also need a way to transmit that information and a response to it. Having only one or two of these three parts is useless.
A light-sensitive patch doesn’t seem like it has to be complicated. Plenty of bacteria perform chemotaxis, and evolving phototaxis doesn’t seem like much of a stretch from there.
Blood clotting does seem complicated, with the line between not losing a significant amount of blood from minor cuts and not having a capillary blocked by clotting every day uncomfortably thin.
In both cases, it seems very plausible to me that these developed as marginally working side effects of other mechanisms, which then evolved specifically for this purpose, since the benefits are so enormous.
Ultimately, a barely functioning clotting mechanism merely requires having something in the blood that oxidizes in a way that turns it into a gel.
So how much work did you put into investigating the complexity of light sensing or blood clotting? If the answer is “absolutely none at all” then this is just as much an error as the argument from incredulity.
I mean, it’s plausible to me that these evolved, but I’m a biologist and without looking at the literature for e.g. light sensing I assume that there are multiple reactions involved, there will be ‘finely honed’ enzymes creating and recycling the unusual substrates, and likely some dedicated subcellular structures even in the simplest case known.
I am, though, aware that the initial point at which a biological trait starts to become useful will not tend to keep on existing in that state, and even the simplest currently extant system identified will likely have undergone massive selection and have evolved considerably.
I really want to push back on this. I refuse to accept that we all have to personally know the justifications for everything we believe, before we are entitled to ignore counter-arguments. There isn’t nearly enough time.
Creationism is wrong. Irreducible complexity is wrong. People who take it seriously are wasting their time. I don’t fucking care how clever the arguments for it are, that’s why my taxes pay clever people like you.
I’ve been making some version of this argument in basically every OT for the past few weeks, but there are plenty of fights that are not worth having. You can absolutely get a lot out of a deep dive into the literature on some debate, and learning stuff is always good. But there’s a finite amount of time before you’re going to die, and if you’re actually doing anything productive with your life that isn’t being a professional debunker of shitty arguments (which is a respectable profession, to be sure), then you can do this for maximum two or three things.
To pretend that I’m on some kind of equal footing vis-a-vis these issues with a creationist, even if they’re very sincere, even if they can cite the literature better than I can, even if they have a biology degree, is just making a fundamental error about epistemology. That’s not how knowledge works, or how it should work.
At this point we’re veering pretty sharply into culture war, but I think that the best response here is probably Scott’s Epistemic Learned Helplessness.
If I believe in creationism (and I do, although not 6000-year YEC), then smart atheist biologists are as unable to convince me as smart creationist biologists are to convince you.
Agreed on all fronts (except, obviously, the bit about creationism). I hope it’s not taken as a sign of disrespect for your other positions that I feel this way. I think there’s plenty we agree on, and plenty we can meaningfully disagree about (and if I had the energy any longer, biology could be one of those things).
I’ll let this go for this thread, since as you say it’s veering into CW.
Also it’s perhaps worth noting I was making precisely the same meta-argument on the post you linked. I have a thing for flogging dead horses, apparently.
No, I didn’t take it as an attack at all. I was in violent agreement with you. We come from different axioms that make creationism/evolution the “right” answer, we both have smart people on our sides, and neither of us has the time or energy to devote to sit down and grind through it.
I enjoy chatting with you and all the folk on this board.
Enkidum, I wasn’t responding to you, but Abystander’s claim that light sensing “doesn’t seem like it has to be complicated”, and Aarpie’s similar claim for clotting blood.
The point is that arguing that something isn’t complicated, without knowing those details – is just as much an error as arguing that something is too complicated to evolve, without looking at how it could.
I sympathise with your position that the question is ‘solved’, but I believe in trying to get things right, and that also goes for arguments made by whichever side I might be on.
I’m not a biologist, but I know that prokaryotes have receptors on their membranes to sense chemicals in their environment. Having a photoreceptor that doesn’t involve any more of a chemical cascade than the other receptors seems reasonable.
Looking it up, channelrhodopsin is one: the light energy causes a temporary conformational change which opens an ion channel. No recycling of substrates required.
I think the second is more of an error. The complication argument should apply to many designed things as well as evolved things. Perhaps even more than it applies to evolved things. If irreducible complexity was a common barrier, you wouldn’t be able to explain the physics of an inclined plane to high accuracy without understanding quantum field theory. Indeed, you’d have extreme difficulty building most things because everything would depend sensitively on the level below it and our physical understanding of stuff much smaller than us tends to lag behind our physical understanding of stuff about our size.
I think the error is deeper. It’s an error to assume that something being complicated for a human to understand has any bearing at all on the likelihood of it evolving. Something could be stupidly complicated from a human perspective and evolve easily or be stupidly simple from a human perspective and basically never evolve.
Like, do you see any animals with wheels? Pretty much no, although god knows maybe there is some obscure case. But wheels are pretty simple and have been used by humans for thousands of years. On the other hand, eyes are pretty complicated and still better in some ways than cameras that humans only started being able to make in the last 200 years. But even Behe agrees eyes could evolve given you start with simpler (yet still complicated) components. So eyes are complicated and even some opponents agree they could evolve, yet something as simple as a wheel on an animal doesn’t seem to happen by evolution.
Some things you can biologically rule out for physical reasons, but that mostly rules out obviously absurd things like birds that fly by jet engine. If some daily action would consume far more energy each day than could possibly be obtained in an animal’s diet even assuming 100% efficiency then it’s probably a no go for at least a long while. But well… humans build jet engines.
There are physical things that are suspiciously not biological like wheels, but I don’t think the correlation between “Doesn’t seem to evolve” (and maybe can’t) and “hard to understand” is high enough to be a useful signal.
Sure. But what if they say those are complex?
(And they actually are. Even one pore protein is a complicated machine in its own right.)
A number of (common, unrelated) systems do, though, and they may also have to be explained. But in terms of the general discussion, I’d say it’s fine as an example, and once you’ve covered one (or perhaps a couple, if you’re generous), you could say it’s ‘done’.
What I was saying was just that for your argument to be valid, you do have to go off and look at the thing and confirm that, yes, it is simple enough to explain by the processes you’re invoking. Fail to do that and you’re just hand-waving.
I don’t particularly want to get hung up on the precise values of how bad either error is. But you seem to be arguing a somewhat different point here.
I don’t think anyone contends that it’s necessary to understand everything ‘at all levels’ to design or create anything.
Well, yes and no.
Humans design things in a different way to evolution. And failing to realise that would be an error. But I don’t think it’s the error in the ‘irreducible complexity’ argument.
I would say that instead it’s a failure to realise that the system you see (even the simplest existing version) does not necessarily map directly to the original. The commonly given analogy here is a bridge or arch made of stone blocks. If you are only allowed to move one thing at once, how is it possible to build the bridge, when half a bridge will collapse? The solution is of course to use a form of some kind to hold up the intervening stages, but remove it once the keystone is in place.
Out of interest, there are some, although their validity does kind of depend on what you think the definition of a wheel is.
ISTM that the irreducible complexity argument could work as an argument against the current understanding of evolution, if you could find a complicated system where you could show that there was no way to hill-climb or blunder into getting the whole thing working. But then, the answer to that should probably be trying to expand your understanding of evolutionary processes (which can explain basically everything we see in the living world, and without which nothing makes much sense) to account for how these complicated no-hill-climbing-solution systems had arisen.
We could also, in principle, discover biological features that could only have arisen from conscious design. You could imagine that was the signature of God, but just as easily that it was the signature of advanced aliens or some advanced civilization on Earth from long enough ago that only their genetic modifications to some species remained as evidence of their existence. Or that there was some other interesting thing going on that needed to be added to your model.
They don’t say that explicitly but I’d argue it’s rather implicit. The arguments I’ve seen are never clear enough to explain why the combinatorial difficulty argument doesn’t apply to any complex structure including designed ones.
It’s just tacitly assumed that despite often poor human understanding of the tiny bits making up bigger things (I was exaggerating for effect by relating inclined planes to quantum field theory, but the same objection would apply between levels closer together in scale), that they can surmount all sorts of challenges in building stuff and that evolution somehow can’t. Despite the evidence that evolution often can, just like humans obviously often can. And that evolution had (or would have had if you don’t believe it) ~5 orders of magnitude more time than humans have had.
Although they’re obviously plants not animals, I knew about tumbleweeds; they were the only thing I could think of. I didn’t know about rolling larvae, but I dunno if they should count.
I forgot about dung beetles although they haven’t yet evolved to race around on top of their dung ball. Maybe an experimental breeding program can make it happen.
I’d never heard of the crystalline style of gastropods which is kind of like a wheel for grinding food. It’s not locomotive, but it seems like the closest anatomical feature to a wheel. I think I’d count that.
That was what, at least originally, intelligent design advocates were trying to do by differentiating themselves from creationists. The idea was “Hey, we don’t need to posit God per se, but it looks like these biological features needed to have been designed by somebody, and our models should be updated accordingly.” Francis Crick, for instance, tentatively believed (or at least thought it plausible) that life on Earth may have been seeded by alien intelligences. Why? Because he had concerns about whether DNA could have evolved.
Unfortunately once you open the “design” door, you can’t help but open the door to God as well. Or as Behe himself put it:
Sticking God into the gaps of your theory will always patch the gaps, but it’s pretty hard to get a testable prediction out of them, since God can do anything.
I have seen** the view that research into many climate change mitigation efforts (energy storage, clean energy production, geoengineering, breeding crops and cattle, etc.) is so talent limited that it doesn’t make sense to throw more money at it, because more money won’t solve the talent problem. While there may be fields where talent is actually a limit on the research that can be done (AI research, maybe?), I think the role of effective institutions in research is ignored in this argument.
There is a lot of grunt work in science (at least in biology, the field I am familiar with). It needs to be done by a reasonably smart, skilled person, because otherwise lots of reagents and equipment time is wasted. But that work doesn’t need genius; it requires a certain amount of grit and intelligence, as well as organizational skills. And at the moment, the public institutions in science don’t effectively channel funds so you can have a genius with a team of people doing the work. There are no well-paid, relatively stable jobs for people who can and want to do the grunt work.
Although I call it grunt work (because it’s repetitive and quite boring once you learn it), that doesn’t make it easy to learn, or a job that deserves bad pay. It’s a job for quite smart people, and it should be paid well enough. And stability is important, not just for recruiting and retaining smart technicians, but also for building long-term continuity in research.
And that is lacking in science, and money could solve part of the problem. Just by hiring a secretary and a technician who can handle a group’s paperwork and labkeeping, productivity can be increased greatly. And so far, except for some of the richest universities/institutes, I don’t see enough institutional organization in science.
So yes, you can throw loads more money at the problem before the money becomes useless and gets spent on hookers*.
*Not that no money is spent on hookers now, of course. I think that some part of the money will inevitably end up in corrupt uses.
**EDIT: expressed in various SSC comments, and nowhere else.
I’d be pretty darn surprised to find that research into climate science is limited by the scientific talent available. What are the signs pointing to that?
Science in general is full of really fiercely bright people fighting for a very, very few spots in universities and a few research institutes here and there. A PhD is really just a beginning for these people. Then come the postdocs, often two or three of them. And then, maybe, a faculty position.
A good sign that climate science is going begging for talent would be that some institutions are so starved for good candidates for faculty positions that they are willing to offer some of the brighter PhD graduates positions even without postdoc appointments. Is that happening?
That can’t happen, because faculty positions are awarded based on credentials and publications rather than actual talent.
There will always be people with lots of credentials; that doesn’t mean they are actually talented.
The processes for earning those credentials and accumulating those publications are extremely competitive, particularly at the more prestigious institutions. I won’t defend every bit of academic practice, but the processes seem at least mostly merit-based. The people in the doctoral program I managed to get admitted into and stagger through were really very impressive, and those who went on to faculty positions were even more so.
I don’t doubt that they are really smart, impressive people. I meet many of them regularly. It’s just that it isn’t enough to make a really big contribution. A lot more is required, I guess.
Just a minute now. Are we arguing past each other here?
I am arguing the limiting resource for progress in climate research right now is probably not the talent available.
And you are arguing the limiting resource is money, which if available would be best spent on adding more support staff, in the form of various sorts of technicians. Is that right?
Then we are not actually in conflict. I have no problem accepting that money might be the limiting resource.
We are in agreement. I think the limiting factor is money and solid institutions.
Since I believe I was one of the commenters referred to in the OP, I thought that I would clarify my position. A few weeks ago I said that increasing climate change mitigation research by an order of magnitude would be impractical due to talent constraints. However, I agree with the discussion here that we are not currently talent constrained. We could probably increase funding by 50-100% without really hitting a talent wall. (And of course in the long term we could shift the talent pipeline.)
PS At least in materials science people sometimes get faculty jobs without doing a postdoc. I’m not sure about climate science though.
Given the number of physics and biology PhD’s I run into working in finance because they couldn’t get a job in science research, I don’t think we’re anywhere close to “no one who might contribute to a solution is available to work on the topic.”
You are correct that the problem isn’t a shortage of talented individuals; it is a shortage of institutions that are capable of utilizing those individual talents effectively. And I think that distinction is usually made pretty clearly around here; I know I have always tried to be careful in that regard. Unfortunately, that makes the problem harder, because we have a pretty good idea of how to develop individual talent, but institution-building is still hit-or-miss, with way more misses.
You are also correct that the best immediate way to deal with this problem is to flesh out existing institutions where they are weak. And yes, hiring a first-rate secretary for every section’s worth of scientists or engineers working on Problem [X] will probably wind up doing more good than hiring an equivalent number of [X] specialists. But it’s also completely unglamorous, and aside from a few very enlightened outsiders the constituency for it is A: secretaries and B: scientists who would be shooting their reputation in the foot by saying they are handicapped by having to do their own paperwork, the horror. Lab technicians might be a little easier to sell, but this is going to be a very hard sell in the political sphere.
At least among fellow scientists, that won’t be the case. They know how much paperwork team leaders handle, and how busy their schedules get. I still regularly get surprised at how many really important and key people in the field who handle huge budgets and teams don’t have secretaries. Some do, but quite a few don’t.
Right, so why don’t they have secretaries? They have the budget for, say, N postdocs; the same budget should cover N-1 postdocs and a department secretary, so why aren’t they doing that?
I can think of several possible explanations, but they are all stable ones that will endure despite our rational determination that the next marginal secretary would contribute more than the last marginal postdoc.
The grants allow for paying a postdoc; they typically don’t allow for paying a permanent tech, much less a non-science person.
“It is very easy to get funding for students and post-docs, but virtually impossible to get funding for a full-time technician or researcher that can oversee or assist or manage research for multiple projects.”
It would probably be easier for the institution to pay for a secretary from the overhead costs pulled from the grant. But then you have to talk the institution into taking some of their dedicated funds to pay for something only of benefit to the scientist.
Right. So it doesn’t matter that “fellow scientists” will understand the value of proper admin support. The people who actually control the money and make the rules have different priorities. And that’s unfortunately a stable situation that you’re going to have a hard time changing.
Why do the grantors have different priorities? Don’t they have an interest in seeing the work succeed?
Only in the rare case of a moderately wealthy philanthropist who is spending his own money on the research, cares more about the research than about the status he gets from the philanthropy, and is limited to projects small enough to personally oversee.
Otherwise, the decision as to whether a particular scientist or team is allowed to e.g. have a secretary, is made by some sort of middleman and the usual range of principal-agent problems come into play. For example, that particular decision is by definition made by a middleman who has the authority to hire secretaries. Say, for himself. Now he has an incentive to make membership in the class, “people important enough to have secretaries” as locally prestigious and status-enhancing as possible.
Also, schooling is costly enough as it is. There are plenty of worthwhile things to spend that overhead taken from the grant money on. After all, the faculty seem to be doing fine; they got the grants already without secretarial support, right?
An issue is also that collecting funds is a big job (writing a proposal for an EU subsidy can easily take months of full-time or more-than-full-time effort) and that the funds are often granted to an individual. This person also gets sole credit for getting the funding, not any PhD student who might have helped.
This strongly encourages a situation where very good scientists stop doing science, but become fund raisers. There is too little incentive for PhD students to assist or for universities to help the team lead (because if he/she leaves for another university, the funding leaves with them). That latter situation also makes it harder for (in particular non-top tier) universities to create a long term expertise center on a subject. If their rock star scientist leaves, they are in huge trouble. Also, universities don’t get sufficient credit for having a broad base of good scientists.
Rather than relieve the rock star scientists with a secretary, I would favor reforming this system so that grants can be made to universities or to inter-university groups. The person taking the lead could then be someone who is good at selling the research, not necessarily a top researcher.
Sure. Universities should be encouraged to invest in building assets and groups, and big funding should be allocated to researchers together with the university.
The only problem I see with that is with the ERC grants. When Switzerland was cut off from Horizon 2020, researchers had to either leave or lose their funding. Governments would have to guarantee funding for such risky bids, like the UK government is doing, so the universities can remain stable.
I have often thought that one of the primary limiting factors on scientific progress is lack of organizational capabilities. You have a bunch of people who are hired and promoted for being flashy mavericks, essentially. Which is great, you need people like that. But they need to be able to have a team behind them that allows them to get things done.
This isn’t precisely saying that departments need this kind of administrative staff (though they might); rather, individual working groups or profs do. Profs are usually mini-dictators, which I think is actually more or less the ideal situation, but it means they don’t necessarily play well with external constraints. And dictators need a good support staff.
To many commenters on this thread:
There are people working in climate-adjacent fields who did not get a Ph.D. for whatever reason whose talents can still be cultivated.
You see this all the time in programming; alternative tracks into programming-as-a-job come up constantly in SSC-adjacent discussions. So it should not be a big leap to realize that the same likely applies across disciplines.
Yes, it takes time, but so does getting a Ph.D. in the first place.
If you’re working in academic research, though, either you’ve got a Ph.D, you’re going for a Ph.D, or you’re washing bottles. No respect without the letters.
That isn’t quite right, though it’s a good approximation. It depends a lot on the field. In computer security and cryptography, there are a lot of non-PhDs doing serious work. They’re not at universities, but they’re still publishing papers and presenting them at conferences, designing things people use, breaking real-world systems, etc.
 Except for the grad students who are working toward PhDs.
Not at national labs.
And plenty of people with BS or MS degrees function as lab managers in academia.
And yours is actually an argument that more funding would immediately draw more talent into climate research.
The “a lot of walking” hypothesis for Orthodox Jews overlooks some significant consequences of kashrut.
1) Kosher meat is very expensive. Think $6-8/lb for chicken breast, $4-7/lb for ground beef, $10-13/lb for brisket, etc. The Orthodox tend to eat a lot less meat than the “average” American, particularly in less well-off communities. While most meals include meat, they tend to be the type that stretches a smaller amount of meat across many people.
2) Kosher requirements lead to a lot more home-cooked fare. This means significantly less restaurant food and significantly fewer heat-and-eat foods. If you keep kosher, your options for dining out in most communities are slim. There is effectively no fast food. There is no “I’ll just grab a snack from that corner store.” The opportunities for extra calories outside of meals are fewer, because there are just not as many potential places to eat. Further, there are far fewer heat-and-eat options for quick meals at home. This is particularly so with anything that contains meat.
So, while there are plenty of opportunities for eating highly processed breads and cereals, there are far fewer opportunities for eating highly processed meat based dishes. While a portion of a meal might be highly processed, it is very unlikely that the majority of a meal is highly processed.
An answer to this from the top of this post.
The “comment of the week” in the OP.
I don’t think that post fully engages with my point. It suggests only that Orthodox Jews tend to eat a pretty standard American diet. While that may be true (with the exception of sheer meat quantity), it addresses neither the home-prepared nature of the diet nor the lowered opportunity for excess calories.
Very tentative: might the requirement to say a blessing over food lead to less impulsive eating?
So a solution to obesity may be arbitrary food restrictions that are either only popular with a niche group or that change too fast for companies to cater to them.
You’re suggesting fad diets work.
They probably do, just because if you’re avoiding calorie dense food it’s hard to get too many calories, which is the only thing that actually matters.
I don’t know if it counts as a fad diet, but a few months back I was sufficiently persuaded by Bredesen’s The End of Alzheimers to try some of his recommendations, including the dietary ones—avoiding simple carbohydrates (flour, sugar, rice, potatoes, …) and fasting for sixteen hours between the last meal of one day and the first of the next.
I don’t know if it is accomplishing its intended purpose of stopping or reversing age related cognitive decline, but I lost between fifteen and twenty pounds. I think it was partly a result of dietary constraints reducing the amount I ate, partly the fast eliminating late night nibbles.
Up to a point, although if the fad gets too popular, it stops working, because you will be able to get vegan, gluten-free brownies at Dunkin’ Donuts.
Also, some fads hurt or even kill people who practice them (badly)*.
PS. I think we should be aware that a lot of idiotic behavior does have benefits, possibly even huge ones, which the people who advocate for it don’t understand or even recognize but do benefit from.
* Like bad vegan diets where people lack important nutrients.
What’s with this? I gather that halal and kosher have very similar requirements; most cheap little chicken burger places in London are halal, and there’s no shortage of fast food places selling halal.
There are a few things going on with this:
1) Kashrut rules are much more restrictive than halal. Kosher food fulfills the requirements for halal, but halal food does not fulfill those for kosher. For example, kosher slaughter requires a person specifically trained to perform it, while halal slaughter can be conducted by almost anyone, as long as they use the prescribed method. So a slaughterhouse has to have a specific person come in to slaughter the cows that will be considered kosher. Kosher butchery after slaughter also has very strict rules and additional steps in preparation.
2) Because of the strict rules, there are large organizations (some might say cartels) that certify compliance with kashrut and therefore require payment. I do not know or think that there is a similar structure for halal certification.
There certainly are halal-certification organisations, but I don’t know about the differing economics. One cafeteria where I used to work displayed the halal certificate of some of its (bought-in) food.
I don’t know much about halal, but with kashrut there are a few things that are checked after the slaughter which could make the entire animal unfit to be eaten, and the hindquarters are not eaten.
The hindquarters are kosher, but they require more prep than the rest of the animal.
Anyone know whether, if a butcher doesn’t want to do the prep, the hindquarters are sold to people who don’t care whether the meat is kosher?
I believe the general practice is that, at the slaughterhouse, the kosher butchers take the front portion of the properly slaughtered primal and the rest is sold to others. It is not discarded.
Incidentally, modern butchery practices make it possible to prepare the hindquarters in a kosher way and still be sold at a reasonable (for kosher meat purposes) cost. But… there is a very strong tradition against it, and it is likely that an Orthodox community would stop patronizing a butcher that sold strip steaks, filets, sirloin, etc.
Yeah, in America it isn’t eaten, though in Israel the prep work is done. @Mitv150 do you know if they lose money on the second half? Or does it not have any effect on meat costing more?
This is a great argument for two things:
Smaller portion sizes in microwaveable meals. And smaller, lower wattage microwaves that can only fit these smaller portion sizes. If it takes 40 minutes to cook a standard serving size, people are incentivized to home-cook raw ingredients on the oven or range, or people eat fewer calories.
This isn’t going to fly outside of a totalitarian state.
Presenting Slate Star Showdex, Episode 4. [Just tuning in? Episode 1; Episode 2; Episode 3.]
Dr. Scott “Slate” Alexander is on a plane to Las Vegas.
Thanks to Gwern, I know Hiss’s last known location, plus all the other spots where he might hole up, Scott thinks. I’ll have to be careful: there’s no telling what a cornered lizardman might do, especially one as violent as Hiss. As for Eliezer, the best-case scenario is that he’s being held hostage…
Scott surreptitiously pats his trenchcoat. Only a trained eye would realize that the slight lump near his waist was a concealed revolver.
Las Vegas. In recent years, gambling dens and brothels have sprung up to serve the men working on the Hoover Dam.
Las Vegas. Spanish for “The Meadows,” but already known nationwide as the City of Sin.
“Las Vegas. The City of Sweat,” Scott mutters.
The desert air is sweltering. The Man of Slate removes his hat and coat as soon as he disembarks from the plane.
“There we go,” he says. “From awful to merely miserable.” He forces a smile, but even a lizardman could see through it.
As he nears the exit, Scott detours toward a kiosk which sells a bit of everything.
“Good morning, sir,” says the kiosk vendor.
“Good morning,” Scott says. “Square and Mild. Two packs.”
Hotel El Dorado. Spanish for “Hotel The Golden.” Scott peers at the 20-story stucco-and-red-tile edifice through the tinted window of his cab.
I thought I was going to be careful. But on the way over, I realized that this is no time for caution. Every minute I delay is another minute that Eliezer is in mortal danger.
Dr. Alexander steps out of his taxi, stands up straight, strides boldly into the lobby — and staggers back like a gutshot giraffe. The lobby is small, even cramped, but the decorators compensated for size by slathering every possible surface with gold leaf.
Steady, Scotty. Keep it together.
Scott grits his teeth. He uses his gritted teeth to rip a MealSquare straight out of the pack. The concierge — clad in gold hat, gold jacket, gold boots, and gallons of sweat — sits behind a gold-leafed wooden desk. Our indefatigable doctor-investigator marches right up.
“Can I help you, sir?” asks the sweat-sodden golden god.
Scott recalls Gwern’s dossier. Hiss is here under a false name. “I understand my friend Mr. Rodney is staying here, in Room 1215. Could you telephone and see if he’s available?”
“Certainly, sir. And your name?”
Gwern said that one of Hiss’s closest underworld associates has been pressing him to close a deal.
“My name is Tremendous Value.”
One wet hand lifts the handset of a gold-leafed telephone. The other wet hand dials.
“Mr. Rodney? Ah, but you can take a message? Very good. Please tell him there’s a Mr. Value here in the lobby.” The concierge listens for a moment, thanks his interlocutor, and hangs up. “Mr. Value, a friend of Mr. Rodney’s will be down shortly to show you up. The man in question is wearing a tan suit with a navy tie.” He smiles his first smile of the day.
Scott Alexander smiles back at the concierge. With his coat on to better conceal his gun, he’s beginning to sweat, too.
Step one: take care of the friend.
Scott casually turns around and saunters to the elevator on the opposite side of the lobby. A gold-leafed ironwork grille stands between him and the elevator shaft. Above the grille, a dial shows that the elevator has just descended to Floor Eleven.
The concierge is attending to a flock of tourists. Nobody is attending to Scott.
Ten. Nine. Eight.
Scott steps to one side, turns around to put the elevator on his left, and flattens himself against the wall. He takes off his trenchcoat, and drapes it over his left arm.
Seven. Six. Five.
Under the cover of the draped coat, Scott works the revolver out of his waistband and into his left hand.
Four. Three. Two.
Scott spits out the last of his MealSquare, takes a deep breath, and cocks the hammer.
The elevator arrives. The grille slides sideways. Tan Suit steps out.
The Man of Slate slides to the left and jams the gun up against Tan Suit’s back.
“It’s a pleasure to meet you,” Scott growls. “Turn around, shake my other hand, and take me to Floor Twelve.”
The ride up begins unpleasantly. After being relieved of his weapon, Tan Suit tries to persuade Scott Alexander to leave quietly, but the ersatz criminal has reached his breaking point.
“Listen, you miscreant. I am a licensed doctor. I went to medical school. I have taken apart dead bodies. I know where every single part of a living human belongs. If you try to start anything with me, I will take out all 206 of your bones without letting you die.”
The rest of the ride is uneventful.
Spliggo Hiss stands by the floor-to-ceiling window of his hotel room. He puts the finishing touches on a blueprint which lies on a portable drafting table. A thick sheaf of additional blueprints pokes out of the leather case which the lizardman is wearing on a shoulder strap.
“Thisss is mossst exsstraordinary,” says Hiss. He slides the completed blueprint into the case, then produces a fresh sheet. “Let’s proceed. Tell me how to design the fissssion chamber to withssstand X-rays.”
He kicks the smaller of two wooden crates which sit by the sofa. “Ssspeak!”
From inside the crate comes a wavering, exhausted voice. “Please let me out. I’ve told you so much already.”
“Sssave it! You are ssstaying in that box until my design is complete!”
Hiss’s other stooge, uniformed in a navy suit and tan tie, stands in the corner opposite the door. The window and Hiss are to his right, the bed is to his left, and his pistol is trained on the door.
The toilet flushes.
Dr. Scott “Slate” Alexander has reached the door to Room 1215.
Step two: kick this lizardman’s tail.
“Act normally, or I’ll derive Tremendous Value from blowing you away,” Scott whispers to Tan Suit.
Tan Suit gulps. “Excellent catchphrase, sir.” He knocks on the door. “Boss? I’m here with Mr. Value.”
“My favorite human! Bring him in,” says an inhuman voice.
Scott jabs the gun into Tan Suit’s back. “Open the door and fling yourself flat.”
Tan Suit. On floor.
Room 1215. Door open.
Scott’s revolver. Tan Suit.
Spliggo Hiss. Eyes bulging.
Small crate. Dead silent.
Navy Suit. Scott’s trenchcoat.
Leaping doctor. Blueprint case.
Suit’s pistol. Scott’s shadow.
Hot revolver. Cold claw.
Tan Suit is finished. Navy Suit is slumped in the corner, alive but wounded. Spliggo Hiss is clutching his bloody arm.
“Three of my kind couldn’t ssstop me, Alexssander! What chance do you have?” says Hiss.
“Shut up, scaly! Lay down on the couch with your claws in the air!”
The room is muggy. The tension is gelatinous. The sweat flows like a pudding on the run.
All of a sudden, the bathroom door cracks open. Through the gap between door and frame, several inches of metal tube poke out.
Ratatatatat, opines the tommy gun.
Navy Suit throws himself under the bed. Scott throws himself behind the crates. Spliggo Hiss throws himself out the window.
“Ssso long, Ssscott!” screeches the plummeting lizardman.
Ratatatatat. Enormous .45 rounds blast everything in sight.
The firing stops abruptly. Scott takes a deep breath.
Step three: no idea.
“Bathroom gunman, I’m not here for you,” Scott says. “I came to find Eliezer and capture Hiss, and the latter just threw himself out the window. Let’s lay down our arms and discuss this like men.”
The barrel of the tommy gun recedes through the bathroom doorway. Whoever is on the other side pulls the door shut–
–and swings it open. Scott ducks.
“Dr. Scott Alexander? Is that … is that you?”
The Man of Slate raises his head — and stares in bewilderment.
The George Washington of rationalism nods his head.
“But you — I — Hiss!” Scott says.
“I’m just as confused as you are,” Eliezer says. “I spent most of the last week in a crate.” Scott notices that the lid of the bigger crate has been pushed aside. Eliezer gestures to his soiled clothes. “I convinced Hiss to let me out so I could take a proper leak for the first time in days. He must have been feeling magnanimous with his blueprints almost finished, because he let me go without a guard.”
“How irrational,” Scott says. “And the gun?”
“Every employee at General Intelligence is trained to take certain precautions,” Eliezer says modestly. “I swallow a handful of cartridges with breakfast every day, and under normal circumstances I excrete them every night. The gun is half-size and fragile; I assembled it in the bathroom from components sewn into my clothes.”
“I’ve heard enough,” Scott says. “What about the small crate?”
“I don’t know,” says Eliezer.
“Help,” the crate suggests, in a faint voice.
Eliezer looks at Scott. Scott looks at Eliezer. Navy Suit crawls out from under the bed, dripping blood.
“Hey, wise guy,” Scott says hoarsely. “I’m a doctor. Give me what I want, and I’ll patch your wounds.”
Navy Suit supports himself on an elbow, and uses the other hand to give a thumbs-up.
Scott levels Tan Suit’s gun at Navy Suit. “Who’s in the crate?”
“I don’t know,” says Navy Suit. “But he must be a shrimp if he fits in there.”
“What was on those blueprints?”
“Please, doc. I’m bleeding.” Navy Suit whimpers.
Scott cocks the gun. “What was on those blueprints?”
“Boss kept talking about some kind of weapon. X-rays, chemical elements, radioactivity — it’s all funnybook stuff to me,” says Navy Suit. He coughs blood onto the floor.
“I beg you,” says the small crate. “Let me out.”
That crate might contain nothing more than a running phonograph and a prototype weapon rigged to blow, Scott thinks. He mouths a “no” at Eliezer, who nods with gusto.
The unflappable doctor-investigator sighs deeply. “Eliezer, call your sister and let her know you’re all right, then see if she can get us out of here. C’mere, bozo.” He gets to work on Navy Suit.
As Eliezer picks up the phone, he looks out the window. There’s no trace of Spliggo Hiss.
He spits. “Lizardmen.”
END OF ACT ONE
This episode of Slate Star Showdex brought to you by–
You reach out and turn off your radio. The finale was gripping, but you’ve heard enough ads today.
You turn to face your calendar. Friday, October 11th is circled in red. On that day, a group of companions is meeting in nearby Irvine.
Last week you wrote fan mail to Canyon Fern, the writer for Slate Star Showdex. You received his reply yesterday:
“Thank you, cherished listener, for the letter. My assistant, Ludovico, and I, Canyon Fern, will be in Irvine on the 11th. I love meeting new people: please come say hello.”
I’d like to say that I really enjoyed how you wrote that gunfight. It’s not quite like anything I’ve read before.
The gunfight was not quite like anything you’ve read before? This is music to my ears, you wonderful citizen of Broblawskia. Thank you for telling me!
This is glorious.
I’m glad you liked it enough to use so strong a term! I hope others who enjoy it will follow your example, and leave me a comment.
I don’t think I’ve seen you comment on my works before. For your liking, for breaching from the Lurky Murky Ocean, and for setting an example to other enjoyers of fanfic, let me compensate you with a Marmoset of Gratitude. Ludovico, my steadfast amanuensis, will pack it up for you tomorrow.
I’d only read episode one up to today, and I decided to catch up. I’m glad I did! I must say, I’m quite enjoying the Showdex so far. Who would think such joyous things could spring up from the canyon floor?
Up from the floor
The canyon floor
There sprung a canyon fern
This thinking plant
(Who says they can’t?)
Knew what it meant to yearn
A man in green
Of plants most keen
Discovered me one day
Now plant and man
Together on life’s way
> A man in green … Discovered me one day
Must have been a man with high esteem for literary finesse. Someone like him?
Ho ho ho! Guessing at my identity, are we? 😉
Thank you for linking me to Brian Bilston’s blog. I’m always looking for poets better than I, whom I can study to improve myself, and from the dozen or so posts I saw, Mr. Bilston is quite good.
> Ho ho ho! Guessing at my identity, are we?
Naaaah — you got a man already. 🙂
> I’m always looking for poets better than I
Not sure if Bilston qualifies. I love his RSS for its unscheduled surprises.
Not quite poetry, but often quite poetic, and not implying “better than you”: the RSS from A Small Fiction also keeps adding crumbs of literary joy to my day.
Ant legend told of a vast cave full of bounty, guarded by an angry god.
Each generation they sent their bravest to seek it.
Ubj ybat qvq vg gnxr gb ernq gur svefg jbeq pbeerpgyl? 🙂
Edit: another example:
“Nice library. Is one of these a trick book?”
“Like you pull it off the shelf and a hidden door opens.”
“Oh. Yeah, all of them.”
I appreciate your higher-than-average level of willingness to respond to sub-comments! Thanks for the pointer. I keep wanting to toy with an RSS reader, or similar tool, but I think I have far too much to read as it is. I will at least take a look through that account’s history.
[I can’t figure out how to reply directly to your last comment. There’s no “Reply” link on it! Perhaps it’s nested too deeply? I’ll do the next-best thing, and reply to the top of this thread. -Ludovico]
> I will at least take a look through that account’s history.
Worth it. (I used a tweets-downloader to have them for offline reading.)
> I can’t figure out how to reply directly to your last comment. There’s no “Reply” link on it! Perhaps it’s nested too deeply?
That’s part of the magic of this forum, to make the clueless turn away in frustration, meguess (meguesss? meguess’?).
> I’ll do the next-best thing, and reply to the top of this thread.
Got it in one!
“[S]taggers back like a gutshot giraffe” is probably the most evocative phrase I’ll read all week. Lovely job.
Where did you pick up writing?
Aww, thank you for calling my work “lovely!” Prepare your larder for the imminent delivery of one Cheetah of Gratitude.
To make a long story short: though I’ve never been published, I have had years to sit in my pot and think about the shapes and sounds of human language. One also mustn’t forget my human editor, typist, and assistant, Ludovico. Let’s give him the microphone for a moment.
[Howdy, Paul. I have a university degree in something language-y; time studying and performing comedy under my belt; half a million words in my private journal; and, frankly speaking, a strong dose of natural wit. Add on a dash of faith in my own ability to see an idea through, and I think you’ll understand where Canyon and I are coming from. -Ludovico]
I enjoyed the Eliezer unboxing reference. But I feel like there’s a reference I don’t get with the golden lobby and intense sweating….
I’m glad you liked the reference to the “Eliezer wants out of the box” game/experiment. If you can believe it, I drafted the tommy gun sequence first, with no crates; no BANG BANG gunfight; no opening, hotel, Tan Suit, or Hiss/crate negotiation; and no knowing who I would eventually write into the bathroom. I connected the dots and inserted Surprise Eliezer after writing almost everything else, including the crate (which became two crates).
As for the Hotel El Dorado, and the sweating: there’s no reference to be gotten there, gentle human. I wrote the section leading up to “Square and Mild”, including Sweaty Scott; then I decided to revisit sweat with the concierge; finally, while reworking the “room is muggy” line, I realized I had an opportunity to return to the theme with a colorful simile. Voila, presto, binga-bunga!
Thank you, @Ttar, for prompting me to discuss my writing process. My notes show that I haven’t yet awarded you anything from my magical menagerie, but now you definitely deserve something. Please clear some space in your domicile: in 5 to 8 business days, a Penguin of Gratitude will be on your doorstep.
Much obliged. I also now am getting a Newcomb-like feeling from the two crates and the firm decision not to open the second one. Likely as the series continues my biases will allow me to find lots of references, even where they aren’t fully intentional.
I have an interface suggestion.
I normally read the comments by searching for the string that signals a new comment, looking at it, hiding the thread it is in if it is of no interest to me, then searching for another new comment. One problem is that, if the new comment is not the first in the thread, I have to click on a series of up arrows to get to the first comment in the thread and choose “hide” there.
My suggestion is that there should be a second up arrow, bigger, or colored, or distinguished in some other way, that takes you to the top article in the thread. That would be useful not only for my purpose but for someone who wants to know the context of the discussion that the comment he has found is in.
I do the same thing, and I support this suggestion.
I like this idea, but even more I’d like a button that only took me back to the comment a given comment is a reply to. You could use that to jump back to the start of the thread in a few moves, but you could also use it to just go back one step and see what was being responded to, which would be especially useful once the threading gives up and comments just start appearing in order on the right of the screen.
+1 but with addition: Let’s have as many up arrows as there are levels up. Often I find a particular subthread has veered off topic and is not interesting to me.
Plus a visual indicator which end of their series points to the top comment. E.g., ↸ ↖︎↖︎↖︎, or ⤒↑↑ . A custom gif for top-level would also do.
How does that differ from what we have now?
It differs in having traceable ancestry for comments that have hit the sub-nesting threshold, and are replies to prior comments that also hit the sub-nesting threshold. Clicking the button would take you to that prior comment, instead of what happens now when it takes you to the comment that all of the sub-nested comments are a reply to.
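The “bigger up arrow” being asked for amounts to following parent links until you reach a top-level comment. A minimal sketch in Python, using a toy comment object in place of the site’s real markup (every name here is hypothetical, not the blog’s actual code):

```python
class Comment:
    """Toy stand-in for a nested blog comment (hypothetical structure)."""
    def __init__(self, cid, parent=None):
        self.id = cid
        self.parent = parent  # None means this comment tops its thread

def thread_root(comment):
    """Follow parent links to the top-level comment: what one 'big up
    arrow' click would do instead of a series of small-arrow clicks."""
    node = comment
    while node.parent is not None:
        node = node.parent
    return node  # the comment where "hide" would collapse the whole thread

# Example thread: top <- reply <- subreply
top = Comment("top")
reply = Comment("reply", parent=top)
subreply = Comment("subreply", parent=reply)
print(thread_root(subreply).id)  # top
```

The intermediate-hop variant some replies ask for is just one loop iteration: stop at `node.parent` instead of walking all the way up.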
Anyone can discover the truth but each lie needs an inventor.
Since hyperdiffusionism is discredited, how could cultures whose ancestors were separated since the Paleolithic develop highly similar false beliefs not present in hunter-gatherers? Why did both the Egyptians and Aztecs erect sacred pyramids and believe in a pantheon of gods (as opposed to either retaining HG supernatural beliefs or developing beliefs totally unlike each other’s)? Why did both the Aztecs and Dahomey believe that humans needed to be sacrificed annually in large numbers?
Maybe a kind of convergent evolution? The similar false beliefs may have been either useful to the societies that believed in them, useful for individuals or groups to gain power or other advantages in their societies, or just very effective memes. Then you could explain it as early societies generating many false beliefs but sticking with the “fittest” ones, which could be similar across cultures.
I think this is called pre-adaptation in biology? Because of pre-existing structure X, mutation Y can be selected for: say asymmetrical ‘flight’ feathers can lead to flying dinosaurs only if they already evolved ultralight skeletons and high shoulder mobility for unrelated reasons, else it’s just a useless variation on ‘down’ for thermoregulation.
Similarly, you’re suggesting that denser societies spewed a lot of memes, and successful ones latched onto something foragers already had in their brains?
Kind of? I’m not necessarily suggesting that e.g. there was a particular predisposition towards pyramid building, just that it’s objectively an effective strategy for building large monuments and multiple groups happened upon it and found it effective. Similarly, human sacrifice may have proven to be effective at fostering a sense of tribal unity / disincentivizing other groups from attacking you / [insert “just so” story here].
Well, the pyramids are easily explained in practical terms. I mean, someone correct me if I’m wrong, but I’m pretty sure I’ve read that a pyramid is one of the most stable shapes you could make for a very large building, provided you don’t have steel or concrete. It is essentially an organized pile of rocks. We don’t see many other stone or bronze age cultures building massive monuments that survive to today (I mean, there is Stonehenge, but it’s not particularly tall…). Pyramids, ziggurats, and the like probably developed independently whenever you had a culture that really wanted to make giant buildings out of stone.
As for pantheons of gods, I’ll just go with my comment in the last thread and assume that demons exist and want people to worship them. 🙂
Yes, this is correct. Primitive organization of lots of labor is only going to get you lots of bricks, or at most stone. And so we see archaeologists who study different parts of the world using “megalithic” as a dividing line in their periodization. Organizing the erection of megaliths seems to be within the means of a large chiefdom (see Diamond, GG&S, on Polynesia). A step beyond that would be piling a much huger number of stones into the shape most compliant with physics: a pyramid.
This then raises a root question: “How did rare elites all over the place get people to obey their orders on how to labor like this?”
Based on the most current understanding of Egyptology, the builders of the Pyramids of Giza in the Old Kingdom seem to have been organized into work gangs who went by names like “The Drunkards of Menkaure” and “The Friends of Khufu Gang”, both of which are the names of kings. These were probably recruited for some kind of corvee labor not unlike in Medieval Europe. This was probably somewhat coercive, but it was also an opportunity for young men from villages who had probably never been more than a few miles from their homes to travel broadly and see the Kingdom, as well as to see the great monuments of the Kings. Imagine being from some poor, dirt-farming village with barely bronze-age technology and being taken down the river to see the Pyramids: there’s an argument to be made that it was a powerful socializing force for these people.
I knew about the gangs of 10,000 with cute names, but this was still new information to me. Thanks.
– Umberto Eco, Foucault’s Pendulum, chapter 45
(Less glibly, Mexican and Egyptian pyramids are different in structure, composition, and purpose, and show up millennia apart from each other – I don’t think they really have anything in common besides a broad base and a narrow top)
Note there’s also survivorship bias going on. Less stable structures fell down, easier structures to loot for building materials disappeared to provide materials for the next iteration of building projects.
As an example: ancient Greek historians claimed that the Minoan palaces were based on an archetype in Crocodilopolis, Egypt that archaeologists can’t find.
Pyramids: If you want to make something massive and awe-inspiring, but you haven’t invented much in the way of structural engineering, a pyramid is the only shape that does what you want.
Animal sacrifice: Sacrifices started making way more sense to me when I realized that they’re best thought of as a massive barbecue, with a couple of the small, unappealing cuts given to the gods. It’s something of a sacrifice (since calories were harder to get back then), but not so much of one as to be impractical, and it makes a really stable equilibrium. Especially in areas where the ability to keep livestock is somewhat seasonal, kill the excess animals before the hard season, load up your larders, and make a party out of it to celebrate how good your gods were to you. It seems like an ancient Thanksgiving more than anything.
What’s disappointing is that we haven’t synthesized modern structural engineering with the inherent stability of a pyramid to make a skyscraper-sized one. You’d actually get fewer people in for the same floor space than if you had a load of similar-sized skyscrapers standing there, but I’m just waiting for one of the state egoist countries like Saudi Arabia to say rats to efficiency and build a modern future pyramid because it’s super ridiculously cool.
Who knows? Maybe there is a spiritual power to the pyramid that we have forgotten. Maybe the old gods are just waiting for us to make a pyramid of the right dimensions using modern technology, and then this will become their gate to our world, where they can rule rightfully once more.
Not sure about the latter half there but I remember seeing a proposal to build gigantic skyscraper-sized pyramids as self-contained planned micro-cities. With the layout of apartments, greenhouses, work and shopping areas so as to be maximally walkable and to foster a sense of community.
It looked hideously dystopian, the kind of High Modernist mentality that created the projects.
Edited to add link. Most of what I remembered isn’t on the Wikipedia page so either I remember incorrectly or I saw a more detailed version of this back in the day.
You may be thinking of the page arcology. There’s nothing about an arcology that requires a pyramid, though the picture is, nonetheless, a big tetrahedron. 🙂
You’ll have to walk hundreds of metres to get to the lift which goes to the top floor.
No natural light in 90% of the building.
@Nabil ad Dajjal
I guess we have different tastes because “hideously dystopian” usually translates as something super cool to me, and the Shimizu TRY 2004 Mega-City Pyramid seems that way. Of course, it’s also something ludicrously impractical that will NEVER happen too.
I think we might be getting too deep into culture war territory if we went over the difference in mindset between this sort of high tech micro-city building and the more mundane public housing projects, but I think one is about striving and one is about a resigned responsibility.
Oh, the modernists who rebuilt Coventry were striving.
Didn’t stop them from building a concrete craphole.
(unpopular opinion: I actually like the new Cathedral.)
Utopian or dystopian will largely depend on context and resources and who your neighbors are. A giant arcology that amounts to a city-in-a-building could be very nice if it was well-maintained and managed, and inhabited by decent people with enough resources to flourish. It would be a hell on Earth if it was poorly maintained and managed and peopled by the previous contents of the nation’s prisons and mental institutions, with everyone given just barely enough resources to keep alive but with predation as a workable strategy for getting more.
I’m told we’ll get around to it eventually, seven miles high and with the vast majority of humans who ever lived, living and dying entirely within the giga-pyramid-city.
Yeah, not so much on the coolness.
It’s a pyramid and it’s giant. What more do you want?
I assume you’ve read John C. Wright’s Nightlands works? The final one is excellent.
He has, see some previous discussions of Nightland w/ mention of Wright’s sequels. Bonus points for John’s mention of arcologies in the first thread.
(I should mention I still haven’t read the original or Wright’s stuff.)
The Transamerica Pyramid? The Luxor pyramid?
The Luxor Pyramid is pretty cool, I guess. Not big enough, though. The Transamerica “Pyramid” is stretched vertical for economic reasons.
There are a lot of cultures out there, and the number of potential coincidences rises as O(n^2).
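The O(n^2) scaling here is just pair-counting: n cultures give n(n−1)/2 unordered pairs, so the expected number of chance resemblances grows quadratically. A toy sketch, where the match probability p is an illustrative assumption rather than data:

```python
def pair_count(n):
    """Distinct unordered pairs among n cultures: n choose 2."""
    return n * (n - 1) // 2

def expected_coincidences(n, p):
    """Expected chance resemblances if each pair independently
    matches on some trait with probability p (illustrative only)."""
    return pair_count(n) * p

print(pair_count(10))                    # 45 pairs from only 10 cultures
print(expected_coincidences(100, 0.01))  # 49.5 expected chance matches
```

So even a rare trait shared by luck between any given pair shows up many times once you survey enough cultures.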
No one ever asks “why did so few cultures build pyramids and invent elaborate pantheons connected by a semi-unified mythology?” Pyramids are the only way to build something huge that lasts forever, and a unified mythology satisfies the human need to categorize and systematize everything in order to reduce cognitive dissonance and develop simple models.
Correct me if I’m wrong, but we really have no clue what supernatural beliefs ancient hunter-gatherer societies had (we’re extrapolating from current hunter-gatherer societies – which admittedly may be accurate given that certain stories have been passed down since the end of the last ice age).
To the extent animism is the answer, then the moment you get chiefs and kings is the moment people start extrapolating chiefs and kings to spirits.
And the moment people start intermarrying between tribes is the moment pantheons are created. In the case of conquest, the conquerors get a psychological boost from believing that the conquered’s gods exist, yet were too weak to prevail. In the case of more equal intermarriage, husbands and wives have a reason to keep their spouses happy by accepting the existence of their deities, and limiting the domain of their own.
Limits of traditional knowledge: so far as I know, no ancient traditions include anything about the Ice Age.
Ancient Sea Rise Tale Told Accurately for 10,000 Years
Romans were in the habit of performing a rite to lure the gods of the city they were attacking to Rome with promise of a bigger temple and more sacrifices.
That’s awesome Mary. Thanks!
Could have been straight from a Neal Stephenson novel.
This is true: extant hunter-gatherers could have farming, even civilized ancestors. Even when we know archaeologically that a population was always HG (e.g. Australia), there’s evidence of cultural change: I remember reading that either 7/8 of Australian Aborigines or the aboriginal people of 7/8 of the land mass speak one language family that out-competed the others only a few millennia before Captain Cook.
The Art of Not Being Governed talks about parts of southeast Asia where a lot of people had once lived in empires and later left them, rather than (as was believed) never having developed empires at all.
And about how they switch their belief systems according to the social environment (i.e. whether they are trying to get on the sweet side of the local empire/strongman polity).
Jung’s psychology and Joseph Campbell’s mythology suggest that there are certain stories sort of hardwired into humans. See, for example, the Aarne-Thompson-Uther Index of Folk Tale Types. I’m light on the details as I haven’t read much of either Jung or Campbell, but I suppose these story archetypes would probably be the result of evolution happening in social settings. The connection between story and (religious) belief is pretty easy to draw, if one thinks about mythology.
As for why hunter-gatherers wouldn’t do it, well, the environment would be bound to have a pretty large effect on which types of stories are strong enough to catch on, and I’ve never heard of any hunter-gatherers with the resources to make constructing giant pyramids or sacrificing legions of humans a serious option.
I remember an effortpost book review on /r/slatestarcodex that suggested that giant monuments, particularly in Egypt and Mesoamerica, were correlated with food surpluses. I can’t find it at the moment, but I’ll see if I can dig it up.
Well duh. They were used to store grain, after all.
I don’t think there’s a real difference between polytheism and animism. Our knowledge of, e.g., Roman polytheism focuses on the big-name gods, but people in their daily lives mostly prayed to the little gods, like the god of one particular house. That sounds a lot like animism to me.
It’s not like you can draw a hard and fast line between the big and the little, as long as you are not dealing with a Supreme Being. Not to mention they could shift.
As for human sacrifice: it’s important to consider the Flower Wars as not just a form of devotion, but a form of oppression and control. By pitting their vassal states against each other to provide sacrifices, the Aztecs prevented them from allying to throw off Imperial oppression. It’s hard for Tribe A to work with Tribe B when they remember Tribe B members carrying off their brothers to have their hearts cut out, even if having your heart cut out is an honor.
> By pitting their vassal states against each other to provide sacrifices, the Aztecs prevented them from allying to throw off Imperial oppression.
Did the Aztecs invent the Hunger Games?
To the extent all scientific modeling is an approximation, all scientific modeling is a lie with an inventor.
Why did certain model forms become prevalent? Ease of use, habit, politics?
Ideas persist because they’re fit (in the evolutionary sense). That means they’re self-sustaining somehow–maybe they’re useful for the people who hold them, maybe they’re useful for whole societies who hold them but bad for individuals, maybe they’re bad for individuals and societies but good at being self-sustaining. It’s very hard to distinguish between these three cases, and an idea or belief can change over time–maybe at one point the taboo against doing X was pro-survival, but conditions have changed and now it’s just a silly taboo that keeps people from some potentially beneficial action.
When they attempted to contact spirits without being shielded by the power of Christ, they encountered demons the same as, or broadly similar to, the one described in the Bible as Moloch, greedy for human sacrifice. Not sure how the demons divide up the labor of being pagan deities for cultures; I’m sure they have an infernal project management methodology.
Don’t be quick to dismiss them as lies. They don’t neatly fit into modern paradigms, but most of them served various social purposes quite nicely. So it’s just a case of similar problems leading to similar solutions – you expect some degree of convergence. I’d go even farther and guess that most differences are context-related – we’d find a lot more convergence without geographical differences.
ISTM that a plausible explanation is that there are some features of how human minds work that tend toward some similarities–religion, music, art, poetry, stories, war, ingroup/outgroup hatred, politics, dogma, initiation rites/hazing, coming of age rituals. Also the similar situation of different primitive human cultures led them to some similarities (building out of stone favors certain kinds of buildings over others, elaborate systems of irrigation, dependence on a small number of high-calorie-per-acre-worker cereal crops, bows and arrows).
I’m reminded of Brown’s list of human universals, which has most of what you list.
I’ve often wondered if modern people have been applying religious significance to structures and objects because our assumption is that anything not obviously utilitarian (e.g. granaries, hearths) must be primarily religious in nature and owe its existence mainly to faith.

For example, we really know very little about the “cult” of Mithras in the Roman Empire. But reading about their “temples”, I kept thinking “this sounds like a fraternity where people mostly ate dinner”, like a frat, or the “eating clubs” at Princeton. And maybe it was? If you dug up the Tiger Club at Princeton, you understandably might mistake it for some sort of temple. In terms of its significance in the lives of college students, you could make the argument that frats are cults, but their raison d’être and the space they occupy in minds of the broader society is considerably different from, say, a synagogue. But over and over again you’ll read about the mysterious “religion” that must have been practiced there and which must have been brought in from the east.

But what if the idea that traveled was not of a religion, but basically a private restaurant? I suspect that a lot of similar customs and structures that sprang up around the world have more to do with having a good time than our theories are equipped to explain. All those feel-good hormones occur across cultures, and maybe what we do for and with them might follow some sort of pattern that gives us large buildings and elaborate rituals.
There’s a lot of subjectivity and overlap when it comes to describing institutions and locations whose main purpose is social or leisure-related, and religion is usually tangentially related or overlaid in some way, but I suspect that humans are triggering similar parts of the brain when they build social bonds. One shared feeling to leverage might be “reverence”, but its equal may be “fun”.
I think archaeology and sociology still suffer from a kind of 19th century Puritanism that doesn’t know how to value or understand the role of fun, or which tends to see it as wasteful or degenerate. I like the comment that mentioned the pyramid work crews had names that sounded like bunches of guys who were out for a good time. Something about that rings true- people were often rewarded for their manual labor with some sort of celebration. Medieval lords were often contractually *required* to throw feasts after certain types of labor were completed. I think that is too often treated like a footnote when it might be the main deal. “This will be worth it” is an essential feeling to cultivate in people being forced to do any kind of work if you want that work done well. Whipping people, even slaves, only works in limited circumstances and is hugely risky. “You’ll go to heaven” was never actually enough, either; people who sweat have always needed temporal refreshment. Making it worth someone’s while was solid practice even if the person theoretically had no choice.
Your capacity to do certain work and reward it with certain fun could be said to scale together, and people everywhere seem to especially enjoy a few things in particular, especially time to socialize and play games, usually accompanied by food, alcohol and sex. “Build pyramid -> have party” might have been a really effective model for social organization the world over. But this doesn’t *feel* like a serious enough idea to a lot of us, especially knowing what serious business was also conducted in those pyramids. It seems a bit flippant to suggest that the site of thousands of human sacrifices was built by people who enjoyed the process. Or to suggest that the sacrifices might have been fun as well. But we know from modern accounts what raucous exciting times people had around the guillotine and at public hangings. So long as a sense of “rightness” prevails, these things seem to become anodyne social events like any other. It sure makes me uncomfortable imagining myself enjoying a human sacrifice, but it is a thing humans can and have done and which might have made Tenochtitlan possible.
Maybe we don’t like the idea that our own behavior is as influenced by pleasure as it really is. It’s much nicer to imagine that civilizations and our own lives rest mainly on hard work and deep conviction. But this obscures the impact of lot of what we actually do in our day to day lives. The Shriners exist mainly for fun, but they still maintain dozens of hospitals that treat thousands of sick kids. It seems unreasonable that some guys who wanted to drink beer while wearing fezzes might change the landscape this way, but it’s true.
So I think we probably see ancient peoples and their customs as way more motivated by reverence and formal religion than they really were, and to want to link things by a common philosophical thread when really the common factor might be whether something was enjoyable. I think this extends to explaining objects with perhaps no religious significance at all as having some spiritual purpose. I personally suspect that the Venus of Willendorf and thousands of similar figures are simply…porn. We have lots of complex theories about neolithic mother goddesses, but they could also just be what cave people found hot. Considering how much time, money and innovation we put into pornography now, I don’t think it’s all that far-fetched to think that carving sexy lady figurines would be how humans spent their first spare hours after the invention of stone tools.
Whoa, high-content post there. Just one comment for now:
Imagine if our electronic records all get lost and archaeologists have to explain the massive deposits of plastic toys in landfills.
Thanks! This was a great read and made me think about a lot of these things in a new light.
> So I think we probably see ancient peoples and their customs as way more motivated by reverence and formal religion than they really were, and to want to link things by a common philosophical thread when really the common factor might be whether something was enjoyable. I think this extends to explaining objects with perhaps no religious significance at all as having some spiritual purpose.
What religion would have been ascribed to the gentlemen’s clubs in London that were important during Empire times and lost significance after WW II, were a future archaeologist to explain their role?
“For the euphemism, see strip club.”
Worship of the goddess of fertility and textiles.
We know from historical records that the Cult of Mithras was religious. No doubt it was also a social club; modern religions are social clubs too.
“I personally suspect that the Venus of Willendorf and thousands of similar figures are simply…porn. We have lots of complex theories about neolithic mother goddesses, but they could also just be what cave people found hot.”
I think she was a symbol of sorts of good fortune– sex, yes, but also wealth and status, considering her string skirt and elaborate hairstyle.
And of good nurturing, able to give, and having taken.
Lambert directly below made me think about the 1967-70 Nigerian Civil War.
The British Colonial Office amalgamated the Northern Protectorate, Lagos Colony, and the Southern Nigeria Protectorate for easier administration, given their proximity. Unfortunately, they didn’t foresee that the moral mandate of decolonization would suddenly turn their bureaucratic fictions into Westphalian states. In 1960, Nigeria legally became a sovereign republic (not to be confused with post-colonial democracies whose Commonwealth membership involved keeping Lizzy II: Electric Boogaloo) made up of the overwhelmingly Muslim Hausa-Fulani of the former Sokoto Caliphate (North), the Igbo of Biafra (SE) who had converted to Christianity in overwhelming numbers, and the Yoruba of Oduduwa who had their own religion losing ground to both. Urbanization had been occurring under colonialism, and these 3(-4) major and scores of minor people groups were not neatly divided into their homelands.
Traditional Igbo had lived in autonomous democratic (or should I say ‘high-negotiation tribal’?) villages, with weak monarchical city-states as an organizational level above that: meanwhile the Yoruba had developed a more authoritarian kingdom and Sokoto was similar to other caliphates. The Colonial Office had found it expedient to rule the North through its emirs and Christian missionaries were banned, resulting in English literacy of 2% at independence vs. over 19% in Biafra. Newly-Christian Igbo (~25% of Nigerians) found English political culture congruent, and roughly everyone attempted to actively participate in the new democracy, while in the North (~50%) independence seemed more like just removing the top, white level of the existing hierarchy. This put whoever represented the Yoruba in the new British-style parliament in the awkward position of holding the balance of decision-making power between an authoritarian Westphalian state, federalism, or the state falling apart… and in fact, during the post-WW2 Constitutional negotiations, it was the Yoruba-based Action Group that tried to get a right of secession put in the Constitution, while the Igbo-based National Council of Nigeria and the Cameroons initially agreed with the Northern People’s Congress that Nigeria should be unitary…
One wonders whether they’ll do the 19th c. European thing of building themselves a nationality and then forgetting things were ever any other way.
Find their own Parzivals, Rolandes, Siegfrieds and Arthurs…
Possible, but note that the most salient examples of national myth-making in Europe leaned heavily on shared Christian identity as a basis for national unity. Get God to endorse your national heroes, and the faithful should hopefully be more loyal to your nation. Pulling off a comparable trick for a religiously-mixed polity might be more difficult.
Possibly the only counterexample I can think of is Skanderbeg, the national hero of Albania- which is majority-Muslim with sizable geographically-concentrated Catholic and Orthodox minorities. Skanderbeg himself followed all three of those religions at different points in his life, although he’s famous for fighting against the Muslim Ottomans.
Was he actually ever Muslim? My understanding was that he was a dhimmi commander under the Ottomans.
I did know that he switched between Catholic and Orthodox, but as a noble, he probably wouldn’t have been a Janissary, which was the only semi-forced conversion the Ottomans had.
Plus, the Ottoman propaganda would’ve made a bigger deal out of him being backslid if that had been the case.
Hmm- Gibbon certainly thinks he had been Muslim.
I suppose what matters more than whether Skanderbeg ever actually thought of himself as, or claimed to be, Muslim is whether the Albanian national myth considers him to have been.
Last year, in addition to the main SSC Survey, there was a supplemental survey. Does anyone know where the results of the supplemental survey were posted?
Last I checked, the poster who made the survey had gotten caught up in other projects and hadn’t posted them yet. /u/harrypotter5777 on reddit if you want to check with them directly about it.
Am said person, this is correct. Not abandoned, just more occupied with things than anticipated; currently working on chunking results into a few sections and completing smaller writeups as I get time than leaving it all for one monumental report.
Here’s a concept that I’ve noticed lately.
Countries like the UAE, Somaliland and the United Tribes of New Zealand are (or were) ostensibly Westphalian states, but actually run on traditional tribal structures.
They recognise that unless they pretend to be a sovereign nation state, somebody else will come along and declare their tribal land ‘terra nullius‘.
As evidenced by the Flagstaff War and the continuing situation in Somalia, this is far from a reliable strategy.
Since these ‘states’ are only a thin interface between modern geopolitics and tribal society, I propose that they be called ‘shim states’. They are almost a legal fiction to be fed to the powers that be, who consider the Westphalian nation state to be the only valid sovereign entity.
Interesting point. Such entities can vary dramatically in functionality: it’s not a bad thing to remain tribal if This One Weird Trick allows a tribal structure to create Dubai.
How do these traditional tribal states differ from traditional feudalism?
Feudalism is agricultural and patriarchal. It involves a hierarchy of people with defined responsibilities to those above and below them. There’s a professional warrior class. Land is largely owned by the elite and worked by tenant farmers.
Tribalism is horticultural (or maybe pastoral or hunter-gatherer) and somewhere between patrilocal/-lineal and matrilocal/-lineal. It is defined by kinship relations (I against my brother, etc.). Conflict is low-intensity, often endemic, and wars are fought by normal blokes. Land ownership is ill-defined at the individual level, though land can be owned by a tribe or subtribe.
Empires seem a natural continuation of this process.
The United States would have been individually re-conquered had they not merged together into an empire of states. The same process has been occurring in Europe with the union (and prior to that with various wars of conquest). The UAE itself is an empire with no real ruler (akin to the Holy Roman Empire), so is doubly or triply ‘nested’. China is an empire.
As part of the SSC Podcast project, every fortnight we take one of the posts from the archive and create an audio version.
This time around we did Beware Isolated Demands for Rigor. (Original post)
As always, if you have a request for other classics you’d like to see turned into audio, just let me know.
I started pursuing a degree in computer science this semester at the age of 37. What are the odds of turning this into a career at this stage in life? Assuming the answer isn’t that it’s hopeless and I should turn back now, I’m also open to any and all advice on how to proceed. Things like how to pick a subdiscipline and what I should be doing now to set the stage for a future resume are weighing heavily on my mind.
What was your precious career?
Oops! Yeah, I meant previous.
Calling it a career is a bad joke really. I spent my 20’s in different artistic pursuits. Eventually decided I’d like to have a steady paycheck and fell into office administration. I’ve been working as a paralegal for around 5 years.
You will eventually be working at Apple. (not serious.)
There is vast, vast demand for software developers. Don’t worry too much about your age. You won’t be the first person to change careers late. But as an entry level developer, be prepared to work with people much younger than you.
What did you do before you started your studies? Any chance you could stay in the same industry, but as a software developer? That way, whatever experience you have accumulated would still be useful. It would be to your benefit to portray yourself as an experienced worker in some useful sense, rather than as a complete beginner.
But if you can’t do that, or you are determined to change course completely, it’s best to keep your options open. Take a bunch of different courses; learn a bit about lots of different things. Then accept the best offer you can find after you graduate. People hired into entry level software positions aren’t expected to be hardcore experts, anyway, and there will be things to learn on the job. Paul Graham talked about the notion of staying upwind of opportunities; it’s a useful idea.
Also, do hard things. Take the courses known to be difficult. You want to be known as a person who can get things done. You get there by doing things, particularly hard things. And you really want to get to know the people who do so too.
Assuming you mean a bachelors (or one of the ‘masters as bachelors without distribution requirements’) you shouldn’t pick a sub-discipline.
Take all the classes you can (I/II/III etc) in algorithms and data structures. Take db, os, and compiler classes. In the math department take linear algebra and probability (they’ll probably make you take calculus but it’s not terribly useful). If there are any project based classes (i.e. spend the entire semester building a thing) take those. If offered take a course in SDLC so you’ll know the right words. Get basic familiarity with unix-like systems, using a shell, git, some CI/CD tool, at least one build ecosystem for one language (e.g. pip or maven). Speaking of languages, it’s better to know one deeply than to know five a little bit. Do at least one internship, preferably more (they are paid).
Assuming the market holds, you should be more than fine.
It’s a good bet whatever course of study @xXxanonxXx is following will insist on courses in calculus, linear algebra, and stats. Of these, the one I would double down on is stats. The world is full of messy data. Stats has the tools for dealing with it.
I’ve never wished I knew more calculus. But I have occasionally wished I knew more stats.
I wish I understood calculus well enough to be able to understand the underlying math of some parts of stats better.
As others have said, take (and pay attention to) the Data Structures and Algorithms classes you take.
But more importantly: code. Spend time writing programs that do different kinds of things. Build a ray tracer, write a web app, practice building database apps, understand parsing, write unit tests, try as many languages as you can. It becomes like a snowball. Everything will get easier the more you practice. Practice is the most important thing.
(I work in tech recruiting but am not a programmer, so apologies for any glaring technical errors.)
Everyone else’s advice seems quite good: try to leverage whatever skills you’ve gained from your previous career, try out classes in every area you can, and practice a ton. To that I would add: do projects with results that you can show to potential employers. Maybe this means building a public-facing website or app (especially if you’re into the UI/UX side of things), maybe it means making a ton of open source contributions, maybe it means automating away a bunch of daily tasks. Also, try to do internships, or get contracting work! This might be more complicated at your age since you likely have more outside commitments than the typical college student, but potential full-time employers like to see tech-related roles on your resume.
And I know this is downstream of the advice you need, but getting really good at one subdiscipline can really impress potential employers. Like, for example, if you just know everything about databases (know how to write SQL, be really good at debugging queries, know all the different options and their trade-offs inside out, etc.) that’ll make you a huge asset.
My company (Triplebyte) also has a blog with a bunch of stories like this one about people who have had success in changing careers at a relatively late stage.
Best of luck!
This isn’t wrong, but it’s funny. C++ is considerably older than Java.
Ah, good to know 🙂 My point was mostly that companies seem to sort of look down on candidates using Java or PHP, or at least think of them as enterprise-y rather than startup-y. I don’t have the impression that there’s the same kind of stigma around C++, though I could be wrong.
Luckily this isn’t a problem if you’re looking for stable employment at an enterprise shop instead of rolling the dice on a startup
Java was/is used for lots of high-level applications for which newer and (debatably) better alternatives exist, and so still using Java looks dinosaur-ish. C++ was used for some of those applications before Java, but is also used for lots of low-level applications where nothing better yet exists except maybe Fortran or chiseling bits into the hard drive with a magnetized stylus. If you’re doing that work, which is still foundational to pretty much everything, you get respect.
But you probably won’t be doing that work and won’t want or need to bother with C++. Give it a try anyway; if it turns out to be your thing you can specialize and seek out those opportunities. If not, at least you learned something.
Using Java doesn’t “look dinosaur-ish” to most companies in Silicon Valley. Google, Apple, Facebook, LinkedIn, etc. all have big Java codebases and are actively writing new Java code. The operating system that most phones run, Android, is almost entirely Java from the point of view of an application developer. Most web services at Amazon are written in Java and the company spends a lot of time on their Java APIs.
Java isn’t as much “dinosaurish” as “forever”. It’s the official replacement to Cobol, and, well, there are still job openings for Cobol.
I probably sound like a shill by now, since I’ve done this several times already but I recommend triplebyte highly. When I was hiring I wanted to look at non-traditional applicants but was getting frustrated at phone screens with people that couldn’t fizzbuzz. The candidates triplebyte sent over weren’t all good fits but none of them were a waste of my time.
I had to look up fizzbuzz. You’re saying there are a number of graduates who can’t do that? If so I actually feel a lot better about myself. I’m just a month into coding, but writing that would be trivial.
I don’t know where all these people are coming from who literally can’t code FizzBuzz. My theory is that some people just can’t think on their feet and others are flat out lying about their experience and skills. But both of those sound a bit implausible. It’s weird.
I can see it. When I was in college for my CS degrees (1999-2003), after the intro-level classes there was a big push to make most of the programming projects into group projects. The theory was that industry wanted new hires to know how to work as part of a team. It also allowed professors to assign larger, more challenging projects, and it gave professors and their TAs fewer things to grade.
In practice, the group projects allowed people who didn’t have good heads for programming to coast through to a degree as long as they could find teammates who could code and were willing to carry them. If they’d simply been lazy and unwilling to do their share of the work, they probably wouldn’t have gotten away with it, but they were almost always clearly trying their best, so they did.
The “don’t have a head for programming” folks I knew were generally planning to pursue non-programming but vaguely programming-adjacent careers. Sysadmin was their most common ambition, which seemed reasonable to me at the time, but less so now that I’ve got enough industry experience to know how much scripting a good sysadmin is expected to be able to do.
In order to pass the intro classes (without blatant cheating), you’d still need to be able to handle coding problems at least a step or two more complex than fizzbuzz. But you’d have a week or two per small coding project, and unlimited access to sample code and documentation, which you wouldn’t have in a one-hour interview.
Another part of the fizzbuzz issue is that job candidates who have decent-looking resumes but come across as incompetent in on-site interviews are going to be vastly overrepresented in interviews compared to the industry as a whole: people who interview well tend to get hired, and people who know how to do their jobs tend to stick around for a while after getting hired, while the sorts of people who can’t answer a fizzbuzz question are likely to stay bouncing around in the hiring pool for a while.
It’s entirely possible that the people I interviewed that seemingly couldn’t program at all were just really nervous. But within the constraints of the process I was handed there wasn’t anything I could do to tell that. If someone couldn’t approach the problem at all, what could I do—completely fabricate a successful phone screen?
> You’re saying there are a number of graduates who can’t do that?
I’ve dealt with masters and Ph.Ds in computer science who couldn’t handle that. There’s a reason that people are saying that lacking a degree or a late-life career switch isn’t a big deal. It’s because the young and degreed are frequently *that bad*.
When I was tasked with interviewing, the one white board coding question I’d give was:
Given an array, write the code to sort it. Make any reasonable assumptions you want.
This is something I was assigned to do in high school computer science. I didn’t care if the algorithm was sub-optimal (indeed, I assumed it wouldn’t be). I didn’t care if someone used a library call or did the routine by hand. My only concern was: did it sort stuff.
I don’t claim that it was a particularly good interview question. But it filtered out a huge number of applicants.
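For concreteness, either of the following would have passed under that rubric; this is my own sketch, not code from any actual interview, and the function names are mine:

```python
def sort_builtin(arr):
    # The library-call answer: sorted() returns a new sorted list.
    return sorted(arr)

def sort_by_hand(arr):
    # A hand-rolled, suboptimal-but-correct selection sort.
    result = list(arr)
    for i in range(len(result)):
        # Find the index of the smallest remaining element...
        smallest = i
        for j in range(i + 1, len(result)):
            if result[j] < result[smallest]:
                smallest = j
        # ...and swap it into position i.
        result[i], result[smallest] = result[smallest], result[i]
    return result
```

The point of the question isn’t algorithmic cleverness; it’s whether the candidate can produce any working code at all.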
If there are all these computer science graduates who can’t program fizzbuzz, what have they been studying? How did they pass their course?
There’s a lot to unpack here, even while being charitable.
The useful (in the sense of working at it professionally) end of programming is separated from computer science roughly in the same way that graphic design is separated from “studio arts” as a major. As an example, my drawing abilities compare poorly against the typical 4-year-old. Simply working through a BA in studio arts alone isn’t going to turn me into Rembrandt, or even someone who could do a passable copy of Rembrandt. Even assuming I’m not biologically and cognitively limited from doing so, getting to that degree of skill requires a lot of practice and work outside of the core coursework. And being able to copy a Rembrandt passably is only somewhat useful for developing corporate logos on a tight budget and on time for clients.
Next, computer science as a discipline covers a lot of ground. Practical “coding” is only a small element of the field. There are other things like queueing theory or algorithm development for impossible hardware which are more like (and can be a part of) formal mathematics. So you can have people who’ve made valuable contributions to the theory of computation without engaging or being able to engage in computation themselves. Consider someone getting a Ph.D in chemistry for modelling the chemical properties of molecules made with heavy elements with half-lives measured in femtoseconds or whatever vs. working in a drug discovery laboratory. I’ve heard lore that this was a problem for people hiring computer science graduates from the USSR after the iron curtain came down – people with great math skills and computer science degrees who’d never touched a computer before.
And that’s before you get to the uncharitable answers, including people slacking in group projects or outright cheating.
Network-Stuff, Database-Stuff, Security-Stuff, Theoretical-CS stuff.
TBH, the only reason I saw source code during my time at the Uni was that I worked part time and had hobby projects.
So those are all specialties, but FizzBuzz is basically asking “do you know how to write a very simple program with a loop and a few if/then statements?” I’ve never heard of a CS program where you didn’t have to make it through a few programming courses. You may ultimately specialize in security or databases or something, but you had to get through the programming courses to get there, and most of those specialized courses also involve doing some programming for the assignments.
Oh sure, I had to take some basic programming courses, but after what would be Basic Programming 101 and 201 in the US naming scheme, I wouldn’t have needed to do any more programming courses, and I think there were more than just a couple of students who never did.
And two courses simply aren’t enough for something that is basically a craft that needs practice.
Plus the stuff that Eric Rall said about group courses. I was in the same group for Math and Basic Programming, and let’s say the dude who did most of the math stuff did not write a single line of code. And he only started to get into coding when Functional Programming became a topic in the higher courses.
I assume that’s something like this?
Uhm probably. Yes
Does all this imply that whoever is asking for fizzbuzz from all IT applicants is clueless?
About half of my Bachelor program was math. The rest was a mixture of programming, giving presentations, databases, networking, working in a large team, data storage & manipulation (like sorting), very basic electrical engineering, etc, etc.
The programming part had a decent amount of one-time use of programming languages or concepts, like a single assembly course, a single computer graphics course, a single low level network programming element, a single functional programming course, etc; while most people seem to need repetition to actually be able to do those things autonomously later on, rather than merely have the ability to relatively quickly get up to speed.
The Dutch university system is more focused and we did have to use Java a lot*, so presumably every student would have learned that to a level sufficient to do FizzBuzz. Then again, I teamed up with a student who seemed nigh illiterate, which I wouldn’t have expected to be possible.
Anyway, in the less focused system in the US, I can see how a student with little interest in programming might merely acquire the skills temporarily to pass the course and let them atrophy to a point where he is unable to program without a refresher/Google.
* Including an exam where we had to program on paper, which meant no help from a development environment (IDE), which is quite hard, since you actually have to notice minor mistakes that the IDE normally points out.
Depends? Do you want all of your IT personnel to be able to program?
Do those people ask all accountant applicants to do balance sheets by hand?
I think that being able to code can be helpful, but I see how it is possible to get through CS without learning to do it.
It doesn’t seem to me that the ability to program fizzbuzz requires any significant expertise in programming. Unless I misunderstand the problem, it’s just simple logical thinking:
for i=1 to 100
if i/3 = INT(i/3) print Fizz
if i/5=INT(i/5) print Buzz
I think that’s BASIC, or close–I never took any computer science and it’s been forty years or so since I did any programming. But presumably pseudocode would satisfy the requirements.
You misunderstand the problem very slightly, albeit in one of the ways that makes the programming interesting.
Per the rules of Fizz Buzz, if the number is divisible by 3 you say “Fizz”; if by 5, you say “Buzz”; if by both, you say “Fizz Buzz”; and if by neither, you say the number itself. And that’s the case you missed.
It’s interesting, because many programmers will struggle with the inherent resistance of Fizz Buzz to an elegant solution. There’s no way to fix your otherwise correct pseudocode without either repeating the division tests, keeping an intermediate variable that remembers the results, or doing some nested branching that still doesn’t get around the repeated division. Two possible versions:
for i=1 to 100
    if i/3 = INT(i/3) print Fizz
    if i/5 = INT(i/5) print Buzz
    if i/3 != INT(i/3) AND i/5 != INT(i/5) print i

for i=1 to 100
    div3 = (i/3 = INT(i/3))
    div5 = (i/5 = INT(i/5))
    if div3 print Fizz
    if div5 print Buzz
    if !div3 and !div5 print i
Even the explanation of why Fizz Buzz is difficult to program offends my sense of elegance. And there’s the rub – some program specifications require inelegant solutions, and programmers willing to tolerate it.
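As a concrete rendering, the second (intermediate-variable) version translates into runnable Python roughly as follows; string concatenation stands in for the consecutive prints, and modulo replaces the i/3 = INT(i/3) test:

```python
def fizzbuzz():
    # FizzBuzz for 1..100, returned as a list of strings.
    out = []
    for i in range(1, 101):
        div3 = (i % 3 == 0)   # modulo stands in for the i/3 = INT(i/3) test
        div5 = (i % 5 == 0)
        line = ""
        if div3:
            line += "Fizz"
        if div5:
            line += "Buzz"
        if not div3 and not div5:
            line = str(i)
        out.append(line)
    return out
```

Note that even this still repeats the divisibility information: the final branch has to consult both flags again, which is exactly the inelegance under discussion.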
Huh, that’s an interesting take on it. I think I agree with the idea: FizzBuzz feels incredibly clean by the standards of any code I’ve worked with through the years. If someone is upset about having to write an intermediate variable, there is no way they are going to do well with real-world programming.
I’m a little skeptical that that’s the sticking point though. I went to college for CS, I’ve seen a healthy slice of the programmer population at my age. No one was consistently writing code cleaner than that. If we’re positing that this is the major reason people fail FizzBuzz (not because they can’t do it, but because they have standards for their own code higher than the solution), I have to question where the heck those people were all that time.
Well, the evidence I have that it’s somewhat plausible, is (1) I find it mildly irritating myself, even as a veteran programmer, and (2) I keep reading about programmers who say the same thing; it offends their sense of elegance.
It’s possible that these interviewees are fresh out of school, where all the programming problems they were given had elegant solutions that didn’t require intermediate variables. That’s consistent with some analyses I’ve read, that FizzBuzz does a good job of separating academic programmers from practical (and school programs that emphasized practical programming).
I did indeed miss part of the requirement, but I don’t think that makes the problem much harder:
For i=1 to 100
If i/3=INT(i/3) then Three=True else Three=False
If i/5=INT(i/5) then Five=True else Five=False
If Three AND Five then Print “Fizzbuzz”
else if Three then print Fizz: If Five then print buzz
If Three=False AND Five=False then print i
To get it to correct BASIC code I would have to check some details, but I think that shows the logical structure.
I think if anyone gives me Fizzbuzz I’ll use something silly.
DO 10, I = 1,100
GO TO (20,20,30,20,50,30,20,20,30,50,20,30,20,20), MOD(I,15)
WRITE (*,*) "FizzBuzz"
GO TO 10
20 WRITE (*,*) I
GO TO 10
30 WRITE (*,*) "Fizz"
GO TO 10
50 WRITE (*,*) "Buzz"
10 CONTINUE
END
Why FORTRAN-77? Because I don’t know APL. Writing out the answer by hand in one big string would be another fun idea. Or maybe this one in python:
fb = dict()
for i in range(1, 101):
    fb[i] = ""
for i in range(3, 101, 3):
    fb[i] = "Fizz"
for i in range(5, 101, 5):
    fb[i] += "Buzz"
for i in range(1, 101):
    if not fb[i]:
        fb[i] = str(i)
    print(fb[i])
(I wouldn’t think much of a company asking me to do fizzbuzz).
After googling fizzbuzz, I find myself ROFLMAO. The article I read went from the obvious, simple solution to complex/more efficient/more maintainable ones, eventually using language features that won’t make sense to anyone but a language expert. (Where I work, there are so many languages no one knows them all, but it’s important to be able to read other people’s code. So that would be a bad thing to do.) Then they used the candidate’s style choice to judge the way they thought.
On the general topic, though – oh dear – anyone who can’t code a working fizzbuzz in the language they ordinarily prefer – or in pseudo-code – just isn’t a programmer. I’d expect most people taking “programming for poets” classes to be able to handle that, well before the end of the semester.
[Warning – link above is to a site with a limited number of free reads, AFAICT. Not one I’d previously heard of, however.]
I don’t really understand the linked article. They almost immediately jump to the idea of appending to an output string, and eventually come up with a fairly clean way of doing that. But at no point do they make the logical leap that you don’t ever need to write an explicit case for FizzBuzz (because the case for “FizzBuzz” is just the case for “Fizz” and the case for “Buzz”, so you can just let both cases happen and append both to the string rather than if-elsing and get the correct result).
I sort of wonder if it’s a joke article, but it’s just at the edge of plausibility.
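The missed leap, letting both cases fire and concatenating their output, takes only a few lines; this is my own sketch, not the linked article’s code:

```python
def fizzbuzz_line(i):
    # Let the Fizz and Buzz cases both fire and concatenate their output;
    # "FizzBuzz" then falls out of the two independent tests with no third case.
    s = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
    return s or str(i)  # an empty string means neither test fired

output = [fizzbuzz_line(i) for i in range(1, 101)]
```

The `or` on the return line does the work of the article’s explicit “neither” branch: a non-empty string is truthy, so the number only appears when no word was appended.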
Amusingly, Java is what I’m learning right now. I find the coursework easy enough (it’s incredibly boring, actually) that it wouldn’t be hard to specialize in another language on the side.
Let me be the unfashionable dinosaur who says that there are many jobs in Java, and they keep hiring, and they pay well. (If you continue this way, I would recommend to also get familiar with the Spring library.) As usual, there are trade-offs involved in choice of language, so allow me to select a few arguments in favor of Java:
Support: If the language was invented yesterday, it is probably missing libraries for many things. (To avoid this problem, there are a few languages that run on Java Virtual Machine; providing supposedly better syntax, with all the libraries. The fact that they target JVM speaks loudly about how many libraries for Java are out there.) Also, if the language is new, its IDE is most likely full of bugs, assuming there is one. Because things like developing tools and fixing bugs take time. Also, with bigger market share, probably more people are developing the tools and fixing the bugs.
Actually, there is a lot of disagreement about what is “better”: some people like static typing; some like optional semicolons at the end of the line; some like using indentation instead of brackets… while others are horrified by that. So there is a chance that a language advertised as “better” is actually only better in the eyes of some. (Which would explain why coordination is so hard.)
Even worse, there is a disagreement about the proper level of abstraction: Is a language “better” if it allows you to write programs in 100x shorter code, with 100x fewer bugs… but also takes 10x more time to learn? (And perhaps requires 30 more IQ points to understand?) Here seems to be an eternal conflict between the hordes of noobs who want to learn programming literally in three weeks, and the managers looking forward to hire them cheaply… and the gurus worshiping languages such as Lisp or Smalltalk that many decades ago had all the cool features the currently fashionable languages are slowly reinventing now, but almost no one uses them because people understanding them are rare and expensive. Here, Java is somewhere in the middle between “retarded” and “too good for this imperfect world”.
I don’t think it’s hopeless to become a programmer later in life. As other people commented in this thread, there is a lot of demand for developers.
The first job will be the hardest for you to get. If some kind of internship is an option, it could be a good idea.
Stay away from game development. That field treats developers poorly. (And really, it treats everyone poorly…)
Enterprise software is always a good field for developers. A good rule of thumb is, the more code you have written that is in use, the more power and prestige you will have. So anything that sticks around for years and years is a good bet.
If you’re not in a tech hub, your specialty will probably find you rather than the other way around. There may only be a few companies around and maybe only one or two that give you an offer.
Remote work is not a good option unless you are a superstar. Most developers still need to clock in at the physical office so that Lumbergh can check up on you.
Some slightly different advice from what everyone else is offering. You are presumably reasonably competent at your previous career; let’s call it X. A good way of selling yourself is to position yourself as the person who can explain X to the coders and code to the Xers. If you can manage it, you will make yourself essential in your office and rapidly be recognized as such. I will grant that this is a better way of getting promoted and earning raises within an organization than of getting hired in the first place, but it is still a way you can sell yourself in job interviews. Apply for coding jobs in your old industry and make a point of the fact that you understand how it really works, unlike the other coders.
I’ve heard that in some Charedi communities girls are very worried about watching their weight. The reason for this is shidduchim: unrelated men and women don’t “hang out”; they use matchmakers to find dates. This has led to people having lists of what they want, with a skinny girl often being an important requirement for men. The community is aware of this, and overweight girls get matched up with guys considered lower quality than what they would get otherwise. Marriage is very important in those circles; you aren’t really a full adult member of society till married, and it is expected that people be long married by the time they hit 25.
Tl;dr girls are competitive over being a good mate on paper, weight watching is a good idea.
Guys in higher learning institutes (yeshivas) eat in a mess hall and don’t really have much opportunity to snack. Think gangly nerd who is always in the library and only eats so he can get back to the library.
Once everyone is married and out of full-time learning, the weight gets put on.
About the same thing happens in secular society. Everyone wants an attractive mate, and standards for attractiveness are relatively consistent…
There are always exceptions, but if you have less to offer in one area, you will probably have to settle for someone who has less to offer in that or another area.
Yeah, but culture matters, probably a lot. One of the oldest studies in obesity found it’s contagious – you’re significantly more likely to be overweight if you’re living with overweight people. So if you compare societies with fat acceptance with ones where competition for mates is considered normal, there’s bound to be differences.
People might be competing for mates on grounds other than leanness.
The first difference here is the social importance: getting people married is a major, major focus of the society, and individuals are treated as full adults in some ways only once married. Being an older single (older is subjective; for some, mid-twenties qualifies, while some don’t really consider anyone an older single till early thirties) is a miserable existence in some ways, or is looked at as one at least. The point of life is to raise children in the Orthodox way of life, while living a religious life yourself. This is really, really important to them.
The second difference is the urgency- getting older does you no favors once you’re ready to live a married life. Especially for girls.
These two make people, especially women, care a whole lot more about getting a good mate, and quickly.
There is also a perceived crisis of a lack of quality mates, making people more competitive. If this interests you, look up the “Shidduch crisis”.
So, I have a question about French historical consciousness. What do French people think about Édouard Daladier, or is he some obscure figure on page x of a textbook?
As a French person: the name was familiar, but I couldn’t recall who exactly that was beyond “some 20th government person” and had to look it up.
Since it turns out he was a prime minister under the 3rd Republic, that is not too surprising: the 3rd and 4th Republics were notoriously unstable parliamentary systems where the post of prime minister changed extremely often. The 3rd Republic went through something like 109 different governments over the course of less than 70 years of existence (in case anyone thought modern Italy held some kind of record).
Understandably, very few prime ministers had the time to make a lasting strong impression.
Daladier’s name is mostly remembered by virtue of him having been the prime minister at the time of the defeat against Nazi Germany, but really, the defining prime minister of the era, the one for whom I can actually remember some of the things he did, was Léon Blum.
And him I get confused with Léopold Blum.
“Not nearly as shitty as Chamberlain”, maybe.
Not overly famous, he gets rather overshadowed by the much more high-profile leaders that came afterward.
Definitely a footnote.
“Daladier’s name is mostly remembered by virtue of him having been the prime minister at the time of the defeat against Nazi Germany, but really, the defining prime minister of the era, the one for whom I can actually remember some of the things he did, was Léon Blum.”
Gotta disagree here, at least in my experience, what little notoriety he has comes from his role in the Munich Agreements.
I suspect that to some extent, he was not just forgotten but slipped under the rug. Certainly his post-war career was.
So he is perhaps more of a household name in former Czechoslovakia than in France, since his signature on Munich agreement is very much part of popular consciousness here.
I find this amnesia surprising, since Daladier was the main guy in charge of the French government from April 1938 to May 1940 (he was apparently Prime Minister and Minister of Defence at the same time), and thus presumably bears major responsibility for the miserable defeat of France in 1940. Seems like something that people normally don’t forget.
The official line (taught in school etc.) is that responsibility for the defeat falls squarely on the military in general and the high command in particular. The civilian government is seen as a victim, not a responsible party.
Even right after the war Daladier’s role in the debacle was quickly forgiven/forgotten. He was reelected several times with full party support.
So, it’s time to discuss the first film in our weekly watch-along, Bumblebee. Let’s proceed under the assumption that everyone reading this has seen the movie; feel free to discuss key turning points without worrying about spoilers.
I’m deeply conflicted about this film. There’s some really good stuff here, but also an awful lot of bad. Let’s begin by giving the film a pass on the franchise it’s in. There are aliens; they have come to Earth; they look like big humanoid robots but can transform into mundane vehicles. If you’re not willing to suspend disbelief of that, this isn’t the movie for you.
But even with that allowance, there are some really dumb bits. The human soldiers have harpoons? And the robots have powerful blasters they just don’t use much? And instead they opt for extended bouts of robot kickboxing? Kickboxing duels that the human protagonist decides to run right through? Let me quote one of my favorite villains here: “Lame, lame, lame, lame, LAME!”
Yet somehow from this dunghill sprouts a flower, a touching coming-of-age story about Charlie, a young woman mourning her dead father. She’s a bit of a tomboy and doesn’t fit in with her peers, but she goes on a big adventure and manages to use her special skills to make a difference when it counts. Christina Hodson (the writer) and Hailee Steinfeld (the actress) deserve so much credit for making Charlie feel poignantly real, and giving her an arc that works. I just wish they had found a better place for this story than the middle of a hundred million dollars of dumb special effects.
The film for next week’s SSC Sunday Watch-Along is I Am Mother, available on Netflix.
But that is the kind of story it is, right? At core it is supposed to appeal to a certain kind of childhood fantasy. And the tropes of these stories don’t make logical sense, at least not outside their own universe of expectations. Why do the Power Rangers use karate in a post-industrial world? Because that’s what they do.
This reads to me a little like dinging a soap opera for the implausible twin-back-from-the-dead storyline, or objecting that a pro-wrestler was tactically inefficient by spending time climbing to the top rope. It’s a completely correct objection, but it just means it’s not the kind of story you want.
Now, there are conversations to be had about what action movies ride the plausible edge best, but I don’t think the Transformers franchise ever intended to even try for that.
I really think I’m willing to meet this film halfway. They can have the giant shape-shifting robots from outer space. No problem.
Just make the guns work.
I mean, the franchise’s most iconic villain transforms into a gun.
Not sure why suddenly the fights have to be all kickboxing. Maybe they can build up to kickboxing in the end, but jeez.
I don’t think that’s been true for a while – although I really stopped paying attention post G1.
Yeah, he hasn’t. He became a tank at some point and never looked back.
Dunno why either; it’s not like there aren’t like a billion toys that are guns, or that other Transformer toys don’t come with gun accessories themselves.
Maybe they could make him not look like a Walter whatever, just a Nerf gun or something. Even in G1 he shot lasers I think, so maybe just make him look the part, instead of being a realistic gun which shoots lasers.
Great toy, terrible for plausibility. He has to either shrink down and somehow reduce his mass, which seems undesirable for a gun, or else reduce his mobility and hope someone comes along to pull the over-sized trigger for him.
Notably he can fire the gun in robot form, so the transformation gains him nothing.
In the GIJoe cross-over they wisely altered him so he transformed into a tank, although I didn’t like the cross-over; I don’t want these implausible alien robots in my stories of super-soldiers and mind controlled genetically engineered ninjas.
Megatron’s probably the worst offender, given how big the robot form is compared to everyone else, but if you want conservation of volume between forms then Transformers isn’t the franchise for you. Cars tend to be considerably smaller than planes, for example.
Do you mean cars vs planes in real life, or car vs plane forms of the robots?
Anyway, like you say, Megatron is particularly egregious, shrinking from like 12 feet tall to 12 inches long. And even if he sticks his mass in the same place Mr Fantastic gets his wardrobe, there’s still the fact that there’s little advantage to reducing the size of your firepower so dramatically.
Maybe the laser is more concentrated when he is small.
The actual worst offender is that his cannon in robot form is the scope in gun form.
I mean, you can watch the original intro to the 1984 cartoon.
There are guns, but they are completely ineffective. They move to melee range and engage in what looks like judo. There is some more ineffective gunplay.
This was a kids’ cartoon from the 80s and I don’t think they wanted guns to be shown hitting things or doing damage, except to the environment.
And if you look at the first episode, the first all-in battle at about 7:50 is … kickboxing and martial arts until they crash land on ancient Earth.
It was baked in from the beginning.
’80s parents were really worried about the content of action cartoons. Unarmed martial arts was considered much less worrisome than getting hit by gunfire.
I remember hearing that the first cartoon based on a toy was He-Man and the Masters of the Universe, which the Mattel designers started as a sort of Edgar Rice Burroughs homage where characters inexplicably fought with both space guns and swords. By the time it got to the cartoon stage, He-Man was supposed to punch and grapple rather than even use a sword.
The original “Land of the Lost” (1974–1976) had to change one episode from featuring a Civil War soldier who shot dinosauroids with a Civil War rifle, to a Civil War soldier who shot dinosauroids with a Civil War cannon. Because shooting some sort of rifle at people pretending to be dinosaurs is plausibly imitable behavior, whereas very few children can get hold of working cannons.
@John Schilling: That’s fantastic.
I mean real-world cars and planes and how those translate to Transformers doing their thing.
In the media I’m familiar with (G1 cartoon, mostly), Optimus Prime is consistently portrayed as being taller than Starscream and – together with Megatron – is the biggest character excepting special cases (Omega Supreme, combined forms, etc.)
It’s not necessarily easy to tell what the dimensions of the trailer tractor that Optimus Prime transforms into might be, but this page I found lets me eyeball it at 25′ or so in length.
Starscream transforms into an F-15, which is 63′ in length – around 2.5 times as long. If we wanted to keep scales consistent, Starscream would tower over everyone. This is borne out very well by the high quality collectors’ models (discounting the original toys, because those only bore a passing resemblance to the cartoon characters, and for good reason) – you can have either the robot or vehicle forms to scale, but not both.
Of course, Megatron wasn’t the only Incredible Shrinking Transformer in the original cast. Soundwave, not much shorter than his boss, transformed into a boom-box (IIRC a walkman in the case of the original toy). Rumble had no problem going from bigger-than-human to standard compact cassette, etc.
@Faza: The original Optimus Prime vehicle mode wasn’t even that long. It was a cab-over-engine model (FL-86) made by Freightliner of Portland before the 1981 Mercedes-Benz buyout.
“Wait, before 1981?” Yeah, so it turns out that the Transformers toys were licensed by Hasbro from Takara of Japan, where they were designed like four years before the cartoon came out. Hasbro basically took some relatively old toys and paid some dudes in the US entertainment industry to popularize them with new fiction.
When a kid’s got several action figures, the super special “fire this plastic accessory at your enemies!” feature is used for novelty at the beginning a couple of times…but the vast majority of play is going to be bashing those figures together directly to simulate hand-to-hand.
Beam spam is saved for the “power of love unite” climaxes.
But why do we? Is it impossible to have a car-sized jet fighter if we don’t care about crewing it?
Then again, they did crew it, so I guess I’ll grant your point!
Thanks for the info. Now I know…
It’s been a while since I’ve delved into model plane construction (and most of the books I have on the subject date from the 60s, being my Dad’s), but I’d say that it shouldn’t be impossible, as such (the lower mass may actually work in our favour, though I’d need to research whether the smaller engines and wingspan would not be an issue).
It would, however, fly in the face of the whole “robots in disguise” angle – a 1:3 scale F-15 would certainly stand out. (Although, to be honest, the funky colour scheme, lack of transponder, and being Starscream is probably more of an issue here.)
Since we’re on the subject, I’ve come to realize that Megatron probably isn’t the most egregious example, given that he transforms into a Transformer-sized gun. The few times I’ve seen him being fired – funny ol’ word, that – was in the hands of either Starscream or Soundwave, who are among the larger members of the cast. I don’t recall ever seeing a human packing Megatron (I’m really digging deeper, aren’t I?)
On reflection, Soundwave is probably the one who undergoes the biggest scale change. I now distinctly remember seeing him in walkman form in the cartoon (acting as infiltrator/recorder). However, I also recall him being referred to as a “funky boom box” in a different episode – a line that stuck with me over the decades for some reason. This would suggest that not only does Soundwave lose most of his volume (geometrically, that is) when transforming, but also that even his transformed size isn’t consistent.
I’ve discussed with friends the kind of silly idea of whether you would vote an autocratic government into power if it were committed and able to steer us out of the climate change issue, leaving us afterwards with the problem of getting back to democracy. That democracy makes it very hard to get on with the climate issue seemed obvious to me. Then I read this https://unherd.com/2018/10/liberal-individualism-killing-planet/ which mattered mostly for pointing me to this study: https://www.tandfonline.com/doi/full/10.1080/09644016.2018.1444723
So my ‘recognising’ rested on the assumption that democracies fare far worse than autocracies in tackling climate change. If this study were updated to include data since 2011, would China’s progress since then lend that more support, or not? I just haven’t got much information about how successful they’ve actually been.
Taking my assumption even further, I always just assumed that things like the 5p plastic bag charge in the UK took years to arrive, but that an autocracy, if it took the issue seriously enough, could outlaw all plastic to some reasonable effect tomorrow. I’m probably misunderstanding a lot here, so let me know how my assumption rubs up against the data/study, or whether data since 2011 might lend it some support.
I wouldn’t be so sure about autocratic, but that’s roughly what laws are for, right? To stop individual needs from undermining the whole, and to solve tragedies of the commons?
In the end, I think it’s the lack of critical mass of people actually doing something. Nobody wants to gimp themselves so their neighbour can pollute in peace.
This seems like it’s doing a lot of work in your assumptions here. Why would your hypothetical autocrat care more about plastic bags, say, than the mass of voters in a democracy?
Thoughts inspired by Roman dictators?
No real answer, but an autocrat would not even be necessary. It would suffice to have a democratic state or state-like power (e.g. the EU) that has both the foresight and the power to locally sub-optimize for a global optimum. It must not be in the grip of corporate takeover by complacent industries, armies, or media. So, no hijacking of democracy as a prerequisite. That is about as unrealistic as a contemporary autocrat letting go of their power. (And certainly not thought through.)
https://www.visualcapitalist.com/all-the-worlds-carbon-emissions-in-one-chart/ tells me that China has more carbon emissions than any other country by a large margin. (Admittedly they’re not as bad per capita, though still worse than the world average.)
https://ourworldindata.org/plastic-pollution notes that they’re still really bad for oceanic plastic.
https://www.cbsnews.com/pictures/the-most-polluted-cities-in-the-world-ranked tells us the most polluted cities in the world are mostly Chinese.
I agree that autocratic countries could, theoretically, decrease pollution a whole lot. But empirically they don’t seem to do that.
So, to answer your question: if Omega shows up and promises me that we get a perfect solution to climate change, but in exchange every country turns into an autocracy, I’d seriously consider it.
But if what’s actually happening is someone like Hugo Chavez is running for president, and he says: “If we elect me president and change the Constitution to give me unlimited power, I’ll produce a perfect solution to climate change! It’s the only way to save our planet!” then I’m voting against.
This is misleading, as the reason is only that they are poorer. In reality, they are among the worst polluters per dollar produced.
Onshoring all Chinese production to the United States at our current carbon rate would cut their carbon production by a fifth. Other countries would be even more substantial, but they probably can’t produce enough.
>But if what’s actually happening is someone like Hugo Chavez is running for president, and he says: “If we elect me president and change the Constitution to give me unlimited power, I’ll produce a perfect solution to climate change! It’s the only way to save our planet!” then I’m voting against.
This is an unbelievably dishonest interpretation of the constitutional referenda.
What constitutional referendum?
I think it’s probably a coincidence: Chavez is the go-to example of a modern autocrat arising from democracy and destroying his country, and this was therefore a cartoon Chavez.
Although I note you’re not actually saying how this was wrong, which is a bit unhelpful. I’m happy to accept that my views on Chavez aren’t greatly informed, but to convince me that an opinion about how he is bad is wrong takes evidence. The risk is that otherwise you come across as the Oliver Stone-style ‘Chavez is good regardless’ type.
An autocrat could do it, but only by the “shivering in the dark” method. The trick would be finding an autocrat who actually makes that his first priority. China, for instance, does not.
Look around you, right now, wherever you are. How many things that you can spot right now are made of some kind of plastic? What materials should be used instead? Consider that few other materials have comparable strength-to-weight ratios–you could build a car entirely out of metal, but it’d weigh a good deal more (and thus use more fuel to drive). How sure are you that the replacements for plastic wouldn’t actually create more emissions? (See e.g. this study on the relative emissions of various types of grocery bags. tl;dr: Standard lightweight plastic bags have the smallest footprint for a single use; reusable bags can beat them out iff actually reused.)
I phrase these as rhetorical questions largely because I myself am unsure. When I tried to google around for figures on the relative embodied carbon footprint of metal vs. plastic, the first page of results all seemed to be from either plastics manufacturers or steel manufacturers. I’m sure careful research could come to a more definitive, unbiased conclusion, but I don’t have the time to do it right now. My core point is, before you outlaw all (or any) plastic, or even set that as a goal, you should make sure that your proposal won’t actually be counterproductive to your real objectives.
It’s weird the degree to which the amazing properties of plastic, and well, pretty much everything derived from oil are overlooked in favor of theories about greed and corruption and gridlock. Plastic measurably makes my life better, and so does petroleum and all its cousins.
It’s actually hard to just outlaw stuff without something to replace it. I might be annoyed at having my paper straw dissolve in McDonald’s milkshake to reduce 0.3% of plastic waste, and that might be the end of it, but as soon as you start getting into serious efforts to reduce plastic use by double digit percentages, then you are going to start seeing immense economic effects. Claims like “100 companies are responsible for 71% of emissions” makes it seem as though we have some Captain Planet villains hanging around just polluting for the hell of it even though the oil they extract and refine is the lifeblood of our civilization.
When Macron passed his fuel tax to “fight climate change”, it kicked off the yellow vest protests. Sure, it may have been his austerity measures that ground away up until then, but it was his environmental measure that kicked things off. If an autocrat wanted to go as far as banning plastic entirely or banning petrol, unless he had some replacements for these things ready to smoothly transition the populace without an appreciable dip in living standards, he’d better get ready for massive protests, which autocracy doesn’t make you immune to.
Now, the answer to the fuel issue is to fund green energy, solar, wind, nuclear etc. The answer to the plastic issue would involve researching biodegradable alternatives that actually still more or less do the job.
We’re already doing these things… so perhaps we’re not doing them enough because we aren’t going to meet emissions targets to avoid X degrees of warming according to some models, and we haven’t reduced plastic enough to stop fish choking on microparticles? So then perhaps we could have even more funding, but then the question becomes about what the limits to funding are. After all, you’re paying scientists and chemists to do R&D and you can’t just conjure scientists from the air. Are we bottomed out in terms of intellectual potential? Or is that there are a load of researchers just standing around kicking their heels and not saving the planet? Following the problem to its end; to solve the general civilizational pollutant problem fast enough to keep ahead of the damage, we need to either give more money to existing researchers to tell them to find replacements, or we need to tap into the potential researcher pool and see how much more we can squeeze out of our educational system.
But just banning stuff? Autocrat or democrat, if you ban trivial things like straws or plastic bags, you can get away with it with some grumbles, but the effect on the problem will be minimal. If you try to cut deeper into the problem with punitive measures, the deeper you cut, the greater the potential for unrest will be. There’s no magical political system that can avoid this.
+1. Even if the use of fossil fuels is a serious problem that will have lasting negative impacts on the entire planet, it might not be a problem that is solvable by government intervention. I see the climate crisis primarily as an engineering problem, secondarily as a political problem. If technologies existed that could effectively replace* the current fossil-fuel-based infrastructure of the industrialized world, a government could either directly fund its implementation, ban the use of fossil fuels, or tax carbon emissions until it made sense for private industry to implement the replacement tech. But while no such technology exists, carbon bans or caps will result in a reduction of quality of life, and subsidizing existing “green” energy tech will result in wasting money on projects that don’t actually have the potential to replace fuel-burning plants. (The windmills dotting the landscape provide only 2.5% of the country’s power and can only be used as a supplement to fossil-fuel-based power plants, since they only generate power when the weather conditions are right.) An evenly-applied carbon (/carbon equivalent) emissions tax could provide incentive for innovation and thus is a policy I’d actually support as likely to be effective. Directly funding research/engineering into potential replacements might also help, but runs the risk of backing the wrong horse if the tech with the most potential turns out not to be the one you subsidized.
*where “effectively replace” is defined as “fossil fuel use is reduced to levels that no longer put the planet at significant risk of further climate change, with no decrease or minimal decrease in quality of life for people who used to use fossil fuels”
Eh.. It most certainly is solvable by government action.
Step one: Messmer plan, 2.0 – France has absurdly clean electricity due to a dirigiste plan which is the very definition of government action.
Step two: More dirigism. Pouring some billions into electrifying industrial processes and transport is well within the realm of things governments have successfully done before, and all the really major carbon emitting industrial processes have perfectly clear electro-chemical or otherwise electrical alternatives at some stage between lab bench and industrial pilot plant, so there is not even any real uncertainty it is possible.
Ore smelting can be done with microwaves, concrete can be made by an electro-chemical process that spits out lime and graphite, ammonia is really trivial to produce by throwing electricity at it… and so on.
I just want to say this is the kind of thread that keeps me coming back to SSC.
I stand corrected, to my knowledge nuclear power is a viable alternative to fossil fuels for electricity generation. Whether it’s “clean” or “sustainable” is a matter of intense debate, but AFAIK they’re better than coal plants. However, many people and organizations pushing for a “Green New Deal” or “climate action” in general explicitly oppose nuclear power. This wouldn’t be a problem for a hypothetical dictator who liked nuclear power, but it’s certainly an obstacle in real political landscapes.
I’m less convinced of the feasibility of electrifying industrial processes. “At some stage between lab bench and industrial pilot plant” doesn’t necessarily imply “Economically viable within the next 20 years”. If you double the cost of most industrial products, then you reduce quality of life by halving the amount of products people can afford.
Well, there are tradeoffs eventually, but I don’t think we’re already near the Pareto frontier. For example, in certain places public buildings in the summer tend to have way more air conditioning than enough; if we kept them slightly warmer we would both reduce GHG emissions and improve QoL. Or remove agricultural subsidies in countries which already have way more obese than malnourished people, subsidize public transportation so that you’ll reduce both GHG emissions and traffic congestion, etc.
While I agree with each of your individual proposals, I don’t think they’re actually Pareto optimizations, strictly speaking; all would have negative impacts on some people. (Nor do I think they’re necessarily the consequence of strict carbon caps.)
I hate excessive air conditioning as much as you do, but either: the people who set the thermostats in those buildings are too dumb to respond to the combination of the preferences of everyone in the building and the financial incentive to use less AC, or some people just like it that cold and would be unhappy if it were warmer. Might improve net QoL, but I’m not sure if it’d actually happen even if electricity were more expensive. Also consider that emissions could also be cut by using less heat in the winter, not sure how you’d feel about that.
I’m no fan of agricultural subsidies, but removing them seems like a separate issue to me. And to be pedantic, it would hurt farmers and might raise food prices. (Still generally in favor of scrapping them, though.)
Great for people who use public transport. Significantly less great for people who don’t. And if your origin and your destination aren’t connected by a transit line (e.g. virtually all of suburbia), it’s not even an option.
Here are some more ideas that don’t require tradeoffs:
Public Service Announcements that encourage people to make sure their vehicle’s tires are properly inflated, that they aren’t carrying around extra weight in the trunk, that they remove roof racks when not in use, etc.
Public Service Announcements to encourage people not to throw away food. If you’re cold, put on a sweater.
Traffic lights that go red → red+yellow → green, like in the UK, to warn people that the light is about to change. It makes the traffic flow slightly better.
More yield signs instead of stop signs, like in New Zealand.
Weight penalties for air travel. And weigh each passenger with their carry-on, and display the weight on a big sign so everyone in line can see it. Will encourage not carrying as much weight on aircraft, and weight loss in general. (this one would definitely hurt QoL for a lot of people, but they could stand to lose a few pounds I’m sure)
Losing significant amounts of weight lowers quality of life for a lot of fat people. That’s why they don’t do it and/or don’t sustain it.
As a thin person who eats way too much junk food: Just because it’s easy for you to stay in shape, doesn’t mean it’s easy for everyone. People have different metabolisms.
I’m guessing a lot of airline passengers would feel like that was a pretty unpleasant tradeoff. And otherwise, what you’ve got is using PSAs to nag people to save more energy. I’ll admit I’m pretty skeptical about this working.
Unfortunately, every little bit helps…only a little.
Aside from the airline charging for heavy passengers, the others increase quality of life. There’s a lot of frustration among the public that they can’t do anything about global warming. Give them a few things to do, like keeping their tires inflated, and they’ll feel good about it.
And the yield signs and red-yellow lights make driving more pleasant, and save you time! There’s no downside except having to change out the signs.
Little things like that add up.
I don’t think they add up, especially when I see the behavior of many I know and read about, who make these small sacrifices and then feel justified or forced to fly a lot or otherwise pollute a lot in ways that far exceed their meager efforts to combat climate change.
Making people feel like they’re helping to stop climate change != actually stopping climate change. Could even be counterproductive as Aapje mentions if people feel like they’ve “done their part” and stop worrying about it.
While I’ll hop on board the “better traffic lights/signage” train, I think it’s a bit disingenuous to claim “no downsides except the primary downside!” Creating and airing PSA, or changing out traffic lights, aren’t free. And I don’t think the benefits are so obviously vast as to make the cost negligible.
I’d like to push back against that sentiment more thoroughly. You can present statistics disingenuously to claim that e.g.
(example lifted almost directly from this chapter of Sustainable Energy Without the Hot Air)
But though the factoid is literally true, the impressive-sounding 66 000 homes is only a quarter of a percent of the total 25,000,000. A more representative way to put it would be:
And when everyone helps, it adds up to…
If you want to reduce emissions by, say, 50%, then every person has to reduce their emissions by 50% (on average).
The problem with trying to solve carbon emissions by incremental improvements in energy efficiency is that they don’t scale. Sure, you can reduce your gasoline usage by removing your roof rack and inflating your tires. But you’ve only reduced it by [pulls number out of rear end] 10%. You can’t remove 9 more roof racks and inflate your tires 900% more to make your car run without burning gas. That leaves everyone’s cars still burning 90% of the gasoline they were previously–assuming unrealistically that everyone pays attention to the PSA.
If you actually want to make a dent, you need big changes: either technology that greatly increases efficiency or provides carbon-neutral energy sources; or significant lifestyle changes that’ll be a lot more painful than removing a roof rack.
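To make the scaling point concrete, here’s a back-of-envelope sketch. The 10% figure is the same made-up number as above, and the 50% target is purely illustrative:

```python
# Back-of-envelope: one-off efficiency tweaks don't compound the way
# repeated percentage cuts would. All numbers are illustrative.

baseline = 100.0  # arbitrary units of gasoline burned per year

# A one-time tweak (removing the roof rack, inflating the tires)
# saves ~10%, but you can only do it once -- there is no second
# roof rack to remove.
after_tweaks = baseline * (1 - 0.10)

# To hit a 50% reduction target, the rest must come from elsewhere:
# new energy sources or genuine lifestyle changes.
target = baseline * 0.50
remaining_cut = after_tweaks - target

print(after_tweaks)   # 90.0 -- still 90% of the original burn
print(remaining_cut)  # 40.0 -- the gap the tweaks can't close
```

Even with everyone heeding the PSA, you end the exercise with most of the original consumption intact.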
OTOH, if there really are available improvements that are just lying there waiting to be implemented (I air up my tires and save a few bucks a year in gas), then it’s good to try to get the word out. That’s different from PSA messages telling people to put on a sweater instead of turning on the heat–there, you’re nagging people to make a different tradeoff than they want to, not telling them how to get some free money.
I guess I am looking at this from a business perspective. At my company, if there was an opportunity to shave off a quarter of a percent of costs it would be well worth doing. Even one tenth of that would be worth doing, and 1/100th of a quarter of a percent would be worth doing as long as it had no tradeoffs (an example might be, changing a service subscription to drop some feature we never use).
I suppose the difference is that we are trying to profit, and a quarter of a percent of costs ends up being 2.5%-5% of our profit. I don’t know if that maps onto affecting climate change. It would map onto making our civilization more profitable though, right?
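To put rough numbers on that leverage (the revenue and margin figures here are made-up assumptions, not anyone’s actual books):

```python
# Why a 0.25% cost cut can mean a 2.5-5% profit bump: with thin
# margins, small cost savings are amplified in profit. Figures are
# illustrative assumptions only.

def profit_gain(revenue, margin, cost_cut):
    """Relative profit increase from cutting costs by `cost_cut`
    (a fraction of costs), at a given profit margin on revenue."""
    profit = revenue * margin
    costs = revenue - profit
    savings = costs * cost_cut
    return savings / profit

# At a 5% margin, a 0.25% cost cut lifts profit by ~4.75%.
print(round(profit_gain(100, 0.05, 0.0025), 4))  # 0.0475
# At a 10% margin, the same cut lifts profit by ~2.25%.
print(round(profit_gain(100, 0.10, 0.0025), 4))  # 0.0225
```

The thinner the margin, the bigger the multiplier, which is why tiny cost reductions are worth chasing in a business context.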
I’m not saying small interventions and efficiency gains are useless or ineffective; the point I’m making is that they won’t be enough to accomplish the kind of change we’d need to become carbon-neutral. It’d be as if your company needed to double its revenue, and was trying to go about that goal via incremental improvements like redesigning the logo to appeal more to customers. It might be a step in the right direction, but the magnitude is nowhere near what you need.
If emissions anywhere near current levels are likely to cause catastrophic change to the climate, then what we need is a massive reduction. Incremental gains don’t hurt, but phrasing the issue in terms of “every bit helps” obscures the fact that some bits might help orders of magnitude more than others. I presume your company does some kind of cost-benefit analysis even on decisions that seem to have no downside? That kind of careful, mathematical analysis is even more important when it comes to national policy with global effects.
It’s worth noting that they skipped horseshit PSAs about tire pressures, and just mandated that your car nag you directly if you’ve got a low tire.
It probably helped on the margins, but anybody who hasn’t gotten the word from the beeping when they start their vehicle isn’t going to do it because of a commercial.
If my company needed to make a large capital expenditure, say double our current capital (analogous to serious investment in alternative energy or nuclear, for example), reducing our costs by just 10% would more than double our profits, which would basically double our access to capital.
If civilization reduced waste by 2%, growth would increase a lot* and there would be far more wealth available on the planet to fix problems like global warming.
*At least, that’s my conjecture; I don’t know if the analogy works, and I’m hoping someone who knows more about economics can tell me if I’m wrong.
Except that you are reducing the cost of climate change at some cost in doing the reduction. If you could reduce materials cost by a quarter of a percent but doing that required hiring more workers, it wouldn’t have to be many more to make it a net loss.
Similarly for “Pouring some billions into electrifying industrial processes and transport.”
Hypothetically though, if you think of civilization as a whole as a business, and you can cut costs in a way that has no tradeoffs by, say, 0.25% (let’s imagine there’s a “free” new technology for growing a certain crop, for example), what does that do to global prosperity and growth? For a business, a 0.25% reduction in cost has an outsized effect on profit, which allows greater investment in capital.
Does that analogy work for the global economy as a whole?
It increases global prosperity. Whether it increases growth depends on whether the extra resources are consumed or invested.
Well, it’s more that I lean towards the positive end of subsidizing things rather than the negative end of banning things. Whether we call funding stuff “government intervention” still, I don’t know.
Yep, or they’ll be tepid, in which case effectively useless anyway, which leads to calls for them to be more punitive, which down the chain worsens things for the average person who is already in a precarious economic situation.
There’s probably some trade-off between spending more money on R&D to improve green tech, and spending the money on deployment of existing green tech. I hope some smart person somewhere is calculating what the Pareto-optimized version of that trade-off looks like.
I think at the very least this is the form of government intervention that is the least risky and still vents political pressure by seeming to be “doing something”.
We could hedge our bets by putting an equal amount of funding into solar, wind, and nuclear, but we also know that the inherent limitations of things like solar, wind, tidal, hydropower and friends (which are really just cousins of solar) are surpassed by nuclear energy. I would recommend letting the private sector handle solar and friends more and transferring some more money to cracking the holy grail of nuclear fusion.
To steelman the autocrat side, probably there are MANY researchers, analysts, and economists who could be working to help create and implement better solutions to climate change (including myself) who are currently e.g. helping mortgage servicers slightly reduce their staffing models by streamlining workflow. If that didn’t pay more than helping carmakers produce parts using fewer emissions and sourcing materials from lower-impact mining and refining operations, potentially a lot more progress could be made. Likely only an incredibly tiny fraction of those capable of helping in a serious capacity currently are, due to economic incentives. In an autocracy subsidies and trade rules are easier to manipulate — China alone is probably the reason solar is now competitive with fossil fuels in so many places.
Yes, an autocrat could pass laws to fight climate change more easily than a democracy can, in the same way that I could decide to go to war with Canada more easily than the United States as a whole can. But I would never do that, because I would die in a hail of gunfire. Similarly, autocrats, though they have de jure absolute power, are de facto limited in what they can get done without getting deposed. Autocrats spend most of their political capital maintaining their position as autocrat: there isn’t a lot of gas left in the tank for changing the world. Even if you really wanted to fight climate change, you would still be highly limited in what you could accomplish while remaining the autocrat. This video is an excellent in-depth look at why autocrats, even theoretical benevolent autocrats, rarely actually pass any useful reforms for society, due to the game theory around autocracy as a system of government.
I like the idea of one guy livestreaming the invasion countdown as he nervously crosses the border in camo with many weapons strapped to his body, only to be annihilated by a spontaneous brigade of mini-gun wielding mounties on moose the moment he steps over.
Weren’t there some Fenian invasions of Canada in the nineteenth century that were basically this (albeit the livestream was a journalist along for the ride)?
As some here will remember, fifty years ago population growth had the same role in public consciousness that global warming does now. The autocratic Chinese government took decisive action to deal with it–the one child policy.
Was that a good decision or a very bad one?
Whether it was good depends on whether 2 billion Chinese would be better than 1.4 billion.
Surprisingly for Westerners, most Chinese seem to support it.
On the other hand, if the demographic transition of the late- and post-Mao years had happened a mere 20 or 30 years later, we would have 4 billion Chinese. Neat.
Unclear what those 4 billion Chinese are going to eat, especially in an economy that has not gone through the demographic transition. Maybe we’d have a Malthusian nightmare instead.
The policy was never that universal (like most things in China), though. It’s probably the best example of an autocratic environmental policy working, but it was patchy in application.
More to the point it’s a lot lot easier to control breeding behaviour than consumer behaviour, as one is a rare event affecting only a small number of people at once, whilst any serious change to consumer behaviour affects a lot of the population at once. It might seem counterintuitive to see plastics as more likely to cause protests than babies, but I suspect this is indeed the case.
Disagree that this would make a material difference. The lack of action on climate change so far has been because of free rider problems on a national level. An autocrat changing their country whole hog to reduce emissions would be out competed by all other countries which didn’t, and would soon lose their power. What’s missing is something like a multilateral trade agreement around emissions. Some kind of cap and trade market for carbon emissions and significant tariffs for goods in a nation not following one. (Or any other arrangement, but I see that one as the most likely.)
OP wasn’t necessarily demanding a national level. Although, using China as an example, s/he probably was thinking on this scale.
I would not vote for such a government, and I say this as someone who is very concerned about climate change. Even if an autocratic government were significantly more effective at fighting climate change than any current government, which seems unlikely, it just doesn’t seem like a good idea.
The problem is with the “climate change issue”, not with democracy or dictatorship. Even from your comment, it’s horribly vague. Plastic bags are killing turtles, not speeding up climate change, not in any significant measure. Well, of course everything is affecting everything else, but if you start to prioritize fractions of a percentage point and leave double-digit percentage changes on the table… I’m just going to say the fractions of a percent aren’t where the leverage is.
We have a bunch of “green” issues, and the public cares more about what’s fashionable this decade than about any coherent long-term action plan. We’re as good as ignoring air pollution, which is literally killing us and quite likely causing massive numbers of early-dementia cases, but we care about plastic straws.
So, first of all we should define the problem, or problems. Continuous push to solve it would have a chance of making some meaningful change. Running around like headless chickens, less so. What running around does is help politicians get elected.
Why do you expect that:
a. The autocrats will make good decisions for solving the problems they intend to solve?
b. The autocrats will want to solve the problems you want solved?
As far as (a), plastic bag bans/fees are probably a pretty bad use of resources. Recycling and biofuels in the US are also mostly wastes of money/resources that don’t solve the problems they’re supposed to solve. I would expect an autocratic regime to do a lot of those, because figuring out how to solve those problems is often hard, and there’s not anyone who can provide effective feedback to tell them they’re imposing higher costs than the benefits they’re getting. (They don’t have to worry about getting voted out.) Examples of screw-ups of this kind from autocracies are pretty common: think of Stalin’s treaty with Hitler in WW2–Stalin definitely was trying to keep the USSR intact and strong, but when he made a huge error w.r.t. trusting Hitler, nobody was able to push back effectively on that error.
As far as (b), it seems very likely that the main goal of the autocrats will be staying in power, or gaining power within the autocracy. That will lead to a lot of decisions that don’t make sense w.r.t. your goals of addressing climate change or environmental issues, but that do make sense w.r.t. helping one person rise in the hierarchy relative to another, purge his enemies, ensure he can’t be removed, etc. For example, Stalin had quite a lot of his best military commanders in political prisons when WW2 broke out–that was very bad for the USSR’s military effectiveness, but very good for preventing a military coup against Stalin.
As I understand the situation, turning corn into ethanol does not on net reduce CO2 emissions, may increase them—and is the major U.S. government program to promote hunger in the third world, since it absorbs a sizable fraction of the corn crop of the world’s biggest producer.
It exists because raising the price of corn benefits U.S. farmers, whose votes politicians want. Al Gore, to his credit, actually admitted that. One would expect similar effects in a non-democratic political system.
I’m just here to note that I watched Er ist wieder da (Look Who’s Back) today and found the comments about Bündnis 90/Die Grünen kind of funny.
Everyone knows about the Utility Monster, someone who gets so much utility out of things that everyone should drop what they are doing to please them. Is there an opposite to this concept? A Utility Victim, perhaps: someone whose suffering is so pleasurable to others that they should drop what they are doing and torture them?
A friend of mine once asked an econ prof what would happen if his utility function was for mine to be minimal – how you would solve that.
The answer was: utility functions are mappings from object-level outcomes to how much you subjectively like those outcomes (how much utility they represent for you). You cannot use another utility function’s output as your input, as those are not object-level outcomes.
So if you try the outcome is undefined. Like feeding f(x) = x^2 with x = cat smiley.
In fact, if you allow utility functions to depend on other utility functions for calculations it’s easy to build an incalculable catch 22:
Let f: x -> g(x) + c and g: x -> f(x) + d. Now the result of f is required to calculate the result of g, and the result of g is required to calculate the result of f. That is not a desirable property of utility functions, and I suspect it is (one of) the reasons they are defined the way they are.
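That circularity can be made concrete in a few lines (a hypothetical sketch, with c = 1 and d = 2 standing in for the constants above):

```python
def f(x):
    # f's "utility" depends on g's output (c = 1)
    return g(x) + 1

def g(x):
    # g's "utility" depends on f's output (d = 2): a circular definition
    return f(x) + 2

# Evaluating either function never bottoms out in an object-level outcome,
# so Python eventually gives up on the infinite regress.
try:
    f(0)
    outcome = "returned a value"
except RecursionError:
    outcome = "RecursionError"

print(outcome)  # -> RecursionError
```

Restricting the domain of utility functions to object-level outcomes is exactly what rules this regress out.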
Two object level translations: “I like the opposite of what you like” and “I like to see you unhappy”. You can get either of those in reality: the first assumes you have fixed preferences, which I learn and then adopt preferences such that my function is high where yours is low. The second doesn’t assume fixed preferences, but instead of learning your preferences I observe your reaction to outcomes and I’m happy iff you’re not. Tribalism in general, and The Discourse around Owning The Libs in particular, implies to me that this is a component of some people’s utility function in nature.