This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:
1. Those of you who don’t use ad-blocker may notice some more traditional Google-style sidebar ads. I’m experimenting to see how much money they make vs. how much they annoy people. If you are annoyed by them, please let me know.
2. Someone is doing one of those tag your location on a map things for SSC users. If you sign up, you may want to include some identifying details or contact information, since right now most of the tags don’t seem very helpful unless people are regularly checking their accounts on the site.
3. I’m considering a “culture war ban” for users who make generally positive contributions to the community but don’t seem to be able to discuss politics responsibly. This would look like me emailing them saying “You’re banned from discussing culture war topics here for three months” and banning them outright if they break the restriction. Pros: I could stop users who break rules only in the context of culture war topics without removing them from the blog entirely. Cons: I would be tempted to use it much more than I use current bans, it might be infuriating for people to read other people’s bad politics but not be able to respond, I’m not sure how to do it without it being an administrative headache for me. Let me know what you think.
I expect 3 to work badly.
As described, it doesn’t appear that anyone other than you (Scott) and the subject of the ban would be aware of the ban. Therefore only those people could police the ban. I’m sure that most SSC participants would self-police effectively, but I suspect that the subset who break rules in the context of CW topics are less likely to do so. If the subjects fail to self-police effectively, then you (Scott) are faced with the impossible task of personally checking that they’re complying with the restriction.
This is exacerbated by the fact that “culture war” really isn’t a clearly defined category (and in an SSC context seems to mean something slightly different from what it normally means), so it would be difficult even for a good faith participant to be confident they were complying with the ban and the person enforcing the ban would potentially be faced with a whole series of difficult decisions as to whether comments were straying into the forbidden territory.
The suggestion appears to be aimed at dealing with users who break the rules in a CW context but otherwise make a positive contribution to the blog. I’m afraid I’m not entirely clear who these people are, so I may not have a good understanding of the problem, but perhaps you could make more use of warnings or short bans? You could even say in a warning, “If you find yourself incapable of posting on this topic in a constructive way [or whatever] then you may wish to consider refraining from posting on this topic at all, because if you continue as you are I will have to ban you, which would be unfortunate as I value your other contributions.”
I’ve now worked out what DavidFriedman is referring to. If 3 is in fact a response to that episode, I think it’s bad for a further reason.
If (because of problems you’ve previously referred to or other reasons) you prefer that people not discuss certain topics on this blog, you should ban those topics. You should not allow discussion of the topics while secretly banning people who take one side of the argument from participating. That would obviously be antithetical to our ideals of rational discourse.
This is a strawman, it’s nowhere near what Scott is suggesting.
Based on recent public banning behavior it would seem to be the likely result. You know Bayesian reasoning and all.
“If you find yourself incapable of posting on this topic in a constructive way [or whatever] then you may wish to consider refraining from posting on this topic at all, because if you continue as you are I will have to ban you, which would be unfortunate as I value your other contributions.”
Seems like a pretty good halfway-house solution to me.
I was banned for 3 months for making a disrespectful (but obviously totally fair and deserved) comment about Trump.
Had I known that Trump’s honour was a sacred cow on SSC, I would have abstained from making a crude comment about him/his abilities. I mean, I get it. There’s no great intellectual contribution to be made by Trump supporters and myself trading insults in the comment section. I thought I was being witty but I get it.
Now, as I said, the 3-month ban didn’t really affect me much since I’m not a regular contributor (I’ve read the blog regularly since I discovered it last year and like to catch up on long-ago topics, but I don’t comment often), but it kinda stung – I was being super witty, goddamnit!
A ‘fair warning’ would have been a decent ‘Strike 1. Strike 2 and you’re out’ method.
I suspect the problem with your comment wasn’t insulting Trump’s honor but that comments with such tone steer the conversation towards Trump opponents and supporters trading insults, and away from Trump opponents and supporters having a constructive debate.
If anyone would like to decide for themselves whether Frederic’s comment was fair, deserved, or super witty, they can read it here.
Thanks.
I don’t think that post made any valuable contribution to the discussion—just insults without argument.
Hmmm. Re-reading, yeah, okay, not my best and not exactly witty. Oh well. Selective memory bias.
I shouldn’t have done it. OTOH, I do think I was making a valid point in the first sentence.
Anyhooo… back to moderation tactics: I think I’d have reacted correctly to a warning and dialed it down as a consequence. And found it a slightly better experience than a straight ban.
That said, the ban worked as well in my case, so it’s hard to argue too strenuously for a different moderation policy.
You probably meant paragraph, because I doubt you are referring to “Hahahahahaha.”
It’s quite a poor paragraph, which includes circular reasoning. You argue that Trump is racist by arguing that his racism will make him treat Norwegian migrants better than non-whites, but you don’t actually seem to have any evidence that he did/does treat Norwegian migrants better.
If you want to argue supposed hypocrisy in the future (elsewhere), I suggest you only make claims that you can actually substantiate and then provide the evidence. Then others can evaluate the evidence.
In many cases, what seems hypocritical to some, seems like a case of disparate situations that require a disparate response to others.
By giving the evidence, those who disagree can then point out why they think the situations are disparate. You can then argue they are not (or agree). This allows for (semi-)good debate.
Extremely vague assertions or judgments based on mind-reading don’t allow for good debate.
That’s a very isolated and frankly silly demand for rigour. Trump’s ban on immigration from Muslim countries and lack of a ban on immigration from Norway (clear evidence that “he did/does treat Norwegian migrants better”) should be common knowledge. It’s true that for some reasonable definitions of “racist” you can’t infer from that evidence that Trump is racist, but Frederic’s comment didn’t do that. The only crime he committed was being coarse and not particularly interesting, and thereby failing to meet “kind” and “necessary”.
Nitpick:
Trump talked about a ban on immigration from Muslim countries or by Muslims, but the actual executive order he put out banned immigration from five countries that were already under heavy immigration restrictions because of the risk of terrorists coming in via the immigration system.
Also, Muslim isn’t a race, it’s a religion. For example, a policy of refusing immigration by Iraqi Muslims but permitting it by Iraqi Christians might be unconstitutional, but it wouldn’t be racist.
@rlms
Frederic made a claim that Trump distinguishes between and makes or seeks to make different policy for these three racial groups:
– Brown/black immigrants
– Chinese/Eastern Asian immigrants
– Norwegian immigrants
I’ve never seen this claim before or since, so it’s not a claim that people can be expected to be familiar with. The particular policy you refer to has been argued (with some evidence provided) to be limited to a subset of Muslim countries for legal reasons, whereas Trump would have preferred to extend it to all Muslim countries. However, I’ve never seen anyone provide evidence that Trump’s policy had a racial intent and that he would specifically prefer to restrict migration of brown/black people. So it is far from obvious that the evidence you argue is relevant actually is relevant to the claim at hand.
Another issue with the claim is that it shifts between a racial claim and a nationalist claim. At one end we have skin color (brown/black), then for the second group it shifts to a national/regional group (Chinese/Eastern Asian), and then for the last group it shifts once more, fully to nationality (Norwegian). This doesn’t add up to a coherent claim about Trump’s views or policies. Is the claim that Trump discriminates by skin color, by religion, or by culture?
One way to troll is to play games like this: one writes things that are extremely ambiguous, even to the point of being contradictory, while being suggestive of a certain interpretation. Then when the other person tries to rebut this interpretation with specific objections, one claims being strawmanned, that the other person’s interpretation is due to bias, etc.
Now, the intent of such writing doesn’t have to be trolling, but I would argue in favor of being very intolerant of such sloppy writing, whether intentionally or accidentally so.
This is the non-cw thread, right?
The bolded part is merely the pretextual reason.
The (overwhelmingly likely by Bayesian logic) actual reason is that Trump pre-committed to a ban on Muslims. He then stated that he had banned Muslims.
I would like to reply and justify my claims but 1- I was only citing my case to argument about moderation (and I’ve unwillingly derailed that) and 2- I don’t want to break the no-CW rule on this thread.
Any idea of where on the blog I might reply? Would you gents follow me back to the thread with my initial comment?
The 125.25 thread is up now and permits CW discussion.
HeelBearCub:
Those five nations were already under heavier immigration restrictions from the Obama administration, as I understand it.
@aapje @albatross11 @rlms and whoever else might be interested: It’s done.
See https://slatestarcodex.com/2019/04/10/open-thread-125-25/#comment-740380
First, this is the CW-free thread, so please refrain from insisting that your original insult was valid and witty.
Second, I think the ban had a lot more to do with tone than with content. Trump’s honor isn’t a sacred cow here. Posts that read like low-effort drive-bys are generally not liked, particularly when they come from new posters, who don’t have a record of good contributions to balance against.
Well-said and with gentlemanly clarity.
Had I known that Trump’s honour was a sacred cow on SSC, I would have abstained from making a crude comment about him/his abilities.
It’s not that his honour is a sacred cow, or even about him in particular. It’s that you make that remark, then I call Alexandria Ocasio-Cortez “Horseface”, then everyone else chips in with their favourite insult for their least favourite personality, and we end up like the nastier parts of Reddit.
The best way to avoid all that is not to start in the first place.
I can appreciate a good bit of apophasis, but it tends to be guilty of the sins it decries. CW-free thread, too.
Another issue is that other commenters wouldn’t see, by example, what sorts of culture war comments Scott considers bad.
Is there any reason not to post these content-specific ban notices as the usual bold, red public comment, rather than (or in addition to) in e-mail?
“Let me know what you think.”
My inclination is against.
I’m not in your position and don’t know how serious the threat to the blog is from attackers pointing out reasonable but politically incorrect comments. But since part of what makes the blog so good is the wide range of positions offered, I’m worried about the potential danger of selectively suppressing such.
One problem with the secrecy of the approach you describe is that if you are making decisions which many contributors to the blog would consider misguided, we will never find out about the decisions, so you will never find out our view of them.
I say “good,” but then I’m a “k-line them all, God will sort the banned-“ist in general. I would like to see a more democratic process for this though, where a user who isn’t on the opposite side to the user in question has to say “yeah, that guy has crossed the line,” since it blurs the bright line of whether or not you’re censoring people. Maybe this is impractical.
Honestly, I think it would be valuable even if it were just a public Naughty List; it directly attacks the disciplined user’s status, serves as a “don’t listen to that guy, he doesn’t represent us” to outside observers, and has value even just as an official “hey stop that.” I mean, I learned to drive by always doing the first thing that occurred to me and listening for horns…
I would probably rather be temporarily banned than publicly shamed.
Perhaps the message could be a warning “I am considering banning you based on your contributions on culture war topics”. I feel like that would do the trick somewhere north of 90% of the time, but perhaps I’m easily cowed.
Ditto. I’ve sometimes wondered if I’m crossing the line and a confirmation to that effect would pretty much stop the behaviour in its tracks.
I think the email warning would work (and it would test community response to soft power). I do this with debate students when their arguments get too heated.
An outright ban would work too, but could result in aforementioned headaches, especially since you might feel committed to it once you begin the process.
Start with email warnings. If those don’t work, move to a three month culture war ban system. If that causes too many headaches, we can figure out how to reduce the headache from there.
> Honestly, I think it would be valuable even if it were just a public Naughty List
Let’s not go the public shaming route; that tends to inflame conversation rather than cooling it down.
Which users are banned can/should be public information, but ‘Hey, knock it off’ style warnings are much more effective when given in private.
+1
I just wanted to post my appreciation for the terminology. IRC represent. *fistbump*
What are some things that help people with anhedonia (specifically the emotional blunting kind)? I’ve been trying to treat my own for a long time, with little success. I don’t have much money so things like ECT are out of the question, but any suggestions help.
My main symptom is a general lack of emotionality. I just don’t feel much. The severity of it waxes and wanes, usually being exacerbated by stress, but it persists in one form or another regardless of my life circumstances.
Things I’m interested in trying or have heard helps:
– Parnate (MAOI)
– NSI-189 (Experimental, neurogenic)
– Sarcosine + NAC (Stack, don’t even know)
– Uridine stack (Aka the ‘Mr Happy Stack’, dubious name but pretty common)
– Rexulti/Vraylar (Atypical antipsychotics)
– Rhodiola Rosea (Adaptogen)
– Salvia microdosing (Something something downregulating kappa opioid receptors)
– Curcumin (Anti-inflammatory, antioxidant)
– Meditation (Meditation)
Things I’ve tried:
– Ketamine (Self-administered, helps a bit but is inconsistent)
– Exercise (May help a little but is inconsistent at best)
– Therapy (3 different providers, several years)
– Zoloft
– Wellbutrin SR and XR (Helped a little until it started making me feel bad every time I took it)
– Trintellix
– Celexa
– Remeron
– Mushrooms (Low and high doses)
– Ayahuasca
– Weed
– Lactobacillus Reuteri 6475 yogurt (Oxytocinergic-activity-increasing bacteria I guess)
– Having a girlfriend lol
– A variety of vitamin supplements — Vitamin D, B complex, magnesium, fish oil, vitamin C, folate
– Various other supplements — SAMe, L-methylfolate, inulin, creatine, L-tyrosine, L-Theanine, ashwagandha, Alpha-Lipoic Acid, St. John’s Wort
Been trying to address this issue for quite a few years now, I don’t want to say I’m getting desperate, but I will say my tolerance for risk is steadily increasing. I’ve posted about this a few times already but am always looking for more ideas.
Consider tianeptine for your next pharmaceutical to try—better safety profile than Parnate, and one of the SSC Nootropics surveys ranked it rather highly. The internet consensus is that it’s subtly mood-lifting without the blunting of an SSRI. Anecdata: it helped get me out of a multi-year rut of anhedonia. N=1, so there is a chance it just spontaneously resolved, but people online report similar things. YMMV.
It’s unscheduled in the US (and most other places), so you can buy it off the clearnet easily and cheaply. The mechanism of action might be mu-opioid related. There are some stories out there of people experiencing withdrawal after taking far above the therapeutic dose, but the risks are fairly low otherwise.
As far as the other things, I’d give exercise another shot, especially if medications/supplements help your energy levels enough to do so. If the process of getting ready to exercise outside is too much, a bodyweight routine in your room works. Otherwise, find some kind of cardio that you find intrinsically fun. Anecdata: I also thought exercise didn’t help much, but I realized I just hated running. Bicycling worked much better. I generally prefer strength training for health reasons, but cardio seems better for mood. Bonus points for cardio that lets you see the outdoors.
Also, check on your sleep duration and quality. Going from 4.5hrs a night to 7.5hrs on a regular schedule probably did more for my mood than anything else. Use melatonin as needed to achieve that.
What type of tianeptine is best? There are different salts for sale — sodium, sulfate, etc. Not OP, but I have similar symptoms, as well as motivational issues that make things I can purchase online legally ideal.
Sodium and sulfate are the two major kinds. Sodium had clinical trials run on it and is the one that can actually be prescribed in the EU. Sulfate is supposed to have a longer half-life, avoiding thrice-daily dosing and lowering abuse potential, but I’m not clear that it’s been proven to be as effective as sodium. Last I checked, sulfate was more research chemical than actual pharma.
Prescribing guidelines for sodium are 12.5mg t.i.d., so you can make a solution in water and dose volumetrically if you buy powder. Or if you can find tablets, even better. It’s a shame that it’s become more difficult to source (see belvarine’s reply), but at least it’s still legal in nearly all states, and you can still find reasonably good vendors on the clearnet, though some of them might want to be paid in BTC.
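For what it’s worth, the volumetric-dosing arithmetic mentioned above is simple enough to sketch. This is purely illustrative — the 500 mg / 100 mL batch is a made-up example for the math, not a recommendation:

```python
# Illustrative volumetric-dosing arithmetic only -- not medical advice.
# Given powder dissolved to a known concentration, dose by volume.
def dose_volume_ml(dose_mg: float, solution_mg_per_ml: float) -> float:
    """Volume of solution (mL) containing dose_mg of the substance."""
    return dose_mg / solution_mg_per_ml

# Hypothetical batch: 500 mg of powder dissolved in 100 mL of water
# gives 5 mg/mL, so a 12.5 mg dose measures out to 2.5 mL.
concentration = 500 / 100                    # 5.0 mg/mL
print(dose_volume_ml(12.5, concentration))   # 2.5
```

A cheap 1 mL or 5 mL oral syringe makes measuring that kind of volume much more accurate than eyeballing powder.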
Unfortunately since Michigan scheduled tianeptine, most major payment processors refused to work with vendors selling the substance so it’s very difficult, if not impossible, to obtain tianeptine from reputable online US vendors with adequate quality control these days.
You may be able to discuss a prescription with your psychiatrist, but since tianeptine was discontinued after trials in the US major pharmaceuticals don’t manufacture it over here and you may have a difficult time convincing anyone to prescribe it to you.
Note: None of the above applies to Europe, where you can get a tianeptine prescription.
Tianeptine worries me a bit because it seems like a quintessential “treating the symptoms” kind of thing, given the short half-life (readministration requirement) and potential for abuse. I’m mostly looking for long-term cures.
Sounds like people’s reports on it are very positive though, so I’ll look into it. Thanks!
I’m still exercising, though I do way less cardio than I should. I’ve mostly been sticking to weight lifting recently, I will try to work in more cardio and HIIT if I can though. I hate running, but bicycling is the shit.
Sleep is super important and something I’ve been meaning to look into, since I’ve always needed like 9-10+ hours to feel rested. I don’t know how to ascertain my sleep quality but I think I’m getting enough. I do have issues with insomnia when I’m not taking meds for it though — melatonin as recommended by SSC doesn’t help, sadly. I want to try a weighted blanket soon.
I don’t have anhedonia, but I do sleep 9-10 hours – or did, up until ~3 weeks ago when I started taking, of all things, Allegra. (For an entirely unrelated reason.) So far the best explanation anyone’s given me is “if an antihistamine is helping, it might be sleep apnea” – have you been tested for that? – but honestly I don’t really know why it works, just that it sure seems to. Anyway, this probably won’t help, there are lots of different reasons to sleep 9-10 hours and I gather some people just do, but I figured I’d throw it out there just in case.
(I do not know if it interacts with anything else you’re taking this is not medical advice, sample size is one so take with requisite grain(s) of salt.)
IME weighted blankets help with insomnia but not sleep duration, but the effect varies widely across people. If you sleep better in winter/with heavier blankets and have worse insomnia in summer/with lighter blankets, that is strong evidence that a weighted blanket would help.
Good luck – that’s quite an extensive list. I really hope something works!
I was actually tested for sleep apnea as part of a clinical trial, and I came out clean. I’m also below average in weight and nobody’s told me I snore, so I doubt that’s it.
Although the antihistamine bit is interesting, I take Remeron to help with sleep and that has antihistamine properties iirc. I’ll consider trying some at a later date.
Posting this shortly after having slept for 11 hours… I definitely could’ve slept more. Maybe I should be alarmed that nearly half my life is spent asleep when I can afford to do so.
Thank you for your well wishing, I appreciate it. If I ever find a cure the first thing I’ll do is post about it in an open thread here.
Haven’t tried it, but glycine is worth a look. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3756114/
I find it interesting how the purported effects of glycine are reversing rather exactly the typical civilizational illnesses. Glycine is mostly from collagen, which means eating the less appetizing parts of animals, not just steaks. Could a glycine deficiency play a big part in our civilizational illnesses?
Interesting. I added TMG (Trimethylglycine) to the list. Nice thing about that is it’s a methyl donor, which I guess means that it supports the function of other neurochemical systems in addition to being a source of glycine. Thanks for the tip!
As mentioned above a weighted blanket started a virtuous cycle for me by greatly improving my sleep hygiene.
I’m pretty good about trying supplements one at a time for a couple months at a time, and like most people I agree they’re basically bunk. Some exceptions:
Fish oil does make my incredibly dry skin only very dry
Vitamin D really does make doctors stop saying I should take vitamin D
ZMA before bed occasionally to relax and sleep better and as part of my general curiosity about sex supplements I won’t go into here but can vouch for.
MSM – I didn’t even know what it did; my bodybuilder brother left it behind accidentally, but it had a noticeable effect on my joints and body aches. Basically it made me feel younger and made exercise and life a lot easier. (Full disclosure, I’ve heard this is basically just garlic, but even with my love for the stuff I’ve not been able to recreate the effect naturally.)
Curcumin – a much milder form of above. I think both are anti-inflammatories? I take both now and when I leave for weeks somewhere and don’t take them I notice when I go off.
I do LSD once every 4-6 months and always feel nicely cleaned out emotionally by that. Weed, if anything, exacerbated my anhedonia. I like kratom for a nice light body high that I find centering, but it’s super addictive, so that’ll depend on the addictiveness of your personality. Legal in most states, though. I do it once every other week or thereabouts.
If you do the keto thing you won’t fix anhedonia, but you’ll be almost entirely without blood sugar fluctuations for days, and I found that eerie and novel. I can see why people say it gives them a “clear head.” But I found keto joyless as anything, and it came with the purity-status games all diets do, which I found quite unappealing.
Exercise is obviously objectively good, but I still have no desire to do it, so I do it only out of some sense of obligation. But yeah, obviously exercise; I’m just not the one to say it totally worked for me. Sleep hygiene, though. My god. I worship at the altar of Hypnos now for just how normal good sleep makes me feel. ZMA, no phone before bed, dimming lights, and listening to brown noise and boring audiobooks to drown out the screaming nightmare of a city outside. That and deep breathing, which is halfway to meditation, so maybe that counts too.
I am coming out of it, though. The anhedonia dogged me from before the city, and to my surprise I might even conquer it before I leave, under honestly the worst circumstances for improving mental health outside of San Francisco, on this island the shape of a neurosis.
Weighted blanket is definitely on my list of things to try, though arguably it should be higher as I can never seem to get an adequate amount of sleep.
LSD I doubt will work; serotonergic psychedelics just kind of hurt at high doses. My experiences with shrooms and ayahuasca have made me less than optimistic about the prospects of similar drugs; I don’t seem to derive any lasting insight or emotional refreshment from them. That said, I do have a few tabs which I plan to try eventually.
Keto is interesting, though I doubt it’d help me. I also want to try gluten-free or low-wheat, because why not.
This and the vitamin D thing gave me a chuckle
Thanks for the tips, I think your glowing reviews are enough that I’ll be looking into buying a weighted blanket soon…
Spend more time outside.
From The New York Times today:
Well, if you’re going to play an arcane caster, you should at least learn your spell save DC, the concentration mechanic, and spell components, which a surprising number of people don’t. You can depend on your GM for the more esoteric stuff like casting more than one spell in the same turn and/or round. 🙂
8 players? I roll to disbelieve on how often they are able to get this game happening. I bet this is monthly, if that.
7 players and a DM. I think it’s great that she found an adult table with friends, and it sounds like she liked it, but we can infer:
1) That campaign either rarely met or was willing to meet without 1-3 players depending on schedule.
2) Unless Kate is a very experienced DM, combats took forever and were extremely easy most of the time, with occasional moments of outright deadliness.
3) Unless Kate is a great DM, players spent a lot of time building dice towers.
It says that “we,” i.e., DM Kate and player Annalee, were joined by 7 other friends. That’s 9 total.
Thanks, I wasn’t reading carefully enough.
I’m now trying to visualise the best way to build a tower of d4s. Or how to get a character with better attacks maybe…
This is a known packing problem!
I confess to building a lot of dice towers in my day—I’ve been in one-shots with 12+ players and several of our regular campaigns back in college were 7+ players. I always go cube, octahedron, decahedron, dodecahedron, icosahedron, tetrahedron. Le Maistre Chat and Nornagest may have heard my dice tower fall near the mic last night too, sorry about that. 😀
I’m slowly phasing out my cube dice for Better D6s. Namely dodecahedron D6s. Because they’re more evenly random.
Top to bottom:
d4
d6
d8
d12
d20
d10s aren’t platonic solids, they can go sit off to the side. Unless I’m going for a real challenge, in which case the d10 and d100 go between the d8 and the d12.
I always kind of hated the way tetrahedral dice (don’t) roll. I recently bought a few of these, which would stack reasonably nicely.
https://i.warosu.org/data/tg/img/0491/35/1472881934378.jpg
(There are also eight-sided dice with 1-4 twice, but I like the Roman numerals as an easy way to distinguish them from normal 12-siders.)
OK, so how much does it cost to get a set of those made up with four sides marked “IV” and only two marked “I”, and another set vice versa, and how long will it take for the average GM to note that you’re swapping those dice in and out of service for the really important rolls?
Actually, that would be pretty low-return given how rarely the D4 is used on really important rolls, but now that I think about it a D20 with two “18” faces and no “3” might go unnoticed for quite a while. Fiddling with the “1” or “20” might be more conspicuous.
Note to self: If gaming with John, bring my own dice.
Diplomacy is notably dice-free. Just saying.
I’ve been there and done that, and I don’t feel particularly eager to be backstabbed by you again.
@bean
You should have picked Athena.
…
Just saying…
@John Schilling – dice cheating is very enticing in DnD. It’s not that hard, but if other players and the GM start to notice unusual rolls at particularly important moments (a) they watch pretty closely and (b) they tend to react pretty strongly.
Rigged dice would have the advantage that your good luck wouldn’t be confined to key moments, but my bet is they would raise Bayesian hackles pretty quickly.
Would a smallish advantage like substituting an 18 for a 3 be that likely to get noticed?
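A quick back-of-the-envelope sketch suggests it would, if anyone is tracking rolls at all. The giveaway isn’t the extra 18s — it’s the face that never comes up. This toy calculation (illustrative, assuming observers actually record results) works out how many observed rolls make a missing 3 suspicious:

```python
# A d20 with two 18s and no 3: the telltale sign is the missing face.
# Under a fair d20, P(no 3 in n rolls) = (19/20)**n.
def rolls_until_suspicious(alpha: float) -> int:
    """Smallest n at which 'zero threes observed' has probability < alpha
    under a fair die."""
    n = 1
    while (19 / 20) ** n > alpha:
        n += 1
    return n

# Meanwhile the cheater's per-roll edge is modest:
fair_mean = sum(range(1, 21)) / 20                      # 10.5
rigged_faces = [f for f in range(1, 21) if f != 3] + [18]
rigged_mean = sum(rigged_faces) / 20                    # 11.25

print(rolls_until_suspicious(0.05))  # 59 rolls
print(rolls_until_suspicious(0.01))  # 90 rolls
```

So for an average gain of +0.75 per roll, the swap becomes statistically suspicious within roughly 60–90 observed rolls — an evening or two of play for an attentive GM.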
3-4 players + DM is the sweet spot for D&D. DM Kate could be running two games a week in the same setting for people with different schedules!
I think four players is ideal, and I’d hard cap at six personally.
This. After 4 it becomes exponentially harder to give everyone a reasonable amount of your attention. After 6 you have at least two players consistently checked out at all times. Unless you’re running a larp, in which case none of these apply, but instead you get cliques of 4-6 people.
@Nick: If I were lucky enough to have seven players, I wouldn’t kick someone out if they all managed to show up for a session. I would not, however, build a group that big who normally have compatible schedules!
@Le Maistre Chat: Yeah, I think it’s a better idea to run multiple campaigns rather than a single large one. Gives players more variety too, and it cuts down on the eternal problem of having too many books and systems to run.
You could kick it old school and run two parallel campaigns in the same setting, with the two groups as rivals.
I’ve always kind of wanted to see the Head of Vecna incident live.
4 players is ideal. 3 players is playable, 5 is really pushing it. 6 or 2 are right out.
@Nornagest:
Yes, that’s what I want to do.
Absent a really good DM, I’d take five over three in any edition of D&D. Three players is probably optimal for keeping combat flowing, but most adventures aren’t written to deal with a party that’s missing one of the four basic classes or a close equivalent — take out the thief and you can’t deal with locks or traps, take out the mage and you don’t have battlefield control. Old school is actually more flexible, partly because you can fill the gaps with henchmen and hirelings and partly because you can usually get away with being more creative, but it bogs down less in the first place so it can handle a larger table more easily.
I’ve seen other games successfully run with a lot more than four players. Paranoia plays well all the way up to eight or so, but that’s partly because a third of them will be dead at any given time.
@Nornagest: You know I like giving each Old School D&D player 2 characters, to keep each player engaged in case their PC dies. 3.0 codified this as the Leadership feat, but using Leadership at even a 3-player table became horrifically tedious. In my ACKS game, we could probably recruit 2 more players who control 2 characters each before combat took as long as my small 3.x table did.
In 5E, it’s almost impossible for PCs to die, so 4 players with tank-healer-wizard-rogue is close to ideal.
For the unfamiliar: Nornagest plays in my Adventurer, Conqueror, King (Basic/Expert with more classes, more economics & D20 Thief skills, basically) Discord game as dual tanks, Nick is a Cleric with an attack dog and our third player covers both the Mage and Thief PCs.
I have no data, just anecdotes, but it feels like D&D podcasts have reeled in a ton of new players and 5e’s streamlined rules have helped keep some of them in the hobby.
That said, the rules are still very intimidating to a lot of would-be players. I’ve mostly been recruiting graduate students and FAANG employees for my games and a lot of them still struggle to keep track of their bonuses or spell slots. These aren’t dummies but while the book-keeping is greatly reduced from 3.X it’s still a huge challenge for new players.
I know this will never happen, but I would kill for a Basic line for 5e. The starter rules don’t really qualify: they’re just as fiddly in play as the full game, only with fewer options. If there was a simpler version with less of what my girlfriend calls “secret math,” it would help to ease people in and reduce attrition.
I made some 1-2 page “getting started” sheets for each of the players in my daughter’s Level 3 one-shot, and they went over well. I had two sections:
1) A brief explanation of a player’s action economy – actions, bonus, movement, and reactions.
2) A list of what that character could do during each phase, with some quick reference stats for each choice.
If I did it over, I’d add a section with a couple sentences about role playing and a quick rundown of their stats and proficiencies, plus maybe background and some backstory.
If you still have them and feel comfortable putting them up somewhere I’d love to see them.
I’ll see if I can find them tonight and reach out.
Well, now that I found them, they’re definitely a first draft. I threw them together an hour ahead of a one-shot.
Nate’s sheet is for someone who never played before – we gave him a level 3 dragonborn fighter (champion), and the sheet explains the combat action economy and what his character can do in each phase of a round.
The other sheets are for other level 3 characters whose players had played before – I didn’t explain the action economy, but just summarized most of their combat options.
As I said, if I had it to do over again, I’d probably make 3 clean one page sheets.
1) Explaining out of combat role play and highlighting where the characters abilities and skills were strong and weak, plus maybe some backstory prompts.
2) Explaining the in-combat action economy.
3) Explaining that character’s options during each phase of combat.
I would also be interested in seeing these. I have some newbies to induct and I’ve never DMed so I’m looking for many diverse opinions on what information new players should see.
Thanks, I’ll try to find them.
So as not to oversell, I was trying to solve a specific problem, which is that players in one-shots often have no idea what their options are during combat, and fall back to “I cast that one cantrip again” or “I swing my sword,” so the guides are really focused on the specific combat mechanics of a level 3 druid, assassin, champion, etc., and not on how to role play, but I bet you could make a quick page on that too.
The B/X or BECMI rules from the ’80s are still really great for this. Look up the retroclone Dark Dungeons for a free version.
I mean if I want to run Rules Cyclopedia I’ll just run it, the only change that I noticed in Dark Dungeons is converting attack tables to BAB.
But yeah, I’d like to see an updated version of Basic which incorporates some of the things 5e got right, most notably the Advantage / Disadvantage mechanic. I’ve played around with the idea of making one and nailed down the basic mechanics but my attempt at writing it died when I realized that nobody would ever actually play it.
What, you can’t just force rules you wrote on your group? I’ve successfully done that. So far, going well.
I am with you on the ‘podcasts/streams lured in a new generation’ thing.
I think it is the fact that the saved media makes it impossible to not notice that these people are actually having fun. RPGs are fun, and people like fun. Then you get the ‘They ought to have…’ effect, and before you know it a game is starting up.
If parents of the ’80s had anticipated the unholy terror of social media, they would have been pushing D&D on their children left and right instead of banning ‘those evil books’.
Well, I’m envious of Annalee Newitz for getting to play a form of D&D again. I got a little bit in (besides play-by-post, which is a poor substitute) after my older son turned the age I was in ’78 when I started playing, but before my younger son was born – if I were to try any gaming now I’d wind up divorced or murdered.
I still buy, read a bit, and then hide game books – but that’s not the same.
In the Judgment of Paris, Paris was forced to declare Hera, Aphrodite, or Athena the fairest goddess. Naturally, being Greek Goddesses, they tried to bribe him. Hera offered to make him ruler of all Asia. Athena offered to make him the wisest and most skilled man, in both peace and war, in all the world. And Aphrodite offered him the truest love of the most beautiful woman in the world.
Which would you choose? Why?
Edit: For the female-identifying out there (or otherwise not interested in women), you can of course substitute your preference for ‘most beautiful woman’.
“Most beautiful” is underspecified. Unlike the prototypical Greek man, I’m looking for a wife who’s got beautiful character more than one who’s got beautiful appearance. So Aphrodite’s out. That leaves Hera and Athena… Hera’s offer sounds interesting, but I’d probably need Athena’s offer to do much useful with it. And what’s more, I remember what happened to one guy who just might’ve taken Hera up on the deal.
So Athena it is. And for that matter, I’d say she really is “the fairest” in the other sense of the word.
Some say a throng of horsemen, others infantry,
others a fleet is the most beautiful thing
on this dark earth, but I say it is
whatever you love.
Wisest and most skilled, because then you are most skilled at conquering the world and at wooing the most beautiful women. So it is the superior option.
Being made ruler doesn’t mean that you will be granted the skill to rule. If you are just parachuted into the position, there is a good chance that it doesn’t last long before you are figuratively or literally stabbed in the back.
I was going to post a similar reply, but you beat me to it.
Additionally, this “ruler of all Asia” proposition reminds me of Lex Luthor’s line from the JLA cartoon: “President? Are you kidding? Do you realize how much power I’d have to give up just to be President?”
There is essentially zero probability that anyone in any time period could ever conquer all of Asia, no matter their qualities. And the contemporary world puts many more constraints on conquering your weaker neighbors than most of world history did.
Athena’s bribe seems obviously the best (like choosing the orange pill in Scott’s story), but of course I would choose whichever of them was the fairest.
There has to be a catch to Athena’s bribe, otherwise you’d be skilled enough in war to conquer all Asia, and skilled enough in peace to rule it, and being so skilled in general would mean that you wouldn’t have much trouble attracting beautiful women, whether or not they are the most beautiful, or whether or not they can offer you the truest love. There has to be a catch, otherwise you’d obviously pick the option that makes you the best at everything.
This. Athena is as petty as the rest of that sorry bunch.
Being most skilled in war doesn’t make you skilled enough to actually conquer all Asia. Presumably the second-most-skilled person will beat you if you have a bad day, or if his army is just better or larger.
And where are you getting an army from anyway? Sure, you could start building one from scratch using your skill in peace, but that’s a lengthy process and there are lots of chances for things to go fatally wrong along the way.
@Chalid
I’m going with the other commenters who point out that receiving Asia by fiat doesn’t mean you are able to keep it. So although being the most skilled doesn’t mean you automatically get to conquer Asia, all other things (other armies being bigger, weather, geopolitics, etc.) being equal, you’ve got the best chance as far as things you can actually control if you are the most skilled. You raise a good point, though: although any given commenter here plonked in charge of a continent would surely screw things up, if you had pre-existing leadership skill sufficient to rule large territories, then taking a shortcut past the military conquest part may be the wiser option.
On the other hand, we could question whether we even want all of Asia. Besides bragging rights, or the desire to enact your morality in the world, the main upside of having all of Asia would be to live a lavish lifestyle with vast resources at your command. However, if you were the most skilled person on the planet then you needn’t bother with becoming a King or Emperor to have sufficient access to that lifestyle, as there are many ways you could use your skills to make vast wealth even as a lesser lord. You could invent loads of amazing devices unknown to the ancient world and have Kings and Emperors as your benefactors instead.
I think the catch goes away if you consider the goddesses will support Paris if chosen.
So the decision really comes down to Hera (wife of Zeus, the main god) or Athena (daughter of Zeus and carrier of the Aegis and other symbols, and friend of Nike, goddess of Victory). Aphrodite is clearly the worst option.
Honestly I’d throw in my lot with Athena. Hera’s relationship with Zeus is fragile, and in whatever conflict you end up in, Hera may not be on Zeus’s side anyway – or if she is, the alliance may be subject to betrayals.
Athena. Whichever you pick, there’s going to be trouble from the other two. Skill in war would be remarkably handy in handling that trouble.
I am pro-truth, so obviously Aphrodite. Surely the goddess of beauty and sex is objectively the most beautiful, whatever “objectively” means in this context.
As for the rewards, Athene.
Dunno if it follows. Her brother Ares, the god of war, gets his ass handed to him by Athena in the Iliad, so dunno if they are guaranteed to be supreme in their respective domains.
Athena was also a goddess of war, though. Plus other stuff.
Only making Athena the better choice here.
Which definition of Asia are we going with?
I assume it’s Asia Major and Minor (in the classic meaning), which IIRC is modern Turkey and Syria, maybe a few countries over. Iran is probably too much.
Given that Paris was secretly Priam’s son, and Troy was in Turkey, I take this to mean Paris takes over Priam’s throne and goes on to conquer the rest of Turkey.
e: checking, Hera offers dominion over Europe and Asia Minor. Yeah I am gonna take a pass on that. Sounds like asking for trouble.
I had no idea that anyone ever used the phrase “Asia Major.” I assumed that “Asia Minor” meant “the small thing that earlier peoples meant by Asia.”
I can’t find anyone saying that Asia Major meant the Levant. Some sources say that Asia Major meant Mesopotamia or Persia or both. Would Mesopotamia really include the Levant? I guess Mesopotamia+Persia is big enough that you might just throw in the Levant. (These sources don’t make clear who uses it this way.)
This book says that it was first used in the 4th century and meant all of Asia, except Asia Minor. Here’s his list: “Sarmatia Asiatica with all the Scythian tribes, Colchis, Iberia, Albania, Armenia, Assyria, Babylonia, Media, Susiana, Persis, Ariana, Hyrcania, Margiana, Bactriana, Sogdiana, India, and the country of the Sinae and Serica.” The Levant is a glaring omission. I guess he must include it in Assyria, which can mean too many things (as the author notes!), which is a good reason not to use it in lists like this. [Maybe I shouldn’t have assumed that you meant the Levant by “Syria.”]
I’m pretty sure that in the preclassical period Asia meant Anatolia. In the classical period some people extended it to something bigger, but there was a big range of usage. “Major” was never used until late antiquity and was never popular.
Thanks
The Ancient Greek definition which either meant all of Asia today (including the northern steppes, China, and India) or the area from Turkey to Persia to Arabia, though explicitly not including Egypt or any part of Africa.
Another way to think of it: The Persian Empire minus its Greek and Egyptian territories. Or a third to half of the world in the classical Greek view.
The three finalists are disqualified for attempted bribery. I’m keeping the apple.
They all play favorites (though Aphrodite’s kind of a flake). It’s probably most faithful to the original to take Aphrodite as offering the most desirable woman in the world, but her offer still isn’t worth it; Athena and Hera are vengeful and will screw you over for not picking them, and Aphrodite can’t protect you (which is of course the fate of Paris when he makes that choice in the original story). But if you choose Hera, you can probably count on her taking care of you; you don’t have to worry about getting overthrown the next day when you have the blessing of the queen of the gods. You can count on your love life sucking, and having Athena out to get you is going to be annoying as well, but that’s probably more survivable than having both Athena and Hera working together against you. Having said that, in the end I pick Athena. With the queen of the gods actively undermining you, your prospects for success as a conqueror aren’t as good as some are suggesting, and your love life will still suck. But you’ll have the wisdom to make good use of what you can get.
On the bribery/corruption angle, it occurs to me that if you are supposed to be judging the nature of the goddess, rather than being completely shallow, all of the bribes are tied to their natures. So perhaps it isn’t unfair of them to be offering those bribes.
By “Asia” did Hera mean the continent or “Asia Minor”?
With the benefit of “hindsight”, it seems like the catch here is that whichever goddess you choose will want you to be legendary for her gifts, which makes it into a bit of a curse (see also the picnic that being the Chosen People of Yahweh turned out to be). Paris choosing Aphrodite means that not only does Helen love him, but a war is fought over Helen’s beauty, which does not end well for Paris. Odysseus being the favorite of Athena means that he gets endless opportunities to demonstrate his ability to overcome challenges (it’s amazing how often he just straight-up cries with self-pity in the Odyssey). Being famous for ruling all of Asia at least means you won’t rule it only nominally and briefly–either you’re famous for your skill in conquering it, or for your longevity or choices as a ruler.
Honestly I’m probably screwed no matter what; my best chance of getting out in one piece might be to declare that they are all the fairest because beauty is subjective, and “prove” it by finding three other people able to sincerely declare that each goddess is fairest. Then throw myself on the mercy of the contestants.
“I cannot tell, you are all wonderful, I have to go now”.
I can’t speak from experience whether this works with vain, vengeful immortals, but it kinda works with real people like e.g. women choosing dresses for a wedding.
I’d pick Hera’s offer. It’s the sure bet, assuming that I’m already marginally competent.
I don’t want to pick Aphrodite. The most beautiful woman in the world is nice, but not super important. Anyone in the top .5% is probably good enough to make me happy. In addition, ancient societies were either polygamous or accepted concubines and she only offered me one.
Athena’s offer is tempting, but loses out to Hera’s. Being the wisest and most skilled doesn’t assure victory. Victory also requires starting resources and position, and those are Hera’s domain. I’d rather trust my existing moderate skills with great resources than have great skill and lack resources.
In addition, Hera is the most vindictive of the three goddesses and the one I’m most afraid of upsetting.
Athena is extremely vindictive. I suppose there are more stories about Hera exercising that trait, but I submit that it is more because people mostly knew better than to screw with Athena, while in Hera’s case there was at least one person (Zeus, obviously) who just wasn’t scared of pissing her off.
None of the Greek gods are nice. Obviously I’d be sacrificing to all three of them afterwards.
I remember in Cryptonomicon, it was stated that Athena was less of a shithead than your typical Greek god. Did Stephenson mess up (or just not care)?
Athena’s vindictiveness, AFAIK, seems pretty narrowly targeted to people who were party to what were, in the context of that culture, fairly major insults against her.
Specifically, Arachne persistently bragged of being a better weaver than Athena (who had weaving as part of her goddess portfolio). Athena responded first by warning Arachne (in human guise) that she was offending against a major god and urging her to withdraw the insult; when Arachne refused, Athena challenged her to a weaving contest to prove her boast. Arachne’s entry in the contest was a tapestry depicting all the jackass things the various Olympians did to mortals (effectively doubling down on her “insulting the gods” motif), and only then did Athena lash out at her (driving her to commit suicide, then reviving her and turning her into a spider).
The Medusa story is particularly harsh to modern readers, since we read Medusa as an innocent victim who shouldn’t be blamed for her unwilling participation in Poseidon’s act of defiling one of Athena’s temples. The classical Greeks, however, would probably have agreed with Athena that, willing or no, Medusa was part of the defiling of the temple and thus catches a share of the punishment for it (the whole punishment, in this case, since Athena wasn’t in a position to punish Poseidon).
In both cases, she comes off badly to modern eyes largely because our norms of honor and morality have moved on quite a bit from those of Greek and Roman writers and audiences 2000-3000 years ago. And even by modern standards, she compares favorably to the likes of Zeus or Ares, whose misdeeds are a lot more numerous and arbitrary than Athena’s.
Athena for the reasons stated. I would find being wise and skilled very congenial, I could probably accomplish a lot of good, and wisdom and skill would be most helpful in responding to the blowback from choosing any one of the three.
As a side note, does anybody think that Athena’s offer is out of character? She should have known that no man would choose wisdom if love is an alternative.
There is this joke that says “God was unjust with his distribution of talents except for intelligence, because everybody is happy with his/hers.”
Does anybody know a story/myth/fairytale where the protagonist actually wishes for more intelligence or wisdom?
P.S.: In case you, dear reader, are an exception, please substitute “no man would” with “most people wouldn’t” etc.
Well, there’s the classic example.
Didn’t Odin sacrifice his eye for wisdom?
It’s ambiguous. Some versions of the story make it sound like he got more knowledge out of the deal, not necessarily more wisdom; it’s usually how he learns the runes, for example. The Hávamál is one of the clearer versions:
Hanging on a tree, wounded with a spear, as a sacrifice to himself.
Is Odin what happens when a Player Character tries to game the same rules Christ followed?
It’s been pointed out before. And it’s probably not a coincidence. But it’s hard to tell exactly how much Christian influence there is in our sources for Norse paganism, since most of them were written down centuries after the fact by Christians, and even the stuff that wasn’t was written long after contact with Christianity (and even with Islam, by way of the Varangians and others).
Baldr gets a lot of attention for this, too — a son of the chief god, associated with light and spring, conspicuously good-natured among the grim and vicious Norse pantheon, killed before his time by malice and treachery, who will return after the end of the world and usher in a new golden age.
There’s a lot to unpack here, and almost no evidence to do it with.
It’s hard to tell exactly how much Christian influence there is in the Norse material, since it was written down so late.
How similar was the Woden of the 90s AD, when Tacitus mentions him as chief Germanic god, to the character of Odin we have written down?
How long before the 90s AD did Woden replace Tues as the head of the Germanic pantheon?
There seem to be some differences between the pantheon we see in Tacitus’s Germania and the one we see in the late Norse material, but that doesn’t necessarily mean much — for one thing, Tacitus took a characteristically Roman syncretic view of the Germanic pantheon (he refers to Wotan as Mercury and Ziu [Týr] as Mars; his Hercules is probably Donar, and he also mentions Isis, whose identity is anyone’s guess). For another, he doesn’t go into much detail about actual beliefs, spending more time on ritual, which he probably considered more significant. And finally, he’s not talking about quite the same ethnic group, and pre-Christian religion in most places showed quite a bit of regional variation.
As to when Woden became the chief god, about all we have to go on there is placenames, and those are tough to date. Dedications to Týr (under various names) appear throughout Scandinavia and the British Isles, though.
I pick Aphrodite. Ruling Asia sounds like a big headache, and as an engineer I don’t have much use for skill “in peace and war.”
The one with the biggest tits.
(Wait, that’s a different joke.)
In some versions Paris made them strip naked and apparently couldn’t make up his mind.
Wikipedia has a huge gallery on that subject.
“This is one of the very few versions in which all three goddesses are fully clothed.”
We know picking Aphrodite turned out badly. Hera is infamous for her jealousy and temper; making a deal with her is sure to end badly sooner rather than later, when she detects some real or imagined slight. And we’re a genre-savvy people now; we know this is a triple-bind; picking Athena will work out about as long as it takes Hera and Aphrodite to gang up on us.
So “Paris? No, no, I’m London. Paris is downstairs, I’ll go get him for you. Bye!” (Then I change my name to “York” or “Berlin”)
If your name is Stockholm you’re just screwed. But if your name is Lima maybe they’ll take mercy on you?
If his parents had dubbed that boy Stockholm instead of Paris, someone would have raptio-ed him, rather than him taking Helen.
I would shock everyone by offering the Apple to Persephone. Then, when any/all of the 3 goddesses contrived to smite me, I’d maybe get a good deal in the afterlife. Maybe not the Elysian fields, but we could have tea and pomegranates every once in a while or something.
To take your request seriously, Hera because she’s the closest to Zeus and seems like you’d want to be on his good side. Play up that only she’s fit for him etc etc. Try and play both sides. Get Asia, do my best to institute meritocracy and order and squishy enlightenment values and maybe a land value tax while I had divine support and then when I felt the tide turn convert to a Hellenic-Judeo-Zoroastrianism of my own devising and get myself smote. See how long it lasts by asking people as they show up in the afterlife. Like playing a game of civ, basically.
Gotta keep in mind that the whole Trojan War is a long play by Zeus to get rid of demigods and overpopulation.
You may want to play along and send as many demigods against each other in the most balanced setups to maximize glorious deaths and avoid having Zeus concoct a huge war on your doorstep.
If anything else fails, just organize a huge demigod single elimination tournament. You are guaranteed to end up with a single demigod, and you can poison him or something.
We have a disproportionate number of coders here. Who will code this fighting game?
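Since the thread is half coders anyway, here’s a toy sketch of the guarantee mentioned above – a single-elimination bracket with n entrants always resolves in exactly n-1 bouts and leaves exactly one survivor, byes or no. The names and the pick-a-winner-at-random rule are my own invention for illustration, not anyone’s actual game:

```python
import random

def single_elimination(entrants, seed=0):
    """Run a single-elimination bracket with randomly chosen winners.

    Returns (champion, number_of_bouts). Odd entrants get a bye and
    carry forward; n entrants always produce exactly n - 1 bouts.
    """
    rng = random.Random(seed)
    pool = list(entrants)
    bouts = 0
    while len(pool) > 1:
        rng.shuffle(pool)
        next_round = []
        # An odd fighter out sits this round with a bye.
        if len(pool) % 2:
            next_round.append(pool.pop())
        # Pair off the rest; each bout eliminates exactly one fighter.
        for a, b in zip(pool[::2], pool[1::2]):
            next_round.append(rng.choice((a, b)))
            bouts += 1
        pool = next_round
    return pool[0], bouts

demigods = ["Achilles", "Herakles", "Perseus", "Theseus", "Aeneas"]
champ, bouts = single_elimination(demigods)
# 5 entrants -> exactly 4 bouts and a single champion left for Zeus.
```

The invariant holds because every bout removes exactly one entrant, so getting from n down to 1 takes n-1 bouts regardless of how the byes fall.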
It’s probably a bad idea. A few years ago (a decade?) Tecmo Koei used their know-how from battle games about Chinese and Japanese dynasties to make Troy: Legends of War or something like that – a game about the Trojan War where you battle thousands of enemies. It super flopped.
For whatever reason, the occidental world does not see much value in the Iliad/Odyssey beyond the Brad Pitt movie. Netflix had a series adaptation a couple of years ago, which made waves by casting Achilles, Zeus, and a couple of other characters as black. That aspect was fine, but the series itself fell flat in some other respects.
This is where I make some comment about Confucianism, but maybe the Iliad is just too alien for us now. And there’s the issue of copyright, where companies will not be as easily compelled to invest in characters they don’t own (Here’s where people mention stuff like Sherlock Holmes). It’s probable that in the modern world, feminism has made Homer less appealing, where women are mostly trophies and at best they remain at home and trick suitors. Or are Gods I guess.
Or maybe the Iliad is just too short, and companies instead need Marvel sized universes to invest in. Of course, mythology is a lot bigger than Homer, but at some point you run out of mythic stories to tell.
It’s really weird, because I saw some data suggesting that fantasy is a better genre for videogames than sci-fi, not only in the West but in the East too. It seems to be easier for people to get into. Maybe Greek myth somehow does not fit into the “fantasy” category, and only Tolkien derivatives do.
My guess is that the Trojan War isn’t good-and-evil enough to work with modern storytelling sensibilities. The grandeur of the big battle is at odds with how everyone involved is a selfish ass (except Hector), and characters that are closer to traditional heroes in other parts of their own history are pretty much just selfish asses in the context of the Trojan War.
There are certainly stories (movies/TV/games) that can work with a “this is a genre soap opera” setup, but they don’t also combine that with a big city siege and battle. See all of the Rome-based shows that have been popular, rooted in all of the backstabbing politics.
The Aeneid and Odyssey are much more protagonist/antagonist based, our heroes against an obstacle, a clear defined desirable ending, as well as being journey narratives instead of a battle narrative, so they fare better.
Btw, that swipe against feminism was entirely unnecessary. People were fine with the recent God of War: Parenting Edition and AC: Odyssey, and Xena is still beloved. There’s nothing inherent to the sword-and-sandal genre that feminists dislike.
The 2004 movie version did a fairly good job, and was fairly well received IIRC, with a bunch of selfish asses all around and the primary conflict between Achilles as a sympathetic selfish ass and Agamemnon as a decidedly unsympathetic one. Hector, Priam, and Odysseus were the only generally selfless and honorable ones.
What they did away with, and what probably doesn’t work with modern sensibilities, was the part where the Gods A: existed and B: were petty enough to use mortals as their proxies in their Olympian rivalries.
There are two not-too-long-ago TV series based on a time “when the ancient gods were petty and cruel, and they plagued mankind with suffering,” but their Trojan War take was pretty far off.
Fair enough, and I suppose two generations’ worth of “Clash of the Titans” fits into that paradigm as well. I think the difference is that we now tend to split “myths and legends” into one category of half-forgotten history and a separate category of made-up fantasy, and since Schliemann, the Trojan War gets put into the former category.
Yeah, “the Powers that Be are selfish asses” is actually a pretty popular trope nowadays, but the difference is that they are then contrasted and pitted against our genuinely heroic protagonists, who chafe at being pawns of the petty divine.
The difference between the 1981 Clash of the Titans and the 2010 remake is pretty enlightening. The former does feature the gods being petty and Perseus just kind of going along with it, while the 2010 makes things much more starkly good-and-evil.
But you can’t make the Trojan War into Star Wars.
Just realized, though:
Fate Stay What?
@John Schilling
I really dislike this modern tendency of subtracting the mythical elements of mythological stories. I want an ancient tale to feel alien, and part of the reason for their morality feeling so different is that they believed there were squabbling supernatural agencies behind the fabric of the world, and that you could gain the favor of one or the ire of another. Without that it becomes sterile and I’d rather just watch a movie with a modern setting where characters do things because “it’s the right thing to do”.
I also think that religious motivations for things are underplayed in movies set in the middle ages. There’ll be some window dressing of religion, but they’ll always translate it into secular logic in order that we think the hero’s and villain’s motivations make sense.
@AG
Not taking a swipe at feminism, but at Homer. You gotta admit the Iliad ain’t the most feminist of stories. Chapters can go by without a woman appearing. The most proactive woman is Hera, who takes matters into her own hands by…seducing her husband. Modern adaptations that remove the gods have to build up the role of the women in the saga (in the Brad Pitt movie, Briseis gets a ton more screentime, and the BBC series gives Helen a ton more to do and brings in the Amazons; the BBC series has gods, but their role is still small). The Odyssey fares similarly.
I don’t doubt that there are great sword and sandal/mythology stories with tons of great women out there, but when talking about Homer in particular, the record is not that good.
AFAIK Fate/stay night centers on Arthurian legend. I suspect part of the reason fantasy leaves out Greek mythology is that Britain has its own legendarium in the Arthurian stories, which in time evolved into the Tolkien/Howard stories, which form the base of modern fantasy – and that makes Greek legends less attractive. British stories were inherited by America, which makes most of the media we consume nowadays. As for the rest of Europe, I don’t know. I suspect that Spain got the short end of the stick, because Don Quijote basically killed the romantic knight genre in the language, with nothing to replace it.
@JPNunez
The Fate champions come from all mythologies, as well as historical figures. Arthur is one of the protagonists, but other notable characters include Alexander the Great, Gilgamesh, Medusa, Medea, and Herakles.
And that doesn’t even get into the spinoffs.
And Greek/Roman mythology still has a fair foothold in pop culture. Most notably, you have the Percy Jackson novels, but also the whole Wonder Woman section of DC. Most supernatural genre shows (Buffy et al.) tend to reference them too, with Lost Girl outright featuring the Greek gods in the flesh. King Arthur is not a part of the SHAZAM acronym.
And even medieval fantasy tends to steal the gladiator concept for their world-building, as well as tending more towards the Greco-Roman pantheons first, than Norse/Egyptian/Asian.
In literature, my impression is that swords-and-sandals fantasy is more likely to be written by lady authors, for whatever reason. So you’ll find more examples in YA, again because of the popularity of the gladiator concept.
Also, the Iliad and Odyssey are still fairly popular as fodder for non-mythological modern retellings (1, 2, 3). So people still like it as a story, less as fantasy.
No such resurgence for Jason and the Argonauts, though. That one seems to have really fallen through the cracks.
This thread is hitting some of my weirdly specific buttons.
@ JPNunez:
I can and have written long-form defenses of Homer as a promoter of strong female figures. While it is definitely true that the gender roles are not symmetrical and that there are plenty of disposable women, there are plenty of disposable men as well and it’s not obvious that the asymmetry is endorsed. Also, Athena.
The Odyssey includes several very blatant shots at the Patriarch(y). Calypso in 5:129-160 is the most direct, but I’m a fan of the depiction of Arete.
@ AG:
The Grail Wars and Servant system in general are unsuited to actually lowering the population of demigods, since it only pulls copies from the historical record* and can’t actually result in heroes being erased from that same record**.
* Exception: Alaya is a cheating bastard.
** Exception: Ab, frevbhfyl, vg’f n fcbvyre.
Hell, if anything, I’d try and keep up a good correspondence with Hera. “Hey how’s the husband? Run off again? If that lady comes into my Kingdom I’ll make sure to have her killed. Oh what’s that, people aren’t burning the right part of the cow? How many squab are equivalent to a cow? Too much god blood distracting people from sacrifices and the laws of hospitality? People these days!” then quietly try and align my Asian Empire to fly under the radar of the Gods.
Admittedly, that’s when having Athena’s wisdom would help, since Greek Mythology is not brimming with evidence of how humans and gods can totally work things out to their mutual benefit.
Presumably, Athena’s blessing includes being the world’s greatest philosopher, which seems like the best route for attaining happiness in this life and enduring fame in history, a la Plato or Socrates. More people can identify Socrates correctly than Mithradates, after all.
This. If I were a guy in Paris’s place, I’d choose the only goddess offering to make me a better person.
I’m not sure this is comparing like to like. Socrates is probably more comparable to Alexander the Great than Mithradates. And I suspect there’s closer parity there.
Athena’s gift is more broadly valuable, and can be pressed into service as a partial substitute for the other two. Also, Athena is a more reliable patron which, coupled with the use of her gift, would seem to offer better odds of surviving the attentions of the other two vengeful goddesses than any other combination. But I think I will be trying to keep a low profile going forward, rather than trying to conquer Asiatic domains and woo Spartan princesses through my augmented skills.
A tactful “none of the above” would be the safe answer, but there’s no fun in that and not much chance of anyone remembering my name in ten thousand years.
Athena.
I don’t want to rule things, and beauty is not very high up on my list of desiderata for a partner.
If we’re going by the rewards, and not the personalities of the givers, I’d go with Aphrodite, then Athena, then Hera.
Assuming I was not already married, in which case I’d cross Aphrodite off the list. And also assuming that the truest love of the most beautiful woman means an enduring nurturing love and not simply the most intense transient lust.
I’m wise enough to rule my own life (not that I’d turn down more wisdom if offered, of course) and have no ambitions at ruling Asia. Reliable romantic companionship adds the most to my life.
You don’t take the bribe.
That’s why Paris is punished with tragedy and ruin. Because when asked to judge a contest by the gods themselves, because of his reputation for fairness in a previous contest involving gods, he took a bribe.
You need not judge the contest. Zeus refused to, and as a mere mortal you can certainly take your cue from him. Or you can judge the contest by whatever standard you want. Just do not take the bribe. Make a clean decision, uninfluenced by the gifts. Preferably find a polite way to refuse the gifts before making the decision.
Personally, based on what I’ve seen of statues, I’d pick Athena. And that’s probably the gift I’d pick, too. But picking any of the gifts will lead you to ruin. Refuse the gifts, then judge.
Athena. I might not conquer anything, I might not get the most beautiful woman, but if I am the wisest and most skilled at whatever I attempt (“in both peace and war”) I would probably have a pretty nice life.
Seems like if I were the wisest and most skilled, I could conquer Asia if I wanted, and if I wasn’t wise and skilled, then I’d have trouble keeping it. Ditto wooing the most beautiful woman in the world, but that’s a close-run second.
“I define the fairest goddess as the one most capable of protecting me from the other goddesses”. Add enough reasoning that it doesn’t sound as blunt as that.
Honestly, Athena and Hera are probably close enough to a tie in that department that you need a tiebreaker. Both are quite protective of their own, and while Athena is personally more formidable, Hera has better political connections. And similarly Athena’s gift makes you good at protecting yourself and Hera’s gift gives you lots of minions to protect you. I don’t see an easy answer as to which will end up protecting you better (maybe just go with whichever style you’re more comfortable with). But, yes, this is the reason Paris was an idiot; Aphrodite is the blazingly obvious wrong choice.
Are Greek Gods deterred by bodyguards? It seems to me that individual skill or getting help is key.
You don’t send minions against Medusa, but use a mirror.
Scott,
Shouldn’t you note that this thread is culture-war free?
There’s something I’m confused about regarding Google Stadia and cloud gaming more generally.
As a non-gaming-pc-haver, I’m intrigued by the prospect of cloud gaming but worried the input latency will be unplayably high. In connection with this, people often claim that the speed of light on its own guarantees that the latency will be too high, but as far as I can tell this is just wrong.
60 frames per second is widely considered good enough for the vast majority of even hardcore games (competitive shooters are the main exception). That comes out to roughly a frame every 17 milliseconds. So for the speed of light on its own to impose less than a frame of latency, the light must travel from your computer to the server and back within 17 ms.
In 17 ms, light can travel over 3000 miles. Apparently light only moves ~2/3 its vacuum speed in fiber optic cables, so we can lower that to 2000 miles. This means that Google can place a server 1000 miles away from you and the speed of light on its own will impose less than a single frame of latency. And obviously it’s well within Google’s abilities to create enough servers to ensure that 90% of Americans are way closer to a server than that.
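The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope check only; the speed-of-light constant is real, but the 2/3 fiber fraction and 60 fps budget are the same rough assumptions used in the paragraph:

```python
# Rough speed-of-light latency budget for cloud gaming.
# Assumptions: signal travels at ~2/3 c in fiber, and we allow
# one 60 fps frame (~16.7 ms) of round-trip travel time.

C_MILES_PER_MS = 186.3        # speed of light: ~186,282 miles/s
FIBER_FRACTION = 2 / 3        # light in fiber is roughly 2/3 of vacuum speed
FRAME_BUDGET_MS = 1000 / 60   # one frame at 60 fps

round_trip_miles = C_MILES_PER_MS * FIBER_FRACTION * FRAME_BUDGET_MS
one_way_miles = round_trip_miles / 2

print(f"round trip within one frame: ~{round_trip_miles:.0f} miles")
print(f"max server distance:         ~{one_way_miles:.0f} miles")
```

This lands at roughly 2000 miles round trip, i.e. a server about 1000 miles away, matching the figures in the comment.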
Now, I have no idea how surmountable the other sources of input latency are, and so for all I know cloud gaming is independently doomed. But the speed of light does not seem to be that much of an issue.
Am I missing something obvious here?
The speed of light is not necessarily an issue. I think half-educated people phrase it that way to gussy up their knowledge of the fact that latency is an issue. And latency is definitely a substantial issue. Real latency is determined much more by the number of hops, and thus the number of times a packet has to be processed by all the various routers and switches between your computer and the server in question. Right now, my ping to google.com is between a low of 25ms and a high of 166ms, with an average around 90.
So, the speed of light is an issue in the sense that it consumes some of the latency budget, but the amount it consumes is not that high compared to being routed around the internet. Cloud gaming can work, especially for less latency-sensitive titles, but there are definitely big technical hurdles. For the twitchiest games it may never beat having the silicon rendering the pixels next to the screen they’re being rendered to.
That’s not the speed of light limit for ‘one frame’ of latency. That’s the speed of light limit for 1.5 *additional* frames of input latency. (You are operating on data that is ~8ms out of date, and it will be ~16ms before you see the result of your input.)
But the internet doesn’t happen at the speed of light, because you don’t have a dedicated fiber optic line to the cloud system. Use ping times instead. So people are wrong in their explanation for why the latency is unacceptable, but right in that the latency is indeed unacceptable. There are also throughput concerns; while high-resolution video can be encoded and decoded fast enough to stream, the compression and decompression still have a pipeline time, which is added to all of the other times in determining total latency.
That’s why the esports championships have the competitors colocated; to remove any effect associated with network status from the equation.
I don’t know if you play any online FPS shooters but once people hit a ping of 200ms the game experience is severely degraded.
Now, lets imagine my keyboard and mouse are 2000 miles from the graphics card.
I press up on my keyboard, it takes 17ms to reach the server and 17ms for the result to travel back.
But there are more delays. I have to get a screen’s worth of data from the graphics card; there’s compression and tricks using the difference between frames… but let’s assume the worst case, where something like an in-game grenade updates the whole screen.
my screen is 1920*1200, so 2,304,000 pixels.
When running locally that’s going over a 10.2Gbps HDMI cable.
But 10.2Gbps for a single connection is not practical over the internet.
So we need to compress that image in various ways. You know what compression takes? Time. So we add in some more milliseconds for each frame to be processed and compressed, then decompressed at the user end.
Now add in stutter and line congestion. We hit the evening and everyone on my street starts streaming HD netflix.
The congestion affects both upstream and downstream and it affects high bandwidth connections more.
online FPS games manage because they’re typically sending no video data at all, rather a tiny stream of data updating coordinates and positions.
Most of these problems remain even if the remote server is only 1 mile away. Light speed is only part of the problem.
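To put a number on the compression problem described above, here is a quick sketch. The resolution and frame rate follow the comment; the 35 Mbps target bitrate is an illustrative assumption, not a figure from any real streaming service:

```python
# How much compression does game streaming need?
# Assumptions: 1920x1200 screen, 24-bit color, 60 fps,
# and a hypothetical 35 Mbps streaming budget.

width, height = 1920, 1200
bits_per_pixel = 24           # uncompressed RGB
fps = 60

raw_bps = width * height * bits_per_pixel * fps
raw_gbps = raw_bps / 1e9

target_mbps = 35              # illustrative streaming bitrate
compression_ratio = raw_bps / (target_mbps * 1e6)

print(f"uncompressed video: ~{raw_gbps:.1f} Gbps")
print(f"needed compression: ~{compression_ratio:.0f}:1 to fit in {target_mbps} Mbps")
```

Even without counting the full 10.2 Gbps HDMI link, the raw pixel stream is on the order of 3 Gbps, so the encoder has to squeeze it by roughly two orders of magnitude in real time, and every millisecond it spends doing so is added to the input latency.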
30 FPS is fine for most mere mortals playing most games (emphasis on “most”). But even getting that reliably will be a challenge given existing infrastructure.
Online gaming including FPS already exists, is very popular and has been for decades. The latency argument doesn’t make any sense because it applies exactly the same to Google’s project as well as all existing online gaming.
What is different this time is the *throughput* requirement, not latency. It is much harder to achieve consistently low latency when you have to send the entire screen’s worth of pixels compared to just a handful of coordinates of players in an FPS. Nevertheless, I’m pretty sure that Google’s engineers were not born yesterday, are aware of the challenges and have a plan to deal with them.
No it doesn’t; normal games can do client-side prediction to reduce apparent latency.
Fair enough. This is harder.
But client-side prediction can only get you so far. It will completely smooth out a ping spike only if no interaction with other players is going on at the given moment (which is most of the time). So it looks to me that streaming gaming requires latency to be as low *all of the time* as it is just *most of the time* in traditional gaming to give the same experience.
Is this true?
I’m not really sure what you mean. My perspective is that there are things like moving your mouse to change direction that require x ms latency for the game to be fun and can be done client-side, and things like seeing the position of an opponent that require y ms and can’t be done client-side. If x < y, then a streaming service with z ms total latency is going to have problems if x < z, which is possible even though z < y is required even in the traditional model.
I agree with rlms here. A user can put up with quite a bit of server lag, but input lag could be horrendous (especially in VR where users expect their display to reflect their proprioception instantly); when I submit an input I expect to see feedback immediately. Possibly google will come up with some clever way to run some things cheaply on the client and reduce apparent input lag.
@Incurian
But you’re effectively just streaming video from the server, so there would be no way to reflect user input client side without traveling to the server, rendering, and traveling back.
That’s why it would have to be clever. I have no idea how it would be done, but I wouldn’t discount the possibility based on my lack of imagination.
I guess my theory was that x = y but x-things (moving mouse) happen all the time while y-things (shooting, being hit) happen occasionally.
Either way, plenty of people game online and get a ping of 20ms, so that’s all the proof you need that a 20ms ping is physically possible.
Which is not to say it is easy to achieve it consistently and at 10000x the throughput and (God forbid) outside major urban areas.
It might be useful to examine PVP gaming to understand how much of an advantage is perceived to come from reducing latency. PVP gamers will happily pay thousands of dollars to increase their frames from 60 FPS to 144 or even 250 FPS. They will also pay for a low latency monitor, reducing latency from a more typical 5 ms to < 1 ms. They will do both of these things even if they are playing on a 60 Hz monitor. That’s how much they want to reduce input lag.
Once you play a game at 144 FPS on a 144 Hz monitor, you will not want to go back. You can sense the difference 100%.
Or we can look at online musical collaboration and realize that once we get past 35 ms of desync, it becomes impossible to play together.
Now, on the other end of this are console games played on a TV at 30 FPS with a 50 ms output lag on the TV.
I’d say that “games as streaming” are most likely a competitor for the console market, but robust local hardware will always have a place.
I would also like to believe that eventually there will be a backlash against everything being a subscription model and people will want to go back to owning stuff (I’ve been there for like 10 years already) but I’m not sure what it would take to reach that point or if we ever will.
One of the difficulties is teasing out latency from inconsistency, because they are so often correlated. It’s true that an upper bound of around 250 ms seems to be what makes it nearly impossible to be competitive in games like CS and Quake, with slightly higher limits for slower games like EQ and WoW. However, almost no one has a 250 ms connection that is consistent, just based on how the internet works. It’s usually a 150–500 ms connection with spikes, and that just murders you.
@Clutzy:
If your ping to the server is 150 ms you have a pretty cruddy internet (overall network setup) or you are playing on a server on another continent.
But (and this a big but) 150 ms lag from when an enemy action affects you is very, very different from 150 ms of input lag. 150 ms of enemy desync is playable. 150 ms of input lag is going to feel like everything you do is happening in tar.
If you snap your mouse or controller to the side and you have that much lag until your camera starts to move on screen the only game you will want to play is something involving naval fleet maneuvers.
I’m just referring to what I know about old CS:GO non-LAN competitions. And often you’d have American and Russian players playing against each other (or against EU teams) in these competitions, and at certain points it was manageable, but very often the lag spikes just ruined everything. Obviously all the big stuff has always been LAN for that reason, but many qualifiers used to work like that (although now it’s big enough that everyone can usually do regional qualifiers).
@Clutzy:
You are right that variable “desync” is harder to deal with than steady desync. I’d generally take higher, but steady desync over lower, but greatly variable desync.
This also is true for frame rates BTW.
But this doesn’t really have anything to do with the question at hand, which is about input lag. Having your client respond immediately to inputs is much more important than how delayed your view of the server state (and other clients) is. They are apples and pears. Similar, but quite different.
As long as the view I have on screen is consistent, and my controls cause nearly instant response on screen, I can play. But if I have to snap my camera view ~90 degrees to respond to an incoming threat, it will be “very hard” to stop my turn precisely if the camera doesn’t even start to move until after my mouse movement has stopped.
Not discussed: forms of gaming that are more sensitive to hardware weakness than to input lag.
Dwarf Fortress can overpower anyone’s PC with pathfinding and temperature calculations, but would tolerate 100ms input lag pretty well.
Large Civilization games will chug on credible hardware, but again will tolerate high input lag.
It doesn’t have to be only the specific games that run worst on the service.
@deciusbrutus:
Absolutely 100%. That’s why I said in the beginning that there are certainly plenty of examples of people happily playing console games in high-input lag environments (consoles on a high lag TV being an easy example). The game itself determines how much or little input lag you are willing to tolerate. I explicitly pointed out that streaming computing services could definitely threaten the console markets because of this.
I just was pointing out that low input lag is definitely highly prized in many cases, and that this is always an advantage for (more) local hardware.
Console games have a very particular following; because the typical console controller is a thumbstick that does not allow very precise control, console games can’t require very precise analog controls.
Console games optimized for streaming could simply increase the size of their windows to mask the input lag. New AAA titles could probably even afford to make their controls retroact, so that it seems responsive.
@deciusbrutus:
The fact that controller based games use a “keep moving until I say stop” approach (as opposed to a mouse based “move to here” controls scheme) does help them. Some amount of interpolation (prediction) of when the user is likely to stop can also help.
But if you have 150ms of lag between stopping controls and the screen responding, you ARE going to feel that. Much in the same way that a speaker whose own voice is echoed back to them will feel it. Games that I play on controller on the PC, like Rocket League or Dark Souls, feel much better to play than they did on console. In fact Dark Souls Remastered changed the experience of playing in Blight-town from one of the the most frustrating areas due to lag induced by frame rate issues to not different than anything other than the normal “fuck you, this is Dark Souls.”
Input lag is just a bitch to deal with, and there is no way around that in a game that tests your reactions to novel events. You need some indication of what you have done in order to adjust your future actions, and when that feedback is delayed, it limits you.
I assume the plan is “serve the people who can use this today, and expect this creates demand for better infrastructure, while expecting to keep a first mover advantage until then”.
Yeah, that mirrors my thoughts on it. Someone asked me about it at my convention meeting on Saturday and my response was basically “The infrastructure doesn’t exist yet.” Although my prediction was a little more pessimistic than yours was. I expect that what will happen is that it’ll flop and then over the next five years or so our infrastructure will upgrade and someone will try again and succeed.
It can flop, and Google’s track record says they might cancel it after a while. My prediction is that it will stay alive for a decade without taking over the console market, but by then the streaming market may be different enough for them to not be important anymore.
If anything, MS seems way better poised to take it all. They have the cloud infrastructure too, plus a current console, and may ride the market on both sides without committing to either, letting people use whatever fits them better. My prediction is that in the end, streaming’s biggest player will be MS, the end being whenever streaming gaming is 2x the revenue of consoles.
Online FPS games typically run 2 copies of the map, one on the client and one on the server, the server one of course sans-actual-graphics.
FPS games then get to cheat, because if you run down the hallway and the connection stutters, you may never notice: you continue running down the local copy of the hallway until the client and server catch up with each other. It’s only if the connection really cuts out and the server starts disagreeing with the local client, or the time steps get bigger than the server will allow, that players start lagging badly enough to really notice.
Hell, it’s not unusual in FPS games for deaths to involve the server making some slight edits to recent history as far as the clients are concerned. Bob shoots at mike and the server agrees he hit him, meanwhile mike thinks he made it round the corner but then the server updates his client that he’s dead on the floor a few steps back.
Try to do that with screen frames and you’re gonna have a bad time.
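The prediction-and-reconciliation trick described above can be sketched roughly like this. The data structures and “physics” here are illustrative toys, not taken from any real engine, but they show why a traditional client can hide latency in a way a pure video stream cannot:

```python
# Minimal sketch of client-side prediction with server reconciliation.
# The movement model (x += dx per input) is a toy assumption.

from dataclasses import dataclass

@dataclass
class Input:
    seq: int     # sequence number of this input
    dx: float    # movement applied this tick

class PredictingClient:
    def __init__(self):
        self.x = 0.0       # predicted local position
        self.pending = []  # inputs the server hasn't acknowledged yet

    def apply_input(self, inp: Input):
        # Apply immediately so the player sees an instant response,
        # and remember the input in case the server disagrees later.
        self.x += inp.dx
        self.pending.append(inp)

    def on_server_state(self, server_x: float, last_acked_seq: int):
        # The server is authoritative: rewind to its state, drop the
        # acknowledged inputs, then replay the unacknowledged ones.
        self.x = server_x
        self.pending = [i for i in self.pending if i.seq > last_acked_seq]
        for inp in self.pending:
            self.x += inp.dx

client = PredictingClient()
client.apply_input(Input(1, 1.0))
client.apply_input(Input(2, 1.0))
client.apply_input(Input(3, 1.0))
# Server has only processed input 1, and nudged our position slightly.
client.on_server_state(server_x=0.9, last_acked_seq=1)
print(client.x)  # 0.9 + 1.0 + 1.0 = 2.9
```

The client feels zero input lag because it applies its own inputs immediately, and a late server correction is mostly invisible because the unacknowledged inputs are replayed on top of it. A streaming service has no local simulation to replay inputs against, which is exactly the problem.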
For most day to day stuff, the speed of light is so many orders of magnitude above any other speed that it’s not worth thinking about. If someone told you that if you want to cross the atlantic, you need to keep in mind that it will take you at least 50ms no matter what mode of travel you use, you would look at them like they’re crazy.
The speed of light is not the only limiting factor in cloud gaming, as others have pointed out. However, it is definitely relevant enough that you actually need to include it in your calculations.
Yep, in network computing it can yield some interesting behavior.
There’s a famous old story of the 500-mile-email.
https://www.ibiblio.org/harris/500milemail.html
I love the tone of that report – it reads like neo-Lovecraftian scifi, and I kept expecting him to discover that the limit was imposed by aliens or elder gods, or that he was a brain in a vat.
A magnificent story, thanks for sharing!
That’s an awesome story.
+1 to all replies – this is a lovely (and presumably actually true) story.
+1
Thanks for all the informative replies.
I’m certainly interested in all the other more prosaic latency issues facing cloud gaming as well. But as far as I can tell there has been no substantial pushback against the base claim that Google should be capable of making enough servers for the speed of light to not really be part of the problem anymore, at least not a major part. From this lack of pushback, I’m inferring that all the authoritative people saying otherwise are as full of it as I’d suspected.
Doesn’t that only follow for games with large, well-distributed bases and require that they only play against other people on their nearest servers?
It doesn’t matter how many servers they have worldwide, if I want to play with my friends in Taiwan or whatever speed of light is a hard limit on minimum latency (plus whatever other latency is introduced).
Nah, this is for single-player gaming where your graphics card is hundreds of miles from you.
Any multiplayer related latency would get layered on top of the latency from having your graphics card in another country to your screen. Your remote-graphics card server would connect out to the taiwan server just like your home PC does now
In addition to Murphy’s response, I’ll add that I’m mostly interested in single player games anyway, so not that concerned on the multiplayer front. But my taste in single-player games is still such that sufficient input latency could drive me crazy.
It largely comes down to how well they can simulate a dedicated 1-meter 10.2Gbps connection over a Mb-scale, [large number]-mile long shared pipe.
If major companies struggle to get smooth real-time 800*640 low-fps video and audio to work between meetings in their own internal corporate offices with thick corporate pipes, I’m not gonna hold my breath for high-def gaming to work well enough that I wouldn’t spend half my time swearing at the screen.
They might pull some magic out… but my guess is that it’ll continue to be crap until people have Gb-scale connections running protocols that can guarantee smooth transmission.
Yeah, I’m not exactly throwing my consoles in the trash already (or no longer keeping one eye on gpu prices). From what I’ve heard, the current levels of latency are not something I would have enjoyed playing Celeste or Sekiro with.
I guess my original post was mostly inspired by the primate part of my brain that is more interested in ensuring people don’t wrongfully gain status through bad science than in the practical matters.
One other thing I’d add to this discussion: The average home network connection arguably wasn’t good enough for Facebook when Facebook was built, either.
What? The original Facebook implementation was incredibly lightweight. Unless you are suggesting that the average home network connection arguably wasn’t good enough for anything (which feels true now but wasn’t true then in practice).
The key difference in either case is that if your network is too slow for Facebook, Facebook just loads slowly. Same with early editions of Youtube or other video streaming. So you buffer, and wait for it to load (maybe go make a sandwich or watch some TV) and then come back and it is ready.
With live gaming, there is no intermediate “it works, but slowly” there is either “usable” or “not usable”.
This whole system would also leave people not in or near fairly large urban areas out in the cold.
Very few early FB users were on home connections. But the relevant point is, “half of your potential customers don’t have a fast enough computer/pipe/etc” is not a death sentence for a new service, as long as the other half do.
Some Quake 3 users never played with a ping over 50 and others never saw one below 200, especially the people in rural areas you mentioned; does that mean it was “usable” or “not usable”? Stadia will presumably work fine if you have a good server nearby and not work well if you don’t, just like Q3, right?
200ms? Ha! I yearned for the low-latency 350ms trips when I was on dial-up.
Here’s another linguistics-related suggestion for a linkpost: The Small Island Where 500 People Speak Nine Languages.
How about a buddy system rather than a ban? You can only post a culture war response if you can convince someone else to post it in their own words.
That’s just the normal ban loophole but explicitly allowed.
Sure, my flames will be posted by my dear friend Honrad Concho.
Sounds good, as long as it hits those dreadful other people and not me.
No Comment of the Week?
I would nominate the trolling-versus-shitposting comment.
Guess we better up our game.
One of these days maybe we’ll have an AI that has learned to detect culture-war-like comments and acts as a filter to prevent them from getting posted (rather like how the presence of a tabooed term for a certain ideology automatically disqualifies a comment from appearing here); until then, privately emailing warnings to people and then checking that they comply afterwards sounds like a major headache.
shadow-banning commenters and posts seems to be more and more of a thing.
basically: outright ban people and they get angry and move somewhere else, taking their ad revenue with them.
So more and more sites are switching to shadow bans and similar: everything looks the same to the user…. but their posts start getting hidden, or moved to the bottom of all discussion and don’t get highlighted to anyone except those who really go looking.
And so they continue to rant…. but they get fewer and fewer replies.
I don’t think it’s about ad revenue. It’s about coming back with an even worse sockpuppet.
At least on reddit, this is the motivation for shadowbanning users on most subs; it basically only happens to spammers and repeat trolls who come back on new accounts, because it doesn’t notify them and means the sockpuppet turnover is slower.
Shadowbans are about cowardly admins who don’t want the personal confrontation of telling someone they are kicked off. They can’t even bother tossing them in the oubliette.
If the person really is a monster trying to destroy you, sure, shadowban away, but most people aren’t that monster, and it’s a shitty way to treat people who aren’t monsters. But once you have that tool to avoid the confrontation, you will come up with a way of deciding the people you have to manage really are monsters.
The more general version of this idea is to use NLP to detect the level of outrage in a comment and sort the comment thread accordingly.
If I knew that SSC engages in shadowbanning people, then I’d stop reading the comment sections — because I could never be sure if the discussions were genuine or indirectly scripted. Perhaps this is the end goal, I’m not sure…
I guess? But I think there are many of us who read most comments on most open threads, and it wouldn’t be that much work to fire off a quick boilerplate email. Scott presumably doesn’t read everything, but even 20% + user flagging would work well enough, I’d imagine.
This is of course the modern Opentathlon Thread, otherwise known as the 19th-century action-hero event as it tests the skills required of a young officer attempting to return to his unit through enemy lines- cross-country running, swimming, riding an unfamiliar horse (because he’s just liberated it!) and fighting with sword and pistol.
The ancient pentathlon was sprint (one stade, about 180m), wrestling, long-jump, javelin and discus. This was held at some early Olympic Games, with a 1500 metre run replacing the wrestling.
The modern pentathlon (not to be confused with the Modern Pentathlon) is held at indoor competitions instead of the (women’s) heptathlon, as indoor javelin throwing is impractical. It also removes the 200m race from the heptathlon, leaving 60m hurdles, high jump, long jump, shot put and 800m. Men compete in the decathlon outdoors, and a different heptathlon indoors.
So what would the event look like if we were testing the skills of the 20th or 21st century military?
Swimming, sprinting, and distance running still seem appropriate though perhaps these should be done with a heavy load. Instead of javelin, you might have shot-put as it is more like throwing a grenade. And maybe a crawling race? Competitive digging?
Running, swimming, grenade-throw, rifle shooting seem like pretty obvious choices. I’m not sure what the fifth event should be, though.
Orienteering?
Hand-to-hand combat?
An obstacle course?
Would a stealth component be appropriate? It seems really useful in modern warfare, but I’m not sure how to measure it.
Maybe mix it up with navigation. The competitor must get from point A to point B in random terrain guarded/patrolled by either neutral parties or the other competitors with cameras. Points are lost if the searchers can locate the competitor. More points if they can get a picture demonstrating they could have shot them.
Night orienteering (a seriously fun variant, if like me you enjoy running through dark forests at night). Most modern warfare is nocturnal.
Possibly put in a combined event- modern pentathlon combines running and pistol shooting in one event, though it works slightly differently from biathlon.
Perhaps have orienteering while carrying a rifle; at certain points competitors return to the shooting range and have to hit some targets.
(As far as ways to combine running and shooting, in biathlon competitors race carrying their weapon. If they miss they must either complete an extra “penalty loop” or have a penalty added to their final time. Meanwhile, in modern pentathlon they do not carry their pistols- at the shooting range, they must either hit 5 targets or wait 50 seconds before they can start running again. There is no additional penalty for a miss.)
Have runners start with the pistol, 1 empty magazine and no ammunition. At the shooting stations there is a tray of loose ammo of a standard caliber (say, 9mm). When shooting, you first have to load the rounds into the magazine by hand, choosing how many rounds to load. You keep shooting until all 5 targets are hit. If you load 5 rounds and only hit 4 targets, you’ll need to eject the magazine and load at least 1 more round in order to drop that last target. If you load more rounds than you needed, you either have to eject the extra rounds before running again, or you get stuck with the extra weight.
Drone operation?
I would like to see a modern version of the Hoplitodromos.
Starcraft, Counter Strike, Rocket League, Dota and Fortnite.
Hmm….
The first four are pretty easy:
Obstacle course
Distance march/run with rucksack
Rifle shooting
Kill house (rifle in a more tactical environment, graded on both speed and accuracy)
Not so sure on the last one. If Doing Paperwork or Suicide Prevention Training are disallowed for not being athletic enough, swimming wouldn’t be a bad choice. Maybe some form of land nav/orienteering course would work. Or a first-aid event. If all else fails, test pistol skills, too.
Probably like this: http://www.bestrangercompetition.com/
Or this: https://en.wikipedia.org/wiki/Sandhurst_Competition
The Olympics are primarily for entertainment, and the Modern Pentathlon has more to do with being a Swashbuckling Action Hero out of Dumas et al. than with any prioritized ranking of actual martial skills. So the Post-Modern Pentathlon will presumably have more to do with John Wick or James Bond than with SOCOM.
In order:
Fencing gets replaced with Mixed Martial Arts, details TBD.
Pistol Shooting is folded into an IPSC Three-Gun match, probably spaced along the running course.
The running component will be upgraded to Parkour, perhaps using the natural urban terrain of the host city.
The equestrian event is replaced with motorcycle racing, either motocross or street racing using the host city’s streets.
Swimming is tricky, because it doesn’t seem to fit into the common action-hero skillset. Possibly we keep it anyway for tradition and martial utility, but I’d be open for a thematically-appropriate replacement. Any suggestions?
Skydiving?
Thematically appropriate but hard to do as a competitive individual sport.
I did consider a HALO jump to the start point of the Parkour course, with the obstacles laid out to severely handicap anyone who doesn’t land right on target. Minimum time to ground plus precision landing gives you a head start over the competition. But that looks like several sorts of hazard lining up to kill the competitors, and I think they frown on that in the Olympics.
Underwater obstacle course, maybe.
In a Scuba mask + Tuxedo combo.
Same rule for the women’s?
I believe female movie spies traditionally wear some variant of the Little Black Dress in the scenes where their male colleagues would be wearing tuxedos. Let’s go with that.
Evening gown, of course.
Trash talk and one-liners, scored like gymnastics.
I think we have a winner.
Surely this would be integrated into the other events?
I’m thinking the trash talk and one-liners would take place over the course of the other events but be scored separately. You might come in third in the parkour segment, but come up with a really good quip when #4 misses a roll and breaks an ankle and the Russian judge might give you the nod later on over #1 and #2. This would then be incorporated into the final scoring in some relatively balanced way.
This reminds me of the sword-fighting in the Monkey Island games, where sword thrusts must be accompanied by insults (rhyming insults when fighting at sea), and the quality of the insults determines the winner.
Weighted blankets and similar things — weighted vests, compression clothing (specialized, but perhaps also the kind from athletic stores) — subthread. Anything goes. Have you even heard of them? They were originally meant for autistic people, but people are realizing they’re just a good generic stress-reduction tool. What is your experience, and anything else you have to say about them?
Fun fact: one study found that ADHD kids liked weighted vests so much that they were acting out just to get them, so at first the vests didn’t work to calm them down; they only worked once the kids were allowed to wear them all the time.
I have ordered myself a compression t-shirt, but I don’t understand weighted vests. Don’t they put all the weight on your shoulders, effectively being the same as a heavy backpack, which I hated as a kid?
I have a weighted blanket, and love it. I don’t have any rigorous double-blind data or anything to back this up, but it certainly feels like it helps me and my partner sleep, especially in the summer when it’s too warm for a duvet.
I haven’t heard of weighted vests being used for stress-reduction, only for athletic training. Looking at Amazon, the heavier ones all seem to have chest straps and/or waist belts, which should take a lot of the weight off your shoulders. This should also be true of backpacks, by the way! If you’re carrying heavy loads on your back, make sure you have a backpack with a waist belt, and ideally a chest strap as well – most of the weight should be on your hips rather than your shoulders.
Hikers’ backpacks have a built-in belt that puts the weight on your hips. I looked at pictures of weighted vests just now and it looks like they might put the weight on your chest or belly.
I’ve always slept quite well my whole life, and even though I currently live basically on top of a major Manhattan avenue I thought I still slept like a rock. But I’ve always enjoyed the weight of people on top of me during sex/cuddling, and since I’m utterly burnt out on dating I decided to get a weighted blanket.
The effect was so extreme I actually wondered if I’d been autistic or anxious my whole life and never noticed it. I was blissed out for a solid week, not a care in the world. The compression shirt followed, to extend the effect to the day. It pulled me most of the way out of a huge spiritual/physical/let’s-not-get-this-diagnosed-that-sounds-expensive funk. I actually still spend parts of the day completely giddy. I sleep like the dead and dream in a much more fantastical way than I have in years, and often lucidly, which I used to be able to do but had basically lost. My sex drive returned, and acid reflux from the restriction of the compression shirt showed me I was eating way too much (next thing to work on).
Both the compression shirt and the weighted blanket also give me something to push against with my stomach for deep breathing. I am a much much too shallow breather, and being able to push against a force – however gentle – with my diaphragm, has been life changing for all the reasons deep breathing is usually said to be good.
Only thing is the weighted blanket gets holy hell hot.
I tried a weighted blanket, but other than being much warmer, didn’t notice much difference either way. If it gets cold enough to try, I might try doubling it.
I used one and didn’t notice a substantial difference in stress or anxiety, but the blanket I used may have been too light for a person of my weight and size.
Suppose that someone has heard about all this hype, and wants to actually purchase a weighted X, but they seem to be expensive enough that making a poor choice is semi-costly. Which items do people recommend or anti-recommend? When looking at size and weight, how much of a function of my various measurements is it? I’ve seen ~10% of your body weight mentioned for blankets (and a comment that more than this is good too), but I don’t know if this is universally accepted.
I went for 10% of my body weight rounded up. They seem to come in roughly 5 lb intervals, so that meant 30 lbs, and indeed it was expensive: $169 on Amazon. (Currently out of stock in that weight, but this set.)
I expected to be disappointed since I’d enjoyed the weight of a full human body on me and figured there couldn’t be a weighted blanket that could recreate the feeling, but the more even spread and compression of the blanket is a wonderful feeling in its own right. Like being swaddled or hugged.
A friend also suggested that buying a large bean bag chair would probably have the same effect, since that’s basically what a weighted blanket is, but looking now it seems bean bag chairs are much more expensive than I thought.
You also don’t need to get one the size of your bed. Mine is basically a full size, because the weight keeps them in place and any excess would just drape off the bed, so there’s perhaps some saving there.
I’ve tried out a weighted blanket recently and had small, but definitely positive results – less moving around during the night and I believe faster falling asleep.
I never thought about compression shirts – any particular recommendations?
I don’t have great recommendations there, I’m afraid. My current strategy has been to buy cheap compression shirts from Gotoly that are really meant for hiding fatness. Since the biggest benefit for me is the deeper breathing, the focus on the stomach is fine. They wear out quickly, though, as I suppose they must, being under tension by definition. So I go for cheapness and replaceability.
I struggle to think of what the difference is between a shaping compression shirt and a medical one, though, besides distribution of the pressure. I got a CalmWear once but it was much too small sadly.
More than anything I think you should feel comfortable with the material.
We got a weighted blanket and used it for months. My wife liked it, but I hated it, and we eventually got rid of it because it was seriously hurting my ability to get enough sleep at night.
I got a weighted blanket about four months ago, and have been using it almost every night. My experience is that it definitely has a calming effect, but the magnitude is not that big. The first morning after I’d slept with it on the whole night through (it takes a bit of getting used to, so it’s better to start gently), the effect was comparable to 5 or 10 mg of diazepam – I felt very very relaxed, and the effect didn’t wear off until after lunchtime.
In the beginning I was concerned over dependence or addiction, that I would start to have a hard time sleeping anywhere except my own bed. There has been some habituation, to the extent that regular duvets now feel comically light-weight, but thankfully I can easily get to sleep anywhere.
It would be interesting if someone did some rigorous research on weighted blankets. I would like to know what kind of person it works for, what kind of person it doesn’t work for, and why.
Weighted blankets only – I don’t know about weighted vests/compression clothing (though this thread is interesting).
I got a weighted blanket a year or so back, and it works very well for me. The downside is pretty much just habituation – I find it much harder to sleep without it (or other heavy blankets at minimum), to the point of hauling the thing through airports on a few occasions just so I could get enough sleep (though my sleep with it is still better* than my sleep without it ever was).
I have always slept better with significant weight on top of me, which is why I tried it – the only time I can remember starting to fall asleep without wanting to since being very small indeed was the time child-me crawled into a folded-over futon, which was heavy enough to send me halfway to sleep before I realized and crawled out. And I have never slept well during the summer. (Heat = no blankets = no sleep.)
That said, I also know people weighted blankets don’t work for, who find them uncomfortable. Anecdotally, the best evidence for whether it works is whether having heavy blankets (or presumably other weight) on top of you works. If you don’t like that, you probably won’t like the blanket either.
I got mine from an SSC link: https://www.etsy.com/shop/AutisticRabbit#about. I’ve been quite happy with it thus far. The only downside is it is not washable, though you can get a cover that is. I get the impression this is a common thing for weighted blankets, but I could be wrong.
* Note: “Sleep better” = “be able to get to sleep”; the blanket does not, so far as I can tell, actually cut the hours I need to sleep or anything like that. It just shifts time-trying-to-fall-asleep from 1-3h at worst to 30m at worst.
I have autism and anxiety and find that weighted blankets are really comforting. It doesn’t have to be a blanket–anything that weighs at least 10-15 lbs and distributes the weight across my torso and upper legs works–but I’m just going to say “weighted blanket” for simplicity. If I’m stressed or anxious, the extra weight helps calm me down significantly. If I’m not stressed or anxious, it still puts me into a more relaxed state (and puts me to sleep if I’m not careful).
Even when I was a kid, before I knew that weighted blankets were A Thing, I took advantage of the benefits. I was the weird kid who slept under a full bed of blankets, even in the summer, because overheating was preferable to sleeping without any weight on top of me.
My only complaint about weighted blankets is how expensive they are. I’m too much of a cheapskate to buy a pre-made one, and I don’t have the skills to make one from scratch. My mother said she’d make me one for Christmas a few years back, but alas, those plans unraveled.
I wonder what’s involved in making a weighted blanket. What skills are required?
So I looked it up, and it seems as though the only skill needed is minimal ability to use a sewing machine. Minimal because the weighted blanket doesn’t have to look nice.
At least some maker spaces have sewing machines.
I’m not sure whether making a weighted blanket will save much money– the cheapest ones seem to be about 30 or 40 dollars.
Wow, weighted blankets are that cheap now? Last I checked (which, admittedly, was about two years ago) a weighted blanket of the correct size and weight for an adult cost about $150, whereas making one cost around $30-$40.
The demand must have increased dramatically if the market changed that rapidly.
The recent college admissions scandal and the continual drumbeat of stories about the extreme lengths students and parents go to to secure admission to a small number of top colleges has me wondering if there isn’t a better way to do all this. Right now, prestigious colleges are known for being hard to get into. But they are not known for being hard to get through. I expect the instruction offered by a college such as Yale is of high quality, but it has no particular reputation for particularly high standards or difficulty.
What if we turned that around? Could one run a prestigious institution that admitted pretty much anyone, but set the bar high academically, with a demanding curriculum and uncompromising grading standards? I would expect a large freshman class each year composed of bright-eyed young men and women, and a much smaller graduating class of stainless steel motherfuckers who made it through the entire course of study successfully. (How much smaller? 1 in 4 candidates admitted to US Navy SEAL training makes it through the course, though that’s after passing pretty demanding preliminary qualifications. If you let anyone try, might 1 in 10 make it through? 1 in 20?)
The great advantage of doing things this way is that there would be no point in trying to impress anyone up front. Looking impressive after high school really wouldn’t get you anything, if you didn’t have what it took to make it through the actual studies. Trying to cheat your way through would probably also be quite difficult since by senior year or so the remaining students would be quite a small group, known personally to the rest of the class and the professors. If someone else sat an exam for you, it would be obvious.
Is there some reason this wouldn’t work? Or might there be some great downside I am missing?
A friend who studied there tells me that ETH Zurich (11th in the 2019 THES ranking, 7th in the QS ranking) works this way, with something like a 50% dropout rate after the first year alone. However, I think going to a tough university that you might drop out of is much lower-risk in Switzerland than it would be in the US – fees at ETH are about 1600 EUR/year, and I think there are also maintenance grants for Swiss citizens.
Another effect I’ve heard about: academic staff have little willingness or time to help struggling students, because most of them are going to drop out anyway, and there’s an “if you can’t stand the heat, get out of the kitchen” attitude. I think this is a downside: probably lots of them could overcome their difficulties and get through the whole course with a bit more help. But I guess it depends what you’re selecting for – the ability to survive an uncaring environment unaided is a useful thing, but it’s not the same as intellect or scientific potential.
I studied at ETH, and this information is correct. I think it’s a good system, given that the cost of attending one year and failing is reasonable.
The main reason for this system being in place is that every Swiss student who possesses a matura (the highest-tier high-school diploma; we have two other tiers) has a right to be immediately admitted to any undergraduate program at any Swiss university (with some exceptions for studies that have hard capacity constraints, like medicine).
Also, for the record, professors and TAs have lots of time and willingness to help students along; it’s mostly students systematically failing to take advantage of office hours, etc. The whole system is just set up in a manner that requires a lot of personal responsibility, in the sense of “here’s a list of requirements to pass, do whatever you want with this information.”
Oh, excellent – I’m delighted to be corrected on that point 🙂
At my university, many professors had a rule that while you were very welcome to attend office hours, you were not welcome the week before an exam if that was the first time you were using them.
The last few weeks, office hours tended to be done in groups, because of the huge deluge of students coming at the last moment.
So the real advantages of office hours (one-to-one tutoring, personalised explanations) can only be enjoyed by students who started studying in the beginning of the semester, not in the last few weeks.
I’ve never met a student who got into university and started studying at the beginning of the semester who didn’t eventually manage to pass all the courses.
The typical US state school engineering program is pretty similar. But the students drop out to the rest of the university, so the number is not publicized so much.
I’d point to the Irish CAO system.
It’s not perfect, but it means that if a college’s course has 100 places and 300 people want to get in, the places go to the 100 people who scored best in the Leaving Cert exams.
The washout rate in my course was about 40-50% per year. We started with ~160; 78 made it to second year, and about 50 made it to third. Only about 30 of my original classmates graduated with me. 4th… there weren’t many washouts between 3rd and 4th, to be fair.
Though I have some issues with it: the disability support service basically dragged one student unaffectionately known as “brick” through despite him not particularly understanding the material or doing the work.
But for most students they mostly weren’t afraid to let people fail. Personally I didn’t find the course too hard, but I have a skewed view because I took to the subject like a duck to water.
We didn’t have the weird “extracurricular” bullshit where US students have to pretend they spend their summers teaching orphans to fly or something. If you get a perfect score you can pretty much go where you like, in theory a course might get overwhelmed by people with perfect scores applying… but last year out of 55K exam takers only 0.2% got a perfect score and if all 157 of them applied to the same course I imagine the Uni would somehow make extra places or something to keep all the top 157 people.
I think it’s partly because how uni is funded in Ireland changes the risk profile. The state covers the cost of 1 degree (and has lots of negotiation power with the colleges), so someone can fail out of a degree without ending up financially ruined.
I think you get a degree of risk compensation, and people are unwilling to inflict brutal punishments, so failing someone who’s $400K in the hole is socially harder than failing someone who isn’t, even if they both deserve to fail.
Also, people are going to be risk averse: I wouldn’t want to gamble the price of a house on maybe-possibly-perhaps having a 1 in 10 chance of graduating.
So it kind of makes sense that some of the most expensive universities fail people less.
From talking to a French doctor I know: apparently some French medical schools have a battle-royale model. Almost no selection on who gets into first year… but then only about the top-scoring tenth get to move on to second. This creates a brutally competitive environment, because helping someone might push them above you. He had some stories about people getting locked in toilets during exams and similar (because one way to increase your chances is to eliminate someone who’s definitely in the top 10% from the running).
I’d expect a biological attack (releasing a virus or such), given that these are medical schools.
I would have expected being locked in the toilet of people who live on Irish food to have qualified as a biological attack, but apparently not.
Medicine is very competitive in Spain also, although you just need to pass exams. However, grades affect the specialization you will be able to get, as well as getting into the desired hospitals. So the medicine part may be to blame more than the university.
The worst/weirdest thing I’ve heard about medicine students in Spain is that they write their notes in green ink, because this way it can’t be photocopied.
I remember the old code books that used to come with video games back before networked DRM with black-on-black writing to try to make them hard to photocopy.
Of course modern scanners have no issue even with black on black.
They can be color photocopied, but that’s very expensive. Black-and-white photocopying converts the green ink into greyscale for printing, and the resulting grey is very faint.
You can do it, though. You first have to scan the green ink text, then convert it to greyscale and boost the contrast. That is still a lot of work, and cannot be done on a photocopying machine.
The only things modern scanners cannot scan are euro/dollar bills, and in Spain, University diplomas (because those are printed in the Royal Mint, with all the bells and whistles).
Get a cheaper scanner: some of the expensive ones respect the EURion constellation and lock up
https://upload.wikimedia.org/wikipedia/commons/thumb/a/ac/EURion.svg/1200px-EURion.svg.png
but most won’t.
Also, fun game: print the EURion constellation on a tshirt and wander tourist locations to get it in peoples photos.
Anti-photocopier stuff is really out of date; most people I know now would just scan a document and keep it digital.
I don’t think that will work. Even the wrap-around effect of a shirt will probably distort it enough to make it useless. It’s designed to counter copying, not photography.
Re: price of a house: the factors that make college so expensive aren’t fixed. If you’re making massive systemic changes, you can expect a massive shift in demand for college, lending behavior, value of the degree in the market…. All of which can have unpredictable and interesting – and perhaps salutary – effects on the cost of college.
An obvious downside is the cost incurred by those who drop out, especially late in the program. In a diploma society, where almost graduating is worth little more than never having studied, this is very bad. This is not as bad for US Navy SEAL training because the military pays for the training.
So ideally you want to set a very high bar very early and then make everything after that easier than the bar. A bit of quick googling suggests that US Navy SEAL training works this way as well, with most dropouts being at the beginning.
So, the curve would have to look something like this:
500 enter first year
100 enter second year
75 enter third year
60 enter fourth year
50 graduate
I guess it would suck pretty hard to be one of the ten who entered fourth year successfully, but didn’t graduate. Being one of the 400 who washed out in first year wouldn’t be quite so bad.
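For what it’s worth, the per-stage survival rates implied by that hypothetical curve can be worked out in a few lines of Python (the cohort numbers are just the illustrative ones from the list above, not real data):

```python
# Hypothetical cohort sizes from the curve above: entrants to each
# year, ending with the number who graduate.
cohort = [500, 100, 75, 60, 50]
labels = ["enter year 2", "enter year 3", "enter year 4", "graduate"]

# Survival rate at each transition: this year's entrants divided by
# last year's entrants.
for label, prev, curr in zip(labels, cohort, cohort[1:]):
    print(f"{label}: {curr}/{prev} ({curr / prev:.0%} advance)")

# Overall graduation rate for the original intake.
print(f"overall: {cohort[-1]}/{cohort[0]} ({cohort[-1] / cohort[0]:.0%} graduate)")
```

This makes the shape of the proposal explicit: an 80% cut in the first year, but roughly 75–85% advancement rates in every year after that, so almost all the brutality is front-loaded, as suggested above.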
I feel that the problem here is the washout-quotas.
What do you do under your system if 400 people surpass the normal standard for getting into second year?
Perhaps you have an unusually good intake one year or perhaps they pull together, cooperate and help each other surpass the standards. Doesn’t really matter how but they surpass the standards.
There’s a difference between setting a high standard and letting people wash out if they don’t meet it vs treating it as a tournament for the aggrandizement of the institution involved.
It sounds like you’re treating the washout rate as some kind of metric of quality.
I can set low standards to enter then fail a thousand illiterate fools who didn’t study but that says almost nothing about the quality of my course.
Similarly, if 500 hard-working future-Nobel-prize-winners turn up at my door and I fail 90% of them because I’m trying to paint my course as having “high standards”, then I’m simply doing my job badly.
Slightly-related anecdote: Terman did this big study of mathematically precocious youth (basically people who did really well on the SAT math section when they were quite young). He tested a bunch of kids, followed the ones who were above his cutoff, and got the kind of results you’d expect (lots of those kids went on the get PhDs in technical subjects, many ended up as professors, inventors, etc.) But no Nobel prizewinners. Two kids who took his test got science Nobels, but both just barely missed his cutoff.
I think Nobels run into a few statistical issues.
They’re so rare, and the number of scientists and papers so large (with a certain amount of politics and chance thrown in), that while Nobel Prize winners tend to be very smart, if you simply sample a few hundred of the smartest people in the country, your odds of hitting a Nobel winner are pretty low.
And yet both Feynman and Shockley were in the group tested, but didn’t quite make his cutoff. Which is just another way of saying that test scores strongly correlate with brilliance, but aren’t the same thing, and a lot of other stuff goes into the kind of innovation that leads to a Nobel.
Shockley and Alvarez, not Feynman.
It was a systematic study of California children and Feynman was not from California. Also, I think he was too young, born in 1918, compared to 1910-1911 for Shockley and Alvarez. Also, the test was the Stanford-Binet test, not math-focused. (Terman wrote the Stanford-Binet test—that’s why it’s called Stanford—based on earlier work of Binet and Simon.) I think that the Termites were tested in elementary school, too.
There is a separate Feynman story, although it is oddly lacking in detail and corroboration. But it claims to be a high school test, which would be more predictive.
And you may be confusing this with the much larger SMPY study (1971–), which uses the SAT in middle school.
You would have to scale your freshman class size way up to make admissions not very competitive. That would result in lowering quality of the freshman education.
Also, the freshman year becomes the admissions process, except that now you have to fail a large percentage of the freshmen by design.
Well, I did take part in something like that at the University of Waterloo. The Mathematics faculty, which includes a number of departments, including Pure Math, Applied Math, Statistics, and Computer Science, offers advanced versions of the first- and second-year required courses. They let anyone who is admitted try it, and you can drop down to the regular courses at any time. I took the advanced courses in the 1A semester, and we went from 200-some students to maybe 20 taking the final exam. I think 10 people showed up for the start of the next term. I decided to drop down to the regular mathematics stream at that point.
At some point don’t you end up with something that’s indistinguishable from an admissions exam and now you are back to the old system? I mean if 5000 people are going to be taking what amounts to a MOOC and then are only allowed to continue on to the “real” courses if they get a high enough score on the final exam, what kind of system is that really?
Why would first year have to be somehow fake? It could teach real material. It could grade real material. Obviously if you’re expecting to flunk well over half the class there has to be something of a mismatch between what the students were expecting and what they got, but people routinely have inflated views of themselves.
Because 5000 students in a course is sufficiently different from a class with 30 students to be a difference in kind rather than degree.
You could have the first years be distributed and delegated, and provide a path for people who don’t pass the actual admissions test to continue education afterwards.
See also: Transfer schools.
What do you mean by work? It’s become clear to me over the years after a lot of discussions that people have very different ideas about what goals colleges are supposed to be maximizing or even on a more basic level who should decide what goals colleges (in general or a college specifically) should be serving. Worse yet, many people seem to think their own private answers to these questions are so obviously correct that they don’t need to be stated. In the US the whole thing is further complicated by the dichotomy between public and private schools.
Who you know is just as important as what you know. More time spent surviving the gruelling sink-or-swim university means less time making friends who can get you a job down the road. This sort of model would work better for some subjects than others. Comparing it to SEALs runs into the problem that once someone gets through SEAL training, they are presumably a SEAL, employed by the same government that trains them. If someone goes to Sink or Swim U and gets a history degree – is the university going to be employing them?
I think the idea is an institution trying to have a “brand” for high quality graduates.
I know some utter incompetents with the same degree I have; if I walk into a company that hired one of them in the past, my degree is going to have all the value of used toilet paper in their eyes.
So some people would like the opposite: a hard to get degree that’s a real sign of being good at the subject.
So let’s imagine: your company hired 3 Yale graduates who all share great stories about the wild parties and weird ceremonies involving a pig… but unfortunately none of them are actually very good at the things they’re supposed to be experts on.
So you hire someone from quality-brand-university and they massively overperform in their field of expertise, but lack any blackmail videos from porcine-related ceremonies with the other great and good.
If the job in question doesn't actually involve much skill, or could be done by low-skill individuals, or the skills involved have nothing to do with the course you ask for in the job ad, then the Yale grads may be the ones you're really looking for, and the things you ask for are just there to save you time looking through a big pile of CVs.
I'd argue that in many companies this is the common case: many positions demand grossly overqualified individuals.
But if a lot of money actually hinges on the person in question knowing their stuff inside and out, if it actually matters: you probably want to hire the guy from quality-brand-university.
Some law schools work this way… it's not a great setup, for precisely the reasons people articulated below: imagine throwing three years of your life into this super-competitive melee (where everyone is encouraged to dial it up to 100%, sacrifice everything to Moloch, etc., just so THEY don't drop out) just to drop out in the last year.
I have no doubt the resultant class would be excellent at whatever you were training them at, but the losers are just utterly wrecked: you’ve just sucked many years of a student’s life away for nothing, possibly less than nothing. A college dropout might be a worse signal than simply never having gone.
…and frankly, your institution wouldn’t fare well. People would prefer to go to the magical happyland where everyone gets A’s. As long as those “elite” institutions exist (a lot of ultraelite law schools are pass fail: https://lawschooli.com/law-schools-passfail/) the monster students who can get in anywhere will prefer to go there. You seem aware of the fact that the system currently has several big players that provide immense social rewards for merely getting in…unless you have some solution for stopping that, the best will go there, and your cutthroat academy will be left with the average and the worst.
It doesn’t? (Genuine question, I’m unfamiliar with the US system). If that’s true, why do people not just use acceptance letters for signalling purposes rather than going through the costs of actually attending Yale?
Because employers select for conscientiousness, reliability, and frankly neurotypicality, and they think that actually graduating signals that, and that merely waving an acceptance letter around signals that you don't understand the process.
Like dating, a lot of the admissions/job hunting game is a complex series of interactions where various unspoken criteria matter, but you have to act like they DON’T matter. Consider the signal that posting a SAT/GRE/LSAT score on a resume sends.
It doesn’t. Generally speaking the top US colleges are known to be very difficult to get into, but once you’re in, you’ll almost certainly make it through.
Harvard is probably the most prestigious US college, and there 96.6% of students graduate within six years.
https://www.collegefactual.com/colleges/harvard-university/academic-life/graduation-and-retention/#secGraduation
Even MIT, which is known for being difficult, has a 91.4% six-year graduation rate.
https://www.collegefactual.com/colleges/massachusetts-institute-of-technology/academic-life/graduation-and-retention/
and virtually all the highest-ranked law schools and (I believe) med schools are pass/fail, with very high pass rates
I might have misread you. High graduation rates are not inconsistent with high difficulty if your students are heavily selected. I wouldn’t expect Yale to be hugely more challenging to its students than Average College is to Average Student, but I’d be surprised if Average Student could get through Yale in one piece (which is how I interpreted your original comment).
My hypothesis is that the academics at Generic Ivy are somewhat more demanding than at Average U, but Generic Ivy is dramatically more selective in admissions than Average U. The typical Average U student would struggle at Generic Ivy, but would make it through. The typical Generic Ivy student would make it through Average U and it would be easy but not trivial.
But these are very much guesses on my part. I’ve never had an opportunity to compare Calc I as offered at Average U to Calc I at Generic Ivy.
Some of the students are heavily selected for academic ability but a big chunk aren’t. The son or daughter of the rich and famous can pick an easy major and get through even if their ability is far from stellar.
@johan_larson: I can’t answer your question directly, but I did my undergrad mathematics degree at a university in the top 5 of the 2019 THES world rankings, and TAed calculus courses at one that’s near the bottom of the top 100. The material covered in corresponding courses was very similar, but the top-5 school taught it in a more demanding way: problem sheets usually skipped the straightforward check-you’ve-understood-the-definitions questions and went straight to the “can you use this material in novel ways to solve unfamiliar problems?” questions. The best students at the top-100 university would have done fine in the top-5 course, but I think the less able students wouldn’t have made it through.
I don’t think this is at all hard to explain. The people who get into those schools are the ones who never got a B in high school and spent their summers teaching orphans to play classical didgeridoo. The vast majority of those people are going to pass any quasi-reasonable course of study, so unless the school is trying as a matter of deliberate policy to fail more, you’re going to see 90%+ graduation rates. Less-selective schools are the ones with students who are genuinely borderline in ability, so I’m not surprised that those schools have graduation rates higher than the freshmen retention at most schools.
I think you also have to account for the sizable minority of students who didn’t get in (solely) on academic skills or academic-adjacent skills. There are a few legacy admits, recruited athletes, and so forth who are generally still good students but certainly have gotten B’s before.
There are two ways I think the university addresses this. The first is that you can tone down your academic workload by carefully choosing courses. The second is that instead of failing students they just give them C’s even if their scores are significantly lower than the rest of the class. So the university does have to make some room in order to avoid failing anybody but the top students can still challenge themselves.
Employers who recruit heavily at ivy league schools know this too so they don’t just hire anyone who graduates.
Schools have been giving underqualified students degrees for decades by having a subset of courses/majors which are significantly easier than normal. This is most visible in the form of the majors the big athletic schools have for their star athletes (I think the University of Missouri has Breathing Studies), but I'm sure that the Ivies have something similar. And they're reasonably selective with their legacies, too. It looks like being a Harvard legacy only gives you a ~40% chance of admission, and given the known heritability of intelligence and the like, I'd expect that the top 40% of Harvard legacy students are capable enough to avoid flunking out, even if they tend to be concentrated in the easier majors and get mostly Cs.
So what’s the deal here? Is Harvard too easy? Or somehow the selection is very good, only accepting people who are very likely to graduate?
I get the argument that selection is demanding, but they seem to still be taking in legacy students in droves. edit: quick google says that legacy is just a plus, but that they got so many candidates that it makes little difference. Selection explanation seems enough.
Harvard would probably also suggest the possibility that their teaching is so excellent that few fail to gain the minimum necessary knowledge. Probably some of each is true, in a ratio that fluctuates slightly but is smoothed over by corruption.
@RandyM
have they ever said something even close to that? I am curious.
Seems I was wrong, they chalk it up to screening:
Although perhaps the meaning changes depending on whether you emphasize “admitted” or “Harvard”.
Now I am wary of the selection explanation if Harvard themselves are promoting it.
There isn’t one level of difficulty across the whole school. From my experience, athletes and legacy admits tend to take easier classes. For example, at Harvard there are a number of intro math classes, ranging from the quasi-remedial, to the legendary Math 55. Students who are struggling self-select into easier majors, or easier tracks within the same major.
@aashiq
Ah, that makes sense.
Besides what other people said, I’m sure there is the effect that dropping out of Harvard has higher costs. Once they get in, their marginal students have a much higher motivation to stick with it than those at a state school.
People in this conversation should know that you can actually check difficulty standards yourself since a lot of university courses post their exams and homeworks online. For example, here are some of Harvard’s multivariable calculus homeworks. At a glance, the problems do seem tricky compared to what you might have to do at other universities–see for example problem 5 on hw 1 or problem 2 on hw 18.
Just noting that that’s the easiest version of the course. Math majors (and others interested) sort into various harder tracks.
This is simple enough to explain, given how hard it is to get in. If Yale is 10% more rigorous than average, but the people who get in are 20% better, then the people who go there aren’t going to come away reporting more difficulty than an average student has at an average college.
My school sort of did what you’re suggesting, although in a less extreme format. They keep winning “best value college” awards and the like, and as a result, freshman enrollment is skyrocketing. Instead of trying to filter people out before they show up, the administration has decided to do it through Chem I and the various Calc classes. This works, but it does mean that, for instance, student housing is always packed for the first few months of fall semester, until enough people drop out to relieve the problem. My roommates junior and senior year both dropped out at Christmas (due to problems with math, not me, I think) and I ended up with a free single room both times. But that would never have happened if they’d left in September. The local hotels might not have even been cleared by then.
I think you’re going to see a lot of weird problems from this kind of extreme wash-out regime. At any given time, most of the people around will be doomed freshmen, and that population is going to fluctuate a lot. I’d be surprised if everyone didn’t get a room to themselves in the spring because of attrition, which is going to do terrible things to your housing budgets. Likewise, campus culture isn’t going to be very stable.
For that matter, don’t a lot of majors (not colleges) basically do this with a weed-out course or two? I know that in my Intro to Aerospace class, there were basically two populations: those who were going to pass fairly easily on their way to an AE degree, and those who were doing it for the third time, hoping to avoid their destiny as Engineering Management majors.
If the content per semester was the same but delivered in two weeks fewer that indicates higher difficulty.
He said they covered less material.
Calculus I really will be the same everywhere. I remember hearing someone laugh that, on a campus tour of MIT, they were covering the same things on the Calc class he dropped in on as he was studying in his (college-level) high school, and this was proof that MIT was a joke.
But why should it be any other way? Even if the students were on average 10% better, you still have a lot of students that are only 5% better and then students who only shine in a place that isn’t Calculus or Organic Chem.
I can probably be convinced that the only thing the elite schools do better is skim out the people who can't hack it pre-admission. But an intro class isn't the way to do it.
At my university, the honors calc classes were made more difficult not by covering material faster but by covering proofs rather than worked problems during lecture.
@Nick: same here, and I forgot I was in one of those classes.
The point of the basic Calculus I class is to teach the calculus you will need for all the other classes. Even if your students were 10% smarter, there is no need to make it 10% harder or 10% faster. They’ll do the homework in less non-classroom-time, and have more time for other things.
That anecdote would be more convincing if you’d taught orgo at two schools. Your perspective as a student is very different.
Similarly, Edward's friend learns very little from dropping in on calculus. Virtually everyone who goes to MIT has had HS calculus with nominally the same syllabus, but 2/3 of them are asked to retake it (though 1/2 of those retake it at 2x normal speed, which was already 2x the speed of almost all elite schools, and has been since before entering students had calculus). The purpose of calculus is to provide tools for later classes, but later classes at MIT will demand a lot more from calculus than at other schools.
I can’t speak to the US experience, but when I was at Oxford I was expected to write 8 philosophy essays and 4 French literature essays, and translate 8 passages from English to French and 4 from French to English, in each 8 week term. A close school friend at a highly-ranked non-Oxbridge university did three essays in each of his much longer terms. I got around 4-5 hours of tutorial time (one-on-one or in a very small group) every week. He got almost none. And the vast majority of the teaching was outstanding. I really believe that Oxbridge is both much harder and much better at transmitting subject knowledge than other British universities, at least at undergraduate level. And that’s to say nothing of the extracurricular and networking opportunities.
In the US it is largely upon the students to avail themselves of professors’ office hours and to seek out other tutoring resources as necessary (though obviously there will be some differentiation between schools). Would you mind contextualizing a bit how it works across the pond? Like does Oxbridge make a point of pushing everyone to go to 1:1 lessons or do they just attract the students most motivated to do so? Do you mean office hours aren’t really a thing at the non-Oxbridge schools?
@Gobbobobble: One of the main methods of teaching at Oxbridge is the small-group class (referred to as a tutorial at Oxford or a supervision at Cambridge). Typically this is 2-3 undergraduates being taught by one supervisor- sometimes an academic, more often a graduate student or postdoc.
These are compulsory, all students go to them- each course has a certain number of supervisions attached, and they are arranged at the start of the course. Attendance at lectures, by contrast, is optional.
My tutorials were overwhelmingly taught by academics, not grad students or post-docs. I didn’t get the impression that was atypical, but perhaps that was a Merton thing rather than an Oxford thing.
Might be a humanities thing.
Oh wow, that is quite different. Thanks!
Side question, but what is supposed to make o-chem so difficult? I’ve now graduated and am considering going back and taking it for personal entertainment. I already have an engineering degree and have done the common engineering chemistry courses.
I struggled with my organic chemistry class until something clicked in my head about just what they were supposed to be teaching and just what I was supposed to be learning. They didn’t spell it out but, like integration, there are a series of tricks and you need to learn to pattern recognize them. I went from a C-student to an A-student in the course of a week.
Also, I think a large part of it was stress and anxiety.
The difficulty lies in getting people to pay for that kind of a setup. Right now a lot of students see the university as knowledge transfer, where they’ve done what they need to by getting accepted, coming to class, doing homework, etc. The college has vetted them in the admissions process as being able to do college-level work, so now it is the job of the college to transfer knowledge and earn the tuition they asked for.
I think most people pay lip service to the idea that you should be thrown out for not meeting standards, but practically they would view their own failure as the fault of the college. The college charged an exorbitant amount of money to take them, and they pre-vetted them with ACTs, SATs, high-school grades, etc. So now it is on the college to live up to their end.
To give an analogy, the current model is like an exercise class that you buy a pass for and go take for an hour for a couple months – providing whatever level of effort you feel comfortable with. Your proposed model is like hiring a personal trainer who can fire you for not doing the prep work they give you one too many times. Using the exercise class pricing model for that would result in lots of hurt feelings from fired clients, regardless of how they feel about personal responsibility when asked.
I look at the amount of money there is in running a high-prestige university and the lack of new high-prestige universities, and I conclude that it’s not feasible to start one.
After all, you’re asking promising students to bet that this new university will turn out to be high-prestige, or even continue to exist.
Or are there new high-prestige universities I haven’t heard of? What’s the situation in China and India?
Existing high prestige universities could expand by adding new campuses (Harvard in California) but that doesn’t seem to be happening either.
Schools can move their rankings, but it’s generally slow and expensive; the usual strategy is to spend a lot of money bringing in famous faculty (some of whom can be lured to lower status positions by sufficiently above market salaries). NYU is rather famous for having done this, and is much better regarded these days than it was half a century ago. But it’s still probably not what one would call elite, and I don’t know of a similar case where a school moved from non-elite to elite via any deliberate process in a reasonable time. A limitation of the NYU model is that they specifically targeted faculty in fields where the typical pay rates were low, in order to make the strategy affordable; this also limits how effective the strategy can be, but removing that restriction would make the cost enormous.
NYU is famous for hiring faculty, but much less famous, it seems to me, for a far more dramatic move: relocating from the Bronx to Greenwich Village.
That wasn’t an intentional effort to raise the school’s prestige. NYU was near bankruptcy and sold the Bronx campus as a last-ditch effort to stay afloat. It worked out well for them, but it’s hard to predict which neighborhoods will gentrify and become desirable and which will stay depressed. If circumstances had been different, NYU could have sold the Greenwich Village campus (then home to the graduate and professional schools) and consolidated in the Bronx instead, and then who knows how their prestige push would’ve turned out.
Sure, but in retrospect we know which way it affected the prestige and you have to factor that in to the trajectory, even if the effect size is difficult to know.
Anyhow, my main point had nothing to do with prestige, but was just that this seems to be unknown and I find it hard to understand how it isn’t widely known.
————
Maybe I don’t understand how it segregated undergrads from grad students. Did professors visit both campuses or were professors assigned to only undergrad and graduate teaching?
(Isn’t there a related fight going on at Harvard, where it has transferred professional schools to Boston, and now it wants to move biology labs, but it’s difficult because researchers talk to undergrads?)
Partly it’s that the Manhattan campus is the original location and NYU has branded itself on being in Washington Square for almost 200 years, and partly it’s that NYU undergrad was an obscure commuter college in the Bronx and only achieved its current prominence after it returned to Manhattan. The medical and law schools were historically well-regarded, but they’ve always been in Manhattan.
I’m not sure how it worked either. It might have been that Ph.D. students were in the Bronx, there just weren’t many of them compared to, say, masters students at the School of Education. Also there was a much smaller undergrad college in Washington Square during the Bronx years… maybe someone who was around at the time can fill us in.
I also think there may be some level of “conservation of prestige” going on – the rapid decline of CCNY happened shortly before the rapid rise of NYU, maybe it was just filling a vacuum.
If by “PhD students were in the Bronx” you mean that Washington Square was just professional schools, no, that wasn’t it. For a concrete example, the Courant Institute of Mathematics was in Washington Square before the war. But it’s possible that the PhD students had to commute between campuses so that the professors didn’t have to.
If NYU reset its undergrad school and was able to wipe out its old reputation and lever the grad and professional reputation into the undergrad reputation of what was effectively a new school, that’s yet a third strategy.
I’m skeptical of the connection to CCNY. NYU’s rise was mainly about getting people to come from far away.
It’s been attempted a couple of times internationally, and apparently CMU did it in California.
Part of the value in prestigious schools is socializing with other extremely talented individuals. If statistically, there’s not a critical mass of these uber-talented students until the fourth year, that’s not enough time to bounce ideas off each other.
Surely there's a flaw in your logic here? 96% of students graduating from a university with high entrance requirements does not indicate it's an easy university to pass. It might indicate the students are well-off enough to guarantee fees, or that the institution can provide good scholarship support, but most likely it indicates Yale et al. are working as planned.
Note that Oxford University has a near 99% graduation rate and Cambridge 97%. Neither is easy by any means, as both require intensive work at a high level. But the students recruited are those able to cope with this, and therefore thrive (I say this as someone who failed an Oxford entrance interview and is in hindsight grateful to the tutors for that decision). Less able students would not cope so well, and failure and stress would cause a higher drop out. This can be seen by observing that the drop-out rates in the UK universities that take lower-ability students is often above 30% on objectively easier programmes because lower grades give much less clear signalling about whether students have the academic ability to cope with a degree programme.
If you swapped a cohort of Yale students with their counterparts from, say, the University of North Carolina at Chapel Hill or California State University Monterey Bay (to select two not-awful institutions of which I am peripherally aware), keeping the same teaching, don't you think the graduation rate would plummet? Yale provides programmes that are easy enough to pass (not necessarily pass well) if you are the sort of student who has the academic ability to get into Yale. It's a system not designed to eliminate able students but to educate them, and it therefore is going to aim to be passable by most of the students they can recruit. This is the same logic as all universities (other than the odd exceptions noted above) use: educate your students, only getting rid of those who can't or won't learn, rather than act as some form of academic death match to create a (false – academic aptitude is not necessarily a transferable skill) intellectual elite.
You seem to assume a high graduation rate is a weakness. I'd argue it's a university doing its job well, and that rationally it is better to have 96% of a high-achieving cohort educated at a high level than x% of a mixed-ability cohort passing an education set at an arbitrary level designed to cause people to fail.
Nice question though. Makes you think about what universities actually do (or should do).
It is really worth reiterating, as some people have pointed out earlier along in the thread, that people have wildly different ideas about what education should “do”…obviously it can “do” more than one thing, but many goals are completely at odds with one another.
Freddie Deboer had a good piece on this a while back, iirc, but I can’t seem to find it.
I generally won't get into lengthy debates on education policy with people unless we can first have a conversation about what education is "for".
Indeed. I won’t disagree with that although I’ll put a marker down that someone needs to make a very strong case for the idea that we should be aiming to fail people in an education system, as I’ve not seen this.
Note also that universities clearly believe education is for producing people with degrees…
Watchman:
I think this is exactly the question.
Model #1: With the same level of effort and ability, Yale is easier to graduate from than (say) UNC. Their highly-competitive admissions guarantee that they get top students, but then they don’t require too much from them to graduate.
Model #2: With the same level of effort and ability, Yale is harder to graduate from than (say) UNC. Their highly competitive admissions ensure that the students who get in can get through the harder classes, however.
Model #3: This is just demonstrating the advantages of tracking. If you make sure that everyone in your class is ready to do serious college-level work, then you can be a lot more efficient in your classes–you don’t have to go back and review basic stuff, or spend a couple semesters making sure your students can write a coherent sentence. So the classes at Yale can move at the pace a college-level class should move at, whereas the classes at UNC or University of Maryland have to do a lot more review and hand-holding to get people up to speed in those first couple years.
How would we decide which of these models (if any) is a good model for reality?
ETA: One way to distinguish would be to look at how different the outcomes are for people admitted with lower qualifications–legacies, affirmative action admissions, athletic admissions, people whose parents gave huge donations, etc.
I was going to suggest that examining non-standard admissions might help, but realised that I know in the UK that some of these (disadvantaged backgrounds especially) if selected according to sensible criteria get marks in line with the cohort as a whole. We don’t have legacies and donor kids, or athletic scholarships so I can’t comment on how these work.
I’d say rather that we have very few. As for what happens to them, why do you think they call it a Gentleman’s Third?
The claim I've read several places (but this isn't my area, so I don't know how true it is) is that students admitted via affirmative action tend to cluster in less demanding majors. Thomas Sowell pointed out at some point that there were a lot of black kids who would have gotten an EE degree from State U, but instead ended up with a Sociology degree from Stanford thanks to affirmative action. (That is, they came in planning to get an EE degree, found themselves massively outclassed in those classes, and ended up switching to a much easier major.)
I don’t know much about UNC, but it’s the state flag ship, so I would expect at least significant minority of the students to be Yale-caliber.
On the other hand, my high school classmates who went to CSUMB were B-average or C-average students, and even aside from academic strength Yale would be less helpful to their goals (local industry oriented).
(My vague impression was that CSUMB is one of the less “prestigious/rigorous” Cal States — like if you’re a good student but want to stick with the Cal State system, you want to go to the Cal Polys, Long Beach, San Jose State, a few others.)
The undergrad business program at the University of Michigan used to work this way. You came to UM as an undeclared major and applied to the B-school after your first year was done. If you didn't do well enough in the pre-reqs (e.g. Econ 101), you wouldn't get in. I had at least one friend who didn't get in and transferred elsewhere because he specifically wanted a business degree. I think one other friend didn't make it in and decided to stay at UM and major in Econ. A few years back the program changed and now you can be admitted directly into the B-school. My (unconfirmed) assumption is that prospective students didn't like the risk of coming to UM and then not making it into the program and preferred places where they knew they had a high chance of graduating with a business degree (however questionable that decision might be…).
Some of the other more specialized programs also had this sort of first-year screening. I know engineering did and so did architecture. I don’t remember friends being forced out of engineering — my guess is that the criteria for getting into the program (good grades in intro Calc, Chem, Physics, first-year engineering courses) lined up pretty well with self-selection. In engineering there was a second cut in terms of declaring majors. I do remember people wanting to major in CS but not having a 3.2 GPA or whatever the cut-off was.
Presumably this is common across other high-ranking non-Ivies.
Doing it at the college/major level is a bit softer than at the university level. It gives students who don’t make the cut a chance to stay in their current community in a less demanding major or transfer to stick with their preferred major.
Maybe a school could do that, but I’d imagine the question is why. Yale/Harvard still attract the best students, those students still go on to great things, and the schools make so much money they basically run hedge funds, so from their point of view there’s no problem to solve, right? The kids who weren’t let in were the marginal ones — and in many cases marginal athletes — so it’s no tragedy they simply had to go to Georgetown or wherever they ended up.
Meanwhile, your hypothetical school would instantly run into culture war issues — consider the NY Times’ recently renewed focus on the demographics at Stuyvesant. The kids who can just go to Harvard anyways probably wouldn’t want to bother with a controversial school.
Some of the better state schools do something similar to this.
They have fairly loose admissions criteria and low tuition, at least for state residents, leading to huge class sizes. Most of those students slide through in easy programs, a smaller number can’t hack it and drop out, and an even smaller number excel in tougher programs. The latter group has a good shot at top graduate and professional schools as their programs have a good reputation within their fields, even if the school’s name normally doesn’t count for much.
It’s not exactly what you’re talking about, but it’s instructive.
The great downside is that parents (particularly your high performing alumni likely to donate) will not like this system because it creates large deadweight loss for them, and also would prevent their kids from graduating.
Yeah, a fairly obvious one, at least for in-person colleges. Physical space is limited. Unless the college grows its personnel and infrastructure to a stupid degree, it won't have space for this. If it does so, then it probably won't have the same quality of instruction. And it will have trouble paying off all this infrastructure after enrollment drops. So instead what they do is try to only admit students that they think have a good chance of graduating. That seems fair.
Secondly, with a high drop-out rate, prospective students will start to look elsewhere. Eventually it might reach an equilibrium back to the status quo–but maybe they want some of the not too bright but rich students to attend and donate later?
I skimmed the responses and didn’t see this; sorry if a dupe.
Admission by lottery.
Here is a somewhat older story with a link to an even older essay https://www.nytimes.com/roomfordebate/2015/03/31/how-to-improve-the-college-admissions-process/do-college-admissions-by-lottery
Here it is updated for the modern mess:
https://www.npr.org/2019/03/27/705477877/what-if-elite-colleges-switched-to-a-lottery-for-admissions
Better would be to set a threshold for “these are the people we think can succeed,” then put all the people who pass that bar into an urn, and draw N names.
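The threshold-plus-urn idea above is simple enough to sketch in a few lines of code (the names, scores, and cutoff below are all made up for illustration):

```python
import random

def lottery_admit(applicants, threshold, n_seats, seed=None):
    """Admit by lottery: every applicant who clears the bar goes into
    the urn, then n_seats names are drawn uniformly at random."""
    rng = random.Random(seed)
    urn = [name for name, score in applicants if score >= threshold]
    if len(urn) <= n_seats:
        return urn  # everyone who qualified fits; no draw needed
    return rng.sample(urn, n_seats)

# Hypothetical applicants with some "can succeed here" score.
applicants = [("A", 1500), ("B", 1210), ("C", 1400), ("D", 990), ("E", 1350)]
admitted = lottery_admit(applicants, threshold=1200, n_seats=2, seed=0)
```

The point of the design is that effort above the threshold buys you nothing, which is exactly what kills the credential arms race.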
It benefits the students, who can stop the psychotic zero-sum credential race where they need to put in more and more and more effort. It stops people with super-degrees from lording it over other people, which I think is good (even though I have one of those super-degrees).
It doesn’t benefit the schools, directly, and probably harms them a little. But since schools pay a lot of lip service to putting the needs of society over their own, perhaps one of them can be shamed or coerced into making this their policy.
Fails for at least two reasons. One is CW, the other is the definition of “we think can succeed”. Unless you go with relentlessly objective measures like standardized test scores, you’ve just moved all the thumbs to those measures.
> the other is the definition of “we think can succeed”.
I don’t think this is as hard as you believe.
First, I wasn’t trying to make a 100% objective measure. Harvard can declare Malia Obama capable of succeeding even if they think she can’t[1]. But even if she gets into the lottery, that doesn’t mean she’ll get picked. Similarly, it gets a lot stupider to bribe people for only a chance at admission.
Second, if you can only admit 1000 people to Best School, and 3000 capable students want in, they have to do a zero-sum credential race. It would spare everyone a lot of time and heartache to just do it randomly.
The schools know the profile of who can pass and who can’t. MIT’s admission policies, at least 20 years ago, were explicitly to let in NAMs that would succeed, not just the best fraction of them.
[1] I just picked a famous person. As the daughter of two HLS graduates, she probably can pass.
There’s already a school that does this. It’s either Cal Tech or Cal Poly, I forget which one b/c I always mix these two up. As I understand it, they make admissions decisions based almost entirely on test scores.
That sounds like Cal Poly, which is part of the Cal State University system. I’m pretty sure the CSU application doesn’t even require essays; they just look at your test scores and HS grades (well, for athletics or performing arts they presumably look at relevant skills).
I don’t know for sure but it wouldn’t surprise me if many non-flagship state schools only look at test scores and grades.
University rowing functions something like this. I rowed varsity crew all 4 years at my university, and while there are hundreds of new applicants each year, the rigor of waking up at 4:30am 6 days a week for hard practices and punishing winter workouts when it’s too cold to row means that the attrition rate in the first year is brutal. We ended up with 12 people making it through to the end of the 4-year program, but we were probably some of the 12 most serious and committed athletes anywhere.
Similar to Switzerland (as mentioned above), this is how a lot of higher education works in Germany. An important difference is that admission happens by subject at each university, not just by university. Also, you generally only study your subject and some ancillary stuff, e.g. if you study some science, there are no language/humanities/social sciences etc. requirements.
Popular and resource-intensive subjects have selective admission (medicine most of all; others are psychology, and business at the best universities), but most STEM subjects have extremely low requirements or none at all, aside from the school diploma you need to go to university.
So basically almost everyone who has graduated high school with the Abitur (equivalent to the Swiss Matura mentioned above) can start studying e.g. computer science, physics, maths, or engineering at the best universities in the country. (And I’d say German STEM education is generally considered to be quite good.)
Those courses often have first-year dropout rates of 30 to over 50%. So they’re basically working off that model – let in basically everyone who is interested, then make it obvious to the student very quickly whether they’ll be able to succeed. It’s not uncommon for the occasional STEM exam to have 2/3rds of all students fail. A side effect is that there’s little to no grade inflation in those disciplines.
Part of the reasoning behind this is that someone might be great at mathy-sciency stuff, but received otherwise mediocre grades in school (in languages, humanities, arts), and therefore has an “unimpressive” diploma. This person might very well excel in their chosen subject.
But: The cost of going to university is negligible compared to the US (~1500/year, about what poyorvlak mentioned for the ETH Zurich), and students can get financial aid from the government depending on their parents’ income (i.e., children of wealthy parents can’t get any money, but the parents are required by law to finance their children’s higher education up to a certain extent). So if you try and fail, there’s not much lost except time.
Aside from the obvious inefficiencies others have pointed out, this simply means that the extreme lengths students and their parents go to in their quest for elite-university admissions will become equally extreme lengths devoted to making sure the precious, precious heirs of the elite don’t flunk out. So: six-figure budgets devoted to bribing TAs and adjunct professors (and maybe not-so-adjunct), threats and intimidation against professors giving bad grades to the wrong students, harassment of rival students who might set the curve too high, professional-grade cheating on exams and classwork that can’t be as standardized and rigorously proctored as e.g. SATs, and other things I haven’t thought of yet but which I can’t imagine will be helpful for the students who are actually trying to learn something. This has the potential to break higher education as anything but who’s-the-better-cheater signalling, so no thanks.
The experiment has been done, and ran for well over a thousand years—Imperial China’s examination system.
Almost anybody could take the first level of exams, and anyone could study for them–no university admission involved. There were three stages of the exams, and the second stage had a pass rate of about 1%. The third stage produced about 200 to 300 degrees from as many as 8000 candidates.
One of history’s more successful societies.
Other people say one of history’s worst catastrophes.
I don’t think the examination system is what led to some of the worst bits of Chinese history. And, all in all, they certainly aren’t doing too bad compared to the rest of the world.
ante hoc ergo propter hoc
The examination system was instituted under the Tang, and the Tang and Song constituted high points of Chinese history. The system continued under the Yuan, the Ming, and the Qing, and the two periods under foreign domination were bad times for China, while the Ming also had more bad times than good. But it is not at all clear how the examination system was responsible for China coming under foreign domination, or for the mediocrity of the Ming (the latter seems to have been a result of bad luck in emperors). Most of the historical criticism of the examination system focused on the state of China under the Qing, a period when the foreign emperors seem to have been the actual sources of most of China’s problems.
How did the Chinese deal with the problems of faking test results or candidates trying to bribe the testing officials that John mentions elsewhere in this thread?
Fairly early on, they switched to blind grading. I’ve seen the claim that, prior to that, there was a serious nepotism problem.
I tried to interest my law school in a variant of this, back when law school applications had dropped sharply and law schools were in serious financial trouble. My proposal was for the school to lower its admissions standards enough to get, say, 25% more accepted students. At the end of the first year, offer all students in the bottom quarter of the class the option of withdrawing and having a full tuition refund.
If you tune the numbers right, the school ends up with the same net income, since the refunds balance the extra tuition from the additional students. One basis for attracting students was the bar passage rate, and it was generally believed that first year grades were a better predictor of eventual success than information on entering, so bar passage rate goes up. The problem of students spending a fortune and three years only to fail the bar and be unable to ever practice is reduced. The only cost is having to accomodate the extra students in the first year, but class size had shrunk substantially so the school had excess resources in teachers and classrooms.
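The revenue arithmetic balances whenever the number of refunded withdrawals matches the number of extra admits. A toy check, with entirely made-up tuition and class-size figures:

```python
# Illustrative numbers only: a baseline class of 100 at tuition T,
# expanded by 25%, with the extra students' first-year tuition
# covering the refunds to those who withdraw.
T = 50_000                           # annual tuition (made-up figure)
base_class = 100
expanded = int(base_class * 1.25)    # 125 admitted under the new policy
withdrawn = 25                       # students who take the refund offer

# First-year net revenue: everyone pays, withdrawing students are refunded.
first_year = expanded * T - withdrawn * T
assert first_year == base_class * T  # same net income as before

# Later years proceed with the same class size as the old policy.
continuing = expanded - withdrawn
assert continuing == base_class
```

The tuning assumption is that withdrawals roughly equal extra admits; if fewer students take the offer, the school actually comes out ahead on tuition but loses the bar-passage benefit.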
I did not persuade them. Looking back at it, one reason may have been the general conservatism of institutions, but another might be the fact that one of the things that goes into the U.S. News and World Report rating, which is an important factor in applicant decisions, is the average LSAT of the entering class, which under my system would go down.
I think the real issue is, we don’t know what high school is for. In Europe (broadly speaking) they have tracks – a university-bound student will attend a school that grants a standard diploma (Abitur, Baccalaureat, Matura) which is sufficient for university admission, while other students will attend vocational/technical schools and enter the workforce directly. In Asia (broadly speaking) the standardized test is the sole determinant of university admission, and grueling test prep regimes are the norm.
But in America, we don’t like tracks. We like to think everyone has equal potential and nobody should attend a “lesser” school, so everyone attends a (nominally) college-preparatory high school and earns the same kind of diploma. We also don’t like centralized authority, so there’s no national standard for a diploma; it varies state to state, district to district, school to school, so all you can really say it means is that you went to class for 12 years and didn’t get expelled. (There have been efforts like NCLB and Common Core to change this, but they are wildly unpopular at every level. And God help us if we ever try to create a national curriculum – just trying to work out how to teach the Civil War would spark another civil war.)
So in order to have some basis for comparison, we have standardized tests – SAT, ACT, AP, IB. But they’re designed to be used in combination with the high school transcript, rather than as credentials in themselves. For whatever reason (mine is probably different from yours), students of certain ethnic backgrounds tend to outperform others on these tests, and wealthier students can afford test prep classes that poorer students can’t, so in the name of equal opportunity we can’t rely solely on them. Thus, admissions are “holistic” and the current fail parade results.
And here’s the kicker – I agree with almost all the decisions that brought us here. It doesn’t make sense to move to a European- or Asian-style system. But at the same time, there’s no way to build any kind of remotely sensible college admissions on top of the incoherence that is American K-12 education.
…and then there’s Canada, where applying to universities is remarkably simple and there’s no standardized test to fret over. I don’t know how they do it – are high school grades really that consistent from school to school? How can we do that here, and do we want to?
I don’t know that much about Canadian high schools, however, my anecdotal understanding is that in Canada, at the university level, there’s far more “compression” than in the US – we have 2 or 3 schools that get into top-30 or whatever rankings worldwide, and there’s much less of a gap between those schools and the worst universities in Canada than there is between the best and worst universities in the US. I remember getting into university in Canada as remarkably stressless and with very little credentialism. I imagine that it’s because going to UWO or Queen’s isn’t really that much worse than going to U of T. Probably also a factor is that the best universities in Canada are all public – private universities are just not that much of a thing here – so the way they make their money is different.
The way I like to think about it is that the Canadian university system is like the US state-level university system: reasonably affordable, open to all but the worst students, and ranging in quality from somewhat questionable to really quite good. Apparently the admissions experience is quite similar too. If you only apply in-state in the US and aren’t seeking financial aid, it’s not that hard.
But the US then adds a private system that is much more variable. The best private US schools are the most prestigious in the world, but the worst are apparently pretty dismal. They are also much more expensive. And the admissions process to the top schools is a zoo, since they are in such demand.
All the “somewhat questionable” schools I can think of in Canada are private. The public universities here are at minimum “meh.”
I wonder if the Dartmouth College decision 200 years ago set the US down this path. It didn’t have much legal impact, since states just included “we reserve the right to modify” clauses in later corporate charters, but the moral precedent against government takeovers of private universities has stuck with us. Oxford and Cambridge became public, but Harvard and Yale (and Dartmouth) never did. (OK, there’s Rutgers and a few other exceptions, but none of them are in the upper echelons.)
Education in Canada is a provincial responsibility, so each province does things in its own way. At least when I was going through, there were provincial standards about what was to be taught in each grade and each subject. But there were no common province-wide tests, at least in my province (Ontario.)
Various comments from university professors made it clear to me that nominally identical courses were actually taught at rather different levels from school to school. The universities dealt with this in two ways. First, they somehow adjusted grades depending on the high school you had gone to. Second, they did a certain amount of review in first year. I remember first term of first year being pretty much review of what I had learned in the senior-year college-preparatory courses. I guess that means my high school did a good job. But some students apparently experience much more of a shock going from high school to first year.
In Quebec they completely formalized this, first by using the z-score, then (when this turned out to encourage people to attend awful cegeps in order to maximize the score) a combination of the z-score and how well the school’s students did on the standardized exams prior to entering cegep.
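The z-score adjustment just standardizes a grade against the grade distribution of the student’s own class, so the same raw mark counts for more in a weaker cohort. A minimal sketch (the grades below are invented):

```python
from statistics import mean, stdev

def z_score(grade, class_grades):
    """How many standard deviations a grade sits above or below
    its own class average."""
    return (grade - mean(class_grades)) / stdev(class_grades)

# An 85 in a weak class outranks an 85 in a strong one.
weak_class = [60, 65, 70, 75, 85]
strong_class = [80, 82, 84, 86, 85]
z_weak = z_score(85, weak_class)
z_strong = z_score(85, strong_class)
assert z_weak > z_strong
```

This is also exactly why the pure z-score was gameable: transferring to a weak cegep raises your z without raising your ability, hence the later correction using the school’s standardized-exam record.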
It’s used almost universally, with most university programs publishing a cutoff for admission, and only the most competitive programs (usually only medicine) having any additional filtering.
I would refute this based on personal experience.
I have an engineering degree from an Ivy League university. Freshman year, the Ivy League classes were relatively easy, with one or two exceptions, and getting A’s was not a problem. Starting in my sophomore year, there were several classes where I slid by with B’s. Starting in junior year, the classes became difficult enough that I had to devote most of my time to them. One computer science class in particular (which was a graduate-level course that we were required to take as juniors) was difficult enough that I would have failed, had my professor not let me take an Incomplete and redo my worst assignment over the following summer, after which she changed my grade to a C-.
The Ivy League classes also covered far more than their counterparts at the state university. For example, I took a freshman level econ course at the Ivy League (which was challenging enough to be interesting), then a sophomore/junior level econ course over the summer at the state school (which was complete review and I got perfect scores on tests without even trying). The junior/senior level math course that I took at said state school was also about 80-90% review of material covered in my freshman year at the Ivy League school.
I know that one anecdote does not prove anything, but at least in my experience, the STEM classes at Ivy League schools are pretty rigorous. The big difference is that, as long as you’re trying your best, the Ivy Leagues will jump through hoops to ensure that you graduate without failing any courses. They want you to remember them fondly and donate lots of money as an alumnus, after all.
It’s probable that you’re smarter than me, and comparing anecdote to anecdote doesn’t mean squat, but… your engineering curriculum sounds much easier than mine if you made it through 2 whole years before you had to devote most of your time to your classes!
I assume everyone understands that the STEM classes are intense at these great schools, and that the reason graduation numbers are so high is because under-qualified matriculants choose the less rigorous majors. I’m under the impression that Ivy League schools are notorious for trying to whittle down pre-med numbers, especially.
I once met a guy from Princeton who went to a good med school, but he’d gotten there after majoring in English for undergrad then taking a much less demanding “post-bach” program to fulfill med school pre-reqs. He said Princeton pre-med was too competitive and his grades would’ve been too poor to get into a good med school, if he’d even survived the track at all. But because he was smart enough to get into Princeton in the first place, his MCAT scores were more than good enough when he was in competition with non-Princeton students. (That final sentence was only implied; he was the self-aware, “I went to school in New Jersey” type.)
There are various complications:
I’m told that the University of Iowa is easy to get in to, but the professors of the freshman classes are required to fail half of the students. So the freshman year cuts the class size by a factor of 4. OTOH it’s a state school, so you’re not risking a lot of money by attending. And also, parents of kids who aren’t admitted will complain to their legislators, but parents of kids who flunk out will not.
There is a huge prestige competition in affluent suburbs over what college your kid attends. This prestige can be traded off against the amount of tuition that affluent parents will pay. So if you are a school that is well-known enough to be in this competition, there are financial incentives to make your school appear as hard to get into as possible.
It seems highly unlikely to me that this is actual policy.
Some searching concludes that it is in fact false: more than 80% of freshmen return for their second year at the University of Iowa.
Typo: “This is the twice-weekly hidden open thread” -> “This is the fortnightly visible open thread”
Here’s an idea that’s technologically more complex:
Democratize the censoring process. For specific users that would have bans-lite, add a button next to ‘report’ that logged-in users see that says ‘remove’ and one that says ‘keep’. Remove the comment if the ‘remove’ votes exceed a specified number and the ratio of remove:keep votes exceeds a specified ratio (in the most restrictive case, “if there are more than zero votes and the ratio of remove:keep exceeds 0” is the same as ‘anyone can cause this comment to be removed’).
Absolute number is important because acting purely on ratio would give infinite weight if the first person happens to want to delete; ratio is important because that’s closer to what we actually want, easy to measure, and hard to cheat.
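The proposed two-part rule (absolute floor plus ratio test) is easy to state in code; the specific thresholds here are arbitrary placeholders:

```python
def should_remove(remove_votes, keep_votes, min_votes=5, min_ratio=3.0):
    """Remove a comment only if (a) enough people voted to remove at all,
    and (b) remove votes outnumber keep votes by a sufficient ratio.
    The absolute floor stops a single early voter from deciding alone;
    the ratio captures what the community actually prefers."""
    if remove_votes < min_votes:
        return False
    if keep_votes == 0:
        return True  # any finite ratio threshold is exceeded
    return remove_votes / keep_votes >= min_ratio

assert not should_remove(1, 0)    # one angry voter isn't enough
assert should_remove(6, 1)        # clears both the floor and the ratio
assert not should_remove(10, 9)   # plenty of votes, but too contested
```

Both parameters would need tuning against real traffic, and nothing here addresses the brigading problem raised downthread.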
Don’t reveal the score, unless you want discussion of the score to be important.
I, for one, am extremely opposed to this. This is not a democracy, and shouldn’t be one. We’re here at Scott’s pleasure, and if he decides to burn the place to the ground on a whim, that is his prerogative.
EDIT: Just to be clear, I am not advocating him burning the place to the ground, nor do I think the proposal amounts to that. I do think there should still be the possibility of discussion over bans, but I don’t think there should be any assumption that said discussion would have an impact.
Yes, but any self respecting evil dictator also has some henchmen. And pretends to indulge in democracy when it suits him.
That greatly incentivizes brigading.
Those ‘keep’ and ‘remove’ buttons would immediately become ‘agree’ and ‘disagree’ buttons.
Democratizing the process will end in one side winning, maybe by underhanded tactics. In the end, Scott may or may not end up happy with the result, and if he isn’t, he will have to go back to taking matters on his own hands. Let’s skip to the end and let him take matters on his own hands.
If people want reddit-like systems, there’s a CW forum that split off the old CW thread, at reddit.
Every place I’ve ever seen that has a kind of “dislike” button that leads to posts disappearing automatically, has had a problem with self-appointed censors policing the attitudes expressed. Usually it’s just fan children who object to anything that might possibly be taken as disagreeing with any aspect of the site owner’s behaviour. (An example here would be suggesting improvements to the order in which posts appear.) Certainly disagreeing with Scott on something more substantive would be right out. Less commonly, it’s CW vigilanteism.
The set of posters we have here might not do that. And I haven’t seen this except in combination with some kind of “like” button, visible scores, and people motivated to increase their positive scores. But I’d still be wary.
Outsiders may use it to attack certain people/ideas.
Upvotes. You’ve invented upvotes.
Iterated on.
But yes, much of the same code could be used as for upvotes.
Is it possible to become resistant to pain through training? The sci-fi concept of wireheading is a hedonist’s dream, but I find it repulsive and scary; I don’t want to become an addict. What I would like instead is a device that could cause pain through an interface with the brain, so I could A: slowly dial up the level of pain, and B: get used to a given level of pain without any physical damage. The best way to cause pain without damage currently is electricity, which is why it’s sometimes used to torture, but it’s not perfect, and I’d assume repeated electrical shocks, even if weak, would still cause nerve damage over time. To create a true super spy we need a device for training pain resistance that hacks directly into the brain and bypasses physical trauma.
I want to become totally resistant to pain so that I never reveal the location of the rebel base during WW3.
I recall reading somewhere that strong pain actually causes you to become more sensitive to the same kind of pain in the future. Your brain interprets this pain as an important signal and therefore strengthens the neural pathways involved over time.
In my experience, doctors are very aggressive when treating severe pain (i.e. treating pain as a serious condition that needs to be immediately addressed, beyond just concerns for the comfort of the patient). The explanation I’ve been given about this is that it gets increasingly difficult to reduce pain with any amount of medication once it has been allowed to be at a very high level for a prolonged period of time.
So, it’s not clear to me at all that the kind of exposure training you’re suggesting is the way to go to increase pain tolerance. If you do find a recipe that works, I’d love to hear about it though.
If you ever accidentally give yourself a serious electric shock, you’ll notice that electricity causes extreme muscle spasms, which cause far-reaching damage and can even kill you via rhabdomyolysis. So that might not be a recommended approach.
You could also skip the entire process and acquire whatever weird gene this lady has: https://www.bbc.com/news/uk-scotland-highlands-islands-47719718
Intense exercise, done in a regimented manner, certainly can build up a related version of mental toughness. I’m not sure how well the skill would transfer to enduring other types of pain.
Not disagreeing with other posters, but in the short term if I experience extreme pain and then milder pain that would still normally bother me, I mostly shrug and move on because it feels not-that-bad by comparison.
Exercise is probably your best bet for learning to endure pure pain. Electricity is right out.
But the reason you talk under torture is not pain, it’s fear of permanent mutilation. (Unlike the movies, villains do not take pains to make sure the star’s good looks and physical capacity aren’t ruined.)
If you want to withstand torture, the important thing is to lie convincingly, and with enough verifiable truth mixed in that they believe they’ve recovered information.
Under prolonged torture, you will give up everything you know, everything you believe, and many lies you make up as well. Information gained through torture is not reliable, so sophisticated opponents will rarely use it, and unsophisticated opponents might be fooled by deception.
The important thing here is to make them believe that you have reached your breaking point and have yielded all useful information as early as possible, before you are seriously maimed. So it’s much more important to be a good actor than some sort of glutton for punishment. That said, the courage and nerve to act well under pressure are partly a product of physical fitness, so exercise is still important.
Are you sure? It seems like being prepared to die isn’t that uncommon a level of bravery. If you’re prepared to die then surely how you look or function matters little, whereas pain requires you to be able to withstand a very primitive mechanism that grasps ahold of your logical circuits so that it can confabulate reasons for you to give in.
It’s true that false information has been extracted through torture in many highly publicized cases (recently in relation to the Iraq War), but it seems like militaries the world over swear by torture and have to be restrained by human rights tribunals in order to soften the torture to the point where it’s only waterboarding. It makes me wonder whether the cases where false information was gathered actually outnumber the cases where true information was gathered.
I’m pretty skeptical about most speculation about the effectiveness or proper strategy to resist torture, since the people speculating virtually never actually have any data or personal experience.
It seems likely to me, and I’m not an expert, that it’s not that there isn’t a lot of really bad information gained, but that the goal of torture is to seek confirmation of information the torturers suspect and to demonstrate a willingness to “do whatever it takes” which signals to your opponent you will be ruthless when you finally find them. Of course, this takes the torturer not asking leading questions, which sounds like it takes an experienced person doing it. Because we regard it as “inhumane” we’ve reduced the pool of experienced people performing the questioning, thereby increasing the bad information gotten.
>If you’re prepared to die then surely how you look or function matters little
No. Death is much sweeter than mutilation and degradation. People who are seriously tortured often commit suicide to escape. There are qualities of life such that all but extreme outliers prefer death.
>Militaries over the world swear by torture
Not as an information gathering tool. There is a terrific gulf between behavior that is common and behavior that is effective. Competent intelligence agencies have reams of historical data proving that, under real torture, most if not all information yielded is factitious. It will not surprise you to learn that not everything that even advanced Western militaries do is competent, necessary, or useful from an intelligence perspective.
Torture as psychological warfare tool or torture as morale device is outside the scope of this discussion.
Do they? How do you know? Why do you trust information about intelligence agencies?
Which intelligence agencies do you think are competent?
Prince Humperdink would like a word with you.
I imagine the checkability of information is crucial. “Give us the codes to the laptop right here” is not something the person is going to be able to lie about. Ask what the contingency plan is for these particular circumstances, and the torturee is going to sing an account that may have nothing to do with reality.
> Give us the codes to the laptop right here
> what is the contingency plan for these particular circumstances
True. Those are different and I think those are conflated on purpose.
If you want an anti-torture policy because torture is amoral, then good for you. I think it’s amoral, too. Tell me where to sign on. But it’s obviously useful, and insisting that it isn’t useful is just setting yourself up for torture to be allowed if/when people decide that it works.
Also, drug addiction. Give someone a meth addiction and they’ll sing for you.
Sorry to be that guy, but do you mean “amoral”? It sounds like “immoral” is closer to what you mean.
There’s certainly some torture that matches that model. But there’s also e.g. waterboarding, which pretty clearly isn’t going to mutilate anyone but is reportedly very very effective. And there seems to be a disturbingly large arsenal of “enhanced interrogation techniques” specifically designed to not leave any physical evidence of torture. So I am left to conclude that A: an awful lot of torture is in fact based on pain or something pain-adjacent, not fear of mutilation and B: it works, at least for skilled torturers and C: it’s very difficult to train people to resist it.
You missed that it’s the villains, which obviously Americans are inherently unable to be.
It’s not just the American military, though. The infamous Coca-Cola trick (which might be apocryphal) is usually linked with the Mexican federales.
I don’t think it really works like that (pain is merely a special case of leverage), but given your stated goals you may find some interesting material at painscience.org and in the bibliographies of John Sarno and Howard Schubiner (whose book our host reviewed.). And a lot of that in turn ties into the predictive processing literature. tl;dr no brain, no pain.
Thanks, but I can only find a .com site. I assume it’s the same one.
In my experience, some kinds of pain are way easier to get used to than others. It seems that resistance to cuts, bruises, mild burns and other tissue damage can be built semi-reliably through training/exercise.
The other stuff has to do with internal organs, teeth, electric shocks and such, and it doesn’t seem to build any resistance at all. I also seriously doubt anyone would willingly train for this.
As for the torture proof spy, achieving that level of fearlessness and pain tolerance would also likely make the spy prone to questionable decision making in situations that don’t involve torture (hopefully a majority of situations you want your spy to deal with), which might compromise the main objective. So in the end a good old cyanide pill or some kind of brain frying switch might be a more prudent solution.
First time I’ve tried to start an open thread here, be gentle. Non-CW topic.
I’ve been imagining what the musical landscape would look like today if Mozart and Schubert had each lived another 50 years, well into their 80s. My guess is that Mozart would have gone on producing more masterpieces in his accustomed vein (masses, symphonies, concertos), but would have remained “stuck” (if you can call it that) in classical mode.
But Schubert…if Schubert had lived another 50 years, I think music today would be *completely* different, in all genres. Who knows what he was on the verge of producing, given his last two piano sonatas and the unfinished symphony (the short movements in those piano sonatas point to a kind of proto-jazz, if you listen just the right way). Schubert dying at 31 was like Bernhard Riemann dying at 40, with consequences we can’t begin to fathom. Thoughts?
I don’t think it’s imaginable. It’s like trying to imagine what would have happened if the Holocaust hadn’t ended that cohort of Hungarian geniuses.
These days, you can have a computer program which can write what sounds like Bach on an off-day, but that’s not like being able to conceive by any method what Bach would have written if he’d had another decade.
Speaking of such things, I’ve always wondered what the symphony that Scott Joplin was working on when he died would have sounded like. The guy made a lot of effort to go beyond the piano tunes he was best known for, but so much of that is lost.
I disagree that Mozart would have been “stuck.” He was pushing new instruments and for the inclusion of more “vulgar” elements like ballet in opera, so I think he would have revolutionized and blurred the lines between high and low art much sooner. Going big and Romantic seems like something he would embrace, and just imagine how it would have been if Beethoven had gotten to be his pupil after all.
I read an alternate history novel that postulated that Mozart would’ve moved out of music, become a general in the Austrian Army, and helped suppress the French Revolution.
In other words, like Nancy says, we have utterly no idea what he would’ve done.
In re: Mozart, the story I have heard is that Mozart wanted, I mean really wanted, to write more operas. But operas, unlike concertos and symphonies, require a commission from an opera house, and he was looking outside Vienna for that. Prague loved him and commissioned some of his last operas. As an opera fan, I regret all the operas that Mozart was unable to write more than his instrumental stuff.
Richard Taruskin’s article on the roots of Stravinsky’s tonality discusses Schubert’s work– in particular the Mass in E Flat– as evidence that, though he doesn’t put it this way, if Schubert had lived another fifty years he might have produced something in a Stravinskian direction.
https://www.jstor.org/stable/831550
My bet is that “really late” Schubert would have been more like Messaien; there’s a dreaminess to Messaien’s stuff that Schubert seems to have been reaching toward as well.
Who would win in the following race:
* A beetle driving a spider
* A viper driving a stingray
* A spider driving a mustang
* A stingray driving a jaguar
* A mustang driving a viper
* A jaguar driving a beetle.
Assumptions:
* the animal has some suitably comical way to actually work the controls, and has set its mind, however small, on actually winning the race. The stingray has a tank full of water.
* the race is a street race, in an urban environment, on a course that the animal knows very well, or at least has traveled through dozens of times prior to the race. (In the case of the spider we’ll say she’s run a miniature version of the race. In the case of the stingray we’ll say he’s done an underwater course very like this race.) It is in a suitably busy city like Los Angeles, but is conducted in the dead of night and with reasonable certainty that the police will not interfere. The race is 10 miles long and features multiple turns.
* You can use whatever sub-model of car, above, that you wish, but you must make a good-faith argument that this is the “iconic” or archetypal sub-model of that particular car
* You can use whatever sub-species of animal, above, that you wish, but you must explain why its behavior patterns, in particular, would contribute to it winning the race.
What?
I mean I think my question is pretty straightforward.
“Pretty” is doing a lot of work in that sentence.
Some people lack the proper education to contemplate the most important issue facing the well being of humanity this century. Only with time and patience can we hope to bring others beyond their current mental and physical limitations and eventually transcend science and technology.
Basically: what if animals that share names with cars drove cars that are named after animals? And then raced each other?
Capitalizing proper nouns would have helped a lot in the clarity here.
Each relevant proper noun is a hypertext link, which is a pretty clear signal that it’s not necessarily referring to what you’d expect. Not the OP’s fault if you don’t bother to even hover over them.
I’m going to go with the mustang driving the viper. Horses are really the only animal on the list evolved for long distance overland travel, and they’re quite trainable as well. And the mustang has a decent vehicle. The jaguar is stuck in a crappy car, and I have a hard time believing that limited intelligence won’t cripple all the others.
It would probably have to be a convertible Viper; otherwise he wouldn’t fit (or are there mustang ponies?)
Agreed. The mustang driving the viper has the best combination of superior intelligence and an actually powerful car. The mustang also has the best-developed brain for dealing with obstacles, unless the spider is a Portia spider.
I think it’s down to the controls. If pressing a tiny gas pedal sends dopamine directly into that little bug brain, then I don’t see why a beetle or spider strictly can’t win. Though as far as learning the track I’m not sure that only dozens of times’ practice suffices.
The beetle in a spider would do really well in a course with many turns and would likely have reflexes good enough to avoid an accident and any obstacles in the way. Top contender.
The viper in a stingray, while in one of the most suitable cars for this contest, would likely not do very well due to its demeanor. I imagine short periods of aggressive and precise driving, followed by a long, slow, deliberate pace. This is a second-place combo, no matter who wins.
The spider in a mustang would have a lot of trouble avoiding any crowds along the sides of the road. I mean, that much horsepower in the rear wheels with an inexperienced driver is just a bad combo. But on a more serious note, the spider would likely do really well towards the beginning of the race, then sit somewhere and wait to trap its opponents.
A stingray in a jaguar. What the shit man. I don’t even know about this one.
A mustang in a viper. Similar issues as with the spider in the mustang, but coupled with a more intelligent creature with a history of racing. If the mustang can keep it on the road, or there are a lot of straights, this is the clear winner.
A jaguar in a beetle. Not gonna lie: a reasonably easy-driving car, an intelligent creature capable of strategic planning and execution; this is the most underrated contestant. A first-place finish here depends on how well the others are able to execute.
Your point on the spider is inspired. I’m wondering how successful a strategy it would be to “get out ahead” of the other racers, hide in an alley, and then ambush them.
My knowledge of the animals involved outweighs my knowledge of the cars, but it strikes me that the spider is actually driving an ideal car for this.
The jaguar driving the beetle clearly wins. The beetle may be among the slowest of the cars mentioned, but the jaguar is the smartest of the animals mentioned, and by far. Not only is it smartest, but its skillset is well suited to the race. The jaguar can navigate a complex, obstruction-rich jungle environment, which mirrors the obstructions it’s likely to encounter in the urban environment. The jaguar is a predator that can model other minds to some extent and is mindful of things like speed, whether paths are intersecting, and lines of sight. Additionally, the jaguar is among the fastest-moving of the animals considered. I doubt that most animals would be capable of tracking movement at even a small fraction of the speeds their vehicles are capable of.
I think the jaguar is the only animal that finishes the race without crashing. If the jaguar has any competition, it’s the mustang, which is also fast-moving and also has some ability to model other minds as a social herd animal. I think the jaguar still has the edge, though, as it’s better equipped to deal with obstructions and the mustang is more likely to spook.
The mustang would win an open daytime race, but I think a dark and twisty city environment is no good and he wrecks.
Stingrays and snakes don’t seem to be adapted to mazes, so they’re not going to do so well.
At least some kinds of spider seem to have the brain for it, and a Mustang will do.
Dung beetles, at least, can do celestial navigation.
The jaguar should also have no problem, but the VW Beetle isn’t up to it. So finishers are
1) The spider in the Mustang, narrowly beating
2) The beetle in the Spider
coming in comfortably ahead of
3) The jaguar in the Beetle.
The others fail to finish.
This is silly, but I agree with the consensus that seems to argue the Mustang in the Viper and Jaguar driving a Beetle would be contenders for first.
Now, if there was a Cougar driving Kurt Busch’s Stewart-Haas Racing Chevrolet Impala SS I would argue that pair as the winner.
you only think it’s silly because you speak from a position of privilege. For some of us, this is not just a hypothetical problem. Some of us deal with this problem every day.
Wrecks because it can’t see where it is going
Wrecks because it can’t see where it is going
Wrecks because it can’t see where it is going
Wrecks because it can’t see where it is going
So we’re left with two contenders:
Both have good enough night-vision (and presumably our urban area is reasonably well lit) so day/night doesn’t come into play. The viper is the superior car. On a straightaway or NASCAR style oval track the horse wins going away.
I’m inclined to believe our horse will be competent at drifting around corners and give the horse the win in the urban environment as well.
I think this is the correct answer (at least the part eliminating most of the animals; I defer to your car expertise). I don’t think the visual systems of these animals could handle the kind of inputs necessary to steer a car at speed.
Hmmmm… more detail. What you care about when driving has low spatial frequencies. You can run over small things, and you don’t care about the fine details of large things, as what matters is that they are large. So you don’t need high resolution vision per se. I think most of what you’re doing when driving a car uses rods, as opposed to cones, and that gets rid of the need for a large amount of cortical hardware. (Rods are best for fast-moving stimuli, but they have very little in the way of ability to provide high-resolution information.)
But you need to be able to both predict the consequences of your movements and respond to unexpected changes in visual inputs very rapidly, and I don’t think any of the non-mammalian animals listed could do that. These are all animals (well, with the exception of the stingray, which I don’t know much about) that spend most of their lives moving at well under 5 km/h, with very occasional hard-wired responses to rapid movement.
The best option would probably be a bird that is used to flying through forests in packs.
Epistemological status: I’m a sort of vision scientist, but I don’t really know all that much about non-human vision, and not much beyond David Attenborough-level stuff about animal behaviours, so take all this with thirty metric tons of salt. But that’s my best stab at it.
So what you’re telling me is we need a Firebird driven by a Falcon for a real contender?
I got to thinking about this problem when I considered human adaptations to driving. (Truthfully, this was several years ago; it’s from an old collection of interesting problems I proposed to a friend.)
Humans, when I consider them in the abstract, really shouldn’t be capable of doing the kinds of things that drivers do. At the very least, it mystifies me. While I CAN drive competently, even in cities and over long distances, I’ve never really been able to get over the fear of being in a vehicle moving far faster than I’m really equipped to move (I believe XKCD has a good comic on this). Driving long distance is enjoyable, but I’ve never really enjoyed driving in even very small cities. There’s too much to account for that my brain doesn’t feel like it can account for.
And it’s not an issue of practice either. I drove for 5+ years before essentially giving up. I’ve never been in an accident, but I’m always afraid I’m gonna be.
@woah – I spent a while trying to think of a bird-named car and couldn’t come up with one, but obviously a Firebird would do it.
@theodidactus – I’d guess one of the main reasons why we can drive well is that we’ve managed to create such a predictable driving environment. And when it isn’t predictable, accident rates increase substantially (cf. much of China or India vs anywhere in the western world, I don’t have the stats but I’m pretty sure they support this). Generally speaking, we have half a second or more to respond to any sudden changes (and if we don’t, we should increase our following distance – this is why tailgating is bad). And that’s plenty of time for our visuomotor systems. It’s almost certainly plenty of time for the visuomotor system of a spider or whatever to get out of the way of a looming stimulus, but in those cases (I assume) the stimulus-response channel is completely hardwired, so there is only a very specific class of stimuli it will respond to, in a very predictable way every time. There are spiders like the Portia that use relatively high-resolution visual information to plan complex manoeuvres, but this takes them minutes to process, not milliseconds. So yeah, unless someone who knows more about animal behaviour and visual systems (which wouldn’t be that hard to find) says otherwise, I’m gonna stick with the mustang or jaguar being the only ones that are actually in contention.
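The half-second figure above converts into distance with nothing more than speed × reaction time; here is a quick sketch of that arithmetic (the 0.5 s is the figure from the comment, treated as pure pre-braking reaction time; the speeds are just illustrative):

```python
def reaction_distance_m(speed_kmh: float, reaction_time_s: float = 0.5) -> float:
    """Metres travelled before the driver has even begun to respond."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * reaction_time_s

# Typical urban and highway speeds:
for speed in (50, 100, 130):
    print(f"{speed} km/h: {reaction_distance_m(speed):.1f} m covered before reacting")
```

At 100 km/h that is roughly 14 metres, about three car lengths, covered before any steering or braking input happens at all, which is the arithmetic behind the claim that tailgating leaves no margin.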
@Enkidum I cheated and used Google to find one. Or rather the second one, since I knew that the Falcon was the name of a car.
Or a Road Runner driving a Reliant Robin?
Or my favorite, an Eagle(AMC Eagle/Eagle Talon) driving a Tercel. Male falcons are called tercels/tiercels.
Loser: a pill Bug in a Reliant Robin
rollin’, rollin’, rollin’
All the earlier answers are missing one key piece of data. Watching the car chase scene in Bullitt demonstrates that the Volkswagen Beetle is undoubtedly the fastest urban car available. However fast the participants in the chase go, the same green Beetle keeps getting to places before them (and it’s clearly not racing, as it lets the obviously-in-a-rush cars past repeatedly).
So, equipped with the driver probably best suited to understanding racing (albeit one probably better at stalking and catching a victim than at being a frontrunner), the Beetle is clearly the best car and should win.
The type of car seems mostly irrelevant, because the challenge lies in training an animal to actually drive a car in any kind of satisfactory way. I don’t think you could actually do it under normal circumstances; certainly anything but the mustang and jaguar are right out.
Of course, you could kind of cheat and specify that the controls directly interface with the animal’s brain in a way that maps 1:1 with its normal locomotion; then in terms of car it has gotta be a Stingray or Spider (the weight of the mustang being significant).
Even with the brain interface (they’ve actually kind of done this with cockroaches and little robot vehicles, I think), I’d still rule out anything but the mustang and jaguar, along with @acymetric above.
Yeah, it has to be one of the mammals. I’m concerned about the horse, though. They don’t have binocular vision, do they? How much of a handicap would that be?
If you are going to do all this work, why not take your list of names and post them once a week in an open thread saying “these are people I think can do better in culture war discussions” and see if
they can police themselves appropriately. This gives them a chance to behave better in different ways, and takes less work. Having that feedback be offline/invisible also undermines the quality of discussions a lot: X, Y, and Z don’t respond to some CW discussion because they’ve been told not to, rather than because they don’t really have strong feelings about the matter.
Biweekly Naval Gazing links post:
The Battle of Manila Bay is the most famous naval engagement of the Spanish-American War, a shattering American victory over a Spanish fleet that could barely float.
I’ve taken a look at the best of naval fiction, and gotten some excellent suggestions from my commenters.
So You Want to Build a Battleship continues with a look at the surprisingly complex process of getting a completed hull into the water.
I’ve already linked my 4/1 post on the Philadelphia Experiment, but I’ll throw it in here, too.
The destroyer currently is the most powerful combatant in most navies, but it wasn’t always this way. I’ve looked briefly at the history of this type of ship, from its origins in the torpedo boat to the present.
My long and winding design history of the battleship has finally reached the pinnacle of the breed, the magnificent Iowa class.
Museum Review: The Tulsa Air and Space Museum. Not a bad way to kill a few hours, but not worth going far out of the way for, either.
And Naval Gazing is having its usual open thread.
One thing that you may not have considered is that some of us are using fake emails.
WordPress has had several large email leaks before, and a lot of the culture war discussion here is firmly in get-you-fired territory. Long time pseudonymous commenters have probably left enough bread crumbs for a determined doxxer to find our real identities but at least that requires work on their parts. But a dump of WordPress accounts and linked emails is searchable: an attacker could take one of the dozens of “I rated SSC posters for comment frequency and political alignment” lists users here have made over the years and ID every single right-leaning commenter dumb enough to use their real email in a matter of minutes.
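To illustrate why the cross-referencing described above takes minutes rather than real effort, here is a minimal sketch of the join; every username, address, and label is invented for the example:

```python
# Hypothetical data. A leaked account dump maps usernames to registered
# emails; a public "I rated SSC posters" list maps usernames to inferred
# political alignment.
leaked_dump = {
    "alice_ssc": "alice.realname@gmail.example",
    "bob_throwaway": "xkq9@mailinator.example",
}
alignment_list = {"alice_ssc": "right", "bob_throwaway": "right", "carol": "left"}

# The whole "attack" is a one-line dictionary join: only users present in
# both datasets are exposed, and only a real-name address actually
# de-anonymizes anyone -- a throwaway address yields nothing useful.
exposed = {
    user: (leaked_dump[user], alignment)
    for user, alignment in alignment_list.items()
    if user in leaked_dump
}
```

The point of the sketch is the asymmetry: the defender’s only lever is the email column, which is why using a fake or unattributable address removes almost all of the value of a leak.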
Seems to me the solution is asking folks to at least use email accounts without a real name attached. And if Scott sends the email and it bounces, he can just ban them instead.
I think that’s workable for the regulars, but 1) it’s yet one more layer of “I just started posting here and don’t know all these obscure rules!” and 2) it still suffers from all the problems that the emailing scheme has in the first place.
Some time back our emails were visible in the WordPress page. That made me change my mail address to a semi-obfuscated one. I will still get mails to that address, but others may have used fake addresses.
I use a real address that I could, in theory, check. But I never do.
+1.
Right, I also don’t check the particular email account this is linked to.
I don’t think there’s a need to keep the feedback semi-private. Just yellow card in public.
Ditto.
How hard would it be to set a flag for the user so when they log on or write a comment a small red banner appears above the input text field?
Normally, I’d say, “pretty easy”. But this is WordPress, so I’m going to go with, “functionally impossible”.
“Let me know what you think.”
IMHO, most people want operating this blog to be a congenial experience for you; more feedback on what kind of comments cause you grief would probably be helpful. I’d be up for experimenting with a subject-matter ban, or some kind of yellow-card system that lets you warn people who are causing you grief.
On the other hand, the more transparency there is, the more incentive there may be for people to game the system, so some kind of PM to offenders might be a good idea too.
The only problem I have with the system is that a ban after a private warning could look quite capricious. But Scott can mitigate that by making warnings public after the fact–“Banned for discussing Space combat in Ian Bank’s novels after explicit warning via e-mail on 3/12/2027” or whatever.
It’s Iain, depraved outgroup!
I have the same concern about the private warning looking capricious. If there’s an upside I see, it’s that folks getting warned in private and then “saving face” by apologizing gracefully is a good thing.
*Sad Snoopy soundtrack plays*
I had posed as one of you for so long, only to blow it now on a botched in-joke.
Damn, managed to miss that joke as well. I presumably also do not belong here…
Wow I never noticed that. I guess I’m not as big a fan as I thought.
1. I had the same reaction: Dude, focus on administrative convenience!
For almost four years, I was a regular commenter at First Things. True, I often expressed more progressive views than many of the authors. Then David Azerrad posted a commentary decrying a college “Diversity Awareness Activity” for simply shaming and stigmatizing wealthy, heterosexual, white males for having “unearned privilege.”
I wrote a post concluding that the exercise seemed to be designed to help students recognize the range of circumstances and backgrounds reflected in the student body, helping incoming students overcome a natural tendency to project their own circumstances onto people they don’t know. And yes, some of the questions in the exercise acknowledged differences based on wealth, some on sexual orientation, some on race, some on gender. But I noted that there were also questions based on other criteria—and, moreover, that acknowledging difference does not imply stigmatizing difference. In conclusion, I stated, “So here’s the real question: How can we talk about differences when merely raising the topic causes people to close their ears?”
With that, I was banned from the site—and my posts were removed from that discussion.
It kinda sucked. (Someone had responded to my post by saying “Ah, nobody.really, my old gay arch-nemesis.” I was able to respond, “HEY! Who says I’m old? Or gay? Or arch? (Ok, you have me there….)” I was kind of proud of that one—and sorry when the moderators removed it.)
But that’s the nature of editorial pages and blogs: The publisher gets to pick what does and doesn’t get published. I commend Alexander for an abundant concern for fairness—but ultimately, editorial discretion is a judgment call, and I fear he’ll make himself crazy trying to adjudicate every friggin’ post.
2. Then again, I cannot fathom the mind(s?) that drive this blog. Alexander writes faster than I can read. So perhaps my mere mortal concerns are of no consequence.
They go a lot further than this. For example, take this question:
“For every dollar earned by white men, women earn only 72 cents. African American women earn only 65 cents; and Hispanic women earn only 57 cents to the dollar. All white men please take 2 steps forward.”
This question makes the implied claim that earnings reflect the privilege difference between men vs women and that this is twice as bad as various other privileges they note, which merit only one step.
These are extremely subjective and ‘culture warry’ claims that heavily depend on a certain worldview. Many people dispute these claims, for example because:
– they think that earnings often require sacrifice and a (large) part of the gender earnings gap is caused by men making more of these sacrifices
– they believe that earnings are much less relevant than actual material well-being, which is much more equal between men and women due to large wealth transfers
– they believe that low earnings can reflect (forms of) privilege (for example, people choosing low earning careers or becoming a stay at home mother because they get financial support)
– they believe that the gender earnings gap is not reflective of the overall privilege difference between men and women
I merely note that these rebuttals exist to illustrate the subjectivity and ‘culture warriness’ of the question. If you actually want to discuss the extent to which the gender earnings gap reflects gender privilege, I suggest making a comment in one of the OTs where the culture war is allowed.
This may be true for questions like: “All those raised in homes with libraries of both children’s and adult books, please take one step forward.” In that case, people are asked to display their supposed privilege by stepping forward if they personally had a certain experience. People of all races, genders, etc could step forward or backward for that question.
The gender earnings question that I presented earlier doesn’t ask for the experiences of the specific person. It simply declares that white men earn more and thus demands that people sort themselves according to a stereotype. So it asks people to project a stereotype on themselves and then for others to judge this stereotype.
Imagine a similar exercise for crime which includes these two questions:
– If you have ever stolen something, please take one step forward.
– Black people are three times as likely to be arrested as white people. All Black people please take 2 steps forward.
Do you see the problem?
Stereotypes are acknowledged group differences…
A fairly recent study found that exposure to multiculturalism increases race essentialism. So acknowledging group difference may actually increase the belief that those differences are innate and thus unchangeable.
Furthermore, studies do suggest that pointing out stereotypes may actually increase the application of stereotypes.
Other studies have found that stereotypes tend to reflect reality (to some extent), so pointing out stereotypes may simply remind people that such stereotypes are useful.
I have seen a lot of complaints by Social Justice advocates that people who are “merely raising the topic” or “merely asking questions” are disingenuous trolls who don’t honestly want to engage, but want to derail the discussion or such. Some seem to consider statements like yours as passive aggressive trolling themselves, using a false claim of victimhood to deter the policing of important social norms.
Stating very subjective claims as objective truths can also be perceived as trolling.
If your comment(s) were similar to the one you made here, I can see how you might be perceived as a troll by people with similar low tolerance for such comments.
Above, Aapje offers some more-than-cogent thoughts on a college “Diversity Awareness Activity”–and on the nature of comments in general. I concur that this isn’t the place for hashing out the merits of the college exercise. I only wish we could have had that discussion on First Things.
For purposes of this discussion, arguably Aapje’s most relevant comment is this:
This suggests that Aapje might join me in saying that Alexander need not maintain some high barrier to banning commenters that provoke him. Alexander has already acknowledged the strain of dealing with cultural war comments. This suggests that Alexander’s enthusiasm for maintaining this site is a scarce commodity, and policies should be devised with the goal of shepherding that resource prudently.
Culture warriors will never lack for forums for combat. This web page needn’t be one of those forums.
A feature of the culture war is that some (extremist and one-sided) culture warriors aggressively try to force others to discuss topics in a way that fits with their ideology, rather than depend on mere persuasion.
Of course, one can (try to) avoid that by yielding these topics to those extreme culture warriors, or by only debating them as they demand, but that has serious consequences for the debate. It means that viewpoints that don’t fit either side of the dominant partisan divide are not expressed. This especially includes moderate viewpoints that don’t fit ideologies with highly simplified explanations. Furthermore, it segregates society into ideological ghettos, where other viewpoints may not be expressed, causing bubbles where biased and/or false views are not challenged.
There are indications that:
– many segments of society feel unable/unsafe to express their views
– these segments are more moderate than those who speak up
– these segments also feel unable/unsafe to influence discussion norms
So this then leads to extremist and one-sided speech to be dominant, which in turn causes fear among the moderate, who then feel forced to support radicals who fight against other radicals.
It seems to me that yielding to (extremist and one-sided) culture warriors, even just by opting out, then makes things worse. Unless one believes in accelerationism, this seems to have bad outcomes.
Many people, including me, seem to believe that CW topics are often not discussed here as they are discussed pretty much anywhere else and thus that this forum does provide a fairly unique variant that is worth preserving.
While that is true, there is evidence to suggest that the comments themselves are not so much a strain as the backlash. Perhaps that can not be alleviated without surrendering.
+1
There are a million places on the internet where CW topics are discussed in terms of owning the libs or driving off the nazis, but not so many where people from different viewpoints politely discuss their differences in good faith.
There is also a whole thread of thinking that says that taking part in, listening to, or even allowing such discussions to take place is evil and should be punished. People who accept that idea are pretty adept at using the heckler’s veto to make such conversations very difficult to have. But I don’t think those of us who don’t accept that idea are obliged to go along.
It’s notable that a bunch of IDW types have managed to pack auditoriums and get large numbers of subscribers for podcasts/YouTube channels based on trying to have exactly that kind of conversation. That suggests that there is a substantial number of people who want to hear those conversations, and who are frustrated at the ability of the hecklers’-veto crowd to shut them down.
My 2 cents’ worth: Scott should just ban people who seem like they ought to be banned, and not worry much or put too much effort into it, lest it become too much work. I’m more worried about the “this is exhausting so I’m shutting it down” failure mode than the “forum is overrun by trolls” failure mode. (This may be partly because I was a big fan of Andrew Sullivan’s blog, which met the same fate, though for different reasons.)
I’d take a few years of Andrew Sullivan’s comments over a lifetime of, say, Marginal Revolution’s.
Professor Friedman, you may be interested in this as I think you were asking about aids to maintain cognition during aging?
Local institute of technology has research out: carotenoids and omega-3 in combination are good for the brain!
So we all need to start eating our leafy green vegetables, red yellow and orange veggies and fruits, and oily fish a lot more!
Thanks. Tomatoes, bananas, and broccoli are all things I eat pretty regularly. Very little oily fish, but Omega 3 comes in pills.
I’m not sure this is a good idea. Considering the extent to which you make money from Amazon affiliate links of your book reviews & Patreon, ads could very easily cannibalize much more of your revenue than they deliver. Look at past estimates of ad effects: https://www.gwern.net/Ads#replication Your revenue from a single person via Patreon or affiliates could easily exceed the chump change of $30/month or whatever you’d get from AdSense.
Seconded that this seems somewhat risky. As an existing fan of the blog, it’s not going to affect my traffic at all and I’ll keep my adblock off, but I think many new readers may find it a slight impediment to getting attached to the blog if they’re perusing some random post; it also removes the reaction where people are pleasantly surprised to find a site which has uniformly high-quality ads that you don’t want to remove. Unless it’s extremely lucrative, I’d expect that this is a negative-EV gamble.
I think Scott has to do the experiment to a-lieve that the number really is $30/month.
I strongly support your use of advertisements to monetize your excellent website. It warms my heart to know that corporations are paying your website in my place.
A question for English, Scottish, and Welsh people who live in the British Isles: Do you think Irish people look different from your own ethnic group? If yes, then what is different about them? Are there multiple “subgroups” of Irish people (e.g. (I’m making these up as examples) – short redhead, brunette with gray eyes and big forehead)?
If you think the Irish have a distinct appearance, are you sure it isn’t due to different clothing and hairstyle preferences that are more common in Ireland, or to culturally-rooted differences in facial expression (e.g. – maybe you can recognize Irish on sight merely because they tend to smile more/less than your ethnic group)?
I’m English but I have a quarter Irish blood, and my mother, who is half, has extremely stereotypical Irish looks: the red hair, strong forehead, high cheekbones, turned-up nose, freckles. I have inherited some of these features (though I have dark – almost black – brown hair, minus the grey, and my freckles are very minor). I’m not sure how common that stereotypical look is, but it’s definitely a thing that exists, and it was played up in caricatures of the Irish in a similar way to caricatures of Africans. My guess is that only Irish people are very likely to have these looks, but the vast majority of Irish people don’t look this way, and are largely indistinguishable from other members of the British Isles.
I’d say rather that some British people look Germanic or Nordic in a way I wouldn’t expect an Irish person to. I don’t think there is an Irish appearance that is distinct from most white Brits.
I’m white, 5/8 English, 3/16 Scottish, 1/8 Swiss and 1/16 Irish, have darkish brown hair that goes a bit wavy if it gets long enough and a beard that goes a bit ginger in the summer, and think my appearance would make me a thoroughly unremarkable native of any country in the British Isles.
I don’t think any British national group can be reliably recognised (it’s not possible to do that even at the more granular level of European countries) but some people definitely do look [nationality].
I think if you go to the level of “Northerners” versus “Meds” then you start to see clearer distinctions.
I can definitely distinguish between e.g. French and German with significantly better than chance accuracy, but that’s still in the realm of educated guessing rather than anything that could be done consciously.
Defining things a little more concretely here might help.
For example, my great-grandfather was Irish. Because he was born in Ireland. HIS father, however, was Scottish, and they were Scottish for ages before that.
So do you mean “can you spot people from Ireland?” Or do you mean “can you spot people whose ancestors are mostly from Ireland for the previous few centuries?”
They said “ethnic group”, and it’d be a bit silly to just call anyone born in Ireland “ethnically Irish”.
So do you mean “can you spot people from Ireland?” Or do you mean “can you spot people whose ancestors are mostly from Ireland for the previous few centuries?”
Yes to the second. I define “Irish” people as people who credibly claim to be “pure Irish” or something like that.
I’m Scottish. I don’t really think Irish people look different. I guess, yeah, there are some features that are more or less common, but I feel like everywhere’s such a mixture these days that I can’t really tell.
Question not for Irish people?
Pretty sure yeah:
– it seems like dark hair is the norm, rather than mid or lightish brown as for whites in England (also in Scotland)
– and red hair is much more common
– maybe green eyes too?
This could be regional*, as apparently Ireland has historically had less movement between places than other countries. (And even in England you can recognise people to some extent by region.)
(*I mean, I could also be mistaken; it could be regional even if my impression is accurate.)
There are also some looks that seem very distinctively Irish (not that every Irish man has them, just as not every male Canadian or Scot is a giant with a boyish face), but you don’t seem to get them in numbers anywhere else.
For example the guy on the right, and a black-haired “intense scraggly-looking survivalist” type that banning dog-fighting wasn’t fair on. (Lots of others too; those are just the ones which I saw the most there and the least elsewhere.)
Some variance could be due to other, non-genetic factors though. For a mundane example, no one in England plays hurling or Gaelic football, and I think you’re liable to end up with a different gait, body, attitude, etc., if you play these (at least the first) instead of football/cricket/rugby.
A couple of “Irish faces”:
http://image.guardian.co.uk/sys-images/Arts/Arts_/Pictures/2007/08/02/nesbitt460.jpg
https://www.sciencedaily.com/images/2013/11/131121130027_1_900x600.jpg
http://img1.wikia.nocookie.net/__cb20120701034125/harrypotterfanon/images/f/f8/Mack.jpg (this one is more subtle, could be seeing things. Though note the green eyes.)
https://www.theapricity.com/snpa/bilder/colmmeaney.jpg (lol)
Rupert Grint (Ron Weasley) looks pretty English/Irish. More English than Irish, despite the orange-red hair and green eyes, but you can see why they cast him.
Just found a source for Irish red hair: http://www.my-secret-northern-ireland.com/irish-red-hair.html
Regarding Irish regional separateness, I struggle to recall *ever* seeing an orange-haired person of Irish blood in Ireland – lots of dark/maroonish red instead. But looking up “irish red hair” I found this https://www.theguardian.com/world/gallery/2016/aug/21/irish-redhead-convention-in-pictures – apparently there’s an Irish redhead convention (see for yourself), and I know people of Irish blood in England with orangey hair.
Lastly, this set of search terms might provide some education/amusement/edification:
https://duckduckgo.com/?q=irish+farmer&atb=v153-7_j&iax=images&ia=images
Internet problems I know how to solve but don’t understand: reddit occasionally has this problem when they update the site, where Chromium-based browsers won’t load it properly and it will suddenly go to a white page. Deleting your reddit cookies fixes it for good (well, until the next update), even though you’d surely reacquire the same cookies, right?
Sounds to me like something on the server side is refusing to serve anything to a client with a cookie it considers invalid.
Yeah, possibly a change to what is stored in the cookie (or how it is stored) to authenticate or even just populate options. I would expect the site to handle that gracefully, though maybe it depends on your browser/settings.
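A hypothetical sketch of the “stale cookie” theory (the version field, names, and behavior here are invented for illustration; this is not reddit’s actual implementation): if an update bumps the cookie format the server expects, clients holding the old cookie get a broken page, while a client with no cookie is issued a fresh one in the new format. That’s why the replacement cookies you reacquire aren’t actually the same as the ones you deleted.

```python
CURRENT_VERSION = 2  # bumped by the hypothetical site update


def make_cookie():
    """Issue a cookie in the current format."""
    return {"version": CURRENT_VERSION, "session": "abc123"}


def serve(cookie):
    """Serve the page, or fail if the cookie predates the update."""
    if cookie is not None and cookie.get("version") != CURRENT_VERSION:
        return "blank page"      # stale cookie -> broken render
    if cookie is None:
        cookie = make_cookie()   # no cookie -> mint a fresh one
    return "page loads fine"


old_cookie = {"version": 1, "session": "abc123"}  # from before the update
assert serve(old_cookie) == "blank page"
assert serve(None) == "page loads fine"  # i.e. after deleting cookies
```

Under this (assumed) model the fix is permanent because the fresh cookie carries the new version, and the problem recurs at the next update because the version gets bumped again.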
The Great Filter hypothesis posits that we haven’t detected intelligent aliens because they all go extinct, but how can this happen once the aliens establish self-sustaining colonies in multiple star systems? Even if their home planet in their home system exploded, the colonies in the other systems would be unaffected. I don’t see how every remnant of an interstellar civilization could just disappear.
The point of the Great Filter hypothesis is that it happens before a civilization reaches the “interstellar” level. It’s attempting to explain why we don’t see signs of alien life – past or present – and so the fact that we don’t see signs of alien life isn’t evidence against it.
Mass Effect had an excellent answer for this: They get hunted by some extragalactic entity that consumes sapients. Which is to say: traversing the stars is a noisy affair, there ain’t no stealth in space, and a quieter hunter could eliminate a civilization without too many troubles assuming a reasonably large timescale.
Then why isn’t this hunter consuming all the available resources we see lying around? You have to posit pretty weird preferences for this dominant civilization if it decides to just sneak around and assassinate any other sapients. Some sort of extreme Gaia cult, at an intergalactic scale.
I mean, maybe it needs organic minds to fuel some kind of collective and they need to evolve to a certain level to be useful? I was positing an example of how a starfaring society could still go extinct. The epistemic status of this is “Gotten from a video game”. There could be all sorts of explanations for why, but that probably is the least useful question to ask oneself. The better questions are “How likely” and “what might be done”
My reply was eaten by the filter. Basically: see Yudkowsky’s “Generalizing from Fictional Evidence”. But in this case generalizing from a game might be fine, since one solution to the FP is that we’re in a sim which is a game, and we haven’t seen anyone else yet because all the players started out at the same tech level.
Well, in the Mass Effect canon,
** Spoilers for a 10 year old game series **
the hunter species (called Reapers) were actually rogue AI programmed to periodically cull the galaxy of all species capable of interstellar travel. Since that culling was their only purpose, and the time horizon between their “harvests” wasn’t long enough for any species to develop technology capable of threatening them, they didn’t need to consume any resources beyond what they needed to keep themselves functioning.
Gotcha. By “rogue” I assume you mean they screwed up and the AI wiped them out?
So the aliens solved the control problem well enough to successfully limit the AIs to not consuming all available matter (the most straightforward way to guarantee they don’t miss any interstellar civs), but not well enough to avoid getting wiped out themselves? That’s a *very* narrow part of outcome space, exactly corresponding to what makes for a good story.
Aftagley is incorrect: the Reapers weren’t originally programmed to destroy intelligent races. In fact, the species stems from an AI that was programmed to solve the problem that synthetic and organic intelligences inevitably end up killing each other. It apparently didn’t find a solution, destroyed its creators, and processed them into the first Reaper.
The Reapers transform the “harvested” intelligent species into more Reapers (a Reaper takes the form of a large capital ship) and apparently couldn’t reproduce in a satisfactory form without them. So wiping out the galaxy doesn’t actually help them, they just lose their prey.
They were actually trying to speed up the cycle a bit: the FTL system in-game is a network of jump gates that were originally created by the Reapers. This helps species develop and uplift each other, and then also lets the Reapers cut the species off from each other when the harvest comes. The center of the gates is a large space station that tends to be a galactic hub, and is also where the Reapers first emerge in a harvest, which helps throw the organic species into disarray.
Things started falling apart for the Reapers when one species managed to have a group survive the harvest, monkey with some of the Reaper technology, and pass enough information to the next generation (us) that they knew about the harvest in advance.
Yep, going deeper into canon:
A long time ago there was an alien species that could dominate other intelligent life via some kind of psychic control, which the humans of Mass Effect call “Leviathans”. This species ran a galactic empire based on mental repression. Their control wasn’t total, however, and the subject species kept some degree of independence of action, but not of motivation. So they could live life and develop technology in a relatively normal fashion, but couldn’t consider trying to overthrow the Leviathans.
Unfortunately, a trend emerged among these slave races: at some point they would all develop artificial intelligence. This artificial intelligence would then slaughter the race that created them and try to take over the universe/maximize paperclips/whatever. This meant the Leviathans were constantly (on a galactic timescale for immortal psychic aliens, I mean) having to go to war with AIs.
They won these wars, but eventually got tired of the constant warfare. They didn’t want to just kill every other species, since their way of life required a galaxy full of mental slaves, but everything they tried in order to stop their clients from creating AIs failed: eventually the client race would develop AI, and then it would destroy them.
Eventually the Leviathans realized they couldn’t solve this problem on their own and (in a massive oversight, imo) decided to build an AI to help them think of a solution. The AI then decided the proper strategy to reduce AI risk was “kill all species as soon as they develop the technology necessary to leave their home planet.” This would keep a stable of slave races around, but end the potential for AI risk.
Unfortunately for the Leviathans, the AI chose to begin its mission by culling the Leviathans themselves, although one or two of them survived in hiding.
I’m not sure I had ever seen the details of the leviathan civilization before. Where did those show up? EDIT: found it, looks like there was a Mass Effect 3 DLC I didn’t know about.
It’s sort of weird that the Reapers aren’t actually enforcing this solution though. The “hide in the galactic depths, then come forth to destroy all” strategy comes far too late for actually accomplishing the AI’s objective. Heck, we even have an AI risk situation with the geth, and that was hundreds of years before the Reapers came.
One line of thinking is that having a dominant civilization kill all the other civilizations is plausibly a stable equilibrium, and possibly the only one or one of a few. So eat or be eaten; eventually the universe will spit out a civilization suited to destroying all the other ones. This doesn’t predict that they should be stealthy (other than that stealthy things are better able to eliminate other civilizations). Nor does it say anything about resource utilization, but seeing as our civilization is projected to have a plateauing population, the assumption that all civilizations would try to expand as much as possible seems tenuous anyway.
A temporary reprieve, I’d wager. Malthus/Darwin are not so easily evaded.
That’s a very good question, but it’s one that Mass Effect infamously failed to come up with a good answer to.
Wasn’t the answer just “lying dormant outside the galaxy until the appointed time”? It seems consistent with the (definitely weird) values that the Reapers have.
Why would the hunter want resources as opposed to safety and a lack of rivals?
Because you need to eat to live, and the food is gradually vanishing, and the less you stockpile the sooner you die.
If we’re getting into fictional territory, it’s strongly hinted that the final book of The Expanse series will involve facing a great filter risk.
I think you are (roughly) correct, and therefore it is likely that the great filter (if one exists) lies in our evolutionary past.
Except I would slightly amend your argument to say that the real problem is that we are on the verge of the singularity. Thus the timeline to launching an interstellar civilization is quite short (though whether it is “our” civilization remains to be seen).
(Incidentally, I favor another explanation for the Fermi Paradox).
Many of the things that would cause a species to wipe itself out aren’t mitigated by building a colony on a distant planet.
Like what?
I was thinking of societal ills caused by overcrowding, e.g. wars over resources; you can’t ship people off-planet fast enough to overcome them.
Even a Mars colony has some problems as species-survival insurance.
Case #1: The Mars colony is a smallish outpost, like the Antarctica research stations now. If Earth blows itself up/finds itself knee-deep in grey goo/has everyone die from a hobbyist-made plague, then they’re just the last to die.
Case #2: Mars is a major part of humanity, with millions of humans living there and substantial political, cultural, and economic power concentrated there. At this point, Mars is potentially also a target for whatever goes wrong on Earth. The two sides fighting it out on Earth may have their counterparts on Mars, or Mars might even be one side of the war (think of the world of _The Expanse_). Alternatively, the recipe for nukes in your kitchen gets out and is read on Mars as well as Earth. It wrecks civilization both places.
But we’re not talking about Earth and Mars in this context; we’re talking Earth and Alpha Centauri. If there’s an outpost at Alpha Centauri, then it pretty much has to be self-sufficient and isn’t going to automatically die out just because Earth isn’t there any more. And even if there’s a large and thriving civilization at Alpha Centauri, it isn’t engaging in regular trade with Earth, isn’t likely to be a target of Earth’s self-inflicted wrath, and would be difficult for Earth to eliminate even if it wanted to.
I think the assumption is not that the home planet blows up the colony, it’s that if the home planet nukes itself then it’s likely the colony will as well. You can think of counter-examples, but they’re the exceptions that prove the rule.
I’m not sure how to make any sense at all of that last sentence. There are no examples, and “the exception that proves the rule” is not reason.
Furthermore, we are postulating essentially autarkic colonies founded by people who decided to leave their homeworld and never return, and whose subsequent interaction with that homeworld (and any other colonies) consists of electronic communications with a decade or so of latency, plus perhaps occasional immigrants who have also decided to leave their homeworld and never return. The colony may fail on its own. But if the colony succeeds on its own, it is far from obvious that a radio message from the homeworld saying “OBTW we’re blowing ourselves up now” would result in the colony doing the same. A shipload of refugees from home might have that result, but even that is far from certain – and since the radio message (or radio silence) would precede the refugees by decades, if you are going to assert that the arrival of a refugee ship would certainly or even likely have that effect, then presumably the colonists would have the same understanding and would use those decades to prepare.
Then I will be more explicit. Suppose the human race founds a colony on Alpha Centauri, and then the Earthlings manage to destroy themselves somehow. What’s the reason for believing the colony on Alpha Centauri won’t meet the same fate?
Perhaps it’s different somehow. Maybe what destroyed Earth was a shortage of strontium, which AC has plenty of. Maybe what destroyed Earth was religious warfare, and AC was colonized by fleeing atheists. Maybe what destroyed Earth was food riots, and shortly afterward the scientists of AC invented the Star Trek replicator.
Those are the exceptions. The rule, the default assumption, is that what destroyed Earth was not specific to Earth, it was just humans being human and Moloch being Moloch, and there’s no reason to assume AC won’t eventually meet the same fate.
(Note that I’m not arguing that human colonies will necessarily perish; just that if the first one does, it’s reasonable to assume the others will)
I don’t see much reason to believe humans will wipe themselves out though. Most extinctions are probably due to either being outcompeted by close relatives for your niche or due to astronomical events like asteroid impacts. The asteroid impact type scenario worries me. That or just not leaving earth ever. For the first one, I’m not sure I could care about such a distant possibility as to which human descendant species won.
A lot of things people talk about as if they are extinction risks kind of aren’t. For example, global warming is laughable as an extinction risk for humans even in a realistic worst case. The worst case may or may not suck for humans for a couple hundred years, but it’s super far from an extinction scenario.
Even if all colonies eventually perish, that’s not necessarily a reason to think all humans and their descendants would go extinct before something like all the suns stopping fusion. If the extinction rate is significantly lower than the founding rate of new colonies, then humans won’t go extinct, provided they survive the risky early period when the number of colonies is still small enough that a fluctuation in luck could wipe them all out at once.
Well, mostly it will be that whatever caused the Earthlings to destroy themselves will be on Earth, about twenty-five trillion miles away. Also the fact that the people on Alpha Centauri will not be Earthlings, and indeed will have been selected (probably self-selected) for being very unlike typical Earthlings. And the bit where the destruction of human civilization on Earth will have given them detailed advance warning of the possible threat.
I do not agree with this assertion, and I certainly do not cede this argument merely because you claim your victory is the “default”.
Prove it.
Er, I thought we were trading opinions about hypothetical future events. If you think our colony will necessarily be the sort of place that doesn’t do self-destructive Bay of Pigs shit then fair enough, you’re a smart guy who knows lots of stuff, I don’t think you’re provably wrong. It just doesn’t look that way to me; it seems intuitively obvious that P( colony 2 blows itself up | colony 1 blows itself up ) should be substantial.
I think the point is that the people on the colony are from the same species as people on earth, and are at risk for making the same mistakes.
You could probably make a good story about the colony getting news of the disaster up to the end or near it, and what they do to not let that happen.
Two thoughts I’d add, after pondering over breakfast:
1. I’m making the case that, supposing Earth manages to off itself, the smart money says our colony(s) will be likely to follow the same route, and John feels the opposite, one reason being that the colonists will have been selected somehow and will thus be a somewhat different sort of group. It occurs to me that this seems just as likely to make them more susceptible to collapse as less. The colonists must either be selected by flawed humans (e.g. the AC Corp’s Colonist Selection subcommittee) or by some emergent process (e.g. one political faction fleeing another); neither is guaranteed to produce a more stable/robust/generally better group of humans than the home planet.
2. This being SSC, I must point out that my use of “Bay of Pigs shit” was metonymy for the sorts of human folly that might lead us to destroy ourselves, please do not interpret it as “I think the Bay of Pigs was a genuine near-extinction event and anyone who thinks it was actually not that big a deal should come at me bro”.
I think you probably don’t even need for them to go extinct in similar ways. You were right that they just have to go extinct for some reason. I’m not big on the human folly reason, but maybe asteroids of sufficient size collide often enough with planets, supernovas go off, showers of electromagnetic energy in the right bandwidth from nearby astronomical phenomena occur often enough, etc. We could roughly estimate how often those things should annihilate life on a planet. I’m not aware of any species wiping itself out, but no other species is quite like humans either, so estimating the odds on that is just a big ?????. Something close to a species driving itself extinct has probably happened at some point on Earth, even if it’s a weird example, but I dunno what the odds are.
But if we’re talking loss of all humans, we need to have some sort of guess about the colonization rate. That combined with the extinction rate of a colony is going to determine the odds humans and their descendants go extinct everywhere. If the colonization rate is much higher than the extinction rate, many colonies may go extinct but not the human species. If the extinction rate is comparable or higher than the colonization rate, then humans almost certainly go extinct.
I assume the Great Filter is that there are hard relativistic limits to movement/communication, so establishing important colonies around other stars will always be a hard proposition. If somehow the home planet goes kaboom for whatever reason, small colonies around distant stars may find it difficult to keep progressing in science until they have successfully colonized their own planet, which may take millennia. If anything, smaller colonies will be more susceptible to going kaboom and never recovering than the home planet.
I mean, we aren’t *that* far away from possible futures in which a wave of self-replicating space probes expands outward at an appreciable fraction of the speed of light, transforming all available matter into whatever concept of Eudaimonia we manage to transmit to our successors. At that point, interstellar distances become pretty small.
Sure, we might avoid that fate. But if you want to posit a great filter lying ahead of us, it doesn’t have a lot of time left.
As is common in this community, you are greatly overconfident in your belief that the singularity will happen.
I did say “possible” future. Though your inference that I think a near Singularity probable is correct, even if not strictly implied by my words.
I’m curious why you think a Singularity is unlikely.
(I suppose the Fermi Paradox is evidence against a near Singularity.)
Von Neumann machines don’t require a singularity. I don’t think they even require AGI.
I wonder if paperclip maximizers are detectable from several light years away.
But you know what’s detectable with current, non-clip-maximizer technology? delicious delicious inhabitable planets. Well, right now we can _kind_ of detect them but that technology will only get better, even in the short term, while reliable self replicating probes still look a little iffy in the short term.
What I am getting at is that civilizations capable of self replicating probes _probably_ already detected us, so why aren’t we tiled in paperclip form yet? Maybe we already were detected, and a probe is coming our way, in which case we are fucked, but on the other hand, in the timescale of a star like the sun? The milky way should have been colonized a few times over so, that’s not the Great Filter.
Self replicating probes capable of tiling the galaxy sound hard to be honest. The slower they are, the more things can go wrong over that long time.
“Habitable planet” is probably too vague. If you want a shirt-sleeve or near shirt-sleeve environment, only a small fraction of planets which are habitable for someone will be suitable.
Fair. It’s more of a “planets with some chance of hosting life of some form”. But still. That only makes Earth more desirable from afar, as the only planet in thousands with a shirt-sleeve environment. If interstellar civilizations don’t give much of a fuck about the human (?) rights of dinosaurs, at least one of them should have sent their self-replicating probe our way.
Assuming of course, universal shirt sleeves dress code here.
Shirt-sleeve environment? Do you realize that planet is so cold that water vapor is a liquid – amazing heat-drinking stuff. Sometimes even solid, if you can believe it.
(Channeling Hal Clement)
Oh man, an Iceworld reference? I loved that book!
I am a little on the fat side so I will wear a polo shirt from above 15C (59F)
What I am getting at is that civilizations capable of self replicating probes _probably_ already detected us, so why aren’t we tiled in paperclip form yet? Maybe we already were detected, and a probe is coming our way, in which case we are fucked, but on the other hand, in the timescale of a star like the sun? The milky way should have been colonized a few times over so, that’s not the Great Filter.
Maybe advanced aliens respect the value of organic life, so they don’t destroy Earth even though they could.
Maybe these advanced aliens are machines, or exist in some other non-organic form (pure energy?), so Earth’s climate is not any more hospitable to them than a barren planet like Mars. Hence, there’s no special reason for them to colonize Earth.
Self replicating probes capable of tiling the galaxy sound hard to be honest. The slower they are, the more things can go wrong over that long time.
I think there are ways they could be designed to be extremely reliable and resistant to malfunction.
https://dpconline.org/handbook/technical-solutions-and-tools/fixity-and-checksums
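The linked page is about fixity checks. As a minimal illustration of the idea (the “blueprint” below is a made-up stand-in for whatever a probe would carry), a cryptographic checksum lets a replicator verify that a copy of its instructions is bit-for-bit intact before building from it, which is one way copying errors could be kept from accumulating over many generations:

```python
import hashlib


def checksum(data: bytes) -> str:
    """SHA-256 fixity check: any single-bit change alters the digest."""
    return hashlib.sha256(data).hexdigest()


blueprint = b"probe replication instructions v1"
reference = checksum(blueprint)  # digest carried alongside the data

# A faithful copy verifies against the reference digest;
# a copy with even one corrupted bit does not.
good_copy = bytes(blueprint)
bad_copy = bytes([blueprint[0] ^ 0x01]) + blueprint[1:]

assert checksum(good_copy) == reference
assert checksum(bad_copy) != reference
```

Detection alone isn’t repair, of course; a real design would presumably pair something like this with redundant copies or error-correcting codes so a corrupted blueprint could be restored rather than merely rejected.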
Or even more obviously, all those stars wastefully burning away in the night sky. Anyone out there that meant business would want to tap into that energy.
Right. This is the hard version of the Fermi Paradox. If advanced civs with reasonably high probability undergo intelligence explosions and start tiling their future light cones with paperclips, why do we find ourselves in a (relatively) old universe, not paperclipped?
There are several possible answers, of course. (The universe isn’t actually old; we are paperclipped; advanced civs super rare; intelligence explosions much harder than they seem; I’m wrong about something).
Other possible answer: the terminal goal of this advanced civ isn’t conquest/paperclip tiling.
Also, depending on how far away they are (it wouldn’t need to be too far), even assuming they’ve noticed our planet, they might not have seen any evidence that we exist yet (“hey, that planet way over there looks like it could potentially support life!”). An advanced civ might well have noticed our planet, but they more than likely wouldn’t have noticed us yet, assuming they are limited by the speed of light.
That’s basically what I meant by saying we’re already paperclipped.
If you get a superintelligence, it just turns its future light cone into whatever it wants; i.e. something high in its preference ordering. Call this thing “paperclips”. Therefore if we lie in the future light cone of a SI, we are paperclipped.
Maybe we got a local deity that has weird preferences, and this is it.
My notion of what it’s like in the future light cone of a SI mostly excludes explanations along the lines of “it hasn’t noticed us”.
Heck, even a pre-IE advanced race would presumably be eating all stars within travel distance pretty quickly.
If they are intelligent machines, maybe they’d be building computronium around the stars. Maybe they wouldn’t touch Earth, but I feel we’d notice the Technocore building a Halo or Ringworld around the sun, to say nothing of a Dyson Sphere.
IIRC astronomers have searched for Dyson Spheres by looking at stars with suspicious IR emissions or something, and came up empty handed.
I assume the Great Filter is that there are hard relativistic limits to movement/communication, so establishing important colonies on other stars will always be a hard proposition.
Somewhere out there, there must be stars less than 1 light year apart that both have habitable planets orbiting them. Such a distance could be traveled with entirely feasible space technology.
I’m not sure this is necessarily true. There are lots of stars, yes, but also lots of constraints on habitability and vast distances for those stars to fill. I’d want to see numbers before assuming this had to be so.
It changes if we mean “habitable” in the sense of:
a. Mars and the Moon, places we could colonize but never walk around outside.
b. Antarctica, the Sahara Desert, the top of Mt Everest, places we could colonize and live and even go outside sometimes, but where we’d still need a lot of technology to survive for long.
c. What America and Australia were to the Europeans, places we could just go colonize and live with relatively limited technology or hardship once we got there.
I assume (c) is very unlikely. We have (a) in our solar system, so it doesn’t look so improbable. I have no idea whether (b) is at all likely.
Yes, that’s true, I wasn’t considering places that could merely house a fragile, barely self-sufficient outpost as habitable.
If we can’t have an ecosystem that we can be a part of, eventually the system will fail.
Maybe? But it’s probably rare, and just hoping on astronomical coincidences across two stellar systems does not make a galactic empire. Maybe they are out there, posting about how their two-system empire is seemingly alone in the galaxy.
As I said upthread: your civilization’s lifespan is limited by the lifespan of your star(s), and more broadly by the lifespan of the universe. A star you haven’t colonized isn’t just sitting there — it’s destroying itself and the computation its energy could perform. The more stars you enclose in Dyson swarms, the more consciousness you can support before the universe ends.
Personally I think consciousness is good, even if we can’t communicate with it; and I suspect there’ll be enough people who agree with me to get some colony ships out there. After all, if there are people who believe that more of them is good and people who don’t believe that, and the contest is proliferation…well, one would expect the ones who want more people to win.
The great filter largely implies that civilizations go extinct prior to the interstellar civilization stage. It’s possible to imagine scenarios where far-flung empires would go extinct (brutal civil war, or some resource required for interstellar travel being extremely rare and easily exhausted, for starters), but I think placing extinction (or crippling) events earlier in the timeline is more likely.
brutal civil war
But the brutal civil war only works as a Great Filter event if it ends with the whole alien civilization being destroyed, as in, all of them dying. That’s implausible and becomes ever more so as you assume more star systems and planets belong to the alien civilization (e.g. – odds increase that one or more star systems will stay neutral, or will be too far away from the fighting to be hurt by it).
Look at Earth’s history. There have been countless devastating civil wars, but none where both sides destroyed each other to the last person simultaneously.
or some resource required for interstellar travel being extremely rare and easily exhausted, for starters
If you’re implying that there might be a resource that enables superluminal travel, then yes, I agree that its exhaustion would pose major problems to an interstellar alien civilization, but it wouldn’t by any means lead to them dying out. Also, there’d be nothing to stop them from continuing to expand their civilization, but at sub-light speeds.
Right, these scenarios could plausibly end an empire, but they by no means would, so I don’t think a search for a single great filter should look there. But as one more failure mode among many, they might possibly contribute to a dearth of interstellar life.
None were undertaken by civilizations capable of interstellar flight. Possibly there exist planet-destroying weapons.
The one (interstellar flight) does tend to imply the other.
True, true, and it may be easier to create a planet destroying missile than to create a planet destroying missile with life support systems attached!
There hasn’t been a civil war where the nation was extinguished to the last man, but civilizations have disappeared mysteriously all the same. The Mayas come to mind, and the Mycenaeans. Probably many more. I don’t think it’s impossible for a civilization to die out in a civil war, particularly with more powerful weapons. Maybe they do not kill 100% of the population, but the survivors may not last for long, or develop a distaste for advanced technology, or just become less technologically advanced for a long while.
Classical Mayan civilisation declined severely from its peak, but their descendants are still there.
Great filter idea that will destroy (completely) an interstellar civilization.
Assumption: Travel back in time is possible.
Assumption: There aren’t infinite parallel universes. There’s just one, and if you go back in time and change something, it changes the one-and-only. You can destroy yourself, you can change yourself, you can change the future, and the past, but the new history is the only history unless/until it is changed again.
Eventually, even if you establish guidelines, monitors, and laws, somebody will go back in time and make it so that your society is unable to develop time travel. Someone from your society is going to keep going back in time until prevented, and the only thing that 100% prevents anyone in the future from ever going back and changing anything ever again is a change to the timeline such that time travel is never developed. Any other outcome will result in more fiddling with the past. Anyone in your interstellar empire with sufficient technology will eventually wipe out the empire itself.
It sounds like you’ve chosen one of the time travel models that has paradoxes. If I go back in time and kill myself, who was it that killed me?
I’m hesitant to jump in here, because I definitely don’t have any real knowledge of various theories of time-travel, paradoxes, and causality, but my intuition:
You did. Your future self (that traveled to the past) continues to exist, along with whatever you brought back with you (a time machine, perhaps). This would be true even if you went back further and killed your grandparents. The future changes, and you are not born, but you are now part of history and exist as part of the past (which is your new present).
That’s what happens under my assumption. Otherwise, time travel probably doesn’t work as a Great Filter candidate.
Of course, I think it’s unlikely that humanity ever gets to travel back in time at all, but
1) If it can work
and
2) If it works this way
then
3) It’s a plausible Great Filter candidate.
It has a really nice effect (for the purpose of explaining the Fermi paradox) of allowing sapient species to develop up to the point of time travel and then erasing themselves before they can go interstellar, particularly if time travel tech is approximately as hard as interstellar travel. That the speed of light is a plausible obstacle to both problems implies that that assumption isn’t unreasonable.
Maybe the reason we can’t go faster-than-light without infinity energy is that the process of going back in time with faster-than-light travel needs all that energy to create a new universe.
Something I feel is under-explored related to time travel (and that relates to your point):
Time travel is also interstellar travel (well, at least interplanetary travel). Not only do you have to pinpoint the time when you arrive, you have to pinpoint where the planet (Earth) will be when you arrive there, and plan your landing accordingly, and fairly precisely, lest you end up:
1. In space
2. In the wall of a building
3. Somewhere deep under the crust of the Earth
4. Somewhere in the ocean (under the surface)
Time travel is roughly as difficult as landing on a target with a 1m radius on Mars, except you are also somehow navigating time in addition to the regular spatial dimensions involved.
I think bringing in those considerations makes it a computational nightmare, so in fiction I’m okay with handwaving at gravity and saying that you stay in the same relative position on earth.
Well, a time machine that’s not also a space ship would have that problem. If you’re already in an interstellar craft, going back in time is just a navigation problem that’s probably not too much harder than FTL travel itself. You want to stay in open space the entire time and not run into anything.
If your time machine is a DeLorean driving in a parking lot, you’re gonna have a bad time.
@Randy M
Still might end up stuck in a wall, or, if you weren’t on the ground floor, suspended in mid-air if the building you were in doesn’t exist in whatever time you travel to.
I believe this was a plot point in Michael Crichton’s Sphere.
@acymetric That’s certainly true, and if you go back to the days of the dinosaurs you probably drop into the ocean due to continental drift.
There’s the Terminator method, where your little sphere of time displacement either over-writes, or swaps with whatever was at the target destination.
Well, so is ocean travel, or bicycle travel. But gravity and momentum keep you pretty tied to your context; I don’t see why time travel would have to be any different.
Pretty much any of the remotely plausible time-travel mechanisms (e.g. a Tipler cylinder) serves incidentally as an anchor point — you can’t just follow a closed timelike curve anywhere you like.
Do you prefer one where, if you go back in time to kill yourself, your action results in the creation of a new universe?
I guess I prefer the one where as soon as you arrive in the back-in-time time, you create a new universe. The universe where you built your time machine and left from is without you, just as if you’d died. The universe you arrive in has two of you (until one of you dies, whether or not by the hand of the other).
But “prefer” seems like a weird choice of word here. Obviously time travel is impossible, but the one you posed sounds logically inconsistent to me.
Ok, but there’s an inconsistency in your version too, right? You power your time machine with a relatively small amount of energy from Universe 1, and the process results in the creation of Universe 2, which is a copy of Universe 1, let’s say, 1 second earlier. Isn’t it going to take every bit of energy from Universe 1 to create Universe 2? This is a conservation of mass/energy problem.
Universe 1 can’t still be there, right?
There’s an episode of Stargate Atlantis where the team tries drawing energy from alternate universes to power their stuff. Unfortunately, it turns out they want to exist pretty bad too.
As others have noted, the Great Filter pretty much has to apply before a species starts colonizing other star systems. Only a subset of plausible star-colonization behavior patterns are vulnerable to inconspicuous extinction on an interstellar scale. That subset may overlap strongly with what we imagine ourselves doing over the next few centuries and have been glorifying via e.g. Space-Opera science fiction, but it isn’t a perfect overlap and in any event the range of plausible behaviors is rather larger than SF normally allows for.
If the universe generates many technological civilizations, and nothing gets around to extinctifying them before they develop starflight, some of them will survive and some other ones will go down in a blaze of really conspicuously blazing glory.
The exception is if the Great Filter is something that destroys the entire universe, though that just reduces to the unsatisfying explanation that we are the first (in our universe).
Why does an interstellar civilization need the “self-sustaining colonies” part?
They might all need to rely somewhat on the parent system (possibly a purposely created reliance, or some unique entity) and, following its fall, be unable to adapt quickly enough to continue existing.
Second – perhaps there are limitations on their growth that are beyond the planet but below the intergalactic scale.
Perhaps their terraforming ability is limited, and the kind of planet they require is only really common in whatever pocket of space they are in, and too sparse outside of it.
It depends a lot on whether they keep researching useful science. Maybe at some point you just hit a wall where even full planet sized research projects won’t do you any good, so if you lose the mother planet, colonies still have all-the-possible-science anyway, and they keep on trucking.
But there’s also the chance that there’s just no economic upside to massive projects like colonizing another planet 10 light years away and stuff never gets done.
Indeed, it’s possible that it never really pays off to get a substantial chunk of humanity off our planet, and then one fine day something happens to wreck our planet, and a century later there are no more humans left.
I think there’s a lot of economic benefit to, say, terraforming Mars and building colonies around the different planets. That saves us if the Earth goes kaboom due to someone’s finger slipping on the nukes or whatever.
The question changes if we are talking of colonizing even Proxima Centauri, and chances are Trisolaris is not really habitable for us.
My opinion is that there is in fact no economic upside, unless near light speed travel is invented, since trading with a colony that it takes a generation to reach is not going to be personally profitable to anyone, and moreover you can probably develop substitutes in the meantime, and the expense of the voyage would be, pardon me, astronomical.
Or it would be economically useful if there exists some extraterrestrial unobtanium which we somehow realize a use for from all the way over here, à la Avatar.
Colonization is basically taking out a fairly expensive life insurance policy on your species, with the exception that no one you know personally will benefit from it.
I’m writing a novel which starts from that premise, and concluded that it had to basically be a vanity project of a vastly wealthy visionary.
It could still be an exploration-based venture, or be used to escape persecution/create a new nation/get rid of the useless third.
Plus, you are making two assumptions with that –
There’s no great increase in lifespans if not immortality
There’s no great increase/appetite in automation of the colonization level.
Yes, but not one that benefits the sending nation economically.
Perhaps, but only if an extremely advanced and wealthy group was on the receiving end of the persecution. But that’s just the ‘insurance policy’ aspect writ small.
It will by definition create a new nation, but how will that benefit the people fronting the cost?
No. (Although I did chuckle at the reference.) Aside from the difficulty of getting two billion people onto one spaceship, or building and launching millions of spaceships, in a generation or two you’ll have replaced them and you’ll still have people of below average utility. Colonization can’t be a cure for overpopulation.
True, in some kind of scenario where lifespans are increased by an order of magnitude, very many things change and I can’t really speak to all the implications (many of which will depend on the particular details of how that came to be).
By automation of the colonization level, you mean some sort of singularity? Similar to above, that will certainly change things in unpredictable ways. Barring a post scarcity society, I think the distance still precludes really profiting much from interstellar trade. Anything you can do without for a century, you can probably find substitutes for in that timeframe.
With long long lifespans, some things become more plausible. You can just ride out your expensive and slow spaceship to the next star and arrive while old. Still, making machinery that lasts thousands of years is a tough problem, even with intelligent beings aboard who can repair it (and who have to carry spare material).
Methuselah aliens colonizing the galaxy is an interesting variation, though it still doesn’t explain where they are. Maybe they just don’t build detectable megastructures. Then the Fermi Paradox answer is that the old aliens just don’t want to contact us.
The people fronting the cost ARE the people that want to create a new nation.
As in, the cost/repercussions/morality of establishing a new nation on the home planet will be higher than those of founding a new space colony.
This was meant to refer to the non-economic reasons humans had for establishing colonies in America.
No. Rather, making the establishment of colonies an automated process. The most obvious route is via replicating robots. That can allow exponential growth, or at least a cheaper unmanned process.
Not sure I count that as a ‘colony’ but for reasons of the original topic it might qualify. Unless those robots are preparing a home for humans or sending back the unobtanium, that’s just us depositing some self-replicating junk across the expanse.
Just because it is more costly to establish a new nation on earth doesn’t mean you have the resources to establish a possibly less costly one elsewhere. It’s the people already in possession of nations or equivalent who will be able to fund such ventures.
But I think that still falls under “no economic upside.” I don’t mean that there are no reasons to colonize; just that I don’t foresee trade being one of them given the transit time and cost involved. All remaining reasons are basically ideological, since the colonists probably won’t improve their lives any by going (by virtue of giving up much of their lives to get there, or else the destination being close enough to reach in a lifetime but pretty inhospitable).
I will note that empirically living creatures have a strong tendency to fill all available space and increase their population to the extent possible given available resources, and evolutionary theory implies this tendency will be essentially universal.
Yes, a self-aware species can formulate an ethical system and then take steps to codify and preserve those values (protecting them against further drift/selection), and these values might not fully support unchecked expansion (though, since they evolved, they presumably *will* be such as facilitated growth in the ancestral environment).
However, it’s extremely likely that the vast majority of advanced species will engage in expansion and interstellar colonization. The reason is simply population pressure: resources (partly raw materials, but mainly energy) in any star system will be limited. Population growth is exponential (barring fine balancing — and there are good evolutionary reasons for growth to win out). At some point you run out of available resources. To be concrete, this might take the form of enclosing the local star in a Dyson sphere/swarm, until all solar energy is tapped.
At this point (or realistically, long before) some locals will start thinking that all those other stars out there offer a lot of free untapped resources, with the only cost being transport. Some combination of the desperate and the entrepreneurial will set out.
Once it begins, the expansion will be inexorable and indeed quite rapid, driven by a simple mathematical fact: growth rates of any local population (e.g. around a given star system) are exponential, but available space (and therefore resources) in the future light cone is cubic in time. Thus for any positive growth rate, population pressures will far outstrip reachable uninhabited star systems, and the civ will continuously send out new colony ships.
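The exponential-vs-cubic point above can be made concrete with a toy calculation. The 0.1%/year growth rate is an arbitrary, deliberately tiny assumption; even so, the population eventually outruns a volume (and hence resource base) that grows with the cube of elapsed time:

```python
import math

# Toy comparison: exponential population growth vs the cubic growth of
# reachable volume in the future light cone. The growth rate is an
# assumption chosen to be pessimistically small.
r = 0.001  # 0.1% annual population growth

t = 10  # skip the trivial crossing at very small t
while math.exp(r * t) <= t**3:
    t += 1
print(f"exponential growth overtakes cubic volume after ~{t:,} years")
```

With these numbers the crossover lands around thirty thousand years – an eyeblink on galactic timescales, which is why any positive growth rate forces continuous colony launches.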
The only future Great Filter hypothesis that I find vaguely plausible would be that it is impossible to develop really high tech without making mass destruction really easy. Society wouldn’t last very long if anyone who was having a bad day could whip up a nuke or planet-devouring black hole in 15 minutes. This is somewhat explored in Vernor Vinge’s book “Rainbows End”.
It is of course easy to imagine many plausible past Great Filters.
Isn’t this part of Bostrom’s argument? From his Sam Harris interview, I think his example was that human civilization probably couldn’t have survived if you could make a Hiroshima-sized nuke in your kitchen with easily-found ingredients.
I’m not 100% sure this is correct–you can imagine cultures evolving that could survive this ability, but they might be awful in many other ways. In any near-term space colony, I think you’d have something a little like this. Someone going crazy and opening the airlocks/starting a fire/dumping poison into the air supply might kill off a big chunk of the colony’s population. One solution to that would be that anyone who seems even a little weird or “off” gets locked up or spaced.
One thing that’s kind of worrying w.r.t. the Great Filter is that it looks rather like substantial animal intelligence has evolved multiple times here on Earth. Not just us and other primates, but also in elephants, dolphins, corvids, wolves, etc. (And I think octopi are considered quite intelligent–that’s another species that’s not even a vertebrate!) There’s also a very different kind of intelligence that’s evolved multiple times on Earth–eusocial species. That might (or might not) be an alternative path to some kind of technological civilization, though it’s hard to imagine what it might look like. But it’s worth noting that large-scale war, farming, and herding were all invented by eusocial insects a long, long time before humans arrived on the scene. All this makes it look to me like probably evolving substantial animal intelligence isn’t so hard, once you’ve got complex multicellular life.
I have the feeling that intelligence as in raw problem-solving ability probably isn’t as important here as abstract language. It looks very probable to me that you can’t build anything like technological civilization if you don’t have language, even if you can fish for termites or escape aquaria like a boss. And as far as we can tell that really has only evolved once, quite recently; lots of species communicate in some fashion, and a lot can even learn the meanings of a limited set of human words, but nobody, even our closest relatives in the great apes, seems to be able to use them with anything like the structure and generality that we do.
On top of that, it’s exactly the sort of lateral breakthrough that we’d expect to be evolutionarily rare: learning lots of “nouns” would have steep diminishing returns in the wild without a “grammar”, yet there isn’t a clear evolutionary advantage to building the first steps of one.
The Great Filter hypothesis is about why Earth hasn’t already been colonized. It would be pretty quick (on an interstellar time scale) to spread exponentially through the galaxy using self-replicating Von Neumann probes and colonize every habitable planet, but apparently it hasn’t been done, despite hundreds of billions of stars in the Milky Way which could have birthed an interstellar species. Something must be stopping this from happening. Either intelligent species arise rarely, or their development reliably gets arrested permanently before they reach this phase.
You are implicitly assuming that interstellar self-replicating Von Neumann probes are possible, which IMO is a huge assumption. Given our current level of technology, we couldn’t even begin to imagine how to build one of those things; and I’m not convinced that the laws of physics do not outright prevent it — unless, of course, you sneak in molecular nanotechnology or superhuman AIs or some other science-fictional shortcut.
In that case, the Great Filter is ahead of us, and we will be stuck on this planet (or in this star system).
From what we know currently, it should certainly be possible to colonize other planets, and if we can do that then we can colonize (slowly) other star systems. But maybe there are Hard Things we just don’t understand yet.
Do you mean that we can colonize any planet, or that we can colonize a planet given some set of requirements? If the former, our expansion is on the scale of at least a couple hundred years to reach and colonize each system.
Assuming the latter, reaching and colonizing the next livable system is probably closer to the scale of all recorded human history.
My first “colonize other planets” was “colonize other planets in our star system.” Not that we can necessarily colonize any hell-scape planet we find.
Say we colonize Mars in 2050, and most of the rest of the solar system by 2300. Then a group launches a fast ship to Proxima Centauri b at 5% of the speed of light and it gets there in around a century. (We can probably get people living that long.) They establish at 2400, and take 300 years to build up the wealth to start the process over, colonizing other bodies in their system. That’s about 700 years to cover 4 light years, or 17 million years to go from one edge of the galaxy to the other.
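The arithmetic in this scenario checks out; here is a quick sketch using the figures above (all of them assumptions from the scenario, not measurements):

```python
# Back-of-envelope galaxy-crossing time for the hop-and-settle scenario.
GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way
LY_PER_HOP = 4                # distance to the next star system
YEARS_PER_HOP = 700           # ~century of travel at 0.05c plus buildup

hops = GALAXY_DIAMETER_LY / LY_PER_HOP        # 25,000 hops
total_years = hops * YEARS_PER_HOP
print(f"~{total_years/1e6:.1f} million years")  # → ~17.5 million years
```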
Maybe there is some reason that humans can’t do this: maybe everything else in the inner solar system turns out to be uninhabitable (and it’s too big a leap to go from Earth to the outer solar system, to say nothing of other star systems), or maybe humans just can’t live long enough, or we started in a particularly bad region of the galaxy such that all the planets in nearby star systems are hell-scapes that are too challenging for “baby’s first interstellar mission.” But they would also need to apply to every other species out there.
@Edward Scizzorhands
But what motivates this constant, rapid expansion? The time scale is too long for us to rely on “adventurous spirits” who do it “not because it is easy, but because it is hard” unless you find a way to instill that across generations (some fanatical religion or something maybe). Otherwise the expansion is going to be a lot slower because it will be driven mostly by need.
Unless expansion is actively suppressed (which is possible), things work out fine if only 1% of the society wants to expand. That is how most expansion works, anyway. 99% of the people stay home and less than 1% colonize.
Reaching the next star system is probably do-able in one life time, if not for humans then for some other species.
So, gen A decides to head to Proxima Centauri b and starts colonizing. Gen B and C were probably born on the ship, with no choice in the matter, and are probably doing most of the work actually colonizing. I find it more likely that the future generations would build a ship to leave Centauri and go back to Earth than that they would say “well, this desolate, borderline uninhabitable planet has been fun, but let’s embark on another generations-long journey to the next one!”
The only way I buy expansion to other systems is the discovery of a way to travel between points in space such that the travel takes an insignificant amount of an individual lifespan to do so.
Consider also that even if you have a group committed to doing this, and their future generations also stay committed, the risk of catastrophic failure killing them all at some point along the way is probably relatively high.
Some version of suspended animation gives you that.
Who said anything about “constant”?
If it takes ten thousand years for the average colony to grow to the point where it is capable of building starships in their spare time, and even then only once every thousand years does the random-walk of local politics, sociology, and economics give rise to an oppressed and/or adventurously spirited minority population desperate and resourceful enough to launch a single interstellar colony mission before reverting to apathy and hedonism, and if colony ships are limited to 0.01c and ten light-years maximum range and new colonies have a 90% failure rate…
…the Milky Way is still fully colonized roughly half a billion years after the first technological civilization develops starflight.
The Milky Way is approximately twelve billion years old. Even if we assume that the first generation of stars(*) were too metal-poor to support life-bearing worlds, that still gives us ten billion years to evolve a technological civilization and colonize the galaxy. Fermi’s question stands.
* Called “Population II” because someone guessed wrong
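For what it’s worth, the deliberately pessimistic scenario above can be sketched as a serial-wavefront estimate. Every constant below is an assumption taken from the comment (the galaxy diameter is just a round number), and the model ignores the speedup from colonies expanding in parallel, so it overstates the time:

```python
# Serial-wavefront estimate for the pessimistic colonization scenario.
GROWTH_YEARS = 10_000    # colony matures enough to attempt starships
LAUNCH_INTERVAL = 1_000  # years between colony-mission attempts
SUCCESS_RATE = 0.1       # 90% of new colonies fail
SHIP_SPEED_C = 0.01      # ship speed as a fraction of c
HOP_LY = 10              # maximum ship range in light years
GALAXY_LY = 100_000      # rough diameter of the Milky Way

attempts_per_success = 1 / SUCCESS_RATE                # 10 launches
years_per_hop = (GROWTH_YEARS
                 + attempts_per_success * LAUNCH_INTERVAL
                 + HOP_LY / SHIP_SPEED_C)              # 21,000 years
total_years = years_per_hop * (GALAXY_LY / HOP_LY)     # 10,000 hops
print(f"~{total_years/1e6:.0f} million years")  # → ~210 million years
```

A strictly serial chain of hops already comes in at a couple hundred million years, the same order of magnitude as the “roughly half a billion years” figure, and still small next to the ten billion years available.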
@DavidFriedman
Granted, and we are probably closer to that than the alternative, although I’m not sure how legitimately close we are.
@John Schilling
I was responding to Edward Scizorhands who proposed a much faster rate of expansion, which (at the time scales we’re talking about here) I would call more or less constant.
Your proposal, which I suspect is a conservative take on your part, is more reasonable. The “great filter” in that case is simply time. There are nigh-infinite ways for a civilization to collapse, especially a fledgling colony civilization traveling through deep space or even after reaching their destination planet. The odds of the civilization making it that far are just incredibly low, not because of any single cataclysmic type of event, but essentially because over the course of half a billion years attrition ends up outrunning expansion.
Call it Murphy’s law of Interstellar Colonialism
I mean, we’re talking about exponential growth here. That’s going to fill up space pretty quick for any reasonable constant you pick. And the highest constant dominates here, so if civs differ the one with the highest growth rate will just take over everything.
And the patterns of colonization people are talking about above strike me as insanely slow compared to what is likely — even assuming no intelligence explosion.
If by “civilization” you mean an individual planetary or system-level colony, then sure – which is why my model allowed for 90% of colony civilizations to collapse before ever getting around to launching even a single starship. And you could up that to 99% or even 99.9% if you allow the tiny handful that make it to launch starships once per century or decade rather than once per millennium. The collapse of planetary civilizations isn’t a showstopper if there are lots of planetary civilizations to work with – and if interstellar travel is a marginal proposition, then “…and we get to loot the remains of a Lost Civilization for sure!” is probably going to push it over the top for recolonization missions, so probably not much ground lost in the long term.
If by “civilization” you mean the set of all planetary or system-scale colonies descended from the same source, then, particularly for the hypothetical where interstellar travel is difficult and rare, I disagree with there being a nigh-infinite number of ways for interstellar civilization to collapse, because it wouldn’t be a single civilization in the sense that we normally use the term, and the gulf of interstellar space would make for a most effective firebreak against a nigh-infinite number of possible civilization-collapsers.
@acymetric that is the exact plot of KSR’s Aurora, pretty good book as I recall.
Also the plot of Stephen Baxter’s Ark, although it’s important to note that in both books there’s something wrong with the colony, so it’s not so much “reaching out again from a successful colony” as “this attempt to colonize isn’t going to work, let’s just go back”. Ark actually goes in all three directions, with some going back, some staying on the problematic world, and some going forward to a third planet.
@Edward Scizorhands:
I actually fear that you are right; although, in the best-case scenario, the Great Filter might be something like our Sun dying, which won’t happen for a good long time.
As far as I understand, it should be possible to set up human presence on the Moon, or perhaps even on Mars, given incremental enhancements to our current technology. However, I am far from convinced we will ever do it; the costs involved seem to be much higher than any government or corporation is willing, or able, to pay. China might go to the Moon, though, just to spite the US — but I doubt they’d ever maintain a permanent presence there.
Traveling to Alpha Centauri would take on the order of 100 years, and that’s just for a robotic probe. No present human institution operates on such time-scales; unless maybe you count dictatorships whose only relevant goals are “stay in power” and “keep being a dictatorship”, not “travel to other stars”.
“Seem” means that you are basing your cost estimates on observation.
And the only sort of manned(*) space flight activity anyone has had a chance to see, is the sort that has been done either A: under an explicit mandate to deliver the most spectacular possible results in the fastest possible time without regard to cost, or B: exactly the same way it was done last time so that nobody can be blamed if anything goes wrong, and with an explicit mandate that no price is too high for “safety”.
This may give a misleading impression as to the plausible cost range.
* Very nearly the only sort of unmanned space activity, for that matter.
Colonization doesn’t need to happen via government, although governments must allow it. If you let enough centuries pass, eventually private groups accumulate enough wealth to do it on their own.
@John Schilling:
Er… yes ? What else should I base them on ? Logical deduction from first principles ?
@Edward Scizorhands:
What is their incentive to actually do it ? Why accumulate all that wealth on a long-shot blue-sky project, when you could instead invest it into reliable short-term gains ?
The money men would have to see it as a philanthropic expense and have enough corporate control to start the project and keep it going long enough to build, stock, crew, and launch the ship–which may be difficult because it could take many years.
Perhaps there was a recent near-miss of an asteroid or nuclear strike which motivates someone to use their funds on such a venture, or maybe they want the renown.
I don’t think they should expect universal acclaim for doing so, however. A lot of people are going to see it as wasting resources that could be spent otherwise, irrevocably.
As far as colonists go, once funded I don’t think it would be a problem to find some willing to go, frozen or as breeders. Whether from desire for fame, adventure, or escape.
You are living at a time when two billionaires are fighting it out with space companies, including the world’s richest man (still, post-divorce).
Why is Bill Gates trying to cure polio? Aren’t there better returns somewhere else?
Maybe 50 years from now, when there are even more of them, none of the billionaires are interested in space. Fine. Wait another 50 years, and there will be even more (in real terms) billionaires. Oh, they all want to cure Alzheimer’s? Fine, wait another 50 years. Eventually, unless man goes extinct or the government confiscates everyone’s property or disallows space travel [1], you are going to get someone with enough drive to make it happen.
Also, while looking up billionaires, I found out that Kylie Jenner is the world’s youngest “self-made billionaire.” Never mind the life support, launch me off this planet now.
[1] Those are actual possibilities, and if aliens are common I’m sure a lot of them got taken out of the space race through one of those three methods. But if aliens are common, then you only need one to get past that and wallpaper the galaxy.
Um — superhuman AI is obviously allowed by the laws of physics. It would be trivial to selectively breed super-intelligent humans starting from actually existing historical geniuses, which puts a lower bound on possible intelligent agents a decent step above the current human level. Then factor in running these super-humans on faster hardware, and we’re already at a pretty superhuman level, and we’ve barely gotten started on possible improvements! So saying VN probes are not permitted by the laws of physics “unless… you sneak in… superhuman AIs” is simply incoherent.
You’re claiming that a thing is actually physically impossible (an insanely high bar to prove!) unless we posit a thing that plainly is physically possible. So…it’s physically possible?
Besides, there are plenty of biological replicators. We’re essentially colonies of said replicators, and we can build spaceships! Heck, we’ve sent out interstellar probes *already* — how hard would it have been to throw a cell culture on board?
So…still confident VN probes are *physically impossible*?
I guess you and I have very different definitions of “trivial”, and perhaps “superhuman”. I will grant you that a genius could technically be considered “superhuman” — as in, several sigmas above the mean — but that’s not what it would take to create a reliable Von Neumann probe. When I say “superhuman”, I’m thinking in terms of “several orders of magnitude”, assuming such a concept is even coherent.
Which we currently have no idea how to even begin researching; and, again, I’m not at all convinced that it’s even possible without some sort of self-replicating molecular nanotechnology… which may, in turn, be impossible. And no, running Google Maps on a really big cluster doesn’t count.
I said that I was not convinced that it was possible, not that I was convinced it was impossible. You are the one who is proposing a self-replicating probe that can not only survive interstellar distances, but also make perfect copies of itself out of raw materials, such as rocks and interstellar hydrogen, every 100 years or so (if you’re lucky). The burden of proof is on you.
I don’t know, how hard is it to maintain a livable environment for hundreds of years between stars ? Also, how hard is it to engineer a cell culture that actually does something useful… such as generate computer hardware out of rocks in the vacuum of space ? You tell me, you seem to know how to make one !
@Bugmaster:
I apologize if my initial comment came off as a bit sharp.
Before engaging on this topic further, I think we should clarify terms a bit, since I’m not interested in arguing about the meaning of words.
I gather from your comments here and elsewhere that you are a technological pessimist (particularly compared to me). That is a perfectly coherent position, and I’m quite happy to discuss it further.
However, the post I was originally responding to made (what I take to be) a *much* stronger claim, that Von Neumann probes and superintelligent AI (and MNT) are not merely difficult, but might plausibly be *physically impossible* — that is, literally not allowed by the laws of physics, à la perpetual motion machines or FTL signalling.
My comment was responding specifically to that claim. However, if that’s not your true objection, and you’re merely asserting a more generic kind of technological pessimism, I’ll be happy to continue the conversation in that vein instead.
@Eponymous:
I bet you hear this all the time, but still, I consider myself more of a realist 🙂
I am not claiming absolute certainty that superintelligent AIs and Von Neumann probes are physically impossible; but, currently, I’m about 60% to 80% convinced of this — depending on the specifics.
For example, I am fairly sure that “gray goo”-style molecular nanotechnology is impossible. True, molecular replicators do exist — that’s what we’re made of — but the energies required to do the same thing outside of water-based chemistry (or some equivalent), and in much shorter timeframes, are just too large.
On the other hand, constructing an interstellar probe is pretty easy — you can launch a wrought-iron cannonball quite a long way, with the right rocket. However, making that probe do anything useful is much harder, depending on what you want it to do. Making a probe that can create multiple copies of itself may be prohibitively difficult, depending on what you want to make it out of — and that’s assuming that you’ve solved the problem of perfect self-replication in the first place, not to mention survival over hundreds of years of exposure to hard vacuum and radiation.
The Boring But Practical ™ answer is that technologically advanced intelligent life is much less likely to arise than optimistic futurists (and science fiction authors) tend to think. It is entirely possible that we humans are the only technological civilization in the Milky Way. Even if there are others like us, the diameter of our galaxy is about 100,000 light years, IIRC; we only invented radio telescopes about 80 years ago.
The really sad thing, as I see it, is that the Universe is probably teeming with intelligent life, relatively speaking… and we will most likely never see it. The Andromeda Galaxy is 2.5 million light-years away. There could be an alien there right now, writing a post much like this one… and in 2.5 million years, we could possibly read it… except we won’t, because its light will be too weak by the time it reaches us.
I am reasonably sure that, for all intents and purposes (and barring some sort of a magical FTL engine), we are alone in the Universe… and so are all the other intelligent species.
I think you are correct.
I think you’re neglecting timescales here. Aliens don’t have to exist right now and only right now; we can look at aliens on the other side of the Milky Way who were sending out signals 50,000 years ago, even if they’re now long gone.
And Andromeda isn’t actually that hard to get to/from for technologically mature civilizations, at least not much more than colonizing the Milky Way is (see this to-scale diagram). Get your Von Neumann probe sped up to a decent fraction of c, wait a few million years, and have it start up the process in the new galaxy. Sure, 10 million years is a while, but the universe is billions of years old. Earth’s evolutionary history doesn’t seem so unusually fast that every other biosphere must have been racing to the finish line of sapience within a few million years of us; aliens from Andromeda who got to multicellular life a few hundred million years earlier would have no trouble turning all available matter in the Milky Way into whatever configurations they liked.
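A minimal sketch of those intergalactic timescales, assuming a few illustrative cruise speeds (the fractions of c are my own choices, not claims from the comment):

```python
# Rough one-way travel times to Andromeda at assumed cruise speeds.
ANDROMEDA_DIST_LY = 2.5e6   # distance to Andromeda in light-years

for frac_c in (0.1, 0.25, 0.5):
    years = ANDROMEDA_DIST_LY / frac_c
    print(f"At {frac_c:.0%} c: {years / 1e6:.0f} million years")
# → At 10% c: 25 million years
# → At 25% c: 10 million years
# → At 50% c: 5 million years
```

Even the slowest of these is a rounding error against the billions of years the universe has been around.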
Space is big, but so is time. It doesn’t seem implausible that there might be civilizations with a billion-year head start on homo sapiens, which is enough to traverse the entire Virgo Supercluster.
Nobody is answering why an intelligent species would do this. For fun?
I mean, if I had control over society, the answer would be both “Because we can” and “Because I want to see what’s out there”
@woah77
But…you won’t see what’s out there. Nobody in your civilization will, just the Von Neumann probe.
You mean, no one alive today would. Assuming my society is stable enough to survive for 10 million years (that’s a big if, but I’d be willing to operate under it), my offspring millions of generations later will see what’s there.
How will they see it? Have you developed some kind of massively powerful communication device that can transmit information that far?
We’re talking Von Neumann style probes. If one is insufficient, an array of them could easily transmit that far. Individual photons don’t lose energy with distance; the signal just spreads out, so you need enough of them for it to be easily detected. The bigger concern for things like the Fermi Paradox is why we can’t see any craft traveling. There isn’t any stealth in space, and anything accelerating to a significant fraction of c will shine like a star (and I don’t use that comparison lightly; it takes substantial energy to get up to those speeds).
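To see how the inverse-square spreading plays out, here is a hedged sketch: it assumes a beacon radiating roughly one solar luminosity isotropically at a visible wavelength (both numbers are my illustrative assumptions, not claims from the thread):

```python
import math

# Photon flux at Andromeda's distance from an isotropic beacon.
# Power and wavelength below are illustrative assumptions.
H = 6.626e-34            # Planck constant, J*s
C = 2.998e8              # speed of light, m/s
LY_M = 9.461e15          # meters per light-year

power_w = 3.8e26         # beacon power: roughly one solar luminosity
wavelength_m = 550e-9    # visible light
dist_m = 2.5e6 * LY_M    # distance to Andromeda

photon_energy = H * C / wavelength_m               # joules per photon
emitted_rate = power_w / photon_energy             # photons per second
flux = emitted_rate / (4 * math.pi * dist_m ** 2)  # photons / m^2 / s
print(f"{flux:.2f} photons per m^2 per second")
# → 0.15 photons per m^2 per second
```

A sun-scale beacon still delivers a measurable trickle of photons per square meter per second across intergalactic distance, so detection is a question of how many emitters you build (or how big a collector you point at them), not of the light “running out”.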
There are lots of old men who plant trees whose shade they know they shall never sit in.
If you needed the whole society to work together to build a von Neumann probe to get to the next galaxy, then “why would they do that?” is a good question. But I don’t think that was part of the thesis. It’s much easier to argue “someone will do this” than “no one, anywhere in the universe, will do this.”
There are people trying to do all sorts of weird things that don’t fit your model of them. It doesn’t mean they aren’t doing them. It means your model is wrong.
The question isn’t how to build the telescope.
The question is what intensifiers to put before ‘large telescope’.
Given that it only takes 15 inches of telescope to make out individual stars in Andromeda, signalling back shouldn’t be too hard for a type II civ.
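The 15-inch figure can be sanity-checked with the Rayleigh diffraction criterion, θ = 1.22 λ/D; the wavelength below is an assumed visible-light value:

```python
import math

# Diffraction-limited resolution of a 15-inch telescope (Rayleigh
# criterion), and what that angle spans at Andromeda's distance.
wavelength_m = 550e-9              # assumed visible wavelength
aperture_m = 15 * 0.0254           # 15 inches in meters
andromeda_ly = 2.5e6               # distance to Andromeda in light-years

theta_rad = 1.22 * wavelength_m / aperture_m
theta_arcsec = math.degrees(theta_rad) * 3600
span_ly = theta_rad * andromeda_ly  # small-angle approximation
print(f"Resolution: {theta_arcsec:.2f} arcsec")
print(f"Spans about {span_ly:.1f} light-years at Andromeda")
# → Resolution: 0.36 arcsec
# → Spans about 4.4 light-years at Andromeda
```

That resolution element spans a few light-years at Andromeda, comparable to typical interstellar separations; in practice, atmospheric seeing and the brightness of individual stars matter at least as much as diffraction, so treat this as an order-of-magnitude check only.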
As I mentioned in the thread above, I’m not even convinced that Von Neumann probes are physically possible. And I will absolutely grant you that, assuming that molecular nanotechnology and superintelligent AIs are more than just science fiction (which, again, I doubt), then there could be civilizations out there who have mastered them. In fact, there could be tons of such civilizations… way outside of our light-cone, because the probability of such things happening is extremely low.
I mean, look at us: we could colonize Mars in the next 50 years, if we really wanted to, but it doesn’t look like we want to. And we have a huge leg up on all those other aliens — we actually exist !
I agree with that conclusion; I assign fairly high probability to the hypothesis “advanced civilization happens with a probability that is nonzero but too small to be likely to reside within our lightcone”.
For Von Neumann machines, the existence of humans seems to suggest that there are no fundamental limitations to their existence (the only added capability of a Von Neumann machine is the ability to accelerate more effectively from place to place – self-replication and material/habitat fabrication we can already do). As for AGI, I doubt I can provide better arguments than e.g. Nick Bostrom in Superintelligence, but I don’t think it’s a prerequisite for galactic colonization.
The obvious contention is that we are amongst the most advanced species, if not the most advanced, and we’re not seeing anything more advanced because they weren’t there when the light set off. We know species capable of interstellar travel exist, because we’re here. We know we can’t see evidence of species capable of travelling between solar systems, because we haven’t (albeit there is the “would we recognise it” question). And we know of no as-yet-apparent reason why we can’t follow Voyager beyond the heliosphere. All of this suggests interstellar travel (not necessarily FTL) is possible and we’re as close to it as anyone. There is no filter, just the fact that intelligent life has developed no further than humanity.
It’s clearly a hypothetical position, but note the basic point here is that in the universe there has to be a first species to travel between stars, assuming that this can be done. Why not us?
I find your answer appealing, but I think it’s unlikely we’re at the forefront of an advancing universe. The Earth is only 4.5 billion years old. The first stars formed (relatively) very shortly after the beginning of the universe, about 13 billion years ago. Of course, you need a few stars to go supernova to get heavy elements, but I still think it’s likely there are many stars with a big head start on us.
That’s kinda the point about the Fermi Paradox. If we believe that our star/system is nothing special, why don’t we see any other starfaring civilizations? Now, it’s entirely possible that something happened, or that circumstances to support life only became possible around 5 billion years ago. Or that the seeds of life took several billion years to form in space, that all planets capable of supporting life were seeded within a very short time frame (relative to the billion-year time scale), and so evidence of farther-away civilizations hasn’t reached us yet because they’re only tens or hundreds of thousands of years ahead of us.
Obviously that seems unlikely without something even older being the prime mover, but that would suggest an alien intelligence staying hidden or one that has already gone extinct. Since we haven’t looked at any planets outside our system that closely, we have no idea what might have once been.
There might be unusual details like having a relatively large moon being required.
Generally speaking, I don’t think supercivilizations leaving Earth alone because of wanting to leave aliens to develop by themselves is that farfetched.
But I also suspect that supercivilizations capable of making that decision would also build detectable megastructures. Maybe it’s too early to detect them, tho.
I always see “we are the first” as just a slightly altered case of “we are alone.”
The great filter has a precise meaning. In your scenario, there still is a great filter, it’s just that it’s behind us, rather than in front of us. You need a filter to explain why we’re the first, despite our being late compared to how easy it looks for intelligent life to develop. If you think it is hard for intelligent life to develop, that reason is the filter.
There is one category of explanations for the Great Filter that I don’t think has been discussed. Perhaps one of the effects of the sort of developments that make possible an interstellar empire is that people don’t want one any more. They discover the nature of reality, conclude that life is not worth living, and kill themselves. Or they learn how to wirehead really well, and do it. Or they become buddhist philosophers and switch to a life of contemplation. Or …
That seems quite unlikely on a civilizational level. Is there anything you could learn about reality that would make you conclude that?
This one on the other hand I totally buy. Imagine people using futuristic VR to avoid going crazy in the close confines of lengthy space flight, and then neglecting all duties because the VR is too enticing compared to the drab reality of life in a small metal tube or desolate colony.
Or they find out that the really interesting things are inside your head, so there’s no point in expansion.
I could easily imagine a universe where most species simply decide not to expand.
But it only takes one to decide to expand, and a billion years later the galaxy is covered.
So I’ve got bored with conversations about the relative merits of various translators of various classics, and have thus resolved to become a polyglot.
It seems sensible to start at the beginning of the Western Cannon.
So does anybody know any good resources for learning to read Classical Greek?
I’ve found this, so far: https://lrc.la.utexas.edu/eieol/grkol/50 (wow, the Scythians were massive stoners)
It depends on what you’re trying to read. Most courses teach Attic Greek, so you’ll be able to translate Plato but will have a good deal of difficulty with the epics. We used Groton’s From Alpha to Omega in my undergrad, which suits me very well but not, perhaps, many others—it’s heavy on grammatical explanations verging into technical.
Read the chapter, take notes, practice verb and noun forms regularly throughout the week by writing paradigms from memory, and assign yourself a selection of sentences at the end of the chapter. Normally I’d recommend you leave yourself some for review later, but Groton’s pretty good about reusing tricky word forms or grammatical features you learned at a pace approximating the forgetting curve, so just doing the ten into-English and 5-10 of the into-Greek sentences each chapter should suffice. I’ll grade them for you if you like.
If you make it through about 30 chapters of that, though, plus the chapter on μι-verbs, you should be able to stumble your way through guided translations. We translated from Steadman’s edition of the Symposium and it was quite good, but beware that there will be quite a few typos in the notes—I sent him about 30 from the Symposium at the end of the semester, and we didn’t even translate the whole thing. But these typos will rarely actually trip you up. If it’s the epics you want, try his Iliad or Odyssey books instead.
Recitation/singing and calligraphy would also be useful skills.
(for some definition of the word useful)
Canon. They’re both serious business, but the canon is the standard you adopt, and the cannon is what you enforce it with.
While you were composing this, someone downthread made the exact same typo.
Truly there is nothing new under the sun.
Also not to be confused with qanun, which is the same word as canon, setting the standard for what the other instruments have to tune to.