This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:
1. Those of you who don’t use ad-blocker may notice some more traditional Google-style sidebar ads. I’m experimenting to see how much money they make vs. how much they annoy people. If you are annoyed by them, please let me know.
2. Someone is doing one of those tag your location on a map things for SSC users. If you sign up, you may want to include some identifying details or contact information, since right now most of the tags don’t seem very helpful unless people are regularly checking their accounts on the site.
3. I’m considering a “culture war ban” for users who make generally positive contributions to the community but don’t seem to be able to discuss politics responsibly. This would look like me emailing them saying “You’re banned from discussing culture war topics here for three months” and banning them outright if they break the restriction. Pros: I could stop users who break rules only in the context of culture war topics without removing them from the blog entirely. Cons: I would be tempted to use it much more than I use current bans, it might be infuriating for people to read other people’s bad politics but not be able to respond, I’m not sure how to do it without it being an administrative headache for me. Let me know what you think.
@Scott: I’d like to register my annoyance with a Google-style sidebar ad. This ad was flashing at me in all the colors of a poorly-paletted rainbow as I tried to read a comment thread. If it were static I’d probably have ignored it and possibly not even noticed it consciously, but having something actively flashing on the side of the screen is acutely distracting and annoying.
I missed the classified thread, but here’s a link anyway.
I just released a small puzzle game on itch.io.
Please check it out, and feel free to give me feedback. Thanks!
https://metamechanical.itch.io/text-mode-adventure
Neat! Considering that it’s free I don’t think you need to wait for a Classified thread, and it’d be fine to repost it in the new OT so more people see it.
On my Mac, the cursor vanishes when it passes over the black area, so I can’t click on the “start new game” marker.
Just click in the window anywhere and then use the arrow keys.
You’re intended to use the keyboard.
Thanks for the feedback though. You’re probably not the only one who was confused by that part. I’ll see if I can fix it later.
It’s working now. Fun.
Stuck on last level.
Spoilers ROT13ed.
V’z cerggl fher V xabj jung vf fhccbfrq gb or qbar, ohg gur gbc cneg vf whfg fybjre guna gur obggbz cneg naq V pna bayl unyg gur zvqqyr naq pna’g fybj be fgbc gur obggbz cneg.
Fairly difficult; I got stuck for a while on, I think, the 7th and 12th.
Could use a reset-level key and a return-to-start option on the menu.
Very cool. Stuck on level 9. I’ll probably take a break and come back to it later.
I have found what appears to be a bug on level 12. I get to this position and am unable to move down.
I can’t reproduce it. I’m not sure what would cause that.
Weird. I did it two or three times. But I guess what I was doing was far enough from the right answer that it’s unlikely many people will even get to the same point.
I’m stuck on the last level now. Really great game! 🙂
So I noticed that most of the residential houses in the area have a double step to get to the front door.
That is, there is a raised porch platform, and then one or more additional steps from the porch level to the raised front door.
Isn’t this violating the ADA? I guess the logic is that a wheelchair user could use the door inside the garage, but there are no doorbells by the garage door. Assuming they have a cell phone to call a friend to open the garage also seems against the spirit of the ADA.
And there are houses with the door facing parallel to the street, which is even worse, because you either need two ramps at the required slope (one to get onto the porch, and one perpendicular to it to get from the porch to the door), or a mess of a ramp at some wacky 45-degree angle or something.
Residential houses are not open to the general public, so the ADA doesn’t apply.
The ADA does not apply to private residences unless a business open to the public is located inside (which would usually be a zoning violation anyway…)
Residential building codes often have nods toward accessibility, like requiring doors to be a minimum size. Hopefully science will figure out how to fix disabled people before politicians get the idea to mandate elevators in all residential buildings and so forth.
Are apartment buildings included in the private residence definition?
It seems so, or at least walk-ups aren’t that rare.
ADA/ABA requirements are a huge ball of nightmare, so I don’t know the answer right off the top of my head. However, apartment buildings are a Group R-2 occupancy, and there doesn’t appear to be an exception to the accessibility requirements for those (R-1 [hotels] with less than 5 sleeping units or detached one- or two-family dwellings do have exceptions). Once you’ve determined that a building or part thereof needs to be accessible, you get pointed at another 400 page code to cover detailed requirements. I don’t know if those detailed requirements are written so you can have an apartment building with accessible units on the ground floor and not worry about upper floors, and it would take a fair bit of reading to figure out. I got as far as there being “dwelling units” (apartments) categorized as Accessible, Type A, Type B, or Type C, but I have no idea what differentiates “types” or how many you would need.
Note that even if you need to have all floors accessible, this applies to new apartment buildings. Old buildings are typically grandfathered in, so the prevalence of walk-ups doesn’t necessarily mean you could build one new. You can see plenty of metal fire escapes on the exteriors of buildings in a lot of US cities, but those are completely illegal in new construction; they’re only permitted as retrofits because at least some way of escaping a fire is better than none.
One story about the 737max is that there were changes to the plane that weren’t deemed significant enough to trigger the type of regulatory review and pilot (re)training that would be required for a new plane design. Is there an ordered list of commercial plane types ranked by this type of risk? In other words, current models that are the most different from the reference model that was fully certified and trained?
How do you propose to quantify “most different” for this purpose?
I’m asking if there is a resource like this that already exists, not proposing new research to create one. I don’t have any expertise in this area, so my guesses about metrics are probably useless.
Given that you are one of the most expert here, your reply implies this isn’t something that exists.
It probably would exist if there were an unambiguous metric that could be used for the purpose. Since there isn’t, any attempt at a ranking would inspire the same sort of complaining and gamesmanship as e.g. college rankings, complete with all involved getting nastygrams from Boeing and/or Airbus lawyers, and it seems that nobody is willing to dive into that mess.
With regards to the ads, they don’t annoy me at all, and I think you should do whatever generates the most revenue for you. Even though this isn’t the most insightful comment ever left on this blog, I thought it might be helpful to balance out any feedback from people who say they are annoying. (Looking at the responses here, I’d say most people don’t care, though.)
I don’t mind ads as long as they aren’t animated. A loading/startup animation is bearable, but a looping animation makes the page containing it all but unreadable for me, because it grabs my attention every 3-5 seconds. I realize this is due to my screwed up brain chemistry, but I’m going to ask anyway because I really like this site, and I would greatly prefer supporting it by viewing ads over feeling guilty about ad-blocking.
Conquest’s third rule of politics is “The simplest way to explain the behavior of any bureaucratic organization is to assume it is controlled by a cabal of its enemies.”
In my experience, a similar rule could be made for the learning management systems used by many large organizations:
“The simplest way to explain the behavior of any large company’s learning management system is to assume it is controlled by a cabal of people absolutely opposed to the use of a learning management system for learning.”
Regarding input lag, discussed here and in the last OT: I saw someone mention 15ms being noticeable and doubted it, but rather than doing research (boring!) I wrote up a little webapp to test the idea. Find it here: https://ineptech.com/latency.html
I’m curious to hear others’ results. I haven’t done a ton of tests yet but it seems like I’m better than chance at 60ms, which is better than I expected.
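(For anyone curious how such a test can be built: below is a minimal sketch of one trial’s core logic. This is my own reconstruction, not ineptech’s actual code – the element ID, key handling, and delay value are all assumptions.)

```typescript
// Hypothetical sketch of a single latency-discrimination trial.
// Assumes a page with <div id="box" style="position:absolute"> to move.
const TEST_DELAY_MS = 60;                 // latency added on "laggy" trials
const laggyTrial = Math.random() < 0.5;   // coin flip: lag added or not

const box = document.getElementById("box") as HTMLDivElement;
let x = 0;

document.addEventListener("keydown", (e: KeyboardEvent) => {
  if (e.key !== "ArrowLeft" && e.key !== "ArrowRight") return;
  const dx = e.key === "ArrowRight" ? 10 : -10;
  // Apply the move after the (possibly zero) added delay. This sits on
  // top of whatever latency the keyboard/OS/browser/monitor already add.
  setTimeout(() => {
    x += dx;
    box.style.left = `${x}px`;
  }, laggyTrial ? TEST_DELAY_MS : 0);
});

// After a fixed number of keypresses, the app would ask "was that laggy?"
// and score the guess against `laggyTrial`.
```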
Practice helped a lot here, on 50ms I started at 6-7 and got to 9-10 after two more rounds. If I’m performing an action in a game over and over, I’d probably develop a similar “feel” of how long it’s supposed to take.
I’d be interested in how well I could feel differences between non-zero latencies, and whether that difference would be larger or smaller than my best 0/X. I can usually tell the difference between 30/60 in League of Legends and other Mobas, but did pretty poorly on 0/30 in your app.
FWIW, this app is measuring input lag only, which is not the same as what shows as Ping in most video games. In LoL, a 60ms ping will manifest very differently for some game events than others; it might be that what you’re detecting is actually client-side prediction failures, which can be caused by an opponent with a higher ping than you and hence appear to involve latency much higher than your ping.
I worked my way through 150, 100, 75, and I had a pretty easy time with 60 (I had to get some intentionally wrong to make sure it even told me when I was wrong). 40 I got 9/10 right. I started having a lot of trouble with 30 ms, but then I realized if I just rapidly alternated left-right-left-right, I could pretty easily find the right frequency and just tell if it was in phase or out of phase. That got me through 30, and through 20 (with some mistakes, 7/10, but I’m pretty confident I was seeing a real signal). That seemed to break down with 10 ms and I ended up getting 6/10, but I’m pretty sure that wasn’t a real signal.
Neat, but it sounds like this is measuring your ability to tap your fingers at a certain interval, not detect input lag of a certain interval. So it seems like this is a vote for 30-40.
Here is a blog post on how latency is getting worse. (Although it may be easy to improve by throwing gpu resources at the problem.)
I assume you’re measuring the effect of added latency on top of the existing latency. If the background latency were smaller, it would be easier to notice small increases. There is declining marginal cost to adding 15ms latency. Since we have high latency, it’s easy to say that your particular 15ms doesn’t matter, but that doesn’t mean that the first 15ms didn’t matter.
Latency is much more noticeable and important in direct manipulation (finger on touch screen) than keyboard. His other blog posts have two links from one research group claiming that people dragging objects can discriminate 1ms from 2ms latency and that they spontaneously notice 10ms latency. (But drawing is maybe 4x less sensitive.)
Yes, I’m measuring whether you notice latency over-and-above the latency incurred by your system, but those are pretty worst-case numbers, I’d be surprised if anyone here sees triple-digit lag typing a character into a browser. (A console, maybe – part of the problem he was pointing out was how bad some platforms’ consoles are)
There speaks someone who has never tried to use Facebook Messenger on my laptop…
I did it at 15ms and got 10/10, so either I’m really lucky, or I can objectively notice input latency. If it’s the latter, I lay the blame at 25 years of PC gaming. Although, one of the tests I did to see if it had latency was quickly pressing left then right and seeing if it followed properly. So maybe I just have a solid test for seeing if latency exists?
I’m surprised that the variance is so wide. Hey it’s like we’re doing science!
It’d be interesting to see if there’s an obvious difference on this metric between, say, a really good LoL player and a mediocre player of the same age. I recall reading that there’s a noticeable difference in reaction speed, but not a big one.
I don’t play online or fast-paced games often – not LoL or any of the other online ones. I think it’s largely that I’ve played enough games like Unreal Tournament on a keyboard to notice when it’s not responsive, and I tested it in a way that very small latency would produce detectable results. That is to say: it takes some amount of time to press a key twice. It takes far less time to press two keys in very short succession. Noticing when it hung on the two key presses allowed me to identify whether latency existed or not.
It seems like that is similar to Douglas Knight’s strategy, and … well, I don’t want to say cheating, but measuring something other than what I was intending to measure. I could fix it by just removing the ability to go left…
What was my strategy? Using a high frequency monitor? Maybe that’s what woah is doing…
[surely you mean drunkfish, not me]
I think woah misunderstands his strategy. I think he’s exploiting a weird bug. If you press a different button in the latency period, it gets discarded. But if you press the same button repeatedly, it doesn’t get discarded. Setting latency high makes this easy to verify and also easy to see that it’s not about human speed.
(I tried to cheat by pressing right repeatedly, to see if latency accumulated, but it didn’t, so good job!)
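(If that’s right, the bug could arise from input handling along these lines – a hypothetical reconstruction for illustration, not the app’s real code:)

```typescript
// Hypothetical reconstruction of the bug described above: while a delayed
// move is pending, a *different* key is discarded, but repeating the *same*
// key is accepted, so same-key mashing sidesteps the test.
let pendingKey: string | null = null;
let pendingCount = 0;

function onKey(key: string, delayMs: number, apply: (k: string) => void): void {
  if (pendingCount > 0 && key !== pendingKey) {
    return; // different key inside the latency window: silently dropped
  }
  pendingKey = key; // same key (or nothing pending): schedule another move
  pendingCount++;
  setTimeout(() => {
    apply(key);
    if (--pendingCount === 0) pendingKey = null;
  }, delayMs); // each press is delayed from its own timestamp,
               // which is why the latency never accumulates
}
```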
I mean, if we’re testing the ability for a user to detect latency… that’s one way you would notice latency. Latency is inherently about the difference between perceived response and desired response. I don’t try to figure out how much latency exists, just that what I see isn’t what I told it to do. If everything behaves how I expect it to, then as far as I am concerned, no latency exists (which might not be accurate from a technological perspective, but that doesn’t matter to users). The same thing occurs in games: they only care about what they expect, not what is going on behind the scenes.
What I intended to measure is your ability to do a thing and see the response to the thing. What you two appear to be doing is testing your ability to tap two keys at precise intervals.
Suppose your system needs 10ms between two keypresses for the second one to register; you two are (it sounds like) testing your ability to distinguish the trials where you need to wait 10ms between keypresses from the ones where you need to wait longer. That’s kind of like UI lag, but it’s not visual – you could do it with your eyes closed.
edit: it occurs to me that I can change this, see continuation in the next OT.
That doesn’t explain, to me at least, why I could get 100% accuracy at 15ms. Yes, it’s a type of bug, but that’s what users notice. I’m fairly certain that if I went down to 1ms latency, it wouldn’t work so well, but at 1ms latency we’re well below the threshold of human vision. When we talk about the ability to discern latency, the only thing the person detecting “lag” is going to care about is responsiveness, not their threshold or method of detection. Maybe I “cheated” but that’s what someone complaining about lag is going to do. “I pressed crouch then jump and all I did was crouch. AWFUL”
I think you’re still not understanding the distinction I’m making.
Define “latency1” as the time between when you press the key and you see the thing happen as a result. Define “latency2” as the time that must be left between two keypresses for the second one to register. The tool I made can be used to find either, depending on how you “play” it, but they’re not the same thing. The first one is inherently visual, and the second one is tactile. More to the point, the first one is very close to what people generally mean when they discuss UI lag, and the second one isn’t, because almost all real apps will accept overlapping keypresses.
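(To make that concrete, here’s a rough browser-side sketch – mine, and hypothetical. latency1 can be approximated with a frame callback; latency2 is a property of the input handling and has to be probed by timing pairs of keypresses instead.)

```typescript
// Approximating latency1: time from keydown to the next painted frame
// that reflects the input. (This excludes OS/monitor pipeline delay, so
// it understates the true end-to-end figure.)
document.addEventListener("keydown", () => {
  const downAt = performance.now();
  // ... update application state here ...
  requestAnimationFrame(() => {
    console.log(`latency1 ≈ ${(performance.now() - downAt).toFixed(1)} ms`);
  });
});
// latency2 can't be read off the same way: you'd measure it by issuing two
// keypresses a known interval apart and checking whether both registered.
```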
Even if there are no keyboard queuing issues, there is generally speaking no well-defined limit below which latency has literally zero impact. (No, that isn’t supposed to be particularly surprising or a rebuttal to anyone else’s claim here.) For instance, imagine a simple reaction-time tester for which the user’s response time is Gaussian with a 200ms mean and a 30ms standard deviation. If the player “wins” by reacting in less than their median time, then, if I count right, their probability of winning drops with a slope of about 1.3% per extra millisecond of latency, whether the user can detect the latency directly or not.
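(For anyone who wants to check that figure, the arithmetic under the stated model works out as follows – my reconstruction:)

```latex
% Response time R ~ N(mu, sigma^2) with mu = 200 ms, sigma = 30 ms;
% the player wins when R + L < mu, where L is the added latency.
\[
  P(\text{win}) = \Phi\!\left(-\frac{L}{\sigma}\right),
  \qquad
  \left.\frac{dP}{dL}\right|_{L=0}
    = -\frac{\phi(0)}{\sigma}
    = -\frac{1}{\sigma\sqrt{2\pi}}
    \approx -\frac{1}{30 \times 2.507}
    \approx -1.3\%\ \text{per ms}.
\]
```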
I could not tell at 60ms, but I’m older and don’t play many fast-paced video games.
I felt like I was guessing randomly at 50ms, but I did do a little better than chance: 7/10 and 8/10
@Scott Alexander
I signed up for the tag thing; I’m unsure of how it is to be used.
Scott Alexander,
Whatever keeps you blogging is better than the alternative.
Scott:
Is the issue you’re trying to address more like:
a. You don’t want radioactive subjects discussed too much here, lest you have to deal with radiation-attracted angry online mobs?
b. You don’t want to be personally associated with some of the subjects/points of view discussed here because you find them offensive or dumb or deeply wrongheaded?
c. You don’t want the community to go up into a flamefest every open thread as some regular participants can’t restrain themselves from having a flamewar with the other side on their pet issue?
It seems like those three lead to pretty different sorts of moderation policies.
Can we go for all three?
He’s at least heavily implied a. and b. in previous posts, and c. is much less of an issue here than on other sites, but it’s obviously something better avoided. All three (as well as other issues) are possible consequences of the kind of posts he’s discussing (at least, that’s my parsing of the post).
c. has always been an issue, and it’s not that much of an issue because the moderation policy has always been geared primarily towards preventing c. The new moderation method may just be a different enforcement method of that policy.
There are at least two different ways of preventing c. One is by banning people for flaming other people. The other is by banning people for raising topics that might result in someone else flaming them.
The second is the one I find problematic.
Not annoyed at all by the ads, even though as a European user some of them aren’t relevant for me.
I get annoyed by: pop-ups and autoplaying videos, ads that take lots of time/bandwidth/processor power to load, and porn ads. I haven’t seen any of those things here yet – the sidebar here is a good model of how I’d like all web advertising to work.
I second everything in your comment.
So I recently signed up for Keybase, and in the process actually got around to making a private key and stuff that I plan to keep long-term. I am thus considering trying to switch over my immediate social circle to Keybase chat, or perhaps XMPP. This fits a long pattern in my life where I convinced people to switch to Google Hangouts from MSN Messenger, then to Messenger from Google Hangouts, then to Discord from Messenger, and so on and so forth. Why am I perpetually dissatisfied with chat applications? Does anyone else suffer from this debilitating disease?
I really like Keybase and I hope someday their business model leads them to open up kbfs programmatically because you could build all sorts of cool shit on top of it, but I don’t really use it for chatting, I use it to store my private git repos.
If you can specify what exactly it was about those chat applications you were dissatisfied with each time, that would provide clues into why you are perpetually dissatisfied.
It might be that none of them offered the features and benefits you wanted at whatever stage of life you were in when you were using them. Or it might be that you were always noticing problems in each application, and once you noticed them you were eager to switch to whatever new application didn’t have those problems. Or it might be that you really wanted to use the new applications for other reasons but convinced yourself it was because you were dissatisfied with whichever one you were currently using.
So, Vincent D’Onofrio playing Robert E. Howard, in a screenplay adapted from the memoir by his one-time girlfriend Novalyne Price and co-starring Renee Zellweger as Novalyne, is a thing that totally happened.
So is it any good?
I’m positive I’ve seen this movie sitting in Blockbuster or whatever, but I had no idea it was about Robert E Howard.
RPG DMs: how do you handle splitting the party?
I’ve long wondered how this is supposed to go, because it’s basically impossible not to leave a few characters hanging during this, and I’ve had sessions where a player or two split off and ran the show for an hour or more. But you can’t just ban it, can you? Or have some had success this way?
Do you switch between the groups every ten minutes or so to keep them engaged? Do you contrive to be sure party splits end quickly? Do you just roll with it and let the others take a food break?
Yes. “Meanwhile, what are [other PC names] doing?” is the most important thing I can do, to keep everyone engaged.
I can either contrive to remove anything the split party encounters that would slow down ending the party split, or hit them with the same random combat encounter the full party would meet, as a disincentive to future party splits.
This pretty much. Party size makes this a less or more likely thing, also session setting. If we’re doing an in town session, I expect each and every party member to more or less do their own thing, so I take a few minutes with each and rotate. In a dungeon, it’s far less likely and often is very short in duration, meaning the party hasn’t really split in a long term fashion. Doing a quick scout down the hall is not nearly as boring because all the other players care.
I run a lot of Cthulhu games, which are primarily investigative, so splitting the party happens a bunch – when they’re investigating something, which may or may not have a time limit, they can’t structure things around keeping the party together.
Mostly I just flip back and forth as quickly as I can, usually each time one part of the group completes a task. “OK, so, you two are done at the library – what are the rest of you doing down by the old reservoir?”
Depends on if you are referring to dungeon-crawl type games or more storytelling games.
Storytelling games usually expect the party to be split a lot of the time, and each player having their scenes/limelight is as important as each player having their combat turn. Usually you keep other people engaged by making the scenes short and rotating quickly and/or farming out NPC duties to other players.
For dungeon-crawlers, I employ a co-/sub-DM if I can afford it, or just make things so clearly lethal that splitting the party is tantamount to suicide. If I’m feeling fancy I might prep something that requires splitting and coordination to solve simultaneously, but that’s high-prep and I’m generally a low-prep GM.
I had a good session where I intentionally split each member apart on minor quests; only one ended up with combat, while the others possibly could have, but it wasn’t expected. I think I cut to each person once or twice and kept it moving pretty quick. Eventually the sorcerer who was fighting a ghost in an underground jail was blown into the sky, allowing the others to grab him from their airship and use the clues they had found to track the spirit to where it was headed and finish up together.
I think three separate instances that each involved tracking turns or whatever would have been a headache, but as done it was a good tool to give each player a chance at the spotlight.
If they’re all in the same town and not separated for more than a day or a skirmish, it usually suffices to just rotate between groups every 10-15 minutes. Minor combat encounters that run maybe an hour of playing time don’t get broken up unless e.g. the B team is racing to the assistance of the beleaguered A team, but an hour of RPG combat can be a decent spectator sport for people already invested in the campaign.
For major splits, which as dndnrsn notes is more a CoC than D&D thing, a split and/or separate gaming session is the only thing I’ve really seen work. Unfortunately, my RPG days almost entirely preceded the rise of Euro-style social gaming, because sending half the party off to Settle Catan in the other room would have been ideal.
And splitting the party can be driven from a split playing session as well – if you know Alice and Bob aren’t going to make the next weekly session or two but still want to be a part of the campaign, arranging a major side quest that can be resolved whenever the two of them and the DM can get together is a useful approach.
Coincidentally, I’m pretty sure that the one or two times I’ve had a major split (with the PCs in different locations) it’s coincided with only the players for one of the groups being present. Another solution to something like this (or to a situation where one PC is going to be stuck in a lab or a library all session) is by handing over NPCs who might be accompanying the party – of course, contingent on there being NPCs accompanying the party.
A lot depends on the GM and players. In the last Vampire game I played, the party was split much of the time, since the focus was plotting and political manoeuvring rather than adventuring. The level of split could vary from every single member being in different parts of the city doing different things, but still able to hit each other up for help if needed, to having four members of the party go on an expedition while the remaining two stayed behind, spending multiple entire sessions in which the two groups could not communicate or cooperate.
This worked for a few reasons. The first is that the GM had a great sense of pacing and we trusted him to give us a fair share of the spotlight. Players were fine with the others having multiple scenes in a row focused on them, because we all knew we’d get our own chance to shine later on. The second is that all the players were deeply invested in each other’s characters. It was rare for anyone to check out or stop paying attention when their character was not in the scene, because they still wanted to follow the other players’ stories. A third reason is that we were willing to tolerate a fair deal of peanut gallery commentary from whoever was not in the scene at present, which also helps them stay engaged. The unspoken rule is to keep the comments to the little pauses in action and dialogue so as to avoid interrupting.
I’ve definitely had times where I paused the action and said: “Guys, you can’t split the party because it’ll be boring for the people who aren’t there. How about you all go do Thing A, and then you all go do Thing B. I promise you have enough time to get everything done without splitting up.”
I’ve also had times where the rogue announced that they wanted to scout ahead because they were stealthy and nobody else was stealthy, and I sort of just rolled with it and felt guilty later.
Isn’t that the point of the rogue? Everyone else should care what the rogue is doing – it is directly linked to their survival (or not) later.
For some reason, I thought I was banned until 4/10 and there’s been a few things I wanted to contribute to since my ban expired. Weird.
Anyway, I was part of a community about a decade ago that imposed a sub-forum ban system for the political forum, and it was relentlessly gamed by the same type of people causing the problem here. Eventually that forum became a one-sided venue that set the political tone for the forum as a whole before being shuttered as a cesspit of sniping and backbiting.
I don’t think it’s a good idea, I think compromising on your core values will encourage the same people to keep pushing for more and more compromise until either you’re one of them or you shut up. I’m confident that would represent a significant loss to America as a whole.
I also believe that there is an effort to shut down all venues for discussion between political parties and ideological movements to encourage greater division and resentment among groups, in the expectation that the disruptor’s positions have stronger emotional appeal and/or can be enforced through mob action. This comments section is the best comments section and one of the best discussion forums on the internet. The willingness to tolerate abhorrent opinions allows them to be explored and understood, and that’s the first step to overcoming them and reaching the person instead. You don’t de-radicalize people by calling them monsters and you don’t prevent them from gaining adherents by saying “That man’s bad”.
More, not fewer, spaces need to be like this.
People, I give you comment of the thread?
Not to be pretentious or anything, but this is how I feel, and I believe that silencing a minority viewpoint communicated in a respectful way has the potential to make this place much worse. I do my best, as a person, to see what axioms someone is using to establish their viewpoint, and having a place where multiple viewpoints from all across the political spectrum can meet and mix feels like an extremely valuable thing to me.
This post represents how I feel about this, but delivered far more eloquently than I could have imagined delivering it.
That’s very kind, thanks.
Given that SSC has had many posts on the All Debates Are Bravery Debates topic of how it’s oh-so-very-seductive to see one’s self as the oppressed truth-teller surrounded by hegemonic conspiracy, I think beliefs like this demand extremely critical reflection rather than “Top comment of the thread, all posters may award themselves 100 victim points”.
Unless you have screencaps of the disruptors’ Discord chats where they discuss their ebil plans to shut down freethinkers, I call Oppression Olympics on this line of thought. “There’s a concerted effort to shut us down!” pattern-matches so well to base efforts at claiming the moral high ground of a victim culture society that, well, pics or it didn’t happen.
@Butlerian …there’s literally an effort to shut Scott down explicitly because he’s allowing Bad People to have conversations. That’s what this whole discussion is about. Some folks like Dave Rubin and Joe Rogan get protested because they talk to Bad People.
I’m calling Isolated Demand for Rigor on the requirements in your demand for proof, too. I haven’t posted much, but I’m fluent in SSC too 😉
What’s an example of how to game a sub-forum ban system?
Same ways you game a normal ban system. Brigading, having friendly mods who aren’t that aware of their bias, etc. It just only applied to the political forum, which made the thresholds for banning people lower.
My sense is that discussion between political parties and ideological movements works much better in person than online. Online text leaves out too many social cues, and without that in-person feedback people have a tendency to go way beyond what they would do in person, causing a lot of unnecessary offense, which then just escalates.
It requires strong moderation, which if somebody like Scott is willing to do it, good for him! If people don’t want to do that work on their forums, I fully understand, because I wouldn’t want to either. It seems like a thankless task.
Without that moderation you get the usual shitshow we can already see across the internet.
Are nuts and seeds healthy foods?
Mainstream nutritional advice, if I understand correctly, says that they are mostly healthy fats plus some protein, vitamins, and other good stuff. But mainstream nutritional advice is somewhat questionable, mainly due to its long history of failure to mitigate the obesity epidemic in the West, and people on teh interwebz have recently been talking about the risks of excessive omega-6 fatty acids (mostly they mean the cheap seed oils used in processed food production, such as sunflower or canola oil, but I wonder if the argument extends to things I like, such as walnuts, pecans, peanuts, or chia seeds). Of course, people on teh interwebz believe all sorts of crazy stuff, especially about nutrition, so I’m not going to trust them uncritically.
Are there any studies on the matter?
Wouldn’t there need to be some history of the population actually following mainstream nutritional advice for it to be implicated in the obesity epidemic?
If the advice given is too difficult to follow, something is wrong somewhere, since obesity wasn’t nearly so prevalent in the past.
Either we are terrible at resisting temptation (in which case, maybe there ought to be a law) or the advice doesn’t account for the difficulty in following it.
Or many people don’t even try.
Many things were different in the past; modern processed foods are a new thing.
And either those conform to diet advice, in which case it is wrong, or people eat them in spite of it, in which case the temptation is too great. Or alternatively, people choose to trade off health for pleasure, but in that case they should bear all the costs of doing so.
Aren’t they paying it?
It’s a very complicated subject, as we discussed in a recent thread on how we would change health care.
But, to give a non-medical cost example, the fat acceptance movement is all about mitigating social costs of being overweight.
Are you implying higher healthcare costs, some of which are paid by others through insurance (assuming that insurers are not allowed to discriminate by weight)? Then we should also take into account the lower cost to the pension system. People tend to forget about that when using healthcare costs to justify taxing unhealthy habits.
I don’t expect the fat acceptance movement to have any significant success. And in any case, what sort of social cost do fat people impose on others, even if others are to stop shaming them? An ugly sight?
I’m going to gently extricate myself from this topic, with the qualification that no, I hadn’t done the numbers to see if anyone opting for any particular trade-off was creating net negative externalities or not.
Last post was worded poorly.
IDK if the ‘diet wars’ are considered CW, but my understanding is that mainstream dietary advice was to shift calorie intake from saturated fats to grains. IIRC saturated fat as a % of calories fell, but IDK if it fell in absolute terms.
Perhaps the argument can be made that the weight gain was from calories in between meals, which was neither the result of the mainstream diet nor a failure to follow it.
In addition to what you said: they’re very calorie-dense, which is arguably the biggest problem with the modern diet. And they have a surprisingly large thermic effect, so you can cut about 15% off the calories on the label. Which still leaves you with quite a lot.
I’m not current with mainstream nutritional advice. I’m aware that it fucked up big time sometime at the end of the last century with stuff like “cholesterol bad, trans fats good,” but other than that I’m not sure how up to date it is. Cutting-edge nutritional advice is very much evidence-based, AFAIK. When in doubt, Google Scholar – there should be something for almost any question.
No one ever said that.
I agree. My mother’s a dietician; I’ve read the newsletters she was getting in the 90’s promulgating the Mainstream View. The only disagreement on trans fats was whether they were Bad or Even Worse.
Sure, neither doctors nor dieticians ever praised trans fats. But those are two very different statements! There are often two different consensuses, one among dieticians and one among physicians. When I say “no one” I mean neither. But if you’re going to talk about the Mainstream View, you should ignore the dieticians and talk about the physicians, because they have so much more contact with patients. That’s a pity, because the dieticians are much closer to the research, but it’s important to acknowledge because it’s half the problem.
There was a point some 20-30 years ago when margarine was considered an alternative with less cholesterol, at least in my corner of the world. I definitely don’t suggest it was the scientific consensus at any time, but it definitely was popular enough.
That’s a very different statement. At most someone said “trans fats are bad, but the trace amounts in margarine don’t matter compared to the dangers of saturated fats,” but that’s very far from “trans fats good.”
The standard advice, as I remember it, was to replace butter with margarine. The margarine in question was hydrogenated vegetable oil, which is transfats, which, as I understand it, turned out to be much worse for you than the saturated fats in butter.
The advice wasn’t put as “transfats good” but as “margarine is healthier than butter,” which was not true.
The 90’s is pretty recent. What about the advice in the sixties?
This. All through my childhood there was loud and fierce insistence among members of my family that margarine was better for you while butter was verboten. I’m sure my dad still uses margarine exclusively.
As I said in the other comment, that is not true. Margarine is mostly not trans fats. I think it is 10% trans, but sources vary. But regardless of the number, people promoting it were not thinking in terms of these categories. They were promoting it as unsaturated.
Also, as far as I understand, since it was discovered that trans fats are harmful, the amount of trans fats in margarine was reduced (eliminated?).
I don’t think it’s possible to reduce the amount of trans fats to 0 without completely changing the technology.
AFAIK, vegetable oils are saturated with the help of a chemical catalyst, and this catalyst doesn’t have the specificity enzymes have. So unless they start producing margarine with enzymes, I don’t think it’s possible.
I have seen butter spreads which contain vegetable oils, and it is possible to make those trans-fat free.
Is there a practical way to separate the trans/cis fats after saturation? If so, they could be produced as normal and then the trans-fats separated and discarded.
In the 80s, the Center for Science in the Public Interest (Nutrition Action) promoted trans fats as a healthier alternative to saturated fats. Is this the same as saying they’re good?
Thanks!
I retract my sweeping statement. But I don’t think that this was representative. And while it sometimes claims that trans fats are better than saturated fat, it mainly claims that they’re no worse than saturated fat, to conclude that the bundle of fats in margarine is better than butter.
The CSPI seems pretty mainstream as far as I can tell. My parents get the newsletter, and it’s mostly reasonable stuff – but they have a way of sneaking low-carb in while pretending they’re only addressing fat, which I find disingenuous. Stuff like, “instead of this high-fat meal” (picture of steak with baked potato) “why not have this low-fat alternative” (picture of boneless skinless chicken breast with green salad).
I don’t think that it was representative in that most sources weren’t so explicit as to say “trans fat bad” the way that this one did (which is the opposite of “trans fat good”).
It was representative in that its conclusion was about the bundle of fats in margarine, not driven by trans fat alone.
As best I can tell from a little googling, the shift away from transfats and towards margarine made by a process that didn’t produce them happened in the nineties.
It sounds as though the process for making margarine before that produced both saturated fats and transfats. It didn’t produce cholesterol, and the amount of saturated fat may have been lower than in butter.
Here is advice from the Mayo Clinic to use margarine instead of butter. The writer does say to use softer margarine, because she says it has less trans fats.
So mainstream dietary advice still says to use margarine.
I would think they would tell us to use mayo.
Please tip your waitress.
Where does this claim come from? Seriously, can someone help me out here?
If I go to the store looking for processed crap, I can get something like canned soup or TV dinners. The canned soup will be about 250 calories at the upper end and the TV dinner will be about 350 at the upper end. Soups like chicken noodle can be as low as 120 calories per can. As far as I can tell, you could be stuffing your face with these things all day and you’d be vomiting it up before you reached 2000 calories.
If I go to a restaurant I will be struggling to find a 1000 calorie dish. Take a look at Subway’s menu; their 6″ (meat) subs range from 730 calories to 260 (!!). A foot long will leave you full for most of the day; even if you ate two of their highest calorie subs every day, you’d be gaining, what, a pound or two a week? Which tails off after a while, too. I tried the nutrition calculator at Taco Bell just now, and apparently my last visit I ate 780 calories. I calorie count at every restaurant I go to, and I rarely find a big meal that’s more than a thousand or so calories. If you skip breakfast as many folks do and eat two big meals, I don’t see how it’s physically possible to do anything more than maintain an average weight.
Throw in extra condiments, a large coke, bag of chips, and a cookie, and you’ve got yourself a meal. Also, the direction of the problem, I suspect.
As apparently you’ve escaped their clutches, Starbucks would like a word with you.
Yeah, I don’t drink coffee. Anyway, neither Subway’s nor Taco Bell even do fries (well, the latter only seasonally). You can buy Sun Chips at Subway, but those are only 140 calories.
The only two places I’ve been to recently where I can get a guaranteed high-calorie meal—the kind that will get me to 2000 even if I eat a low-calorie dinner—are Five Guys and Mr Hero. At the former I can build a 1200-or-so-calorie burger plus a few hundred calories in fries, while the latter has insane 1800-calorie sandwiches that leave you hungry in a few hours, and that’s before fries and a drink. I can’t do the same with most restaurants, fast food or not; if I try, I’m at like 1800 calories at the end of the day and not particularly hungry.
@Nick
For the restaurant meals, what exactly are you ordering? You can easily get a higher cal meal at a sit-down than 5-Guys. Plus factor in 5 soda refills or whatever. Plus maybe an appetizer or just some bread, and even worse if you do dessert. A typical person could easily exceed 2000 calories at a sit down restaurant and only feel “mildly stuffed”, and that probably wasn’t the first thing they ate that day and may not be the last.
I actually haven’t been to many sit down restaurants lately, so I haven’t been doing the calorie counting there. (It’s also harder to find nutritional information for those places, so you’d have to estimate anyway.) I’ll just pick two local places and see what the numbers look like.
Let’s go with the Melt, which I went to not long ago. The menu items are in the range of 500-1400 calories, with a lot around 800, which is definitely better. Add fries and a drink and most meals will be 1500. But those meals are also enormous, and I don’t know anyone who walks away from them feeling merely “mildly stuffed.” @baconbits9 can contradict me if he’s had a different experience; he’s from the area, I think.
For the other, let’s go with the Macaroni Grill, which looks to me like a nice sit-down place in Fairlawn. There’s a much bigger spread here, from little 500-calorie pasta dinners to a 2,000-calorie meal (!!), and lots in the 700-900 and 1100-1300 ranges. I don’t think sides make sense here, but appetizers and drinks do, so these could easily be 1500-2000 calorie meals. I think you’d be pretty full, though.
So it seems I was wrong, sit down restaurants go to 2000 calories easily. But if people have counterexamples in the fast to fast-casual range I’d be more interested in those.
I think the drinks and unlimited refills are definitely a huge culprit here. Just to look at some numbers: if you get a large drink at a fast food place, let’s say a 40oz Coke, that’s an additional 480 calories. If you get a refill on your way out, that’s 960 calories that probably don’t even register as having eaten anything.
If you are looking for the calorie culprit, I think focusing on meals is the wrong track. People get a lot of calories through snacks and drinks. Starbucks frappe-whatever on the way to work, two donuts from the break room, jamba juice on the way home, handful of chips before dinner, bit of ice cream before bed.
“Oh, look, I skipped breakfast today, must be keeping the calories down!”
See previous answer by Randy M.
I’d also add a dose of healthy skepticism on low-cal TV dinners. I’m struggling to find anything under 400 calories. Most things with some fat have about 250-300 calories per 100g, and they weigh 250-300g. Add a soda/beer and a dessert/snack, and you’re into “Damn!” territory.
I think you’re just somehow avoiding all the sources of Calorie-dense foods that are common in America.
I went to Bertucci’s for dinner yesterday, a mid-scale sit-down chain Italian restaurant. Their menu has calorie counts listed, and I had what was a fairly typical pasta dish, which was around 1,100 Calories. They also provide free rolls with olive oil, and I had a glass of beer, so just from those alone, I’m thinking I probably hit 1,400-1,500 Calories. As an adult male, that’s like 3/4 of my daily recommended Caloric intake, all in 1 meal – it’s a higher proportion for non-males.
And Bertucci’s isn’t some outlier in this – I always see entrees in the 1,000-1,500 Calories range when I go to similar-scale chain restaurants like Cheesecake Factory, Friendly’s, or 99. Some of them offer free bread at the beginning, and if you tack on a beer or non-diet soda, that’s easily >=75% of one’s daily recommended Calorie intake in 1 meal.
Also, I haven’t had TV dinners in a while, but I’ve definitely seen ones in the supermarket that have 800+ Calories per serving. I think there’s a brand called Hungry Man or Working Man or something like that which actually advertises the fact that it’s so high-calorie.
But really, even as calorie-dense as these meals are, they’re not what I think of when I think of modern US diet being too dense in calories. I think of snacks and sweets. A single medium-sized chocolate chip cookie can easily have 200-300 Calories in it, and it’s often easy to snack on 2-3 of those at a time, which don’t really fill you up much but take up over a third of the amount of Calories you need in a day. A small bag of chips can be 200 Calories as well, and if you’re frugal and buy a big bag of chips, you’re looking at 1,000+ Calories which can be easy to just mindlessly eat through once you get started. A 20 oz non-diet soda can be 200-300 Calories as well.
You know, sometimes I wonder if there’s a certain moralistic impulse that sabotages people’s diets. Like, you’re supposed to eat dinner, right? Part of being a proper upright person is eating a healthy dinner. If you live with family, it’s also oftentimes a social activity that can be hard to duck out of. So my mental image of what the typical person does when they mindlessly eat through a whole bag of chips is to feel really bad about it, but still make themselves a healthy dinner, because that’s what they’re supposed to do.
This is obviously counterproductive: they ate too many calories, and now they’re adding even more calories. That’s how you get fat. Whereas what I do when I eat a whole bag of chips, or an entire tray of cookies, or half a jar of peanut butter, is simply to call it dinner. Sure, having my dinner be junk food is bad, and not eating my veggies is bad, but in terms of weight control, dinner being all the cookies is strictly superior to eating all the cookies before dinner.
Basically, I’m under the impression that a lot of people are operating on a “healthy” and “unhealthy” food dichotomy that supposes the one makes up for the other, as if weight gain were caused by an imbalance of the humours. This makes it difficult for them to take the proper corrective measures when they make mistakes.
Reminds me of breakfast cereal ads. Calvin and Hobbes said it best:
Of course, breakfast for me is usually three cups of black coffee and no food, so it’s not like I have a leg to stand on here.
Nornagest says “I like my women like I like my coffee: black and more than one.”
I am not convinced of this. Occasionally, sure, no loss, but habitually, it may be better to have a nutrient + calorie surplus than a nutrient deficit.
Some “junk” food might not be so bad, though. Nachos with salsa could be a meal, or some homemade cookies with eggs and oats and so on. Make too many meals “bag of Funyuns” and that will catch up with you fast.
In any event, though, I’m careful about the messages I send to my kids about your point. I’m the one who puts food on the plates; why should they be compelled to finish them? We aren’t about to start a fast or anything like that, so we don’t have a “clean your plates” rule; we have an “eat till you’re full, then stop” rule, accompanied by only infrequent junk being available at all.
@Randy M
Good for you, I understand why “clean your plate” was a thing in the past but it is long past time to retire it, at least for the majority of the population in the developed world (essentially anyone not living in extreme poverty).
Is the practice of bribing kids to eat more dinner with the prospect of dessert/snacks after even worse? “Finish the rest of your dinner and we can have ice cream!” (Now the kid has eaten too much dinner and topped off with ice cream)
Probably, although I’m guilty of it on occasion. “You want ice cream? But there’s dinner left, eat that!”
I think the better practice is to keep portions smaller when you know you will be serving dessert and limit availability of sweets generally.
And please don’t bribe every little bit of good behavior with candy. I’m shocked at how many fillings some children I know have.
But operant conditioning works so well. Telling a kid to do something unpleasant because it will be good for him, or because it is his duty, and the only reward for this will be in the doing… well, there’s a reason there aren’t any children’s books by Marcus Aurelius.
@Nybbler
There seem to be children’s books for everything these days. Your comment immediately made me think about Zen Shorts, but it turns out someone even wrote a series called Little Stoics.
This is just the result of reading a bunch of anecdotes, but it seems that no-sugar households result in children who binge on sugar. It’s better to have a moderate sugar household.
@The Nybbler
Also to create compliance when the reward/punishment structure no longer exists? I doubt it.
The goal of getting kids to eat veggies is not just to have them eat well while under control of the parents, but also when grown up and in control of their own diet.
@Nancy Lebovitz
An attempt was made to raise me without sugar, so I wouldn’t develop a taste for it. Then Halloween came around. Turns out that liking sweet things is innate.
@Aapje
If conditioning and habituation (that is, getting used to the unpleasant taste of vegetables) doesn’t work, what does? I expect most kids, as adults, will end up continuing to eat more or less what they ate as kids, or perhaps what their co-habitants ate as kids.
@The Nybbler
I was opposing the idea that operant conditioning is sufficient.
People who are raised under strict rules that they don’t come to believe in, often do not autonomously follow the rules when they believe that authority is not present. Of course, if the rules actually had a purpose, this can result in a (painful) learning experience and possibly even one that one cannot recover from.
It works a lot better to teach the child that their own goals require the behavior: “If you eat fast food too often, you will get fat and will get bullied.”
Operant conditioning can then be used when the kid is too young to have self-control, sufficient causal reasoning or such, but once the kid gets older, the response to misbehavior should shift to appeals to their self-interest and/or goals.
This is not only more reliable, but it fundamentally improves the parental/child relationship, as it’s not adversarial, but cooperative: “I am helping you achieve your goals” rather than “I am making you do what I want.”
PS. I have no problem with habituation to some extent, but it can be taken too far.
A kid who never has to eat vegetables is how you get Warren Buffett, who only[1] eats hot dogs at 89.
You can learn to like foods, or at least tolerate them. Kids can’t realize this. Sometimes they outgrow it on their own, but knowing that trying a new food won’t kill you is a valuable life skill parents want kids to learn.
[1] Not literally, but his diet is like an autistic kid’s.
Here and here are links to a randomized controlled trial showing that giving people an ounce of nuts each day causes them to have fewer heart attacks and fewer deaths. Giving people olive oil was even better. (Giving people nuts is a better intervention than telling them to eat nuts, because it is causally downstream.)
This is by a very large margin the best study of the health effects of diet that has ever been done in the history of the world. It had some randomization problems and people are currently freaking out about it, but while it’s possible that they have secret information that they refuse to share, it’s probably just that they are bad at statistics.
Nuts are diverse with different fat profiles. I think that almonds are considered to have the best (relatively high 3:6 ratio, though not much polyunsaturated, lots of monounsaturated). This study was walnuts+almonds+hazelnuts.
Thanks!
My feeling is that not much is known about what people actually eat, let alone how people used to eat, and the effect of food on weight and health.
As a result, people invent examples of good or bad eating and guess at the results.
I’ve heard that snack foods are engineered to be pleasant to eat without being satiating, which explains why it’s easy to go through a whole bag of chips or liters of soda. And that people were exercising less, but also eating less (in the 70s), until these engineered foods were developed. I give this a maybe.
I have a notion that some fraction of the gain in weight is a result of dieting. I’ve seen a lot of anecdotes from people who lose weight by dieting, then gain it all back plus 25 pounds. Some people do this three or four times. And then (at least in the anecdotes I see), they stop dieting. It’s more common for their weight to stabilize than for them to lose weight.
I agree that some portion of weight gain is caused by dieting. We’ve all heard that yoyo dieting is bad, but it wasn’t until I started seeing a bariatric specialist* that I understood why.
Most dieters don’t eat enough protein. So while they lose fat, they also lose muscle, and muscle mass is the best way to boost your daily calorie burn. This is why low-fat lots-of-whole-grains-and-not-much-else diets start out with rapid weight loss that slows as you approach your target weight. You lose fat but you also lose your fat burners. Then you go off the diet and start eating junk again, and still don’t eat enough protein, so your muscle mass stays lowered, while your fat increases again. Eventually you get fat enough that you diet again, which is more difficult this time, and you lose even more muscle mass in the process. Rinse and repeat, and eventually you join the fat acceptance movement because it’s just not possible for you to lose weight without starving yourself.
The moral of the story is eat lots of protein! Egg whites and fatty fish: eat ’em every day if you can! Don’t drink alcohol with food! Limit yourself to 100g of carbs per day to ensure slow and steady weight loss! I’m not a dietician, don’t take my advice without consulting an expert!
*I’m not morbidly obese, but I’m 265lbs and would like to weigh 200lbs (I’m 6’1″), so I enlisted the help of someone who can make that happen.
Be careful — you’re running into the fact that the concept of “a healthy food” resonates emotionally but doesn’t correspond to the real world. You can have “a healthy diet” or at least a diet that is healthier than another diet. But there is no food for which eating an unlimited quantity of it will always be better than the same diet without it.
I’m against CW-specific bans because of the risk, even unintended, of banning opinion space in the name of banning bad behavior. If a topic is off-limits, it should be off-limits for everyone. That said, in cases where a poster is good except for a particular bête noire they bring up all the time inappropriately, I can see the reasonableness of issuing issue-specific warnings or “soft bans,” so long as they are public and specific (not just “culture war,” but “Alice is warned to stop posting low-effort Trump swipes,” “Bob is warned not to post about horrible banned discourse for three months or a permaban will result”).
We’ve had that before. A user was told to stop bringing up Ayn Rand unless they did a book report on Atlas Shrugged to show they read it. It was never tried again, maybe for good reason, or maybe Scott forgot about it.
It was Jill:
Seems likely that a ban would be a kinder punishment. (I’m yet to find a political comment of hers that was, in my opinion, readable.)
That’s not quite the only case we’ve seen. We had another who was specifically warned to stop using too many weirdness points. Both of these quickly led to actual bans, although I’d say that EC (who sparked this latest round) was a somewhat higher-quality poster than either of those two. (At the very least, he was polite and didn’t cause huge fights every other OT.)
By that standard, CW topics have to be off-limits everywhere that isn’t willing to accept dumpster-fire-in-a-cesspool level dialogue, because there will always be people who want to drag the dialogue down to that level and they will go out of their way to find unspoiled fora to spoil. For productive dialogue on some topics, anywhere, you absolutely have to exclude some people.
The question at hand is whether it is necessary to exclude those people absolutely, or whether we (by which I mean Scott) can make room for them in a limited non-CW capacity. I am skeptical that this would work very well in practice, but it might be worth a try.
This is more or less my position. There’s a line to be walked between recognizing that certain types of conversation are unlikely to further a site’s purpose and allowing for a heckler’s veto.
On individuals with inconsistent contribution quality, I’m a fan of short-duration action on a hair trigger, e.g. a 72-hour ban with no warning for a litigable infraction. This contrasts with waiting until someone’s net value goes negative, which can mean tolerating a lot of bad behavior from a regular. The hard question isn’t what erosion of norms they personally are responsible for, it’s what effortposts you missed because their authors went elsewhere.
I think this is probably a good idea. A 72-hour ban for a stupid comment is definitely more helpful than a 3-month ban after a dozen stupid comments with no action before that point.
+1
Which is also good parenting or teaching advice.
Not the bit about banning from the premises for 72 hours, but the clear, consistent enforcement from the start.
It is better, but it requires much more prompt action from Scott, which I don’t think he wants to do. Given that low administrative headache for him is one of the things we’re optimizing for, I’m not sure it’s a realistic option.
I think you would have to live in San Diego to get away with a 72 hour ban.
Is there a story behind that?
I don’t suppose Mark Kleiman would like to volunteer to be a mod?
@ Nick:
Unfortunately, I fear that goal directly trades off with the goals of active moderation*. As Randy alludes to, action is made most effective by shortening the feedback loop and minimizing false negatives. Trying to substitute by increasing punishment as a deterrent has mixed success and notable downsides. This can be confirmed by a steely-eyed criminologist with decades of data, a competent schoolteacher, or a mediocre dog trainer. But that’s a different rant.
A very notable benefit of preferring short-term punishment is that the lower stakes mean false positives are less damaging. There’s less inherent need to litigate moderator decisions, and the moderator has more opportunities to hear about it if they legitimately overstep.
(I thought of a few techniques that could be used if community pushback continues to be a problem, but never needed to implement them when I was moderating.)
*I think there’s a difference in philosophy between moderation that shapes the behavior of individuals and moderation that selects for a certain population. The aspirational goal of promoting positive contribution from a diverse population would necessitate the former, I think.
I’ve thought before that highly specific Personal Topic Bans would be an improvement over full bans, but they might not be practical. Remembering a list of banned people is probably hard enough; this would involve remembering a matrix.
In any case, I’m against tightening moderation any further until such time as Scott feels able to return to ideological even-handedness.
On the other hand, an informal “Hey, why don’t you lay off the CW posts for a couple weeks, you’re getting to be a one-trick pony” might work out okay.
https://www.thecut.com/2019/01/does-duolingo-even-work.html
Claims that Duolingo is pretty useless. Anyone with experience one way or the other?
I think as a stand-alone tool it’s unlikely to get you anywhere, but it might be a good way to feel one is getting started and/or to bone up on basic vocab. I think the sort of practice it offers is too decontextualized and divorced from the practical situations in which you’d use a language. And as far as that kind of tool goes, it seems to me not as good as Rosetta Stone or Pimsleur, though it has the advantage of being free. As usual I will recommend live tutoring on italki and jumping in with reading and listening as soon as possible with LingQ.
My experience with Duolingo is “hit and run” — using it intensely for a week or two, then ignoring it completely for months, then starting again, which of course defeats the entire spaced repetition approach — so I can’t talk about how useful it is at its goal, but at least I have an idea of how it works.
From my perspective, the article is “kinda true, but in a boring way”. It says that Duolingo is less efficient than being fully immersed in the foreign language environment. No shit, Sherlock!
Then it complains about the choice of topics. The usual textbooks have lessons focused on conversational situations, such as “family”, “in a restaurant”, “in a shopping center”, while Duolingo has lessons more like “adjectives”, “numbers”, “past tense”. (Note: Duolingo also teaches those words within the context of entire sentences. It’s just that the sentences do not try to make a coherent story.) Maybe this is a valid criticism; I am not sure. On the other hand, I am happy that when I want to refresh the past tense, I can click on the lesson called “past tense”, instead of having to remember that the past tense was introduced in the lesson “visiting Grandma”.
Speaking of the practicality of the lessons, I guess you can’t make everyone happy. Different people use language for different purposes; from my perspective, “ordering food in a restaurant, small talk, arguing with a policeman about traffic violations” is not the central way to use language. I used to be annoyed when lesson 6 in a textbook is about ordering various kinds of meat in a restaurant — as an aspiring vegetarian, I often don’t even know what some of those words used to describe various ways of chopping and cooking meat actually refer to, so why would I spend an entire lesson learning their English versions? — but I guess for people whose preferred outcome of learning a language is “go for a vacation, eat in a restaurant, get wasted, and cause a traffic accident on your way home” this is one of the most important topics.
My complaints about Duolingo would be completely different:
They keep changing how the main page works. At one point I was satisfied: the lessons were clearly marked as “freshly learned”, “learned long ago, needs refreshing”, and “not learned”; then they kept playing with colors and meanings, and I am not sure anymore what anything is supposed to mean. (I guess the attempt to dumb it down for the average user made it less useful for me; and I am not really sure the average user was actually made happier.)
When you choose to practice a random topic, it picks a random lesson and gives you 30 exercises in a row from that lesson. I would prefer to have 30 exercises from different lessons instead; my whole reason for clicking the “random” button was that I don’t want to proceed lesson by lesson.
But none of this is addressed in the article.
I’ve been doing Duolingo Spanish for two years. IMHO, if you want to play a game on your phone, it will teach you more Spanish than playing Candy Crush would, and isn’t a bad way to build vocabulary, but you need to supplement it with something. I’m a big fan of the Living Language Spanish podcast, and of chatting online.
I tried Rosetta Stone Spanish and didn’t find it any better.
The only things it could even in theory be good for are vocabulary and beginner-level grammar (I don’t think you could seriously claim that the transcription/speaking/translation exercises are helpful in teaching you to understand and speak to people or translate actual texts, since in the real world language doesn’t come in bite-sized chunks that contain only vocabulary you’re familiar with). Is it good for those things in practice? Well, it’s better than nothing, and probably as good as a lot of language courses. Definitely worse than proper spaced repetition software and a grammar book in terms of efficiency, but being less efficient means you’re more likely to do it.
I have kind of vaguely dabbled on Duolingo in a few languages, but the only language that I have studied only on Duolingo is Hindi – I got about two thirds of the way through the tree – and I now can remember very little that I could produce (though would presumably be able to recognise a fair bit more), and basically nothing that I might actually want to say to someone in the real world. One of the phrases I remember was one that translated as ‘What is tea? What is water? Who am I?’, which is… likely to be useful only in some very specific circumstances.
(Luckily, there are people working on adding Hindi to LingQ, the Krashenite comprehensible-input-based site that Onyomi turned me on to and is recommending here, so hopefully that will soon be available as a better alternative, albeit not a free one.)
I’ve used DuoLingo for the last 2 years or so for Spanish, and I wish I’d moved onto something else sooner. I can get the right answer so easily without really having learned anything. Especially for multiple choice questions. The only really difficult exercise was translating from English to Spanish, and having to write it out myself, which in the App was quite rare. I completed the course quite a while ago, and kept practicing, but am extremely far from fluent.
I’ve switched to Anki, and I love it. Sure, I can’t ask Anki questions, and that can be frustrating, but fortunately I have some Spanish-speaking friends who help me with that. It’s so quick, because I don’t have to enter anything, I just indicate how difficult it was for me to come up with the answer.
For what it’s worth, you should try the Krashenite style and see if it works for you. For Spanish it’s pretty easy to get beginner to intermediate books of short stories that come with matching audio, such that you can listen and read at the same time, without having to stop and look up too many words. Much repetition is recommended; and, like Onyomi says, also book tutors for one-on-one lessons so that you can practice speaking with a real human interlocutor.
I’m not a connoisseur of language learning methods, but I’ve tried Duolingo – twice. It’s better than nothing, but I don’t like it.
Good:
– gave me a chance to practice voice-to-meaning and voice-to-spelling, which I wasn’t getting elsewhere
– if you use Chrome, it can do text-to-voice exercises, but not very well
– it was quite useful for attempting to resurrect my knowledge of a language in which I was once reasonably fluent, after 20 or more years with limited opportunity to read the language, and almost no opportunities to write or hear it
Bad:
– lots of memorizing the specific answer to use for this particular question – sometimes there are n possible translations, it only accepts one, and when the same word comes up elsewhere, it allows more or different choices. This is well beyond “meanings in context” and amounts to “sloppy coding”
– the system thinks that a word is a sequence of characters. “boy” and “boys” are different words. But two different parts of speech, with the same spelling, are the same “word”, even if the meanings are unrelated.
– poor proof-reading
– feedback on errors is eccentric, and sometimes incomprehensible. E.g. suppose you are asked to translate “you run”. In many languages, there are different words for *one* you and a group of “you” – e.g. “tu” and “vous” in French. There are also often multiple words potentially translated as “run” – e.g. “rennen” and “laufen” in German. So you try “du laufst” – except you can’t spell, and wind up with “du lauffst”. Its favourite choice for this question happens to be “Sie rennen” – so it tells you “du lauffst” should have been “Sie rennen” – even though it would have accepted “du laufst” – because it’s “too hard” in general for it to figure out which form you were trying to use. This was new breakage the last time I was on Duolingo; it precipitated me leaving again.
– almost no explanation (e.g. of grammar) – it’s all “interpret this sentence” (or sometimes, phrase).
– material changes frequently, whereupon it winds up confused about what material you actually know, leading to rework. Also, there’s always a huge spike of typos etc. after any new material is released, and you can’t choose to stick with the old
– meh gamification, with rewards you can’t use for anything
Summary: maybe someone who learns differently than I do could learn a language from scratch with this, at least to some vaguely usable level. But they’d memorize a lot of errors in the process. With German, where I’d spent some time with a “German for reading knowledge” course, travelled in the country (attempting to use my bad German), and had a fluent friend to give reality checks, it was somewhat useful **on top of other methods**. With French – where I was once fluent – it was excellent for blowing off the rust. But I went from knowing a handful of Spanish words – to still knowing only a handful of Spanish words. (I never tried a language I knew absolutely nothing of.)
Good for vocab, terrible for everything else. Repeated exposure can help, but it has to be a lot of exposure, more than 15 minutes or half an hour a day. The app version doesn’t give enough grammar instruction – there’s a whole bunch of things in German where I never knew how they worked, and basically just memorized the answer to something I’d gotten wrong so I could get through the lesson – but very quickly in an actual lesson I got it.
The weird made-up sentences seem intended to be shared on social media.
A QM question for anyone with a solid background in Everettian mechanics and Bell’s inequality. Hopefully the exposition of the question will make sense to someone; sorry if it’s too technical.
So, it’s clear that singlet states can’t be straightforward superpositions. Two particles in a singlet state are guaranteed to have opposite spin if measured in the same basis. That is, a singlet state acts like A:(1/√2|up-x⟩𝛼|down-x⟩𝛽 + 1/√2|down-x⟩𝛼|up-x⟩𝛽) when measured in x. And it acts like B:(1/√2 |up-y⟩𝛼|down-y⟩𝛽 + 1/√2 |down-y⟩𝛼|up-y⟩𝛽) when measured in y. But measure A in y, or B in x, and you don’t get guaranteed opposite spin; so the singlet state is clearly neither. So particles in a singlet state aren’t in a superposition in any single basis at all. It’s almost like the singlet state “decides” which superposition to act like it’s in only once it’s measured.
Everettians have a good explanation for why there isn’t a “measurement problem” when it comes to superpositions: all eigenstates are simultaneously real, and there is no “collapse”, just “decoherence” when you figure out which world you’re in. But again, singlets aren’t simple superpositions, and their problems run deeper. It seems like measurement in a particular basis forces them into “deciding” what kind of superposition they are. But that can’t be right (to many-worlders at least); clearly it’s the kind of “measurement effect” the MWI abhors. So what’s the Everettian alternative? What’s the explanation for what’s going on with singlet states?
You’ve made an error–the singlet state is (1/√2|up-x⟩𝛼|down-x⟩𝛽 – 1/√2|down-x⟩𝛼|up-x⟩𝛽). That minus is a big deal. In the singlet state, angular momentum is zero in every direction. With the + sign, you have a triplet state, which only has zero angular momentum in a particular direction. Your A and B aren’t the same state–if you change A’s basis to the up-y/down-y basis, it won’t be B (it’ll be (1/√2 |up-y⟩𝛼|up-y⟩𝛽 – 1/√2 |down-y⟩𝛼|down-y⟩𝛽)). But the singlet state (1/√2|up-x⟩𝛼|down-x⟩𝛽 – 1/√2|down-x⟩𝛼|up-x⟩𝛽) has that form in every basis, and really will get opposite spins no matter what basis you measure it in.
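If it helps, here is a quick numerical check (a minimal sketch, assuming Python with numpy; the angle parametrization is my own illustration, not anything from the thread). The minus-sign state gives opposite outcomes along every measurement axis; the plus-sign state only manages it along the original axis.

import numpy as np

def up(theta):
    # Spin-1/2 "up" eigenstate along an axis tilted by theta from z (phi = 0)
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def down(theta):
    # The orthogonal "down" eigenstate along the same axis
    return np.array([-np.sin(theta / 2), np.cos(theta / 2)])

def pair(a, b):
    # Two-particle product state |a>|b>
    return np.kron(a, b)

u0, d0 = up(0.0), down(0.0)
singlet = (pair(u0, d0) - pair(d0, u0)) / np.sqrt(2)  # minus sign
triplet = (pair(u0, d0) + pair(d0, u0)) / np.sqrt(2)  # plus sign

def prob_same_spin(state, theta):
    # Probability that both measurements along the theta axis agree
    u, d = up(theta), down(theta)
    return abs(pair(u, u) @ state) ** 2 + abs(pair(d, d) @ state) ** 2

for theta in (0.0, np.pi / 4, np.pi / 2):
    print(theta, prob_same_spin(singlet, theta), prob_same_spin(triplet, theta))
# Singlet column: 0 at every angle. Triplet column: sin(theta)^2.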
Aha! Thank you, that makes sense.
My understanding is that there’s nothing really special about singlet states here: the ambiguity you’re noticing is that the state 2^(-1/2) [ |U_x D_x> + |D_x U_x > ] can also be written, using the basis |U_y>, |D_y>, as 2^(-1/2) [ |U_y D_y > + |D_y U_y> ].
What’s happening is not that your state isn’t a simple superposition: it is a superposition over the two entangled product states |U_x D_x> and |D_x U_x>. The real issue is that it is a superposition over other possible product states as well, for example |U_y D_y> and |D_y U_y>.
This is not unique to singlet states: even a simple state like |U_x>, if expressed in the |U_y>, |D_y> basis, seems to be a non-trivial superposition in a way that |U_x> does not seem to be:
|U_x> = 2^(-1/2) [ |U_y> + i |D_y> ].
You might as well ask the same question about this state: if you were to measure it in the x-axis, you’d reliably see the state |U_x>; if you were to measure it in the y-axis, you would get 50% of the time that it’s |U_y> and 50% that it’s |D_y>. As before, whether this state is in a superposition or not seems to be decided only once you choose which axis to measure: if you only measure the x-axis, it looks like you have a trivial superposition, if you measure the y-axis, then it looks non-trivial.
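Numerically, this single-qubit version is a two-line check (again only a sketch, assuming numpy): |U_x> looks trivial against the x basis but splits 50/50 against the y basis.

import numpy as np

up_x = np.array([1, 1]) / np.sqrt(2)      # |U_x>, written in the z basis
up_y = np.array([1, 1j]) / np.sqrt(2)     # |U_y>
down_y = np.array([1, -1j]) / np.sqrt(2)  # |D_y>

# Born-rule probabilities for a y-axis measurement of |U_x>
print(abs(up_y.conj() @ up_x) ** 2)    # 0.5
print(abs(down_y.conj() @ up_x) ** 2)  # 0.5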
This is sometimes called the ‘preferred basis’ problem: any pure state can be expressed in any of an uncountably infinite number of bases, but different choices of basis will give different apparent descriptions of the state: is it a classical state? A weird superposition? It depends on the choice of basis.
As you point out, the standard Everettian answer these days has to do with decoherence: until you choose to measure it, there is not a good answer to which basis is to be preferred. However, when you get out your measuring device, your measuring device will work by becoming entangled with the qubit you’re trying to measure. The idea behind decoherence is that the measurement device acts as an environment to which the qubit becomes entangled, and due to the nature of the measuring device (i.e., that it has two distinct, essentially classical states corresponding to the two measurements you can make) this selects a natural basis in the qubit system such that when regarded in this basis, the behaviour of the qubit looks like classical probabilities.
So the key thing to note is that decoherence doesn’t kick in until the qubit gets entangled with an appropriately “measurement-like” environment, and measuring devices are just the sort of environment to do the trick. If you like, the presence of the measuring device imposes a preferred basis, and so the qubit decides which superposition to act like only once it becomes entangled with a measuring device.
Hopefully now, it’s clear what the Everettian explanation to the entangled state is going to be: just like with simpler superpositions, we have the choice of regarding our state as a superposition in any number of different ways before decoherence occurs–once we start making a measurement though, the measuring device entangles with the system, and the internal structure of the measuring device will pick out a preferred basis in which we will see the effects of decoherence.
Since the state you mention is an entangled state, we imagine two measuring devices, each of which becomes entangled with the state, but the idea is the same: the choice of which measurement each of our two measurers chooses to perform will determine the possible internal states of their respective measuring devices, and so the joint state of the two measuring devices is an environment with which the original system can become entangled, allowing decoherence to take place w.r.t. the basis picked out by the internal states of the two devices.
This is quite long-winded, but hopefully the point is clear: the behaviour you indicate is not actually unique to the state you mention and doesn’t really require entanglement: it’s simply a result of the fact that you can choose multiple bases in which to express a superposition; without a choice of basis, it’s unclear what superposition the state will “decide” to act like. But the presence of an appropriate environment (such as a measuring device) to which the state becomes entangled picks out a preferred basis, and determines how the world will branch.
In some respects, this doesn’t look too different from the Copenhagen interpretation: we’ve replaced “a state doesn’t collapse until it’s measured” with “the world doesn’t branch until it’s measured” – but branching is still arguably a simpler description of what’s going on than state collapse, and with decoherence we can at least point to what is necessary for a measurement to have occurred: the state must have become entangled with a suitable environment that ‘prefers’ a certain basis.
Or to simplify and summarize – whenever any two particles interact, that interaction in general may entangle those two particles along a particular basis, with the basis depending on the relative position and orientation of those particles and the mechanism by which they’re interacting.
So when your measuring device interacts with the particle, it will entangle with it along a basis that depends on how the measuring device is oriented or what kind of device you use, the same as when anything else interacts with the particle. Exactly as desired: “measurement” is not anything special and works like all other particle interactions.
Not sure if self-replies violate comment etiquette here, but just to add an addendum thought: If all of this still seems weird to you, then to shoot in the dark at the possible intuition that might be at the root here:
Perhaps you may need to discard any ontological intuition that there is a single correct way to identify the possible “states” that the world can be in. As Eugene explained, even whether something is in a superposition at all or whether it’s in a pure state (e.g. whether there are two worlds, or whether there is only one) can depend on your choice of basis! Since different interactions will all be “in” different bases, there is no globally canonical way to choose one. This is one way that “many worlds” is an imperfect name – it can suggest such intuitions that don’t quite make sense.
Or in the specific case of the singlet, it’s not as if anything special happens when the singlet “chooses” between being a superposition of x-up/down versus being y-up/down, those are literally the same thing in this case. And it may happen that the x basis or the y basis is more convenient to describe the next particle interaction, based on the orientation, etc. of that particle, but even so you could describe it in the other basis too if you liked – nature doesn’t care.
There’s no problem with self-replies.
I don’t think this is correct. (Do the transformation out, they’re not the same.) The transformation only works when you substitute subtraction for addition, as eyeballfrog pointed out above. With that accounted for, my concern vanishes (because all that’s left is the preferred basis problem, which I’m familiar with and which is far less troubling). Thanks!
I’m having trouble thinking of all the reasons why someone might carry a pair of handcuffs. Luckily, I have helpful and creative friends who can be counted on to help me.
First one’s free, kid: they found the cuffs on the street and are taking them to the lost-and-found.
A policeman, obviously
A criminal (burglar/kidnapper/rapist), as a way to restrain victims/interlopers
A prostitute carrying a pair as bondage gear (and others expecting to use them the same way)
An eccentric patriot prepared to make “citizen’s arrests”
A protester about to shackle him/herself to something
The house manager of a show, procuring a pair as a prop for the performance
Someone carrying several pairs in preparation for a local three-legged race (though that approach could be very uncomfortable)
A dog owner, as an easy way to attach the handle of a leash to a bike rack (Maybe? I don’t have pets)
Someone who is about to transport (or has just transported) cash or valuables in a briefcase handcuffed to them.
A travelling handcuff salesperson.
What’s a traveling handcuff?
Perhaps the wearer has moved illegally while holding the ball?
A magician/escape artist who will use them in his performance.
A magician, like Harry “Handcuff” Houdini.
Going on a visit for a BDSM session – the person they’re visiting doesn’t have handcuffs, or doesn’t have that kind of handcuffs.
Yep, this.
Criminal memorabilia collector?
Scrap metal merchant?
Someone who just knows they look good in handcuffs?
I mean, I assume anyone carrying handcuffs must have a reason to be so doing. Whether they want to explain themselves to an inquisitive police officer or not is another matter entirely.
Lockpicking as a hobby, if you want a really innocent one.
Ostensibly innocent, anyway.
Anyone who actually takes up lock-picking as a hobby probably has a reason for doing so, few of which are likely to be law-abiding, amateur magician aside.
I’ve known lots of folks who wanted to learn to pick locks, and they didn’t mention any reasons beyond playful mischief. I’m half interested, not enough to actually pursue it, but one reason holding me back is that I don’t think I want the temptation!
They wouldn’t, would they? 😉
Lock your doors, consider insuring your valuables. Oh, in the interest of full disclosure, I should mention I also played a locksmith in a campaign. The first plot hook had me hired by the Church to crack an old vault that contained some ancient artifact. Really fun character to play, but I’m afraid he was too uptight to get up to anything naughty. 😀
Most of the people I know who’ve learned to pick locks are nerds who think it’s cool.
I learned how back in college, from a member of my fencing team, and never used it for anything more nefarious than getting into the storage locker with our equipment before the professor came with the keys.
I’m sorry, there’s just no good reason for a civilian to have lockpicks of any kind. They are only tools used to rob people, and should all be confiscated.
/s
(Previous tongue-in-cheek assertion largely retracted, while still giving everyone in this thread the squint eye)
We used to have a problem with teachers getting accidentally locked out of classrooms in my high school. When this happened to my social studies teacher, he just picked the lock. So there you go, picking locks raises test scores!
I pick locks a little, although I haven’t done it in a while. Because locks are interesting and it’s a fun skill to have. No ulterior motive.
Feynman had an interest in lockpicking, and I think it isn’t uncommon for young geek guys.
Explosives are a more common interest, or at least were.
I was under the impression actual thieves overwhelmingly just break stuff (often windows, but while some locks are impressively sturdy, many are actually not that hard to break if you attack them with the right tools). That was the case on the few occasions I was stolen from, anyway. Lockpicking is, I suppose, quieter, but takes more time and requires you to develop the skill.
They have intermittent psychotic breaks, can feel them coming on with a few minute’s notice, and have a history of misbehavior when under the influence.
They’re on the way to win a bet about not succumbing to temptation and are exploiting a loophole.
They’re on the way to a still-life drawing class and wanted a really evocative object.
They’re improvising a solution to a crafts project.
Last time they pitched a tent it blew away, maybe this will help?
They’re trying to improve the dexterity of their feet by forcing themselves to use them in place of their hands every so often.
A man who fears losing control, and carries them to be reassured that he can restrain himself before that happens. (Real case; saw this on a TV program some years ago.)
A prankster intending to perform a practical joke on the groom on his stag night.
Similarly, a prop to embarrass the bride at a hen night.
A pairing device as part of a pub crawl competition.
As an expedient short sturdy chain with handholds for use on some sort of flying fox / zip line.
Actually, as an expedient way of attaching stuff to stuff.
As a fashion accessory (very punk).
As part of a costume. Cosplay etc.
Technically – a prisoner, because they are under restraint and have no choice in the matter.
And of course all the other random reasons for which the device or its components might come in useful.
Why did you need a list of all the reasons anyway?
> As part of a costume.
Correct. An example is tom7’s prisoner costume in which he ran a marathon in 2011, documented at “http://radar.spacebar.org/?month=5&year=2011”
Bicycle fan, lost his lock and grabbed the most similar thing available.
Assuming they’re not a cop, nine times out of ten it’s kinky shit. Outside chance of a costume or prop. Unlikely to be anything malicious — I’m not saying that doesn’t happen, but zip ties would work just as well and be more portable and less suspicious.
The handcuffs are being carried unknowingly, so there is no reason.
A guy who makes, sells, repairs, and/or collects handcuffs is transporting product from one place to another.
A prop person is carrying the handcuffs to the set/stage of a production. Because they are a prop.
The handcuffs are being carried but not because the person carrying them is going to use them. He just needs to move them somewhere: he’s letting someone borrow them and has agreed to drop them off, or he’s moving to a new house and is bringing them with him, etc.
The handcuffs belong to a very specialized martial arts instructor.
The handcuffs are evidence and are being transported by a non-officer policeman to a courtroom.
The handcuffs are an heirloom and are being transported by a lawyer to a will reading. (I don’t know if it actually works this way outside of the movies.)
The handcuffs are part of a museum exhibit and are being transported by the curator or exhibit designer.
The handcuffs are metaphorical and are carried by each of us.
Oh, that’s clever.
Surprised no one’s mentioned fashion.
Part of a trial or ritual.
Remodeled into a pair of large glasses frames
There’s always the wide category of art
Improvised shackle (like as climbing gear or to lock a wire gate)
If you’re a werewolf on a full-moon evening.
The only thing clear to me after the crash of the Boeing 737 MAX was that the Airbus A320neo would get more orders, and indeed, they cannot cope (FT) with all the orders they are getting.
Does the escalation in tariffs against Airbus have anything to do with helping Boeing after they have been harmed by the accident? I know that the conflict between the EU and the US over subsidies to civil aviation is ongoing, and the cases have been in the WTO for more than a decade. The EU gives illegal* subsidies to Airbus; the US gives illegal* subsidies to Boeing; this is true, and will continue to be so (and how harmful these subsidies are will be in WTO arbitration forever).
*According to the WTO.
Your analysis of the A320 orders is wrong, and I strongly believe your prediction is wrong. The A320 backlog existed long before the MAX grounding. The MAX has, last I checked, a roughly similar backlog. I don’t know if Airbus has taken any orders for the A320 since the grounding (probably), but I seriously doubt any are related to the grounding. Most airlines have only 737s or A320s to maximize the benefits of fleet commonality. Except for airlines that resulted from mergers of dissimilar fleets (e.g. United), mixed fleets are rare and fleet switches are even rarer.
Undoubtedly there will be more grandstanding by MAX operators to either extract additional concessions from Boeing, try to get out of MAX orders they don’t want for other reasons, or both. Don’t mistake that posturing for reality. Airlines and airframe OEMs are highly sophisticated actors engaged in a long-running, high-stakes negotiation.
As I recall, the 737 backlog is smaller and the 737 production rate is higher than the A320’s, but you are correct that the problem long predates the MAX and the current crash is unlikely to move the needle much.
Multi-year backlogs have been a fact of life in the airline industry for the past couple decades. To a round number, they hover around 5 years, and the manufacturer does what they can to keep it that way. Boeing did recently step down the 737 line from 52/month to 42/month because of the delivery freeze, but I expect it will go back up when they get the fix to the fleet.
I’d agree with mfm32 that this is likely to see the airlines trying to turn the screws on Boeing for new orders, and in a few cases, the fleet decision could go to Airbus instead. (A lot of big airlines have both types, although most small to medium sized carriers have one or the other.)
If you look at the big airlines with mixed fleets, all or almost all will be the result of mergers of airlines that each had single-type fleets. A very few small airlines have switched fleets historically, and during the switch they operated mixed fleets. But that’s an extreme corner case and even then only a transitory one.
The only exception I can think of is pre-merger American, which bought A320s and 737s to get out of its crippling MD-80 problem ASAP. Had Boeing been able to meet American’s demand in a timeframe that worked for the airline, I strongly suspect Boeing would have won the whole order.
Common fleets generate a very long list of very valuable financial and operational benefits. Airlines will go to great lengths to maintain them.
You’re right. I looked at several different cases, and you’re right about the root of all of the different fleets. Now I’m wondering why all of the Airbus and Boeing airlines decided to get married. (Seriously, I can’t think of a merger that hasn’t created a horribly mixed narrowbody fleet, except, I guess, for SWA-AirTran.)
(Legacy United did operate both types at once, but they never bought 737NGs, and retired the last ones before the merger. Also, there’s Lufthansa, who has both the 747-8 and A380, but they’ve always been advocates of the Pokemon school of aircraft procurement.)
Near my bed I’ve a pile of books, some from the library, and some I’ve owned for a while, many of which I read years (even decades) ago but only dimly remember, and I’m asking for suggestions on which to open right now:
The Broken Sword by Poul Anderson
The Dying Earth by Jack Vance
Stormbringer by Michael Moorcock
The Magic Goes Away by Larry Niven
The Hour of the Dragon by Robert E. Howard
The Call of Cthulhu and Other Weird Stories by H.P. Lovecraft
The Shadow of the Torturer by Gene Wolfe
American Character by Colin Woodard (non-fiction from the library)
Viking Age by Kirsten Wolf (non-fiction from the library)
Slow Cooker Revolution by the editors of America’s Test Kitchen (a cook book)
Hawkmoon by Michael Moorcock
The Coming of Conan the Cimmerian by Robert E. Howard
The Stealer of Souls by Michael Moorcock
Let’s Bring Back by Lesley M.M. Blume (non-fiction)
Big Trouble by Matt Forbeck (an ‘Endless Quest’ Choose-Your-Own-Adventure-ish book)
King Arthur Pendragon by Greg Stafford (this one’s a big game rules book)
So which one shall I pick?
“The Call of Cthulhu and Other Weird Stories by H.P. Lovecraft” is the only one of these that I’ve read, so I can’t recommend it above all the others, but I can tell you that this one is really good.
The Shadow of the Torturer is a terrific book. I remember rereading passages from it just to revel in the quality of the writing.
I’ve heard good things about The Dying Earth, but I haven’t read it myself.
Excellent book, from memory, although I can’t actually recall any of the stories, which is odd.
As already recommended: “The Call of Cthulhu,” “The Shadow of the Torturer,” “The Dying Earth.” All three are on my re-read often list. One nice thing about them is that all have sequels or related books/stories, so if you like one, there is more where that came from. The same is true of the various Moorcocks, but I haven’t re-read them in a long time, alas.
I would rate The Shadow of the Torturer as the best overall, but it’s a pretty heavy read and of course you’re committing yourself to the rest of the New Sun books.
I’ve been wanting to read The Broken Sword but haven’t gotten around to it. An exemplar of the lost art of standalone fantasy novels, by all accounts.
Dying Earth is great because of how influential it is (especially on D&D, and not just for the magic system), is a work of incredible imagination, and is quite short.
“Shadow of the Torturer” is the kind of book some people say is mind-blowing, but this kind of experience might be more interesting for young people. If you’ve read or heard of “The Stranger” by Albert Camus, I’d compare it to that, in that it’s a weird, trippy, immersive, first-person experience that’s (I think) intended as a journey that could change your perspective as much as a fun adventure or riveting tale.
Conan stories are like action movies – good, well-choreographed action movies. The Hour of the Dragon is a novel and the other is a collection of short stories, so if you wanted to see how much you enjoy them, the latter would be a good place to start. (Also, Howard IIRC only wrote one or two Conan novels, if that affects things – it might be best to read The Hour of the Dragon second.)
The Dying Earth is a collection of adventures with a lot of subtle dark/sardonic humour. If you like those latter things, I recommend it like a holy grail; if not, then still quite highly.
I think Conan is best read in publication order. That’ll start you off with The Phoenix on the Sword, which is chronologically one of the later stories, but you’re missing a lot of the subtext if you don’t think of the character as someone you know is going to hack and slash his way onto a throne at some point. You won’t get much continuity, but that’s fine — relatively few characters carry over, and the sense is of an old warrior telling tales at random from his life, which is probably the best way to think of these stories anyway.
It’ll get a little formulaic about a third of the way in, but that’ll pass. The other weird patch is “Beyond the Black River”, which reads more like a Western than a fantasy.
Listen to this guy, he has a cooler way to think about it.
You don’t need to mainline them, though. They’re a bizarre/alien place you can go – you wouldn’t necessarily go to Japan or Iceland every month even if you think they’re awesome. It’s almost enough just knowing they exist.
Does it? There’s a frontier, but most of the action happens in the jungle beyond it (as per the title – “the Black River” is the frontier), which turns out to be full of bad juju. The association I would have drawn is to something like Warhammer not-40k’s Lustria – hostile jungle, ancient magic, savage humanoid people not liking outsiders intruding.
(these ones are more or less human, but they’re more people of the jungle than primitive people in the jungle)
The Broken Sword is amazing.
I love Lovecraft, but I don’t know the exact contents of that anthology you have.
The Dying Earth and the Wolfe are very good. Conan is up there.
I liked the premise of The Magic Goes Away but don’t remember the writing being that good.
Wikipedia has the contents of the Lovecraft anthology. “At the Mountains of Madness” and “The Case of Charles Dexter Ward” are two omissions that stick out to me. The sequel anthology has some greats: both those, plus “The Music of Erich Zann,” “Pickman’s Model,” and the essential “Dunwich Horror.”
I have the Barnes & Noble leatherbound Lovecraft with every work of fiction he wrote under his own name (plus the occasional piece of ghostwriting, like “Under the Pyramids” for Harry Houdini).
All my Lovecraft I’ve read online, but I have one of those Barnes & Noble Classics—Les Miserables, I think? They’re neat, and I like looking through them when I visit the stores, but given the price and the seller I feel like I’m the guy buying a fake Rolex. Might as well get Penguin Classics or Dover or Modern Library, you know?
Some of the Barnes & Noble hardbacks are tacky, and they’re not as high quality as e.g. Easton Press (of course, given the cost), but the B&N Lovecraft collection in particular is aesthetically appropriate, pretty exhaustive, and has excellent interstitial commentary from S.T. Joshi.
I think you’ve found a very polite and pleasant way to brag about your book collection. 😀
@Well…,
It’s more that I’m hoping to find kindred tastes, as my wife (who studied literature in college) calls my books “trashy paperbacks”, as in “Throw out those trashy paperbacks already!”, and my co-workers give no indication of reading at all (though Game of Thrones the television show is spoken of highly by them).
When I brag about my reading it’s more 19th century histories of Guilds, Conrad and Steinbeck than it is Howard, Moorcock and Niven.
But this is much more fun than going “eeny-meany-minny-moe”!
I went with Anderson’s The Broken Sword over other contenders (mostly because it got some votes, I remember that I liked the ’71 version when I read it one or three decades ago, and the type is larger and it’s easier for me to read than all but one of my Howard books, the game book, and a library book), but I see that Wolfe, Vance, and Lovecraft had a lot of votes as well, so those are next!
Thank you!
@Atlas,
That’s very kind of you to say!
If you’ve read The Book of the Long Sun and The Book of the Short Sun, what did you think of them?
@Nancy Lebovitz,
I haven’t read any of those but by Gene Wolfe I’ve read a bunch of short stories, Knight and most of Wizard plus the beginnings of Ares, Pirate Freedom, and There Are Doors.
Wolfe seems more “literary” to me than say Niven, or Anderson.
Oh man, if you haven’t read it then Shadow of the Torturer is definitely what you should be reading. Knight and Wizard are good for what they are, but New Sun is on a whole ‘nother level. It might be the single book that’s most influenced the way I look at fantasy. Definitely in the top five.
Vance might be as good in terms of pure use of language, but Wolfe’s easily got him beat for depth and complexity.
Italy didn’t become a nation-state until 1870, and there’s a slogan that’s been central to Italy’s nation-making project since before then: Fatta l’Italia, bisogna fare gli Italiani.
… this translates to “We have made Italy, now we must make Italians.” Oh baby.
That seems in accordance with the nature of Italian nationalism as a culture-based rather than ethnic-based nationalism. Anyone can join the Italian nation, regardless of the accidents of their birth, as long as they adopt Italian culture and language. This is fairly similar to French, British and American nationalisms*, which indeed could have adopted similar mottos.
*: that is, it’s the dominant conception; I am not denying that there have been and still are advocates of ethnic-based nationalism in all those countries, just like there were advocates of culture-based nationalism in countries where the ethnic-based conception eventually predominated (Imperial Germany, Eastern and South-Eastern Europe…)
This refers to the fact that Italians had, and to some extent still have, significant linguistic, cultural, economic and even genetic differences between geographical regions.
This has been a constant source of controversy in Italian politics ever since, with claims that the concept of Italian nation is artificial, accusations of oppression and exploitation between various regions, revanchism for the pre-unification states and even secessionist movements recurrently appearing both in the north and the south of Italy.
Right, the national struggle has been to make and keep people patriotic to the concept of Italy rather than their region or, say, “workers of the world.”
The same can be said of France, and other nation-states in Europe. With France, the kings of France consolidated their territory, and all French governments since have been trying to homogenize the cultural identity of France. IIUC, you still aren’t allowed to teach in a school in the same province where you grew up. But at least 20 years ago, regionalism was still quite strong. Someone commented about the EU, “First I am Provincial. Second I am French. Perhaps third I am European.”
Ironically, the EU is so diverse that this statement which argues for the existence of strong localized cultures that are hard to unify, itself makes the mistake of attributing a unified meta-culture to Europeans.
The regionalism of people in Amsterdam tends to be centered on the city, while the regionalism of Dutch rural citizens tends to be centered on the province.
If you’re going to implement “Culture War Bans”, you’re going to need to be very clear on what “Culture War” means. To use an arbitrary example, let’s say that I’ve posted a link to an article stating that glacier loss is proceeding much faster than predicted. Am I waging culture war?
Without being able to define culture war, just forbidding someone from commenting in the Hidden Open Threads and “Things I will regret writing” posts seems like it would have the desired effect. I could be wrong, but it doesn’t seem like most of the notorious posters or over-the-line comments are being bred in the typical mainline blog topic threads or Visible Open Threads.
Bad idea, as they would let their culture warring urges run wild in the non-CW threads instead. Especially if we can’t define CW, or they don’t know how it’s defined. While if they know to recognize what is CW, then banning them from making CW comments should be enough.
I may have a bad read on what is and isn’t ‘Culture War’, but lately the posts that seem the most ‘warrior’ to me are usually in the “no culture war” threads rather than the Hidden ones.
I disagree, because of Goodhart’s Law. Fear of being culture-banned will be more effective when the line is fuzzy, and Scott can judge on a case-by-case basis. If there is a clear line, it will be gamed, and people will be angrier if they get banned because they are technically not waging culture war by an earlier definition. This is not an open society run by the rule of law, but a dictatorship run by Scott.
In our current meatspace world, many people would prefer not to live in dictatorships where their lives are subject to the ruling monarch’s arbitrary whims. Some even risk their lives to leave such places for more lawful shores.
“In our current meatspace world…”
This is a blog / online forum. And I don’t know what Goodhart’s Law is, but aashiq’s point is solid. “Culture-warring” need not have a set definition.
Indeed, it’s better if it doesn’t. Because if Scott thinks there’s too much culture-warring going on down here—that is, if he thinks it’s warping the ideal of this, his, blog—then he’s well within his rights to cut down on it. Moreover, his ability to do so will be hampered by any strict, legal definition of such.
“So it’s arbitrary!” you say.
Yes. And that’s the point. Because I, personally, trust Scott to arbitrate on this issue. I’m confident, at the very least, that he can distinguish between culture-warring and any good-faith discussion on a given culture-war-related topic.
Well, yes; it’s his site, he’s within his rights to do anything with it for any reason whatsoever. He could turn it into a self-help forum for aspiring circus clowns, or something. The question is not, “what can Scott do?”, but rather, “what is the smart thing for him to do?”; with the implication that Scott is amenable to rational discourse and the pressures of his commentariat, at least to some minor extent.
I am not. This is not a slight on Scott; I am not confident in anyone’s capabilities to such an extreme extent.
Scott has the right to do as he wishes with his blog, but he asked for opinions and we have been giving them.
I do not trust Scott to always make correct decisions, any more than I trust myself to. I in particular do not trust him to always make correct decisions if the decisions are invisible to the rest of us, so that he will have no feedback to tell him if they are incorrect.
“Always” seems like a pretty high standard, even for Scott.
Not anyone’s? Really? Not even your own?
You don’t believe you’re capable of witnessing an exchange and observing, “Yeah, that’s just blatant culture-warring. The sweeping tone, the flat prose, all those subsidiary points inserted to widen the argument instead of refine it…”?
You haven’t noticed stuff like that, and disliked it?
I mean, I understand your concern when it comes to marginal cases, but surely that’s a discussion worth having after we cut down on the blatant stuff, right?
@LukeReeshus:
There are posts which are pretty clearly culture warring. But I don’t trust myself never to misinterpret one that isn’t as one that is.
The usual online alternative to dictatorships is rule-lawyering.
I don’t agree with all Scott’s decisions, but I prefer them to endless debates about whether comment X did or didn’t technically follow some rule Y.
(Precise rules don’t stop people from being assholes; they just make them argue whether “asshole” and “assh0le” is the same thing, if the former happens to be explicitly listed as a reason for ban, but the latter does not…)
+1.
In 25 years I have seen successful dictatorship web sites, where the vision/voice of the original website auteur is so clear that most of the rules wouldn’t have even affected how most posters were gonna post and it feels like benign neglect. I have seen unsuccessful dictatorships where everything is drama and purges are done in secret. I have seen terrible rules-lawyer sites with democratic feedback where everything gets bogged down on what subparagraph 3B of the “no parting shots or moderator sass” clause means.
I’ve never seen a good, functional site that feels good to post on where there are a ton of rules and rules-lawyering and democratic feedback contradicting the people in charge. It’s an empty quadrant.
Yeah. I used to wonder about that, back when I did more Petty Internet Tyrant work, but then I thought about it a bit and realized that the RL institutions with tons of rules usually have decades to centuries’ worth of experience in finding the edge cases and filing the rough bits off, and large budgets and plenty of personnel dedicated to interpreting them. And a lot of them still don’t work that well.
With that in mind, it’s not too surprising that half-assed ad-hoc rules ginned up by a half-dozen laymen in their free time tend to cause more problems than they solve.
@Nornagest I saw on the subreddit once that the SQLite project adopted the Rule of St. Benedict for its code of conduct. I think that’s one way to avoid the problem of half-assed ad hoc rules!
I don’t know. The Rule of St. Benedict is really lofty and demanding. So much so that I can’t help but suspect anyone claiming to follow it, even imperfectly, of either cluelessness or deceit.
I can totally believe that trying to follow it will make you a better person. But claiming to be trying to follow it puts the rest of us in the position of deciding whether you are an aspiring saint or a plain liar. And while I have not conducted a census, there are probably more of the latter around. That suggests it is best to follow the rule silently.
Codes of conduct have never sat well with me. Unless you’re talking about actual criminality, it’s a judgment call as to what’s acceptable in a free discourse and what’s grounds for a ban. Context, up to and including years of experience with the parties involved, matters. So if you’re running a space that’s small enough for one person to police, make it explicit that it’s a dictatorship and get on with it.
See also, the CW thread on the SSC subreddit and how its moderators and their neutral, reasonable-sounding rules ended up making a place…like that.
If you are stuck someplace, you want to know you can influence the outcomes.
Lots of blogs, each one run as a dictatorship, is the best solution, since I can easily switch blogs. I never go to Cory Doctorow’s blogs because he deletes people that merely disagree with him. (Fair enough, I have more important things to do than worry about him.)
Agree, and I agree that people can up and leave if they think Scott is being too arbitrary.
However, it is not always possible to reduce some notion down to a concrete rule. Even in a society based on laws, there is room for both specific rules and more general “standards”. For example, in Jacobellis v. Ohio, the standard for obscenity was stated as “I know it when I see it”, and common law often appeals to what a “reasonable person” would do. To restrict ourselves to only specific rules on a topic as amorphous as culture war is to remove a valuable tool from our repertoire.
Another point is that I believe a forum where Scott decides what is best will be more pleasant to read than endless lawyering by the patrons. Scott is the only one correctly incentivized to do what is best for the blog. In addition, I trust his ability to identify culture war more than his ability to be a good lawyer.
Last, I prefer a world with many competing blogs run under different philosophies of governance to all blogs mired in legalistic debate.
On the other hand, clear lines can be beneficial because they can prevent the definition from slipping over and over again. And Scott will be less tempted to make biased rulings when the bias has to be laid out for everyone to see and can’t be covered up by claiming it violates one of the rules that everyone violates.
It depends on how they are implemented. You could leave it to users to tag their own posts as CW/non-CW, perhaps by referencing a special @culturewar user. Besides the value of having a social requirement for users to think about whether their posts have CW content, that opens the way to implementable consequences: the most practical I can think of (based on my glance at the WordPress plug-in API) is a cool-off period, a dialog form for such postings that asks for content warnings, or a default for such comment threads to be hidden.
I think 90% of the time people know culture war when they see it, and for the other 10%, that’s why there’s a warning.
I would also like to +1 the idea of CW bans. It might be best to administer public warnings, though.
Maybe one could have reddit-like flair for commenters with a “yellow card” warning, for example? There could also be flair for “top comment” in a given two-week period (a trophy of sorts that migrates) and other positive labels.
+1 for the public warnings.
Public warnings are important. The lack of warnings in some recent instances (and the older Reign of Terror) when banning people for politely expressed bad opinions was a big piece of what made it so offensive. If it’s a new name that’s crapping all over a thread, that’s one thing, but if they’re otherwise following the rules it’s a betrayal of the values this place claims to hold when you ban them without warning.
What would the long-term effect be of a society adopting Uterine succession? I know it’s existed in various cases, but I have trouble wrapping my head around the inheritance patterns and the incentives such a system would create.
To clarify:
The Society remains patriarchal. Men govern and serve as head of the household.
Succession is by primogeniture.
Inheritance is founded on the notion of Mater semper certa est (“the mother is always certain”): wealth and titles pass from mother to daughter.
So the most common path of inheritance will go from a man through his sister to his nephew.
Men rely on their sisters rather than their wives to produce children of their house.
Incest is discouraged, although cousins may marry.
So the first implication I can see is that an aristocratic family will want to secure both a son and a daughter, minimum. Their son will be trained to govern the family, and the daughter will be trained to breed. This seems pretty similar to the incentives aristocrats have in a patrilineal system. But marrying off your daughter seems to have different implications. She’s not just immediately property of her husband, she’s still producing your family. Sons, however, don’t propagate the family, even though they get to govern it.
In a way this feels like a separation of powers. Almost certainly an unstable one, but let’s say it’s really popular and not threatened by neighboring purely patrilineal systems.
So obviously if you want tight familial control of wealth, you’ll want to marry cousins quite a bit. Which seems true of Aristocracy already in most places. But can there ever be dynastic intermingling? Can one family secure a web of marriage alliances and come to control the region?
Obviously the longevity of one’s bloodline is always a pet obsession of Aristocrats. In this case you can trace an endless fractal Matryoshka doll of mothers, corded umbilically back through time. Do new houses ever arise? Are houses forced to multiply through countless cadet branches, or is there some incentive I’m not seeing to keep things tightly bound?
Does a Duke’s sister live with him even when she’s married? Her husband brought into her family rather than her going away? Would two families of the same rank ever marry or would that be anathema to them? God forbid we throw a caste system on top of this. I have trouble keeping track of this.
My understanding is something like this occurred in Kerala in India, and somewhere in Africa? But beyond just references to these things, I’m interested in how a society’s economy, family structure, and politics might be influenced by families structured along Uterine succession. I’ve read descriptions in the past, for instance, of how European societies had different incentives where inheritance was split between multiple children or reserved for the eldest.
Any reading on the subject (or advice on how to find some kind of inheritance simulator) would be much appreciated.
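In case it helps anyone who wants to tinker: here is roughly what I mean by an inheritance simulator, as a minimal Python sketch. Every rule and number in it (the Person class, the 0-6 children per woman, succession to the eldest sister’s eldest son) is my own invented toy assumption, not a model of any historical system.

import random

class Person:
    def __init__(self, mother, sex):
        self.mother = mother
        self.sex = sex          # 'M' or 'F'
        self.children = []

def bear_children(woman):
    # Uniform 0-6 children per woman, random sex; numbers are invented.
    for _ in range(random.randint(0, 6)):
        woman.children.append(Person(woman, random.choice('MF')))

def heir_of(duke):
    # Uterine primogeniture: the title goes to the eldest son of the
    # duke's eldest sister who has a son (checking his mother's
    # daughters in birth order).
    for sister in (c for c in duke.mother.children if c.sex == 'F'):
        sons = [c for c in sister.children if c.sex == 'M']
        if sons:
            return sons[0]
    return None  # no nephew through any sister: the line is extinct

random.seed(1)
matriarch = Person(None, 'F')
bear_children(matriarch)
duke = next((c for c in matriarch.children if c.sex == 'M'), None)
successions = 0
while duke is not None and successions < 50:
    # The duke's sisters, not his wife, produce the next generation.
    for sister in (c for c in duke.mother.children if c.sex == 'F'):
        bear_children(sister)
    duke = heir_of(duke)
    successions += 1
print("line survived", successions, "successions")

Running it over many seeds would give a feel for how often a line goes extinct for want of a nephew, which is the kind of pressure I imagine drives the cadet-branch question.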
Probably, along the same process we see in historical patrilineal societies: “new” men/women join the nobility (as clients of higher nobles who create them as vassals, or by buying or conquering a noble estate and having their de facto status recognized by the existing social structure), cadet branches become seen as separate lines, or a son (or a daughter-in-law, or a granddaughter through a male line) inherits for want of a female heir.
Or a massively rich but landless noble family bribes a house to accept their son, who then rules the house and integrates the noble family in the house so deeply that they become indistinguishable.
While Sparta did not have the system you propose, I think in a society with martial goals it would have a similar effect of land being concentrated into a few female lines, since women tend to die less than men in pre-industrial societies. Regencies everywhere.
A fun variant would be female-led elective primogeniture (i.e. the woman can disinherit), which would basically mean a council of angry matriarchs with enormous power.
Fast forward a few millennia and suddenly you live in a democracy where only women can vote and only men can hold public office.
Would it differ significantly from a democracy where women have the majority of votes, and men have the majority of public offices? 😀
In the patrilineal system, daughters get married off and are often sent to live with their new family. I think in this system the men would be moved. I think this would create a really interesting dynamic, because if you marry your son off to the neighbouring kingdom, suddenly your family controls both areas completely.
I think that would make accepting a marriage proposal from the other perspective a lot more hazardous. In a patrilineal system, a bride can be used to forge an alliance, in the matrilineal system, the groom is used to seal one. Either this would mean that nobody married out ever and large landmasses remained fractured until one kingdom can muster absolute military might, or one family can use this dynamic to snowball, marrying off multiple male heirs every generation, each acquiring a new slab of land for the kingdom.
So, when are you making a Crusader Kings 2 mod for me to try this out?
For this system to be enforced, it would require a different underlying social organisation than in historically patriarchal societies with descent of property through the male line. That much is clear from the simple observation that otherwise the dominant male would bypass his sister’s offspring in favour of his own; so for the uterine system to work, it requires constraints. The most obvious would be that men can’t hold property, just use it, but that might not fit the definition of uterine inheritance. Legal constraints might work, but if the powerful men opposed a customary practice, I am not sure that a legal authority would fare much better.
The best solution I can offer is that the continuation of uterine succession would require a female-descent-defined, clan-based society, so that each family was dependent on the support of other, nominally related families, and where the clans would be incentivised to keep land within their own lineages as opposed to letting it pass into a different lineage by father-son inheritance. The need for clan support, and possible clan claims on the land, would perhaps incentivise compliance with law and custom.
Assuming you have lots of families following the same rules and intermarrying with each other, I can see this going two ways:
1. You keep your daughters at home and bring in their husbands in order to keep the family on the estate. So most of your men are only family by marriage, and one of them is the Duke. Duchess is the hereditary position, and whomever she marries is Duke. Women who seem likely to inherit would have lots of men competing for their hands.
2. As above, except that the Duke is family by blood; “Duke’s mother” is the hereditary position. When a man becomes Duke he moves back from his wife’s family estate to his mother’s, and his own children are raised on the “wrong” estate.
3. Both your (where the salient ‘you’ is female) sons and daughters stay at home because they’re more likely to be loyal to the family they grew up in and more likely to be good at governing a family if they have been able to take part in its governing for a long time as adults. Husbands and wives either don’t live together or only part time. Or marriages are arranged between older children (more likely to inherit the line/the top dog governing position) and younger children who are basically born without a chance to inherit anything and can be sent off to other households. (That is if anyone even cares where the Y chromosome comes from anyway.)
The only thing this society can’t really be is patriarchal, since men invest their lineage-building efforts in their nephews, not their sons, and don’t end up ruling their direct descendants. It can still be a male supremacy, but it’d be a matrilineal avunculoarchy or something.
A more interesting version of Birtherism: if Obama were two years older than he actually is, would he still have been eligible for the presidency?
You mean, if he’d been born in the (incorporated) Territory of Hawaii rather than in the State of Hawaii? The Constitution doesn’t specifically address that point, and no President was born in similar circumstances. But, the historical English context of the phrase “natural-born citizen/subject” points to the distinction being that the citizen/subject was born in the realm rather than being born outside it and then naturalized by Act of Parliament. So given the Territory of Hawaii was governed by the Constitution and owed allegiance to the United States, I would say yes he would be a natural-born citizen. Congress seems to agree, since they passed a joint resolution affirming that McCain – who was born in the nonincorporated Panama Canal Zone to citizen parents – was eligible.
However, I can see an argument for the other conclusion: the Territory of Hawaii was not one of the United States with its own distinct sovereignty and eligible to be represented in Congress; it was merely an external territory belonging to the United States under the complete jurisdiction of Congress. That would fly in the face of a century of jurisprudence, but I personally wouldn’t dismiss it on first impression.
(Note that Obama would not gain citizenship due to his parents’ citizenship, either in this hypothetical or in reality. His father was not a citizen; while his mother was a citizen, the law in place at his birth said such a baby would only gain citizenship if his mother had lived in the United States for N years after age K. His mother was less than N+K years old when he was born, so this was obviously impossible. Therefore, Obama is an American citizen solely because he was born in Hawaii.)
An interesting hypothetical; thank you!
McCain isn’t the only losing major-party Presidential nominee who wasn’t born in a state. Barry Goldwater was born in the (incorporated) Arizona Territory, and I don’t think there were any serious questions raised about his eligibility (although the extreme long-shot nature of his candidacy may have made any such discussion moot).
I did a quick check, and was surprised I didn’t find anyone else. 19th century candidates seemed weighted heavily towards the older states (although not necessarily disproportionately, since most of the population lived east of the Mississippi (especially on the East Coast and in the Old Northwest/current midwest) until well into the 20th century), and candidates from newer states seem to have been born in older states and moved to the territories (or newly-admitted states) in childhood or early adulthood. Again, not that surprising in hindsight, since the western states were settled much more via internal migration rather than organic growth of the original cohort of American pioneers.
Ted Cruz was born in Canada, and although people tried to bring it up as an issue, it didn’t go anywhere. He didn’t get his party’s nomination though.
According to Wikipedia, “Goldwater was born in Phoenix in what was then the Arizona Territory, the son of Baron M. Goldwater and his wife, Hattie Josephine “JoJo” Williams.”
Also “During his presidential campaign in 1964, there was a minor controversy over Goldwater’s having been born in Arizona three years before it became a state. [110]” Citation [110] is a dead link, and the Internet Archive shows a version that does not substantiate this. Given that Romney’s father was born in Mexico, I think people back then were not as careful about the Natural Born Citizen clause.
Well, Obama’s mother was a citizen, so he was a citizen at birth regardless of where the birth took place.
But in regard to place, I have a memory that some odd fellow ran for president in his state just so that he would have standing to sue regarding Obama, McCain (who was born in the Canal Zone), and some third-party candidate (who had yet another oddity of birth). I assume the outcome of the case was that the US District Court ruled that all three qualified as “natural born”, as otherwise there would have been big headlines about it.
+1 for the idea of CW only bans. I’m in favor of people experimenting with moderation in general.
In previous open threads we had some good explanations of quantum mechanics concepts.
With that in mind, I feel emboldened to ask about the so-called many-worlds interpretation of quantum mechanics, which I know only from popularizations. And popularizing works make it look, frankly, dumb. So I am looking for a steelmanned explanation.
Wikipedia, which for those purposes is not reliable, explains it thusly:
Those parallel universes, which together form the multiverse, are, however, unobservable.
To me this seems as if Isaac Newton, after he invented his equations correctly predicting the movement of celestial bodies, had declared that those bodies are pushed around by invisible demons who precisely follow his equations.
Newton did not know what causes gravity, just as we apparently do not know what causes certain quantum phenomena, but Newton famously refused to engage in unconfirmable speculation on this subject, at least according to the foundational myth of modern science.
I realize that I have painted a crude caricature and I probably should apologize in advance to proponents of the multiverse hypothesis, but this thing has bugged me for some time and I really do not know of any other forum where I could get a steelmanned version of it.
I am not an expert, but this is also something that concerns me about many modern physics hypotheses, including many worlds, string theory, dark matter, and dark energy.
Once you go deep enough there is something falsifiable that tethers them (for some of them), but it seems we are still very far away from testing such things.
For the record, I do not think that dark matter and dark energy are on the same level as parallel universes. There is clear evidence that those things exist. Of course the evidence might be wrong, but that is a different problem.
String theory is, for me, incomprehensible, which is not evidence of anything except my own intellectual limitations.
This might help clear up a few things for those interested:
Dark matter is invoked to explain a number of gravitational phenomena which strongly suggest the existence of mass we can’t see. For example, the orbital periods of stars in the galaxy are very different from what they would be if the galaxy consisted of only the visible matter. The conclusion is that either general relativity, a theory so accurate it can predict the bending of radio signals from the GPS satellites to your phone to within a few meters, is wildly wrong at galactic scales, or there’s a lot of stuff out there we can’t see.
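To put the rotation-curve point in symbols (plain Newtonian mechanics, assuming only a circular orbit around the mass M(r) enclosed within radius r):

\frac{m v^2}{r} = \frac{G m M(r)}{r^2} \quad\Longrightarrow\quad v(r) = \sqrt{\frac{G M(r)}{r}}

If the luminous disk held essentially all the mass, M(r) would level off outside it and v would fall as 1/\sqrt{r}; the measured curves instead stay roughly flat, which forces M(r) \propto r, i.e. mass accumulating where we see almost no light.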
Dark energy is also invoked to explain gravitational phenomena, most notably the accelerating expansion of the universe. It suggests that the universe has a small but constant intrinsic curvature, which corresponds to a small vacuum energy density. This isn’t really that out there of an idea–quantum field theory also suggests that the vacuum should have an intrinsic energy, though it doesn’t say how much. Heuristic arguments get an answer that’s absurdly larger than the measured value (famously, by roughly 120 orders of magnitude), though, so that connection is still being worked on.
Many worlds is a mathematical formalism for doing quantum mechanics. The idea of parallel universes is more of a metaphor here than an actual description of what’s going on. But the underlying formalism gives the same answers as other formalisms for quantum mechanics, and we’re very, very certain quantum mechanics is correct.
String theory is bullshit. Don’t let anyone tell you otherwise.
I really dislike the name “multiverse theory,” as I think it implies almost the opposite of the point of Everettian quantum mechanics. I’m going to call it the Everett interpretation from here on.
Quantum mechanics says that there are many more possible states reality can be in than we observe. For example, we only ever observe particles having spin up or spin down, but quantum mechanics requires us to believe in states like “spin up plus spin down” or “spin up plus three times spin down.” These are called “superposition states.”
The question of interpretation in quantum mechanics is the question of why, if superposition states are real, we never observe them.
Copenhagen-like interpretations posit a mechanism called collapse whereby, when you make a measurement, the state of the universe changes with a random component. For example, if at time t1 the state of the universe is “spin up plus spin down”, then when you make a measurement at t2 the new state of the universe is either “spin up” or “spin down”. Why collapse happens or how it happens can be considered at varying levels of detail.
The Everett interpretation points out that we don’t need to posit collapse at all because your consciousness is part of the state of the universe. The real state before and after measurement looks like
state(t1) = "i don't know what the spin is; spin up plus spin down"
state(t2) = (i know the spin is up; spin up) plus (i know the spin is down; spin down)
We say that your consciousness becomes “entangled” with the state of the spin. Why the state never splits into
state(t2) = (i know the spin is up plus down; spin up plus spin down)
is explained by the mathematics of “decoherence,” which follows from the basic axioms of quantum mechanics. No collapse needed.
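If it helps to see that in vectors, here is a tiny numpy sketch of measurement-as-entanglement. The two-level “observer” and the basis labels are my own toy choices for illustration; this shows the bookkeeping, not a real derivation of decoherence.

import numpy as np

# Basis states; the same two-vectors serve for the spin and for the
# observer's memory ("knows up" / "knows down").
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# t1: spin in superposition, observer in a blank "ready" state.
spin_t1 = (up + down) / np.sqrt(2)
state_t1 = np.kron(up, spin_t1)          # observer (x) spin

# t2: the interaction correlates observer with spin -- no collapse, just
# the entangled sum (knows up)(spin up) + (knows down)(spin down).
state_t2 = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# Tracing out the observer leaves the spin with a diagonal density
# matrix: the interference terms between the branches are gone.
rho = np.outer(state_t2, state_t2).reshape(2, 2, 2, 2)
rho_spin = np.trace(rho, axis1=0, axis2=2)   # partial trace over observer
print(rho_spin)                              # [[0.5 0. ] [0.  0.5]]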
So you see there are not really separate universes. It’s just that the state of the universe is a huge sum of different entangled consciousnesses, “(i am alive and know particle 1 is here) plus (i am alive and know particle 1 is there) plus (I am dead and particle 1 is here) plus …” You can’t observe the other universes because there is no you external of the universe. There isn’t even a you external of the particular part of the huge sum that your current consciousness state occupies.
I used to really dislike the Everett interpretation, but I’ve come around to seeing some of its good points recently. For me the key insight was the one I started with: your consciousness is a physical process and so it is just another variable that can take different values in the big sum that makes up the state of the universe. Any description of different states has to include a description of what conscious observers are seeing. This is what allows Everett to explain the paucity of observed states without collapse.
* I’m still wary of Everett because I have philosophical reasons for not liking thinking of my consciousness as just another physical process, but at least I understand what it’s saying now.
The question you may be more interested in is whether we really have to posit the existence of unobserved states like “spin up plus spin down”.
The answer is that, as far as we know, yes we do, unless you are willing to give up the notion of locality. Locality is the principle that the state of the universe at one point is only influenced by its immediate neighborhood, at least for short times.
Some people do like the Bohmian interpretations of quantum mechanics where instead of positing the existence of superposition states you posit the existence of a real “wavefunction.” However, this wavefunction is still only observable through its effect on other particles.
What’s more, there is a mathematical proof that the weird probability distributions that quantum mechanics can produce are incompatible with “local, real” classical theories, where both those terms have technical meanings. This is called Bell’s Theorem, and most physicists accept that so-called “Bell test” experiments rule on the side of quantum mechanics. The Bohmian interpretation is decidedly non-local; the wavefunction has to know about the position of every particle everywhere all at once to know how to evolve in time.
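For anyone who wants Bell’s Theorem with numbers attached, here is the standard CHSH calculation for a singlet pair; the formula E(a,b) = -cos(a-b) is textbook quantum mechanics, while the particular angles and names are just my choices.

import numpy as np

# CHSH combination S = E(a,b) - E(a,b2) + E(a2,b) + E(a2,b2).
# Any local, real ("classical") theory obeys |S| <= 2; quantum mechanics
# on the singlet state predicts E(a,b) = -cos(a - b) and reaches 2*sqrt(2).

def E(a, b):
    # Spin correlation for analyzer angles a (Alice) and b (Bob).
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2              # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4    # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2), past the classical bound of 2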
Thanks!
So, I am not sure I understand it correctly, but if I do, your explanation of Everett posits no parallel universes. Do you, by any chance, know how it happened that this interpretation of quantum mechanics is associated with them?
The phrase “parallel universes” is, I think, being used in two different ways here; a less loaded term for them is something like “other branches of the superposition”. The thing that, as far as I can tell, most people like about Everettian QM is that it is, in a certain sense, just taking what the mathematical formalism says seriously. The formalism says: suppose you have a particle in the superposition |U> + |D>, an equal mixture of being in spin up and in spin down, and you have a measuring device that can measure the spin, ending up in the state |MU> (for “measures up”) or |MD> (for “measures down”). Then when the measuring device measures a particle in such a superposition state, it ought to end up entangled with the particle, producing the state
|U>|MU> + |D> |MD> — that is, in the branch of the superposition where the particle is up, the measuring device will register as measuring spin up, and similarly for the branch where the particle is in spin down. But we never seem to observe measuring devices in these superpositions, so that’s weird. The Copenhagen view proposes that somehow, at some stage, this state must collapse, but it’s not very clear how or why this collapse should happen. The Everettian view says: the reason you never see these superpositions is because you yourself are a measuring device, and so when you observe a particle in superposition, you too go into superposition: the same rules of QM that govern how a measuring device becomes entangled with a particle apply equally well to you: if you observe a particle, you end up in the state
|U>|EU> + |D>|ED> where |EU> and |ED> are the quantum states describing you experiencing the particle as spin up, and as spin down respectively.
Superpositions don’t collapse, they swallow things up: the reason you don’t see them is because you get trapped inside just a small part of them. It’s sort of the same reason you don’t see the curvature of the Earth: your perspective is limited in a way that traps you in a particular point of view.
Whether or not the two branches of the superposition, |U>|EU> and |D>|ED> ought to count as parallel universes is a different question. My understanding is that Everett himself downplayed this interpretation, possibly under the influence of his advisor Wheeler, who didn’t want him to estrange Niels Bohr and the Copenhagen faction who still wielded a lot of power; but there’s some evidence that Wheeler understood the implications of his point of view. I think the idea that the different branches ought to be regarded as truly different worlds owes to Bryce DeWitt, who was one of the first physicists to really latch on to and popularize the Everettian idea.
But as I say, the ontological status of these branches of the superposition doesn’t really fit the popular notion of a parallel universe: for one thing, it ought to be possible for the two different branches to re-merge and interfere with each other, as in the two-slit experiment. Whether this is a feature that is intuitively conjured up by the phrase “parallel worlds” I leave to your judgement.
Yes and no. When smocc says that (according to many worlds) the universe enters the state “(i know the spin is up; spin up) plus (i know the spin is down; spin down)”, that means the universe actually contains an instance of you who “knows” the spin is up and one who “knows” the spin is down. If you’d bet money on that spin, there’s a rich you and a poor you. Both are equally real. “Parallel universe” maybe isn’t quite the right word for this, but it’s not too far off, either.
It legitimately is kind of crazy, but I find the arguments in favor compelling. To me, non-many worlds interpretations are sort of like if Newton presented the theory of gravitation and then said “But of course you can’t actually go to the Moon, that would be crazy, there’s an invisible wall or something”.
I really like this analogy. 🙂
Maybe it’s me, but I’ve always found explanations of decoherence somewhat handwavy. Do you know a good one?
What level of explanation are you looking for? When you say “handwavy” do you mean lacking in mathematical rigour?
Density matrices and stuff. I’m not a physicist, but I know the basic math of quantum mechanics, especially in the context of quantum computation and quantum information.
Wikipedia has something, but it seems to pull assumptions out of nowhere (e.g. the orthogonality of the environment states corresponding to the einselected basis).
Yes.
@vV_Vv, if you find one, let me know. I have Nielsen and Chuang, a standard intro to Quantum Information, sitting around somewhere but I’ve never dug into it.
@vV_Vv, my understanding is that decoherence is the off-diagonal elements of a density matrix going to zero. What counts as off-diagonal is, of course, basis dependent, so decoherence is a basis-dependent concept and depends on the context. I don’t think it’s a precise term (outside of some particular context).
Sometimes there is something that we’d really like to think of as classical (say, the state of the pointer on some piece of lab equipment, or the brainstate of a scientist making a measurement). Then the basis relative to which decoherence is accounted will be related to how this “classical” thing couples to the quantum system. For example, in quantum computing people talk about the “computational basis” which in practice might be the basis in which the computer is set up to measure output states. Then in this context when someone says decoherence they might mean off-diagonal elements of the density matrix going to zero when the density matrix is written in the computational basis.
At least, this is just the impression I get from how people use the word.
Quantum foundations people sometimes try to find some general principle for choosing a basis for defining decoherence. This is what the einselection stuff is about. I don’t understand any of that though.
I wrote this as a reply to Soy below, but it can at least partially answer your question. I can probably dig up references to concrete calculations, but it gets complicated fast.
There are basis-independent aspects of decoherence, at least if you can decompose your quantum system into subsystems (a choice of basis, but a very natural one). It can turn pure states (rho^2 = rho) into statistical mixtures (Trace(rho^2) < 1).
Intuitively, pure states correspond to systems where we have a good handle on what the wave function is, and mixed states arise when we use a statistical mixture of wavefunctions.
To give a concrete example, imagine we have a nice isolated spin pointing along the z axis:
psi = |1>
This is a pure state, rho=((1,0),(0,0)) in up-down basis.
Now imagine the spin flips and emits a microwave photon, creating an entangled state (assuming an equal amplitude to flip or not; this could be arranged by bringing the spin-flip transition into resonance with a photon mode for a certain time):
psi = (|10> + |01>)/sqrt(2)
The second index corresponds to the photon mode occupation. If we keep track of the emitted photon, we need to use pure states and a 4x4 density matrix. We could do this by e.g. storing the photon in a resonator for the ~microsecond duration of some quantum process. It would be possible to undo the entanglement too. If we lose the photon — if it flies off, as photons do — then future measurements on the spin alone only probe the reduced density matrix obtained by tracing over the photon states. A good (easy) exercise is to prove this.
The reduced density matrix in this case is rho = ((0.5, 0), (0, 0.5)), with Trace(rho^2) = 0.5. This behaves as a statistical mixture of spin up and down rather than the superposition (|0> + |1>)/sqrt(2), meaning that certain future manipulations you might want to do (e.g. rotation to point purely along a different axis) will fail unless more information is gained about the spin state. This process of entanglement with uncontrolled environmental degrees of freedom therefore shows up as noise in attempts to control a given quantum system.
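Since you call it a good exercise, here is a quick numpy check of that partial trace (your example transcribed directly; only the variable names are mine):

import numpy as np

# psi = (|10> + |01>)/sqrt(2): first index spin (1 = up), second index
# photon occupation, with |0> = [1,0] and |1> = [0,1].
psi = (np.kron([0, 1], [1, 0]) + np.kron([1, 0], [0, 1])) / np.sqrt(2)

rho_full = np.outer(psi, psi)            # 4x4 density matrix of the pair
print(np.trace(rho_full @ rho_full))     # 1.0: the joint state is pure

# Lose the photon: reshape to (spin, photon, spin', photon'), trace it out.
rho = rho_full.reshape(2, 2, 2, 2)
rho_spin = np.trace(rho, axis1=1, axis2=3)
print(rho_spin)                          # [[0.5 0. ] [0.  0.5]]
print(np.trace(rho_spin @ rho_spin))     # 0.5: a mixed state, as claimed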
I wouldn’t say that decoherence is just any change from pure state to mixed state. For example a classical bit subject to classical noise isn’t something I’d call decoherence, but this is still a pure state [[1,0],[0,0]] going to a mixed state [[1/2,0],[0,1/2]]. What I’d call decoherence, at least, would be a superposition turning into a statistical mixture, and “superposition” is a basis dependent concept.
The second paragraph of your arxiv link has this idea in mind, I think. “Decoherence is a pure quantum effect” distinguished from classical noise. But in order to distinguish quantum from classical you need to have some sense of what counts as classical, maybe some states in your hilbert space that are distinguished as “classical,” a basis relative to which you can have a sense of superposition.
Decomposition into subsystems isn’t quite a choice of basis, either. If you have a tensor factorization of the hilbert space, that will rule out some bases, but will always still allow infinitely many choices of basis compatible with the factorization.
Hey, nice comments. You’re right that there is some basis dependence to decoherence. I’ll also say I don’t have a very precise definition of the term. I think (or maybe thought) of it as an increase in the entropy of the density matrix due to entanglement with the environment, but on further reading I realized this might not be typical, and others make a distinction between energy loss and decoherence. Do you know how this distinction is made quantitative?
Of course measurement outcomes are not going to depend on the basis we describe the process in, and decoherence, or entanglement with the environment, or whatever we want to call it, is going to affect our ability to predict/control these outcomes. (I’m pretty sure you agree, but this was basically what I was going for with the original post)
A few other comments:
For the classical bit, if we rotate the initial pure state into [[1/2,1/2],[1/2,1/2]], (physically rotate it, not just change our basis!) it would quickly return to a mixed state. Would have to go through an actual calculation for e.g. a capacitor in a macroscopic superposition of charge states coupled to some bath but that’s my guess. So the system-environment interaction plays a role in selecting privileged bases. A simple case of this is systems which are allowed to reach thermal equilibrium with their environment so that rho=exp(-H/T)/Z is diagonal in the energy eigenbasis. On the other hand, do the same thing for a single charge on a superconducting dot, and there is fuller access to the Hilbert space for practical times.
My comment about choice of subsystem also implying some choice of basis was that I chose to trace over the eigenstates of one subsystem whereas I could have e.g. traced out half of the Bell states which also span the Hilbert space of two spin-1/2.
But I’d like to know more about what the tensor product structure implies about the choice of bases 🙂 I really haven’t thought about it carefully at all … does the tensor product structure imply something different than e.g. the types of bases I can make in four dimensional vector space? (I guess it also has a tensor product structure … you can tell I’m not a mathematician).
Ohhhh I get what you meant about the decomposition into subsystems. I need the tensor product structure to even define the partial trace. Totally spaced on that sorry!
So it definitely doesn’t make sense to trace out “half of the Bell states.” Oops!
If you are still interested, I can recommend this:
https://arxiv.org/pdf/1404.2635.pdf
as a good review.
Starts out with some general comments and has discussions on how to model decoherence (equations for time evolution of reduced density matrix) in sec III, IV.
Isn’t it the perfectly valid state before the experiment, i.e., what you call state(t1)? Obviously, you are already entangled with the system, since your future state correlates with the future state of the particle and the measuring device.
After the particle is measured it indeed becomes the entangled state(t2).
I think the most basic-level argument for why the many-worlds interpretation is preferable can be summarized as follows:
The famous double-slit experiment shows that to calculate physical phenomena, you need to consider all the paths a particle could take. Thus, any theory that produces correct results needs to have the mathematical machinery to consider all the possible outcomes and their interference in parallel. Then, you can show that this machinery is already enough by itself to explain observed results – you get parallel branches of “the cat is alive, I see no superposition” and “the cat is dead, I see no superposition”, both of which match what people experience.
At this point, trying to say that there actually isn’t a parallel world where the cat is alive while you see its dead body is not a simplification. The simplest mathematical theory which explains quantum interference produces “parallel worlds”. To avoid having them, you need to add extra constructs on top of your theory, like “wavefunction collapse”, which are not supported by any physical evidence (as the theory already predicts physics at least as well without them).
This. All the weirdness of many-worlds interpretations is also in the collapse interpretations… they just add something like “but then a miracle (sorry, a collapse) happens, and the world becomes non-quantum again” or “however, this is all just a magic (sorry, mathematical) formalism that makes our calculations correct; it doesn’t refer to anything real”.
Quantum physics implies the existence of parallel “states”. The only remaining question is how large the differences between the states can get, and what happens when, for all practical purposes, the different states stop interacting (because the larger the difference between the states, the less interaction there is).
Collapse interpretations say “at some unspecified moment, only one state remains and all the other states disappear”; many-world interpretations say “the differences between the states can get arbitrarily large, and all states remain”.
How is this any different from, e.g., rolling dice in a casino? The distribution of possible outcomes for a particular game is well known before you arrive on-site, and up until the dice come to rest their actual value isn’t known. But once they do, they have a defined value. Does this also somehow mean that there is a universe in which every different result occurred as well?
With classical probability, (e.g. idealized dice games), probabilities are always positive and sum up additively between different branches, so there is no difference between positing that the universe splits, or positing that at each point of randomness a fixed one of the possible outcomes happen and no others.
With quantum mechanics though, instead of each state having a probability at any given point, instead, each state has an “amplitude”. Amplitudes behave a little differently than probabilities. They can have different phases, as in the phase angle of a complex number, and cancel out. So for example, imagine from state A you’re 50-50 to go to B or C, and from B, you’re 50-50 to go to E or F, and from C, you’re 50-50 to go to F or G. Classically, that means the final probability of E-F-G would be 25-50-25, but with quantum mechanics, it is possible that the 25% going to F from B and the 25% going to F from C have different phase and cancel each other out, so depending on the phase angle of each split/interaction (which you would additionally need to specify), the final chance of E-F-G could be 50-0-50, for example. This is called interference. (This is also a little simplified, but follows the spirit of the math).
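To see the cancellation with actual numbers, here is a version of the same point that stays exactly unitary; note I’ve swapped the E-F-G cartoon for the standard two-beam-splitter (Mach-Zehnder) setup, where the 50-50 split is a Hadamard matrix.

import numpy as np

# A 50-50 "beam splitter": each input goes to both outputs with equal
# probability, but one path picks up a minus sign in its amplitude.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

start = np.array([1.0, 0.0])
after_one = H @ start          # amplitudes [0.707, 0.707]
after_two = H @ after_one      # amplitudes [1, 0]: the |1> paths cancelled

print(np.abs(after_one) ** 2)  # [0.5 0.5] -- measure here: a fair coin
print(np.abs(after_two) ** 2)  # [1. 0.] -- interference, not a classical 50-50

Classically, two fair splits in a row still give 50-50; the second line shows the two amplitude paths into |1> (+1/2 via one branch, -1/2 via the other) cancelling exactly.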
So with quantum mechanics, due to the mechanism of interference, having branching and superpositions actually has observable consequences compared to saying that at each point of probability, a specific thing and only that thing happens. And of course, reality appears to behave consistently with the version where you do have superpositions, so that’s what you have to explain.
From there, it’s not too large a step to the Everett view, which basically says that if arbitrary particles can be in superposition, and you yourself are a human-shaped collection of particles, you yourself should go into superposition when you interact with the particles as well.
If you dig further into the math, you’ll also find that there isn’t necessarily a discrete notion of different “branches” either – this is also a simplification of the actual view, which is closer to something like: you have a continuously evolving amplitude distribution in a *very* high-dimensional space. Particular “blobs” of amplitude we might colloquially identify with “this branch” or “that branch” using some particular basis, but it’s not as if nature says “this is the point where you branched”; in reality it’s more of a continuum than that.
That’s why I mentioned the double-slit experiment. It demonstrates that physics doesn’t work this way.
If you send particles one at a time, they still produce an interference pattern. Consider the state of the system after the particle has passed through one of the slits, but before it hits the target. If you could describe the situation as “50% probability the system is in a state where the particle went through slit A” and “50% probability the system is in a state where the particle went through slit B”, then there could be no interference pattern – in either case the system would be in a state where the particle simply went through one slit, the existence of the other slit would be completely irrelevant, and there would be nothing to produce interference.
You need to be able to say “the system is in a state where the particle went 50% through slit A and 50% through slit B, and these alternatives interfere like this”. This is like parallel worlds: you need to be able to say that the “true state of the multiverse” contains 30% world A, 30% world B and 40% world C. You need a mixture that has specified portions of each world, not just to pick a single world with specified probabilities.
Well, here’s one difference with dice rolls. Classical uncertainty like the uncertainty in the outcome of a dice roll isn’t fundamental. Classical uncertainty has to do with our ignorance. If you knew everything about the die’s shape, and exactly how it was thrown, and exactly what the microscopic properties of the table were, etc., you’d be able to predict the outcome, in principle. This isn’t the case with quantum uncertainty. With quantum uncertainty the nonrealized outcomes aren’t simply outcomes we can’t rule out due to our ignorance. Even an omniscient being couldn’t predict the outcome. Quantum uncertainty is fundamental.
That could never happen in science. If the model works, it must correspond directly to the underlying phenomena, or at least so we have to assume lest we get our empiricist card revoked.
https://en.wikipedia.org/wiki/Deferent_and_epicycle
Tangentially: is it correct to assume that “all possible worlds” does not include “all imaginable worlds”? It seems obvious to me that it shouldn’t, though I have seen popular/lay arguments that assume it does.
You are correct.
It means all worlds that evolve from the initial conditions with a non-zero probability.
Which, to be fair, is pretty damn close to “all possible worlds (that obey the same laws of physics)”
But those non-zero probabilities include all the atoms in my body spontaneously teleporting 5 meters to the left, right? So in practice, most of the things we could imagine, including the very unlikely universe containing a superhero who flies by quantum coincidence, would still be possible worlds.
I don’t think it’s “flying” if the underlying mechanism is coincidence rather than volition.
In a highly unlikely but still theoretically possible world, those coincidences line up exactly with his volition.
Also including the unlikely universe where all the water at one point in the Red Sea spontaneously teleports away leaving a dry pathway from one coast to the other, and where the nearby water does not drown that pathway for several hours.
Hmm…
This seems like a good excuse to mention David Drake’s The Dragon Lord. There’s magic which works by pulling moments from other universes. A dragon can’t live on earth, but you have something which might as well be a dragon by pulling an appropriate series of dragon moments from other universes.
The information-handling required isn’t addressed, and as I recall neither is what happened to a dragon who’d had a moment snipped out.
It’s also an early example of a nasty version of a standard story– Arthur and his knights are a bunch of thugs. The two viewpoint characters are somewhat better but not good guys.
Yes, “possible worlds” include all kinds of technically lawful miracles. But the less likely they are, the smaller the amplitude of the result.
Any particle in your body can spontaneously teleport anywhere, but the probability that one specific particle teleports 5 meters to the left is quite small; and to teleport your entire body, that is exponentially less likely, where the exponent is the number of particles in your body.
So, the worlds with miracles are there; but all of them combined are still only a negligible part of the whole.
Could you tell me exactly how likely? I’m going to a party later, and would like to make the atoms of the hostess’ dress jump a few meters to the side.
As David Friedman says.
The time evolution of quantum mechanics still conserves energy and momentum and the like, so “all possible worlds” doesn’t include ones where energy conservation is violated. Or charge conservation, or any number of other forbidden things happen.
There’s a caveat whereby energy can appear to maybe be non-conserved if you were in a state that did not have a definite value of energy to begin with.
First, what does it mean to interpret a physical theory? It means to ask what an observer sees. But to interpret QM usually means not the absolute question but the relative one: why does it seem so close to classical mechanics?
Traditionally you get a QM model by starting with a classical model and performing a transformation to it. This gives a family of models with a parameter, Planck’s constant h. The deviation of QM from classical is supposed to be of size h. When h goes to zero, you are supposed to recover the classical model. What you actually recover in, say, von Neumann’s formalism, is not a single world evolving according to classical mechanics, but a probability distribution over all possible worlds, each of which is evolving independently. Since they evolve independently, they are perfectly “parallel” in exactly the way that @smocc condemns.
These parallel classical worlds aren’t true. Of course they aren’t true, because classical physics isn’t true. The question was “in what sense is QM approximately classical,” so these parallel worlds are exactly what we have to talk about. QM is approximately a probability distribution over classical worlds, each of which evolves approximately according to classical physics. Where “approximately” means proportional to h. The interactions between the worlds are proportional to h.
@Douglas Knight
This is a really, really great answer, thanks to which I feel I finally get it. Thank you!
Let me try to rephrase it, so you all can check whether I got it wrong:
So, the apparent paradox is a result of a misunderstanding. The equations of quantum mechanics do not have a single classical solution, but this is fine, and it does not mean there are multiple classical worlds, because our world is not a classical world. Our world is a quantum world, and there is only one.
I’m really not sure what you mean. Time evolution is deterministic. And it’s linear, so the only ways that the parallel universes interact are (1) one universe spawns a cloud of others and (2) interference, because it’s not a probability distribution but an “amplitude,” a detail that lies outside of my sketch. [(2a) even if all amplitudes are positive, probability is non-linear in superposition and (2b) because amplitudes can be negative, cancellation]
A shot in the dark:
In canonical quantization, where we start with a classical system, the (pure) states are named after the states of the classical system. But these names are only approximately (i.e., to order h) correct. People often describe QM as Schrödinger evolution followed by collapse to a pure state; as a chain of discrete steps through classical space. But classical mechanics is false, so there is a metaphysical error in claiming that these are classical states. But they are h-approximately like the classical states, and that’s OK. A much more popular related complaint is that this depends on the choice of basis.
@Douglas Knight
I tried to be metaphorical, clearly too much. I understand that QM model is in certain contexts closer to what we observe than classical model. Your comment made me realize that interpretations of QM model are attempts to reconcile it with classical model, not with “reality”. In this context Everett interpretation makes intuitive sense.
Perhaps this should be obvious, but I don’t know anything about physics beyond what I learnt at high school and from popularizations.
Disclaimer: I’m pretty amateur when it comes to quantum physics, so I’m looking for explanations/citations, rather than an argument here.
Other than that’s the way we primitively observe/experience time ourselves, what makes you believe time is linear, rather than corresponding to an actual physical dimension? (I assume you mean by linear that time’s arrow only points one way and can only progress forward little by little.)
I tend to think of time as being like solid objects. Sure, they appear solid to us when we observe them because of how our perception is constructed, but in reality they aren’t actually solid and are instead something like 99.9999999999996% empty space. It’s the energy in objects and the interacting forces which make things feel solid to us.
Conceptually, something/someone outside of time (as we are 3 dimensional, if it were 4 dimensional, I believe is the usual description) would be able to perceive the past and the future simultaneously (so not have a present), while we only perceive the present while (poorly) remembering the past and (even more poorly) predicting the future.
I think that’s what all of the theories are. Except that the demons are being built on something like epicycles- a system of kludges to get working results, rather than something like gravity.
Well, I disagree with that, but this is already a long thread so I do not want to get it sidetracked into a discussion of what the scientific method is.
I was scoffing at the idea of an RPG group with 8 players somewhere else on the site, but the question stuck in my mind. How universal is my experience? How do other folks do this?
So, tabletop players of SSC, a few questions about your campaigns.
On your favorite campaign that ‘succeeded’ (that is, proceeded to its story conclusion, or lasted, let’s say, at least a year):
1. How often did/do you meet?
2. How many players in the group, counting the DM/GM/HHG/Whatever?
3. How long is a typical session?
Thanks for responding!
My answers are:
1. Once a week
2. 4 players is best
3. ~6 hours.
I don’t think I’ve ever had a campaign last a year (I only started playing after college). But I’ll answer anyway:
1. Weekly, with probably one cancellation per month
2. 4.5 players (my wife tended to fall asleep at 10:00).
3. 5 hours
My answers are for Tabletop:
Weekly; 5 (four PC and a GM); 3-4 hours, for what I’ve run.
Best ever was:
Weekly, but systems were different on consecutive weeks (two parallel games); Roughly 7 at the table (average closer to 5, but frequently enough 7); and 4-5 hours.
For LARP (which is an RPG group, but very different style), it was
20-30 players, once a week, for 4-5 hours. But, again, very different dynamics.
1. Weekly, with about one cancellation per season.
2. 4-5 including DM.
3. My face-to-face DM and I both start to run out of mental energy after 4 hours (we’re talking 5E and ACKS here). I ran a 3.5 campaign that lasted more than 16 months and I tried to push players to dedicate 6-hour blocks because I had exploration and enemies prepared so far in advance of the game’s slow combat system.
Our ACKS campaign has had very few cancellations, all with good advance warning, and that is so nice.
1. In my college group, our longest running campaign has been running for four years now. It was intended to run every two weeks, but we had long breaks due to summer and, following graduation, more long breaks due to everyone being unreliable as shit. It’s probably every four weeks on average.
2. It’s varied dramatically. When the campaign started there were nine of us, I think. That quickly grew to, I am not shitting you, about nineteen, which lasted for all of one session before we rebelled against this ridiculous state of affairs. It was split into two groups of about eight running parallel campaigns, and then shrunk naturally to seven. Since graduation it’s been five.
3. Five to six hours, but with a food break.
Nineteen is only the second largest campaign I’ve been in. We ran a Maid one shot with a solid twenty five people. It required the DM to run a scene with about four or five people at a time while all the rest f%^&ed off to parts unknown for a while. It actually didn’t work half badly because the DM kept each of these moving really quickly, but practically it still meant letting folks sit around distracted for long periods of time.
Addendum: Is it just me or is it a lot, lot harder to listen to more than one person in a voicechat compared to meatspace? Interrupting another player is always rude, of course, but at least in person it’s usually to have a brief side conversation or whatever. This doesn’t seem to work in voicechat at all, and I can’t just ignore anybody either, except by muting them I guess.
We had a problem player back in college who would bring a mandolin and sit there playing it during the session. (This at least had the excuse of being in character, since he was a bard.) It was annoying but at the same time pretty easy to ignore. But he tried it on voicechat once and it made it completely impossible to hear anyone. Anyway, this definitely makes playing the game harder for me.
Much harder, and it’s much harder to start talking because of the lag, I think. It’s much harder to play over a voice connection.
My longest-running campaign, back in college, met weekly for 3-4 hours. It peaked at 7 players early on, but stabilized at around 4 (sometimes 5, we had a couple of inconsistently available players) within a few sessions.
This was a 3.5 game, so it ended up being pretty slow going; once we gained a few levels, a big fight could easily take the whole session. And I strongly suspect my DM was doing a lot of fudging to keep it that low, although she was usually pretty good at hiding it.
I ran a D&D 5e campaign that lasted about a year and a half, concluded successfully, met every other week on average, had 4 people including me, and ran ~4-5 hours per session.
Nice callout to Nobilis.
Most of the campaigns of my local group have been theoretically every two weeks, probably landing more like every three weeks once you take into account schedule sync. They’ve generally had 6 players + the GM, and sessions have been 5-6 hours.
We have at times had more players — one Star Wars game technically had 10 PCs (plus the GM), but never all at once, and some of them really very rarely.
… one thing that I’ve learned is that, if you follow the assumption that you’re telling the story of the PCs going from Level 1 to the height of their power, 5E gives you fewer sessions to proceed to that conclusion than 3.x, which in turn gives you fewer than Old School D&D. 5E official play is based around leveling up after every 8 hours of play. 3.x had XP awards that would get PCs to Level 20 after 250 encounters that each drained 25% of their daily resources. Old School means that if the recommended 75% of XP comes from treasure, a party of 4 has to kill ~266 men and take their stuff to get to Level 3.
My answers are “1). about once every two months, online, 2). three people, and 3). 6 hours”. The biggest problem with running long and/or large campaigns is, in my experience, scheduling. Due to real-world concerns, it is nearly impossible to get multiple people in the same place at the same time, even if the “place” is virtual. Sure, one could always plan a campaign with scheduling conflicts in mind, so that players can be rotated in and out as needed; but this quickly becomes boring for most people, who no longer feel like they have any impact on the story.
What I would do: the story is the impact a large group of adventurers has on the world. They have a ship that requires at least 30 rowers. When a session ends with the active players away from the ship, anyone playing next session who wasn’t there last time covered the distance off-screen and comes charging to the rescue.
Yes, this is the way we would usually handle the issue… but like I said, this severely dilutes each person’s investment in the story. It also creates a costly time investment — for both the GM and the players — that is required to keep everyone reasonably up to speed.
It’s been quite a few years, but IIRC, my all-time favourite gaming group was something like this:
– once a week
– a lot more than 4 players – we had more than 4 even when we had 2 missing, which wasn’t unusual
– but this is where my memory gets fuzzy.
– not sure how late we played. certainly at least 4 hours.
Added info
– we generally played 1st edition AD&D + Unearthed Arcana
– we all had miniatures for our characters, and we enjoyed doing our battles as a table top battle setup, though we didn’t take things to the same extremes as those whose game *is* the tabletop battles alone
– I think we may have had some kind of setup where the main GM got breaks by having an alternate GM run their own campaign. At least one of those experimented with 2nd edition rules. (Might have been alternate groups, or in their own block of time, or even intermittently. I really only remember the main campaign.)
– All this was happening in the early 1990s. The main campaign was very high level; I’d joined later than most, and nonetheless had a druid that was getting very close to the level where the rules stipulated an explicitly limited number of druids of that level in the world, and I’d have been fighting an NPC to level.
The longest campaign I’ve run that was definitely successful ran for… perhaps 15 or 16 months? I didn’t keep track of dates of play in my notes. However, the group itself has been meeting for over five years.
1. Once a week. I’d say a game only gets cancelled once every 2 or 3 months.
2. The group plays with minimum 3 including GM. The most people in the room at once was 7, current max is 6. I think that one GM plus 5 players is the most that can be handled well.
3. Anywhere between 3-6 hours, 4 or 5 more normal.
This is with multiple GMs running multiple campaigns, but never with GMs trading off responsibility for the same campaign. “Oh hey man I have a cool idea for an adventure, make a PC and I’ll have my guy go somewhere else” kills campaigns dead in my experience. I also find that having a set time every week to meet is vastly superior to trying to meet once a week and hash out when on an ad hoc basis.
Scheduling once a week just seems to be impossible, even with a group of 4-5. People seem to load up their schedules with stuff all the time. It’s not even a kids thing, the people that are hardest to get hold of are the people without kids that seem to want to party every weekend and can’t stay beholden to any plans (we’ve basically cut those people out, since they clearly don’t think the group is a priority).
Once a week, regularly scheduled. I’ve been playing with the same group for about 6.5 years now, first in person, and now online. We cancel reasonably often because we don’t like playing with people missing, although that’s not a hard-and-fast rule. Usually the GM and 4 players, although we’ve had 5 on a couple of occasions. Typical session is only about 2 hours these days because we usually play weeknights, and have somebody who gets off work late.
1. We aim for biweekly, but push back if anyone’s unavailable, so we can do 1-2 times per month.
2. Six counting the GM.
3. 3.5-5.5 hours
IMHO, large parties (>5) in DnD 5e are a challenge because the game isn’t designed for them, but a good GM can cope. The biggest challenges are
a. Balancing combat encounters, because the “action economy” means that a group becomes disproportionately more powerful as you add members, which requires large groups on the other side, or monsters that are effectively groups because they can do several things at once. It also means that concentrating fire on individual members can be super-deadly. (There’s a toy illustration of this after this list.)
b. Keeping people from being bored – if you have 7 players role playing, a few need to cool their jets for quite a while until things get to them. If you have 7 players in combat against 4-10 opponents, it’s even harder to keep things moving.
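Here’s the toy illustration promised above in a.: a crude fixed-damage slugfest in Python, with every number invented, just to show how lopsided the action economy gets as the party grows.

# Each side deals fixed damage per living member per round, spread over a
# shared hit-point pool; members drop (and stop acting) as the pool empties.
def fight(pcs, monsters, pc_hp=30, mon_hp=30, pc_dmg=8, mon_dmg=8):
    pc_pool, mon_pool = pcs * pc_hp, monsters * mon_hp
    pcs_alive, mons_alive = pcs, monsters
    rounds = 0
    while pc_pool > 0 and mon_pool > 0:
        mon_pool -= pcs_alive * pc_dmg
        mons_alive = max(0, -(-mon_pool // mon_hp))   # ceiling division
        pc_pool -= mons_alive * mon_dmg
        pcs_alive = max(0, -(-pc_pool // pc_hp))
        rounds += 1
    return rounds, ("party wins" if pc_pool > 0 else "party wiped")

for n in (3, 5, 7):
    print(n, "PCs vs 4 monsters:", fight(n, 4))

With these made-up numbers, 3 PCs get wiped, 5 win in four rounds, and 7 win in three while barely getting scratched: adding bodies helps much faster than linearly.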
<a href="https://www.the-scientist.com/news-opinion/microbes-may-take-some-of-the-blame-for-the-reproducibility-crisis-65707?utm_campaign=TS_DAILY+NEWSLETTER_2019&utm_source=hs_email&utm_medium=email&utm_content=71535657&_hsenc=p2ANqtz–GKd9DUSY9Z_Wm-HuH8X7mM-CG1jtif7Z6HWr_Z7wCznajWETNCs7rZPXsJonr97-YTkS5SW4nh8DvfV8Z0phEzCQOjg&_hsmi=71535657&fbclid=IwAR2yra5CvKqSjPrfhpEMl1IovoXtxxHmKXy5HznZZrp6eSu-eQIY5Yb8U9o">Variations in mouse microbiomes may explain part of the replication crisis</a>
Fixed:
Variations in mouse microbiomes may explain part of the replication crisis
Summary: genetically similar mice purchased from different vendors have different gut bacteria, causing different study outcomes.
Thank you.
My take is somewhere between “yeah, biology is like that” and “I wonder whether there’s a correlation between scientists’ gut biomes and their likelihood of doing replicable studies”
Perhaps saying “eat shit” to scientists is a useful suggestion for a study into replication problems?
😛
Suppose you took Hera’s offer to rule Asia. What would you do as ruler?
Do what I do every day: Try to take over the world.
“Egad! You astound me, Brain!”
Find the closest moderately responsible person and abdicate immediately? I am horribly unqualified for governing a continent.
Clearly you are qualified by virtue of your reluctance to take the post. Unless this is a clever double-bluff…
The logic of voluntary exchange is pretty sound: by giving up something they want less, to acquire something they want more (with relative levels of “want” demonstrated by the terms of the exchange and the fact that they agreed to it), both parties in a voluntary exchange are better off afterward. I considered myself a libertarian for about ten years between my late teens and late 20s, and I would cite this concept a lot, especially in college when I sometimes got into debates with left-wingers.
But what I can’t remember is how this concept, at least when used in arguments for a more laissez-faire system (or against a less laissez-faire one), accounts for things like buyer’s remorse, or human irrationality, or the fact that people’s circumstances sometimes change, so that what was a mutually beneficial exchange at the time it was made becomes non-beneficial soon after. Also, consumers often have very poor information, and accessing good information is often a chore.
When I was a libertarian I’d probably have shrugged and said “Too bad. It’s just one more reason to educate yourself, learn to think critically, and be more rational, and it’s good that we have selective pressure to do those things” but now that doesn’t feel like a sufficient answer.
So, how would libertarians here address these issues?
I think that the track record of external organizations knowing what’s a good deal for an individual, better than said individual, is pretty bad. Sure, your own understanding of your wants/needs is irritatingly imperfect. Some large, impersonal rule-based organization’s understanding of your wants/needs is (usually) terrible.
Exceptions exist for particularly incompetent individuals and particularly straightforward wants/needs (like, “don’t hurt me,” and note that even that gets complicated quickly).
EDIT: This may be culture-war? Not sure. Seems like it could rapidly go that direction. Fine with my post being deleted.
Like JohnNV’s answer, I would file this answer under “Voluntary mutual exchange isn’t perfect but it’s better than the alternative”.
But it’s got me thinking: are there any times when some type of relatively centralized social decision-making infrastructure does actually have that society’s best interests in mind better than the average individual member of that society?
The example that comes to mind is the Amish, in which a central body (the elders of a given community) creates rules (in the form of the Ordnung; plural Ordnungen) governing the kinds of exchanges members can participate in (e.g. you can ride in a car, but may not own or drive one). Has this system actually hindered the Amish, or is it key to their success? Or is it irrelevant?
David Friedman is both a libertarian and has studied the Amish, so his answer would be especially valuable.
Well, in terms of outlawing transactions, I think you can think about it in a few categories.
1) The transaction is believed to be net harmful to society, and arguably to at least one party to the transaction. Possibilities include gambling, sales of alcohol and drugs (to minors or to anyone), sex work, vote buying, murder for hire. As a libertarian-ish thinker, I approach those skeptically, but I believe there are some transactions in that class.
2) The transaction is believed to be so frequently abusive to one of the members that it requires regulation. Examples would include mandatory return periods for home or car purchases, outlawing payday loans, etc. I’m particularly suspicious of these.
3) We think we can streamline the transaction through regulations. For example, regulations requiring disclosure of GMOs, disclosure of various home-sale terms, requirements for a simplified or consistent disclosure form, etc.
I’m interested in how to handle these cases. Since I’m not a dogmatic libertarian, here’s my current thinking:
Gambling: maybe ban? Casinos prey on people who can’t do math, letting their dopamine receptors hurt them. People who know how to win get banned by the casinos, so it only seems fair to ban the casinos from the people who don’t.
Sales of drugs and alcohol: dogmatically legalizing heroin seems likely to have worse net consequences. Making beta blockers a controlled substance is excessive. Prohibition of all addictive drugs has some famous failures. So less of this seems good.
Sex work: don’t ban.
Vote buying: Well it’s not illegal when the candidate offers you money…
Murder for hire: Yeah ban this.
I’ve read in a few places that opioids are not actually that harmful if they’re pure and the dose is controlled. Most of the harm of black market heroin comes from adulterants, uncertain potency, and the effects of regular injections. If this is true, the harms of prohibition are massively greater than the benefits:
• Overdose because of the uncertain potency of black market opioids (because of adulterants, or because different opioids may be passed off as heroin), and (I guess) because of the difficulty for a drug addict of accurately measuring a powder. In a legal market, dosage is more accurate. (AFAIK the recent surge of overdoses in the US was caused by making it harder to get prescription opioids, so many addicts turned to the black market.)
• Harm caused by adulterants.
• New synthetic opioids may be cheaper, and they may be more harmful, their effects may be less studied, and they may require more frequent administration.
• I think people inject in large part because heroin is expensive due to being banned, and less is needed for the same effect when injecting than with other routes of administration. Frequent injections create a risk of thrombosis and increase the harm caused by any impurity.
• I presume illegality is also the reason some people who inject share needles, transmitting infections (indirectly also affecting others).
• The high cost of the drugs causes additional problems to drug users, and it may also cause them to commit crimes that harm others.
• Indirect harms caused by gang activity, such as shootouts that sometimes hurt innocents as well.
• The cost of incarcerating drug users and dealers, borne by the state, the convicts, and their families; incarceration also makes them more hardened criminals, and a conviction (even without incarceration) makes it harder to get employment, possibly causing them to commit more crimes.
Even if it isn’t true that pure opioids of a controlled dose are not that harmful, it’s quite likely that all the above harms outweigh the benefits of prohibition. And, in particular, harm to others than the drug users is made much greater by prohibition, and IMO that should count much more than the harm to those who voluntarily make the decision to use drugs.
I don’t do casino gambling but I have friends who do. I don’t think it’s that they cannot do math but that they are willing to pay, on average, for the fun and excitement.
Then there are those of us who made tons of money gambling online and were quite annoyed when the Safe Ports Act was passed and Neteller shut down.
I think for drugs, gambling, any vice- most people enjoy it without any problem. Some people have problems, and those people should be helped. Maybe it makes sense for the industry to pay for some of that.
@Le Maistre Chat I used to think the same about gambling in terms of why people do it, then I watched this Louis Theroux documentary on gambling in Las Vegas. Going into it, understanding that he was looking at people who gamble often and treat it as a serious hobby, I assumed it was going to be all people counting cards and arbitraging any place where odds temporarily go against the house. Instead it’s just businessmen on weekend trips losing money on roulette and a woman who has essentially retired to the casino to slowly destroy the inheritance she would be leaving her son on penny slots.
Anyway, I think in general gambling makes more sense to treat, as David Friedman does, as a transaction, paying money for the atmosphere, some ‘free gifts’, and the experience. It kind of doesn’t seem any less seedy, but heavy gamblers have to know they’re not making any kind of smart investment, just renting a very expensive space where they get to imagine they could strike it rich without doing any work, and it still works as intended for them if they never get rich. It probably makes sense to model this as an addiction, as strong as any other that doesn’t become a physical dependency, but it’s not as cut and dried as, “These casinos are masking the fact that gambling at slots is, on average a losing enterprise, and are fleecing otherwise intelligent and hard-working people.”
I think that the success of governing bodies relates to:
1. Small size
2. Closeness to the governed domain
3. Homogenous governed community
4. Release valve/alternatives
Note that Amish central bodies don’t actually have coercive government power, and I think this is a key part of why they work. Also note that #1-3 are a very ordinary argument, but I’d like to focus some part of the discussion on “What do you lose by having a small, close, homogenous community?” rather than “How can we make sure everyone exists in a small, close, homogenous community?”
In principle, for most Amish congregations, any change in the Ordnung is by unanimous consent of the members. There is presumably some social pressure to go along with the changes that the clergy propose.
And there is nothing to prevent an individual who disagrees with the Ordnung of his congregation from joining a different one. The Ordnung is specific to the congregation, and congregations are generally 25 to 40 households. Whether he can change congregations without physically moving depends on whether he is in a community with overlapping congregations.
This. Don’t expect a system of libertarian exchanges to be utopian: we’re never going to have a perfect social system before the eschaton. Just compare it to known alternatives in the material world.
I’m mildly libertarian. I think my response is that yes, consumers often make mistakes that they later regret. But at least they have a vested interest in getting the answer right and trying to satisfy their own preferences. It’s hard to believe a third party with no idea who the consumer is could make decisions for the consumer that result in better outcomes. After all, no government organization could possibly analyze every consumer–merchant interaction on an individual basis and rationally decide whether the consumer was making a wise decision; we probably make hundreds of these decisions per day. So the best a government can do is make broad, sweeping rules that categorically ban (or mandate) certain types of transactions that it believes aren’t in the interests of one party or another. The question is how often those bans prevent people from making decisions they would later regret, versus how often the bans just get in the way of people doing what they actually, genuinely want.
I’ll file this under “Voluntary mutual exchange isn’t perfect but it’s better than the alternative”.
Another consideration is how an exchange affects third parties. For example, if a large enough percentage of car buyers opt for automatic transmissions, car manufacturers respond by discontinuing manual transmissions in their new lineups, causing most car buyers in the future to not have the option at all.
Producing two different kinds of transmission, instead of just one, indeed has a fixed cost, so if too few people prefer manual, manufacturers may indeed drop it. However, if the minority who prefer manual are willing to pay enough extra to cover the cost of maintaining its production, then car makers will keep producing it. If they aren’t willing to pay enough, that suggests the benefit of driving a manual car (for those who prefer it) is less than the cost of continuing to produce manuals. In that case, making those who prefer automatic worse off in order to maintain the production of manual cars (e.g. by requiring a certain percentage of cars sold to be manual, or by requiring manufacturers to sell both at the same price) would be both unfair and bad for society overall.
Also, if I buy an automatic car, I’m not making those who prefer manual worse off compared to if I don’t buy a car at all (which I have the right to do, and which should, IMO, be considered the baseline). It might only make them worse off compared to me buying a manual.
Considering the number of models of automobile in production, and indeed the number of discrete engines and powertrains, I don’t think you get manual transmissions going away unless there is a truly overwhelming consensus among drivers that manual transmissions are not wanted. There may not be a manual transmission option available in every product line, but that’s to be expected and should not be a problem.
And if desire for manual transmissions becomes a sufficiently small niche that it won’t even support a handful of specialized models, then so be it – it has never been a market failure that tiny niche demands don’t get the benefits of high-rate mass production, and it certainly isn’t something that ought to be blamed on all those thoughtless automatic-transmission drivers buying the cars that they want without considering the “harm” they are causing to third parties by not instead subsidizing niche demand through buying stuff they don’t want.
“The question is…”
Under the labor law of various European countries, when you go on a business trip you must receive per diem, either from the organization sending you or from the one receiving you. This appears to be an example of a law meant to protect workers, but in my case it can become a damned nuisance. I would much prefer that the employer be obligated to offer per diem, but that I not be required to take it, since the per diem is often so generous that it becomes the limiting budgetary constraint for the length of what could be a longer and more useful business trip.
In a similar vein, one summer as a student, I had two jobs at my college, tutoring and research. On paper, I was working too many hours, so human resources complained to my research advisor, who tried to explain to them that I wasn’t being exploited since my duties consisted of lying on the sofa, staring at the ceiling, and thinking, which I would be doing anyway. In the end, I was forced to drop half my tutoring load to keep the research job, making me poorer but not significantly less “exploited”.
Why couldn’t you drop the research? You would still be doing it, right?
Or was the research better paid than tutoring?
He had to do the research no matter what, I think. So if he can get paid for it, it’s probably better to keep that and drop the other; same money and less workload.
Assuming equal pay; tutoring probably paid less unless it was a group session.
Defining “beneficial” in a somewhat unusual and, in extremis, circular way that includes lying in a gutter ODing is one way round it.
Yeah, well, step 1 of this thought experiment would be asking whether more or fewer people would do that in Libertarian-land.
No one intentionally overdoses. At most, one may decide to enjoy a drug, and accept some risk of overdose while doing so, in which case that entire risk-benefit profile can be considered beneficial, if we assume that someone’s voluntary choice should be automatically considered beneficial to that person. That assumption is not unique to libertarians, but also used by preference-utilitarians among others.
Alternatively, if we are unwilling to define a voluntary choice as beneficial when it’s detrimental in the eyes of some outside observer, it can be argued that what should matter is not how well off you end up, but how well off you can end up if you make the right choices, assuming the necessary information to make those choices is available to you. That is, we shouldn’t make people who are making the right choices worse off just so that people who voluntarily make bad choices can’t hurt themselves.
> No one intentionally overdoses.
Some of the professional opinion I’ve heard on the radio in my neck of the woods, impacted by the opioid crisis, is that around 40% of OD fatalities are suicides.
A few answers:
1) Transactions don’t have to be awesome – it’s enough if no one else is better than the person involved at recognizing the best choice given the alternatives. This comes up with payday lending, lottery playing, etc. To someone who’s not in that situation those sound like terrible deals, but when you look closely, (a) there are often benefits relative to the alternatives that central planners don’t understand, and (b) outlawing the legal transactions we find abusive often drives people to worse, illegal alternatives.
2) Lack of knowledge is often rational. I think Bryan Caplan has written a lot about “rational ignorance”: given the cost of acquiring knowledge, it never makes sense to have perfect knowledge, and there’s an optimal level.
There is a notion of “euvoluntary exchange” lately popular among some libertarian philosophers which captures some of the objections you mention by imposing stronger conditions than for “merely voluntary” exchange, and then asks whether and when, given that we believe euvoluntary exchange is just by the standard logic you mention, more loosely voluntary exchanges can also be determined to be just. Example paper:
http://people.duke.edu/~munger/euvol.pdf
As an attorney, I can say there are some old common law concepts that could be grafted onto libertarian exchanges to deal with something like this. This is probably not practical for your average grocery store purchase (although there are doctrines applicable there as well), but the common law does have concepts of uncontemplated windfalls, and doctrines to deal with them. Something as silly (or not) as a sow (or cow, I forget which was in the case we learned) that was assumed barren becoming pregnant could be cause for rescission. Obviously there have always been complex covenants that travel with land sales, like the guarantee that you won’t unexpectedly have a hidden termite nest in the basement, etc.
One thing is true, which is that the law usually has tried to only punish the dishonest or the appearance of dishonesty, and stupidity alone has rarely been a reason for rescission (outside of those the courts deem unable to manage their own affairs, like Lennie from Of Mice and Men).
A cow—Rose of Aberlone.
Any legal system has to have rules of interpretation, since there is never enough fine print to cover all things that could happen.
People make decisions based on (roughly speaking) the expected value of the benefits of different choices (as well as risks etc.). So in terms of probability distributions, the exchange is still mutually beneficial when it’s made.
Also, if the exchange later becomes bad for both people, they can voluntarily undo it. If it becomes bad for one party (A), and it’s still good for the other party (B) but less than it’s bad for A (in monetary terms), then A can pay B to undo it.
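A toy illustration of that last point, with made-up dollar amounts (only the A/B setup comes from the comment above):

# The deal now costs A more than it benefits B, so any side payment
# between the two amounts unwinds it and leaves both parties better off.
a_loss_if_kept = 25  # A is $25 worse off if the exchange stands
b_gain_if_kept = 10  # B is $10 better off if the exchange stands

if a_loss_if_kept > b_gain_if_kept:
    payment = (a_loss_if_kept + b_gain_if_kept) / 2  # $17.50 from A to B
    print(f"A pays B ${payment:.2f} to undo the deal: "
          f"A ends up ${a_loss_if_kept - payment:.2f} ahead of keeping it, "
          f"B ends up ${payment - b_gain_if_kept:.2f} ahead.")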
That’s at most a reason to require sellers to give more information, not a reason to outright ban certain exchanges.
Arguably it shouldn’t even be necessary to require giving certain information; just make any information given on product packaging legally binding. Then, if consumers want to see some piece of information, they will prefer to buy products that provide it, so companies will have an interest in disclosing it. (At least the makers of products that score well on a given metric will have an interest in disclosing it, and consumers can then assume the worst about a product that doesn’t.) That said, requiring certain disclosures has little downside, and it’s possible that otherwise companies wouldn’t give out enough information out of inertia, so I can accept laws that require giving certain information. (Though even that is not costless – see those annoying cookie notifications that everyone agrees to anyway.)
Why not? Would you not prefer being given the information you need to make a decision, being given a recommendation, but being allowed to make the final choice?
Warranties and return policies help with information asymmetries and regret. Amazon has fantastic return policies, and a few times that I’ve complained they have given me a complete refund and told me to keep the product. Customer satisfaction leads to return business, giving an incentive for good companies to address concerns about regret.
Not sure what you’re asking. All systems are vulnerable to human irrationality. In Libertarianism, it only harms the irrational person. In a system where you’re ruled by others, you’re harmed by them when they’re irrational.
The information problem is a general critique of a naive belief in perfect free markets that always produce perfect outcomes in every instance. We don’t need anything near that to outcompete every other system. Just look around you!
Request for the next survey: some sort of measure of attachment style. I hypothesize that avoidants will tend to have more libertarian politics.
How did pre-modern soldiers survive conflicts? I’m especially thinking of sword fights. If you’ve ever watched any HEMA or Kendo or Dog Brothers (which is fantastic BTW), in any melee conflict it’s impossible for them not to hit each other dozens of times in close contact. If two swordsmen face off without heavy armor, it seems like both men would be bleeding heavily in under a minute, and at least one would be dead soon after from shock and blood loss. Defense where a swordfighter smoothly parries every strike is just for the movies. If you’re in close with a melee weapon, you’re going to get hit.
Then, with pre-modern/nonexistent medicine and ignorance of germs, anyone who didn’t die of blood loss on the battlefield was at very high risk of dying of infection days afterwards. So was the lifespan of every medieval soldier just a year or two? Did most sword-bearing men who went into battle die in their teens or 20s? How were there any old, experienced soldiers?
Yes yes, heavy armor deflects swords. But I suspect that only the very wealthy could afford such a thing. Also, if heavy armor was really that widespread on the battlefield, swords would go out of favor for maces and other blunt clubbing instruments, targeted towards the head. Even a glancing blow from a heavy mace to a metal helmet is likely a KO (the helmet might even make it worse by clanging).
So- did everyone in the medieval era die in their first few battlefield engagements? How could anyone survive multiple melee conflicts?
They didn’t have sword fights? Almost everyone on the field used spears? Most battles were archers picking off people as one front line charged the other with sticks, while the other side’s archers shot back at yours. Why? Because swords and heavy armor are both expensive, and not many had them. Lots of people did use blunt objects, for just the reason you cite. Most soldiers used polearms and clubs because they are cheap and easy to use.
This. Good armour was likely cheaper than a good sword. So sword fights with decent blades were rare.
In an actual melee the chance to properly swing a sword (they weren’t as useful as a dagger would be for stabbing) would be limited by the press of bodies as well. Think a rugby maul (or a football pushover touchdown for those who don’t know rugby) with hundreds of people involved…
Running away. Or, alternatively, watching the other side run away. Most ancient battles were decided by a rout, or even just a show of force.
Not being in the front helped a lot too. And if you were, giant shields held by you and the two men to your sides were a big part of it. See Boudica vs the Romans.
A) Armies don’t fight like pairs of individuals. And they tend not to use swords much.
B) Unless you’re fighting pikemen or foot companions, you’re only in great danger right at the front of the formation (or if you run away).
C) Big-arse shields
>swords would go out of favor for maces and other blunt clubbing instruments, targeted towards the head.
That did happen to an extent during the medieval era. Another option was to wrestle the opponent, pin them down, then go to town with a dagger on the joints in their armour, eye holes &c.
Pre-modern age is a long time (I assume you mean before antibiotics?).
As for infections, no joke, pissing on wounds and cauterization are fairly helpful. I’m sure others know of various other techniques, but you may overestimate the risk of infection. Think of the bleeding various predators suffer when interacting with resisting prey: some may die of infection, but most go on to live and form scars without medical treatment.
But even knowing to cauterize or piss on wounds would seem to require germ theory – which didn’t develop till the 19th century, right? It would be intriguing if uneducated, illiterate medieval peasants had a crude ‘naive biology’ that intuitively knew about infection, centuries before science did.
It might work for reasons they don’t understand, but they’d do it anyway. Urine is a readily available source of water for cleaning a wound, and fire stops bleeding.
Also, honey was used as a salve for wounds going back to Egypt. A lot of ancient medicine was BS, but some of it was accidental effectiveness that persisted.
Like, the Greeks didn’t need to understand the biological pathways of Coniine to know Hemlock would kill Socrates.
I think you have a common misconception that I used to have too. I blame it on scientists/science popularizers. They like to claim that science causes totally sweet technology, and so you should definitely favor more money for science. But basically, you can accomplish a lot in a fairly empirical manner without a theory of the tiny things that make stuff up. Often you have to.
Or even a theory of the big things.
For example, people built steam engines first. Then physicists invented thermodynamics. Humans built bridges, temples, etc. without knowing Newton’s laws. Humans had metallurgy for a very long time. And not a clue about atoms or anything like that.
A lot of the time, even if you do know the underlying theory, working with a toy phenomenological model works better. A lot of condensed matter physics (the study of semiconductors, fluids, granular materials, basically properties of bulk matter) works with phenomenological models rather than trying to compute how big things should work from our knowledge of quantum mechanics.
A lot of great science is a more rigorous method of trial and error with only a tentative connection to underlying theory. Although ideally you can build towards some solid theory from there.
The word you’re looking for is “shield wall”. Also armor, but not everybody could afford that, while shields and pointy sticks were cheap. A large shield, a long pointy stick, and mates you can trust, will beat any amount of flashy swordsmanship when it comes to Not Dying.
If the shield wall breaks, throw away the shield and run, and hope you were the first one to think of that strategy. Too bad for your mates that they were fool enough to trust you.
If you couldn’t manage to be part of a shield wall, you wanted javelins, a light shield, and a fast horse – or javelins, a light shield, and for your enemies to not have horses. In either case, don’t weigh yourself down with other gear; maybe just sandals and a helmet and a short sword if you can afford it. Throw the javelins from the longest possible distance, then stay out of reach of anyone who can hurt you.
Substituting a bow for the javelins allows you to keep a somewhat greater distance, but it means you can’t use the shield and so probably isn’t a net win in survivability even if it does more damage to the enemy.
If you can afford the shield + pointy stick combo with the full armor option, and you can afford a large horse, you might as well call yourself a “knight” and ride the horse to the battlefield. But expect to dismount and form a shield wall for any serious fighting. Tactical horsemanship is for chasing down an enemy who decides to run away, or for running away yourself; if you impale your horse on the pointy sticks of an enemy who didn’t decide to run away after all, you deserve what you get.
I recall (from a lecture) that typical casualty rates in battles between Greek phalanxes were on the order of 10-20%. This would be most of the people in the front lines on the losing side, and some on the winning side. Back ranks usually made it out when they broke and ran.
Fun history fact: Roman veterans were given land when they retired, both as payment and to help Romanize conquered territory. Several European cities at one point or another had military retirement communities.
Indeed many cities in Europe (e.g. Manchester) have names ending in a derivative of castrum: the Roman military camp or fortress.
Yes, but all a -cester or -chester (or Welsh caer-) name meant is that identifiable masonry (which would have to be Roman-period) was standing when the English (or Welsh) name was coined. Woodchester, Gloucestershire, was named for a Roman villa, for example. The Roman colonies in what is now England were Gloucester and Lincoln, both indicating their status by preserving the Latin word colonia in some form in their names, and Colchester, which was presumably named for the walls and civil buildings that may have stood long after the Romans left, at least according to the archaeology done there in the 1980s. So Chester-type names aren’t madly significant in identifying colonies.
So I just got sidetracked at AskHistorians on this. A cursory search didn’t uncover what you seek, but this account of the duel between Bazanez and Lagarde Valois was too good not to share. I’ll poke around there a bit further and see if I can find anything more specific.
Great thread; thanks for sharing!
I like all of the responses I’ve gotten so far, but I suspect that another part of it is simply that the poor & desperate died in massive numbers in warfare (or in duels, or from bandits, or just from various interpersonal conflicts). Life was nasty, brutish, and short, and fighting-age males died in large numbers – the end. Sort of like how childbirth had a much higher chance of being fatal back then.
I remember reading something where the CIA, which was giving anti-tank weapons to Afghan insurgents fighting the Soviets, figured the average battlefield lifespan of an Afghan antitank gunner was something like two weeks. Probably not that different from (some) medieval combatants.
I mean, yes. This was history. A lot of people didn’t see 30. You fought wars all the time and they lasted forever (forever meaning you could be born after a war started and it’d still be happening when you died). Two weeks sounds on the short end; the unlucky lived at least 3-6 months, because it took that long to get to the battle.
In wars before the 20th century, the majority of casualties were due to disease. The typical unlucky sod who went off to war and never came back took sick of dysentery and shat himself to death without ever coming in contact with the enemy.
You imply death but say ‘casualties’.
Do you mean the majority of dead soldiers from diseases, or do you mean the majority of soldiers dead + too sick/injured to fight due to disease?
Both. The majority of the dead died of disease, and the majority of the disabled were disabled due to disease. The latter should be obvious when you consider that sick people don’t just drop dead, and some sick people recover. For example, take the French during the Crimean War (taken from Wikipedia for convenience):
135,485 total casualties
8,490 killed in action
11,750 died of wounds
75,375 died of disease
39,870 wounded
You can see that nearly 80% of the dead died from taking sick. And nearly all soldiers who died of illness must have been disabled at some point. If we count all who were wounded as being disabled, then disabled by disease is still at least 56% of the total. When you account for those who were disabled by sickness but did not die, the proportion should be even higher.
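A quick check of those proportions, using the figures quoted above:

# French casualty figures for the Crimean War, as quoted above (via Wikipedia).
killed_in_action = 8_490
died_of_wounds = 11_750
died_of_disease = 75_375
wounded = 39_870

dead = killed_in_action + died_of_wounds + died_of_disease  # 95,615
total = dead + wounded  # 135,485, matching the quoted total

print(f"disease share of deaths: {died_of_disease / dead:.0%}")             # 79%
print(f"disease share of total casualties: {died_of_disease / total:.0%}")  # 56%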
@woah77
Wars back then were very low intensity though. War often could only be fought during certain periods. There was a lot of marching and waiting between battles.
I feel like you just agreed with me. Or at least the notion I was trying to get at. Even if you were the unlucky sod who got stabbed on the battlefield, it’d be at least 3-6 months before you saw battle from the “start” of the war.
This post is getting nearer to an answer. Selected quote: “In pre-gunpowder combat, the battlefield was in many ways less lethal than it was today. The majority of casualties would likely happen when one side broke and fled, which would allow the victors to pursue them and kill them more easily. Movies portray pre-gunpowder warfare as a giant meat grinder, where two sides smash into each other and rip each other into shreds for hours. This is not how combat really worked in the period. The most convincing model among historians today is that battles happened in short “pulses” rather than one giant slog. Groups would advance, engage briefly, and whoever was starting to lose would break off to recollect themselves and prepare to try again.”
The typical conflict wasn’t two swordsmen facing off; it was more like two lines of men butting their shields while trying to poke, hack and bash each other, mainly with spears, axes and hammers. Swords, if they were carried at all, were mostly used as side weapons, much like the pistols that modern infantrymen carry.
And they almost always had some kind of armor, made of various materials from hard vegetable fiber (e.g. the coconut fiber armor of the Kiribati culture of Micronesia) to leather, metal scales, chain mail, or segmented plates. You might object that light armor might not have been able to stop a clean, well-aimed and powerful strike with a sharp blade, but in a real battle strikes were rarely clean, and blades quickly lost their edges and points.
Full plate steel armor was developed relatively late, and war hammers and heavy swords such as the Zweihänder, as well as increasingly powerful projectile weapons such as the crossbow and firearms were developed in response to it. Eventually firearms became sufficiently powerful, accurate, reliable and cheap to mass produce that they made armor irrelevant until the modern plastic-ceramic armors, and in fact soldier mortality skyrocketed in that era.
The gladius was the main weapon of the Roman legions.
Sort of.
If I understand correctly, the default tactic of Roman legionaries was to first engage at a distance by throwing darts (plumbatae) and javelins (pila). The Roman pilum had a shank designed to bend on impact, making it difficult to pull out of wounds or shields. Once the enemy ranks had been thinned out and partially de-shielded, the Romans formed a shield wall and engaged at close range with their short swords (gladii).
Usually it worked well, except when facing heavy cavalry (cataphracts), who were essentially immune to projectiles and could easily mow down shield walls.
The gladius was the main weapon of the Roman legions only if you consider the pilum to be the secondary weapon. But, insofar as heavy infantry combat was decided by the clash of shield walls, the bit where one side would have the sort of second-rate shield wall you can form when you’re using swords instead of spears and the other side would have a ragged line of men whose former wall of front-rank shield bearers are now mostly lying bloody on the ground asking “weren’t our shields supposed to be javelin-proof?”, is not exactly secondary.
Approximately 75% of a Roman soldier’s offensive weaponry, by weight, consisted of extra-heavy armor-piercing javelins designed to counter shield walls and critical to the success of Roman legions in combat.
In context, I meant primary *hand to hand* or close-quarters weapon.
My comment should be read as giving a one-sentence counterexample to a specific claim, not a full description of Roman combat tactics.
It seems to me that the Roman killer app was the discipline to make peltasts (javelin men) lock together into a shield wall, and even a testudo immune to arcing arrows. Using the gladius when they ran out of javelins was probably an ergonomic compromise.
Polybius has a famous passage answering his reader’s incredulity that men with short swords could defeat a pike phalanx.
Thanks for the reference. I will have to check it out. I have wondered about that myself.
I’ve heard the claim that a Greek phalanx would defeat a Roman legion head on given flat terrain, but the greater flexibility of the Maniple formation allowed the Romans to outmaneuver the phalanx, either by flanking them or falling back to rougher terrain. Not sure if that is correct.
This might be based on the Battles of Cynoscephalae and Pydna. In the former, a Roman multi-legion force defeated a similarly-sized Macedonian army (primarily made of phalanx-style infantry), in part by detaching part of a legion to exploit a break in the Macedonian line and attack some of the Macedonian phalanxes in the flank. In the latter (about 30 years later in a different war between Rome and Macedon), the Romans won by retreating over rough terrain, which disorganized the Macedonian units, allowing the Romans to counterattack individual phalanxes in the flanks.
I’ve seen various people on youtube (IIRC, Lindybeige, Matt Easton, Shad, and Skallagrim) talking about this as a major unrealistic feature of modern SCA fighting, reenactment, and HEMA-based sparring: the surviving historical sword-fighting manuals place a much higher emphasis on not leaving your opponent openings to hit you, compared with the way most modern people spar with practice swords.
Battles were relatively rare in pre-modern warfare: wars were mostly fought by sieges (a large force sits outside a fortification and waits for the smaller force inside to starve and surrender, or for the attackers’ engineers to batter the fortification into rubble) and harrying (looting and burning undefended and unfortified parts of the enemy’s territory). In the Hundred Years War, for example, Wikipedia lists 56 “battles” over the course of 116 years, and a solid majority of the “battles” listed are actually sieges, naval engagements, or very lopsided affairs where a small force got in the way of a much larger one (at relatively little risk to each individual soldier in the larger force; it looks like it wasn’t uncommon for most of the smaller force to run away successfully). So maybe one major pitched land battle every 4-5 years even in wartime, and not the same forces engaged in each battle, so an individual soldier might only fight in 1-2 major pitched battles over the course of a 20-year career.
There’s also a huge spread in terms of training and equipment in pre-modern armies, and the highly-trained professional soldiers with good equipment aren’t necessarily going to be fighting each other directly. And it wasn’t uncommon for the hottest fighting to be done by lower-quality troops (militias, peasant levies, etc) while the higher-quality troops were held in reserve. The Romans in particular made this a formalized practice: the Triarii (older, experienced soldiers, equipped with spears at least in the Republic period — I’m not sure if the practice carried over to the Empire) would be held in reserve and only used if the less-experienced soldiers were in serious danger of losing. So the soldiers doing the most fighting, and especially the most fighting against equal-or-better-quality opponents, would be the kinds of soldiers who would see 0-1 battles in their career, while the professionals who might see several battles in their careers wouldn’t necessarily fight in every battle they appeared in, and when they fought, would often be fighting opponents whom they severely outclassed.
They did. Romans (at least in the late Republic and early Empire) used swords as primary melee weapons, but in medieval/renaissance eras, spears, polearms, and lances were the standard primary weapons. Swords were either specialized weapons (e.g. the giant Zweihander swords used to knock spears aside and create an opening for your buddies to stab in their own spears) or sidearms (used as a backup when you lose your primary weapon or find yourself in a situation where it’s unsuitable).
But still, outside of open war you had interpersonal conflicts and duels and drunken arguments that turned into fights, and bandits and quarrels… Supposedly the homicide rate was much higher in medieval times than in modernity. It seems like a professional soldier or other tough guy would be in several armed conflicts of one kind or another in his life.
The homicide rates in 14th century London and 15th century Amsterdam were both in the neighborhood of 50 per 100,000 (source), or about 1 murder per 2,000 person-years. That’s high compared with modern murder rates (about double the murder rate in New York City in the early 90s), but probably not as high as you’ve been thinking. Even if those deaths were concentrated in a particular class of people (e.g. men with soldierly/tough-guy backgrounds), they were still more likely to die of ordinary causes (disease, age-related degeneration, accidental injury, etc) than violently.
I’ve heard that duels were typically fought to first blood, so one combatant would be uninjured and the other wounded (most commonly a superficial cut to the arm or leg*). Based on recorded homicide rates, I’d also expect there to have been some kind of generally-accepted limits on brawls to keep them from getting too far out of hand (social norms like “don’t be the first to pull out a knife” or “you don’t need to kill the other guy, just knock him down and claim victory if he doesn’t try to get up”).
(*) This changed in the Renaissance, when long, stabbing-optimized swords (rapiers and smallswords) became the standard weapons sword-armed civilians would carry and use in duels. With a rapier, a deep stab to the torso becomes a lot more likely as the first wound, and a punctured lung or perforated intestine is very likely to be a mortal wound without antibiotics and modern surgical techniques.
This answer seems the right one. If getting hit means a good chance of dying, then one will put a lot more effort into not getting hit.
My experience (for what it’s worth) with single combat is that when facing a real knife or sword with a real opponent, tactical distance becomes much more pronounced in your tactics. As a result, either the foolish guy gets hammered quickly, or both experienced opponents do their best to stay out of range of the other guy’s weapon unless they perceive or create a sure-fire advantage.
That translates into larger, yet still relatively evenly sized, groups: if you retreat just a little and your direct opponent moves to close the distance again, he suddenly finds himself in range of your neighbors in the line, who are now flanking him. So he’ll tend to avoid that situation as well.
Without coordination and trust between members of a line of battle (which is what makes soldiers so much more deadly than a group of warriors, if you get the distinction), you’re either in a brawl or the smaller side is running pretty quickly.
The other factor to take into account is the evolutionary one. There’s a reason the highest percentage of those killed tended to be first-timers. If you haven’t figured out how best to avoid getting hit, you’re much more likely to die or be seriously wounded. Once you have, your longer-term survival rate is much higher than your first-battle survival rate would imply.
BTW, it’s not just melee weapons. People behave much less recklessly over time in real combat with firearms than they do when playing paintball or first-person-shooter video games. Knowing it’s “for real” and you can’t actually respawn makes you much more cautious. That’s one thing I liked about the old America’s Army game: it was designed so that if you died, you were dead – no re-spawning during that round. It made people at least a little more likely to use realistic tactics.
Operation Flashpoint: Cold War Crisis was a precursor to America’s Army and was fairly revolutionary in its realism. Injuries would not heal and you had very few saves, so you had to be very careful.
It was so realistic that an army simulator was derived from it, Virtual Battlespace Systems 1 (VBS1).
I found OFP and its successor games extremely tedious, because while they aimed at realism, they still demanded action-movie heroics of the player character.
There was one, otherwise extremely undistinguished, shooter game (I mean really undistinguished; it was like playing a big map online shooter with a bunch of bots) that had an interesting frame narrative – the protagonist wasn’t the PC, but rather, was a journalist. The PCs were nameless soldiers, and when you got killed, you respawned as another soldier. Meant you could have a coherent narrative focusing on a character, without the “Private Wilson is dead; war’s over” nonsense.
Presumably, to avoid that and still have a decent experience, you need to play as a (trained) team, which was how VBS1 was used.
I used a save-cheat, allowing me to save more often 😛
I have a vague notion that people with armor and swords didn’t fight people with armor and swords; they attacked the armorless people on foot. You wanted to CAPTURE the guys wearing armor, ‘cuz they could fetch a fine ransom. Likewise, if you wore armor and were captured, it was understood that you’d be preserved and ransomed. Outside of a tournament, only a fanatical knight would betray his class and the rules of chivalry by doing mortal combat with another knight.
At the Battle of Agincourt, Henry V showed his loyalty to the English, not to his class, by declaring that he would not be ransomed–and later by ordering the execution of most of the French knights that had been captured. (Ok, there was also the small matter that the number of captured French troops outnumbered the number of English soldiers, and it was unclear whether the battle was over, but let’s make a virtue of necessity….)
One odd feature of Agincourt is the number of important nobles captured, the sort with really good ransom value (chivalry as a set of values was effectively very polite piracy). This meant there was little value in keeping the non-noble knights alive (the majority of knights were household retainers, not nobles themselves), and little chance of their being ransomed anyway, since the household monies were going to have to pay for the lord’s ransom first. Henry was clearly a rational thinker.
You are grossly mischaracterizing the nature of Henry’s decision to order the lower ranking prisoners executed at the Battle of Azincourt. What happened is that towards the end of the battle Ysembart d’Azincourt and his personal troops used their knowledge of the local terrain to get around the English line and mount a successful attack on their baggage train.
At the same time this happened, the large and still fresh French rearguard started moving in a manner that appeared to the English as though they were preparing to enter the battle. In reality, they were making ready to withdraw from the field. However if they had advanced to engage, it would have left the English in a position of facing attack from two directions while their forces were intermingled with a vast number of prisoners who could have taken the opportunity to take up the weapons strewn about the field and resume fighting.
Also, men-at-arms didn’t have “minimal value” as you say; the ones with minimal value would be common soldiers who were not even household retainers. (Though none of those were captured, since the French men-at-arms had left them in the rearguard.) Pretty much any free man captured in battle had some value to someone. Moreover, people were willing to hold men for years while their families saved money to pay the ransom. Were it not for the precarious situation, there would have been no rational reason to execute any of the prisoners, and indeed they would not have been.
In short, Henry ordered the slaughter of the lower-ranking prisoners because of tactical concerns rather than economic ones. There were simply too many of them to control while fending off a two-pronged attack. He was indeed being rational, but not in the manner you describe.
“Pre-modern” (pre-gunpowder, I assume) covers an awful lot of time. But the answer you’re looking for is shields, mostly.
You’ve also been misled by the sparring matches you’ve watched. Real combatants were much more careful about minimizing their own vulnerability, since the penalty for being hit is a likely-fatal injury rather than just getting whacked with a stick. Likewise, the need to minimize exposure is a major culprit in the well-documented plummet in small arms accuracy in modern combat compared to range settings.
A few other factors: the average soldier, even a professional, spent the overwhelming majority of his time doing something other than fighting in battles.
The need to keep wounds clean was well understood, so while infection was more dangerous than today, preventative measures against it were known and taken. In fact, there was a general understanding that hygiene/sanitation was important, even if the mechanisms weren’t scientifically understood. A gunpowder-era example: the soldiers in Stonewall Jackson’s army weren’t permitted to eat in their tents.
And to be clear about the magnitude here, if you watch helmet cam combat footage from places like Syria or the Ukraine, there is a lot of firing blindly in the general direction of where you think the enemy is. This kind of thing tends to get left out of movies, TV shows, video games and so on because, well, it’s boring.
Suppression of the enemy is an important concern. This is a major reason why suppressors are not that often used in the military. Generals want the enemy to be afraid to accurately fire at you, while they (often in vain) try to get their own soldiers to aim in the face of enemy fire.
FWIW, my personal but not uninformed opinion is that suppressors aren’t used because they cost money, add weight, and because the National Firearms Act has retarded their refinement if not their development.
Also because they don’t actually make firearms that quiet, so it’s not as if nobody can hear you. At the distances required for a suppressor to make your fire effectively quiet, echoes and other interference already make it difficult to pinpoint your position even without one.
Shields, however, are cheap.
That’s partially based (a) on lack of incentive – you don’t die in real life if you die in HEMA – and (b) on lack of skill – even the most dedicated HEMA practitioners are not people whose lives depend on it, nor do they devote themselves to it that intensely.
Nor is martial skill any longer something that brings brilliant or ambitious people glory en masse. The people who would have tried to make their way by physical fearlessness and brilliance of coordination in the past are not the same demographic that dabbles in, let alone devotes themselves to, HEMA nowadays. If you have godlike reactions and focus, amazing physical attributes, or an unparalleled ability to devote yourself to a craft, there are still life paths you can pursue to gain fame, glory, and riches – so why would someone like that devote themselves primarily to historical (clue’s in the name) swordsmanship instead?
>Defense where a swordfighter smoothly parries every strike is just for the movies.
You don’t need to parry every strike; you need to parry only 1 or 0 strikes before landing a disabling blow of your own.
_
>hit each other dozens of times in close contact
There are stories of this happening though. Few points/ideas:
-Getting “hit” doesn’t mean getting destroyed. If you’re serious about killing someone, or about risking getting killed, you’ll ideally become a bit less attached to your flesh than you otherwise would be.
-I wouldn’t be surprised if professional soldiers had stronger regenerative capacities for minor wounds, from general vitality and practice.
-I remember reading that professional gladiators in Rome used to be fatter than we’d expect, because the fat functioned as a layer of protection (sort of like a walrus’s skin).
-Speaking of armour… armour! Most HEMA bouts simulate unarmoured combat.
Don’t want to tread into CW-territory so I’ll keep it personal when it comes to taxes. Curious if anyone else was surprised to see an additional bill rather than refund. I have always gotten a refund and was going to switch to H&R block, but joke’s on me. No refund to switch for.
1. I’m salty about SALT. Because I’m in NYC, I lost over $11k of deductions due to the $10k cap.
2. Increasing the standard deduction hurts donors on a relative basis. I think it increased from $6,350 to $12k. It seems like a tacit form of welfare not positioned as such; essentially, the number of dollars of giving that provide no tax benefit has increased (see the sketch below).
3. My tax attorney friend said that various exemptions for itemizers were eliminated, and that some likely applied to me.
All that said, the effective tax rate along the way could have been lowered, but I suspect not by an amount that makes up for the $11k SALT deduction difference alone.
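To make point 2 concrete, here’s a minimal sketch. The two standard deduction figures are the real pre- and post-2018 amounts for single filers; the $3,000 gift and the $10,000 of other itemized deductions are hypothetical.

# Extra deduction a donor actually gains from a gift, versus just taking
# the standard deduction.
def marginal_benefit_of_giving(gift, other_itemized, standard_deduction):
    itemized = other_itemized + gift
    return max(itemized, standard_deduction) - max(other_itemized, standard_deduction)

for std in (6_350, 12_000):
    gained = marginal_benefit_of_giving(gift=3_000, other_itemized=10_000,
                                        standard_deduction=std)
    print(f"standard deduction ${std:,}: a $3,000 gift adds ${gained:,} in deductions")
# Under the old $6,350 standard deduction the full $3,000 counts;
# under the new $12,000 one, only $1,000 of it does.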
There are a bunch of deductions that should never have been deductions in the first place. SALT is one of them. Mortgage interest is another. Health insurance being tax-free is also a weird artifact that escaped JFK’s and Reagan’s tax reforms, and has bitten us in the ass ever since.
Even when taking advantage of those deductions, I knew they needed to go.
Re: 1. The ultimate irony of the Trump administration so far is that Paul Ryan & the Republicans passed a progressive reform to the tax code that redistributes money from the high income crowd- a Sanders-like policy. (I say this as someone that’s going to have to write a $50k+ check to the IRS this year)
It’s worth bearing in mind that the guy that designed the tax reform for Trump was a globalist Democrat (Gary Cohn). The Freakonomics guy recently interviewed him and he claimed that that’s not a bug, it’s a feature.
Some of the complaints about it are that it seems designed to punish the blue-state rich more than the red-state rich, but I think most of the changes are good policy.
It does appear to be designed both to increase taxes on heavily blue areas and to not give them credit for it. Lots of Democratic tax proposals frame the rich as paying for services, which this one doesn’t.
I was struck after it passed by the degree to which people were complaining about features, such as restrictions on the deductibility of state taxes and mortgage payments, that mainly targeted the rich, in a bill that was routinely described as helping the rich.
That might be in part a sign of equivocation between different senses of the word “rich”. In particular, there are a lot of people in what might be termed the “professional” class (successful mid-career programmers, doctors, lawyers, etc) who consider themselves “upper-middle class” while still having far above-average incomes, and are substantially affected by the limits on those deductions, but who may still favor higher taxes on people substantially richer than they themselves are.
The self-perception as “upper-middle class” is somewhat understandable based on 1) norming off of your peers, who are likely disproportionately in similar fields and income brackets as yourself, 2) high-paying professional jobs being disproportionately concentrated in very high cost-of-living areas (e.g. the Bay Area and New York City), where a well-into-six-figures income doesn’t buy much more than what would be an upper-middle-class standard of living in most of the rest of the country, 3) there being a substantial separation between a “professional class” income/wealth level and the income/wealth level of, say, an executive or founding owner of a major company.
The mortgage interest deduction is now limited to $750,000. Someone paying more than that on his mortgage had better have an income that is more than “well-into-six-figures.”
The mortgage interest deduction is now limited to the interest on a mortgage with a principal amount of $750,000 or less. That’s easily achieved with a “well-into-six-figures” income. The SALT deduction of $10K is even easier achieved; that would be property tax on a ~$300,000 house in my town.
I’ve been hearing a lot more complaints about SALT than the mortgage limit, which doesn’t surprise me since the SALT cap is a lot easier to hit, and since existing mortgages are grandfathered at the old limit ($1M in principal), so only people who took out new loans in 2018 with a principal balance between $750k and $1M are affected.
@Nybbler:
You appear to be correct. My error. Mea Culpa.
I live in a relatively high-tax state (Oregon), but I didn’t notice anything change with my taxes. Of course I do have 2 kids and don’t make a ton of money, so I benefited from the $2k per child credit.
I think the issue is that you live in NYC. It never made sense to me why the federal government should subsidize high-tax areas.
The thousand other things they subsidize make sense to you?
It’d be nice to cut almost all of those things too. Let the reaping begin. Farm subsidies first.
I think “thousand” might be a bit of an overstatement, at least as far as personal taxes go. It seems like most of the major “subsidies” make sense. The two especially I agree with are (a) long term capital gains and (b) children. I think the government should encourage long term financial investment in the economy as well as encourage productive members of society to have children.
I’m pleased with the SALT deduction loss (I thought it was bad policy), and losing it meant I also lost the charitable deduction (which I’ve argued before shouldn’t be itemization-dependent) and the mortgage interest deduction (another bit of bad policy).
Net/net, it cost me a little money–but I’m feeling like the Pole wishing for a Mongol invasion.
I’d been itemizing in previous years, and now I’m taking the standard deduction, but the biggest itemized line item was state taxes, which are capped now. It worked out to be about the same, and I got a small refund from the feds, which is what I was shooting for.
The bigger surprise was a big (~$1000) refund from my state, which I wasn’t aiming for and still can’t really make sense of. Not going to argue, though.
We got a bill this year instead of a refund, but weren’t overly surprised. Our mortgage interest is going down, and along with the SALT limit that means we used the standard deduction ($24K MFJ). But in previous years we’ve hit AMT anyway, which also eliminated the SALT deduction.
I owe about $140k. But that’s because I sold a house last year in the Bay Area and realized a decade’s worth of capital gains, and I expected and budgeted for the tax bill.
Big tax bill, but that was mostly the result of good fortune and basically expected.
I will say that the effort of tracking deductions, then finding that the standard deduction is now high enough that there was no point, is one of those good fortune things that’s going to get old. I imagine in a year or two, I’ll just quit keeping track of visits to Goodwill, etc.
I don’t expect my charitable activity or finance structure to change much. It would arguably make sense to shift towards “bunching” – e.g., giving five years of charitable contributions at a time in order to itemize one year and take the standard deduction the other four – but it’s way too much trouble at my level.
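To make the bunching arithmetic concrete, here’s a minimal sketch in Python (the dollar figures are hypothetical; this is illustration, not tax advice):

# Hypothetical sketch of "bunching" charitable deductions.
STANDARD_DEDUCTION = 24_000   # married filing jointly, post-2018
OTHER_ITEMIZABLE = 10_000     # e.g., SALT at the cap
ANNUAL_GIVING = 10_000
YEARS = 5

# Strategy A: give the same amount every year. Itemized totals ($20k)
# never beat the standard deduction, so you take the standard every year.
yearly = YEARS * max(STANDARD_DEDUCTION, OTHER_ITEMIZABLE + ANNUAL_GIVING)

# Strategy B: bunch five years of giving into year one, itemize that
# year, and take the standard deduction the other four.
bunched = (max(STANDARD_DEDUCTION, OTHER_ITEMIZABLE + ANNUAL_GIVING * YEARS)
           + STANDARD_DEDUCTION * (YEARS - 1))

print(yearly)   # 120000: all five years at the standard deduction
print(bunched)  # 156000: same total giving, ~$36k more deducted

Same total giving either way; the bunched schedule just stacks it into the single itemized year.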
Since withholdings changed this year, wouldn’t a better question be to ask if people were surprised at their taxes going up or not?
A few months ago all my news feeds were people shocked that their refund went down, or that it did not go up as much as they were expecting. But at no point in the articles did they ever say if the total taxes paid went up or down (although one article had a tax advisor quoted as saying tax refund amount is different than taxes paid, it did not bother to go into more detail).
My refund went up and my withholding went down. But I also have two kids, a wife, live in a state without income tax, and have never had enough money where it made sense to itemize.
Alternate years of taking the standard deduction with years of donating 2x as much?
Total federal taxes reduced by 30% on an apples to apples basis (controlling for changes in income and number of dependents from 2017 to 2018). Two grand tax credit for each kid, doubled standard deduction, and lower rates across the board led to the decreased tax obligation.
My deductions have been lower anyway since I paid off my mortgage in 2017, so the SALT cap didn’t hurt me and I would have likely taken the standard deduction for the first time this year even under the old tax law (last year I itemized but it was marginal benefit over the standard deduction).
We paid significantly less in taxes this past year (several thousand) despite earning more than last year due to the changes + adding another deduction to our family. We are right around the 6 figure mark in earnings plus another 15k or so in rental income (with lots of deductions against that).
I’m “upper middle class” in an area with high state taxes. I don’t know (yet) how much my total tax bill went up – but TurboTax suggests I’ll be writing a $10K check to the feds, which will probably also result in penalties for under-withholding/not making quarterly estimated tax payments. (My automatic withholding claims no exemptions whatsoever; next year I’ll have to withhold an extra $420 per paycheque. But some of that may be the result of the withholding tables being slanted to increase people’s take-home, as was announced to be the plan at the time.)
At the risk of culture wars – I don’t object to a tax increase per se. I object to one that appears to me to be targeted at residents of specific states, and that was furthermore sold as a tax decrease. And it would have been nice to have had useful guidance on what withholding levels to set.
Everyone likes their own carve out and wants someone else to pay up. You have met the enemy for why tax policy sucks and he is you. You may as well be complaining that the tax increase was targeted at rich people. I doubt taxes went up for poor people in NYC or California.
There is no sensible policy reason the federal government should lower the tax burden it imposes because the state and local government has a higher tax burden. Almost everyone who claimed SALT made more than 100,000 a year. That makes them high income. And the standard deduction doubled anyways, so the change probably doesn’t bite much until you go higher than that.
The mortgage interest deduction is poor policy too.
And I’ll be moving to California after finishing graduate school in a couple months to get whacked by the changes too, so it’s not like I’m not going to get bitten by the change.
EDIT:
I think this is a valid complaint though. Even if the info was somewhere (technically in the worst case it could be worked out after some tedious pain), it’s the sort of thing it’d be nice to get a piece of mail about or a big advertising push.
You are expecting to make more than 100,000 two months after graduating?
This is not unheard of. My brother (who didn’t even graduate) was making more than 100,000 in SF at a mobile game company.
Yeah, I’ve hired people at $100K right out of (graduate) school. I think the average in the STEM world is a bit lower than that; $100K isn’t three- or even two-sigma high for a starting salary with an advanced degree.
Sigh. I’ve done nothing with my life.
…
Yeah, well, other than that I mean.
Possibly immediately, possibly within a few years. There’s a large amount of uncertainty here. I could make more or less. I’m not deeply invested in an exact salary number, so depending on my options I’d give up a significant amount in return for a more flexible schedule that let me have more time with my kids when I have them etc. After adjusting for cost of living though, 100,000 in California is like 75,000 where I live now which I’d bet is closer to the median of U.S. cost of living. Which is a great salary, but not as crazy as a six figure salary.
I’ve had friends with somewhat less technical degrees than me be making in the six figures at the first job they land after depositing.
If I could go back in time, I might not get the degree, though, for two reasons. I spent 7 years making $20,000 a year; if I had focused on job searching or taken internships, instead of doing abstruse research during the summers of my college degree and then more of it in graduate school, I could’ve taken a nice job at ~$60,000 in California 5-7 years ago. But more importantly, I would’ve had a better chance to get married earlier and have kids earlier.
That’s a much stronger claim than “on the balance it’s a bad idea”. For example it’s plausible that higher state and local taxes substitute for some federal spending. Or that the total tax rate in e.g. California is an overall net negative for the country and we’d be collectively better off if the federal government lowered it.
I understand that in a theoretical discussion you are correct. But in the sense of how U.S. spending is actually apportioned, I don’t buy it as being a meaningful argument.
The carveout was there because upper middle class and richer people liked it. Now it’s gone. If the combined tax burden in California is too high, they should lower their own taxes. Having grown up there, California government is not famous for its efficiency or good spending habits.
There’s definitely a perverse incentive in a federal system where the states are told to set whatever level of taxation and spending they feel is appropriate and the federal government effectively covers ~1/3rd of that by subsidizing the taxpayers and thus cutting off part of the “this is too much taxing, knock it off” feedback loop.
Theoretically, anyway. But in NJ that feedback loop doesn’t really exist. It’s just government claims it needs more money, government increases taxes, government fails to do anything useful with said taxes, government claims it needs more money. If anyone objects the word “schools” usually shuts them up.
About whether taxes increased or decreased:
My taxes owed came out a little less, but my withholding had decreased by more than that…so I owe money to the Feds.
My previous itemized deductions came very near to the new standard deduction, so the change in deductions didn’t change the Taxable Income very much.
Before the change in standard deduction, I wanted to itemize interest-on-my-mortgage and the values for property tax. Now I don’t care. Also, my local-and-State-taxes are far below the new cutoff for deductibility…but since I’m using the standard deduction, I don’t care.
It does grind me, however, that the withholding done by my employer left me owing a noticeably large amount of money to the Feds. (Large enough that the tax software recommended that I file Form 2210. That form must be filled out if withholding was less than 90% of taxes owed. Some penalty may apply for under-withholding, but it will likely not apply in my case: my withholding was between 80% and 90% of taxes owed.)
So I’ve got bored with conversations about the relative merits of various translators of various classics, and have thus resolved to become a polyglot.
It seems sensible to start at the beginning of the Western Cannon.
So does anybody know any good resources for learning to read Classical Greek?
I’ve found this, so far: https://lrc.la.utexas.edu/eieol/grkol/50 (wow, the Scythians were massive stoners)
It depends on what you’re trying to read. Most courses teach Attic Greek, so you’ll be able to translate Plato but will have a good deal of difficulty with the epics. We used Groton’s From Alpha to Omega in my undergrad, which suits me very well but not, perhaps, many others—it’s heavy on grammatical explanations verging into technical.
Read the chapter, take notes, practice verb and noun forms regularly throughout the week by writing paradigms from memory, and assign yourself a selection of sentences at the end of the chapter. Normally I’d recommend you leave yourself some for review later, but Groton’s pretty good about reusing tricky word forms or grammatical features you learned at a pace approximating the forgetting curve, so just doing the ten into-English and 5-10 of the into-Greek sentences each chapter should suffice. I’ll grade them for you if you like.
If you make it through about 30 chapters of that, though, plus the chapter on μι-verbs, you should be able to stumble your way through guided translations. We translated from Steadman’s edition of the Symposium and it was quite good, but beware that there will be quite a few typos in the notes—I sent him about 30 from the Symposium at the end of the semester, and we didn’t even translate the whole thing. But these typos will rarely actually trip you up. If it’s the epics you want, try his Iliad or Odyssey books instead.
Recitation/singing and calligraphy would also be useful skills.
(for some definition of the word useful)
Canon. They’re both serious business, but the canon is the standard you adopt, and the cannon is what you enforce it with.
While you were composing this, someone downthread made the exact same typo.
Truly there is nothing new under the sun.
Also not to be confused with qanun, which is the same word as canon, setting the standard for what the other instruments have to tune to, I guess.
Qanun can also be used in the sense in which it was borrowed; Avicenna wrote a Qanun.
https://www.logos.com/product/34090/a-reading-course-in-homeric-greek is a decent introductory book to Homer.
If you want free, Pharr’s Homeric Greek textbook is on google books. And there are answer keys online. But the book at JPNunez’s link looks a lot more user-friendly.
https://play.google.com/books/reader?id=C3gKAAAAIAAJ&printsec=frontcover&pg=GBS.PP1
https://commons.mtholyoke.edu/hrgs/interactive-exercises/lessons-6-10/
You’ll find a few grammars of Greek here to mix and match from. I recommend the Babbitt or Smyth for Attic if you’re after completeness.
If you get bored, there’s a few hundred other languages to browse through, including every classical language ever.
The Great Filter hypothesis posits that we haven’t detected intelligent aliens because they all go extinct, but how can this happen once the aliens establish self-sustaining colonies in multiple star systems? Even if their home planet in their home system exploded, the colonies in the other systems would be unaffected. I don’t see how every remnant of an interstellar civilization could just disappear.
The point of the Great Filter hypothesis is that it happens before a civilization reaches the “interstellar” level. It’s attempting to explain why we don’t see signs of alien life – past or present – and so the fact that we don’t see signs of alien life isn’t evidence against it.
Mass Effect had an excellent answer for this: They get hunted by some extragalactic entity that consumes sapients. Which is to say: traversing the stars is a noisy affair, there ain’t no stealth in space, and a quieter hunter could eliminate a civilization without too many troubles assuming a reasonably large timescale.
Then why isn’t this hunter consuming all the available resources we see lying around? You have to posit pretty weird preferences for this dominant civilization that it decides to just sneak around and assassinate any other sapients. Some sort of extreme Gaia cult, at an intergalactic scale.
I mean, maybe it needs organic minds to fuel some kind of collective and they need to evolve to a certain level to be useful? I was positing an example of how a starfaring society could still go extinct. The epistemic status of this is “Gotten from a video game”. There could be all sorts of explanations for why, but that probably is the least useful question to ask oneself. The better questions are “How likely” and “what might be done”
My reply was eaten by the filter. Basically: see Yudkowsky’s “Generalizing from Fictional Evidence”. But in this case generalizing from a game might be fine, since one solution to the FP is that we’re in a sim which is a game, and we haven’t seen anyone else yet because all the players started out at the same tech level.
Well, in the Mass Effect Cannon,
** Spoilers for a 10 year old game series **
the hunter species (called Reapers) were actually rogue AI programmed to periodically cull the galaxy of all species capable of interstellar travel. Since that culling was their only purpose, and the time horizon between their “harvests” wasn’t long enough for any species to develop technology capable of threatening them, they didn’t need to consume any resources beyond what they needed to keep functioning.
Gotcha. By “rogue” I assume you mean they screwed up and the AI wiped them out?
So the aliens solved the control problem well enough to successfully limit the AIs to not consuming all available matter (the most straightforward way to guarantee they don’t miss any interstellar civs), but not well enough to avoid getting wiped out themselves? That’s a *very* narrow part of outcome space, exactly corresponding to what makes for a good story.
Aftagley is incorrect: the Reapers weren’t originally programmed to destroy intelligent races. In fact, the species stems from an AI that was programmed to solve the problem that synthetic and organic intelligences inevitably end up killing each other. It apparently didn’t find a solution, destroyed its creators, and processed them into the first Reaper.
The Reapers transform the “harvested” intelligent species into more Reapers (a Reaper takes the form of a large capital ship) and apparently couldn’t reproduce in a satisfactory form without them. So wiping out the galaxy doesn’t actually help them, they just lose their prey.
They were actually trying to speed up the cycle a bit: the FTL system in-game is a network of jump gates that were originally created by the Reapers. This helps species develop and uplift each other, and then also lets the Reapers cut the species off from each other when the harvest comes. The center of the gates is a large space station that tends to be a galactic hub, and is also where the Reapers first emerge in a harvest, which helps throw the organic species into disarray.
Things started falling apart for the Reapers when one species managed to have a group survive the harvest, monkey with some of the Reaper technology, and pass enough information to the next generation (us) that they knew about the harvest in advance.
Yep, going deeper into cannon:
A long time ago there was an alien species that could dominate other intelligent life via some kind of psychic control, which the humans of Mass Effect call “Leviathans”. This species ran a galactic empire based on mental repression. Their control wasn’t total, however, and the species kept some degree of independence of action, but not of motivation. So, they could live life and develop technology in a relatively normal fashion, but couldn’t consider trying to overthrow the Leviathans.
Unfortunately, a trend emerged among these slave races: at some point they would all develop artificial intelligence. This artificial intelligence would then slaughter the race that created them and try to take over the universe/maximize paperclips/whatever. This meant the Leviathans were constantly (on a galactic timescale for immortal psychic aliens, I mean) having to go to war with AIs.
They won these wars, but eventually got tired of the constant warfare. They didn’t want to just kill every other species, since their way of life required a galaxy full of mental slaves, but everything they tried in order to stop their client races from creating AIs failed: eventually the client race would develop AI, and then it would destroy them.
Eventually the Leviathans realized they couldn’t solve this problem on their own and (in a massive oversight, imo) decided to build an AI to help them think of a solution. The AI then decided the proper strategy to reduce AI risk was “kill all species as soon as they develop the technology necessary to leave their home planet.” This would keep a stable of slave races around, but end the potential for AI risk.
Unfortunately for the Leviathans, the AI chose to begin its mission by culling the Leviathans, although one or two of them survived in hiding.
I’m not sure I had ever seen the details of the leviathan civilization before. Where did those show up? EDIT: found it, looks like there was a Mass Effect 3 DLC I didn’t know about.
It’s sort of weird that the Reapers aren’t actually enforcing this solution though. The “hide in the galactic depths, then come forth to destroy all” strategy comes far too late for actually accomplishing the AI’s objective. Heck, we even have an AI risk situation with the geth, and that was hundreds of years before the Reapers came.
One line of thinking is that having a dominant civilization kill all the other civilizations is plausibly a stable equilibrium, and possibly the only one or one of a few. So eat or be eaten; eventually the universe will spit out a civilization suited to destroying all the other ones. This doesn’t predict that they should be stealthy (other than stealthy things being better able to eliminate other civilizations). Nor does it say anything about resource utilization, but seeing as our civilization is projected to have a plateauing population the assumption that all civilizations would try to expand as much as possible seems tenuous anyway.
A temporary reprieve, I’d wager. Malthus/Darwin are not so easily evaded.
That’s a very good question, but it’s one that Mass Effect infamously failed to come up with a good answer to.
Wasn’t the answer just “lying dormant outside the galaxy until the appointed time”? It seems consistent with the (definitely weird) values that the Reapers have.
Why would the hunter want resources as opposed to safety and a lack of rivals?
Because you need to eat to live, and the food is gradually vanishing, and the less you stockpile the sooner you die.
If we’re getting into fictional territory, it’s strongly hinted that the final book of The Expanse series will involve facing a great filter risk.
I think you are (roughly) correct, and therefore it is likely that the great filter (if one exists) lies in our evolutionary past.
Except I would slightly amend your argument to say that the real problem is that we are on the verge of the singularity. Thus the timeline to launching an interstellar civilization is quite short (though whether it is “our” civilization remains to be seen).
(Incidentally, I favor another explanation for the Fermi Paradox).
Many of the things that would cause a species to wipe itself out aren’t mitigated by building a colony on a distant planet.
Like what?
I was thinking of societal ills caused by overcrowding, e.g. wars over resources; you can’t ship people off-planet fast enough to overcome them.
Even a Mars colony has some problems as species-survival insurance.
Case #1: The Mars colony is a smallish outpost, like the Antarctica research stations now. If Earth blows itself up/finds itself knee-deep in grey goo/has everyone die from a hobbyist-made plague, then they’re just the last to die.
Case #2: Mars is a major part of humanity, with millions of humans living there and substantial political, cultural, and economic power concentrated there. At this point, Mars is potentially also a target for whatever goes wrong on Earth. The two sides fighting it out on Earth may have their counterparts on Mars, or Mars might even be one side of the war (think of the world of _The Expanse_). Alternatively, the recipe for nukes in your kitchen gets out and is read on Mars as well as Earth. It wrecks civilization both places.
But we’re not talking about Earth and Mars in this context; we’re talking Earth and Alpha Centauri. If there’s an outpost at Alpha Centauri, then it pretty much has to be self-sufficient and isn’t going to automatically die out just because Earth isn’t there any more. And even if there’s a large and thriving civilization at Alpha Centauri, it isn’t engaging in regular trade with Earth, isn’t likely to be a target of Earth’s self-inflicted wrath, and would be difficult for Earth to eliminate even if it wanted to.
I think the assumption is not that the home planet blows up the colony, it’s that if the home planet nukes itself then it’s likely the colony will as well. You can think of counter-examples, but they’re the exceptions that prove the rule.
I’m not sure how to make any sense at all of that last sentence. There are no examples, and “the exception that proves the rule” is not reason.
Furthermore, we are postulating essentially autarkic colonies founded by people who decided to leave their homeworld and never return, and whose subsequent interaction with that homeworld (and any other colonies) consists of electronic communications with a decade or so of latency and perhaps occasional immigrants who have also decided to leave their homeworld and never return. The colony may fail on its own. But if the colony succeeds on its own, it is far from obvious that a radio message from the homeworld saying “OBTW we’re blowing ourselves up now” would result in the colony doing the same. A shipload of refugees from home might have that result, but even that is far from certain – and since the radio message (or radio silence) would precede the refugees by decades, if you are going to assert that it is certain or even likely that the arrival of a refugee ship would have that effect then presumably the colonists would have the same understanding and would use those decades to prepare.
Then I will be more explicit. Suppose the human race founds a colony on Alpha Centauri, and then the Earthlings manage to destroy themselves somehow. What’s the reason for believing the colony on Alpha Centauri won’t meet the same fate?
Perhaps it’s different somehow. Maybe what destroyed Earth was a shortage of strontium, which AC has plenty of. Maybe what destroyed Earth was religious warfare, and AC was colonized by fleeing atheists. Maybe what destroyed Earth was food riots, and shortly afterward the scientists of AC invented the Star Trek replicator.
Those are the exceptions. The rule, the default assumption, is that what destroyed Earth was not specific to Earth, it was just humans being human and Moloch being Moloch, and there’s no reason to assume AC won’t eventually meet the same fate.
(Note that I’m not arguing that human colonies will necessarily perish; just that if the first one does, it’s reasonable to assume the others will)
I don’t see much reason to believe humans will wipe themselves out though. Most extinctions are probably due to either being outcompeted by close relatives for your niche or due to astronomical events like asteroid impacts. The asteroid impact type scenario worries me. That or just not leaving earth ever. For the first one, I’m not sure I could care about such a distant possibility as to which human descendant species won.
A lot of things people talk about as if they are extinction risks kind of aren’t. For example, global warming is laughable as an extinction risk for humans even in a realistic worst case. The worst case may or may not suck for humans for a couple hundred years, but it’s super far from an extinction scenario.
Even if all colonies eventually perish, that’s not necessarily a reason to think extinction of all humans and their descendants would happen until something like all the suns ceasing fusion. If the extinction rate is significantly lower than the founding rate of new colonies, then humans won’t go extinct, provided they survive the risky early period when colonies are few enough that a run of bad luck could wipe out all of them at once.
Well, mostly it will be that whatever caused the Earthlings to destroy themselves will be on Earth, about twenty five trillion miles away. Also the fact that the people on Alpha Centauri will be not Earthlings, and indeed selected (probably self-selected) for being very unlike typical Earthlings. And the bit where the destruction of human civilization on Earth will have given them a detailed advance warning of the possible threat.
I do not agree with this assertion, and I certainly do not cede this argument merely because you claim your victory is the “default”.
Prove it.
Er, I thought we were trading opinions about hypothetical future events. If you think our colony will necessarily be the sort of place that doesn’t do self-destructive Bay of Pigs shit then fair enough, you’re a smart guy who knows lots of stuff, I don’t think you’re provably wrong. It just doesn’t look that way to me; it seems intuitively obvious that P( colony 2 blows itself up | colony 1 blows itself up ) should be substantial.
I think the point is that the people on the colony are from the same species as people on earth, and are at risk for making the same mistakes.
You could probably make a good story about the colony getting news of the disaster up to the end or near it, and what they do to not let that happen.
Two thoughts I’d add, after pondering over breakfast:
1. I’m making the case that, supposing Earth manages to off itself, the smart money suggests that our colony(s) will be likely to follow the same route, and John feels the opposite, one reason being that the colonists will have been selected somehow and thus be a somewhat different sort of group. It occurs to me that this seems just as likely to make them more susceptible to collapse as less. The colonists must either be selected by flawed humans (e.g. the AC Corp’s Colonist Selection subcommittee) or some emergent process (e.g. one political faction fleeing another); neither is guaranteed to produce a more stable/robust/generally better group of humans than the home planet.
2. This being SSC, I must point out that my use of “Bay of Pigs shit” was metonymy for the sorts of human folly that might lead us to destroy ourselves, please do not interpret it as “I think the Bay of Pigs was a genuine near-extinction event and anyone who thinks it was actually not that big a deal should come at me bro”.
I think you probably don’t even need for them to go extinct in similar ways. You were right that they just have to go extinct for some reason. I’m not big on the human folly reason, but maybe asteroids of sufficient size collide often enough with planets, supernovas go off, showers of electromagnetic energy in the right bandwidth from nearby astronomical phenomena occur often enough etc. We could roughly estimate how often those things should annihilate life on a planet. I’m not aware of any species wiping itself out, but no other species is quite like humans either so estimating the odds on that is just a big ?????. Something close to a species driving itself extinct has probably happened at some point on earth even if it’s a weird example, but I dunno what the odds are.
But if we’re talking loss of all humans, we need to have some sort of guess about the colonization rate. That combined with the extinction rate of a colony is going to determine the odds that humans and their descendants go extinct everywhere. If the colonization rate is much higher than the extinction rate, many colonies may go extinct but not the human species. If the extinction rate is comparable to or higher than the colonization rate, then humans almost certainly go extinct.
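A toy branching-process simulation makes the rate comparison concrete (a sketch; the per-period probabilities are made up for illustration):

import random

def lineage_survives(p_found, p_ext, horizon=300, safe=50):
    """One run: each period, every colony founds a new colony with
    probability p_found and goes extinct with probability p_ext.
    Reaching `safe` colonies counts as survival, since a simultaneous
    wipeout of that many becomes vanishingly unlikely."""
    colonies = 1
    for _ in range(horizon):
        born = sum(random.random() < p_found for _ in range(colonies))
        dead = sum(random.random() < p_ext for _ in range(colonies))
        colonies += born - dead
        if colonies <= 0:
            return False          # lineage extinct everywhere
        if colonies >= safe:
            return True           # effectively safe
    return colonies > 0

trials = 5_000
for p_found, p_ext in [(0.2, 0.1), (0.1, 0.2)]:
    survived = sum(lineage_survives(p_found, p_ext) for _ in range(trials))
    print(p_found, p_ext, survived / trials)

With founding above extinction, the lineage usually escapes the risky early period; flip the rates and it almost always dies out.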
I assume the Great Filter is that there are hard relativistic limits to movement/communication, so establishing important colonies on other stars will always be a hard proposition. If somehow the home planet goes kaboom for whatever reason, small colonies on distant stars may find it difficult to keep progressing in science until they have successfully colonized their own planet, which may take millennia. If anything, smaller colonies will be more susceptible to going kaboom and never recovering than the home planet.
I mean, we aren’t *that* far away from possible futures in which a wave of self-replicating space probes expands outward at an appreciable fraction of the speed of light, transforming all available matter into whatever concept of Eudaimonia we manage to transmit to our successors. At that point, interstellar distances become pretty small.
Sure, we might avoid that fate. But if you want to posit a great filter lying ahead of us, it doesn’t have a lot of time left.
As is common in this community, you are greatly overconfident in your belief that the singularity will happen.
I did say “possible” future. Though your inference that I think a near Singularity probable is correct, even if not strictly implied by my words.
I’m curious why you think a Singularity is unlikely.
(I suppose the Fermi Paradox is evidence against a near Singularity.)
Von Neumann machines don’t require a singularity. I don’t think they even require AGI.
I wonder if paperclip maximizers are detectable from several light years away.
But you know what’s detectable with current, non-clip-maximizer technology? delicious delicious inhabitable planets. Well, right now we can _kind_ of detect them but that technology will only get better, even in the short term, while reliable self replicating probes still look a little iffy in the short term.
What I am getting at is that civilizations capable of self replicating probes _probably_ already detected us, so why aren’t we tiled in paperclip form yet? Maybe we already were detected, and a probe is coming our way, in which case we are fucked, but on the other hand, in the timescale of a star like the sun? The Milky Way should have been colonized a few times over, so that’s not the Great Filter.
Self replicating probes capable of tiling the galaxy sound hard to be honest. The slower they are, the more things can go wrong over that long time.
“Habitable planet” is probably too vague. If you want a shirt-sleeve or near shirt-sleeve environment, only a small fraction of planets which are habitable for someone will be suitable.
Fair. It’s more of a “planets with some chance of hosting life of some form”. But still. That only makes Earth more desirable from afar, as the only planet in thousands with a shirt-sleeve environment. If interstellar civilizations don’t give much of a fuck about the human (?) rights of dinosaurs, at least one of them should have sent their self replicating probe our way.
Assuming of course, universal shirt sleeves dress code here.
Shirt-sleeve environment? Do you realize that planet is so cold that water vapor is a liquid–amazing heat drinking stuff. Sometimes even solid if you can believe it.
(Channeling Hal Clement)
Oh man, an Iceworld reference? I loved that book!
I am a little on the fat side so I will wear a polo shirt from above 15C (59F)
What I am getting at is that civilizations capable of self replicating probes _probably_ already detected us, so why aren’t we tiled in paperclip form yet? Maybe we already were detected, and a probe is coming our way, in which case we are fucked, but on the other hand, in the timescale of a star like the sun? The Milky Way should have been colonized a few times over, so that’s not the Great Filter.
Maybe advanced aliens respect the value of organic life, so they don’t destroy Earth even though they could.
Maybe these advanced aliens are machines, or exist in some other non-organic form (pure energy?), so Earth’s climate is not any more hospitable to them than a barren planet like Mars. Hence, there’s no special reason for them to colonize Earth.
Self replicating probes capable of tiling the galaxy sound hard to be honest. The slower they are, the more things can go wrong over that long time.
I think there are ways they could be designed to be extremely reliable and resistant to malfunction.
https://dpconline.org/handbook/technical-solutions-and-tools/fixity-and-checksums
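A minimal sketch of the fixity idea behind that link, assuming (hypothetically) a probe that verifies any copy of its blueprint against a stored digest before replicating from it:

import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a blob of data."""
    return hashlib.sha256(data).hexdigest()

# The probe stores its blueprint alongside a known-good digest...
blueprint = b"self-replication instructions v1"   # hypothetical payload
known_good = sha256_digest(blueprint)

# ...and refuses to replicate from any copy that fails verification,
# so radiation-induced bit flips don't propagate to daughter probes.
def verify(copy: bytes) -> bool:
    return sha256_digest(copy) == known_good

print(verify(blueprint))                             # True
print(verify(b"self-replication instructions v2"))   # False: corrupted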
Or even more obviously, all those stars wastefully burning away in the night sky. Anyone out there that meant business would want to tap into that energy.
Right. This is the hard version of the Fermi Paradox. If advanced civs with reasonably high probability undergo intelligence explosions and start tiling their future light cones with paperclips, why do we find ourselves in a (relatively) old universe, not paperclipped?
There are several possible answers, of course. (The universe isn’t actually old; we are paperclipped; advanced civs super rare; intelligence explosions much harder than they seem; I’m wrong about something).
Other possible answer: the terminal goal of this advanced civ isn’t conquest/paperclip tiling.
Also, depending on how far away they are (it wouldn’t need to be too far), assuming they’ve noticed our planet, they might not even have seen evidence that we exist yet (“hey, that planet way over there looks like it could potentially support life!”). An advanced civ might well have noticed our planet, but they more than likely wouldn’t have noticed us yet, assuming they are limited by the speed of light.
That’s basically what I meant by saying we’re already paperclipped.
If you get a superintelligence, it just turns its future light cone into whatever it wants; i.e. something high in its preference ordering. Call this thing “paperclips”. Therefore if we lie in the future light cone of a SI, we are paperclipped.
Maybe we got a local deity that has weird preferences, and this is it.
My notion of what it’s like in the future light cone of a SI mostly excludes explanations along the lines of “it hasn’t noticed us”.
Heck, even a pre-IE advanced race would presumably be eating all stars within travel distance pretty quickly.
If they are intelligent machines, maybe they’d be building computronium around the stars. Maybe they wouldn’t touch Earth, but I feel we’d notice the Technocore building a Halo or Ringworld around the sun, to say nothing of a Dyson Sphere.
IIRC astronomers have searched for Dyson Spheres by looking at stars with suspicious IR emissions or something, and came up empty handed.
I assume the Great Filter is that there are hard relativistic limits to movement/communication, so establishing important colonies on other stars will always be a hard proposition.
Somewhere out there, there must be stars less than 1 light year apart that both have habitable planets orbiting them. Such a distance could be traveled with entirely feasible space technology.
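Back-of-envelope on “entirely feasible”: assuming, hypothetically, a cruise speed of 1% of lightspeed (toward the optimistic end of proposed propulsion concepts),

$t = \frac{d}{v} = \frac{1\ \text{ly}}{0.01c} = 100\ \text{years},$

which is long, but generation-ship territory rather than new physics.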
I’m not sure this is necessarily true. There are lots of stars, yes, but also lots of constraints on habitability and vast distances for those stars to fill. I’d want to see numbers before assuming this had to be so.
It changes if we mean “habitable” in the sense of:
a. Mars and the Moon, places we could colonize but never walk around outside.
b. Antarctica, the Sahara Desert, the top of Mt. Everest, places we could colonize and live and even go outside sometimes, but where we’d still need a lot of technology to survive for long.
c. What America and Australia were to the Europeans, places we could just go colonize and live with relatively limited technology or hardship once we got there.
I assume (c) is very unlikely. We have (a) in our solar system, so it doesn’t look so improbable. I have no idea whether (b) is at all likely.
Yes, that’s true, I wasn’t considering places that could merely house a fragile, barely self-sufficient outpost as habitable.
If we can’t have an ecosystem that we can be a part of, eventually the system will fail.
Maybe? But it’s probably rare, and just hoping for astronomical coincidences across two stellar systems does not make a galactic empire. Maybe they are out there, posting about how their two-system empire is seemingly alone in the galaxy.
As I said upthread: your civilization’s lifespan is limited by the lifespan of your star(s), and more broadly by the lifespan of the universe. A star you haven’t colonized isn’t just sitting there — it’s destroying itself and the computation its energy could perform. The more stars you enclose in Dyson swarms, the more consciousness you can support before the universe ends.
Personally I think consciousness is good, even if we can’t communicate with it; and I suspect there’ll be enough people who agree with me to get some colony ships out there. After all, if there are people who believe that more of them is good and people who don’t believe that, and the contest is proliferation…well, one would expect the ones who want more people to win.
The great filter largely implies that civilizations go extinct prior to the interstellar civilization stage. It’s possible to imagine scenarios where far-flung empires would go extinct (brutal civil war, or some resource required for interstellar travel being extremely rare and easily exhausted, for starters), but I think placing extinction (or crippling) events earlier in the timeline is more likely.
brutal civil war
But the brutal civil war only works as a Great Filter event if it ends with the whole alien civilization being destroyed, as in, all of them dying. That’s implausible and becomes ever more so as you assume more star systems and planets belong to the alien civilization (e.g. – odds increase that one or more star systems will stay neutral, or will be too far away from the fighting to be hurt by it).
Look at Earth’s history. There have been countless devastating civil wars, but none where both sides destroyed each other to the last person simultaneously.
or some resource required for interstellar travel being extremely rare and easily exhausted, for starters
If you’re implying that there might be a resource that enables superluminal travel, then yes, I agree that its exhaustion would pose major problems to an interstellar alien civilization, but it wouldn’t by any means lead to them dying out. Also, there’d be nothing to stop them from continuing to expand their civilization, but at sub-light speeds.
Right, these scenarios could plausibly end an empire, but they by no means would, so I don’t think a search for a single great filter should look there. But as one more failure mode among many, they might possibly contribute to a dearth of interstellar life.
None were undertaken by civilizations capable of interstellar flight. Possibly there exist planet-destroying weapons.
The one (interstellar flight) does tend to imply the other.
True, true, and it may be easier to create a planet-destroying missile than to create a planet-destroying missile with life support systems attached!
There hasn’t been a civil war where the nation was extinguished to the last man, but civilizations have disappeared mysteriously all the same. The Maya come to mind, and the Mycenaeans. Probably many more. I don’t think it’s impossible for a civilization to die out in a civil war, particularly with more powerful weapons. Maybe they do not kill 100% of the population, but the survivors may not last for long, or develop a distaste for advanced technology, or just become less technologically advanced for a long while.
Classical Mayan civilisation declined severely from its peak, but their descendants are still there.
Here’s a Great Filter idea that would destroy (completely) an interstellar civilization.
Assumption: Travel back in time is possible.
Assumption: There aren’t infinite parallel universes. There’s just one, and if you go back in time and change something, it changes the one-and-only. You can destroy yourself, you can change yourself, you can change the future, and the past, but the new history is the only history unless/until it is changed again.
Eventually, even if you establish guidelines, monitors, and laws, somebody will go back in time and make it so that your society is unable to develop time travelling. Someone from your society is going to keep going back in time until prevented, and the only thing that 100% prevents anyone in the future from ever going back and changing anything ever again is that they change the timeline such that time travel is not developed. Any other outcome will result in more fiddling with the past. Anyone in your interstellar empire that has sufficient technology will eventually wipe out the empire itself.
It sounds like you’ve chosen one of the time travel models that has paradoxes. If I go back in time and kill myself, who was it that killed me?
I’m hesitant to jump in here, because I definitely don’t have any real knowledge of various theories of time travel, paradoxes, and causality, but my intuition:
You did. Your future self (that traveled to the past) continues to exist, along with whatever you brought back with you (a time machine, perhaps). This would be true even if you went back further and killed your grandparents. The future changes, and you are not born, but you are now part of history and exist as part of the past (which is your new present).
That’s what happens under my assumption. Otherwise, time travel probably doesn’t work as a Great Filter candidate.
Of course, I think it’s unlikely that humanity ever gets to travel back in time at all, but
1) If it can work
and
2) If it works this way
then
3) It’s a plausible Great Filter candidate.
It has a really nice effect (for the purpose of explaining the Fermi paradox) of allowing sapient species to develop up to the point of time travel and then erasing themselves before they can go interstellar, particularly if time travel tech is approximately as hard as interstellar travel. That the speed of light is a plausible obstacle to both problems implies that that assumption isn’t unreasonable.
Maybe the reason we can’t go faster-than-light without infinity energy is that the process of going back in time with faster-than-light travel needs all that energy to create a new universe.
Something I feel is under-explored related to time travel (and that relates to your point):
Time travel is also interstellar travel (well, at least interplanetary travel). Not only do you have to pinpoint the time when you arrive, you have to pinpoint where the planet (Earth) will be when you arrive there, and plan your landing accordingly, and fairly precisely lest you end up:
1. In space
2. In the wall of a building
3. Somewhere deep under the crust of the Earth
4. Somewhere in the ocean (under the surface)
Time travel is roughly as difficult as landing in a target with a 1m radius on Mars, except you are also somehow navigating time in addition to the regular spatial dimensions involved.
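For a rough sense of scale (a back-of-envelope sketch in the galactic rest frame, using the Sun’s approximate orbital speed around the galactic center of ~230 km/s): a jump of just one day backward that doesn’t match that motion misses the planet by

$d = v\,t \approx 230\ \text{km/s} \times 86{,}400\ \text{s} \approx 2 \times 10^{7}\ \text{km},$

roughly fifty times the Earth–Moon distance.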
I think bringing in those considerations make it a computational nightmare, so in fiction I’m okay with handwaving at gravity and saying that you stay in the same relative position on earth.
Well, a time machine that’s not also a space ship would have that problem. If you’re already in an interstellar craft, going back in time is just a navigation problem that’s probably not too much harder than FTL travel itself. You want to stay in open space the entire time and not run into anything.
If your time machine is a DeLorean driving in a parking lot, you’re gonna have a bad time.
@Randy M
You still might end up stuck in a wall, or, if you weren’t on the ground floor, suspended in mid-air if the building you were in doesn’t exist in whatever time you travel to.
I believe this was a plot point in Michael Crichton’s Sphere.
@acymetric That’s certainly true, and if you go back to the days of the dinosaurs you probably drop into the ocean due to continental drift.
There’s the Terminator method, where your little sphere of time displacement either overwrites or swaps with whatever was at the target destination.
Well, so is ocean travel, or bicycle travel. But gravity and momentum keep you pretty tied to your context; I don’t see why time travel would have to be any different.
Pretty much any of the remotely plausible time-travel mechanisms (e.g. a Tipler cylinder) serves incidentally as an anchor point — you can’t just follow a closed timelike curve anywhere you like.
Do you prefer one where, if you go back in time to kill yourself, your action results in the creation of a new universe?
I guess I prefer the one where as soon as you arrive in the back-in-time time, you create a new universe. The universe where you built your time machine and left from is without you, just as if you’d died. The universe you arrive in has two of you (until one of you dies, whether or not by the hand of the other).
But “prefer” seems like a weird choice of word here. Obviously time travel is impossible, but the one you posed sounds logically inconsistent to me.
Ok, but there’s an inconsistency in your version too, right? You power your time machine with a relatively small amount of energy from Universe 1, and the process results in the creation of Universe 2, which is a copy of Universe 1, let’s say, 1 second earlier. Isn’t it going to take every bit of energy from Universe 1 to create Universe 2? This is a conservation of mass/energy problem.
Universe 1 can’t still be there, right?
There’s an episode of Stargate Atlantis where the team tries drawing energy from alternate universes to power their stuff. Unfortunately, it turns out they want to exist pretty bad too.
As others have noted, the Great Filter pretty much has to apply before a species starts colonizing other star systems. Only a subset of plausible star-colonization behavior patterns are vulnerable to inconspicuous extinction on an interstellar scale. That subset may overlap strongly with what we imagine ourselves doing over the next few centuries and have been glorifying via e.g. Space-Opera science fiction, but it isn’t a perfect overlap and in any event the range of plausible behaviors is rather larger than SF normally allows for.
If the universe generates many technological civilizations, and nothing gets around to extinctifying them before they develop starflight, some of them will survive and some other ones will go down in a blaze of really conspicuously blazing glory.
The exception is if the Great Filter is something that destroys the entire universe, though that just reduces to the unsatisfying explanation that we are the first (in our universe).
Why does an interstellar civilization need the “self-sustaining colonies” part?
They might all need to rely somewhat on the parent system (possibly a purposely created reliance, or reliance on some unique entity) and, following its fall, be unable to adapt quickly enough to continue existing.
Second – perhaps there are limitations on their growth that sit above the planetary scale but below the intergalactic one.
Perhaps their terraforming ability is limited, and the kind of planet they require is only really common in whatever pocket of space they are in, and too sparse outside of it.
It depends a lot on whether they keep researching useful science. Maybe at some point you just hit a wall where even full planet sized research projects won’t do you any good, so if you lose the mother planet, colonies still have all-the-possible-science anyway, and they keep on trucking.
But there’s also the chance that there’s just no economic upside to massive projects like colonizing another planet 10 light years away and stuff never gets done.
Indeed, it’s possible that it never really pays off to get a substantial chunk of humanity off our planet, and then one fine day something happens to wreck our planet, and a century later there are no more humans left.
I think there’s a lot of economic benefit to, say, terraforming Mars and building colonies around the different planets. That saves us if the Earth goes kaboom due to someone’s finger slipping on the nukes or whatever.
The question changes if we are talking of colonizing even Proxima Centauri, and chances are Trisolaris is not really inhabitable for us.
My opinion is that there is in fact no economic upside, unless near-light-speed travel is invented, since trading with a colony that takes a generation to reach is not going to be personally profitable to anyone; moreover, you can probably develop substitutes in the meantime, and the expense of the voyage would be, pardon me, astronomical.
Or it would be economically useful if there exists some extraterrestrial unobtanium which we somehow realize a use for from all the way over here, à la Avatar.
Colonization is basically taking out a fairly expensive life insurance policy on your species, with the exception that no one you know personally will benefit from it.
I’m writing a novel which starts from that premise, and concluded that it had to basically be a vanity project of a vastly wealthy visionary.
It could still be an exploration-based venture, or be used to escape persecution/create a new nation/get rid of the useless third.
Plus, you are making two assumptions with that –
There’s no great increase in lifespans, if not outright immortality
There’s no great increase in, or appetite for, automation of the colonization level
Yes, but not one that benefits the sending nation economically.
Perhaps, but only if an extremely advanced and wealthy group was on the receiving end of the persecution. But that’s just the ‘insurance policy’ aspect writ small.
It will by definition create a new nation, but how will that benefit the people fronting the cost?
No. (Although I did chuckle at the reference.) Aside from the difficulty of getting two billion people onto one spaceship, or building and launching millions of spaceships, in a generation or two you’ll have replaced them and you’ll still have people of below-average utility. Colonization can’t be a cure for overpopulation.
True, in some kind of scenario where lifespans are increased by an order of magnitude, very many things change and I can’t really speak to all the implications (many of which will depend on the particular details of how that came to be).
By automation of the colonization level, you mean some sort of singularity? Similar to above, that will certainly change things in unpredictable ways. Barring a post scarcity society, I think the distance still precludes really profiting much from interstellar trade. Anything you can do without for a century you can probably find substitutes in that timeframe.
With long, long lifespans, some things become more plausible. You can just ride out your expensive and slow spaceship to the next star and arrive while old. Still, making machinery that lasts thousands of years is a tough problem, even with intelligent beings aboard who can repair it (and who have to carry spare material).
Methuselah aliens colonizing the galaxy is an interesting variation, though it still doesn’t explain where they are. Maybe they just don’t build detectable megastructures. Then the Fermi Paradox answer is that the old aliens just don’t want to contact us.
The people fronting the cost ARE the people that want to create a new nation.
As in, the cost/repercussions/morality in establishing a new nation on the home planet will be more costly than finding a new space colony.
This was meant to refer to the non-economic reasons humans had for establishing colonies in America.
No. Rather, making the establishment of colonies an automated process. The most obvious way is via replicating robots. In those cases, it can allow exponential growth, or at least a cheaper unmanned process.
Not sure I count that as a ‘colony’ but for reasons of the original topic it might qualify. Unless those robots are preparing a home for humans or sending back the unobtanium, that’s just us depositing some self-replicating junk across the expanse.
Just because it is more costly to establish a new nation on Earth doesn’t mean you have the resources to establish a possibly less costly one elsewhere. It’s the people already in possession of nations or the equivalent who will be able to fund such ventures.
But I think that still falls under “no economic upside.” I don’t mean that there are no reasons to colonize; just that I don’t foresee trade being one of them given the transit time and cost involved. All remaining reasons are basically ideological, since the colonists probably won’t improve their lives any by going (by virtue of giving up much of it to get there, or else there being close enough to reach in a lifetime but pretty inhospitable).
I will note that empirically living creatures have a strong tendency to fill all available space and increase their population to the extent possible given available resources, and evolutionary theory implies this tendency will be essentially universal.
Yes, a self-aware species can formulate an ethical system and then take steps to codify and preserve those values (protecting them against further drift/selection), and these values might not fully support unchecked expansion (though, since they evolved, they presumably *will* be such as facilitated growth in the ancestral environment).
However, it’s extremely likely that the vast majority of advanced species will engage in expansion and interstellar colonization. The reason is simply population pressure: resources (partly raw materials, but mainly energy) in any star system will be limited. Population growth is exponential (barring fine balancing — and there are good evolutionary reasons for growth to win out). At some point you run out of available resources. To be concrete, this might take the form of enclosing the local star in a Dyson sphere/swarm, until all solar energy is tapped.
At this point (or realistically, long before) some locals will start thinking that all those other stars out there offer a lot of free untapped resources, with the only cost being transport. Some combination of the desperate and the entrepreneurial will set out.
Once it begins, the expansion will be inexorable and indeed quite rapid, driven by a simple mathematical fact: growth rates of any local population (e.g. around a given star system) are exponential, but available space (and therefore resources) in the future light cone is cubic in time. Thus for any positive growth rate, population pressures will far outstrip reachable uninhabited star systems, and the civ will continuously send out new colony ships.
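To make that exponential-vs-cubic point concrete, here’s a quick numerical sketch (the growth rate, expansion speed, and stellar density below are made-up illustrative numbers, not claims about any real civilization):

import math

growth_rate = 0.001  # assumed population growth per year
speed = 0.01         # assumed expansion-front speed, as a fraction of c
density = 0.004      # rough stellar density, systems per cubic light-year

# Population measured in units of "one fully exploited star system".
for year in range(0, 20001, 5000):
    needed = (1 + growth_rate) ** year                     # exponential demand
    radius = speed * year                                  # light-years reached so far
    reachable = density * (4 / 3) * math.pi * radius ** 3  # cubic supply
    print(f"year {year}: need {needed:.3g} systems, can reach {reachable:.3g}")

With these numbers the cubic term wins for the first few millennia, but somewhere before year 10,000 exponential demand permanently overtakes every system the front can physically reach – which is exactly the pressure that keeps colony ships launching.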
The only future Great Filter hypothesis that I find vaguely plausible would be that it is impossible to develop really high tech without making mass destruction really easy. Society wouldn’t last very long if anyone who was having a bad day could whip up a nuke or planet-devouring black hole in 15 minutes. This is somewhat explored in Vernor Vinge’s book “Rainbows End”.
It is of course easy to imagine many plausible past Great Filters.
Isn’t this part of Bostrom’s argument? From his Sam Harris interview, I think his example was that human civilization probably couldn’t have survived if you could make a Hiroshima-sized nuke in your kitchen with easily-found ingredients.
I’m not 100% sure this is correct–you can imagine cultures evolving that could survive this ability, but they might be awful in many other ways. In any near-term space colony, I think you’d have something a little like this. Someone going crazy and opening the airlocks/starting a fire/dumping poison into the air supply might kill off a big chunk of the colony’s population. One solution to that would be that anyone who seems even a little weird or “off” gets locked up or spaced.
One thing that’s kind of worrying w.r.t. the Great Filter is that it looks rather like substantial animal intelligence has evolved multiple times here on Earth. Not just us and other primates, but also in elephants, dolphins, corvids, wolves, etc. (And I think octopi are considered quite intelligent – that’s another species that’s not even a vertebrate!) There’s also a very different kind of intelligence that’s evolved multiple times on Earth – eusocial species. That might (or might not) be an alternative path to some kind of technological civilization, though it’s hard to imagine what it might look like. But it’s worth noting that large-scale war, farming, and herding were all invented by eusocial insects a long, long time before humans arrived on the scene. All this makes it look to me like evolving substantial animal intelligence probably isn’t so hard, once you’ve got complex multicellular life.
I have the feeling that intelligence as in raw problem-solving ability probably isn’t as important here as abstract language. It looks very probable to me that you can’t build anything like technological civilization if you don’t have language, even if you can fish for termites or escape aquaria like a boss. And as far as we can tell that really has only evolved once, quite recently; lots of species communicate in some fashion, and a lot can even learn the meanings of a limited set of human words, but nobody, even our closest relatives in the great apes, seems to be able to use them with anything like the structure and generality that we do.
On top of that, it’s exactly the sort of lateral breakthrough that we’d expect to be evolutionarily rare: learning lots of “nouns” would have steep diminishing returns in the wild without a “grammar”, yet there isn’t a clear evolutionary advantage to building the first steps of one.
The Great Filter hypothesis is about why Earth hasn’t already been colonized. It would be pretty quick (on an interstellar time scale) to spread exponentially through the galaxy using self-replicating Von Neumann probes and colonize every habitable planet, but apparently it hasn’t been done, despite hundreds of billions of stars in the Milky Way which could have birthed an interstellar species. Something must be stopping this from happening. Either intelligent species arise rarely, or their development reliably gets arrested permanently before they reach this phase.
You are implicitly assuming that interstellar self-replicating Von Neumann probes are possible, which IMO is a huge assumption. Given our current level of technology, we couldn’t even begin to imagine how to build one of those things; and I’m not convinced that the laws of physics do not outright prevent it — unless, of course, you sneak in molecular nanotechnology or superhuman AIs or some other science-fictional shortcut.
In that case, the Great Filter is ahead of us, and we will be stuck on this planet (or in this star system).
From what we know currently, it should certainly be possible to colonize other planets, and if we can do that then we can colonize (slowly) other star systems. But maybe there are Hard Things we just don’t understand yet.
Do you mean that we can colonize any planet, or that we can colonize a planet given some set of requirements? If the former, our expansion is on the scale of at least a couple hundred years to reach and colonize each system.
Assuming the latter, reaching and colonizing the next livable system is probably closer to the scale of all recorded human history.
My first “colonize other planets” was “colonize other planets in our star system.” Not that we can necessarily colonize any hell-scape planet we find.
Say we colonize Mars in 2050, and most of the rest of the solar system by 2300. Then a group launches a fast ship to Proxima Centauri b at 5% of the speed of light and it gets there in around a century. (We can probably get people living that long.) They get established by 2400, and take 300 years to build up the wealth to start the process over, colonizing other bodies in their system. That’s about 700 years to cover 4 light years, or 17 million years to go from one edge of the galaxy to the other.
Maybe there is some reason that humans can’t do this: maybe everything else in the inner solar system turns out to be uninhabitable (and it’s too big a leap to go from Earth to the outer solar system, to say nothing of other star systems), or maybe humans just can’t live long enough, or we started in a particularly bad region of the galaxy such that all the planets in nearby star systems are hell-scapes that are too challenging for “baby’s first interstellar mission.” But those reasons would also need to apply to every other species out there.
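For what it’s worth, the arithmetic behind that 17-million-year figure checks out; a quick sketch using the numbers from the comment above (the 100,000-light-year galaxy diameter is the usual round figure):

hop_ly = 4.0         # light-years covered per colonization hop
hop_years = 700      # ~80 yr transit at 0.05c + ~300 yr build-up + slack
galaxy_ly = 100_000  # rough diameter of the Milky Way
hops = galaxy_ly / hop_ly                       # 25,000 hops edge to edge
print(hops * hop_years / 1e6, "million years")  # 17.5 million years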
@Edward Scizzorhands
But what motivates this constant, rapid expansion? The time scale is too long for us to rely on “adventurous spirits” who do it “not because it is easy, but because it is hard” unless you find a way to instill that across generations (some fanatical religion or something maybe). Otherwise the expansion is going to be a lot slower because it will be driven mostly by need.
Unless expansion is actively suppressed (which is possible), things work out fine if only 1% of the society wants to expand. That is how most expansion works, anyway. 99% of the people stay home and less than 1% colonize.
Reaching the next star system is probably do-able in one life time, if not for humans then for some other species.
So, gen A decides to head to Proxima Centauri b and starts colonizing. Gens B and C were probably born on the ship, with no choice in the matter, and are probably doing most of the work of actually colonizing. I find it more likely that the future generations would build a ship to leave Centauri and go back to Earth than that they would say “well, this desolate, borderline uninhabitable planet has been fun, but let’s embark on another generations-long journey to the next one!”
The only way I buy expansion to other systems is the discovery of a way to travel between points in space such that the travel takes an insignificant amount of an individual lifespan to do so.
Consider also that even if you have a group committed to doing this, and their future generations also stay committed, the risk of catastrophic failure killing them all at some point along the way is probably relatively high.
Some version of suspended animation gives you that.
Who said anything about “constant”?
If it takes ten thousand years for the average colony to grow to the point where it is capable of building starships in their spare time, and even then only once every thousand years does the random-walk of local politics, sociology, and economics give rise to an oppressed and/or adventurously spirited minority population desperate and resourceful enough to launch a single interstellar colony mission before reverting to apathy and hedonism, and if colony ships are limited to 0.01c and ten light-years maximum range and new colonies have a 90% failure rate…
…the Milky Way is still fully colonized roughly half a billion years after the first technological civilization develops starflight.
The Milky Way is approximately twelve billion years old. Even if we assume that the first generation of stars(*) were too metal-poor to support life-bearing worlds, that still gives us ten billion years to evolve a technological civilization and colonize the galaxy. Fermi’s question stands.
* Called “Population II” because someone guessed wrong
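A back-of-the-envelope check on that scenario, treating the frontier as advancing one 10-light-year hop at a time and folding the 90% failure rate in as extra launch attempts (all parameters are from the comment above):

maturation = 10_000      # years before a colony can build starships
launch_interval = 1_000  # years between colony attempts, once capable
success_rate = 0.10      # 90% of new colonies fail
transit = 10 / 0.01      # 10 ly at 0.01c = 1,000 years per trip

# Expected time for the frontier to advance one successful 10-ly hop:
hop = maturation + launch_interval / success_rate + transit  # 21,000 years
print(100_000 / 10 * hop / 1e6, "million years")             # ~210 million

Even these deliberately pessimistic numbers cross the galaxy in roughly 210 million years, comfortably inside the stated half-billion-year figure.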
@DavidFriedman
Granted, and we are probably closer to that than the alternative, although I’m not sure how legitimately close we are.
@John Schilling
I was responding to Edward Scizorhands, who proposed a much faster rate of expansion, which (at the time scales we’re talking about here) I would call more or less constant.
Your proposal, which I suspect is a conservative take on your part, is more reasonable. The “great filter” in that case is simply time. There are a nigh-infinite number of ways for a civilization to collapse, especially a fledgling colony civilization traveling through deep space, or even after reaching its destination planet. The odds of the civilization making it that far are just incredibly low, not because of any single cataclysmic type of event but essentially because over the course of half a billion years attrition ends up outrunning expansion.
Call it Murphy’s Law of Interstellar Colonialism.
I mean, we’re talking about exponential growth here. That’s going to fill up space pretty quick for any reasonable constant you pick. And the highest constant dominates here, so if civs differ the one with the highest growth rate will just take over everything.
And the patterns of colonization people are talking about above strike me as insanely slow compared to what is likely — even assuming no intelligence explosion.
If by “civilization” you mean an individual planetary or system-level colony, then sure – which is why my model allowed for 90% of colony civilizations to collapse before ever getting around to launching even a single starship. And you could up that to 99% or even 99.9% if you allow the tiny handful that make it to launch starships once per century or decade rather than once per millennium. The collapse of planetary civilizations isn’t a showstopper if there are lots of planetary civilizations to work with – and if interstellar travel is a marginal proposition, then “…and we get to loot the remains of a Lost Civilization for sure!” is probably going to push recolonization missions over the top, so probably not much ground lost in the long term.
If by “civilization” you mean the set of all planetary or system-scale colonies descended from the same source, then, particularly for the hypothetical where interstellar travel is difficult and rare, I disagree with there being a nigh-infinite number of ways for interstellar civilization to collapse, because it wouldn’t be a single civilization in the sense that we normally use the term, and the gulf of interstellar space would make for a most effective firebreak against a nigh-infinite number of possible civilization-collapsers.
@acymetric: that is the exact plot of KSR’s Aurora – a pretty good book, as I recall.
Also the plot of Stephen Baxter’s Ark, although it’s important to note that in both books there’s something wrong with the colony, so it’s not so much “reaching out again from a successful colony” as “this attempt to colonize isn’t going to work, let’s just go back”. Ark actually goes in all three directions, with some going back, some staying on the problematic world, and some going forward to a third planet.
@Edward Scizorhands:
I actually fear that you are right; although, in the best-case scenario, the Great Filter might be something like our Sun dying, which won’t happen for a good long time.
As far as I understand, it should be possible to set up human presence on the Moon, or perhaps even on Mars, given incremental enhancements to our current technology. However, I am far from convinced we will ever do it; the costs involved seem to be much higher than any government or corporation is willing, or able, to pay. China might go to the Moon, though, just to spite the US — but I doubt they’d ever maintain a permanent presence there.
Traveling to Alpha Centauri would take on the order of 100 years, and that’s just for a robotic probe. No present human institution operates on such time-scales; unless maybe you count dictatorships whose only relevant goals are “stay in power” and “keep being a dictatorship”, not “travel to other stars”.
“Seem” means that you are basing your cost estimates on observation.
And the only sort of manned(*) space flight activity anyone has had a chance to see, is the sort that has been done either A: under an explicit mandate to deliver the most spectacular possible results in the fastest possible time without regard to cost, or B: exactly the same way it was done last time so that nobody can be blamed if anything goes wrong, and with an explicit mandate that no price is too high for “safety”.
This may give a misleading impression as to the plausible cost range.
* Very nearly the only sort of unmanned space activity, for that matter.
Colonization doesn’t need to happen via government, although governments must allow it. If you let enough centuries pass, eventually private groups accumulate enough wealth to do it on their own.
@John Schilling:
Er… yes? What else should I base them on? Logical deduction from first principles?
@Edward Scizorhands:
What is their incentive to actually do it? Why spend all that accumulated wealth on a long-shot blue-sky project, when you could instead invest it in reliable short-term gains?
The money men would have to see it as a philanthropic expense and have enough corporate control to start the project and keep it going long enough to build, stock, crew, and launch the ship–which may be difficult because it could take many years.
Perhaps there was a recent near-miss of an asteroid or nuclear strike which motivates someone to use their funds on such a venture, or maybe they want the renown.
I don’t think they should expect universal acclaim for doing so, however. A lot of people are going to see it as wasting resources that could be spent otherwise, irrevocably.
As far as colonists go, once funded, I don’t think it would be a problem to find some willing to go, frozen or as breeders, whether from a desire for fame, adventure, or escape.
You are living at a time when two billionaires are fighting it out with space companies, including the world’s richest man (still, post-divorce).
Why is Bill Gates trying to cure polio? Aren’t there better returns somewhere else?
Maybe 50 years from now, when there are even more of them, none of the billionaires are interested in space. Fine. Wait another 50 years, and there will be even more (in real terms) billionaires. Oh, they all want to cure Alzheimer’s? Fine, wait another 50 years. Eventually, unless man goes extinct or the government confiscates everyone’s property or disallows space travel [1], you are going to get someone with enough drive to make it happen.
Also, while looking up billionaires, I found out that Kylie Jenner is the world’s youngest “self-made billionaire.” Never mind the life support, launch me off this planet now.
[1] Those are actual possibilities, and if aliens are common I’m sure a lot of them got taken out of the space race through one of those three methods. But if aliens are common, then you only need one to get past that and wallpaper the galaxy.
Um — since superhuman AI is obviously allowed by the laws of physics (it would be trivial to selectively breed super-intelligent humans starting with actually existing historical geniuses, which puts a lower bound on possible intelligent agents a decent step above the current human level; then factor in running these super-humans on faster hardware and we’re already up to a pretty superhuman level, and we’ve barely gotten started on possible improvements!), saying VN probes are not permitted by the laws of physics “unless… you sneak in… superhuman AIs” is simply incoherent.
You’re claiming that a thing is actually physically impossible (an insanely high bar to prove!) unless we posit a thing that plainly is physically possible. So…it’s physically possible?
Besides, there are plenty of biological replicators. We’re essentially colonies of said replicators, and we can build spaceships! Heck, we’ve sent out interstellar probes *already* — how hard would it have been to throw a cell culture on board?
So…still confident VN probes are *physically impossible*?
I guess you and I have very different definitions of “trivial”, and perhaps “superhuman”. I will grant you that a genius could technically be considered “superhuman” — as in, several sigmas above the mean — but that’s not what it would take to create a reliable Von Neumann probe. When I say “superhuman”, I’m thinking in terms of “several orders of magnitude”, assuming such a concept is even coherent.
Which we currently have no idea how to even begin researching; and, again, I’m not at all convinced that it’s even possible without some sort of self-replicating molecular nanotechnology… which may, in turn, be impossible. And no, running Google Maps on a really big cluster doesn’t count.
I said that I was not convinced that it was possible, not that I was convinced it was impossible. You are the one who is proposing a self-replicating probe that can not only survive interstellar distances, but also make perfect copies of itself out of raw materials, such as rocks and interstellar hydrogen, every 100 years or so (if you’re lucky). The burden of proof is on you.
I don’t know, how hard is it to maintain a livable environment for hundreds of years between stars? Also, how hard is it to engineer a cell culture that actually does something useful… such as generate computer hardware out of rocks in the vacuum of space? You tell me, you seem to know how to make one!
@Bugmaster:
I apologize if my initial comment came off as a bit sharp.
Before engaging on this topic further, I think we should clarify terms a bit, since I’m not interested in arguing about the meaning of words.
I gather from your comments here and elsewhere that you are a technological pessimist (particularly compared to me). That is a perfectly coherent position, and I’m quite happy to discuss it further.
However, the post I was originally responding to made (what I take to be) a *much* stronger claim, that Von Neumann probes and superintelligent AI (and MNT) are not merely difficult, but might plausibly be *physically impossible* — that is, literally not allowed by the laws of physics, ala perpetual motion machines or FTL signalling.
My comment was responding specifically to that claim. However, if that’s not your true objection, and you’re merely asserting a more generic kind of technological pessimism, I’ll be happy to continue the conversation in that vein instead.
@Eponymous:
I bet you hear this all the time, but still, I consider myself more of a realist 🙂
I am not claiming absolute certainty that superintelligent AIs and Von Neumann probes are physically impossible; but, currently, I’m about 60–80% convinced of this, depending on the specifics.
For example, I am fairly sure that “gray goo”-style molecular nanotechnology is impossible. True, molecular replicators do exist — that’s what we’re made of — but the energies required to do the same thing outside of water-based chemistry (or some equivalent), and in much shorter timeframes, are just too large.
On the other hand, constructing an interstellar probe is pretty easy — you can launch a wrought-iron cannonball quite a long way, with the right rocket. However, making that probe do anything useful is much harder, depending on what you want it to do. Making a probe that can create multiple copies of itself may be prohibitively difficult, depending on what you want to make it out of — and that’s assuming that you’ve solved the problem of perfect self-replication in the first place, not to mention survival over hundreds of years of exposure to hard vacuum and radiation.
The Boring But Practical ™ answer is that technologically advanced intelligent life is much less likely to arise than optimistic futurists (and science fiction authors) tend to think. It is entirely possible that we humans are the only technological civilization in the Milky Way. Even if there are others like us, the diameter of our galaxy is about 100,000 light years, IIRC, and we only invented radio telescopes about 80 years ago.
The really sad thing, as I see it, is that the Universe is probably teeming with intelligent life, relatively speaking… and we will most likely never see it. The Andromeda Galaxy is 2.5 million light-years away. There could be an alien there right now, writing a post much like this one… and in 2.5 million years, we could possibly read it… except we won’t, because its light will be too weak by the time it reaches us.
I am reasonably sure that, for all intents and purposes (and barring some sort of a magical FTL engine), we are alone in the Universe… and so are all the other intelligent species.
I think you are correct.
I think you’re neglecting timescales here. Aliens don’t have to exist right now and only right now; we can look at aliens on the other side of the Milky Way who were sending out signals 50,000 years ago, even if they’re now long gone.
And Andromeda isn’t actually that hard to get to/from for technologically mature civilizations, at least not much more than colonizing the Milky Way is (see this to-scale diagram). Get your Von Neumann probe sped up to a decent fraction of c, wait a few million years, and have it start up the process in the new galaxy. Sure, 10 million years is a while, but the universe is billions of years old. The evolutionary history of Earth doesn’t seem so sped-up that every other biosphere must have been within a few million years of us in the race to sapience; aliens from Andromeda who got to multicellular life a few hundred million years earlier would have no trouble turning all available matter in the Milky Way into whatever configurations they liked.
Space is big, but so is time. It doesn’t seem implausible that there might be civilizations with a billion-year head start on homo sapiens, which is enough to traverse the entire Virgo Supercluster.
Nobody is answering why an intelligent species would do this. For fun?
I mean, if I had control over society, the answer would be both “Because we can” and “Because I want to see what’s out there”
@woah77
But…you won’t see what’s out there. Nobody in your civilization will, just the Von Neumann probe.
You mean, no one alive today would. Assuming my society is stable enough to survive for 10 million years (that’s a big if, but I’d be willing to operate under it), my offspring millions of generations later will see what’s there.
How will they see it? Have you developed some kind of massively powerful communication device that can transmit information that far?
We’re talking Von Neumann-style probes. If one is insufficient, an array of them could easily transmit that far. Individual photons don’t lose energy with distance (though the signal spreads out with the inverse square), so you just need enough of them for it to be easily detected. The bigger concern for things like the Fermi Paradox is why we can’t see any craft traveling: there isn’t any stealth in space, and anything accelerating to a significant fraction of c will shine like a star (not using that lightly; it takes substantial energy to get up to those speeds).
There are lots of old men who plant trees whose shade they know they shall never sit in.
If you needed the whole society to work together to build a von Neumann probe to get to the next galaxy, then “why would they do that?” is a good question. But I don’t think that was part of the thesis. It’s much easier to argue “someone will do this” than “no one, anywhere in the universe, will do this.”
There are people trying to do all sorts of weird things that don’t fit your model of them. It doesn’t mean they aren’t doing them. It means your model is wrong.
The question isn’t how to build the telescope.
The question is what intensifiers to put before ‘large telescope’.
Given that it only takes 15 inches of telescope to make out individual stars in Andromeda, signalling back shouldn’t be too hard for a type II civ.
As I mentioned in the thread above, I’m not even convinced that Von Neumann probes are physically possible. And I will absolutely grant you that, assuming that molecular nanotechnology and superintelligent AIs are more than just science fiction (which, again, I doubt), then there could be civilizations out there who have mastered them. In fact, there could be tons of such civilizations… way outside of our light-cone, because the probability of such things happening is extremely low.
I mean, look at us: we could colonize Mars in the next 50 years, if we really wanted to, but it doesn’t look like we want to. And we have a huge leg up on all those other aliens — we actually exist!
I agree with that conclusion; I assign fairly high probability to the hypothesis “advanced civilization happens with a probability that is nonzero but too small to be likely to reside within our lightcone”.
For Von Neumann machines, the existence of humans seems to suggest that there are no fundamental limitations to their existence (the only added capability of a Von Neumann machine is the ability to accelerate more effectively from place to place – self-replication and material/habitat fabrication we can already do). As for AGI, I doubt I can provide better arguments than e.g. Nick Bostrom in Superintelligence, but I don’t think it’s a prerequisite for galactic colonization.
The obvious contention is that we are amongst the most advanced species, if not the most advanced, and we’re not seeing anything more advanced because they weren’t there when the light set off. We know species capable of interstellar travel exist, because we’re here. We know we can’t see evidence of species capable of travelling between solar systems, because we haven’t (albeit there is the “would we recognise it?” question). And we know of no yet-apparent reason why we can’t follow Voyager beyond the heliosphere. All of this suggests interstellar travel (not necessarily FTL) is possible, and we’re as close to it as anyone. There is no filter, just the fact that intelligent life has developed no further than humanity.
It’s clearly a hypothetical position, but note the basic point here is that in the universe there has to be a first species to travel between stars, assuming that this can be done. Why not us?
I find your answer appealing, but I think it’s unlikely we’re at the forefront of an advancing universe. The Earth is only 4.5 billion years old. The first stars formed (relatively) very shortly after the beginning of the universe, about 13 billion years ago. Of course, you need a few stars to go supernova to get heavy elements, but I still think it’s likely there are many stars with a big head start on us.
That’s kind of the point of the Fermi Paradox: if we believe that our star/system is nothing special, why don’t we see any other star-faring civilizations? Now, it’s entirely possible that something happened, or that circumstances able to support life only became possible around 5 billion years ago. Or that the seeds of life took several billion years to form in space, and that all planets capable of supporting life were seeded within a very short time frame (relative to the billion-year time scale), so evidence of farther-away civilizations hasn’t reached us yet because they’re only tens or hundreds of thousands of years ahead of us.
Obviously that seems unlikely without something even older being the prime mover, but that would suggest an alien intelligence staying hidden or one that has already gone extinct. Since we haven’t looked at any planets outside our system that closely, we have no idea what might have once been.
There might be unusual details like having a relatively large moon being required.
Generally speaking, I don’t think it’s that farfetched that supercivilizations would leave Earth alone out of a desire to let aliens develop by themselves.
But I also suspect that supercivilizations capable of making that decision would also build detectable megastructures. Maybe it’s too early to detect them, though.
I always see “we are the first” as just a slightly altered case of “we are alone.”
The great filter has a precise meaning. In your scenario, there still is a great filter, it’s just that it’s behind us, rather than in front of us. You need a filter to explain why we’re the first, despite our being late compared to how easy it looks for intelligent life to develop. If you think it is hard for intelligent life to develop, that reason is the filter.
There is one category of explanations for the Great Filter that I don’t think has been discussed. Perhaps one of the effects of the sort of developments that make possible an interstellar empire is that people don’t want one any more. They discover the nature of reality, conclude that life is not worth living, and kill themselves. Or they learn how to wirehead really well, and do it. Or they become buddhist philosophers and switch to a life of contemplation. Or …
That seems quite unlikely on a civilizational level. Is there anything you could learn about reality that would make you conclude that?
This one on the other hand I totally buy. Imagine people using futuristic VR to avoid going crazy in the close confines of lengthy space flight, and then neglecting all duties because the VR is too enticing compared to the drab reality of life in a small metal tube or desolate colony.
Or they find out that the really interesting things are inside your head, so there’s no point in expansion.
I could easily imagine a universe where most species simply decide not to expand.
But it only takes one to decide to expand, and a billion years later the galaxy is covered.
Internet problems I know how to solve but don’t understand: Reddit occasionally has this problem when they update the site, where Chromium-based browsers won’t load it properly and it will suddenly go to a white page, but if you delete your Reddit cookies it fixes the site for good (well, until the next update) – even though you’d surely reacquire the same cookies, right?
Sounds to me like something on the server side is refusing to serve anything to a client with a cookie it considers invalid.
Yeah, possibly a change to what is stored in the cookie (or how it is stored) to authenticate or even just populate options… I would expect the site to handle that gracefully, but maybe it depends on your browser/settings.
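A minimal sketch of how that could happen (hypothetical Python/Flask code; the “prefs” cookie name and version field are invented for illustration, not anything Reddit actually uses): if a site update changes the cookie format and the error path is mishandled, stale cookies produce a blank page, while a fresh client gets a cookie in the new format – so after deleting cookies you never actually reacquire the same cookie.

import json
from flask import Flask, request, make_response

app = Flask(__name__)
COOKIE_VERSION = 2  # hypothetically bumped in the latest site update

@app.route("/")
def index():
    raw = request.cookies.get("prefs")  # "prefs" is an invented cookie name
    if raw is not None:
        try:
            prefs = json.loads(raw)
            # Old clients still carry version-1 cookies; a buggy handler
            # might bail out here instead of regenerating the cookie...
            assert prefs["version"] == COOKIE_VERSION
        except Exception:
            return "", 200  # ...serving an empty white page instead
    resp = make_response("rendered page")
    # A fresh client gets a cookie in the new format, which is why
    # deleting cookies fixes it until the next format change.
    resp.set_cookie("prefs", json.dumps({"version": COOKIE_VERSION}))
    return resp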
A question for English, Scottish, and Welsh people who live in the British Isles: Do you think Irish people look different from your own ethnic group? If yes, then what is different about them? Are there multiple “subgroups” of Irish people (e.g. (I’m making these up as examples) – short redhead, brunette with gray eyes and big forehead)?
If you think the Irish have a distinct appearance, are you sure it isn’t due to different clothing and hairstyle preferences that are more common in Ireland, or to culturally-rooted differences in facial expression (e.g. – maybe you can recognize Irish on sight merely because they tend to smile more/less than your ethnic group)?
I’m English but I have a quarter Irish blood, and my mother, who is half, has extremely stereotypical Irish looks: the red hair, strong forehead, high cheekbones, turned-up nose, freckles. I have inherited some of these features (I have dark – almost black – brown hair though, minus the grey, and my freckles are very minor). I’m not sure how common that stereotypical look is, but it’s definitely a thing that exists, and it was played up in caricatures of the Irish in a similar way to caricatures of Africans. My guess is that it’s more that only Irish people are very likely to have these looks, but the vast majority of Irish people don’t look this way, and are largely indistinguishable from other members of the British Isles.
I’d say rather that some British people look Germanic or Nordic in a way I wouldn’t expect an Irish person to. I don’t think there is an Irish appearance that is distinct from most white Brits.
I’m white, 5/8 English, 3/16 Scottish, 1/8 Swiss and 1/16 Irish, have darkish brown hair that goes a bit wavy if it gets long enough and a beard that goes a bit ginger in the summer, and think my appearance would make me a thoroughly unremarkable native of any country in the British Isles.
I don’t think any British national group can be reliably recognised (it’s not possible to do that even at the more granular level of European countries) but some people definitely do look [nationality].
I think if you go to the level of “Northerners” versus “Meds” then you start to see clearer distinctions.
I can definitely distinguish between e.g. French and German with significantly better than chance accuracy, but that’s still in the realm of educated guessing rather than anything that could be done consciously.
Defining things a little more concretely here might help.
For example, my great-grandfather was Irish. Because he was born in Ireland. HIS father, however, was Scottish, and they were Scottish for ages before that.
So do you mean “can you spot people from Ireland?” Or do you mean “can you spot people whose ancestors are mostly from Ireland for the previous few centuries?”
They said “ethnic group”, and it’d be a bit silly to just call anyone born in Ireland “ethnically Irish”.
So do you mean “can you spot people from Ireland?” Or do you mean “can you spot people whose ancestors are mostly from Ireland for the previous few centuries?”
Yes to the second. I define “Irish” people as people who credibly claim to be “pure Irish” or something like that.
I’m Scottish. I don’t really think Irish people look different. I guess, yeah, there are some features that are more or less common, but I feel like everywhere’s such a mixture these days that I can’t really tell.
Is the question not for Irish people?
Pretty sure, yeah:
-it seems like dark hair is the norm, rather than the mid or lightish brown common among whites in England (also in Scotland)
-red hair is much more common
-maybe green eyes too?
This could be regional*, as apparently Ireland has historically had less movement between places than other countries (and even in England you can recognise people to some extent by region).
(*I mean, I could also be mistaken; and it could be regional even if my impression is accurate.)
There are also some looks that seem very distinctively Irish – in the way that not every male Canadian or Scot is a giant with a boyish face, yet you don’t seem to get that type in numbers anywhere else.
For example the guy on the right, and a black-haired “intense, scraggly-looking survivalist” type that banning dog-fighting wasn’t fair on. (Lots of others too; those are just the ones which I saw the most there and the least elsewhere.)
Some variance could be due to other, non-genetic factors though; for a mundane example, no one in England plays hurling or Gaelic football, and I think you’re liable to end up with a different gait, body, attitude, etc., if you play these (at least the first) instead of football/cricket/rugby.
A couple of “Irish faces”:
http://image.guardian.co.uk/sys-images/Arts/Arts_/Pictures/2007/08/02/nesbitt460.jpg
https://www.sciencedaily.com/images/2013/11/131121130027_1_900x600.jpg
http://img1.wikia.nocookie.net/__cb20120701034125/harrypotterfanon/images/f/f8/Mack.jpg (this one is more subtle, could be seeing things. Though note the green eyes.)
https://www.theapricity.com/snpa/bilder/colmmeaney.jpg (lol)
Rupert Grint (Ron Weasley) looks pretty English/Irish – more English than Irish despite the orange-red hair and green eyes, but you can see why they cast him.
Just found a source on Irish red hair: http://www.my-secret-northern-ireland.com/irish-red-hair.html
Regarding Irish regional separateness: I struggle to recall *ever* seeing an orange-haired person of Irish blood in Ireland – lots of dark/maroonish red instead – but looking up “irish red hair” I found this: https://www.theguardian.com/world/gallery/2016/aug/21/irish-redhead-convention-in-pictures – apparently there’s an Irish redhead convention (see for yourself), and I know people of Irish blood in England with orangey hair.
Lastly, this set of search terms might provide some education/amusement/edification:
https://duckduckgo.com/?q=irish+farmer&atb=v153-7_j&iax=images&ia=images
I strongly support your use of advertisements to monetize your excellent website. It warms my heart to know that corporations are paying your website in my place.