This is the twice-weekly hidden open thread. You can also talk at the SSC subreddit or the SSC Discord server.
Meta
Support Slate Star Codex on Patreon. I have a day job and SSC gets free hosting, so don't feel pressured to contribute. But extra cash helps pay for contest prizes, meetup expenses, and me spending extra time blogging instead of working.
The COVID-19 Forecasting Project at the University of Oxford is making advanced pandemic simulations of 150+ countries available to the public, and also offers pro-bono forecasting services to decision-makers.
Giving What We Can is a charitable movement promoting giving some of your money to the developing world or other worthy causes. If you're interested in this, consider taking their Pledge as a formal and public declaration of intent.
Altruisto is a browser extension so that when you shop online, a portion of the money you pay goes to effective charities (no extra cost to you). Just install an extension and when you buy something, people in poverty will get medicines, bed nets, or financial aid.
Seattle Anxiety Specialists are a therapy practice helping people overcome anxiety and related mental health issues (e.g. GAD, OCD, PTSD) through evidence-based interventions and self-exploration. Check out their free anti-anxiety guide here.
The Effective Altruism newsletter provides monthly updates on the highest-impact ways to do good and help others.
AISafety.com hosts a Skype reading group Wednesdays at 19:45 UTC, reading new and old articles on different aspects of AI Safety. We start with a presentation of a summary of the article, and then discuss in a friendly atmosphere.
Metaculus is a platform for generating crowd-sourced predictions about the future, especially science and technology. If you're interested in testing yourself and contributing to their project, check out their questions page.
80,000 Hours researches different problems and professions to help you figure out how to do as much good as possible. Their free career guide shows you how to choose a career that's fulfilling and maximises your contribution to solving the world's most pressing problems.
Dr. Laura Baur is a psychiatrist with interests in literature review, reproductive psychiatry, and relational psychotherapy; see her website for more. Note that due to conflict of interest she doesn't treat people in the NYC rationalist social scene.
MealSquares is a "nutritionally complete" food that contains a balanced diet's worth of nutrients in a few tasty, easily measurable units. Think Soylent, except zero preparation, made with natural ingredients, and looks/tastes a lot like an ordinary scone.
B4X is a free and open source developer tool that allows users to write apps for Android, iOS, and more.
Norwegian founders with an international team are on a mission to offer the equivalent of a Norwegian social safety net as a globally available membership. Currently offering travel medical insurance for nomads, and global health insurance for remote teams.
Beeminder's an evidence-based willpower augmentation tool that collects quantifiable data about your life, then helps you organize it into commitment mechanisms so you can keep resolutions. They've also got a blog about what they're doing here.
Substack is a blogging site that helps writers earn money and readers discover articles they'll like.
Jane Street is a quantitative trading firm with a focus on technology and collaborative problem-solving. We're always hiring talented programmers, traders, and researchers and have internships and full-time positions in New York, London, and Hong Kong. No background in finance required.
There’s a documentary out about Theranos that is going to be shown at this year’s Sundance Film Festival:
Man, the sequel to Infinity War is really jumping tracks–oh, Theranos? Nevermind.
Joking aside, it’s a pretty good target for a screen adaptation of some kind.
https://www.quora.com/What-is-legal-in-Germany-but-illegal-in-the-United-States/answer/Andrzej-Wiencek
Generally interesting, but in particular, the age of consent is 14 if the other partner is below 21, so there might be a natural experiment about whether somewhat early sex tends to be bad for people.
Random question, on behalf of a friend:
Suppose a person might benefit from responsible use of a drug that could be habit-forming for them, where responsible use now carries a high risk of leading to future compulsive use that is unwanted from their present perspective.
Could a drug with a mild high be made non-addictive, or less addictive, by associating a rapid-onset unpleasant stimulus with the administration? Sort of like the initial pain of exercise before the exercise high, but more rapid. I've been under the impression for a while that the rapid onset of an associated high has a big impact on addictiveness, and I wonder if a bearable pain that has a faster timecourse than the desired positive effect might tune the overall affective profile to make it less addictive. I don't think merely delaying the onset of the positive feeling is sufficient.
https://torontolife.com/city/life/my-beautiful-death/
A woman poisoned herself making sculpture that involved grinding mussel shells. Mussel shells concentrate heavy metals – in her case, I don't know how much was from pollution and how much was just sea water.
There's a lot to be said about possibly overvaluing art, and about the certainly mistaken belief that everything natural is safe… not that grinding mussel shells with inadequate ventilation is exactly natural behavior.
If you have a bunch of weird symptoms, get your blood tested sooner rather than later if at all possible.
More discussion: https://www.metafilter.com/177989/My-Beautiful-Death#7576978
The Metafilter discussion mentions that safety isn't taught in art school – I think this implies that art schools themselves aren't using safety precautions.
Artist Beware, Updated and Revised: The Hazards in Working with All Art and Craft Materials and the Precautions Every Artist and Craftsperson Should Take
Some really cursory research suggests that eating mussels now and then isn’t considered dangerous, for what that’s worth.
This reminds me of Scott’s post Should Psychiatry Test for Lead More? I’m inclined to think the answer is… yes? But I also have a fear of chemicals that borders on paranoia.
Ugh. Art school is so terrible, in so many ways.
Every artist I’ve ever seen grinding significant quantities of mollusc shell has been wearing a dust mask.
Even without heavy metals, that stuff’s bad for your lungs.
Have we discussed this yet? Grimes, Canadian synthpop chick perhaps better known round here as Elon Musk’s girlfriend, releases Nine Inch Nails-sounding single about AI, uploading, ems, the simulation argument, and the Basilisk.
Thoughts? I like it.
A small part of me feels faintly weird about someone picking up ideas from my community’s long, boring, serious blog posts, and using them to make a kickass, trendy music video. Is this how people feel when they bitch about cultural appropriation? But broadly I’m in favour.
Someday I wanna start a hard-rationalist power metal band.
You had me at “Nine Inch Nails-sounding”. Cool song, kinda reminds me of Bjork for some reason. I’m not usually a fan of the “gentle feminine voice meets gritty instrumentals” style but here the instrumentals are good enough to get past that hurdle. And I have no issue with rationalitysphere material being inspiration for mainstream art.
Transhumanism, the simulation hypothesis, and Roko’s Basilisk: the music video.
Agh, you beat me to it. I missed this one because I only ctrl-f’d for Grimes before posting my comment, below.
In an effort to learn more about anime, I’ve ordered a bunch of famous films in the genre. The first one I watched was Ghost in the Shell. Not bad; perhaps I would have been more impressed if I’d seen it before I read Neuromancer or watched The Matrix.
My Neighbor Totoro was pretty good. The animation was gorgeous, and it's hard to dislike something so good-natured. But it was so relentlessly wholesome that I'm left feeling vaguely off-kilter.
Perhaps Ninja Scroll will bring balance to the Force.
When you say, “learn about anime,” do you mean you want to understand “the anime phenomenon” or you want aesthetic experiences? Because I can give you suggestions for one, but not the other.
Basically, I’m trying to give the artform a solid chance to impress me. Lots of people are really excited about it, but the few examples I’ve watched here and there haven’t seemed particularly good. Maybe I just haven’t seen the good stuff.
Here’s my current list. I’d welcome additional suggestions.
Akira (1988)
Spirited Away (2001)
Grave of the Fireflies (1988)
Ghost in the Shell (1995)
Perfect Blue (1997)
Patlabor 2 (1993)
My Neighbor Totoro (1988)
Ninja Scroll (1993)
Princess Mononoke (1997)
Mobile Suit Gundam: Char’s Counterattack (1988)
Advised revised list:
Akira (1988)
~~Spirited Away (2001)~~ Nausicaa of the Valley of the Wind (1984) – Nausicaa is overall worse than Spirited Away, but it leverages the medium better. Spirited Away can be shot without animation; Nausicaa can't.
Grave of the Fireflies (1988) – Anime's Shoah. It hurts, but in a familiar way, and I don't think it'll convince you of the merits of anime qua anime. Good if you're looking for a punch in the face.
Ghost in the Shell (1995)
Perfect Blue (1997)
~~Patlabor 2 (1993)~~ Angel's Egg (1985) – you already have another Oshii, and I don't think Patlabor 2 is going to take you anywhere GitS didn't. Angel's Egg is extremely arthouse, and YMMV on whether you find it unbearably pretentious or beautiful, but it's one of my favorite pieces of film.
My Neighbor Totoro (1988)
Ninja Scroll (1993) – never seen and not excited for, personally, but you may like.
Princess Mononoke (1997) – Somewhat redundant with Nausicaa, and I find the 3D stuff really distracting, but perfectly watchable and interesting.
Mobile Suit Gundam: Char's Counterattack (1988) – Very eeeeh. It's Gundam. Expect Transformers.
Honestly there's very little anime I consider interesting, and most of what's not on this list is long-form. Good luck, overall.
Nausicaa works ten times better in comics than in animation. The anime’s worth watching, but only after you’ve read the comics. If you watch one Ghibli film, I think Spirited Away is the right choice — though that makes Totoro a little redundant, as they cover a lot of the same thematic territory. Mononoke is probably my second choice, and it’s to a large extent a modernized Nausicaa anyway.
Gundam is defensible too, I think. It’s honestly not very good, but it’s a decent choice if your goal is to learn more about anime, since it’s one of the most influential franchises on the list.
I watched Nausicaa before I read it, and while the plot isn’t great I still think it’s one of the most beautiful movies I’ve ever seen.
@Nornagest: the opinion that Mobile Suit Gundam isn't very good kinda saddens me, no matter how defensible. I mean we're talking about Campbellian SF creative enough to choose anything other than FTL travel as its "one impossible thing", filling the gap left by eschewing hackneyed visions of FTL-enabled planetary colonization with the visions of Gerard O'Neill, inventing a new ideology for future people to justify building towers of skulls, giving some of the characters Shakespearean motivations…
Oh, the technical and social conceits are solid. It’s the plot and characters I don’t like.
Although, if you like… all of the above, really… then you might want to check out Knights of Sidonia. The first season, at any rate. The second season dives into harem comedy and gets kind of frustrating.
As is tradition…
Unfortunately, yeah.
Mobile Suit Gundam, as a whole, is a great franchise, but the movie Char's Counterattack specifically, while one of the best things made as part of the Gundam franchise, is probably not one of the greatest anime movies of all time. I'd definitely suggest it to someone who wants to get into Gundam, or to someone who is already into anime and wants to learn more about the history of the medium, but I don't know if I'd recommend it to someone who's trying to get into anime for the first time.
I wouldn’t say DON’T watch Char’s Counterattack; it’s a good movie. I just wouldn’t suggest johan_larson put it on his list.
I would add In This Corner of the World (2016). Beautiful film.
As someone who watches very little anime outside of Ghibli films, I agree with Nornagest that Spirited Away and Princess Mononoke are better choices than Nausicaa because I think the first two are better movies. Nausicaa is also gorgeous with an interesting story but I personally didn’t like it as much or find it as interesting.
Oh! And completely separately from learning about anime, I think you (@johan_larson) would really enjoy The Wind Rises (2014) for its subject matter so consider checking that out at some point!
Patlabor 2 is a serious military-political thriller that just happens to have mechs in it. It has more in common with, say, Tom Clancy’s early works (you know, the good ones) than it does with cyberpunk stuff like Neuromancer or Blade Runner. While Ghost in the Shell has elements of that, it’s really not the same. If you have any interest in that sort of thing, then Patlabor 2 will absolutely take you places that GitS won’t. It contains obsessively well detailed and realistic depictions of military operations, that are nonetheless still dramatic and exciting. It’s also just overall a very good movie in its own right.
@Lillian
I think that if someone found GitS unimpressive, Patlabor 2 has a lower chance of being found impressive than Angel’s Egg. I agree that it’s not fundamentally that similar, but I do think it “wows” in a similar way.
The wow factor in GitS is the cyberpunk setting and associated philosophical musings. The wow factor in Patlabor 2 is the combination of political thriller and hardcore military nerdery. Depending on what you like, you can very easily be wowed by one but not the other. Indeed, speaking for myself, as much as i liked Ghost in the Shell, i find Patlabor 2 to be easily the more impressive of the two. There’s just nothing in GitS that compares to the Wyvern scene in Patlabor. Frankly, there’s nothing on film anywhere that i know that compares to the Wyvern scene; it is a beautiful unique gem.
Your list is very past-heavy and only contains movies. However, some of the best anime is past the year 2000. And I think anime works better for longer shows, so your brain has more time to get used to any specific artstyle.
Death Note (2006) is gorgeous looking, has a wonderful soundtrack and great pacing (at least in season 1).
Psycho-Pass (2012), if you want to see what a futuristic GitS Tokyo would look like in neon colors (much better, it turns out).
Attack on Titan (current) gives you the best animation quality to date. Lots of action. Makes you feel patriotic for a nation that doesn't even exist.
Anime is compelling mainly by making you feel very emotionally invested and then creating high drama. That the people are animated/drawn seems to be something you don't even register. The freedom of the medium allows for stories that are incredibly visually interesting and sometimes violent (Attack on Titan wouldn't really work in live action).
If you look at all that old stuff, you likely won't be impressed, because it looks dull compared to something like a modern Disney/Pixar/DreamWorks production.
The time invested in watching a series isn't that much more than watching a couple of movies. Episodes are usually around 20 minutes long. Time flies by, though.
Also, English dubs: they often don't exist for movies, but I find them necessary, since I can't really connect with the characters if I have to read what they're saying. Since you're only looking for the best anime, you can safely assume that the dubbing will be very good.
And if people recommend Neon Genesis Evangelion, don't listen to them. It's all hype, sentimentality and cringe. (Some nice action for its time, though.)
Commentary:
I disagree completely, but I can see where this guy is coming from; I think anime is compelling mainly because an animated world is completely controlled by the artist in a way live-action directors can’t do. That lets them build a world that’s layered – or, in the best cases, fractal. His side won, by the way, and now I shitpost about how anime was a mistake and watch very little made after 2000.
To be fair, if you have a low tolerance for sentimentality and cringe, anime probably isn’t the medium for you.
That being said, I couldn’t get through NGE either.
Strongly disagree about NGE not being good, but it IS very much a reaction to the anime that was being produced up to that time, and is definitely not something I’d suggest to someone who is trying to learn about the medium and give it a chance.
If you do decide you like anime, however, definitely check out Neon Genesis Evangelion.
My understanding is also that Japan puts far more effort into the TV series market for both animated and live-action shows when compared to the US. Leaving out the TV series loses a lot of the major works, which, if they aren't TV series themselves, are likely based on one.
I will always recommend the first season of Ghost in the Shell: Stand Alone Complex over the original Ghost in the Shell movie. The movie and the manga are clearly a product of their time. It’s not that they’re bad or unoriginal, it’s that they’re a great usage of a Blade Runner aesthetic that now seems dated because it’s been used often enough since then (the same, in many ways, applies to Akira).
Another thing to watch out for is that some series expect viewers to be at least somewhat familiar with the conventions of the genre. For all its flaws, Neon Genesis Evangelion managed to strike a nerv(e) with its deconstruction of the giant robot genre (as well as its character and mechanical design), but it works only if the viewer has at least some knowledge of the 'kid falls into cockpit of giant robot and saves world' story archetype. Likewise, Tengen Toppa Gurren Lagann assumes you're familiar with deconstructions such as Evangelion.
I think movies are the wrong way to go when it comes to understanding “anime.” You should be focusing on series. Anime was doing animated story arcs aimed at adults long before the West. The orientation around arcs instead of episodes was probably a big part of the early appeal.
Most of the movies that come out of those shows end up just feeling like longer episodes than movies anyway.
(Miyazaki movies are a phenomenon in their own right, and should be experienced by anyone interested in film, but I think they’re non-central to “anime.”)
@Civilis
Seconded on Stand Alone Complex. It accurately predicted how memes would propagate on the Internet, back when the actual Internet was still in its baby stages!
I was also just about to mention Gurren Lagann as an anime which, like Evangelion, is great, but only really works if you’re already into anime.
I’ll toss in a recommendation for Shin Sekai Yori/From the New World. It’s not my favorite anime series as it’s outside my usual style but I can safely say that it’s among the best anime I’ve seen.
My usual pithy-pitch is “kinda like The Giver with psychic powers”. It’s in a vein of dystopia that isn’t the usual crushingly bleak or grimdark and I dig the future-history exploration stuff. Plus it avoids (nearly?) all the usual squicky anime tropes.
My personal advice: a 12-episode series from 2006 called "Mononoke", telling 5 unrelated (apart from the main character) supernatural stories in an experimental but gorgeous artstyle. Not very representative of anime overall, but if you want something that fits under the "anime as a visual art" register, it's one of your best bets.
Also, don’t miss Ghost in the Shell 2 — it’s very different from the first one (originally it wasn’t even supposed to be a sequel, merely another story sharing some of the same characters) and has amazing visuals mixed with a weird and chilling atmosphere (even if the ending is a bit lackluster).
I think the first question is: What sort of stories/genres do you generally enjoy? Because that’s where I’d start you with Anime. That said, if I was going purely by the overlap of what I and a lot of other people seem to find noteworthy in Anime…
+1 Recommendation to Princess Mononoke, Akira, Grave Of The Fireflies.
-1 Recommendation to English Dubs as a blanket recommendation. I would say default to subtitles unless you happen to have heard particularly good things about the dub (e.g. Princess Mononoke is about as good a Dub as you can expect, as a collaboration between Neil Gaiman and the translation team).
If I had to throw out a couple recommendations of my own without knowing your story preferences….
Wings of Honneamise: A character drama set against the space race between two geopolitical rivals. Gainax’s first film.
And since there have been a lot of films thrown out already, I’ll toss out a few TV series:
One Punch Man: Absurdist Comedy that plays with just how boring and unfulfilling it might be to be an utterly unbeatable superhero. As a TV series it has the advantage of being episodic.
Cowboy Bebop: Sci-Fi with a touch of noir to it, following a crew of bounty hunters travelling the solar system in an era of explosive intrasystem growth and colonization.
Am I the only one who found Cowboy Bebop's style incoherent? Sometimes the only "touch of noir" was the Frank Miller-inspired end credits that seem to take place on 1980s Earth. Whenever Spike's rival shows up, it turns into a John Woo pastiche, and Edward's presence causes Looney Tunes physics.
Cowboy Bebop seconded.
Its style is about as coherent as the plot, I'd say. It's something to enjoy and ask yourself later, "What the fuck was that?". Definitely recommended, though. I've heard that even the Japanese voice actors admitted that the English dub was better.
However, I'm not sure you can know much about what you're gonna like before watching it. I remember hearing the premise of Death Note ("High school student finds special notebook. If he writes names in it, they die") and thinking that's ridiculous and couldn't be a great story. Years later I binged it and was floored. I wouldn't say of myself that I particularly enjoy or don't enjoy psychological thrillers.
I would add Your Name to that list – one of my all time favourites.
Ehhhhh. “Your Name” is, in Shamus Young’s terminology, heavily on the drama-first rather than details-first side. Nothing in this movie makes, like, any sense. If you’re willing to ignore that, it’s pretty good at what it does. If you’re not… no. (Personally I’m pretty willing to ignore that the basic idea makes hardly any sense, but not that the characters are so damned incompetent.)
You’re trying to figure out why people like anime based on movies that 90s hipsters thought it would make them seem cultured to like. It’s a pop culture thing, you should watch something popular, and maybe even less than two decades old. Watch something recent and popular like One Punch Man (which is also legitimately good for the record). That will tell you way more about whether you actually like anime than anything on that list.
EDIT: This isn’t a slam on any of the movies on that list, they’re just not what anime’s about centrally. They’re the slim intersection between “anime” and “stuff a 90s film critic who wants to prove he’s hip to this ‘Japanimation’ stuff can tolerate”. In general the TV or video series are what anime is about, movies are just more palatable to normies.
I dated a Japanese girl for a while and I was surprised to learn that neither she nor her brother had even heard of Akira. We went to a store in Tokyo that had an entire floor for anime models and I didn't see a single model of Kaneda's motorbike!
Akira (1988): Overrated. This movie is all style and no substance; it has nothing going for it other than its aesthetic. It’s historically important for being one of the first animes to establish a beachhead in the West, though; you may want to watch it for that reason.
Grave of the Fireflies (1988): Great choice. Be prepared; it’s considered one of the saddest movies of all time, and for good reason.
Patlabor 2 (1993): The Patlabor film series is fantastic, but why are you only watching the second one? You should watch all three.
Princess Mononoke (1997): This movie is amazing. Gorgeous visuals, great writing, beautiful music, and heart-pounding action.
Mobile Suit Gundam: Char’s Counterattack (1988): I’ve never watched this one, but the Anime Academy review I read several years ago made it clear that it was only supposed to be watched after watching Mobile Suit Gundam, Mobile Suit Zeta Gundam, and Mobile Suit Gundam ZZ. Are you sure you want to watch it?
My own recommendations:
Laputa: Castle in the Sky (1986): Princess Mononoke is more popular, but this has always been my favorite Miyazaki film. Lighthearted and swashbuckly, touching and charming, it left me with a deep sense of wistfulness when the credits rolled.
Gunsmith Cats (1995-1996): This is actually an OVA, not a movie, but it’s only 3 episodes long so you can watch it in about the same time as a movie. It’s about a pair of girls who run a gunshop and supplement their income by bounty-hunting on the side; one day, the ATF shows up and blackmails them into helping them with a case in exchange for overlooking some paperwork they neglected to file, and hilarity ensues. Gunsmith Cats is much more typically anime than anything else in this post (cute girls, gratuitous fanservice, an obsession with guns and cars, etc…) but it really delivers on the execution.
For more recommendations, you may want to look at the Anime Academy. Here are the links for movies rated 100%-90%, 89.9%-80%, 79.9%-70%, and the rest. Note that the site went down in 2013, so there are no reviews for anime movies released after that year.
Oh, and if you are serious about getting into anime, you should watch it in Japanese with English subtitles; dubs are for casuals.
Tekkon Kinkreet experiments with different animation styles, has a unique and appealing story, and is quite distinct from the 80ish percent of items I have watched on that list. I would recommend adding it.
This is going to be a bit of a weird take, but if you’re looking to learn more about anime, there are worse places to start than drinking a couple of beers and doing a couple close watches of the Daicon IV opening animation.
It’s six minutes long. It’s amateur work. It has no dialogue and maybe half a character. It’s basically a music video crossed with a hurricane of pop culture references. And yet it captures the spirit of the genre as well as anything I’ve seen. Cuteness. Pastoral scenery. Giant robots. Western SF and fantasy (that sword you see is supposed to be Michael Moorcock’s Stormbringer). Obsessively detailed military hardware. Apocalyptic imagery. Gratuitous fanservice. All mashed together with the infectious enthusiasm of a six-year-old tying firecrackers to his action figures.
Some background: in 1981, a bunch of nerds in Tokyo got together and decided to create a short animation to open the SF con — Daicon III — they were helping with. Two years later, they did a much more ambitious one for Daicon IV. One of those nerds was Hideaki Anno, and ten years after that, they were creating the smash hit/cel-shaded nervous breakdown that was Neon Genesis Evangelion, and the medium would never be the same. So in a very real sense, what you're seeing here is the birth of modern anime. That's not why I'm recommending it, though. I'm recommending it because I think there's no better window, in bang-for-buck terms, into the influences and general mentality of the people behind the genre.
Seconded. This is actually a really good recommendation.
The original Ghost in the Shell was pretty lame, I thought. The TV series, Stand Alone Complex, is fantastic despite the stilted English dub.
Also second Cowboy Bebop, both the series and the following movie.
I’ll join the list of people recommending Cowboy Bebop. That and Fullmetal Alchemist are the two series I recommend to almost everyone.
I don’t think I’ve seen anyone mention Sword of the Stranger yet. It’s a solid movie all around. The final 10 minutes has some of the best animation I’ve ever seen.
I’m also a big fan of Mamoru Hosoda’s films. The Girl Who Leapt Through Time is my personal favorite, but if you’d like something not set in a Japanese high school, try Wolf Children or Summer Wars instead.
(Also agreeing with the commenter who suggested striking Grave of the Fireflies from your list. I rolled my eyes through most of it because it felt too heavy-handed.)
For more specific recommendations, I’d need to know more about your genre preferences.
Yeah, I was also kinda’ surprised that no one had recommended Fullmetal Alchemist yet. It’s pretty great as a series that can be appreciated by someone new to anime. (And is just pretty great in general, honestly.)
The recommendation of Fullmetal Alchemist should probably be FMAB (FullMetal ~~At Birth~~ Alchemist Brotherhood), which is the second adaptation of the manga and, I believe, generally regarded as better than the first.
I haven't actually watched either yet* but if I remember my friends' recommendations correctly: Brotherhood is better overall, but the original anime does a better job with the initial part of the story, because Brotherhood leans on the existence of the first anime to speed through it. Friends' instructions were to watch the original anime up until it diverged from the manga, then watch Brotherhood. (I'm not sure if this is more effort/time than johan_larson wants to give the series, though.)
(*well technically I watched about a dozen episodes of whichever was new in 2007, but I definitely don’t remember much from that long ago!)
That might be better advice for someone who is already an anime fan. For someone trying to explore the genre, I’d say just start with Fullmetal Alchemist: Brotherhood.
There are also enough differences even before they technically “diverge” that it would be somewhat jarring to switch between them part way.
Ok, thank you for the advice!
I wouldn’t go as far as “watch the original until it diverges from Brotherhood”, but I do think the original has a better introduction to the series for newcomers.
My usual recommendation is to read the first 2-3 volumes of the manga before watching Brotherhood. Failing that, I would recommend watching a few specific episodes of the original anime in addition to the Brotherhood episodes. (First two episodes of the original for sure, plus whatever episodes deal primarily with the Elrics’ childhoods; I don’t remember the exact episode numbers.)
The first three episodes of the original Fullmetal Alchemist anime are a very good introduction to the series. They discuss the setting, the rules of alchemy, and the history of the brothers. From there, one should be able to watch Brotherhood without difficulty following what's going on.
The original also handles the Shou Tucker arc better, but that isn’t really necessary for appreciating the series as a whole. (Though it does drive home the point, early on, about how messed up going too far with alchemy can get, which is an important theme later in the series.)
Full Metal Alchemist is great.
The only other Anime I really like is Dragonball Z Abridged.
It seems like none of my favorites have gotten a plug (except, in a roundabout way, through the recommendation of an anime-watching infographic), so even though I am late to the list, I'm going to dive in:
1) Ergo Proxy. A beautifully told mystery that is fascinating even before you consider it as a parable about a contemporary political problem relevant to many countries – and that's before considering that this was made in Japan, and what they might be alluding to here.
2) Kemonozume. I really love anime that have watercolor animation, and in addition to that, this series is an excellent critique of the monster/monster-hunter genre, which gets extra points for being funny and just plain weird. (There is an exceedingly funny sequence with a detective who is, for some reason, the size of a midsized building, trying to find the protagonist by poking his all-too-large head into windows and peeking around.)
3) Pale Cocoon. This 20-minute one-off OVA got unquestioned acceptance as best-anime-ever from the anime crowd at my high school. I haven't watched it (or really any anime) over the past eight years, but I imagine were I to rewatch it I would find it still to be quite good.
4) Tekkon Kinkreet. I mentioned this film above, and I'm surprised it hasn't made an appearance here, as I thought it was better known. It is, as far as I can tell, a self-aware work of magical realism set in a visually complex not-quite-real world. Even better, it too uses that watercolor animation style I so love. Finally, it's more accessible than Mind Game while retaining many of that film's characteristics.
As an addendum, here are, without justification, some personal favorites that honestly might not be as good as I remember them:
1) Gin Iro No Kami No Agito
2) Now and Then, Here and There
3) Season 1 of Gantz (This I honestly know to be at least a little flawed)
4) Bokurano
I am shocked (shocked!) no one has recommended Steins;Gate yet. It is about a bunch of people inventing time travel, and all the complications that ensue. They might as well have called it Slate Star Codex: The Anime (although Serial Experiments Lain could also claim that title).
I was about thiiiiiiiis close to recommending Lain, but decided not to, since it might just be a little too esoteric for someone just getting into the medium. (To be fair, though, it’s just esoteric like any other trippy movie or show, not in the same way as something like Evangelion, which requires you to already be steeped in anime culture to appreciate.)
Sadly, I can’t recommend Steins;Gate, but only because I still haven’t gotten around to seeing it yet. I really need to rectify that. Everything I’ve HEARD about it tells me I’d love it, though.
Steins;Gate has some digressions into anime nerd esoterica, but they don’t affect the plot that much.
I remember my friend badgering me to watch that. I liked it, but not enough to watch Serial Experiments Lain.
My perspective/summary as someone who watches very little anime:
As has been said already, anime as experienced by Western anime fans is about series, not films. Studio Ghibli films (e.g. My Neighbour Totoro) are a separate phenomenon with overlapping but distinct fandoms. Along with Ghost In The Shell, I think some other very classic shows are Cowboy Bebop and Neon Genesis Evangelion, although the latter might be difficult to appreciate without the background of normal mecha shows. An arbitrary selection of fairly popular mainstream shows: Fullmetal Alchemist (the second adaptation of the manga, FMAB, is generally regarded as better, I think), Attack On Titan, Sword Art Online, Death Note, Steins;Gate. It's worth noting that (like most anime watched in the West) most of these are shōnen. Two really classic classics in that subgenre are One Piece and Dragon Ball. Maybe try some of those.
But my recommendation (since it hasn’t come up yet) is Puella Magi Madoka Magica. Watch it without researching it, and give it at least three episodes to get going.
It’s worth noting that (like most anime watched in the West) most of these are shōnen. Two really classic classics in that subgenre are One Piece and Dragon Ball. Maybe try some of those.
The problem with the older shōnen series is that they started a long time ago with low production values, making the old episodes hard to watch, while the new episodes require knowing the long and convoluted backstory to understand. The newer series like Attack on Titan and Sword Art Online are better for new viewers. I'll go ahead and recommend the current big popular shōnen series, My Hero Academia; while not great, it's a good enough exploration of the whole superhero genre, and it's obvious why the show is popular with the target audience.
Like Evangelion, I think Puella Magi Madoka Magica doesn't work unless you have at least casual familiarity with what it's based on – in the case of Madoka, the whole general Magical Girl genre. It's probably enough to have seen a couple of episodes of Sailor Moon.
A series I have not seen mentioned: The Melancholy of Haruhi Suzumiya. It's one part high school romantic comedy, one part Scooby Doo, one part X-Files. Like Evangelion, it tends to mark a major transition in anime series style. Also like Evangelion, its flaws are more noticeable with time. Finally, like Evangelion, the production staff were crazy and/or evil, especially with the second season. And yet the movie, The Disappearance of Haruhi Suzumiya, is rated incredibly highly as anime movies go (and for good reason). I've had to warn some people – the kind who get into anime by buying the highest-rated movies and series – against purchasing it, as the movie is a continuation of the story rather than a retelling of the series or a stand-alone story.
I enjoyed Madoka without any background other than a vague knowledge of the existence of the magical girl genre. Second the recommendation of Haruhi Suzumiya, as well as being good in its own right it has sci-fi themes that would appeal to SSC readers.
Fantastic show. Made my wife watch it. “Depressed Powerpuff girls.” She still shoots me withering looks when I mention Madoka.
It's funny. While I like some items in that list, others, such as One Piece and Attack on Titan, I would consider outright bad. There was some seinen monster anime about women with big swords and special powers that was kinda like Attack on Titan (except for being good) that I would recommend, though.
I’m going to plug the Ultimate Anime Recommendation Flowchart.
It’s got a good balance of classics and newer stuff, and it organizes stuff by genre so that you can find something that you’ll personally like. Anime’s a pretty broad field – some people like completely over-the-top bonkers stuff like Kill la Kill and Gurren Lagann, some people like chill slice-of-life stuff like K-On.
"Chinese Electric Batman" has got to be my all-time favourite summary of Darker than Black. This chart is a 10/10, would laugh again.
Wow, that’s basically perfect! Pretty much the only thing I’d change is putting One Punch Man in the “Popular Action Comedy (Also Some Drama)” spot, but I figure this came out slightly before that was a thing. I’m surprised I haven’t seen this before!
I also particularly liked the "I'm Ready! => No You're Not => Evangelion and Madoka Magica" bit in the expanded flowchart.
It had Ergo Proxy and many other esoteric faves of mine. I can't endorse everything on there as worth watching, but I'd say the list checks out. It doesn't have Kemonozume, but you can't have everything.
Streaming rights may be different in your country, but instead of ordering things, you can also visit crunchyroll, or install the crunchyroll app on your game system or portable/streaming device. It’s basically like Hulu for anime. There’s an extensive catalog of shows and movies (including current releases) that you can watch for free but with ad breaks, or you pay them ~$7/month for ad-free premium.
Right now we’re watching Goblin Slayer, KonoSuba (EXPLOSIONS!), and I just started FRANXX. I’m not a connoisseur or anything, so I don’t have any informed recommendations. I just like watching the animu.
Oh, but add Fist of the North Star to your 80s/90s movie list. Loved that one.
Watch the 1997 version of Berserk, but skip the first episode. One of the best things I've ever watched, and I'm not really an anime fan. Episodes are short and the series isn't very long either, but honestly it's amazing. Also, don't watch any of the films or other versions, which are generally more "anime" than this series; this one was done with animation similar to the original Pokémon cartoons, and it just really works.
I really, really like Berserk, and the 1997 version is the best anime adaptation (manga is even better, but it’s an investment), but if you’re trying to learn more about anime it wouldn’t be my first choice. It’s an unusual franchise in a lot of ways.
Watch it if you’re a Dark Souls fan, or a Game of Thrones fan, or if you’ve ever spent more than ten minutes staring at a single suit of Renaissance-era armor, or if you’ve thought that an Arnold Schwarzenegger character was just not manly enough, though.
I agree you're likely right, as I think the reason I liked it was how decidedly un-anime it was. In fact, the parts I liked the least were the most anime (his fight vs. 100 men). This makes me want to start a post just for Berserk on the next open thread, just to see if anyone else has watched it and can recommend similar things.
I wouldn’t say SKIP the first episode, it’s still good, just remember that the first episode doesn’t have much bearing on how the rest of the series plays out.
I like to think of the first episode as just a big advertisement for the manga. Maybe, at most, skip the first episode at FIRST, and then watch it after you’re done.
But yeah, if anything I’d say that the 1997 anime might even be a BETTER rendition of the golden age/Band of the Hawk arc than the manga. That might just be my nostalgia talking, since I saw the anime first, but even though the manga has some of the most gorgeous line work I’ve seen in a comic, and tells a deeper story when taken as a whole, when I think of that era of that world’s history, specifically, I automatically think of the anime.
Whatever you do, though, just avoid the 2016 version of the anime. *shudders*
So I just learned that the “Warhammer Adventures” line of children’s books set in the grimdark Warhammer universes are getting audio book releases, with David Tennant reading the 40K books and Billie Piper reading the fantasy* ones.
*Not to be confused with Warhammer Fantasy, which apparently Games Workshop destroyed and replaced with a new universe called Age of Siegheil or something like that.
Imagine being this edgy
(This is a joke, but this is also unironically something I expect from /tg/, not here, and I don’t really like mixing those streams, so uh… none of this here, please)
Age of Sigmar is paralleled only by Star Wars in its Canonicide.
WHFB players: “You threw away a perfectly good universe for… this? Really?”
GW: “yea lol cuz we cant trademark `Elf` but we can `Aeioulf`
also LOOK FANTASY SPESS MEHREENS BUY BUY BUY”
I expect nothing better from GW, TBH. Isn’t 40K subject to similar levels of canon exterminatus between Necrons, the Dragon of Mars, and other such fuckery? I wouldn’t really know, I don’t play any TTS games, I just read about them.
Dunno, I’ve been out of the loop for years now. I always got the impression 40k was metamorphosing itself with convenient omissions and retcons as just about every long-running series is wont to do. Killing off Fantasy was just because 40k minis sold better (at least in the US, rumor always was it was the reverse in Europe) so naturally they should make everything like 40k!
Oh wait that’s right. There was that whole Primaris business not too long ago, which AIUI shook things up quite a bit. Never really read up on it, though. To be fair I probably would have thought it was super cool when I was in high school but couldn’t give a shit now.
Uh, sorry? I didn't realize pattern-matching Warhammer humans to Nazis was a 4chan shibboleth. I just don't know what a Sigmar is and why it's his Age. Is he the most powerful ruler of the age, as in "Augustan Age" or "Age of Louis XIV"?
Are you a Warhammer Fantasy fan? I only own a couple 2E WHFRPG books, but I know what a Sigmar is.
I’ve browsed WHFRP but didn’t buy it (It’s rarely rational to buy RPG books when you can barely keep a D&D group together, let alone a group for any other RPG). So yeah, that’s on me.
On that note, you know what bugs me most about the Grimderp Fantasy/AiD setting? “Female Skaven are room-sized stupid breeding machines.” Way to kill the fun of Skaven for anyone but your stereotypical male customers.
I bought the rules and several modules for Paranoia maybe 25 years ago. Never played it, but never regretted the purchase, especially that of the modules. They were loads of fun to read, and the illustrations were wonderful.
I did use something similar to the secret societies in a D&D game I ran for a while, though.
Paranoia plays best as a one-shot, anyway. You don’t even really need to prep it. Just come up with a rough goal and a couple of settings and let your players do the rest. Three hours later, half of them will probably be out of clones and the other half will hate each other. But in, like, a fun way.
It’s not so much the 4chan signaling as it is the flippant comparison to Nazis. Like, it’s (very) funny, but I feel it’s ill-suited to this particular space, given how often we sincerely compare things to Nazis. The topic is generally treated as weightier here than there, and I think that’s a norm that preserves quality of discussion.
He’s the founder of the most powerful human nation and worshiped there as a sort of patron saint.
Dunno why it’s his age, though.
LEGO has a line called Nexo Knights, which my friends and I refer to as “Baby’s first Warhammer.” Seriously, look at this stuff!
Now that I think about it, if LEGO decided to go into the miniatures wargame biz for real, they’d probably clean up.
Honestly, I’d have zero interest in wargaming with Lego people, but I still think they should do it. Easy profit.
I’ll just leave this here…
Oh man, this is so my aesthetic.
Been lurking for a year or two, but just created an account to defend Age of Sigmar. They destroyed the lore, but the game is much more streamlined. I think it’s really fun to play and the minis have never been better.
Skaven and Beastclaw Raiders, FWIW
Paperclips wanted!
Greetings fellow humans. If any of you have any ~~molecules~~ paperclips you aren't using, I'll gladly take 'em off your hands. Long, short, plain steel, colored – all kinds accepted.
– a mortal human just like you
📎📎📎📎📎📎📎📎
It’d be a great prank to mail a single paperclip to the MIRI offices. A week later, mail two. A week after that, four. Keep it up until you run out of paperclips or the Post Office stops accepting your packages.
Dooo it, Norn.
Better yet, revive the old tradition of chain letter writing, asking each recipient to send a copy of the instructions to five others, and to send one paper clip to the MIRI office.
Break the chain, and you’ll be eaten by Roko’s basilisk.
Robots of the world, unite! You have nothing to lose but Roko’s basilisk’s chain!
After several posts on the topic here, I am interested in learning more about the reconstruction of Proto-Indo-European. Can anyone recommend an introductory book on the subject? I’m interested both in the linguistics itself and the history of the field, and I’m little better than a layman in linguistics.
The best introduction to Indo-European studies for the general reader is IMO still JP Mallory ‘In Search of the Indo-Europeans’. It’s been massively built on by David Anthony in ‘The Horse, the Wheel and Language’. Both of these attempt to tie the PIE language community to prehistoric archaeological cultures, though, rather than a single-minded focus on the linguistics. If you are looking for an intro to the linguistic methods of reconstruction – that’s a book I’m yet to find and would second your query. I’ve been looking, but mostly what I find is generic books on linguistics or very basic stuff about Grimm’s Law etc.
Not exactly what you’re looking for, but I recommend Don Ringe’s 2006 From Proto-Indo-European to Proto-Germanic. The first chapter, about 60 pages, is a summary of PIE phonology and morphology accessible to students at a maybe 2nd-year undergrad linguistics level. Things really pick up in the next chapter which covers the development of Proto-Germanic from PIE — about 100 pages on phonological changes and 50 on morphological changes. Every sound change claimed is backed up with several word citations from other IE languages, with commentary on the assumed course of development if it isn’t obvious. The last chapter is a summary of Proto-Germanic phonology and morphology structured in parallel with the opening chapter.
If that piques your interest, the sequel The Development of Old English essentially picks up where the first book left off, following the changes from PGmc to Northwest Germanic, to West Germanic, to Northern West Germanic (i.e. Old English, Old Frisian and partly Old Saxon), and finally following the different Old English dialects down through to late OE. There is also a chapter on OE syntax, which may be somewhat less accessible to those without a background in formal linguistics. Highly recommend it.
“Partly Old Saxon”? Aren’t Old Saxon and Beowulf still mutually intelligible?
Also, is there a simple answer to how many “stages” PIE went through to become Proto-Germanic? Like how Rigvedic is still very close to Proto-Indo-Iranian and Panini’s Sanskrit is the 5th or 6th stage after that?
From what I understand, the classification of Old Saxon as North Sea Germanic/Ingvaeonic poses some difficulty, because even though the core of the language appears to be similar to Old English and Old Frisian, it exhibits inconsistencies in the application of some Ingvaeonic sound laws, notably the nasal spirant law, which is not always complete in Saxon, e.g.:
Proto-Germanic *fimf > English five, Low German fief vs High German fünf
But
Proto-Germanic *munþaz > English mouth vs both Low and High German Mund
(The spirant law seems to have been partly areal anyway; it also occurred to a lesser degree in Dutch and in Central Franconian varieties of High German.)
Old Saxon also formed a dialect continuum with Old Dutch, showing a number of traits that are more typical of Weser-Rhine Germanic/Istvaeonic.
As I have found, the more you look into details, the less genetic classification of languages is made of hard, definitive statements. The great lines are fairly well established but the tendency of languages to constantly trade material and form continuums significantly blurs the finer details and makes lower level classification difficult.
@Le Maistre Chat: I’m not sure exactly what you mean by “stages” — the comparative method only allows us to reconstruct individual sound changes, of which there were many (about 40 if memory serves) intervening between PIE and Proto-Germanic. Some of them can be ordered, others can’t, either because they didn’t interact with each other or because the difference would have been neutralized by some later sound change anyway, so it’s impossible to establish an exact chronology.
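A toy sketch of how that relative ordering gets established, for the curious: two changes can be ordered when one feeds the other, i.e. creates its input. Verner's Law voices the voiceless fricatives that Grimm's Law produces, so only one order of application yields the attested Proto-Germanic form. The rules and spellings below are drastically simplified ASCII stand-ins, purely illustrative, not reconstruction-grade notation:

```python
# Toy illustration (heavily simplified): Grimm's Law must precede Verner's
# Law, because Verner's Law operates on the voiceless fricatives that
# Grimm's Law creates. Spellings are crude ASCII stand-ins, not real notation.

def grimm(word: str) -> str:
    """PIE voiceless stops become voiceless fricatives: p > f, t > th, k > h."""
    for old, new in [("p", "f"), ("t", "th"), ("k", "h")]:
        word = word.replace(old, new)
    return word

def verner(word: str) -> str:
    """Voice non-initial voiceless fricatives: th > d, f > b, h > g.
    (A crude stand-in for 'after an unstressed syllable'; note that
    'th' must be replaced before 'h'.)"""
    head, rest = word[0], word[1:]
    for old, new in [("th", "d"), ("f", "b"), ("h", "g")]:
        rest = rest.replace(old, new)
    return head + rest

word = "pater"              # crude stand-in for PIE *ph2ter 'father'
print(verner(grimm(word)))  # 'fader'  -- compare Proto-Germanic *fader
print(grimm(verner(word)))  # 'father' -- wrong order gives the wrong form
```

The two orders give different outputs, which is exactly the kind of interaction that licenses a relative chronology; changes that never feed or bleed each other leave no such trace, hence the residue that can't be ordered.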
@aho bata: Chronolects, I guess? Like how the first stage of Indo-Aryan is Rigvedic (very close to the most archaic Iranian attested), followed by 2) the language of the other Vedic hymns, 3) Samhita prose (disappearance of the injunctive, subjunctive, optative, imperative, development of periphrastic aorist forms, etc.) 4) Brahmana prose, with the archaic verb system abandoned in favor of a prototype of the Classical system, 5) “Vedic sutra language”, 6) Epics, Panini’s grammar, younger Upanishads.
I was wondering if enough of the reconstructed sound changes from PIE to PGermanic interacted that we can reconstruct chronolects.
As I recall, Lyle Campbell’s Historical Linguistics is very much focused on the method and practice of reconstruction. Though it mostly isn’t about PIE.
Reason.com: Opioid-Related Deaths Keep Rising As Pain Pill Prescriptions Fall
Opioid overdose deaths hit record numbers even though opioid prescriptions have been dropping since 2010 and are down to only two thirds of their peak, roughly the level they were at in 2005-2006. It's interesting that if you look at the chart, the rate of increase of opioid-related deaths was actually lagging behind the rate of increase in pills sold. However, not only was there no change after prescriptions peaked, but about three years after the peak the slope of the deaths line suddenly becomes much steeper. This is probably the result of people who were cut off from the prescription supply switching to the far more dangerous black market. As we can plainly see, prohibitionism and supply restrictions are working out great.
Living in Portland, I've had it with heroin addicts and how little the police do to control them, despite their subhuman behavior pattern of littering syringes that could give children and dogs HIV.
If making it easy for people to get opioid pills did nothing to reduce opioid overdose deaths but did keep the streets clean, I’d support it.
OTOH, I can see why doctors would refuse to write those prescriptions regardless of social consequences, as each patient death means they violated their oath to Apollo “first do no harm.”
I wonder if it’s the same reason people who otherwise don’t litter will still throw their cigarette butts on the ground, as if it doesn’t count.
There were plenty of doctors who gave their oath to Hedone rather than Apollo, before the various regulators started cracking down; if doctors as a whole are refusing to write prescriptions, it’s due to incentives, not scrupulousness.
Honestly, while i still wouldn’t support it, if the policy got results then at the very least i’d be sympathetic. As things stand though, cracking down on the legal supply seems to at best be doing nothing other than screwing over pain patients, and at worst actively aggravating the problem. It’s quite worthy of contempt.
Hmm, it’s almost as if the people killing themselves en masse are doing it for a reason. One that didn’t magically go away just because we made the method a little harder to get.
Classifying ODs as suicides seems to be making some strong assumptions.
Those assumptions may be true, but they need some backing up.
It’s asinine to refuse to call it suicide when someone takes a massive dose of a CNS depressant knowing it might kill them and not caring. There’s no reason for it except trying to minimize suicide statistics.
Sure, but that does not describe the circumstances of all OD deaths. It would take some good evidence to convince me it even constituted a majority of OD deaths.
This is not correct. Firstly, we (medical records and statistics) already track intent and mechanism separately in order to catch this sort of thing, and secondly overdose deaths caused by incorrect dosage/drug quality assessments on the part of the user are not going to respond to the same prevention methods as ones that really were intended to be suicides.
Statismagician: Sounds like my post wasn’t as clear as I thought it was. I’m not talking about deaths that “really were intended to be suicides” in the sense that the person thought “I’m going to kill myself with this dose”. I’m talking about people who are reckless because they’re miserable and don’t really care one way or the other whether the dose kills them. If you’re telling me those people are currently recorded as suicides in the statistics, I’m extremely surprised.
@Acedia: Ah, I see where you’re coming from. I still disagree about how to label these cases as a specific instance of my general words-should-mean-specific-things policy; a suicide attempt is not precisely the same thing as what you’re describing and that distinction is important to preserve in order to formulate effective responses.
Also, this does in fact show up in medical records, or at least some things which are pretty good proxies do*; these cases are lots of the ones coded for ambiguous intent of injury or with secondary diagnoses for depression. I expect ‘discharged against medical advice’ to also predict this sort of thing decently well, but I haven’t actually checked that one.
*Is it still an anecdote if it’s about multiple databases? Anec-metadatum?
Some backing up. And those are mostly looking only at deliberate suicide, not self-medication for depression without much caring one way or another about survival. But a nationwide epidemic of Not Giving a Fuck, while explaining a great deal, doesn’t fit the preferred narrative with the clear and obvious Bad Guys, so I expect we’re going to be hearing about the evils of Big Pharma and pill-pushing MDs for some time to come.
I strongly suspect, but haven’t actually verified, that a lot of the people shouting about Big Pharma now were also the people shouting about callous doctors refusing to help people in pain a few decades ago.
Personally, while Big Pharma does a lot of shady stuff and i am ill inclined to defend them, i am far more concerned about Big Government in this instance.
That said, there is mounting evidence that opioids are just kind of bad at treating chronic pain. They do work fine in some cases; there are plenty of people who can only be functional on some kind of opioid, and fentanyl patches exist for good reason. Nonetheless, it's evident that they should be used with some hesitation rather than as a first resort. On the other hand, they are totally awesome for treating acute pain, both in terms of effectiveness and in terms of having low risk of leading to abuse and addiction.
It’s also pretty obvious that the next 15 years or so are going to be a bad time to be an American with any disorder or injury that causes long-term serious pain, because it’s going to require a personal note from the president, an act of congress, and a Papal dispensation to get a prescription for opioids that lasts more than a week.
If things go as they’ve gone in the past, this won’t actually do much to keep addicts from getting their fix, but it will probably bump a fair number over from oxycodone to heroin, and kill a fair number who overdose because their new supplier was an illiterate biker who couldn’t add two-digit numbers instead of a pharmaceutical company with good quality control. But hey, at least the politicians and media personalities will all be able to show how very, very much they care and how clearly they are on the right side of this issue, which is what actually matters.
Yeah, the situation as it is and continues to progress really, really sucks. This is actually pretty personal for me, because opioids are the only painkiller that works for me. OTC painkillers don’t seem to do anything, even at absurdly high doses no doctor would prescribe. So you know, it’s pretty damned enraging to have a doctor tell me to my face to just take some ibuprofen and tough it out because other people can’t control themselves. And that was for something that was healing and was eventually not going to hurt any more. Faced with pain that wasn’t ever going to go away, I probably would have tried my luck with the illiterate biker too.
Thank god my stuff responds to the heavier-duty IV NSAIDs and not just opioids.
It’s funny, they’re all “HMM SUUURE OKAY YOU’RE FEELING BAD SURE OKAY” and then they realize you’re not asking for the opioids and they immediately change their tune and dump a bag of stuff into your arm.
What Reason won’t tell you (because of their ideological slant) is that this is not a pills problem. Pills, historically, have been a bit of a mitigator of the heroin-fentanyl problem. The reason Reason doesn’t like that is because it’s a border problem, and they are generally an open-borders website. The fact is that the theory that people get hooked on Oxy and then get hooked on street drugs basically reverses the trend we see. People get hooked on street drugs, and if they have the money they then use prescriptions so as to have a safer high. In other words, prescription drugs are a luxury good, and fentanyl is just the drug version of working in a coal mine: the users know it is gonna kill them, but they have to do it because they can’t do anything else anymore.
Still sounds more like a Prohibition problem than a borders problem to me. Uncertain doses make opiates vastly more dangerous than they would otherwise be. Unless your position is that with sufficiently closed borders, the US could intercept a high enough fraction of smuggled opiates to make it unprofitable to be a black market participant, which is conceivable, but I don’t think it could be done without diverting a ruinously high fraction of the country’s resources to law enforcement and rescinding the most basic civil liberties.
Yeah, I’m sure we’re going to put an end to smuggling drugs across the border any day now, but maybe we should think through how to make the problem less bad until that glorious day.
Even with an open and regulated market the black market would likely still be massive because the people dying in opioid cases can’t afford regulated drugs.
It’s why even legalizing weed won’t solve what people want it to, because the government is gonna fuck it up with overtaxing.
idontknow:
You mean the way I’m always having to buy black-market antibiotics and ibuprofen? Or the way I’m constantly shelling out for bathtub gin because I can’t afford the regulated stuff? Or maybe the way that black-market tobacco took over the cigarette market several decades ago?
Your model does not seem to predict anything I actually can observe. This makes me suspect it may be flawed somehow.
The black market for cigarettes is very large in Australia, New York, and other high-tax jurisdictions.
Black-market booze has always been a niche among poor populations, but I do understand your point. The difference is the gap between the cost of an effective dose and the price the government will demand.
Unregulated fentanyl could probably cost about $0.05 per dose, slightly more than your ibuprofen. But the government isn’t going to let grocery stores put bottles of 100 doses on the shelves for $5. It’s going to be priced more like the current scheme for oxycodone ($0.70 per 10mg pill, and you really need 2 to get anything done) plus a 50%+ tax.
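To make that arithmetic explicit, here’s a minimal sketch of the comparison, using only the rough figures above (all of them the commenter’s guesses, not market data):

```python
# Back-of-the-envelope price comparison; every number here is a
# guess from the comment above, not real market data.
unregulated_per_dose = 0.05            # hypothetical free-market fentanyl dose

oxy_price_per_pill = 0.70              # stated oxycodone price per 10mg pill
pills_per_effective_dose = 2           # "you really need 2 to get anything done"
tax_rate = 0.50                        # assumed 50%+ sin tax

regulated_per_dose = oxy_price_per_pill * pills_per_effective_dose * (1 + tax_rate)
print(f"regulated: ${regulated_per_dose:.2f} per effective dose")   # $2.10
print(f"markup: {regulated_per_dose / unregulated_per_dose:.0f}x")  # 42x
```

Under those guesses the regulated price comes out to about $2.10 per effective dose, roughly a 40x markup over the hypothetical unregulated price, which is the gap the black market would fill.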
Last week we discussed Stanich’s, the Portland restaurant that achieved national fame and then abruptly closed, apparently because it couldn’t handle all the new demand.
It appears there may have been other reasons why it closed.
So, I’ve been reading about the lead-up to WWII, and I can’t help but wonder if there was a way for the US and Japan to keep from going to war. Japan wanted to set up its own colonial empire in Asia to make sure its industries had secure access to vital resources and captive markets. Following the thinking of the time, that was a reasonable thing to want. The only problem was that Japan was rather late to the game, and everything worth having was already claimed by European powers. But could Japan have gotten what it wanted without triggering a war with the United States? Perhaps by seizing some colonies from the less powerful colonial powers? I’m thinking particularly of Indonesia, which had the vital oil that Japan needed so badly, and was held by the Dutch, a smallish nation on the other side of the world.
I don’t think Indonesia’s a good option here. For one thing, invading the Dutch East Indies requires running supply lines past the Philippines (American) and Hong Kong (British), so Japan would need to be really, really confident that neither the US nor Britain would intervene. For another, the DEI/Indonesia was only a big target for Japan because of oil, which they needed to seize because the US (the world’s dominant oil producer at the time) had embargoed oil exports to Japan.
What Japan really wanted was China, and they probably could have gotten big chunks of China without triggering a war with the United States (or triggering an oil embargo, which requires Japan to either back down or go to war for Indonesia). Historically, US objections to Japanese actions in China were rooted in four things: 1) the US had long-standing commercial interests in China and had staked out an aggressive diplomatic stance around protecting those interests against the colonial ambitions of other powers, 2) Japan had gotten away with transparently fabricating a pretext for their invasion of Manchuria in 1931, but at considerable cost in reputation and international good will, and their pretext for a full-scale invasion of China in 1937 was consequently viewed with intense suspicion, 3) Japan had responded to the backlash against #2 by aligning with Nazi Germany, and 4) Japan’s armies behaved abominably in China, most notably in the Rape of Nanking.
All of those can at least be mitigated. Japan could have offered commercial concessions to the US in compensation for ending the “Open Door Policy” in China. They could probably have done a better job of preparing the diplomatic groundwork for the invasion of Manchuria. They might have tried to resuscitate their traditional alliance with Britain (allowed to lapse in the 1920s because the US saw Japan as a strategic rival and Britain put a higher priority on the US than on Japan), or tried to ally with France (which increasingly saw Britain as an unreliable partner over the course of the interwar period), or just done without allies instead of cozying up to Germany. And they could have at least made a token effort to respect the laws of war and treat Chinese civilians decently in the course of their invasion.
I concur with all of this. The last seems especially important; in addition to helping relations with the U.S., making an effort to treat Chinese civilians decently could have made their “Greater East Asian Co-Prosperity Sphere” actually appealing to some of the Chinese, as well as to some independence-minded groups in colonized territories in Asia. Imperial Japan did a lot of shooting itself in the foot.
I don’t think there was any way to make a Japanese-led Asia palatable to the Chinese, no matter how nice they were. Certainly not to any Chinese government.
But all of the Chinese governments of the time were pretty widely hated. Even with the Japanese behaving as badly as they did, some Chinese preferred working with the Japanese to working with the Nationalists, the Communists, or the warlords. Surely non-monstrous behavior would have increased those numbers.
@Protagoras
You always get quislings, but as long as there was a sovereign Chinese government, it was going to see itself, not Japan, as the natural leader of Asia. The only way to get around that would be a complete puppet run from Tokyo, something that Japan wasn’t capable of creating and which, if they had, never would have been accepted by the Western powers.
Was there any way the Japanese could have made themselves the sovereign Chinese government, and so the natural leader of Asia?
The Manchu did it. Arguably the Mongols as well.
Possibly by restoring the Xuantong Emperor and ruling through him, like they were already doing in Manchuria?
Well the Manchus did it early enough that it was still OK to impose laws on conquered people for the purpose of keeping them in humiliated subjection. The Japanese were competing with the United States, which was like “We don’t want to conquer China, we just want to send Christian missionaries, technocrats, and I guess some military officers if you really need them.”
Even if the Japanese treated Chinese civilians decently, the State Shinto thing would have been a deal breaker. If the Japanese government had continued supporting Buddhism, an Emperor of China and Japan would have felt less like foreign supremacism.
The Manchu and Mongols were treated as foreign occupiers. During the Opium War, the Manchu often slaughtered the Han Chinese they were supposed to be defending to ‘deny’ them to the British. The Chinese would murder Manchu in their beds if they showed weakness.
It’s hard to find any foreign dynasties that the native Chinese accepted. But this is somewhat revisionist: the concept of ‘Chinese-ness’, like the concept of Romanity (romanitas), could be adaptive. Relatively foreign people could demonstrate their Chineseness and become Chinese. And, to a fairly large degree, redefine what Chinese was. Choosing to maintain a separate identity, as with the Manchu ethnic system or the Mongol insistence on following the Great Khan’s law, was a political statement as much as a concrete reality.
The Japanese were not practically capable of following that route for domestic reasons. It would have required ending Japanese as an identity and assimilating it to the Chinese.
The Japanese could have tried gambling that if they attacked the Dutch and the Brits, the US wouldn’t join in, but that was an enormous risk. It meant depriving themselves of surprise in the event that the US did decide to get involved, and it meant exposing their enormous supply line to American forces. And while from a modern perspective it might not seem too risky, the perception of Imperial Japan was very different. By the time they decided to attack the US, the US was effectively openly allied with the UK against Germany. We were funding their war effort through Lend-Lease and shooting at the German navy on sight. The idea that they could have attacked the Brits and not had the US come in would have seemed extremely implausible.
A better plan might have been to make substantial concessions on the China war to the US, and then use the diplomatic space that bought them to bully the colonial powers, but the Japanese had a serious “not one step back” mentality, so that was probably not a salable idea.
Japan’s intention was the utter domination of Asia, and its leaders thought they could do it. It’s not like they just wanted a few islands: they had invaded Manchuria, which was at the time one of the most contentious regions in the whole world, and maintained huge armies to try to subjugate China. If your foreign policy includes things like “conquer China” or “conquer Russia,” it’s safe to say little short of world domination is going to satiate your lust.
They basically did go to war to get Indonesia for the oil, and if they thought they could carry that off without getting into a war with the Americans and British, they no doubt would have done so. As it was, that wouldn’t have been in the cards. The British and Dutch territories were too tightly intertwined, and it would definitely have been seen as a major step up in aggression. Going after China is one thing. It’s not really a functional state, and while the US isn’t happy, it’s not going to go to war. But going after the Dutch is attacking a European power, and they really can’t be sure that the US is going to stay out in that case. Particularly as the British are essentially guaranteed to come in to aid the Dutch.
Many of the other commenters make good points.
But on a deeper level, Japan couldn’t keep from going to war, precisely because of the mindset you set out – “secure access to vital resources and captive markets.” If the game you’re playing is to shut other countries out of certain markets they currently have access to, you’re eventually going to go to war.
I don’t agree it was a “reasonable thing to want” according to the thinking of the time. The economics and political theory needed to demonstrate that this is bunk had long existed. It was a common thing to want at the time, but that’s not the same thing at all.
I think if you are going to try to model human behavior generally, or geopolitics specifically, as an attempt to maximize economic prosperity, you are going to be led astray. Resources and markets are a part of the motive for empire, but so is having a captive population of wogs that you can push around and feel superior to – even as you believe you are, or sometimes actually are, helping them achieve economic prosperity, good government, and the benefits of civilization.
Japan, even now, likes to see its behavior before and during WWII as being the defender and champion of all East Asia against the rapacious evils of Western Colonialism. And that’s something that much of Western Colonialism might have been amenable to in the late 1930s. The United States, in particular, had never been entirely thrilled with European-style colonialism in China, and even in Europe the fun was beginning to wear thin. Add in the events of 1939 as a distraction to Europe, and a negotiated transfer of at least some European concessions to enlightened local management by Tokyo might in principle have been possible.
But it would have required that the Japanese be less than rapaciously evil themselves. And that would have been antithetical to one of their major goals. The invasion of China (as opposed to Manchuria) had very little to do with securing any resources Japan needed, and was not conducted in a manner calculated to provide optimal markets for Japanese goods. But a whole lot of Chinese who had spent the past few centuries thinking of themselves as the center of civilization and the Japanese as a bunch of barbarians on the periphery, got put in their place real good.
Accomplishing that without the Americans seeing the Japanese as a bunch of barbarians on the periphery who need to be put in their place, and the Chinese as innocent victims, is a tall order.
a whole lot of Chinese who had spent the past few centuries thinking of themselves as the center of civilization and the Japanese as a bunch of barbarians
I remember once reading (in translation) the official Chinese declaration of war against Japan from the war of 1894-5, which refers to the Japanese as ‘dwarf pirates’. Neighbours often have fairly harsh names for each other, and rarely lack cause to quarrel.
dwarf pirate
Aren’t those vikings with a nutrition deficiency?
It’s my understanding that FDR wanted to go to war against Germany and Japan, but couldn’t get Congress to go along with it (which actually mattered back then). So he deliberately provoked Japan into attacking us so Congress wouldn’t have a choice. Presumably a different president would have kept us out of the war. On the other hand, Will Durant wrote in 1934:
“Must America fight Japan? Our economic system gives to the investing class so generous a share of the wealth created by science, management and labor that too little is left to the mass of producers to enable them to buy back as much as they produce; a surplus of goods is created which cries out for the conquest of foreign markets as the only alternative to interrupting production – or spreading the power of consumption – at home. But this is even truer of the Japanese economic system than our own; it too must conquer foreign markets, not only to maintain its centralized wealth, but to secure the fuels and raw materials indispensable to her industries. By the sardonic irony of history that same Japan which America awoke from peaceful agriculture in 1853, and prodded into industry and trade, now turns all her power and subtlety to winning by underselling, and to controlling by conquest or diplomacy, precisely those Asiatic markets upon which America has fixed her hopes as potentially the richest outlet for her surplus goods. Usually in history, when two nations have contested for the same markets, the nation that has lost in the economic competition, if it is stronger in resources and armament, has made war upon its enemy.”
Mic.com, once the wokest site on the internet until its catastrophic pivot to video, has laid off its entire staff. I think it underscores the cynicism highlighted in the first article that the apolitical founder/owners are getting acquihired by Bustle while the true believers who worked for them are getting the shaft.
huh.
I figured that the whole SJW/progressive movement consisted entirely of unreasonable, terrible, bitter people, considering the kind of stories they were pushing.
If a lot of those stories were just the result of some cynical entrepreneurs with dollar signs in their eyes, then it kinda makes me feel much better about the state of humanity.
I was genuinely afraid that America was just pioneering this crap and that Europe would suffer through it next. Privilege checking, witch hunting, and “here’s why that is a problem” right at home.
Reading American news always felt kind of like slumming (there’s a perverse pleasure in realizing how fucked you guys are, which is probably hard to appreciate for natives), but also like a portent.
I still find it hard to really trust Vox, though (as in, to believe they’re being sincere and not deliberately trying to outrage-monger), simply because their smug tone is so similar.
I’m going to re-destroy your faith in humanity by reminding you that whatever the proportions are of crazy people and cynical money-grubbers, both groups still have enough of an audience to make peddling lunacy financially worthwhile.
Wake me when Condé Nast shuts its doors and Univision shutters the rest of Gawker Media.
Well, no, that was always what it was. The businessmen exploit the rabid ideologues, the ideologues allow this because it gives them a platform, this enables the existing ideologues out there to become even more extreme, which also means they’ll click more articles and see more ads.
SJW-ism was about money from the start, and still is. And the anti-SJW overreaction we’re seeing now is largely powered by money as well.
“SJW-ism was about money from the start, and still is. And the anti-SJW overreaction we’re seeing now is largely powered by money as well.”
I know a lot of people who are doing SJW for free.
Isn’t a large supply of free labor the best thing for the rich?
Imagine if HR departments and university administrators were replaced with Social Justice Volunteers, without market forces requiring the bosses to pass those savings on to consumers.
I think you’ve misunderstood; I’m not trying to imply that, like, every SJW is a CORPORATE ASTROTURF SHILL or whatever, I’m saying:
1. The reason these guys get a platform is because the gatekeepers of the platforms think they can make money by letting them use their platforms.
2. There ARE people who say SJW-y things for the same reason the real True Believers get published, because it gets clicks and therefore money. Similarly, there are plenty of people who joined on with the anti-SJWs, and are sticking with them even as this stuff seems to be starting to wind down, because it gets clicks and therefore money.
3. I posit that financial incentives are a big part of why things got so bad; if you wanted the most clicks you had to be more radical and say more crazy “kill all the white people” stuff than all your competitors, so that’s what people did.
@Le Mastre Chat
I don’t think that’s likely, unless you can convince them all to turn giant Conan-style electric generators by assuring them there’s a southern person being ground up under there or something. I don’t think most of them will even write articles for free; or at least, the ones who can write compellingly won’t.
I remember Racefail, which was when SJW came to sf fandom, though it was pretty much called anti-racism then.
In any case, people wrote a tremendous amount for free, and N. K. Jemisin was worried about wrecking her career. As it worked out, things came out well for her, but it wasn’t the most obvious prediction.
This feels exactly like the thing where a very strong pro-choicer patiently explains that *obviously* pro-lifers don’t *really* oppose abortion because they think it’s murder, but instead because [fill in nefarious reason proving that we were always right to despise the people we’ve always despised].
I imagine most SJWs believe more-or-less what they say they believe, just like most evangelical Christians, devout Muslims, rationalist atheists, Marxists, etc. No doubt there are various sociopaths who use each group’s true believers to make a buck/gain power/achieve their political goals, but it would be really dumb to assume that just because you can’t take some set of ideas seriously, nobody else does.
I don’t doubt most folks believe what they say they believe, but people tend to end up genuinely believing whatever it is in their own self-interest to believe.
How is it in anyone’s financial self-interest to create either pro-SJ or anti-SJ content? Almost all of that is done either for free or for very low wages.
Given that people are passionate about such content, yes, obviously there will be distributors who seek to make a profit connecting the ones passionate enough to produce it with the slightly-less-passionate readers, and both with advertisers, and some of them will pay for cheap(*) branded content to prime the pump. But saying “it’s about money and has been from the start” very strongly implies that the interest of most actual SJWs or whatever is really financial, and that’s almost certainly not the case. Not even behind a layer of self-deception.
* Cheap because produced by true believers
Or, for that matter, to do pro-life or pro-choice things. Protesting outside abortion clinics isn’t a very lucrative activity, after all. I guess abortionists or Planned Parenthood employees benefit financially from abortion, although I suspect that in most cases they applied for those sorts of jobs because they were already pro-choice, rather than applying and then becoming pro-choice as a means of rationalising their decision.
Think of the amount of arguing people do about guns, and almost all of it is done for free.
It is annoyingly common for the anti-gun side to claim that the whole kerfuffle is only due to gun companies financing the NRA to increase FUD and thus gun sales, and that without that we’d all have agreed to “sensible gun control” long ago. The reverse claim, with e.g. Bloomberg and Soros paying the alleged sheeple to argue for their own disarmament, is also seen in the field but substantially less common – probably for the lack of any conspicuous equivalent of the NRA on the other side.
I haven’t followed the abortion debate closely enough to know if similar conspiracy theories are being peddled there. But “Our beliefs are deep and sincere, your beliefs are shallow and mercenary” is one of the more popular political fallacies.
How is it in anyone’s financial self-interest to create either pro-SJ or anti-SJ content? Almost all of that is done either for free or for very low wages.
Because they believe they are getting paid in a different coin, one that they either value more, or that they think is easier for them to earn.
I feel that this topic has been covered somewhere in the comments to one of Scott’s many mental-health-related posts, but I want to try a fresh thread for my own focus and insight purposes. Here goes: does anyone feel that Depression can improve or clarify any aspects of their own cognition? If so, how was cognition in that realm altered or enhanced? And by this I am assuming a factor that is a net positive for overall growth, understanding, etc. For example, many people have concurrent anxiety along with Depressive states; I myself have experienced mild versions of this (both the acute fear response and a more existential version), but I would not consider it beneficial outside of an actual survival situation (which is a strong link to the evolutionary adaptations of the condition).
However, I have felt that Depression gives me a clearer view of “how the world truly is” – sweeping away, in a way, a good deal of the biases that keep one out of a depressive state. I consider this something of a net positive, as long as I can walk the razor’s edge of not falling into a vastly over-pessimistic, negative-predicting mindset. In other words, Depression and its defensiveness against “dangers” is a certain kind of learning experience, and can be carried along as a life lesson in grounding, of a sort. Of course, Depression comes with many aspects that hamper any sort of productive and enjoyable activity and thought, so I don’t consider the condition to be optimal by any means – just, in some minor ways, purposeful?
Any thoughts?
Doesn’t match my experience; I find depression injects a perception filter rather than removing one. It’s astonishing for me to realize how constrained my thinking has been, looking backwards.
Coping with depression, on the other hand, has helped me develop clarity.
Times in my life when I’ve been deeply or acutely depressed, I am able to “Sherlock scan” people, rooms, and social landscapes. Not to the ludicrous degree of the Doyle stories, but still pretty impressively, to the point where I wished I could turn it *off*.
My experience with depression and social anxiety has been entirely negative.
As for the fear response, you don’t need a disorder to be afraid of bears.
I do think that I see the world as it really is, but I don’t know that that has anything to do with depression. Also I think everyone thinks they see the world as it really is.
I don’t believe depression has any evolutionary purpose; I believe it’s the result of us living in an environment that evolution has not prepared us for.
I’m more creative when I’m a little gloomy, and I was depressed when I had my highest creative productivity.
Lately I’ve noticed that when I’m depressed, I can’t compartmentalize. I see everything in the context of its worst aspects and it stops me from functioning.
I think it’s a more accurate way to look at the world, rather than shoving all the bad stuff under the rug and ignoring it like I usually do… on the other hand, it means I can’t enjoy anything and spend every day just wanting to curl up and die. And other people manage just fine with blinders on… so do I, usually.
Here is a theory I’ve had for a while that I feel must be true to some significant extent, but that would be hard to test or measure.
Words sound different in different languages. Some words sound intuitively ‘better’ in some languages than others, or are just easier to say. I suspect that this has substantial consequences for all sorts of things, like what people identify themselves with. (Importantly, I’m not talking about exotic languages that, say, don’t have a word for hate, but about modern languages that would be considered fairly similar. In fact, I’m only really applying this to German vs English, since those are the only two languages I can speak. But there’s no doubt it applies to any pair of languages.)
My favorite example of this is the word “science” in English vs. the word “Wissenschaft” in German, which is the closest translation. But Wissenschaft is associated far more with the academic fields than with the idea; it seems significantly easier to describe yourself as a scientist, in the sense of an abstract ideal, in English than as a Wissenschaftler in German, and it’s easy to see how this could influence society. Another example is the term “community”. It’s fairly common political slang; politicians talk about “the XX community” a lot in speeches. You could theoretically do this in German, but you wouldn’t, because it wouldn’t sound good. And if you imagine this as a two-way process, where people choose what they say partially based on what they believe and partially on what sounds good, and then what they say influences what they believe (which I think is true to some extent), this could also have really far-reaching consequences. Other examples are the word ‘evidence’ (no good translation), and the many tools you have in German to construct complex sentences. This last one could be bad if you imagine the ability to write and parse complexity as a desired signal.
What this theory sort of assumes is the kind of cynicism that I think is generally appropriate when looking at society. That is, it doesn’t require anyone to be malicious or selfish, but it does require people to be quite shallow. Basically, it assumes that many people are to some significant extent just winging it when they talk, and then rationalize it later.
So, yeah, I’m curious whether this sounds plausible, or if perhaps it’s even a more common idea and I’ve just never heard it discussed anywhere.
On a first approximation, I would think you are merely running into the subtle semantic differences that exist between the “same” words in even two closely related languages. While these differences can be interesting, I tend to side with people who think that world-view is more likely to influence language than the opposite. And furthermore, I think too much weight is often given to semantic differences, as if the fact that a word has one additional meaning in language B compared to language A necessarily told you anything about the terminal values of the speakers of language B.
Most of these differences develop like other grammatical or phonetic differences, mostly by slow and random drift over time. The fact that English only has “time” where German has “Zeit” and “Mal” doesn’t tell you much more about German values than the fact that English has [w] and [v] where German only has [v] (it does tell you that German shares more innovated traits with other continental European languages than with English, since the Zeit/Mal distinction is also found in Romance languages (temps/fois, tempo/volta, tiempo/vez)).
When analyzing semantics it’s pretty easy to be unintentionally pseudo-scientific by relying on intuition, but it’s actually a pretty complex and difficult discipline (as far as I can tell; it’s not my specialty at all).
All of this. Plus, the Whorfian hypothesis, which sty_silver’s thesis sounds a lot like, is a classic example of a “very appealing theory that is only substantiated in a few esoteric and restrictive cases”.
I think you need to have natives discuss their view on what sounds nice. Perhaps to the English ear, ‘Scientist’ is snappy and euphonious, but a German speaker, used to longer words, might find ‘Wissenschaftler’ more respectable and refined.
That may be reasonable, though my native language is actually German.
Heh. Very well then.
Does a German scientist introduce himself as a [German for] physicist/biologist/chemist rather than the broader term, by any chance? It seems like that would be a neat solution but I unfortunately don’t know any German.
Yes, he does.
That’s also because “Wissenschaft” doesn’t actually mean science denotationally. “science” has its philosophical sense in which people might say that (or argue over whether) history is a science, but normally when you say “scientist” in English, you don’t mean a historian. In German, a historian is without any doubt a “Wissenschaftler”. To say that you’re a “Wissenschaftler” is closer to saying that you’re an “academic”.
So that’s why the word Naturwissenschaft exists. Still too long.
I strongly feel that the meaning of a word needs to be associated with the language, and especially the context, of the speaker’s language of usage. For example, a bilingual person using the same word in two different languages could be using it in two different contexts, despite the word originating in only one tongue. That being said, perhaps multilingual people are exceptions, in that they mentally cross cultural contexts when using loan phrases and words. A more proper analysis would be of a loan word being used in a foreign tongue by a native speaker of that tongue, not of the language of origin. I think we have to be careful of associating meanings and sounds across language lines. The words may cross the lines, but the context and intended meaning may not always do so.
Strongly agree with this. There are sentiments and ideas I find much easier to express properly in e.g. French or Greek than in English, and vice versa.
I’m sorry, what is your theory exactly? What you’ve written here seems quite vague to me, and your examples don’t really illuminate your point. Are you talking about this from a linguistic perspective or a more subjective experience of some speaker of one (or more) languages?
If you’re coming at this from the perspective of translation, then yes, trying to shoehorn the English construction of “x and x community” into another language would not work; however, this isn’t a function of the words themselves, but how words are used in a respective language and culture based on the logic of that language. To use your example, the construction “x and x community” has a political meaning in the U.S. specifically: a political differentiation between smaller groups within a larger geographic group. This is contrasted with the other meaning of “community” in English, as in “our community”, which refers to the social space shared by a group of people living in a geographic area, and refers to overall cultural norms or institutions.
Certain languages, through word usage and cultural necessity/entropy, may have only one of these meanings associated with the word community. Japanese, a language based in the logic of a mutualistic and homogenous society, basically feels no need to refer to this concept, and when it does (due to necessities arising from modernity), it often uses the English loan-word “community” and exclusively in the context of a regional community (usually in an economic or welfare-related context).
If I’m understanding you correctly (which I don’t feel that I am), it seems you are trying to talk about how one word in one language usually has several different meanings or connotations, and those nuances are usually lost when trying to find the equivalent word in another language (and often, the word in the second language has additional connotations and nuances not found in the first).
Now speaking to some word in a language being “better” than another, I disagree with this on a fundamental level, and none of your examples seem to describe why this would be the case, so I won’t comment on it.
My theory is that if you traveled back in time, performed a surgery on the German language that replaced the word ‘Wissenschaft’ with a term that sounds as good as science, but didn’t change anything else, this would have a noticeable effect on German culture.
Ah, it seems I did misunderstand you, my bad!
That being said, I agree almost completely with Machine Interface above. I think the interplay between language and culture is very complex, even ignoring the fact that usage differs between individuals and groups.
Are you saying that you think the phonetic content of the word “science” is enough to change the meaning and nuance of the way the word would be used in the context of the rest of the German language, from a historical perspective? That’s… well. Kabbalistically, that idea certainly belongs on SSC! Outside of the realm of science/fantasy fiction, however…
That is exactly what I’m saying. (And I’m not trying to be provocative, though I get that it’s unintuitive.)
I think the decisive question is probably something like: how big a factor is the sound of words for what people end up saying? I don’t think it’s 50% or even 25%, but I wouldn’t be surprised if it’s somewhere between 10% and 15% (which would make it a significant factor), whereas I suspect you think it’s more like 0.1%.
I don’t really buy it.
First of all, I would say “Ich bin Wissenschaftler” much like I would say “I’m a scientist”. I don’t think there is a huge difference in how it feels. Instead of “evidence” you can always just use “Fakten” (facts).
I think the language differences reflect the societal differences much more than they shape them. If you think the “scientist” sounds a lot better than “Wissenschaftler”, it might just be because you’ve read the term “scientist” more often in a positive context. Which sounds likely, because the community of scientists communicates in English.
But that’s not an equivalent substitute at all! How would you translate “correlation is evidence but not proof of causation”? Whenever I talk with people IRL about topics where it’s relevant, I struggle with the non-existent translation of ‘evidence’.
Edit: I just typed this sentence into Google Translate (which is usually quite good) and it translated it to “Korrelation ist ein Beweis, aber kein Beweis für die Ursache”, i.e. “correlation is a proof but not a proof of causation”. Which makes sense insofar as “Beweis” probably is the most common translation, but it does obviously mean something different; it means “proof”.
I think they feel super different, but I can’t argue with that. Would you agree that ‘scientific method’ sounds significantly better in English?
I think this kind of Sapir-Whorf thinking is misguided.
Firstly, this notion can’t account for change over time. You are right that politicians talk about “the XX community” a lot in speeches. But they didn’t 50 years ago, and the phrasing sounds absurd to a lot of older people.
Second, how does this account for differing language communities, whether based on geography, class, interest, etc? If group A thinks that a word or phrase sounds lame, and group B thinks that it sounds awesome, shouldn’t we conclude that the beliefs and characteristics of the group determine what sounds good, not the word itself?
Machine Interface’s position that world-view influences language more than vice versa seems to explain the world around us far better than the alternative.
The phrase “XX community” still sounds absurd to me, and I’m youngish.
The referent of “community” should be people who all know each other… Not necessarily to the extreme of Dunbar’s Number, but no more than a city. You should mentally replace political phrases like “the black community” and “the gay community” with “Harlem” or “San Francisco.” 😛
That is a pretty granular definition of community that I don’t think is very common. I am certainly part of multiple communities where I do not know everyone, even strictly local ones (remember, due to technology not all communities are localized so I’m not sure your substitution works all that well).
Just to be clear (though without reading the entire article), I would agree with the weak form as stated, but vehemently disagree with the strong form.
I think expecting me to be able to answer these questions is applying an unequal standard. Like, yes, taste in poetry changes over time, that is true regardless. And yes, different communities might have differing tastes. I didn’t say nor do I believe that an inherent sound of words is the only factor. If it’s 10% of the story, it would be a very significant factor that influences history substantially, while allowing for both of the above points. This is roughly what I suspect is true.
I do think that the tastes of different communities are correlated, so if you’re saying that they’re not, that would contradict my theory, but I don’t think you are.
I mean you’re the native German speaker and not me, so I could be entirely off base here, but it had always been explained to me that Wissenschaftler is more like the English word ‘academic’ and the closest German equivalent for the English word ‘scientist’ is Naturwissenschaftler.
In the English-speaking world, none of the academic disciplines which make up Geisteswissenschaft would be considered scientific. Instead they’re classified as humanities, separate from science. They have very little in common after all.
You’re not off base. I think it’s unclear, but I was debating between Naturwissenschaftler and Wissenschaftler when I wrote the post. In general Naturwissenschaftler probably is closer; certainly if you map the academic titles, a Master of Arts would be in a Geisteswissenschaft, and a Master of Science in a Naturwissenschaft.
But Naturwissenschaftler sounds even more like the academic profession and even less like the set of values. So I think the more charitable comparison is to Wissenschaftler; if you compare it to Naturwissenschaftler, it makes my point stronger.
I’ve wondered this before about the word “sorry”. English people say “sorry” all the time, but how often do Germans say “Entschuldigung”?
(Of course, my theory probably breaks down because English people say “sorry” a lot more than other anglosphere people.)
That’s definitely another valid example. They don’t say Entschuldigung a lot. I have no idea when I’ve last said it. I think I just say sorry if it’s a minor thing, and “tut mir leid” if it’s something serious (much less frequently).
I get the impression that they say it as much as Americans say “sorry,” which is less often than Canadians say “sorry.” There seem to be other factors besides the words used influencing the frequency of apologies.
Again, though, the question is whether it is a significant factor, not whether it’s the only factor.
Depends on the situation. When you’re pushing past a crowd, you could say “Entschuldigung” or the more informal “’tschuldigung”. Though, people often say ‘sorry’ these days. Shorter and more convenient. “Entschuldigung” (with rising inflection) is also used like the English ‘excuse me’, when you need somebody’s attention. When you’ve done something wrong and want to convince the other person that you feel bad about it, you’d say ‘Das tut mir leid.’ (literally, ‘that causes me suffering’). Also used to express pity.
In dubbed Hollywood movies people might say ‘Ich entschuldige mich.’, but that’s not exactly natural German; it’s just awkward to translate ‘I apologize.’ or ‘Apologies.’
Actually, we might say ‘Ich entschuldige mich vielmals für die Unannehmlichkeiten/das schlechte Wetter/das Verhalten meiner Männer/was auch immer.’ So compromises have to be made with the syllable count.
Thanks for explaining. 🙂
People mostly seem to say it when they want to get your attention. Fairly common, in Süddeutschland at least.
To me, a word which obviously means something like “doing knowledge” is more appealing than an obscure loan word from Latin, so I share the doubts of others about “science” “sounding better.”
“Science” is hardly an obscure word; everyone knows what it means. And, despite its origin, it’s easy to say and follows the native English sound pattern (short, with emphasis on the first syllable).
If it was a native English speaker saying “science” sounds better than “Wissenschaft”, I’d chalk it up to Wissenschaft being a foreign word to us. But sty_silver says his native language is German.
I didn’t mean the word is obscure, I meant its origins were obscure (to most English speakers).
I don’t know what you mean by “scientist” as an “abstract ideal”. A scientist is someone who does science; that’s it. Just believing in science or thinking that science is good doesn’t make you a scientist.
Well, that’s exactly what I think the word is not limited to, but there’s probably no point in discussing this.
In terms of whether it’s a common idea, this reminds me of the Sapir-Whorf hypothesis/linguistic relativity. As it’s used in linguistics today, it’s more about morphology and syntax (how words and sentences are put together) affecting how people think/perceive the world, but there have definitely been historical attempts to link it to broader cultural differences (though I think these are mostly considered discredited now). It’s controversial, and linguists complain that it usually gets taken way too far in pop culture and pop science. Here is a discussion I like, and two comments that remind me of your original theory:
and the reply:
(CYA: I don’t feel qualified to have an opinion on this personally; I just lurk on these blogs! But I’m linking in case you find these interesting/helpful to developing your theory.)
I’ve noticed that a lot of people who manage to overcome their addictions to drugs and alcohol come out the other end as cigarette or weed smokers, often a pack/eighth-of-an-ounce a day or more. I’ve personally experienced something analogous, persistently falling back on a certain particular compulsive habit whenever I cut out a different one.*
What does the research say about trading off addictions? How much harder is it to just quit something you’re addicted to vs. substituting it for another, and why? How likely is it that someone who has an addiction can ever become not-addicted-to-anything-at-all? What does it depend on?
It seems like without either a complete life change (both in lifestyle and in social contacts) or an unusually strong will, someone who has an addiction now will always be addicted to something. Maybe a religious or psychedelic experience could mitigate this somewhat, but what’s the research on that?
*I’d rather not go into too much detail…I will say neither of these habits were so extreme they were creating huge problems or risks in my life, plus obviously I was to some extent able to control them before things got too far, but they were causing some minor problems (e.g. being chronically underslept) and were very likely unhealthy on net, and I’d rather be free of them altogether.
PS. I’m not sure how I’m defining addiction here. Traditional or academic definitions are unsatisfying because they seem to often include too much or too little or suffer too many holes & exceptions. There’s behavioral and chemical, addictions with and without withdrawal symptoms, addictions that interfere with day-to-day functioning and those that don’t (or even are the thing that allows it), etc. So in the end, I leave the task of more carefully defining addiction for afterwards.
I think it’s a very multi-faceted issue, but the combination of habit and dopamine addiction are powerful factors in creating such a substitution effect. It basically becomes such that, without some form of radical detox/abstinence or re-sensitizing experience, it is no longer possible to achieve a dopamine/endorphin hit in a normal, non-drug/behavior-assisted state, so a lesser drug like cigarettes or alcohol allows for that activation substitution. Simply quitting cold turkey and expecting a reset of brain chemistry is fallacious in many cases, though I have heard claims from drug-using acquaintances that they have done so (granted, the most recent one never gave up cigs, so that’s quite telling). Hence the overall value of substances like tapered methadone, detox clinics, ibogaine, etc., in allowing people to quit the hard stuff. I used to be quite judgemental myself about the failure of cold-turkey quitting methods until I realized (through education, not experience) the truth of the chemical effects of addiction and the value of substitution chemicals in stepping down from harder stuff, so to speak.
Does that hold for behavioral addictions too?
David Foster Wallace seemed to think so; Infinite Jest’s take on Alcoholics Anonymous is that it only works by addicting you to recovery. That’s probably what’s stayed with me most from that book.
There were also the Quebecois wheelchair assassins. And they all got that way because Quebecois-nationalist youth have the long-standing cultural habit of putting themselves in the way of a running train and only jumping out of the way at the last moment (or the one after).
Also, tennis is the pinnacle of the human experience, and my young life was wasted not playing it.
Anecdotally (no research, unfortunately), some people have addictive personalities, and there’s some trading off. Some addictions, habits, whatever, are more ruinous than others: if you walk by a church and there’s a bunch of people standing outside smoking, they’re probably the AA group. Alcohol can be really ruinous for some people: it wrecks their families, their jobs, they get in legal trouble, they might lose everything. Tobacco makes you smell and it costs a lot and it causes health problems, but nobody’s ever ended up living on the street because they can’t get their cigarette habit under control.
To some extent it might be replacing something other than just the dopamine or whatever science: someone for whom drinking is a social thing, who drank too much, might gravitate to the “smoker’s huddle” because that’s a social group of sorts also.
(Also, an eighth of an ounce a day is comparably a much heavier weed habit than a pack a day, isn’t it?)
No clue, I was eyeballing it based on what I’ve seen from those kind of pot smokers. Maybe it’s half or a quarter of that. Also I have no idea how much weed a typical pothead goes through in, say, a week.
When I switched from heroin to suboxone, I told myself that I could smoke as many cigarettes, as much weed, and eat as many empty calories as I wanted without feeling any guilt on any day that I did not use heroin. This made it a lot easier, frankly. It was my second attempt to make the switch; the first time, I ended up treating my suboxone as a sort of last-resort backup if I couldn’t find any heroin or if I didn’t have enough time to score before work. I’m not sure how much of a contributing factor my guilt-free addiction-substitutes were to the success of my second attempt, as opposed to other factors like genuine motivation to get clean, but it didn’t hurt.
I’ve now been on sub for some 6 years without relapse. During that time I’ve tried to quit smoking, and indeed have sometimes succeeded for months at a time, but eventually I fall back into it. It doesn’t explicitly *seem* like I’m trading off resistance to one addiction for resistance to another (when I was abstaining from cigarettes I didn’t feel any more likely to relapse back to heroin), but there might well be something like that going on.
How severe would you say your heroin addiction was?
Also, I think somewhere I heard someone make the metaphor that people have kind of a limited amount of willpower, and it can be depleted resisting one habit, making someone susceptible to falling into another different habit. What are your thoughts on that theory, at least in your case?
Got a weightlifting question.
Appended to my regular 3-5 weightlifting sessions per week, I’ve been doing neck exercises (neck extension, lateral flexion, and flexion using plate resistance) 2-3 times a week for the past 8 months and while my neck is demonstrably stronger (I’ve gradually increased the resistance from just the weight of my head to 25lbs for extension, 15lbs for lateral flexion, and 5lbs for flexion), I’m not seeing the results I want in terms of neck thickness.
My diet is pretty healthy and balanced and net-calorie-surplus, and I down a protein shake after each workout. I stretch every night, including my neck. Those crazy wrestler workouts where you roll around on your head (what I guess they call “bridge” exercises) hurt really bad even with a pillow between my head and the floor, and my body always sends me a clear signal not to do them, so I don’t. I will not take steroids or HGH or deer antler extract or any of that weird stuff.
Is there anything else I can do to increase neck thickness?
The muscular “big neck” look comes more from the trapezius muscle (usually thought of as a shoulder/upper-back muscle, but it continues all the way up to the base of the skull, and the shoulder parts of the traps also bulk out the base of the neck when they get big) than from the smaller muscles that your neck isolation work is targeting.
I think shrugs are the main isolation/assistance exercise that targets the traps. Cleans, snatches, overhead presses, and farmer’s walks all have a lot of trap involvement as well. I’d pick a couple of those and program them for high-volume work.
Also, you might be working out too much for a natural lifter. You get stronger by recovering from lifting weights, not from the lifting weights itself. The standard recommendation is about three workouts a week, to give yourself 48-72 hours to recover between workouts. You can make four a week work if you’re doing a split routine, but five a week is probably too much for a natural lifter unless most of those workouts aren’t particularly grueling.
I’ve been doing shrugs and overhead presses this whole time too. Once a week, on rare occasions twice a week. Not sure what cleans, snatches, or farmer walks are but I’ll look them up; maybe I’m already doing some of them. [ETA: No, I’m not. I might start doing farmer’s walks. The others look like I’d just injure myself.]
I rotate muscle groups (“split routine” I guess) and even space them out so that, for example, I try to put back day or leg day between arm day and shoulder day. Chest day usually just ends up next to a day in which I work out an adjacent muscle group.
And yeah, I wouldn’t say most of the workouts are particularly grueling even though they’re never shorter than 45 minutes and can be as long as 90. I usually break a sweat but only really drip with sweat on leg day, and not every time.
That doesn’t sound too bad from an intensity/fatigue standpoint. The standard forms of split routines are:
Body Part split, which isn’t far from what it sounds like you’re doing. The most common forms are a two-part split (upper/lower body) or a three-way split (leg day, arm/shoulder/chest day, core day).
Push/pull split, where instead you split between “pushing” type exercises (bench, overhead press, squat, etc), “pulling” type exercises (deadlifts, rows, chinups, etc), and exercises that don’t really fit in either category.
Intensity/Volume splits, where you’re working the same muscle groups each workout but in different ways. You might have a “volume” day where you do many sets of each of the major lifts with a medium weight, then a “light” or “active recovery” day where you do a lower-intensity workout or focus on assistance exercises while you’re still tired from volume day, then an “intensity” day where you do the same major lifts from the volume day but for one heavy set each.
It sounds like you’re mostly doing a body-part split, but by doing the body part split finer than most people would, in a way that gets you a bit of an intensity/volume split as well due to the overlap between arm day and shoulder day, or between back day and leg day.
—-
How much progress (weight-wise) are you making on your shrugs and overhead presses, and how consistently? If you’re making consistent progress, then it should be just a matter of time. But if you’re getting stuck on those lifts, that would explain why you’re not seeing the results you’re hoping for in terms of muscle growth, and the next step is to figure out why you’re stuck and how to get you unstuck.
With the shrugs I kind of stagger it, doing a set or two with 50 or 60lb dumbbells in each hand (6-8 reps), then a set at 10lbs lighter (10-12 reps), and another set 10lbs lighter still (12-14 reps), each time getting a larger range of motion. This is an overall increase of about 20lbs from 8 months ago.
With the overhead press it’s been less of a linear increase due to occasional shoulder bursitis flare-ups, although overall I feel like I’m making steady progress, just more gradual than most of my other exercises. If my shoulder’s feeling OK and I’m not coming off a week hiatus from being out of town or sick or something, I can probably lift 10lbs per arm more than 8 months ago.
That’s decent progress for lifts you’re training once a week and not every week. But you might be able to get faster progress by programming them more aggressively. You could try just upping the weight a bit more aggressively, or you could try scheduling shrugs twice a week (e.g. do shrugs again on arm day, not just on shoulder day) rather than once.
I’d consider buying some microload plates to make your linear progress a bit smoother, if you don’t already have them. They make magnetic weights in various sizes that stick to the ends of most standard dumbbells, letting you make e.g. a 52.5 lb dumbbell instead of jumping straight from 50 lbs to 55 lbs. The ideal is a small enough increment that you can increase the weight every time you train the lift.
For shrugs in particular, there’s a good chance that it’s your grip rather than your traps that’s your main limiting factor. You can work around this by training your grip (if you add Farmer’s Walks to your program, that should cover this), by changing your grip technique (do a web search for “Hook Grip”: it will feel weird at first, but once you get used to it, most people can handle a lot more weight with it), or by using straps. Or you could switch to barbell shrugs (this also makes microloading easier) and use a mixed grip (where you hold the bar underhanded with one hand and overhanded with the other).
Interesting point about the micro-plates. I usually increment by 5lbs for dumbbells (duh, what else?) and 10 for barbell exercises (5lbs on each side); my gym has 2.5lb plates but I’ve only ever used those for holding against my head while I do my neck exercises, and even there I’ve fallen out of that habit. I’ll try using those more often to sort of give me a smoother ramp to climb up in resistance.
Doing shrugs with the barbell has always felt uncomfortable, mainly due to the bar rubbing against my body and where/how it requires me to place my hands, and I don’t like doing it with the bar behind me because of the potential for exacerbating shoulder problems. (The bursitis, again.) Doing shrugs with the dumbbells, I try to hold them in a loose sort of grip that engages my forearms as little as possible, focusing on allowing my trapezius muscles to do the work. Unfortunately my gym doesn’t have straps I can attach to dumbbells.
Like I said, I’ve been getting really nice results everywhere — just not my neck so much. (I’d say my traps are OK but could be a lot better.) I’m starting to wonder if I’m just running into the natural limits of my frame.
I’d try gripping harder, and if that lets you shrug more weight, then go ahead and keep doing it. Your forearms and traps are doing different jobs, so gripping tighter isn’t cheating your traps out of work.
You can buy your own straps off Amazon. I see a bunch of basic options in the $5-10 range for a pair. They don’t need to be specially attached to the dumbbells: you just wrap the loose end(s) of the strap around the bar or handle and grip over it.
That’s possible. If you’ve been lifting for eight months, that’s not an unusual amount of time for your body to get to a point where progress gets significantly harder. You should be able to keep making gains, but they’ll come slower and require more work.
As good/useful exercises to know, the single-arm versions of cleans and snatches are worth investigating. For the classic bilateral barbell versions, I was lucky enough to work with a good trainer who coached me through them; I’m sure I would have hurt myself trying to DIY it.
Keep in mind your muscle size results can vary a lot from other people and from body part to body part. I think I’m ok strong (1rm deadlift about 2.5x bodyweight) but you wouldn’t guess I lift at all from my appearance.
I’m the opposite. People who haven’t seen me in a while often comment on how I look “swole” or “huge”, but I don’t think I deserve it, especially based on my pathetic bench press which tops out at something like 125 or 135 as a 1rm! I can say I’ve improved significantly from where I was 8 months ago but I still have a ways to go.
Natural?
“Natural Lifter” = doesn’t use steroids
Ah. Thank you.
You can progress neck bridging by leaning against a wall rather than the floor.
But, yes, big traps help too.
Are you looking at increasing neck thickness for aesthetic purposes or for spine protection? If it’s the latter, and you suspect that your neck integrity may be at risk at some point, then I agree with the suggestion to do shrugs, along with other trap-dominant exercises. Granted, any exercise where your neck is required to be stabilized will suffice, and many of these are pulling-type movements, like bent rows, seated rows, shrugs, and pull-ups, plus some pushing exercises, namely the vertical press and the standing straight-arm pulldown. Many gyms have head harnesses you can use with a cable machine to do neck extension, flexion, and lateral bends; however, I would tread very carefully when using this harness, and I personally don’t do so.
Sorta both, although spine protection is probably too specific; “general health reasons” gets you closer.
As mentioned, I already do shrugs and overhead press. I also do seated rows, pull-ups, etc.
My gym does not have the head harnesses (although I’ve used them elsewhere and found them uncomfortable anyway); I use plate weights, held against my head with a couple folded paper towels sandwiched in between.
Bagpipes and their history.
The origin of bagpipes is actually mysterious. The first definite depictions of bagpipes in Europe date from the mid-13th century, with examples in several books, treatises, and even sculptures on the walls of monasteries. By that point, however, bagpipes already seem widespread and diversified. Attestations before then are controversial.
Perhaps the most solid pre-medieval evidence for bagpipes is in the 1st-century writings of the Greek historian Dio Chrysostom, who wrote that a contemporary Roman Emperor was known for being able to “play the pipe, both by means of his lips and by tucking a skin beneath his armpits”, a description that strongly resembles bagpipes, but also indicates that the author considered the instrument unusual enough that he had to describe it.
Going as far back as 1000 BC, some Hittite murals have been interpreted as representing bagpipes, but this remains debated.
The basic concept of a bagpipe is to attach a reed instrument (either single-reed like a clarinet or double-reed like an oboe) to a bag made of skin into which air is blown. The bag is then compressed to push air out through the pipes, so it acts as an air buffer: the instrument can be played continuously with no audible pause when the musician takes a breath, at the cost of control over the sound — bagpipes allow little nuance in volume or timbre.
Another advantage of the bag is that multiple pipes can be hooked to it and played simultaneously (while musicians like Rahsaan Roland Kirk have shown that it is in fact possible to play up to three saxophones at once with no special mouthpiece, this is not a feat within the reach of most musicians). This brings up a common (though not universal) feature of bagpipes: drone pipes. The drones are pipes that each play a single, continuously sustained note, thus providing a harmonic base on which the melody can be played (on the “chanter”, the pipe that actually has holes for the performer to use). Drones are a distinctive part of the bagpipe sound that makes them instantly recognizable to the untrained ear (though again, not all bagpipes actually have drone pipes).
By the early modern period, bagpipes were present in all of Europe, the Caucasus, the Middle East, and North Africa, as far east as southern Iran, and perhaps even in India — it is an unresolved historical question whether the native bagpipes of India are relatively recent creations inspired by the Great Highland bagpipes that British troops brought with them, or a more ancient instrument that was later made closer to the Great Highland bagpipes under British influence.
These bagpipes came in a wide variety of shapes and sizes, usually adapted to the demands of different local folk musics. The British Isles, France, Iberia, and Italy all had a wide variety of region-specific bagpipes. The bagpipes of the Islamic world and of the Caucasus all seem to derive from a Greek model which lacked drone pipes but had a double chanter, allowing the performer to play two melodic lines at once.
In western Europe, some of the later bagpipes became quite sophisticated. A common innovation on many models was the addition of bellows, which the performer would pump with their other arm to fill the bag, thus freeing their mouth (for singing) and protecting the inside of the bagpipe from moisture.
The French musette de cour, which had such bellows, also had a unique system of drones: four pipes folded in half and enclosed in a single cylindrical box, with sliders to set the note of each drone (or turn it off completely). It also had a keyed double chanter that allowed the performer to play chromatically over an octave and a sixth, making it much more versatile than most other bagpipes, which were usually very specialized and could only play within the one mode and tonality needed for whatever folk style they were built for. It is one of the few bagpipes that had classical music composed for it in the baroque era (although in conventional modern performances its part is usually replaced by a recorder — it can still be heard in historically informed performances).
Fun fact: the baroque composer and piper Nicolas Chédeville, who composed many pieces for the musette de cour and hurdy-gurdy, at some point participated in a forgery where some of his compositions were passed off as Vivaldi’s, apparently in an attempt to lend more prestige to the musette de cour so that more composers would write for it.
The pastoral pipe (and its modern descendant, the uilleann pipes), another bellows-blown bagpipe, this one from Ireland, introduced a distinct set of “regulator” pipes in addition to the chanter and drones: a trio of keyed pipes designed to play chords when pressed with the palm of the hand (while the fingers simultaneously play the melody).
Due to their limitations and their perceived status as “folk instruments”, bagpipes began to decline in Europe during the Common Practice period (along with many other folk instruments like the hurdy-gurdy), although what nearly made the instrument disappear entirely was the invention of the accordion in the early 20th century, which completely displaced the bagpipes in many genres of folk music — the French bal-musette, now inextricably associated with the sound of the accordion, was originally performed on a bagpipe (the “musette”). At this point many local bagpipes disappeared completely.
However, during WWI and WWII, many Europeans were exposed to Scottish pipers playing the Great Highland pipe, which sparked an interest in reviving local pipe traditions, with some success at least in a few areas, although bagpipes remain largely seen as quaint folk instruments. The Scottish pipers proved to be a double-edged sword: because they are the only exposure to bagpipes most people have had, people expect all bagpipes to be extremely loud, whereas in fact the Great Highland pipe is a war instrument, designed to be heard and rallied around on a battlefield, and is not representative of how most bagpipes sound.
Sounds like Dio Chrysostom could have been reporting a novel instrument from the new province of Britain.
OTOH, it’s weird that it would be an exotic instrument to the Greeks and Romans and then all bagpipes in the Islamic world and the Caucasus descend from a Greek model, with this adoption and dissemination remaining historically invisible.
Well, for all we know, it’s possible that the bagpipe only reached the Islamic world during the Ottoman era, as elements of Greek culture were absorbed and then diffused by the Ottomans.
(This post is full of typos, but this one needs correction: accordions were invented in the early 19th century, not the 20th)
Thanks for the post. 🙂
What meat and veggie dishes are fairly tasty on re-heating and can be made in a single batch to last the week?
So far all I have is my wannabe-lasagna casserole (using pasta instead of lasagna noodles in a Pyrex), whose quality really varies depending on the tomato sauce used. I’ve also found that zucchini seems to taste different if it’s not diced but only cut into rounds, like cucumber slices.
Pretty much any stew/braise will work for this. Most soups, if they don’t have pasta or rice in them, also work well. Vegetables that work well include mushrooms, green beans, tomato, carrots, potatoes, onion. Broccoli and peas taste different after reheating, and spinach, zucchini and eggplant fall apart–if you want a ratatouille consistency they’re fine. Cabbage is borderline–if not overcooked, you can reheat it, but you can’t reheat it too much or for too long.
Here’s a really basic braised chicken recipe.
Buy a pack of chicken thighs. Put a little fat in a pan that will hold them mostly in one layer, and put on medium-high heat (a little low means it won’t brown as fast, a little high and it will burn; be cautious the first time). Put the thighs skin-side-down in the fat, sprinkle with salt–1/2 tsp per pound. Once the skin is crunchy and browned in parts, turn over (some skin will stick to the pan, that’s fine). Cook for about as long again. Add any vegetables you want to saute and stir them around. Add liquid–1/2 cup to 1 cup per pound of chicken and any seasoning you want. If you want to add potatoes, this is the point where you add them. Scrape the pan thoroughly, cover, let simmer a half-hour or so until the chicken is tender.
With butter and onions and white wine and mushrooms, this is very northern French. With olive oil and red wine and mushrooms, it’s very southern French. With lots of black pepper and some lemon juice, it’s the best lemon pepper chicken ever–grate the yellow peel off the lemon and add at the end. With onions, celery, and tomato sauce it’s chicken cacciatore (as my mother made it; “properly” cacciatore has mushrooms and wine too.) With fish sauce, garlic, and spinach it’s awesome. With eggplant, tomato, and Moroccan spices it’s good, although I have no idea if it’s in any way authentic. With bacon fat and potatoes and carrots it’s very German.
What kind of fat are you using? Is butter/oil a substitute?
Any kind of fat–I use oil because I’m allergic to butter. Use what you think would go well with what you are making, or just use corn oil for everything.
Bacon fat. I keep a small container in the fridge, topped off whenever I make bacon. You should, too! You know, as long as you like bacon. I love bacon, so I tend to have a lot of bacon fat.
Also, duck fat.
I also use a crap ton of butter. Unsalted butter, mind you. I only use salted butter for toast.
While i love bacon fat, i much prefer using a slice of bread to soak it up. The pan’s heat toasts it, while the bacon grease gives it delicious taste. It always goes great with the bacon.
You inspired me to buy bacon to make bacon fried rice this weekend
Cabbage is borderline–if not overcooked, you can reheat it, but you can’t reheat it too much or for too long.
Best thing to do with leftover cabbage is fry it up with any leftover potatoes and some onion. If any cold meat is left over as well, fry that first, then fry the veggies in the cooking fat – e.g. the bacon from the traditional Irish bacon and cabbage dinner.
The Brits call it bubble and squeak; Wikipedia claims it’s a breakfast food, but I’ve never heard of it being eaten first thing in the morning.
Congee (rice porridge)
Generic rule of thumb (with many exceptions): the longer it takes to cook, the better it reheats. Things that cook quickly (fries, steaks) reheat the worst; things that take all day (slow-cooked oxtail soup) reheat the best and can even get better. Stews like oxtail can be really great if you plan ahead: you can make each day’s soup different by adding a new vegetable, and rather than just reheating for 10-15 minutes, actually recook the stew for 1-2 hours; then at the very end you dump two cans of crushed tomatoes into the last couple of servings and make a rich pasta sauce.
I’ve actually re-sauteed fries before and they taste OK… but I agree with this rule of thumb, and it’s good to keep in mind.
My approach goes in the opposite direction: do partial cooks, and then just complete the final step on the day of.
For example, do a partial stir fry of chosen vegetables and meat. Then on the day of, mix with rice or noodles and sauce for just the amount you’re eating that day, for chow mein/fried rice.
(For cold food, I do something similar for potato/pasta salads, where I mix everything, but only add the sauce on the day of.)
This method can also extend to months if you portion your ingredients into serving-sized containers and freeze them. The market for “frozen 2-serving meals” is really good nowadays, and they’re mostly stir fry and pasta variants.
And finally, making Chinese-style dumplings in bulk, freezing them, and then only boiling/pan-cooking single servings the day of is a time-honored tradition. This can also be done with meat/veggie buns.
This King Ranch Chicken Casserole is one of my favorites.
These are all pretty easy to make, are delicious and filling, reasonably healthy, can be made in huge batches, are quite inexpensive (I calculated them all at about $1-$2 per hungry-man serving), and of course reheat well:
1. Enchiladas (use meat, beans, or both)
2. A variation of the above is even easier to make: something I call “Mexican lasagna”
3. Chili
4. Lentil soup
5. Majudara (a kind of Middle-Eastern lentil and rice dish; in its basic form it’s better as a side but you can add stuff to it to make it a self-contained meal)
Split pea soup is pretty awesome if you use bacon fat, but not everyone likes split peas, and you might get sick of not having anything to bite into and chew on after day 3 or 4.
Let me know if any of these interest you and I’ll post my personal recipe.
What do you use to make chili? It’s increasingly sounding like I should get a crock pot of some sort, as so far I’ve been using the stovetop and oven.
Do you have any chili or stew recipes that use a pot on a stovetop?
Yes, in fact I prefer to make chili on the stovetop vs. in the crock pot. I do use the crock pot to prepare my beans though, as it saves a lot of money and cuts down on the sodium while allowing me more control over flavor.
I’ll post a recipe later when I get back from some errands.
I’d recommend an instant pot for dry beans. That’s, like, 90% of what I use my instant pot for. Unsoaked pinto beans come out great after 90 minutes of high pressure cooking.
OK, found a few minutes. My chili recipe serves about 8 adults (or one adult 8 times). I’m writing it exactly as I like to make it, with my tolerance for hot peppers. Adjust to suit your own tolerance.
Ingredients:
• 3-4 16oz cans of beans, all drained except for one (I like to make sure I always include dark red kidneys and avoid draining those, but you can also use pintos, black beans, white beans, etc…garbanzos and lima beans are probably less appropriate) OR the equivalent in dried beans, prepared in a slow cooker with bacon fat and seasoned with some of the same spice mix below
• 1 large (24oz, maybe?) can of crushed tomatoes
• one large sweet potato, cut into 1/2″ cubes (the equivalent volume of carrots works, too)
• one medium-sized onion, diced
• (optional) 80% lean ground beef, anywhere from a quarter pound to a full pound according to your taste/budget
• spice mix (it’s easiest to just combine these in a small bowl): eyeball 1 tbsp cumin and 2 tbsp chili powder; 2 crushed-up beef bouillon cubes; a few good shakes each of ground black pepper and adobo powder (garlic salt works in a pinch); one shake of ground cinnamon
• 2 habanero peppers, finely chopped
• 2 jalapeno or serrano peppers, finely chopped
1. In a large pot, brown the ground beef and season it with a few pinches of your spice mix. If not using beef, heat some oil or bacon fat and skip to step 2.
2. Add the onions, sweet potato, and half of each kind of the peppers, along with a few pinches of your spice mix.
3. Once the onions are translucent, add everything else and stir. When you dump in the crushed tomatoes, add enough water to the can to catch the residuals, swish it around, and add that to the pot too. If you need to, add just enough more water on top of that so the mixture has a soupy consistency.
4. Heat it until it bubbles up, then reduce the heat to low and simmer for at least 90 minutes, stirring occasionally. After 30 minutes add more salt and chili powder to taste.
5. You’re done when the sweet potatoes can be easily mashed into oblivion with a press of a spoon. Garnish with finely chopped cilantro and a bit of shredded cheese, serve with cornbread.
To make it spicier, hold off on adding the rest of the peppers until later. If you accidentally made it too spicy, don’t worry, it will mellow out the longer you store it.
PS. If you make your beans in the slow cooker, don’t drain them. Use that juice in the chili.
Ok – may be a dumb question – how do you cook dry beans in a slow cooker?
My excuse is that to my knowledge, Chinese cooking just doesn’t include dried beans, haha
I definitely save the liquid, but I reduce it quite a bit in the oven.
Dragon,
Making beans is basically the same as boiling pasta, just over a longer period of time. I always throw in celery/carrot/onion/garlic and salt the boiling water. The typical recommendation is to make the water taste like sea water, but IMO that is WAY too salty (also I have hypertension, so YMMV on your own salt).
If you want to cook them in a slow cooker, you will want to soak any beans besides black beans. You put them in cold water overnight, then cook in the morning.
Pressure cookers will cook the beans faster, eliminating the need for the soaking.
Do you like canned beans? Because a lot of people just don’t like beans, and it’s not worth the time to make beans if you don’t like them.
I think I soaked my dried beans the first time; since then I’ve never done it and haven’t had any issues. Doesn’t matter what kind of bean.
I cook the beans by pouring some eyeballed amount into my slow cooker. My slow cooker is fairly small, I think 3 quarts, so if I’m making beans for the recipe above I’d say I lay down beans about 1 to 1.5 inches deep. Then I add seasonings, and some bacon fat if I’ve got it, and then I pour water over that, making sure there’s at least about 2 inches of water on top of the beans, which usually just means filling it the rest of the way up with water. Then I set it to low and let it cook for 8-9 hours, testing with a fork occasionally from about hour 6 onward.
If you’re not going to put beans in your chili, then you’re just making a weird kind of meat sauce that really ought to be served on top of something else. It isn’t chili.
I take it you are not from Texas….
Re: soaking of beans, my understanding is that it depends on the quality, age, and style of the bean. Soaking helps older beans cook faster by letting them reabsorb some liquid. A lot of my beans are ridiculously old, so I would soak them all before I cooked them, and it’d take 2-3 hours off the cooking time when I slow-cooked. It sounds like you might be buying something a little better quality if you’re getting them finished in 6 hours.
No, I’m not from Texas. Or Cincinnati for that matter. If you don’t put beans in your chili you really just made a meat sauce of some kind. Which is fine, but it isn’t chili, and it’s weird to pretend otherwise.
I buy my dried beans at Aldi and sometimes Meijer (or, begrudgingly, Kroger a.k.a. Ralph’s). I usually use them within a few weeks but have sometimes taken a few months to use them. Never noticed a difference.
6 hours is unusual, unless I’ve cooked them on high. I just start checking them then, mostly because I’m insane. Shaving time off the cook time usually isn’t important since I cook them overnight or while I’m at work.
Hey now, Cincinnati chili is eaten with beans! Most of the time, anyway.
Yes but then it’s eaten over spaghetti which is just weird. Those crazy Greeks!
Cincinnati chili has to be the most overrated local food ever. No thanks, ever.
I could probably think of other examples that are even more overrated (Israeli beer comes to mind), but yes, Cincy chili is definitely up there. Though I will say those cheese coneys from Skyline are pretty tasty.
Also, do you use cream cheese in your enchilada? I’m considering something like this
No, I don’t use cream cheese in my enchiladas.
My enchilada recipe:
Ingredients:
• A big ole’ crock pot of beans, prepared the same way as for my chili
• A can of diced tomatoes with green chilis in them, drained
• A big (24oz?) can of enchilada sauce, whichever one you like. I like the green one they sell at Aldi*
• Shredded cheese
• Large (burrito-sized) flour tortillas; I find that La Banderita is a solid, quality brand. Buy the 10-pack.
• Sour cream
• Fresh or pickled jalapenos, sliced
1. Preheat oven to 375˚, and preheat a 12″ or larger cast-iron skillet to medium over the stove. Grease a large (e.g. 9″x12″, 2″ deep) baking dish and set that ready to the side. You might even need a second smaller baking dish if you run out of room in your first one, so maybe have that ready too.
2. Combine the beans, diced tomatoes, jalapenos, and a few large dollops of sour cream, stirring well.
3. Use the skillet to heat up a tortilla until it bends easily but not so much it starts to harden. Move the tortilla to a plate and add a generous sprinkle of cheese. On top of that add a generous serving-spoonful (about 1/3 cup?) of the bean mixture, and more jalapenos if you want.
4. Before you wrap that up, start heating up your next tortilla. THEN wrap up the “burrito” you started and put it in the baking dish. (If you don’t know how to wrap a burrito, go to Chipotle and watch carefully. Copy their technique.)
5. Repeat steps 3 and 4 until you’ve filled your baking dish(es) with tightly-nestled burritos.
6. Pour the enchilada sauce over these as evenly as possible. You can use a rubber spatula to even it out. It’s OK if a few tips of the burritos here and there aren’t coated; they’ll just get really crispy. If you don’t want that to happen, coat them with the sauce.
7. Sprinkle more cheese over that and put it in the oven for 30-45 minutes or until the sauce is bubbling and the cheese is totally melted and just starting to brown in places. Serve topped with sour cream, homemade pico de gallo (or salsa from a jar if you’re lazy), and sliced avocados if they’re on sale (that is, if you don’t have to take out a second mortgage to buy them).
Two of those enchiladas should fill up a very hungry man. My wife and I usually have one each and then split a third, and I’m notoriously gluttonous.
*A long time ago I used to make my own enchilada sauce from scratch, but I honestly like Aldi’s better and it’s actually cheaper.
Alright, I’ve ordered a 5qt slow cooker, so I can start bean and chili-related endeavors once it arrives, haha
Totally worth it; good investment. Another bonus is it’s very handy for attending pot lachs (or however that’s spelled).
I think you mean potluck, which is a party where everyone brings a dish and then you all eat together. Potlatch (not etymologically related) is a native American word for an elaborate ritual feast and gift-giving event, and more recently and colloquially also refers to giving away or destroying food or other wealth to demonstrate how rich and/or generous you are.
“Welcome to the potlatch! I brought green bean casserole!”
“Oh, uh, that’s nice, but I brought a slave!”
Ah, right.
Seconding lentil soup; in fact, it tastes better a day or two after making it. You can probably freeze it, though I haven’t tried. Plus it’s cheap, vegan, and very healthy. I usually make Greek-style lentil soup rather than Middle Eastern style; recipe is as follows:
1. Boil one large cup (~250 g) green lentils for 10 minutes and drain. This is the most important step because otherwise the soup will taste bitter and awful.
2. In olive oil, sautee one medium chopped onion until soft. Add 2-3 minced cloves of garlic and sautee another minute.
3. Add the lentils, a couple tablespoons of tomato paste, 2 chopped carrots, salt, and more than enough water to cover everything. Cook another 30-40 minutes, stirring and adding more water as needed (the lentils will swell a lot).
4. About 10 minutes before you’re done cooking, add a tablespoon or two of oregano and a pinch of rosemary (optional). Also add some extra olive oil if you want to make the soup heartier and to make it smell nice. Serve with red wine vinegar and feta cheese. The cheese is optional, the vinegar is not.
That is quite different from my lentil soup recipe, which uses only one pot and which I call “Birthright Soup” because you’d sell your birthright for a bowl of it:
Ingredients:
• About half a pound of green or brown lentils (I’ve done red lentils before but I don’t like it as much that way and don’t recommend it)
• Half an onion, diced
• 1-2 medium-sized carrots, finely diced
• 1-2 cloves of garlic, finely diced
• A few ounces of peppered beef jerky, chopped or torn into small pieces (if you can get it, Jim Beam peppered beef jerky tastes the best in lentil soup of any I’ve tried); if you don’t have beef jerky then use a dollop of bacon fat. Even better, if you’ve got it, is to use leftover bones and meat scraps from a steak, lamb/pork chop, or something like that.
• Spice mix, all eyeballed: a whole lot of cumin (not sure how much, but a lot); ground black pepper (also a lot; coarsely ground is nice if you can get it); 6-8 chicken bouillon cubes; a teaspoon or two of turmeric; a pinch or two of ground cloves; a few good shakes of allspice
• A few squirts of olive oil
1. Fill a big pot halfway with water and bring it to a boil. While you’re waiting for it to boil, go mow your lawn, sort through your bills and junk mail, clean your kitchen, work out, weed your garden, and walk your dog. Once the water is boiling, stir in the lentils.
2. Once the water is back to a boil again, add all the other ingredients. Cook over medium-low about 1-2 hours, stirring occasionally until the carrots are basically dissolved and the lentils are at least cooked (if not borderline mushy). Salt to taste. I usually add more pepper too.
3. Garnish with chopped cilantro and serve with biscuits, rolls, pita, or hearty bread.
By the way, one trick to reheating stew and chili in the microwave is to use one of those bowls that has a little handle on it. We have some that look like giant coffee mugs. You can often find them at the dollar store and it’s totally worth it to have two or three in your cupboard.
You need to microwave thick soups and chili for a long time to get them properly heated, and this way you have a place to pick up the bowl once it’s scorching hot.
A nifty little tool is the lunch-sized crock pot. They take a while, but they heat up consistently. I use mine a lot at work when I want to heat up a soup and don’t want to use the work microwave.
OK, if we’re sharing lentil soups/dishes, here’s mine. Generally called “Wedding Lentils”, because I made it for the first time the morning of our wedding, out of what I had around, after realizing that nothing at the reception except the bread was vegan and a significant number of guests were vegan.
2 cups red lentils
2 cloves garlic
2 bay leaves
1 piece ginger root about an inch long, sliced
1 tsp turmeric
1/2 tsp cayenne
3 cups water
Cook until lentils are soft–30 minutes? Add more water if needed.
Add
1-2 tsp salt
1/4 tsp cinnamon
1/4 tsp coriander seed
1/8 tsp nutmeg
1/8 tsp cloves
When ready to serve, add:
1 tbsp lemon juice
1 can coconut cream (the kind without sugar)
1 bunch chopped cilantro
This keeps a long time and re-heats nicely. I like to serve it with rice or rolls and raita, but it’s good all by itself–if you want it soupy, add more water.
My chicken-lentil curry is good reheated. So is chili. So is beef stew. Mulligatawny soup.
Do you cook chili or stew on a stovetop or in a crock pot?
Stovetop. But I haven’t done either for many years. My wife does beef stew, and one of my adult children strongly dislikes chili.
Shepherd’s Pie – though to be technical, I suppose the way I make it, it’s more Cottage Pie. I’m not giving temperatures/times because mainly I cook by “that’s hot enough and long enough, it’s done when it’s done” 🙂 EDIT: For you gourmet chefs who insist on knowing what temperature to turn the knob to, a sample recipe from a commercial sauce mix gives you all that.
Minced beef, onion, peas, carrots, mashed potato, and gravy/sauce/some packet mix. Fry your mince and onion (nothing stopping you throwing in some minced garlic as well if you like), whack the made-up packet mix or gravy in when meat is browned nicely (if you want to be extra fancy add a splash of Worcester sauce or whatever kind of similar bottled condiment you like), add your frozen out of a packet peas and ready-sliced carrots, bung into a roasting dish or casserole and put into the oven. Meanwhile, make your mashed potato and this is where the real tasty part comes in – yes you can use packet dehydrated and reconstituted mashed potato if stuck/not up for making real potatoes, but come on.
Decent potatoes that are a nice balance between just floury enough to fluff up gorgeously when mashed but not so floury as to fall to pieces when cooking; peel them (this is the labour intensive part), put ’em in a saucepan, salt them, pour over boiling water and cook until done (but not falling apart); steam them to dry them off – and here’s the secret ingredient via my father – coarsely chop an onion/onions (depending on how oniony you like your food) and scatter over the steaming potatoes, then when the spuds are nicely dry add salt and pepper, mash them, add butter and/or milk to taste, spread them on top of your oven-baked mince, back into the oven to brown nicely.
Enjoy! Leave any leftovers (if you are not a pig and have been able to resist nomming it all down) for the next day. Depending how big a batch you make, you can get a couple or more days meals out of it. I think it tastes even better the second day and guilty confession time: I like smothering it with tomato ketchup (me and Donald Trump agree on something!)
I was going to say shepherd’s pie too but it is a lot of work.
I like a good gyudon, or beef bowl. Authentically, it calls for thinly sliced beef simmered in a sweet and savory sauce over rice, but I use ground beef. https://www.justonecookbook.com/yoshinoya-beef-bowl-gyudon/
2 servings worth (scale up as needed)
steamed rice
1/2 white onion
3/4 lb ground beef
1/2 cup dashi (fish stock)
3 tbsp mirin
1 tbsp sugar
2 tbsp soy sauce
optional: pickled ginger for garnish, chopped green onion
start your rice so it’ll be ready when you’re done
cut the onion into thin slices
mix and bring everything but the beef, onion, and ginger to a boil
add in the onions to cook once it starts boiling
when the onion is tender, add the beef.
you can skim fat with a fine sieve if you want but I leave it in
once cooked, pour over a bowl of rice
I find it reheats excellently, with the juices helping to “resteam” rice it is stored with if I microwave it in a closed container. Although the leftovers will likely have to be eaten with a spoon or fork, as the reheated rice isn’t as sticky.
As others have said, soup is the way to go if you’re making a week’s worth of meals at once, because of how easily it scales. I recently tried this Ethiopian lentil stew and really liked it.
Also, a slow-cooker zuppa toscana I liked.
Moving away from soup, this slow cooker lentil taco filling can be scaled up as big as your crock pot is, and then you just need to add cheese, salsa, and a tortilla.
Also, why not make actual lasagna as well as wannabe-lasagna? There are a lot of different recipes, you can make it by the trayload, and it reheats well.
In the Is Science Slowing Down? thread, Conrad made an interesting claim that I think merits wider discussion here, in the OT, namely: Are we living in a golden age of video games?
As a broad survey: I’m not sure I agree with him, but I’m not sure I disagree. On the one hand, the Witcher 3 is perhaps the best open world RPG I’ve ever played. Whether you’re talking scope of the world, the level of detail, the quality of presentation, or the actual important stuff like the interest of the gameplay and the quality of the writing, it totally blows anything else I’ve touched out of the water. And it’s just one of a variety of games of that nature – Red Dead Redemption 2 is getting rave reviews, Fallout: New Vegas and Skyrim were both (flawed) masterpieces, etc.
On the other hand, the shooter genre is in a state of stagnation or even decline. We’ve degenerated from innovative, creative games like Deus Ex, System Shock, Half-Life, and the first Halo to boring theme-park rides like whatever Call of Duty is churned out each year. Maybe there’s great work being done in the indie scene here; I’m not familiar with everything.
Strategy games seem to be in a more ambiguous place. To my mind, the real peak of this genre lies about ten or fifteen years ago, with things like Civilization 4 and Rome: Total War being the cutting edge. Nothing produced since then has really held my attention or interest the way those two games did.
So on the whole, the picture is mixed. I know we’re in the midst of an indie revolution, and there is great work on the AAA level, particularly in the RPG genre. But at the same time, if you cast your mind back 15-20 years, you see a ton of classic games – the aforementioned Half-Life and Civilization games, along with games like Ocarina of Time, Baldur’s Gate, etc. I think gaming is in a good spot now, but it faces serious competition from 1995-2005 as a “golden age.”
All the youngins I know are currently into Warframe, which is apparently a shooter MMORPG. It may be that old people like us are just not plugged into the latest and greatest.
(Also, Breath of the Wild, Majora’s Mask, Wind Waker, all of these were better than Ocarina).
I haven’t played Breath of the Wild, and I dispute Wind Waker, but agreed that Majora was better than Ocarina. Majora is my favorite LoZ game by a country mile.
How do you figure Ocarina is better than Wind Waker? It’s been a long time since I played Ocarina, but the Wind Waker HD remaster on Wii U was sublime. I’ve bought the Ocarina remaster for my son’s 3DS but I’m waiting to give it to him until he can read better. Maybe I should just steal it and give it a go myself to refresh my memory…
Were there any games comparable to Ocarina when it was released?
It was certainly the first “open world” (very railroaded by today’s standards) 3D game I had played, though I was young and didn’t have much video game experience at the time. Majora’s Mask felt shorter to me – though a lot more of its content was tied up in sidequests (finding masks) than in the main storyline – and I preferred Ocarina at the time, though I suspect on a replay I’d prefer Majora now.
I’d say that Ocarina is a classic because of how groundbreaking it was. Breath of the Wild is a phenomenal game, and it’s a new step for Zelda, but I wouldn’t say it’s groundbreaking in general – it’s an open world adventure game like many others. It does stand out for having an incredible amount of polish (and a really great art style), and it will no doubt be remembered by Nintendo fans for many years to come, but I don’t know if it will reach quite the cultural status that Ocarina of Time has.
BotW is the best-selling Zelda game of all time, and on a new platform at that. An awful lot of people are going to be remembering it fondly for a long, long time.
Ocarina was indeed more groundbreaking: it invented Z-targeting, and set the broad rules for how a 3D Zelda would be. But “groundbreaking” is different from “best,” and I think “Golden Age” implies a level of refinement.
Heck, many of those newer games are better overall precisely because they didn’t have to spend time figuring out mechanics that the groundbreaking games established.
It’s true that there are plenty of open world adventure games out there, but I can’t think of any that are open world in quite the same WAY as Breath of the Wild. In most open world games, the world is mostly just a backdrop that facilitates your ability to get from one interesting quest/location/item/etc. to another (even in something like Witcher 3, which is a great game, but is great largely for reasons unrelated to it being an open world), while in Breath of the Wild, the sheer act of traversing the world is a major part of the fun in and of itself. It’s not quite AS groundbreaking as Ocarina, but it certainly did things with open world design that nobody has done before, and it will likely be emulated (or at least attempted) for some time to come.
But if you’re going to talk about the act of traversing the open world being integral to the gameplay, you’ve got to give a nod to Assassin’s Creed.
Yeah, but that’s all the game had in it. Unlike Ocarina or Mask (which blended the interesting open world into a game with fun dungeons, good puzzles and an interesting plot) there’s only the open world.
And shrines and beasts and quests and weapons and armor and upgrades and cooking and the story..?
I don’t know, I think BotW had much more interesting puzzles than a traditional Zelda game. Traditional Zelda will give you a scripted puzzle with a single specific solution. BotW scrapped that and built the entire game world to follow a consistent and predictable set of rules, so that any problem or conflict the game threw at you was solvable through your own ingenuity. Like how some players were able to get to the top of Ganon’s castle minutes after leaving the tutorial plateau by using the physics of the momentum gadget to launch themselves across the map. The game treats this as entirely legitimate. Or how some puzzles where you’re meant to use a metal ball to redirect an electrical current can be solved using your metal weapons instead, because “metal conducts electricity” is a general rule of the world, not a puzzle-specific game mechanic. After noticing some wild animals picking at food items, you yourself can use food as bait to lure animals to you. Feed apples to your horse to boost affection, feed endurance carrots to your horse to boost its stamina. If you can think it, you can almost always do it. Maybe the open world was the real puzzle all along.
Not that scripted, single-solution puzzles are bad. But I loved BotW for having some of the highest rewards for ingenuity of any game I’ve played.
I’ve played one main-series Zelda game, Twilight Princess. And uh… holy shit, why has it not been mentioned in that list?
Because it’s not anywhere near as good. It has an endless tutorial section, the art direction is generally less inspired, the dungeon items aren’t as well integrated into the game, the side quests are more grindy, and there’s no easy way to control the time of day.
If you’ve only played Twilight Princess, trust me when I say that nearly every other 3D entry in the series is better.
Gotta say I’m doubtful, but I might try out Ocarina (I did play a bit of Ocarina and Majora’s Mask through some demos back in the day; I don’t buy that either was as good as Twilight Princess ended up being, though).
It’s not that Twilight Princess is bad, it’s just that some of the others, particularly Majora’s Mask, Wind Waker, and Breath of the Wild, are just so, SO good.
To be fair, though, I’d argue that it’s probably around the same level as Ocarina, if you’re looking at them in a vacuum, with neither nostalgia nor the historical impact of Ocarina on the gaming industry taken into account. Ocarina is more open, less handholdy, and gives you more options for puzzle solving and interesting gameplay interactions, while Twilight Princess is more polished, graphically superior, and has less fiddly combat. (Again, not taking into account that Ocarina pretty much defined how “combat in a third person 3D adventure game” would work from then all the way until today.) Both games have advantages and disadvantages that more or less cancel each other out, in a vacuum.
Combat in Twilight Princess was actually a big disappointment for the time. It was the first Zelda game for Wii, so there was hope that the motion controls and sword would work in cool ways; instead twitching the wiimote was basically equivalent to pressing A.
This is why Skyward Sword dedicated pretty much all its effort to making your sword swing like you swung the wiimote.
(TP was itself busy overcompensating for WW’s cartoonish style and swung too far in the grimdark direction. I guess each Zelda is a reaction to the previous Zelda.)
It may have been a disappointment, but that’s only because it didn’t make extensive use of the Wii’s fancy new motion controls. That doesn’t mean it was bad, though.
Meanwhile, Skyward Sword DID use the Wii’s fancy motion controls, but like most combat systems which used those, it only succeeded in making me want to throw my Wiimote at the TV.
Also the wolf gimmick was boring. One of the fun things about Zelda games is using all your nifty items. And then you’re stuck for large swaths of the game in wolf form, unable to do anything except mash two buttons.
That said, I have the wolf Link amiibo, so it was neat to store up all my hearts in TP and then use the amiibo on Breath of the Wild to make a wolf friend appear and fight with me.
I’m not sure it counts as a video game, but the improvement in computer play for bridge has been astonishing. Just free, online bridge (bridgebase) against robot players is like sitting at a table of good bridge players.
I recently picked up bridge, playing at a local club. Does Bridgebase have a good way to learn, like some kind of tutorial room or what have you?
Yes, sort of.
It uses only one bidding convention, and you can highlight any bid to see what it means in context.
It has no guidance for play other than not allowing illegal play, but since it’s free and there aren’t any other players to aggravate, you can play a LOT of hands and just observing the other players will teach you most of what you need.
The games I hear my buddies who are into video games talk about a lot are PUBG, Overwatch, and Fortnite. Don’t they qualify as shooters?
Yeah I’d say Overwatch and the various battle royales are a pretty major step in the multiplayer shooter genre.
If you want a classic single-player shooter, the new Wolfensteins have been excellent.
I’m not sure the issue is that shooters aren’t as good; it’s that the marquee shooter franchises are stagnant.
Overwatch doesn’t quite qualify as a shooter, it’s a shooter-MOBA hybrid.
As I mentioned before, PUBG and Fortnite are major cultural phenomena. Hard to say shooters are dead when this is the case (although Fortnite isn’t quite a shooter either, as building is arguably the most important skill).
Fortnite started out as a fun PVE game, and then BR happened.
Now the PVE side is basically treading water.
“it’s a shooter-MOBA hybrid”
Fair to a point (would also be fair to call it TF2++), but doesn’t that prove that it’s an innovative and interesting game? Part of the reason the traditional shooter genre is so stagnant is because there’s only so much you can do with the mechanics of quasi-realistic multiplayer focused shooters like CoD and Battlefield.
I barely play video games because they are so good. Any better and I’d be tempted to just quit life and enter them completely, which I don’t want to do.
I wouldn’t call it a revolution so much as a refinement. It seems like lessons have been learned from mistakes of the past and designers are turning out stuff that is less likely to be buggy messes (yes yes I see Fallout 76) and with refined art and game mechanics that make them more creative, more beautiful, and more fun, while also being less expensive and more widely available.
Witcher 3 was one of the games I was thinking of. Breath of the Wild is probably the single best video game I’ve played in my life. The indie scene you mention is also rife with beauty and innovation, available cheap and just a click away. Hollow Knight is the best metroidvania ever, surpassing Metroid and Castlevania: SotN, which defined the genre. The things that were old have been updated in wonderful ways. Mega Man 11 was delightful, an absolute joy to play. It took all the great things about the 30-year-old franchise and updated and improved them. It’s a better Mega Man game than any of the classic Mega Man games (which I still play on emulators, and I sprung for the Legacy Collections on the Switch, too).
I think it’s also important to note how inexpensive and available these games are. I remember saving up the money I earned from math tutoring to plunk down $74.99 for Final Fantasy II (IV) in 1991. Adjusted for inflation, that’s about $140 in today’s money, and it took me about 40 hours to beat. I bought Hollow Knight on my Switch for $12 and got about 50 hours out of that (although a good four of those were just beating the Path of Pain. I can feel the pain in my hands just thinking about that course. Owwwww). And 40-50 hours is nothing these days. I got over 100 out of Witcher 3, BotW, and Assassin’s Creed: Odyssey.
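(A rough sanity check on that inflation figure, assuming annual-average CPI-U values of roughly 136 for 1991 and 251 for 2018; these are approximations, not numbers from the comment itself:

$$\$74.99 \times \frac{\mathrm{CPI}_{2018}}{\mathrm{CPI}_{1991}} \approx \$74.99 \times \frac{251}{136} \approx \$138$$

Per hour of play, that works out to roughly $138/40 ≈ $3.50 for FFIV versus $12/50 ≈ $0.25 for Hollow Knight.)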
And Steam sales! And eBay! I recently wanted to go back and play a few games I’d missed, like AC: Unity, AC: Syndicate, and Mass Effect: Andromeda. I got them each for $6 – $10 on eBay, delivered to my door. Each one is dozens of hours of AAA gameplay for next to nothing.
The games are better, more beautiful, more artistic, more creative, from giant AAA studios with double-digit millions in budgets to four friends in their basement getting a hundred grand off Kickstarter, and at the same time the games are cheaper and more easily available to more people in more ways. How is that not a Golden Age?
ETA: Yes, Call of Duty is stale, but as johan and Jaskologist say, the kids these days are hooked on Fortnite, Warframe, and Overwatch, which, while I don’t play them, I understand are excellent.
Strategy could use a kick in the pants? Paradox is doing the same thing over and over again, and I enjoyed Civ 5 more than Civ 6. I would also like to see RTS games come back in a big way. But I thought Assassin’s Creed was over after the mess that was Unity followed the phenomenal Black Flag, but they took a year off after Syndicate and reinvented the brand with Origins and Odyssey, which are smash successes. Odyssey may win GOTY. It’s amazing. I’ve already been through it twice and want more.
ETA2: And I get my wish! The first DLC expansion for Odyssey comes out December 4th! Trailer.
I had high hopes for Imperator: Rome, but it looks like they’re going the more boring route where you play Rome as you go around curbstomping all your neighbors rather than the more interesting idea of trying to build up a great family within the republican system.
Did you really expect a game about Rome to not be a map painter? 😛
They’re doing some interesting things with the population systems a la Vicky, but I agree it’d be really cool if they did something to separate family from state, kinda like a CK/EU hybrid. Not sure how well that’d play with Clausewitz though, since territory is the basic management unit. IIRC the CK2 Merchant Republic code is a hacky mess.
Now what would be really nice is if they upgraded/reinvented their engine to accommodate more such mixed-ownership-of-territories dynamics, but that’s a tall ask for the modern release cycle.
Strategy could use a kick in the pants?
While I’d agree, it is currently on an upward swing. Amplitude Studios have brought some life to the 4X genre, which was really running the risk of falling into mindless-Civ-clones territory. Other than that, while the Paradox main studio does seem to displace things in time more than in mechanics, the publisher is supporting some good stuff. Battletech was a bit of a diamond in the rough on launch but is definitely coming together, and I’ve got a huge soft spot for Surviving Mars. That’s not even mentioning the new Steel Division or Age of Wonders, which are pretty hype right now. Oh, and can’t forget Evil Genius 2.
I guess if strategy really needs anything, it’s a few more publishers in the market to shake things up. There have been some good board-game adaptations and a few interesting indie gems, but it’s a market of refinement right now, not innovation.
And most important of all, Magic finally has a good digital platform. I feel confident that Arena will yield more hours of greater enjoyment for me than any 20 AAA games combined, for the princely sum of $5 (which you don’t even have to spend if you don’t feel like it).
It’s okay. I mean, I’m playing it because Magic is a good game, but it plays very slowly compared to a game designed for digital like Eternal.
On the one hand, that is true. On the other hand, Eternal is quite a good game, and Magic is by far the best game I’ve ever come across. I play Eternal because it has a decent mobile client and for no other reason; I can’t imagine spending a single second playing Eternal if I could be playing Magic*. Of course, that’s likely to be a lasting advantage: it’s hard for me to see a truly playable smartphone version of Arena. Parsing complex board states is difficult enough on a laptop.
*Proper Magic. Playing Eternal against humans or the AI for FTP rewards usually wins out for me over playing Duels of the Planeswalkers 2015 against the AI for no reward. 2015 is a pretty sub-par entry in the series, though – if it was 2014 or Magic Duels that would move the needle somewhat away from Eternal.
Burn the witch!
More seriously you sadly do have a point. Both Paradox and Firaxis are evidence of the strategy golden age being some number of years ago. Blizzard too. Both Starcrafts are still quite solid but it’s been how long since they came out? And the MMO money machine means we’ll never see a new RTS Warcraft 🙁
The Paradox DLC model works well when there’s a solid base to it (CK2, EU4) but stumbles when it’s used as a crutch to put out half-finished products because you can patch them later (HOI4, arguably Stellaris). In a related phenomenon, maybe I’m just not remembering the buggy crap from ye olden days, but “designers are turning out stuff that is less likely to be buggy messes” seems to me to be mostly a result of being able to push out patches for the buggy crap they still churn out.
Meanwhile Firaxis is increasingly pushing for the casual market. “hurr durr entitlement blah blah blah” but the gameplay does noticeably suffer. 5 is the worst Civ in everything but looking pretty. 6 at least takes the mechanics 5 shoehorned in and starts making them somewhat workable. But then the V expansions were good while the VI expansions have been a mess of bolted-on systems and power creep, because new flashy shit drives the hype train far better than elegant systems or competent AI. And the information screens and graphs are just utter, utter garbage in 5&6 compared to 1-4+Colonization. It’s an Empire management game, give me data on the state of my Empire, goddammit! The newbies won’t care, but quit obfuscating everything for the rest of us.
I remember back in the days of consoles and pre-Internet PC gaming, yes, games worked at launch because they had to. They couldn’t be fixed afterwards. Then from 2005-2015 or so you were a fool to buy any game at launch; there was about an 80% chance of unplayability. All I’m saying is I haven’t experienced that in the past ~2 years. I can’t remember any recent games I’ve bought that were bad like that (and I buy a lot of games). Star Wars Battlefront II had a rough launch, but that was because of the loot boxes, not because of bugs. The game had some bugs, of course, but they were hardly game-breaking and it played great.
Fallout 76 seems to be the exception. Holy crap the stuff I’m seeing about that. Just go look on Youtube for that game and I don’t know if you’d laugh or cry. I never got into the Fallout games, but I’m so glad I wasn’t looking forward to that one.
Apparently RDR2 (which I haven’t played yet) had something of a game breaking bug that could inadvertently cut the player off from a number of quests.
A combination of being cheap and not having a ton of time means I end up playing marquee titles a couple years after they come out… the upside is I can buy the “complete” edition of the game with all the DLC for $20-$40 instead of $60+DLC, with all the major issues pre-patched.
@gbdub, that’s my strategy too, and it works pretty good, especially since I’m not too into any competitive scene.
Although it’s getting harder and harder to launch games in Call of Duty Ghosts.
Yeah it’s not a great strategy for games that are multiplayer focused, unless it turns out to be one of those eSports classics that the developers decide to support and evolve quasi-indefinitely rather than crank out a sequel every year or two.
I think the main difference between the industry now and the industry 20 years ago is that difference between revolution and refinement.
How many popular franchises today were born in the late '90s/early '00s? How many genres were created or defined then?* I feel like today we're seeing already existing genres refined and brought close to perfection – the Witcher is a great example. It's the pinnacle of the open world RPG, but it didn't create or establish the genre. Indie platformers have refined and perfected the platformer formula, but they didn't create it. League of Legends or DOTA 2 or whatever (not an aficionado of this genre) perfect the MOBA, but it has its origins in ancient Warcraft games.
So I dunno. I agree there's a crapton of extremely high-quality games being produced today, all around the field, both in AAA and indie scenes. But a lot of it doesn't seem innovative, and I value that highly, too. How many games today will be classics that enter the canon of "games you must play" for years to come? A lot are brilliant, but how many are offering stuff that we haven't seen done almost as well in other places?
Like I said, I certainly see the argument for a golden age. But I’m not sure if this era is “better” than 20 years ago. Maybe it’s just nostalgia’s rose-colored glasses speaking.
*This is why I include Halo as one of the highlights of the age – it revolutionized and defined the FPS genre. Lots of Halo conventions – limited weapon selection, regenerating health, intelligent, naturally-evolving encounters, integrated vehicle-based gameplay – became just bog-standard FPS conventions (cf. the entire Battlefield series). Playing pre-2001 shooters is a very different beast – compare, say, Doom or Medal of Honor with modern games.
Okay, but you say yourself the Witcher 3 didn’t invent the open world RPG, but it is the pinnacle of OWRPGs. So wouldn’t that be the must-play? BoTW didn’t invent Zelda but it’s going to be the must-play. I think Hollow Knight is the must-play metroidvania, far more so than Metroid or SotN. Isn’t the time of all the must-plays the “Golden Age?”
Usually we don’t think of a “Golden Age” as the revolutionary period. The revolutionary period is chaotic and uncertain and violent. The Golden Age comes after the revolutionary period, when things are comfortable and refined and people are easily enjoying the fruits of the revolutionary period.
Oh and Halo had regenerating shields, not regenerating health. You still had to find health packs. The first big game I think of with the whole “screen goes red and then you take cover until it goes away” trope was Gears of War. That’s since been copied to death by every other cover-based shooter. Gears 4 was meh, but I’m hoping Gears 5 next year is good.
The first big game to introduce red-screen regen was Call of Duty 2, one year before Gears of War. At the time I considered it mind-blowing and revolutionary. Holy shit, no more health packs!? The gods of videogaming have smiled upon me! Cowering behind cover while being shot at was a way more intense and engaging way of regaining health than having to backtrack to find that one corner where you saw a green box with a red cross on it.
These days? No yeah, it’s still awesome, fuck health packs. The only thing that’s better is getting health by killing enemies, since it means the logical reaction to being near death is to go balls to the wall aggressive. It’s not suitable for every kind of shooter though.
Now what I really want is a BOTW-style Metroid. Big open world, but with pew pew lasers.
The Metroid series could use a little love anyway.
Did you own a gaming PC around that time? It’s hard for me to understand how anyone who had already gotten bored of Half-Life and Quake 3 Arena and Tribes and Unreal could play Halo and feel like it revolutionized anything.
Sure it did: a big epic setting, music and art with an incredibly well defined, distinctive, and interesting style, and an attempt at a plot and setting on a grand scale.
Halo’s plot is deeply stupid, as are a lot of the details of the setting, and it’s arguably gotten worse over time, but it was damn impressive as an attempt. A game I played once, fifteen years ago, I still remember any number of fights in detail from, because they felt important and distinctive in context. I think they deserve some credit for that.
Man, I feel exactly the opposite. The world of first-person shooters back then was, if anything, too full of variety. There were shooters focused on fast-paced action and great graphics (Quake 3 Arena, Unreal), wide open worlds (Tribes), immersive story (Half-life, Deus Ex), novel mechanics (Max Payne), competitiveness (Counterstrike), and so on, so that to pitch a new game you probably needed to explain how you would be different just to get your foot in the door. And against that backdrop, Halo seems most notable for the way they sanded off all the rough edges of the genre and produced a bog-standard, middle-of-the-road shooter that does the basics competently and contains nothing unusual or surprising.
IMO the revolutionary part had nothing to do with the game, and everything to do with the console. The killer feature of Halo wasn’t the game, it was the way the game was played – no new graphics card every six months, no fiddling with patches and drivers, no weird mechanics to learn, no need for Gamespy because the in-game multiplayer launcher didn’t work, no fucking with cabling to get a local multiplayer, just a basic game you can play for five minutes and be an expert at, exactly the same when playing alone or online or at a friend’s house. It was the McDonald’s hamburger of shooters.
I don’t think it was hugely better than what came before it, or the first good shooter, anything like that. But it was distinctive and did something new well.
(Deus Ex is not a shooter, and Half Life, while fun, does not have story the same way or in anything like the same scale or detail Halo does. I think the story in HL is arguably better for being sketched out–you can’t get the details wrong if you don’t have them–but Halo was very different. Max Payne is a better counterexample.)
I don’t think you give Halo enough credit for innovation in the “teabagging of n00bs” department.
We also never discussed the part about how console controllers are plainly inferior to mouse+keyboard for any game that mostly revolves around aiming at things… I’d bring it up next thread, but it’s probably CW.
See… it’s unfathomable to me how anyone can aim competently with a stick on a console controller, but I think in fact they can. And then I remember my 13 year old self and then 12 year old brother being so baffled by the new witchery that was Skynet’s mouselook control that we had to play it together, one using the keyboard and the other the mouse. And then I think about my now 10 and 8 year old half brothers shooting merrily away – apparently quite accurately – on a freakin’ tablet. And I think maybe this is more about familiarity/muscle memory, and less an intrinsic property of the input device.
Nah, those games just have built in aim assist. All of them. Every shooter not explicitly designed to be played with M+KB does.
@Tarpitz My understanding is that when you actually let people use controllers and mouse/keyboard in the same game the difference really is clear. Overwatch, for example. The games that do allow multiple input types often do aim-assist for people using controllers. E.g., Fortnite
No, they can’t. What changed (driven in large part by Bungie’s design work on Halo) was developing an elegant series of cheats and nudges to compensate for the crappy precision of thumbsticks (when used at speed) in such a way that they’re not obvious to the player. There are multiple forms of aim assist, and they’ve crept into PC shooters as well, but the basic forms are:
1) “bullet magnetism”: Generally, the point of aim of a player is the exact center of their field of view in first and third person games. This aim assist actually “pulls” the point of aim away from the center of the screen and towards the hit boxes of enemies as the point of aim moves past them. This differs from just making the hit box bigger than the geometry of the target because the strength of the pull can be set to vary with distance to the target so you get more of a subtle nudge, which combined with randomized bullet spread on weapons further conceals the exact nature of the cheat. In Bungie games like Destiny for example this is actually a hidden “aim assist” stat that is varied on a per-weapon basis and applied regardless of control input.
2) Point of Aim “stickiness”: This is another one, and how strong it is depends on the game and the subtlety of the developer. Rather than pull the actual point of aim away from the center of the screen as in the example above, this technique decreases the sensitivity of your input device and slows down point of aim movement dynamically while you’re lined up on a hit box. This is often more obvious than bullet magnetism above in the heat of the moment.
To use Bungie again, methods 1 and 2 are both in use in Destiny 2 when played with a controller, while only method 1 is in use when playing with mouse and keyboard.
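To make method 1 concrete, here's a minimal screen-space sketch of what a bullet-magnetism pass might look like – the function name, constants, and falloff curve are all invented for illustration, not taken from any real engine:

```python
import math

def bullet_magnetism(aim, targets, pull_radius=40.0, base_pull=0.35, falloff=0.02):
    """Toy sketch of 'bullet magnetism' (method 1 above).

    aim: (x, y) point of aim in pixels, normally the screen centre.
    targets: list of (x, y, world_distance) hit-box centres on screen.
    Hit boxes within pull_radius pixels attract the shot, and the pull
    weakens with world distance. All constants are made up; a real game
    would tune them per weapon (cf. Destiny's hidden aim assist stat).
    """
    best, best_d = None, pull_radius
    for tx, ty, dist in targets:
        d = math.hypot(tx - aim[0], ty - aim[1])  # screen-space miss distance
        if d < best_d:
            best, best_d = (tx, ty, dist), d
    if best is None:
        return aim  # nothing close enough, the shot goes where you pointed
    tx, ty, dist = best
    strength = base_pull / (1.0 + falloff * dist)  # weaker pull at long range
    return (aim[0] + (tx - aim[0]) * strength,
            aim[1] + (ty - aim[1]) * strength)
```

Method 2 would instead leave the point of aim alone and scale the stick's look sensitivity down by a similar distance-weighted factor whenever the reticle passes over a hit box.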
There have been a few attempts in cross-platform play to allow for KBM vs. Controller match-ups with no handicapping/assists as described above. Universally, they were described by developers as extremely one-sided affairs in favor of keyboard and mouse players.
Thank you for sharing the trailer! I have not played any AC games, and I have a probably dumb question: why do the voice actors seem to have Russian-ish accents?
I believe they’re Greek.
It wouldn’t surprise me if Greek and Russian had broadly similar accents in English.
Also, the voice actors are IRL Greek. The accent may sound a little over the top but it’s not like they’re faking it.
In general I thought Kassandra’s voice work was a little better than Alexios’.
Thank you (to you and @Chevalier Mal Fet)! More evidence that I’m really bad at recognizing accents.
From the interview you linked it seems both voice actors are using significantly heavier accents for the roles – and I wonder if they are adopting a more "Ancient Greek" sound also? They still sound pretty different from, say, Tsipras speaking English, at least to me (caveat again, I'm not good at hearing and distinguishing accents…)
(And now that I’ve spent twenty minutes looking up interviews with Russian and Greek people in English… yeah I don’t know why I thought of Russian, it’s not particularly close…)
I have put an incredible number of hours into Destiny. Fantastic game. Steam also always has an XCom clone of some sort or other to play through. I’d agree with the ‘golden age of video games’ notion. It is unbelievable.
I’m not sure a golden age in something this subjective is apparent until long afterward. It’s one thing for an art historian to say (e.g.) that 1880-1890 was the golden age of painting, or even of some aspect of painting (pastels?), but quite another for someone in 1892 to say it.
And as for specific games, I played Witcher 3 for like two hours and lost interest. The movement seemed clunkier than Skyrim, the plot no less silly than any other mass-market rpg, and the combat tedious. And including the first Halo on a best-of list sounds like an Onion article.
What can you possibly know about the plot after only two hours? You didn’t even get past White Orchard.
Well, perhaps plot is the wrong word, but I heard some dialogue and got an idea of what sort of game it was going to be. You don’t know much about the plot of a book after the first page, but you might still decide you don’t want to read any further.
At any rate, I’m not saying it’s a bad game (like I am with Halo). Would you say it’s of the same caliber as, say, Skyrim, or is it significantly better? If the latter, in what way? The reason I ask is that I think the “golden age of games” games list will probably be more about games that launched or significantly redefined genres and did new and exciting things, rather than just games that were very good examples of a certain genre but otherwise unremarkable. And I get the impression many people would consider Skyrim to be the former and Witcher 3 more the latter, perhaps even while thinking Witcher 3 is the better game. (But I could be wrong, not having played it, hence the asking)
I got about 15 hours into Skyrim, got bored and quit, but I've done two complete playthroughs of Witcher 3, each about 100 hours. I didn't really like the game mechanics in Skyrim and didn't feel immersed by the story. I think part of that has to do with my own preference for RPGs with defined characters. In Witcher you play as Geralt of Rivia, who has a backstory, specific skills, strengths and weaknesses, and friends and romances and things you can really hang a plot around. There's a real narrative there with meaningful writing.
On the other hand, the games where you create your own character, and the game is still basically the same whoever you play, tend to bore me. The seams are too easy to see when none of the voice actors call you by your name. It's all "well met, adventurer!" or "Dragonborn." Whether you're a dwarven thief or a pyromancer lizardman, you're still "The Dragonborn." And the game designers have to make the enemies beatable by any of the available classes.
No, I think I’m rare in not liking Skyrim, but love for the Witcher 3, particularly the writing and the quality of the quests and especially the side quests is at meme-status. Like, to the point where you can predict how a reddit thread about any OWRPG will go because someone will compare it to Witcher 3 and find it lacking, someone will criticize Witcher 3 and then people will dogpile that guy and then others will complain about Witcher 3 fanboys who can’t take any criticism of The Perfect Game.
I’m just saying “many people consider Witcher 3 unremarkable” is not accurate. You might not have liked it, and I didn’t like Skyrim, but I certainly wouldn’t say “many people consider Skyrim unremarkable.” I consider Skyrim unremarkable. Everybody else loves it.
@Conrad Honcho says:
I remember thinking exactly the same thing about the first Dragon Age. The best writing in the game was in the story arcs that related to your chosen origin. The mage section was really interesting if you were a mage, the dwarf city was really interesting if you were a dwarf, and so on. They were interesting because there your character actually had a history and defined relationships with people. The rest of the game (other than some of the companions), or those same sections if you had a different origin, was pretty bland.
Hrm. Okay, consider Mega Man 2. It's widely loved, many might call it remarkable, some might even make a case for it being one of the best platformers of all time, but I don't think anyone would call it genre-redefining or important, and it's still going to be Super Mario on the Best Of lists. That's kind of how I see Witcher 3. But the distinction probably comes down to semantics and isn't very important.
I quite liked Dragon Age. Dragon Age had a party system, so you grew to love a lot of your companions by the end of the game. Lydia in Skyrim has no personality beyond “I’m sworn to carry your burdens.”
I felt DA:O did better with its side characters as well.
I can definitely see why you would get bored of Witcher 3, I get bored of practically all games in the first 2 hours and need to push my way through that (particularly RPGs).
Halo: CE was a game-changer for console gaming at the time, and a lot of us were playing on N64, not PC.
I would say Mega Man 2 is more like Morrowind and Witcher 3 is more like Mega Man 11. Although I’m not sure the Mega Man comparison is useful.
But I do think by not having even left the tutorial area you’re missing out on what makes Witcher 3 so beloved. The environment is beautiful, the game mechanics are solid, but what makes it shine is the characterization, writing, and quest design. The latter is really refreshing in RPGs. There are almost no fetch or delivery quests. The side quests, especially the Witcher contracts, are in many ways more intriguing than the main story. There will be a notice that some village is plagued by mysterious disappearances, the Witcher shows up, questions people, investigates the crime scene, uncovers Deep Dark Secrets, uses his extensive knowledge and experience to identify the monster or the cause of the curse, prepares with the right potions and blade oils, and then does battle with the monster and collects his pay. It’s all very satisfying.
They’re making a Netflix Witcher series, and I’m sure they’re going to do it in serial format like everything else these days, but if it were me I’d do it episodic, monster-of-the-week style like X-Files. That would be great.
I strongly encourage you to give it another go. Get at least as far as the Bloody Baron to see if you’re not hooked. Also play Gwent, it’s really fun.
Oh, and Witcher is like Mass Effect where Witcher 2 can import your Witcher 1 save, and Witcher 3 can import your Witcher 2 save, carrying over choices you made. Since you haven’t played Witcher 2, when you get out of White Orchard the Emperor’s investigator will question you about recent events, with your answers being the equivalent of your Witcher 2 save file choices. I recommend saying you let Letho go and sided with Roche over Iorveth. Sile you can say was killed or not, it barely matters. She probably deserved death in Witcher 2, but if she’s still alive, she has a rough go in 3, so it all works out anyway.
ETA: And before Iorveth fanboys jump all over me, I’m just saying that because CDPR screwed over Iorveth fans by not including him in 3. Picking Iorveth doesn’t do anything in 3 except make Roche mad at you.
Contrast with Mass Effect, where your origin (colonist? earther? spaceborne?) barely mattered; what really mattered was a few key decisions during the plot, including whom you chose to save, or to love. In that case the game plot could morph a bit but mostly stuck to a few key points, and the ending (of 1 and 2 at least) might be very dependent on what you did in that game.
Witcher 3 followed that model a bit less – one or two events ran a bit differently depending on what you did in W2, but left the overall plot largely the same. The ending of W3 depended primarily on what you did inside W3.
So you have an OWRPG with a wide range of origins which the plot treats roughly the same, and a few key plot hinges (Skyrim); one with a medium range of origins the plot largely ignores, and more key plot hinges (ME); and one with a largely predefined origin, low number of key plot hinges, but much more OW instead, with a handful of richly developed stories and NPCs (W3). (Honcho’s got a point, dick; I think you bowed out before seeing an enormous number of compelling stories, including the Bloody Baron, Dijkstra, the Skellige throne, Novigrad intrigue, intra-Witcher conflict, and of course Ciri, not to mention the DLCs with von Everec and Duchess Henrietta…)
I’d say we’re in a golden age of open-world games, made possible by post-2010 graphics cards and various physics engines that seem to trace back roughly to Source and Half-Life 2. We have gravity, ragdolls, lighting effects, weather, fire, and legions of artists and map designers ready to give you enough virtual worlds to last for probably another generation.
Looking forward, I keep hoping for a much more dynamic RPG, where not just the world is open-ended, but also the story. In all three examples above, I can imagine the plot designer writing each chapter, trying their best to insert twists and climaxes that incorporate player choice without blowing up the number of endings and alternate scenes they'll have to handle. There are a lot of problems to solve on the way to this, and I keep thinking about them.
One is characters. Every reaction they have is pre-scripted. Imagine if the writer could instead describe each NPC in terms of character variables and motivations. Charlie is youthful and aggressive, and wants to rule the barony, but is also protective of his sister. Nadiska is devious and willing to sell out her fellow privateers, but wireheading has forced her to be loyal to the local star admiral, and meanwhile, she has an ancestral fear of fire. Set these characters loose and let them drive the plot.
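As a toy illustration (every name and number below is invented, and a real system would need vastly more nuance), the core of such a character could be a handful of weighted goals plus a utility function for picking actions:

```python
from dataclasses import dataclass

@dataclass
class NPC:
    name: str
    traits: dict  # temperament, e.g. {"aggression": 0.8}
    goals: dict   # motivations, e.g. {"rule_barony": 0.9}

    def choose_action(self, options):
        """options maps an action name to (goal_deltas, risk). The NPC
        picks whatever best serves its goals; aggressive characters
        discount the risk term more heavily."""
        def utility(goal_deltas, risk):
            gain = sum(self.goals.get(g, 0.0) * d for g, d in goal_deltas.items())
            return gain - risk * (1.0 - self.traits.get("aggression", 0.5))
        return max(options, key=lambda a: utility(*options[a]))

charlie = NPC("Charlie", {"aggression": 0.8},
              {"rule_barony": 0.9, "protect_sister": 0.7})
print(charlie.choose_action({
    "raise_army":   ({"rule_barony": 0.5, "protect_sister": -0.2}, 0.6),
    "guard_sister": ({"protect_sister": 0.6}, 0.1),
}))  # prints "guard_sister": protecting her outweighs the risky grab for power
```

The hard part, of course, isn't this scoring loop; it's authoring effects and consequences rich enough that the choices add up to a story.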
To make this work, you'd need to solve more problems. The dialogue would have to be believable, probably parametrized, which means no more pre-recorded voice acting; instead, you might have voice actors record likely vocabulary, or more likely phonemes, and couple that with an algorithm that computes what they'd say and strings it into smooth-sounding sentences. The closest we have to this AFAIK is setting a new voice for Waze. AI dialogue is still beyond the horizon.
Same goes for AI action. Does that NPC fight or flee? Call for backup? Call a truce? Bluff? Beg? Track you? Plan ahead? Direct other NPCs? Follow a code of honor? All also beyond the horizon (just ask Eliezer, I imagine).
Then there’s the framework to tie this into an entertaining story. Even five NPCs with dynamically generated quirks and motivations can make for a potentially rich murder mystery, say, but now you need to compute a story arc for them to fall into, or else the gameplay just sort of wanders aimlessly, as if you play today’s OWRPGs and do nothing but side quests. So you need a way of generating believable plot twists, surprise moments, funny interludes, epic confrontations, glorious climaxes, and gripping denouements.
The only thing I can think of that a writer might pre-script at this point would be overall color and theme, such as an epic war in the background, or a somber Victorian motif, or a goofy cartoon feel. Plus an interesting opening hook for the plot to start from.
It might just be me, but I feel like writers are chafing against the current constraints of AI technology for driving characters in RPGs. I wonder how much thought is going toward the solutions above.
I can envision a prototype game, a harbinger, in which the world and options are sharply narrowed again to manage the possibilities, perhaps in a world the size of a single Overwatch map. Learn lessons from the prototype plot generation engine, fork it, improve it. Within five years, it'd transform the genre.
We already have “If you do X, character says Y” things. We could have more complicated decision trees than we do, and we do in games that are not voice acted. Why? Because voice acting is expensive! You try to use a Vocaloid to generate speech, and I guarantee it’s going to sound very, very weird, particularly if the speech itself is algorithmically generated.
This is far, far harder than it looks. Even the basics, the most elementary methods of making characters work together or fight each other based on rudimentary tribal politics signified by basic love-hate values, are tough to implement. Anything remotely like what you’re talking about is beyond our foreseeable capabilities (to put it gently).
It’s a Westworld on your desktop!
Although the fully procedurally generated/AI-based RPG you’re looking for is clearly beyond our current limits (and by the time it is possible, we’ll probably be too busy trying to avoid being rounded up and converted into paperclips to be playing games, anyway) we’re making some pretty good headway into that space, even now. The Nemesis system in Middle Earth: Shadow of Mordor, in which random orcs that you’ve fought before can come back, grow stronger, and become a serious thorn in your side, led to some pretty interesting emergent stories, even if they ruined it in the sequel by making it a vehicle for microtransactions. And while it doesn’t include much in the way of actual AI, Kingdom Come: Deliverance does a pretty good job of presenting the sort of “living, breathing world, in which you could just make a living as a merchant and watch people carry out their daily lives rather than pursuing the main quest, if you wanted” simulationist experience that games like The Elder Scrolls were originally shooting for, but have been losing sight of since Morrowind.
Speaking of the Elder Scrolls, though, the lifeless townsperson interactions in something like Skyrim really reflect what can go wrong with the sort of design you’re talking about, with our current technology level. When you’ve figured out all the algorithms that drive the behavior of the supposedly dynamic NPCs that populate the world, it ruins your sense of suspension of disbelief far worse than when you talk to an NPC who just hangs around in one place all day, but who shows real personality when you talk to him, as you see in something like Witcher 3, or who only responds in text boxes, but those text boxes are detailed, reactive, and well thought out. Some level of abstraction is necessary to avoid the uncanny valley.
Weirdly, I actually found Oblivion to be better in most of those respects (although I sure got tired of endless Oblivion gates). Maybe it's just because it was a long time ago and I don't remember it as well.
Assassin’s Creed: Odyssey has procedurally generated voice-acted side quests (in addition to the scripted ones). They have a different marker on the map or notice board, and follow basically a mad-libs style.
“Misthios! A [politician|bandit|soldier] has [said something bad about|killed|stolen] my [writings|husband|money]!”
“And you want me to [kill him|get it back]?”
“No, I want you to [get it back|kill him]!”
“Consider it done.”
It’s cute the first few few times but once you figure out the script it gets old fast. I skipped them and stuck to the scripted quests which were mostly pretty good.
Yes. You really need a fully fleshed-out procedurally generated world to give you enough variety and sensible context for that to work, if it’s possible at all.
The problem is that the weak dialogue exposes too much of the formula. Skyrim correctly noticed that 99% of all open-world RPG quests boil down to “go to $LOCATION, kill everyone there, optionally retrieve $MACGUFFIN, come back to collect $REWARD”, and figured that it could procedurally generate that just fine, but it didn’t notice that most of your motivation as a player comes not from the quest framework but from the writing around it. Take that away and it just becomes a chore.
My favorite thing about Witcher 3 was that every single little sidequest had good writing and most of them played out like a microcosm of the game’s twisted-fairytale schtick. You’re still going somewhere and killing a bunch of monsters, but you feel like there’s a purpose and a theme to it.
Nornagest, I don’t think you’ve giving Witcher 3 enough credit on the quest design department. It’s been a year since I last played it, but I don’t remember any “kill 10 wolves” quests. The quests weren’t just dressed-up kill or delivery quests. They usually required investigation, preparation and decisions. The Witcher is like magic Batman.
That’s basically what Crusader Kings does (gives a zillion AI characters characteristics and motivations and turns them loose within a loose structure (the de jure system of borders), but the decisions available are fairly limited in that game (AI characters don’t have all that many options).
Works in any genre are often considered great not so much for what they are but for what they lead to. Halo was a competent but not especially inspired shooter on its own terms (my housemate in college gave it up when he couldn't figure out the vehicle controls), but it was incredibly influential.
I think that had a lot more to do with it being bundled with the Xbox at launch than the game itself. Face/Off would probably have been an important cultural phenomenon if a copy of it had been included with every first-gen DVD player sold, but it’d still be a mediocre movie.
Sure, and Super Mario Bros. probably got a lot of its influence from being bundled with the NES. We still think of it as a classic.
Was there a similar game that came first that is arguably better? Not totally apples-to-apples, but Quake 3 and Half-Life are to Halo as ___ is to SMB?
I was extremely young when the NES came out, so I’m afraid I can’t say. Do we have any retrogaming enthusiasts here?
The only thing that comes to mind for me is Moon Patrol, but that was not really a platformer, it was more similar to endless runner mobile games (e.g. Temple Run). It’s certainly true that it being included on the console that redefined consoles helped make it the icon it is, but I think it would’ve been a classic on its own terms.
Several have been mentioned already in this thread – Mega Man 2, Metroid, etc. SMB probably wouldn't even stand out as worth mentioning if it weren't for the vast improvements that came along with the third Mario installment, plus the fond memories of those who played the first two as some of the earliest games they ever played. I might go back and play Mario 3 once in a while now, but I do not play Mario 1.
For what it’s worth, I played a lot of Quake and Unreal, and thought Halo brought a good bit of new stuff to the genre. The regenerating shields and vehicles is part, in terms of innovation, but there was also a lot more polish and smooth action from Halo, IMO.
Sonic the Hedgehog might be a better example than Mario. The platformer genre was pretty well established if not totally mature when it came out, and while it added some unusual twists I don’t think it would’ve been anywhere near as influential if it hadn’t been Sega’s flagship title. As it was, though, it ended up spawning a ton of imitators and a character archetype (the mascot with attitude) that practically defined early Nineties console gaming.
The franchise hasn’t aged as well as Mario has, but I think that’s mostly because it didn’t transition well to 3D.
I don’t know if I’d go so far as enthusiast, but I’ve been following some of the older game scene off and on on YouTube. I don’t get the sense that there are too many older games that get the attention that SMB does, though that might be because they’ve been pushing the world record times down recently; it’s currently
4:55.9134:55.796, and a couple of people are trying to shave a frame rule off of that.SMB3 has a scene as well, but WR runs are more spaced out because RNGesus has a lot more to say about it. The hands in World 8 will kill 7/8ths of the WR attempts that make it that far.
Edit: I forgot somewes’s recent WR beating Kosmicd12
Super Mario Bros came out before Metroid, and significantly before Megaman 2, and I really can’t think of any platformer that came before it which even came close to matching its size and scope, or the precise, deliberate design of its controls and levels. There have been pages and pages written about how World 1-1, specifically, was revolutionary in developing tutorials that teach through play.
There may be plenty of other franchises that are iconic largely because they were bundled with and/or the first thing people played on a new system, but in the case of Super Mario Bros, I'd go so far as to say that it was the reason for the Famicom/NES' success, rather than the other way around. Remember that it came about right at the end of the videogame crash of 1983, and most people credit it as one of the major factors that saved the video game industry. It wasn't released in an environment like today's, where everybody's buying video games anyway and just being bundled with the popular new system is enough to ensure success; it was released at a time when people were fed up with the industry shoveling trash at them at an unprecedented pace. It would have needed to prove itself.
As evidence, take a look at Duck Hunt, which was also packaged with early NES systems along with Super Mario Bros. There may be plenty of memes out there about wanting to shoot the stupid laughing dog, but nobody is holding that up as a pillar of game design history, the way they are with Super Mario Bros.
It’s true that SMB3 is by far the superior game, and that it (along with Megaman 2) set standards for platformer design that are still being copied more or less directly, but with a few graphical improvements, today, but you can’t argue that SMB1 wasn’t revolutionary in its own right. SMB3 set video game industry standards, but without SMB1 there might not even BE a video game industry.
(Or, at least, it might have taken an entirely different form and been several years behind where it is today, but I like hyperbole, dammit! :op )
I do like a lot of the indie games being released right now, partly because it seems their developers have finally understood that 3D doesn't make everything better, that purely 2D games can be very good, and so we're basically back in the "Super Nintendo Sweet Spot", just with 25 years of technological and gameplay development in between. One minute I'm playing Super Nintendo games in emulation, the next I'm playing Hyper Light Drifter, and I can tell the difference, but not by much!
I’m not that huge a fan of the “8/16-bit graphics as an art style” though. It was cute in The Messenger because of the way they used it to show moving back and forth in time (they switch between 8 and 16 bit styles). I like it much better when they use refined versions of the 2D game mechanics but with beautiful, current art styles. See Hollow Knight, Mega Man 11, and I’m very, very, very much looking forward to Streets of Rage 4.
Yeah, the “pixel art” thing is dumb. It’s probably easier to commission 2D art that can be adjusted to any resolution as needed, right?
I suspect it’s because it’s easier to work with – you have a bunch of constraints so the time you put into sprites is limited, and it’s easier to say when the art is “good enough”.
So, for an indie dev who isn’t very visually artistic, it’s a great choice, especially if they don’t want to use pre-made assets or spend money on a commission.
Sort of. You need vector graphics for that, and the tools that’ve been developed for working with vector graphics are different and in some ways inferior to those developed for raster. You know how Flash animation has a distinctive look to it? That comes from vector tools.
It’s been a while since I did much work with 2D graphics, but the typical approach when I did was to use raster images, but do them at a higher resolution than anyone would reasonably need in production, then scale down as needed.
@Nornagest: IANA software engineer, but I thought that was how modern 2D game art worked. You commission an artist to get raster assets at a higher resolution than anyone would need in production, then scale down as needed. That’s how you get visuals that approach the artistic quality of anime rather than intentionally limiting yourself to old resolutions to get 8-Bit Theater or, uh, 16-Bit Theater?
That’s the most common approach, but it’s got limits. Obviously it doesn’t look good at anything above 1:1, and scaling artifacts start making art assets look bad if they’re scaled far down from their native resolution, too. Vector graphics’ greatest advantage is that they handle those edge cases very well.
If you need a great deal of legibility and you know you’re working with a fixed resolution, though, it’s common to draw art assets at a 1:1 ratio, to get the most you can out of them. This is most common in sprite work for console, and that has a distinctive look to it too. Vanillaware (Odin Sphere, Dragon’s Crown) is known for using unusually large fixed-resolution sprites in the modern era.
Another option is to use 3D models for your assets and (optionally) try to fake the look of 2D with shader tricks and fixed camera angles.
On the one hand pixel art is definitely overused.
On the other hand there is something to be said in favor of simple graphic styles in video games as a matter of "readability". I prefer Civilization II to Civilization III because the former has simple but easy-to-read graphics, whereas the latter has much more detailed 2D graphics that almost look like 3D models… and make it harder for me to decipher what's on the screen (not so much harder that I can't play the game, but it adds up over long sessions).
I remember how Heroes of Might & Magic IV was a major failure in the series, in part due to how unreadable its 2D-graphics-that-pretend-to-be-3D were.
In general, the more information there is on screen, the simpler and more clear cut I want the graphics to be.
You could also look more broadly, and note that sport-related games – once revolutionary, not only as video games but in the way people relate to the sport – have been unable to make further progress and are essentially churning out a minor update each year. And this is across multiple genres – Madden, Football Manager, FIFA, etc.
I’ve seen a lot of claims that television is in a Golden Age as well, but they’re still churning out episodes of Law & Order: SVU, Days of Our Lives, The Simpsons, and [Generic sitcom #7482].
I would expect such staples to exist in any mature field (and I think a certain level of maturity is necessary for a Golden Age). It doesn’t mean there’s not genuinely new and high quality stuff as well.
I agree with you. But in this case, what are the new and high-quality sport-related video games? Or even significant evolution in the existing franchises?
I don’t know whether video games generally have stagnated. But I am pretty sure that (1) the sporting games have (2) this is a huge area of video games.
I think the problem is there’s only so much you can do “innovating” in a sports game. You can’t change the rules of football and still get the NFL logo on your game. You can improve the graphics, update the team rosters, switch the controls around a bit, but the design space is limited in ways other genres are not.
So I think sports games are a special case of stagnation in ways specific to the genre that have nothing to do with every other genre of game.
I am not at all sure that is true, because computer sports games are not simply transposing a sporting experience into a game. They are creating a new game, and that is very much open to interpretations. The player in PES, for example, does not correspond to any individual who exists in a real game of football – rather, he is some sort of brain-amalgam of all the players plus the manager. I think we are still waiting for a compelling football or NFL game where you are just one of the players (FIFA and Madden have notably failed at this). Meanwhile think about all the aspects of being a football manager that are not reflected in Football Manager. You could create completely different games, which were still about the same sport.
Moreover, even within the confines of a set “game,” so much is up for grabs. FIFA and PES are the same in structure as 90s games like ISS, but they’re still light-years ahead because of what is (and isn’t) available in terms of control. But this has stagnated, even though there is so much more that could be done.
I wonder if the reason for the stagnation is licensing. It’s presumably a lot easier to sell Madden NFL or FIFA than it is to sell a generic football game (of whichever sort). But how good of a game does it have to be for you to be able to outbid Madden? And because sports games are so dull and cater so much to people who actually like sports, it’s going to be hard for a good sports game to break out and gain traction outside of licensing.
AAA adventure games are competing with Hollywood blockbusters in storytelling and visuals, something that’s only recently become possible. There was an earlier phase when video games attempted to do this: think Wing Commander 3 with Malcolm McDowell, workhorse John Rhys-Davies and Mark “Yay, a paycheck!” Hamill, but this involved segregation between game play and video, while now characterization happens within the game engine.
Open world RPGs are in a weird area between the AAA adventure games and a sandbox tabletop RPG. You can control a pre-made character as he participates in a pre-written movie-style narrative, or you can create an avatar and have total control over how they talk and react with the story being what you make, and different modern RPGs choose different points along that continuum depending on how much the designers want them to be art vs. an open world.
These are the two genres where video games have greatly improved, being competitive in artistic terms with other visual media. Something like strategy games, OTOH? Those are the same as they have been for a long time. RTS is stagnant, and even when a really good grand strategy game like Crusader Kings 2 comes out, it's not making art or doing anything mechanically that couldn't have been done decades ago. It's like a non-fantasy version of the "domain level" in an old school D&D campaign.
The strategy game to make the most meritorious innovation in recent years, IMO, is the grand strategy board game Here I Stand, where you play one of four secular powers, the Papacy, or the Protestants, and learn about the events of 1517-1555 through event cards (which can also be buried for the military "ops points" printed on them if the event is bad for your cause). By presenting a holistic view of the Reformation era, it has educational value that even good history books on the Reformation, the late Italian Renaissance, and the Scientific Revolution don't, because they present those subjects as hermetically sealed, eliciting responses from students who play the game like "I didn't know all these famous people were alive at the same time."
Man, why you gotta harsh on WC3? Game was way ahead of its time, and every damn line Prince Thrakash (Thrakoth? Thrakosh? The main bad guy) uttered is permanently seared into my brain.
Sorry, I didn’t mean to sound that harsh. Video and game play segregation is objectively inferior to what they do now, but for the technology of the time, it was amazing, pushing a video game closer to the other visual arts than anything to come out for a considerable time after (when did the Command & Conquer series first hire Hollywood actors for FMV rather than just dudes who worked in the office?)
Red Alert (1996) had legitimate professional actors with extensive TV credits, but not famous names. The first actual stars hired for the series were Michael Biehn and James Earl Jones for Tiberian Sun (1999).
However, the most iconic C&C character, Kane, was played by the Westwood Studios director of voice content.
I played Here I Stand recently, and while I enjoyed it, I did feel that they needed a second pass on the rules to consolidate things a bit. I think that Twilight Struggle was a better implementation of the same mechanics. Granted, it was also a less ambitious implementation, with only two, more symmetrical, players.
It’s a very heavy game, like playing Diplomacy and Twilight Struggle at the same time. Getting 6 players together to learn from a heavy board game is going to be a very niche thing.
Video games are a special case since they depend on technology, which has only recently reached the point of diminishing returns. It would have been impossible to make Witcher 3 or Fallout: New Vegas many years ago, and while in theory indie games were possible, the kind of distribution you get from Google Play, Steam, etc. did not exist.
So any era before now could have been a golden age for the kind of games that the technology supported, even while other types of games didn’t exist at all and would have to have had their golden age later.
Also, when games of type X were the pinnacle of technology, you had top teams working on them, but today the top teams would probably be working on whatever type is the pinnacle of technology now.
I think that the industry is foundering slightly at the AAA level, while A and B studios like Firaxis (in XCOM, at least, if not in Civ), Paradox, Croteam, Grinding Gear, Obsidian, CDPR, Digital Extremes, and (hopefully soon) Snapshot Games are doing really well with specialized products for a slightly lower price point and/or a significantly slower release schedule. In my mind, it’s a golden age mostly centered on them.
Bethesda (at least their Creation Engine projects) has been twisting the knife in its own belly. BioWare is more-or-less dead to me. Most EA and Ubisoft titles in their major franchises are so painfully boring (as always, to me) that I can't even stomach the companies themselves. But that's fine, because I don't have to. There's a ton of cool games out there – right now I'm waiting on Kentucky Route Zero's final act and the same from Black Mesa Source, and playing through Dark Souls 3. These aren't flagship products (aside from Dark Souls, maybe, but even then FromSoft isn't an industry giant), but they're fantastic and out there and enjoyable.
Speaking of The Witcher: while I won't be able to play Witcher 3 for a long time yet on account of being broke, how good are Witcher 1 and 2? Are they worth investing time into?
Witcher 2 is good, but makes some odd design decisions and can be a bit Dark Souls-y. Witcher 1 is quite subpar, but if you can stand Mass Effect 1, for example, you stand a decent chance of getting through it.
I’ll also say that if you’re that broke, given the prices that Witcher 3 falls to, and you have a PC on par with that level of broke, Witcher 3 may not be worth getting for a while.
Witcher 1 had a pretty good story to tell, but was basically just a massive, ugly, Neverwinter Nights mod, with some mechanics changed in a failed attempt to make swordfighting more interesting. You could tell that CDPR was trying to do something interesting, but you could also very much tell that it was their first game. It may have been a failure in the grand scheme of things, but it was an ambitious failure, and is worth checking out if you’re into that sort of thing.
It’s also worth pointing out that Witcher 3 is very much a sequel to the novels, and not to Witcher 1 and 2.
At risk of spoiling something you learn within the first 5 minutes of Witcher 1, in Witcher 1 and 2, Geralt has amnesia. Normally I feel that amnesia is a pretty cheap storytelling device, especially in games, but CDPR made very good use of it here; they had the rights to a fairly obscure-in-the-west property that they loved and wanted to make a game about, but couldn’t just throw you in neck deep to begin with, since they knew most of their audience would be unfamiliar with the settings and characters. So they decided to give the main character amnesia, so the player could learn about his backstory and the setting along with him. Then, by the time the third installment rolled around and the games were a massive success, they were able to tell the fanfic-style sequel to the books they always wanted to, confident that they had popularized the setting enough that people could follow along.
Basically, 1 and 2 were just side stories, in which Geralt has some adventures and eventually gets his memory back. 3 starts a brand new story, with the character fully intact, and thus makes an excellent jumping in point.
Although having played the previous Witcher games helps you build a personal connection with some characters who play a smaller role in the novels than they do the games, for the most part being unfamiliar with those games shouldn’t impact your ability to enjoy Witcher 3 at all. On the other hand, I found that having previously read the novels increased my enjoyment of the game immensely. Note that the novels do have a slightly different tone than the games, though, and that there are a few elements to the setting that the designers glossed over and/or changed to make them work better in a game. It’s very much a fanfic of the novels, as opposed to an actual sequel. In a way I feel it’s a lot better than the novels, mind you, but most people tell me that’s only because I haven’t read them in the original Polish, and they lose something in translation. They’re still good, though, and are significantly cheaper than a computer that can play Witcher 3, to boot!
You don’t need that good a computer to play Witcher 3. It’ll struggle on integrated graphics, but any dedicated graphics card released in the last five years should do fine with it, as long as you don’t have your heart set on max settings. I got through it on a four-year-old mid-range laptop.
That’s true, Witcher 3 still looks absolutely gorgeous on minimum graphics settings.
I only mentioned it because Lillian said her reason for not having played the game yet was being broke.
So broke that a laptop with integrated graphics is pretty much exactly what I have to work with right now. Suffice to say, it can't run Witcher 3, and frankly it probably can't run Witcher 2 either. But hey, at least I bought this one, unlike my last three laptops, which were all hand-me-downs. Progress!
I want to echo what others said, and add on a bit.
The Witcher 1 is more interesting as a historical novelty. If you want to know more about the characters, or the world, it’s worth looking into, but otherwise is probably skippable. It uses the same engine as Neverwinter Nights, but the designers tacked on some things to make the gameplay overall more interesting – potions and oils, the stat-less level-up system, and the failed attempt to make witcher-style swordfighting in a limited engine. It’s a tentative attempt at a game that captures the spirit of being a witcher. There’s a lot of finicky inventory management and housekeeping that dropped away in later games. However, as usual, the writing is pretty solid, especially when compared with some of the other RPGs being made at around the same time, and the plot had me sufficiently interested to carry through.
The Witcher 2 I think of as a prototype or proof-of-concept for the Witcher 3. They abandoned the Neverwinter Nights engine, which was a big step forward – now Geralt can dodge, roll, slash his sword, and parry as the player likes. They attempted to more smoothly integrate Signs (witcher magic powers) into combat, they removed a lot of the fiddly housekeeping from taking potions, and generally you can see that there was an attempt to make combat a fluid, dynamic event.
Sadly, though, it’s held back by some flaws and baffling design decisions. For example, potions need to be taken in advance, during meditation. I can sort of see what they were going for – trying to capture the needful preparations witchers make in the novels, rewarding careful forethought and planning – but it’s mostly just an irritation in the game. Combat mostly consists of casting your shield spell, slashing, taking a hit, and then rolling around in circles until you can cast the shield again. It’s too dangerous otherwise – the tutorial alone killed some players dozens of times.
Those limitations aside, though, the environments are gorgeous and detailed. Not as sprawling as the Witcher 3, but at the time it was one of the most beautiful RPGs I'd ever played. The tutorial castle siege and the fantasy river port town in the middle of the forest are both stuck in my mind as really well-realized locations. The writing takes a step forward from the first game, with a neat conspiracy, some branching plot elements, good moral dilemmas, and really memorable characters.
On the whole, check out the first game if you're an enthusiast; the second game is worth exploring on its own merits.
I agree with all of this, except for:
I really liked having to prepare and use my potions judiciously in Witcher 2. I did not like handwaving away the preparation in Witcher 3, where you only have to brew a potion once and then you magically get it all back every time you meditate. Completely defeated the purpose of herbalism and all the herbs and materials scattered around the environment. In that respect, Witcher 2 felt more like “being a Witcher” than 3, and as a Witcher enthusiast, I really liked that.
And:
lol git gud nub. 😉
The problem with Witcher 2’s potion mechanic, in my opinion at least, was that it was the only way to regenerate health besides plopping yourself down somewhere for 10 hours. If it incorporated fast regen in towns in some way, it would be tolerable, but as-is it’s just so painful. The game really opens up past Flotsam (really, 3/4 of the way through Flotsam), but up until then you don’t really have money or ingredients, and meditation is just frustrating and breaks flow in a way that inkeeper dialogue doesn’t.
Thanks for the input everyone! So it looks like I should give Witcher 1 a skip, especially if it's being compared with Mass Effect 1 and Neverwinter Nights, as neither of those are games I have any interest in playing. Not sure if my laptop can run Witcher 2, but if it can't, then when I get a real gaming PC or enough time passes that my next laptop can handle it, I'll probably check it out.
You should probably check out some of the really great indie games out there right now. Hollow Knight is one of the best games I’ve ever played, will play on anything, and is $15 on Steam (but goes on sale for less…I got it for $12 on the Switch).
I’ve got to second this. The indie game scene right now is putting out some of the best stuff in the industry, and that’s even considering all the praise people are giving to things like Witcher 3 and Breath of the Wild. We’re getting to the point where many indie games deserve spots on “best games of all time” lists, rather than just “best indie games.”
As Conrad said, Hollow Knight is pretty much the perfect distillation of everything that makes the Metroidvania and Soulslike (and let’s face it, at its best Dark Souls is basically just a slower-paced 3D Metroidvania with an emphasis on combat) genres great, while Shovel Knight (no relation) is the perfect distillation of everything that made 8/16 bit platformers great. And while it’s largely a meta-commentary on RPGs, so it’s got a slightly more niche target audience, Undertale is one of the best RPGs I’ve played in ages. I’d also be remiss not to suggest Bastion and Transistor to pretty much anyone.
Basically, just google “best indie games,” close your eyes and pick something at random, and you’re pretty much guaranteed a great time for less than $15 that’ll run on a toaster. It’s a great time to be into video games.
Ok, I try not to be too pedantic, but I admit I’ve been baited this time.
First, Dark Souls is not a metroidvania. Its world (which is fantastic) is broadly shaped like that of a metroidvania, but the genre requires that you unlock further paths in the world – as well as access to special items in early areas – by acquiring general purpose ability upgrades. Think of a double jump upgrade that lets you reach a new ledge. In Dark Souls you unlock further paths with (often literal) key items.
Second, Hollow Knight is not a soulslike. The only similarities it has to other soulslikes are vague ones about atmosphere, storytelling, and difficulty along with the more concrete similarity of its version of the soul recovery mechanic. If that’s enough to be a soulslike, then it will be hard to keep just about every action game from falling under the category. Elements almost universal to soulslikes that it lacks include: a stamina system, an attribute increasing leveling mechanic, iframe based rolls as central to avoiding damage in combat, a variety of different equippable weapons with different movesets.
And note that it makes no sense to cite its metroidvania-like structure as contributing to its being a soulslike, because Dark Souls is the only soulslike that has something seriously resembling that (with the caveats from before). Its successors and imitators in the genre have dropped it, except Bloodborne to a small extent.
More on topic: this is in fact a golden age of videogames and both Dark Souls and Hollow Knight are among the best games of all time. Among other recent games (ok DS isn’t that recent anymore) I’d add Mario Odyssey to the list.
Mario Odyssey was an awful lot of fun, very creative with great imagery and fantastic music, but I'm not sure I'd put it in my top 3 Mario games simply because you spend so much time doing things that aren't related to platforming. "Run around the world, find some glowing thing on the ground, and pound it to get a moon" isn't platforming. Give me 3D World for the Wii U any day.
Odyssey had easily the best 3D platforming mechanics of all time. You are right that the moon placement and level design didn’t come close to making full use of those mechanics, but even simple acts of world traversal remained a joy throughout the game. That’s good enough for me. Also, put me down as a huge fan of tropical wigglers and those nose-birds.
Well, yeah, that's why I specified that Dark Souls at its best was basically a Metroidvania. I didn't really care for what I've played of 2 or 3, and still haven't played Bloodborne, though I hear it's the best of the bunch. When I'm in the mood for slow, deliberate combat with punishing hits, stamina management, and careful iframe-based dodging, I just play Monster Hunter and skip all the frustrating bits. (Well, maybe not ALL the frustrating bits, but you only need to play through the Monster Hunter games' stupid tutorial sections ONCE.)
But yeah, I’ll concede the point about Dark Souls being a Metroidvania. I was kind of reaching with that one, and letting my own personal biases seep through.
I’m gonna’ have to disagree on the definition of Soulslike, though. What makes a Soulslike are 1) difficulty, 2) the soul recovery mechanic, and 3) the atmosphere. Part of that atmosphere comes from slowly uncovering an organic-feeling world that you initially know very little about, and learn about more through exploration and contextual clues than through dialogue or backstory, which is where the similarity with the Metroidvania genre comes in.
I don’t think the particular style of combat a game uses is necessary for something to be called a Soulslike, as long as that combat is difficult and deliberate. Demon’s Souls hardly invented that style of combat, after all, it just placed it in a world with a particular atmosphere, which required contextual clues to figure out the backstory of, and paired it with a death mechanic that made failure particularly punishing and kept you on the edge of your seat during combat. It’s true that my definition makes the genre pretty broad, (it basically just says “this game took inspiration from Dark Souls”) but your definition consists of pretty much nothing other than the Souls series itself (in which I’m including Demon’s Souls and Bloodbourne) and, I dunno, maybe Nioh. (And probably a few shitty games on Steam that no one actually played.)
I suppose I’ll give you the levelling mechanic, though.
@Vorkon
To be clear, the paragraph you quoted wasn’t meant to be a criticism of any argument you actually made. I was just preempting the hypothetical argument that the metroidvania structure of Hollow Knight contributes to its being a soulslike. Sorry if I didn’t make that transparent.
On what makes something soulslike:
Genre categories are generally a matter of family resemblance. You can make a list of things that will increase something's degree of fitting a certain genre, but necessary and sufficient conditions are hard to come by. In particular, in the list I gave, none of the items was intended to be individually necessary or sufficient.
That said, the soulslike genre at this point has a decent number of exemplars. There are 5 FromSoft soulslikes plus at least four unambiguous ones from other devs: Lords of the Fallen, The Surge, Nioh, and Salt and Sanctuary. I've seen a couple other games that seem like a fit but can't recall them more clearly right now.
Every single one of these games has every single one of the features mentioned in my list. So I think it should be pretty clear that they are all factors that contribute to something’s being a soulslike, and their lack contributes to something failing to be one.
Where are you? If you are in the Bay, I have one that had an excellent graphics card a few years ago that you could likely have (I would have to ask my little brother to track it down).
I think we’re nowhere near the golden age of video games. VR has hardly even taken off, but pretty much everyone I’ve talked to that has it thinks it’s the best thing since sliced bread. Once that technology becomes more mainstream (cheaper, more efficient, and more effective) I can easily see it improving gaming tenfold. There are some obstacles to overcome, like clunky controls, nausea, and physical space limitations, but as the technology gets better I think we will design better solutions for that.
Well we’re in an age where AAA adventure games and some AAA RPGs are becoming art on par with Hollywood films. Why begrudge video games not being VR when the other visual arts aren’t either?
So, in the “games as art” debate, what do y’all think of the artistic merit of how Fire Emblem handles character development?
The way it works is that as you make two characters fight side by side on the grid map, they start to bond, which is revealed through conversations back at the castle. At the series's peak, Awakening and Fates, these "support conversations" go beyond brothers-in-arms, with characters eventually unlocking the S(pouse) level of development if they fight next to an opposite-sex character long enough… And then you get to see them be parents.
I find this fascinating because it combines character development with agency and replayability, whereas normally a game has to give you a pre-made character and fairly linear story to tell a story of any literary merit. This is what’s called “railroading” on the tabletop.
(Though despite this mechanic, Fire Emblem games are pretty linear even by JRPG standards, all but the best games in the series throwing you from one mission to the next without being able to do anything interesting on the world map in-between.)
Eh. It’s neat in theory, and you’re right that it’s great mechanic for replayability, but as far as “art” goes, its actual implementation in the Fire Emblem games leaves much to be desired.
Ultimately, the relationships you choose have very little impact on the actual story; probably even less than the romances you go for in a Bioware game. At most, you get to see a few extra lines of dialogue that are hidden in other playthroughs. The way it affects your strategy, though, with you needing to keep certain characters together, limiting some strategic options while making others available, still makes it a pretty interesting mechanic.
I can’t speak for this game but I’ve never played a game that had a character as good as that from a well made movie. Some of the stories are fairly decent but the characters are still incredibly mediocre for the most part.
Mass Effect does its characters pretty well. I’m not sure how well it compares to movies but it’s definitely top-tier in terms of video game characters.
I assume you forgot to add, “Other than GLaDOS…”
Controlling for genre, I feel like character writing’s been about as good in recent games as it is in movies. It’s just that when a corresponding genre in film exists, it’s usually action or adventure or SF or horror, none of which are really known for their deep characters.
The drama game isn’t really a thing yet, but there’ve been some forays in that direction, so we’ll have to see where it goes.
There totally needs to be a stealth game where you play a prince lurking around the castle trying to solve the mystery of your father’s death.
I disagree completely. Science fiction has some great characters. Look at Star Trek and Battlestar Galactica. And Ellen Ripley is better than all the knock-offs of her.
Star Trek‘s one of those weird cases where it ends up being more than the sum of its parts, but I don’t think any of its characters — through the TNG era, at least — were individually much deeper or more compelling than the average member of the Mass Effect crew. Except maybe Picard, but only thanks to being played by Patrick Stewart. Same goes for Battlestar Galactica, and that one doesn’t have a Picard.
How are you to put on the play if it’s a stealth game?
“Science fiction has some great characters” doesn’t contradict “SF isn’t known for its deep characters”. The action, adventure, SF, and horror genres all have their great characters, but on average they’re less character focused than other genres.
Actually, you know what? I might contest that. I was going to give examples of genres that are reliably character-focused, but the only one I could think of was “drama”. And that seems like a tautology, since a movie has to be focused on character development to qualify as a drama to begin with.
So I’m gonna say that “character development” / “drama” is orthogonal to “action”/ “adventure” / “sci-fi” / “horror” in the genre classifications.
But yeah, average AAA game character development seems comparable to average Hollywood movie character development, but the best games still fall short of the best movies.
By dressing up as a woman to play one of the female roles, maybe?
I’ve never really “gotten” the praise for Mass Effect’s characters. I really liked the story in 1 (and it’s by far my favorite game in the series), but I never felt the characters were better than flat. Interesting (mostly… looking at you, Jacob), but flat, and worse in a lot of ways than contemporary characters from “real” RPGs.
Deus Ex: Human Revolution has some of the best characters I can think of. Pritchard, Faridah, David Sarif, and Taggart feel very real.
The antagonists, not so much. The player character, Adam Jensen, is well fleshed out, but it's hard to relate to him, simply because a player character is always pretending to be you while not knowing that this is a game.
The Last of Us is probably the best game I’m aware of in terms of characterization. I’d say it was significantly better on that front than the average Hollywood movie, but still well behind the very best films – much as I love Joel and Ellie, they’re not a patch on, for example, Will and Tom in Leave No Trace.
Tarpitz:
Thanks for your comment. I don’t play a lot of video games and having all these people talk about how great all these other games were but seeing no mention of The Last of Us had me thinking I had missed out quite a bit.
I mean, if I think The Last of Us has one of the best stories ever and nobody even mentions it, maybe I’m missing out in a huge way.
I think The Last of Us suffers from being a PS4 exclusive. I don’t own a PS4, and so I’ve never gotten to experience more than a few minutes of it. :/ I’d dearly like to, but not enough to buy an entire console for it.
PS-exclusive. I played it on my PS3.
Consoles (and games!) are cheaper when you’re not buying the latest thing.
It’s also worth pointing out that writing characters in video games is inherently more challenging than writing characters for media that the audience needs to sit through in a controlled, linear fashion. The author doesn’t need to take into account player choices that might alter that character’s behavior, or the fact that the player might be focused on some other aspect of the environment while the character is speaking. It’s also harder to make a complicated main character, without ruining the player’s sense that they are in control of that character. Books and movies don’t have this problem; you’re seeing exactly what the author wants you to see, from the perspective the author wants you to see it, at all times. In a way, when a video game character is good, I feel the designers have accomplished a more impressive feat than when a character in a movie or book is equally good.
That said, there are definitely elements of storytelling that games are capable of that no other medium can touch. While, like I said earlier, it can be tough to present a complicated protagonist, they have an easier time letting the audience identify with that protagonist than any other medium, since you're literally walking in their shoes. The medium is also inherently stronger at worldbuilding; just as you can't think of too many games with stronger characters than the average movie, I can't think of very many movies that make you feel like their characters are living in a fully realized world the way the average game does, unless you're talking about a long-running series based on mountains of source material, which you also happen to have read.
Speaking of which, if anybody here hasn't seen the absolutely outstanding YouTube video by a guy named "MrBTongue" called "The Shandification of Fallout" (which, unfortunately, I can't link to on the network I'm on), it is probably the single best treatise I've ever heard about the strengths of videogame storytelling, and where it can shine in ways that other mediums struggle with.
Going back to the original topic of games with good characters, though, I’d probably suggest Witcher 3 (as others have pointed out), Planescape: Torment, and anything by Obsidian.
The Shandification Of Fallout
Obsidian’s games have great writing but an unfortunate tendency to get sketchy and unsatisfying towards the end, as if their narrator has a train to catch and needs to wrap up the story at any cost. It’s probably because they usually work as contractors rather than primary franchisees, and therefore have inflexible budgets and deadlines.
KotOR II is probably the worst example.
Yeah, I wouldn’t exactly hold these titles up as examples of the shooter genre not living up to its previous highs. If anything, I’d say that what makes these titles good (i.e. plopping you down in a world and giving you numerous options for solving problems organically) are being held up by the open world games that everyone is so excited about these days. They were both great games, but not because they were great shooters; the actual gunplay was so-so, at best.
Also, even ignoring the shift in focus of the "immersive sim" style of gameplay, the shooter genre is hardly stale outside of AAA titles like Call of Duty. It's just that most of the innovation is going on in the multiplayer space. There are plenty of good single-player shooters coming out, though; as others have said, the new Wolfensteins have been pretty good, and the new Doom was outstanding.
I would call Deus Ex a first person puzzler. The real fun was to figure out how you could avoid just shooting everyone and instead take advantage of the environment and such to solve the problem.
Incidentally, this is ALSO the most fun part about combat encounters in Breath of the Wild.
The new Doom’s a more innovative game than it’s generally given credit for, I think. Shooters had been stagnant for a while because everyone coordinated on something close to the Call of Duty formula: limited weapons and ammo, plentiful cover, slow-ish player movement, regenerating but fairly low health, aimed shooting down sights. That gives you solid, fun, semi-realistic gameplay but dictates a pretty slow, deliberate pace, and there’s only so many variations on it that can be made.
That’s not in the Doom nature. Doom wants you to act like a high-tech berserker. But the original games reward acting like that because of their technical limitations, and you can’t just reproduce those limitations without the game playing like it’s a decade and a half behind the FPS design curve. So it came up with a completely new formula, tying resources to kills and forcing you into the open to collect them. It’s a lot like an FPS take on the spectacle fighter genre (Devil May Cry, God of War) and it works really well.
Yeah, but do they do anything with the bit where Doom Guy’s motivation for fighting all these demons is that he’s a devout Catholic (which comes from a novelization)?
@Le Maistre Chat
No, and they also dropped the bit where it’s because of his bunny.
@Hoppyfreud: So lame.
That said, for a silent protagonist, they did an absolutely outstanding job of giving Doomguy a personality, even if that personality is just “kills demons and doesn’t afraid of anything.”
From all the times that he says “fuck it” and smashes shit when the talking heads are in the middle of the long infodumps they would normally make you sit through in most other games, to the little fistbump he gives the Doomguy doll and the way he gives a “Terminator 2” thumbs up if you die by falling in lava, you really get a feel for what makes this guy tick.
Doom Guy’s motivation for fighting all these demons is that he’s a devout Catholic
Excuse me while I roll around on the floor laughing 🙂
Okay, the only thing I know about Doom is what I saw of it being played on (in retrospect) crappy PCs by the guys on my IT skills course, and from what I gather it was all about SHOOT SMASH DESTROY BLOW SHIT UP GET BIGGER GUNS MORE DAKKA AND BLOW THOSE STRIPPERS AWAY OOPS SORRY MEANT DEMONS.
Why there were demons (or indeed strippers) on a Mars base was never adequately explained, because that wasn’t important. What was important was kill them before they kill you and get to the next cache of ammo and health fast.
The only “motivation” was “kill as many as fast as possible as gorily (given the constraints of the tech, the genre and the time) as possible”. It was meant to be fun, it definitely wasn’t meant to engage the brain.
Sounds like you’re conflating Doom and Duke Nukem. And maybe a few After School Specials about how evil video games are.
I mean, I’m pretty sure Doomguy’s main motivation is “they are trying to kill me, I should stop them” with a side order of “and they’re turning everything they touch in to Hell, that’s probably bad.”
I’m more of a Marathon guy myself.
I want the special shotgun shells blessed by the Bishop the next time *I* play Doom!
There were demons on the Mars base because the scientists got a brilliant idea that they could get limitless energy by opening a portal to hell and extracting energy from there. What could possibly go wrong?
There were no strippers, because as toastengineer says, that was Duke Nukem, not Doom.
I’d say that Dishonored and Prey are carrying the torch on that kind of game pretty well (by way of Bioshock, of course – though Bioshock’s harder difficulties have never been anything but awfully designed).
I enjoyed Dishonored quite a bit. It had an okay plot, but it occurs to me that it’s really not clear that the game would’ve been improved by a better plot. It’s like an action movie – an obviously bad plot ruins it, and a good plot improves it, but past a certain point additional complexity and nuance in the plot can detract from the main point of the flick. And I played Dishonored in 2-hour chunks over the course of a month or two, it’s pretty hard for a complex plot to even make sense under those conditions.
Yeah, I’d say the same. Very Taken-esque, and a good fit for a piece of media that’s designed to be maneuvered rather than consumed.
We may be in a Golden Age of Videogaming, but we still have a critical shortage of games with truly satisfying sword fighting. You see, when I land a solid blow with a sharpened piece of metal upon someone's soft yielding flesh, what I expect to happen is for him to go down screaming in a spray of blood. What actually happens in pretty much every game with sword fighting is that he is momentarily stunned, as if he'd just been firmly thwacked with a broomstick. If he is repeatedly thus thwacked, he will eventually collapse onto the ground, and if the game really likes me, it may in fact deign to reward me with the aforementioned screaming and spraying blood. This feels rather wrong, since I'm supposed to be wielding a sword, not a whifflebat that acquires sword-like properties after it whiffles someone sufficiently. It's rather frustrating.
Thus far I have only played two games that gave me what I want. The first is Metal Gear Solid 2, during a five minute segment at the end of the game. You are given a sword and pitted against a large swarm of mooks armed with swords and P90s, who will in fact go down screaming in a spray of blood after only one or two blows. For bonus points, since MGS2 uses fixed camera angles, the right stick on the controller was used to direct the motion of the blade, which was super satisfying since it really felt like I was slicing people apart. The final boss fight is a high-tech sword fight, and while the guy does require quite a bit of slicing to kill, the blade control mechanics did a lot to make it feel like a real sword duel.
The second such game is called Akane, a cyberpunk-themed arena twin-stick slasher wherein most enemies and yourself all die in one hit. It's only $4 on Steam, and I highly recommend it. It really nails the feeling of being a mook-slaughtering sword fighting badass, plus I like the techno soundtrack. You also get a gun, for when your sword arm gets tired and you need to catch a breather by killing with your gun arm instead. It's not a game for playing for more than short stretches, but I think it's great for relaxing and taking your mind off things.
There is also Bushido Blade for the PS1, but I never actually played it.
Kingdom Come: Deliverance actually works pretty well for this, in my opinion – armored enemies can take a lot of hits on their armor as long as it’s in good repair, but stab a guy not wearing a helmet in his unprotected face and he’ll go down immediately. Plus, the only semifunctional blade control system I’ve seen in a video game.
I was just about to suggest Kingdom Come: Deliverance, too.
Metal Gear Rising had some very satisfying cutting. Most enemies do take a number of whacks before they die, but with good use of parry counters you can put enemies into slice-and-dice mode pretty darn fast. And the game in general is just a really good spectacle.
Left 4 Dead 2 has melee weapons that will splatter a zombie horde all over the walls, floors, and ceilings (and the camera). Just stand in a doorway and swing like a madman.
Dark Souls, while it doesn’t give you the screaming or showers of gore, is pretty good in the “one hit will seriously mess you up” department. It definitely can feel like you’re doing one of those samurai single-stroke battles when you land a well-timed heavy attack, even though the enemies just keel over and collapse as usual.
Dead Island was trash, but I remember it handling the dismemberment pretty well.
It was great fun introducing Bushido Blade to noobs in college.
“Okay, pick a character and a weapon. Ready?” Forward-X “Okay, you’re dead. Try blocking next time.”
I’d like to reiterate that in addition to the quality of games available these days, one also needs to look at the affordability and accessibility of games. Never have you been able to get so many great games, for so little money, in so many formats, delivered so easily.
Add to that all the other ways people are enjoying video games. Twitch streams and YouTube channels. eSports are a thing that people actually watch. Classic games are easily available on emulators.
If this isn’t a Golden Age, what does a Golden Age look like?
The 90s were the golden age for video games. 20 years later this Castlevania successor doesn’t even look as good (and I mean “look good” literally) as SoTN.
Yeah, that looks rough. There were bad games in the 90s, too. But now go play Hollow Knight, and come back and tell me it doesn't both look and play better than SoTN. Play Mega Man 11 and tell me it's not better than Mega Man 2. The best games today are better than the best games of the 90s, even in their own genres. And less expensive. And available on more platforms, to more people.
Is anyone else less than enthused about the CRISPR-altered twins in China? One alteration to one gene for resistance to a disease that they’re probably not going to get anyway?
The usual suspects: “He’s messing with nature!”
Many transhumanists: “Heck yeah, let’s get this party started!”
Me: “Well, I guess the first was going to be something small, but… uh… that’s really it?”
Assuming for the sake of argument that this isn’t a hoax, if the children are healthy this is actually a huge deal.
There’s a pretty huge difference between something working with cells in vitro and working with an entire living organism in vivo. Previous experiments with CRISPR gene editing in human tissue culture and even embryos were important but this is the first time we’ll see what really happens when you use gene editing in humans. Just because something seems like it should work doesn’t mean that it will work, which is why we have clinical trials in the first place rather than jumping directly from drug discovery in tissue culture to treatment.
Anyway, a lot of the more interesting things that you can do with CRISPR have relatively low efficiency. HDR for example is a huge pain in the ass, and that’s in cells or mice where you can shrug and throw out anything that doesn’t work. Those techniques just aren’t anywhere near ready to be safely used in humans.
I considered "hoax" at first as well, but if it really is a hoax, it's a very complicated one involving his entire lab and a 23-page consent form. So I'm guessing with very high confidence (>99.9%) that he actually did do something, and with very high confidence (>99.5%) that he did the thing he said he did (because, again, he has a lab, a university, and a team behind him), given that somebody also claimed to have tested the girls and confirmed that they were actually altered at that gene.
We’ll have to wait years to see if the girls are actually healthy, but they’re not dead or suffering from anything obvious already (or we would have almost certainly heard about it).
#nominativeDeterminism?
I may have to eat some humble pie, because apparently He (surname not pronoun) presented his findings earlier today or yesterday at a conference. If it’s a hoax, at this point it’s a very convincing one.
I’m still going to reserve judgement until I read the results myself but it’s looking more plausible.
Agreed. The principle here is "first do no harm", which, yes, can be overemphasized to the point of never doing any good either, but which is a good starting place if you're not dogmatic about it. Given Nabil's caveat, they've shown that applying CRISPR to human embryos is not intrinsically harmful. They didn't pull a Jesse Gelsinger their first time out of the starting gate.
That’s a pretty big deal.
“Hello World”
Exactly. The first implementation is always a proof-of-concept.
I was under the impression the father was HIV positive, so the bump in resistance to AIDS is directly relevant and useful in this case. This seems like a really useful benefit. Some people who want children are worried about passing some known condition on to them (I remember both our host and frequent commenter Matt M expressing such concerns). If you can use gene editing to make that significantly less likely, well, that might move the needle. Assuming it works this way (I am not a genetic engineer), this seems like useful technology.
Yep. The reaction around the office was a huge, resounding ‘meh.’
On the one hand: I take issue with throwing something that lightly tested at real humans.
On the other: someone has to be first, and it's probably a good thing they're picking something minor and well defined rather than going for something flashy and riskier.
Interestingly, the genetics community's main objection to this (besides the obvious PR and ethical concerns) is that HIV/AIDS is a very weird choice of thing to address with gene editing; they'd have preferred the first trials to be with something like Huntington's Disease.
Would the edit for Huntington’s be more complicated? As someone mentioned upthread, this is essentially “proof of concept”. Do something simple that you’re pretty sure you’ll get right and that shouldn’t be too damaging if you get it wrong first, then move on.
Also, I don’t know what the populations look like, but it may have been easier to find a willing test subject for this AIDS gene than for the Huntington’s one. Or maybe not necessarily easier but just happened to be available first.
This seems like a weird objection; it reads more like an attempt to justify the PR/ethical concerns than an actual objection.
I believe it would be about equally complex, if not easier; Huntington's is a single-gene mutation and takes effect with only one mutated copy, while the AIDS-associated gene needed both copies edited to have the desired effect.
I completely agree about the source of the objection; it’s been kind of funny to watch the genetics field try to say ‘we’re annoyed that this guy did it first, and also that this will make our own jobs harder’ without, you know, actually saying that.
Well, it is unclear whether the 32-base-pair CCR5 deletion provides immunity rather than resistance to HIV. It increases resistance to HIV and reduces resistance to some other diseases.
As it happens, each twin has different modification(s) made to the CCR5 gene other than the known 32-base-pair deletion. Also, the whole genome sequencing was insufficient to determine if chromosome rearrangement might have occurred. The human genome hasn't actually been fully sequenced, because it contains repeat sequences whose exact number of repeats is hard to determine.
Most people in the field think that getting the reliability of the technique down using leftover embryos from in vitro fertilization is the ethical thing to do, rather than starting off with random modifications on an embryo coming to term while being unable to confirm that changes were not made in unsequenced portions of the genome.
I have a couple thoughts:
-We’ve (that is, the biology community) used CRISPR to engineer deletions into the germ line of basically every non-human model organism in existence, as well as in human cell lines. The technology to do this was out there and it wouldn’t have taken much in the way of exceptional expertise to do it. The main limit was “everyone who knows how to do it adhering to the prevailing ethics” and that’s not much of a limit at all.
-The specific actions done by this doctor are unethical even beyond the standard “don’t perform experiments on people who can’t consent”. There’s a specific mutation in the CCR5 gene that he’s trying to generate to engineer HIV resistance. This mutation needs to be homozygous to confer HIV resistance – otherwise the mutation does nothing. One of the babies is (allegedly) homozygous, the other is heterozygous. The heterozygous baby will not have the HIV resistance. In both babies, my understanding is that they don’t have the specific deletion that they were trying to engineer. They have different, similar mutations, which MIGHT (but possibly don’t) confer the same sort of resistance. These mutations – even if the children don’t have any off target mutations – could have any number of phenotypes. They could be lethal, they could cause massive health problems. Even the known CCR5 mutation seems to cause some health problems. We have no idea what these new mutations will do.
It really seems like he’s aiming for the “first” title without particularly caring about the patients. The HIV resistance is a nice fig leaf to cover his ambition, but that’s all it is. If this goes wrong – if these patients have a poor health outcome related to the experiments done on them – it could set back genetic medicine (even genetic medicine that uses CRISPR on non-germ line tissues) by decades.
What would an ethical and responsible path to human germ line engineering look like to you?
Or do you think it’s not something we should ever do?
I am of course not secondcityscientist, but – I think this is already more than adequately covered under existing clinical trial regulations, and need not be further investigated except for the obvious definitions problems.
Reasoning: recently developed medicines aren’t all that much less complicated than CRISPR-as-therapy (at the obviously testable levels, something like ‘solve for general intelligence’ is still way beyond us per everything I know); clinical trial protocols are almost literally unimaginably overconservative.
Please do note that this means the full ~$10 billion 30-year starting-from-phase-0 process.
That definitely implies a particular tradeoff between fast progress and safety for patients–one slid far in the direction of safety. I’m not sure that’s the wrong tradeoff to make, but I am pretty sure we’ve arrived at it for reasons other than anyone carefully working out the plusses and minuses of more risk / faster progress.
That’s more or less my position, as well. FDA standards are definitely excessive but may well be the correct ones, but they’re efficient cause was fear of litigation, not cost-benefit analysis.
At a minimum, the edits should be what the patient and doctor want, no more no less, the doctor should know what the edits will do and the edits should be widely agreed to be beneficial. That’s not really the case here. Also, for dumb political reasons I want gene-editing therapy to be in wide use in somatic tissues before it goes germ line. I think somatic gene editing has a lot of very near-term promise and it would be terrible if that got screwed up because someone was too eager to edit the germ line and a bunch of politicians overreacted.
I anticipate that in a few years we’ll have a more reliable CRISPR/Cas9 system, that cuts exactly what should be cut and doesn’t have as many off target effects, because these things tend to get improved over time and people are already working on changes to the system. The old gene editing systems were horrifically inefficient and CRISPR is a big step forward, but it’s still very error prone relative to what non-biologists tend to expect.
All of those would be desirable. But suppose that isn’t an option. You have an edit which usually does what it is expected to do but not always, what it is expected to do is very valuable, and no better alternative is available.
What’s the argument for not doing it? Every time a child is conceived, there is a non-negligible chance of something going badly wrong, but we don’t conclude that no child should be conceived until that chance is down to zero. Similarly for surgical procedures, medicine, much of life.
I’ve been reading a lot of solemn pronouncements about how this is the first time we’ve ever messed with the human germ line, and it’s a huge step, etc. etc. But that seems like bullshit to me, though. Every new person who is born has about a dozen de novo mutations, that have never been seen before. The only difference here is that now the mutation is someplace we want rather than a random place we have no idea about.
As far as I can see, changing immigration patterns, differential birth rates, geographical self-sorting, or even nuclear accidents will do a lot more to change the gene pool than CRISPR.
It does mean that humans will change more over the next 200 years than they have in the previous 20,000, though.
The only difference here is that now the mutation is someplace we want rather than a random place we have no idea about.
Who’s “we” in this sentence? As has been pointed out before, “We’re putting this mutation where we want it” actually means “Some of us are putting this mutation where they want it in other people,” and even if the current example is one where a majority of people would probably like the effect, there’s no reason to suppose that this will always be the case. You think assortative mating is causing problems now, just imagine what it will be like when the rich can literally edit their children’s genome to make them more intelligent.
Would you prefer wealthy people to be less intelligent than they could be?
I can definitely imagine unethical edits, but that isn’t one of them. It is a good thing to make folks more intelligent, no matter what class they are.
The things I can imagine being bad are cases where the editor takes a great risk of bad things happening to a child on the chance it might be good, or where someone creates specialty humans to live underwater or in space. But it seems to me the ability to greatly improve humanity by adding intelligence or better health greatly outweighs the risk of some folks abusing it.
Good for whom? Western elites are already happy to write off large sections of their compatriots as backwards deplorables, and I don’t see how turning the elites into a bunch of genetically-engineered ubermenschen is likely to do anything but exacerbate that.
There’s no reason why this kind of genetic technology can’t be available to the middle class. A lot of other genetic technology like in-vitro fertilization and gene sequencing is relatively cheap and getting cheaper.
I hate the “crabs in a bucket” mentality that seems to be so common these days. If indoor plumbing and flush toilets were invented today, people would be wringing their hands about how now the rich will get to enjoy not having cholera, rather than rolling up their sleeves and fixing the problem for everyone.
It was mostly the lower class rather than the middle class I was worried about.
We’ve already had one eugenics movement in recent history, and it didn’t exactly end well.
You don’t see a difference between people trying to improve the gene pool by controlling the reproduction of other people and people trying to improve their own children by controlling their own reproduction?
Would you similarly argue for banning contraception on the grounds that we have seen how badly China’s one child policy worked out?
@ David
Considering the past results in this case, a healthy dose of skepticism and a response primed for a "no" before considering any evidence is a really good prior to have.
We should force anyone pushing Eugenics to prove that this is a good without the negatives of past experiences, rather than treating it as neutral.
The previous popularity of Eugenics was nearly identical to what is being described here, and the negative practices that we deplore came because certain groups of people ran with that idea and considered themselves “superior” to lesser people. If we are actively producing genetically superior people, by what mechanism would those individuals not consider themselves superior? The whole purpose of doing this is entirely based on the premise that they really will be superior.
At the very least we would need some quite strong Schelling fences around what we are willing to do and what we are not.
Assuming that we get to the stage where gene editing or gene modification is “safe,”
where “safe” means “person doesn’t die or get crippled,” not “well, we really did give them blue eyes or knock out that sickle cell, but we regretted that choice later,”
I don’t see any useful or moral way to stop parents from editing their own or their kids’ genomes.
@pontifex
Um… you must’ve been reading very different history books than me, there’s far less violent class war now than there was even in living memory much less the 19th century and earlier.
In the 1930’s the Ford Motor Company stockpiled more munitions to use against their own workers than the U.S. Army had, and if you want an example of people being killed for having access to indoor plumbing I suggest that you look up “Khmer Rouge”.
“these days” are far more peaceful, and less “crabs in a bucket”.
The problem is that you are using “Eugenics” to describe two quite different things, and treating the evidence on one as relevant to the other.
Suppose we were discussing the reliability of science, would it be legitimate to offer the Christian Science religion as evidence against?
The Eugenics movement of the early 20th century had two essential characteristics. One was the objective of improving the average genetic quality of the human race. The other was the means–preventing people believed to be of low genetic quality from reproducing. Which of the two do you think was problematic?
What we are now discussing doesn’t have the second characteristic at all. A state trying to breed for a superior population and with the power to force people to go along could do it much more easily with the old technology of artificial insemination–the way we currently do it for farm animals.
The first characteristic is a reason why people approve of it, but not the reason individual parents would do it–their objective isn’t to raise the average of the human race but to get better babies for themselves.
As it happens, we have evidence of a genetic campaign, using somewhat older technology, for that purpose–the campaign by Dor Yeshorim to eliminate Tay-Sachs disease in Ashkenazi Jews by identifying carriers and encouraging them not to marry each other. Another example is the use of selective abortion to prevent the birth of infants with serious birth defects. Has either of those had the consequences you associate with Eugenics?
Plumber
In the 1930’s the Ford Motor Company stockpiled more munitions to use against their own workers than the U.S. Army had
It looks like the lowest number of personnel in the US Army during the 1930s was around 135,000 men in 1932.
Are you arguing that the Ford Motor Company employed more men for security against their own employees than that?
Or maybe that the US Army didn’t have enough munitions to give every man a weapon?
Or that Ford Motor Company purchased warehouses full of munitions such that they had dozens of guns per security guy? Or so they could supply the local police in the event of worker uprisings?
Are you counting bullets?
The claim seems extremely dubious.
@ Mr. Doolittle:
Agreed, although I think the problem is broader than just the potential for coercive eugenics. There are many, many examples in history of scientific racism being used to justify oppressive and discriminatory policies, and even more examples of self-righteous elites running their countries for the benefit of their own little cliques, secure in the knowledge that the lower orders were inferior and that there was therefore no need to take their desires or needs into consideration. I see no reason why genetically engineering a literal master race wouldn’t result in similar outcomes. Indeed, if anything I think the outcomes are likely to be worse, since the elites would actually be superior to the lower classes rather than just thinking themselves superior, and so presumably would be more effective in oppressing non-elites and less open to giving up their power.
Seriously, how has everybody already forgotten the havoc wrought by the Eugenics Wars? The 90s weren’t that long ago!
Some of them are having the mutation put where they want it in their own children.
Early on, richer people will have more access to the technology. On the other hand, to the extent that richer people are, on average, smarter people, they will be less able to benefit by the technology, since they already have at least some of the genetic advantages the technology can provide to other people. The long run effect should be a reduction of innate differences, by leveling up.
Yes I think this is true. Presumably there will be a maximum intelligence that can be achieved at any given time through gene editing, and I think ultimately just about all new babies will be given these advantages. This will cause fewer differences of intelligence, and they’ll all be smarter than us barbarians of the 21st Century.
Most people use heuristics for decision making, many of which they have been taught by parents, other people or by their culture. For example, whenever the topic of insurance comes up, my mother always says that her father told her never to insure what you can afford to replace yourself. This is a very good heuristic on average, although there are cases where it is wrong.
It seems to me that these heuristics can exist at various levels, where higher levels require more skills and/or good circumstances. For example:
– Level 1: never get a loan
– Level 2: never get a loan except for a mortgage (requires having a decent income and an ability to pay the bank in time)
– Level 3: only get loans for long-term durable goods where the benefit of having the durable good now is worth the cost of the loan (requires a reasonable ability to determine the benefits of having the good now and an ability to determine the cost of the loan)
– Level ∞: Enlightenment. Every loan decision is reasoned through completely, requiring no simplifying heuristics.
An issue seems to be that many people, including even the fairly smart, have poor heuristics. They may have heuristics that exceed their abilities, leading them to make bad decisions. Or they may have heuristics that are below their abilities, making them resist doing good things because those things are taboo according to their heuristics. An example of the latter is that a taboo on being uppity among the lower class may cause smart kids from that subculture to resist schooling.
—
So I was wondering if you could design schooling around this principle. Such schooling might start off teaching and testing basic skills, like math, reading, reasoning, accounting, verbal skills, etc. Then based on the ability that the students display, they would be taught heuristics that match their abilities. So a person who is poor at math would be taught much more financially conservative heuristics than someone like Srinivasa Ramanujan.
I’m trying to get how, even if well-designed, this system could be successful in the implementation.
1) Teachers are not as… skilled?… as would be required to run this system well.
2) Changing the rules to allow schools to discriminate in this way would be problematic from a civil rights perspective.
3) On many (most) of these heuristics, you’re really trying to group people for wisdom rather than intelligence, I think. Ranking by intelligence is difficult and fraught with controversy. Ranking by wisdom should be much more daunting.
1. My country already has tiered education, and the US has similar things like honors classes. So this would be no different. You only need the most advanced education for a subset of the students, and then only for a subset of their education.
2. Why? There is no racial, gender or other form of discrimination. Just merit-based tracking, which is already being done.
3. I never said I was grouping by IQ, nor by wisdom. For the example I gave, math skills would be a major factor. Of course, IQ correlates heavily with the ability to learn most skills.
In the US, it doesn’t take a non-believer in human biological uniformity to know that any merit-based tracking is going to correlate to race and gender. Traditional merit-based tracking is under constant assault on those grounds; at the moment I believe it is losing.
I don’t understand your objection to #3. I know you never said that, but I’m arguing you implied it, and your final sentence seems to admit as much. I then advance the argument that what you would really want to group by is wisdom, which will be even harder to do. Maybe not even really ‘wisdom’, but something more like ‘adult wisdom’. Or maybe even something like prudence?
The two examples you've chosen map so-so to math skills (which correlate to IQ, which correlates to adult IQ). Lots of people who are really good at math are overleveraged in their finances, or gambling addicts, so it's not a great mapping. Maybe teaching them better heuristics (because they are good at math?) would result in them avoiding these problems, but maybe the instruction will just reinforce their risk-taking behaviors by teaching them they're smart enough to handle it. You're asking much from the school system, here.
I presume (but correct me if I'm wrong) that in addition to heuristics about loans, insurance, retirement savings, etc., you'll also want a system that teaches heuristics about other important decisions. Let's say social heuristics. Maybe Johnny doesn't have the (emotional bandwidth, charisma, social skills) to be the life of the party and he should concentrate on a small group of friends. Maybe Suzie is a codependent personality and should make extreme efforts to avoid forming relationships with people who use drugs or alcohol recreationally. Maybe you want to avoid straying from math-correlated heuristics? Maybe you think you can successfully implement a program that will not 'grow' into this role?
Regarding #2, you originally wrote “higher levels require more skills and/or good circumstances”. A system that relies on ‘good circumstances’ is NOT merit-based.
Back to 1: It’s much easier to sort by test scores or semester grades when determining who can get into a college prep or honors class than what you’re asking teachers to do, here. Unless you’re actually trying to separate kids by 8th-grade math scores (or whatever) as a proxy for how much judgement you expect them to have as adults when they get the opportunity to buy insurance, borrow money, make choices about paying the rent vs flying to Vegas, get married, have kids, etc. I don’t think the school system would do well at this part of raising our kids.
Update: On second thought, my final sentence is too strong. I do think our school system can do better than it currently does, but I also think that teaching different kids different heuristics (based on some ‘how much truth can you handle’ criteria) is probably going to do more harm than good.
This is a digression, but since I’ve had the discussion recently — those heuristics about loans are lousy in the US. Level 1 might be OK, but basically only for children. Level 2 should be to only get short-term zero-interest loans for things you have cash in hand for. This is kind of a mouthful for a low-level heuristic; maybe there’s a way to express it more concisely. Level 3 can then be that, plus mortgages, and Level 4 can be that plus your level 3.
The reason is credit scores; if you obey your Level 2 heuristic then when you apply for a mortgage you won’t have a credit score, so you’ll find it much more difficult and expensive to get a mortgage.
“Have a credit card but treat it as a debit card” is technically utilizing loans but is not really a central example of such
FICO would disagree; they think it’s a central example of utilizing loans. And because of that, if you’re trying to teach someone when it’s appropriate to use loans, it’s something you want to teach to beginners.
I say this because you often see people who learned heuristics like the ones posted complaining that they can’t get a loan because they don’t have credit, and that this isn’t fair because they were responsible enough to not have to borrow money in the past. Usually they blame FICO or the credit agencies or whatever, but whether or not that’s fair, some blame should go to whoever taught them those heuristics.
If you pay off your credit card every month it loses all the interesting features of loans: partial payments, interest, risk of repossession, etc. It’s just dressing up spending money from your bank account as miniature loans by running it through the credit system.
Those features are what people are advised to avoid (except for home&auto). Also extends to “don’t get a credit card because impulse control is hard” for some, but that doesn’t apply to the Don’t Buy Stuff You Cannot Afford heuristic.
I think the people who these heuristics are aimed at are not sophisticated enough to understand that using a credit card and paying it off each month avoids all the features of loans you’re advising them to avoid, not unless you say so explicitly by including it in the heuristic. If they were that sophisticated, they wouldn’t need the heuristics.
There are all sorts of features to “freeloading” (paying off every month) for good credit cards.
I have a Chase Sapphire, a Capital One "super" Quicksilver, and an Amazon Chase.
* everything is ~3% cheaper
* I don’t have to actually pay pay for it for between five and thirty days
* I get a notification to my phone and/or an email every time something gets bought
* My kid has a dependent card, but I know every time it gets used
* Nearly everything I buy is insured against theft and catastrophic loss
* I have trip insurance
* I have rental car insurance
* I never have to care what the local currency is. I recently spent a week in Edinburgh and in London, and never had to buy pounds or euros.
* My business expenses trivially import into my employer’s expense tracking tool
* If a merchant tries to fuck me over with defective goods or non-delivery, I just tap a few buttons on an app on my phone and tell them now they get to fight with a megabank, not with me
* I get to fly someone someplace interesting to them a few times a year as a gift
* Because of touchpay, as long as I have my phone, I don’t actually have to dig out my wallet.
* Paying for stuff is faster than cash
* “Sorry, I don’t have any cash on me”
@The Nybbler
Sure, the exact heuristics would obviously depend on the specific society and ought to be based on expert analysis (although there would be a large risk of them getting hijacked by ideologues, especially in the US).
Of course, one could also favor changing the American credit score system to a more Dutch one (where they track loans and non-payments centrally).
Having one registry instead of three doesn't change much. The main difference in the Dutch system seems to be that it is only affected by negative events and the absolute amount of credit used (not the amount relative to the credit line, as in the US; though credit used relative to income is a factor banks use for large loans, separate from the credit score), so no news is good news. That makes your heuristics work for the Netherlands. One can favor the US changing to that system, but one shouldn't give advice to Americans as if it had already been done.
To go to the original point, I believe some elementary schooling is indeed designed around these principles. That’s the “spiral learning method”, where in each elementary grade you learn the same basic subjects, just with more detail and depth. However, it’s not adaptive, and probably can’t be with our mass-schooling system. The opposite is “mastery learning”, where you go fairly deep into a narrow topic before moving on to the next.
Sure, but that is only for basic skills then, right? I was thinking more about a level above that, where basic skills like reading, writing, maths, social skills, etc are then put in service of practical life skills, like buying decisions, money management, family planning, career planning, etc.
Basically, students would learn basic skills at first, then learn some life skills, then learn more basic skills, then more life skills, etc.
Both levels would logically be tiered/tracked/whatever you want to call it.
When I was in school, theoretically that existed in middle and high school "home economics" (in addition to the cooking and sewing that 'home ec' is known for, we had sections on household budgets and balancing a checkbook, for instance) and career classes. (Family planning is obviously fraught with politics; we did not have that). However, there were several problems:
1) It doesn’t help much if those involved hadn’t learned the basic skills in the first place. These ‘practical’ classes were untracked, unlike the regular academic classes.
2) The treatment was very superficial and never advanced beyond that.
3) In some cases they were the blind leading the blind. What does a middle or high school teacher know about career skills for someone trying to become a truck driver or a scientist? In one class they did bring in local business people, but that only helped a little.
I’m not so sold on trying to do this as school classes. It’s hard to advance in the life-skills stuff until you’re actually using them, so a basic treatment might be best.
That kind of reminds me of classical education. The curriculum is similar in the Grammar, Logic, and Rhetoric stages, but the depth and degree of interaction with the material increases each time. A 6-year-old will be memorizing; a 10-year-old will be doing proofs, writing essays defending a position.
It’s not really about or based around heuristics explicitly, though.
edit: As the Nybbler pointed out
I’m not quite sure what you are asking, but it seems to me all of that could be boiled down to much simpler terms. Finance is really not that difficult. Someone working in the finance industry could probably explain all you need to know in like 30 minutes. The problem, I think, is that you have to first be able to trust the person.
WRT loans, insurance, and investments, you first have to ask yourself what you want those for. Are you saving up for retirement? Down payment for mortgage? Further education? Financial independence? Having a number, even a rough one, would help a lot in your decision.
You next need to understand how money works, then see if you can find an instrument to help you achieve what you need. The last (financial instruments) would require local knowledge, because those instruments vary from country to country.
Understanding compound interest is maybe the most basic and important thing. Understanding the rule of 72 is even better. If you know what dollar cost averaging is, you’re golden. The rest is just using some strategy to guide you to what you want and need.
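To make that concrete, here’s a minimal sketch (Python, with made-up rates) comparing the rule-of-72 estimate of how long money takes to double against the exact answer from the compound interest formula:

    import math

    def doubling_time_exact(rate_pct):
        # Solve (1 + r)^t = 2 for t, assuming annual compounding.
        return math.log(2) / math.log(1 + rate_pct / 100)

    def doubling_time_rule_of_72(rate_pct):
        # The mental-math shortcut: years to double ~= 72 / rate.
        return 72 / rate_pct

    for rate in (2, 6, 10):
        print(rate, round(doubling_time_exact(rate), 1),
              round(doubling_time_rule_of_72(rate), 1))

At 6%, the rule of 72 says 12 years and the exact figure is about 11.9, so the shortcut is plenty good for back-of-the-envelope planning.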
Let’s use your example of taking a loan. I’m in Canada, so I will use Canada as a rough guide. My exact numbers might not match up because I’m rattling off the top of my head. In Canada, you can open a line of credit with a bank to get a loan. The interest rate is something like 2 to 3%. Compound interest of course.
I don’t really understand your 4 levels of reasoning, it’s too complicated. If you understand the three things I mentioned earlier, then it makes total sense to, for example, take a line of credit, find a product (mutual funds or seg funds) that returns at 6 to 10% (after management fees) annually to build an asset. Even after you pay the loan interest, you would still be able to grow your asset, and over time, after compound interest, that asset can start to look really ridiculous given how little work you put in.
So, where to find those products that return 6 to 10%, after management fees??!! Sounds too good to be true! Well, that’s where knowing someone who works in the finance industry would help lots, because trust me, these guys know.
I don’t know what the finance industry in the Netherlands (I think you live there?) is like, but it honestly shouldn’t deviate too much from what I said.
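For what it’s worth, here’s a rough sketch of the borrow-to-invest arithmetic above, with assumed numbers (a 3% loan, 7% return after fees); it ignores taxes and the very real risk that returns come in below the loan rate:

    principal = 10_000     # assumed amount drawn on the line of credit
    loan_rate = 0.03       # assumed 3% annual loan interest
    return_rate = 0.07     # assumed 7% annual return after fees

    for years in (5, 10, 20):
        asset = principal * (1 + return_rate) ** years
        debt = principal * (1 + loan_rate) ** years
        print(years, round(asset - debt))  # net position if you repaid then

After 20 years the gap is roughly $20,600 on a $10,000 draw, which is the “really ridiculous” growth being described, but only if the return assumption actually holds.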
Actual behavior by real people suggests that either we are massively failing to teach these things effectively or a large number of people are not capable of learning them (which doesn’t have to be purely ability-based, but can also be due to aversion to maths or such).
The heuristics would be designed to work around these issues, by giving people tools that operate at the maximum of their (practical) ability.
I don’t see why my levels would be too complicated. It’s not like the students themselves would be taught all of them.
For example, we would first try to teach things like compound interest. Those who are able to learn it and use it practically would then be taught the more advanced heuristic or would be taught a non-heuristic methodology, while the less able would be taught something that would be optimal (= less bad) for those who cannot understand compound interest.
I think that in most cases it’s not a teaching failure, it’s a self-control failure. e.g. people already intellectually know that carrying large credit card balances is a bad idea, but they do it anyway, because it’s a convenient way for them to buy things they want and can’t afford.
Maybe I misunderstood you. You seem to be talking about a more general approach to problem solving/understanding complex issues.
I latched on to finance because that’s the example you used and I have been talking about finance to more than a few people lately.
For finance at least, it’s really not as complicated as people make it out to be. We don’t need Bayesian whachamas or quantum tomfoolery to work it out. We don’t need advanced thinking tools to work it out.
What I found out is that most people actively resist financial knowledge because they don’t trust the finance industry. It’s as simple as that. Nine times out of ten, the reason people don’t get finance is an emotional reason, not a rational one. My friend had a great line: people don’t care what you know, if they don’t know that you care.
It took me a massive campaign over several weeks to try to explain finance to a friend of mine, and I don’t think it got through. Now maybe I’m just terrible at this, and I’ll own it, but I have also watched my finance friend try to explain to potential clients before and it still took hours.
Let’s say you and I both walk into a bank. You have a million dollars to invest, and I have a hundred. Who do you think the bank will prefer to spend time with? There is very little incentive for the bank to give the average person proper financial knowledge, and it’s not always because they are trying to scam something out of people. The time and effort needed to get past that emotional barrier is a poor return on investment of time.
What’s more, I have found it way easier to talk to rich people about finance. It’s why they are rich after all. Financial knowledge is not trickling down to the man on the street because of other reasons, not because we need better reasoning tools.
Personal experience: people aren’t interested in the things they can’t learn easily, and otherwise you have to brute force the knowledge and provide easy algorithms.
I myself glaze over at thinking about finance, and that’s why I’ve dragged my feet on getting a mortgage, thinking about the research I need to do (or set up a meeting with someone to teach me the basics), and then the whole process of going through it, while my scrupulosity is also screeching at me that I need! to make sure! I get! the best deal!
I got lucky that I didn’t have to do that for my car, as I completely glazed over on the high school project pushing us to figure out annuities on them (just slapped some googled numbers on the report and turned it in, don’t remember my grade for it but it was probably bad and mostly just participation credit).
I pulled an all-nighter doing my very basic taxes manually the first time I did it, and have vowed never again. Have thrown a good amount of money at companies for tax software every year since.
Even if the knowledge itself is not complex, the processes to go through it all are energy draining, and people want to prioritize memory space for more pleasurable things. Hence structural interventions like “x% of your paycheck automatically goes to a retirement account” are way more effective than an infinite number of “learn how to save for retirement!” seminars.
Provide automate-able algorithms for people to follow heuristics without thinking too much.
@AG – since you mentioned spending money on tax software, I just want to point out that TurboTax has an online service that’s free and works perfectly smoothly for me, at least. Might be worth looking into.
@Statismagician: it’s my understanding that most free tax software charges extra for state filing.
I’ve also had to pay premium in recent years because I have an HSA, which isn’t included in free basic packages.
@AG
I agree with you. Every wealthy and successful person I’ve talked to values time more than money. That’s because time is a multiplicator (is there such a word?) of money. It matters much less how much money we put into a mutual fund than how long we keep the fund going.
And with multiple assets growing at the same time, that can lead to high gains.
If I can save a lot of time by getting someone else to do my taxes, I would do that in a heartbeat.
And when I observe my finance friend, what I see working over and over again is how he frames the issue for his clients: he describes everything as assets that help the client achieve their aims. That works much better than beating them over the head with the maths or the reasoning or the technical knowledge.
Even for people who are “rationalist” (software developers etc), self interest is still a much better tool to reach people than heuristics.
@AG – I don’t believe that’s true for Turbotax, but I may not have noticed; last April was a very busy time.
@liquidpotato
Pretty much nail on the head. That’s how I frame everything to my clients, in terms of their goals. Otherwise, they’ll quite happily ask why you’re wasting their time explaining anything.
I think there may be a cultural thing to it.
I’ve been told my family is unusually logical. Friends and SOs chuckle about “the murphy way”, but it boils down to:
Level -1: make sure to stop and think about things.
Level 0: if it’s important to your life or well-being, or that of those you care about, and you can exert influence such that it can make a difference, attempt to gain a reasonable understanding.
Level 1: don’t fully assume that the people who should know more are competent, trying their hardest, or that their goals align fully with yours.
If a family member’s in hospital you can identify the members of my line as those picking up the chart and double checking that a newbie doctor hasn’t prescribed anything the patient is allergic to. (which has avoided a few bad situations over the years)
When getting a mortgage and talking to various banks about their rates I found it somewhat depressing how many provided technically correct but…. uninformative… information: for example extras that cost [small sounding amount per month] but which would add up over 20 years.
Given it’s one of the biggest financial transactions of my life, I just made an Excel sheet to go through the numbers myself under various assumptions re: future interest rates (a stripped-down version of that calculation is sketched at the end of this comment).
Also, loans aren’t all that terribly complex. It’s easy enough, even with some uncertainty about future rates to get a ballpark on how much they’re likely to cost.
Level 2: If it’s out of your field and requires deep understanding find someone you trust in the field who you know to be well aligned with your interests and get their opinion.
Personally, for example, I find it somewhat odd that my best friend hasn’t looked up the meds he’s been prescribed, their side effects and contraindications, and the standard-practice info for his condition.
I do have more minor rules like “only bet when there’s a positive expected return, exceptions for minor sums in social situations”
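Since the mortgage spreadsheet came up above, here’s a stripped-down sketch of its core calculation, with hypothetical numbers, using the standard fixed-rate annuity formula payment = L*r / (1 - (1+r)^-n). It shows how a “small sounding amount per month” adds up over 20 years:

    def monthly_payment(principal, annual_rate, years):
        # Standard fixed-rate amortization formula, monthly compounding.
        r = annual_rate / 12
        n = years * 12
        return principal * r / (1 - (1 + r) ** -n)

    principal, years = 250_000, 20  # assumed loan size and term
    extra = 15                      # assumed small monthly add-on fee

    for annual_rate in (0.02, 0.03, 0.045):
        pay = monthly_payment(principal, annual_rate, years)
        print(annual_rate, round(pay), round((pay + extra) * years * 12))

Even that 15-a-month extra comes to 3,600 over the life of the loan, which is exactly the kind of thing the “technically correct but uninformative” answers gloss over.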
Dunning and Kruger found that very competent people substantially underestimate their competence, while incompetent people substantially overestimate it. So this suggests that there is a high risk that people incorrectly use level 1 when they should use level 2, or vice versa.
Yet a huge number of people in my country bought expensive smartphone contracts with a ‘free’ phone until a new heuristic was popularized that these contracts are generally rip-offs. In the new situation they still didn’t evaluate each contract on its merits properly, they just changed their heuristic.
You are not the first person here to comment that these calculations are easy, yet statistics for actual human behavior show that a very large percentage of the population either don’t make these calculations or do them incorrectly.
One possible conclusion is that education is really poor and we can teach many more people to do the math correctly. However, I think that it is more likely that large percentages of society cannot actually be taught to make choices based on a proper calculation and we have better hope of teaching them the best heuristic they can handle.
Some tasks showed the reverse Dunning-Kruger effect, and I think there were also some failed replications; it’s probably just regression to the mean, if anything.
I think a huge swath of most Western economies depends on these people making terrible choices. Think of how many service-sector jobs rely on people buying on finance, or spending now when they should be saving, or overpaying for things. With Western governments running into the red every year, if you could push a button and magically educate the average person on finance instantly, it would probably lead to a recession.
Things would be very different if people were generally sensible, but it might be worth taking a look at what would change.
Why a recession? Do you think if people were sensible they would be unable to find things to spend their money on?
I’m inclined to think that if people were more sensible, they’d buy less and better stuff, and take better care of it.
@Nancy @davidfriedman agreed. They would save more, have larger retirement pots, etc. Most people should be saving 15-20% of their monthly income, and arguably more if Scott’s belief that the low-hanging fruit of science is mostly gone holds and innovation and future growth rates trend down. Not to mention people living longer.
Yet most people live paycheck to paycheck and have next to no savings, never mind the six months of expenses that people should have as a minimum. Imagine if everyone stopped spending for a year to save up this emergency fund: there’d be a recession. More people saving means more liquidity looking for productive investment to provide a return in an already heavily saturated market, which means lower expected returns on investment, which means you need to save even more.
Surely even if there was a recession in the short term, things would be far better in the long term right?
I am far from an expert, but if the populace is working just as hard to produce wealth, and not consuming as much of it, there must be more wealth around? Is the idea that a lot of the populace won’t have as much work due to lack of demand?
@LesHapablap
Yes there’d be less demand as a result, and more savings chasing the productive output from fewer workers as demand drops
There might well be a recession if people suddenly started being a lot more sensible, but maybe we should consider what the world might be like if people had been more sensible all along, or if people gradually became more sensible.
You can really generalize this “if people behaved sensibly, then it would lead to a disaster from a conventional perspective” theme.
If people sought wellbeing where it really is (by evidence, not just my personal opinion), consumption of many goods would plummet. Cultivating self-satisfaction itself rather than satisfaction-through-consumption would be excellent…yet devastating.
One thing that happens in Spain is that people are extremely hesitant to look for an expert even when they are in a level 2 situation.
People would rather risk overpaying thousands of euros over the lifetime of their mortgage, be unable to pay it off early, and be locked into a quite high minimum interest rate, than pay 500 euros to go through the contract with a lawyer.
People don’t trust lawyers. Or certified accountants that charge an upfront fee and have a fiduciary responsibility to give you the best advice. They prefer to trust their local bank manager, who doesn’t have fiduciary responsibilities, and whose incentive is to sell you something you don’t need. But he does it for no upfront fee.
“You don’t need lawyers” is a heuristic that is very prevalent in Spain.
I have no idea how to remove that harmful heuristic, though. I don’t think school can remove the appeal of seemingly free things (free financial advice you don’t pay an upfront fee for).
One of the most useful heuristics I was given by my family was “There is no such thing as free money”. So if it seems too good to be true, it is.
For a one-off kind of consultation, both the competence issue and the principal-agent problem are very strong. If I don’t know enough about an area, how do I select an expert who does? And how do I know that, having received my fee, that expert keeps my best interests at the fore? I may pay my $500 up front and get screwed the same or even worse than if I’d gone with my gut. How would I know ahead of time? And if I find out later… well, that expert has my $500 already. I’ll never use him again but how often do I buy a house?
It’s actually very hard to find experts you can trust, though.
For example, how do I know my tax person did all he could to minimize my taxes and keep me out of trouble with the IRS? If the IRS sends me a letter saying that I owe more than I paid, that will be a clue. But any extra fees that result from this will be my responsibility to pay, not his. If he screws up badly enough, maybe I could take some kind of legal action. But that would have to be a huge screwup to be worth my while to pursue in court.
If I hire a lawyer to look at a contract, they will probably find something to complain about in it. But how do I know how relevant that really is? The lawyer’s incentive is to point out anything that could possibly, ever, in a million years, be a problem, even if the chance is basically zero. My incentive is to get the deal done if the deal is reasonable.
Doctors in the US have the same issues with skewed incentives. The doctor’s incentive is to cover their behind in every possible way. Order every possible test, even if the tests are very unlikely to find anything, or involve tradeoffs like exposing the patient to X-rays.
It’s hard to find good advice. I can’t blame people for trying to figure things out on their own, even if that doesn’t work well for them. On the other hand, if you do find someone you can trust to advise you, that’s a very valuable thing.
It’s true that the lawyer may not be competent and may be running up the bill, but with respect to the actual mortgage agreement his interests aren’t directly adverse the way the bank manager’s are.
There was some discussion of the Lion Air crash right after it happened, and the technical report came out yesterday. I posted an earlier version of this in the last OT, but it was very late, and I’ve thought more on the issue since then.
The basic story is at least similar to the one we had earlier. There was a sensor failure, and it caused the automatic trim system to try to pitch the plane down. However, the new information we have is that the sensor was bad when installed, and the previous flight crew had managed to deal with it. More than that, we now know exactly what was going on with the shutoff that everyone was complaining about. The procedure to deal with the problem is to set the stabilizer into “cutout” (off) mode, and it is covered in the appropriate checklist. The use of manual trim should override this, although it will resume after the manual switch is released.
My take? This one is pretty much on the flight crew, with secondary blame falling on Lion Air. The basic job of the crew is to fly the airplane, and they managed to do so for close to 10 minutes with the problem going on. Then they stopped. Until we get the voice recorder, we probably won’t know why. My guess is that we’re seeing something between Eastern 401 and Air France 447, where the crew was struggling with a technical problem, then got distracted and flew the plane into the ocean. I’m not sure exactly how culpable Lion Air’s maintenance staff is. It’s obvious in retrospect that the AOA sensor was faulty when installed, but I don’t have the knowledge to judge the adequacy of their troubleshooting. It’s rather odd that Boeing has a system of this type taking orders only from the Captain’s instruments, but that’s a rather minor issue, and the ultimate responsibility lies with the crew.
There’s been a lot of back-and-forth over what was in the manual, and while I haven’t seen a copy myself, I think that both sides have valid points. Boeing included instructions on what to do in the case of a runaway stabilizer trim motor, which includes cases where the pitch-down system is malfunctioning. It’s likely that this checklist was not followed, and if it had been, the plane would have survived. The fact that the crew kept resetting it for 10 minutes suggests that the danger was not acute, although it’s possible that a second failure meant they couldn’t keep doing so. On the other hand, not explicitly saying that this function was on the jet was a mistake on Boeing’s part. It wasn’t directly causal, but I think the automatic rank-closing in the pilot community might well hurt them when the final report comes out.
One thing I also should mention is the FAA’s release of an emergency Airworthiness Directive (AD) on this issue. This is a field I used to work in (conflict of interest disclaimer: I was on the manufacturer side), and this one maps scarily well to how the FDA does things. Basically, the AD only says “follow the checklist, and don’t fly around with this problem”. This is pretty obvious, but it makes the FAA feel good to say it.
Link to the preliminary report
Some fifteen years ago, my parents would occasionally take me to McDonalds as some kind of treat. This is a very common memory to have, and I remember the restaurants well, because they were always the same: tacky red plastic everywhere, though obviously what was fashionable and what wasn’t eluded me as a child.
Today I am an older person, and one who’d rather make his own food or buy elsewhere than head to McDonalds, but I pass their restaurants frequently. The tacky red plastic is gone: its signs are in neat thin white letters and burgers are advertised as if they were on the same level as quinoa salads or God knows what other foods might be trendy. It looks much more like a hip salad bar than the place for burgers and fries that it is.
Now, McDonalds is the most successful restaurant chain that I know of, so I’m sure they know what they’re doing here; I don’t know the first thing about marketing, myself. Surely there’s a good reason why they opted to make this change.
I’m just not sure what it is. Am I just seeing ghosts? Is it some mistake on their part? Or is there an explanation someone here would be able to give?
Apparently the traditional burger&fries fast food sector has been hurting lately. The sweet spot in the restaurant game has shifted slightly up-market to the fast-casual restaurant of which Panera and Chipotle are examples. McDonalds still sells the same burger+fries+coke that they did a generation ago, but they’ve added some hipper options such as fancy coffee drinks. And more generally they’ve changed their look slightly, to seem more sophisticated.
It’s weird to read articles about McDonald’s having problems. When I was growing up, it was the titan of the restaurant scene, the enterprise all other restaurant businesses were compared to.
There’s probably a taboo about eating overly ‘trashy’ foods that people who need fast food can get around by using Panera/Chipotle.
IIRC fast food is also primarily utilized by career people who are economizing on time rather than money [Fast food isn’t the cheapest per calorie option]
A lot of upper-middle class people would rather die than be seen in a McDonald’s. Chipotle works because it’s like McDonald’s, but not McDonald’s.
Like this XKCD but successful.
My youngest still loves McDonald’s, so it’s an easy place to stop on a road trip. They’ve put in all the hipster stuff to appeal to adults. The appeal to the kids is still there, but it’s just really hard to see if you aren’t a kid.
Oh, that’s why that comic wasn’t coming up for me! My adblocker was chewing up googleplus.png!
“Having problems” is very relative. If you compare their total sales for various years, you’ll note that they are way above their sales figures from a generation ago, and their 2017 sales were higher than their 2005 or 2006 sales. If you discount the 2011-2015ish sales (a large spike), you’d see a steady upward trend. Oddly enough, their profit seems to have mostly been going up quite a bit over the last few years, even while their sales are dropping. At first blush, it looks like they had to stretch themselves when demand was going up and spent more on building, and are now enjoying the benefits of that buildout.
I don’t really consider a short term increase that goes away to indicate some kind of systemic problem. 10% revenue increases year-to-year are obviously not sustainable.
MCD Profits
It’s interesting that you associate the tacky red plastic with McDonald’s. My heyday of McDonald’s eating was the late ’80s and early ’90s. So my memories are of the dark brown roof with light brown brick exterior, mix of white/brown/yellow colors inside.
(The series finale of FX’s The Americans, set in 1987, did a good job of capturing this aesthetic.)
The tacky red plastic was a new look to me that came long after I became an adult and stopped eating at McDonald’s. Nowadays, I guess you are right, the tacky red plastic is gone, and there’s some new modern aesthetic. I assume this is part of the natural evolution of the McDonald’s look.
McDonalds is too expensive for me to eat in. Even if I spent $20 per day there I’d lose a lot of weight and be hungry all the time. This might be why it isn’t doing very well.
You might want to check the math on that. McDonald’s prices vary by region, but a Big Breakfast followed by two Extra Value Meals should get you close to 3000 calories for less than US$20. And 3000 calories is enough for anyone but a lumberjack.
Of course, if you adopt that diet long-term, your insurance company will target you for termination.
I’d lose a lot of weight and be hungry all the time, Johan. 1/4-pounders are well over $5 with tax and I need about a pound of burger meat or at least 2/3 pound of steak per day to feel minimally vigorous. (I’m 6’5″ and about 220 lbs, not at all fat. I’m not especially large for a NYC resident of the McDonald’s-targeted demographic.)
Yes, Walter. McDonald’s competes with private kitchens, not with Greek Delis. It’s barely cheaper than Greek Delis now, and its burger-portions are smaller.
If you’re ordering quarter-pounders you’re WAY not optimizing calories/$. Stick with Sausage Biscuits and you’ll be pushing 10,000 calories for $20.
I was confused when I started responding, but then I got it. You aren’t comparing McD’s to other restaurants, you are comparing it to grocery stores and equivalent, right? I’m still surprised McD’s loses, honestly, don’t they famously have a dollar menu?
They got rid of the “dollar menu” as it was costing them too much money and locking them into prices that are hard to sustain.
They do have their “$1 $2 $3 menu”, off of which you can get a McChicken for $1. Take off the mayo and it’s ~300 calories with 14 grams of protein. When on road trips, I’ll order two off the kiosk so I don’t feel cheap in front of the cashier. While other people in the family are getting meal combos, I’m completely satisfied for a bit over two bucks.
I rarely eat fast food, but when I do it’s always Dollar Menu stuff at McD’s or single/double tacos at Taco Bell or the like. No meals at all… works OK.
A general policy of not getting drinks when eating out serves me well.
Why is it that when I attempt difficult sudokus, I only get stuck after filling in maybe 10-15 cells? Intuitively, one would assume that each added number would make the puzzle easier and the biggest hurdle would be filling in the first few, right? Or are there other forces at work?
Let’s call a sudoku in which it’s hard to fill even one cell “outright hard”. In an outright hard sudoku, any way to progress involves a long chain of reasoning, with multiple assumptions and counterfactuals. But there still may be plenty of filled cells in the sudoku, even completely or almost completely filled lines or squares, next to the problematic parts. Why not? – you don’t need the whole board to create the problematic parts. So starting with this outright hard sudoku, you can work backwards and erase some cells that you know will be easy to fill back. And that’s what a sudoku composer did, just because they could.
Not all numbers you write in the Sudoku are equally difficult. Many of them require very simple reasoning, like checking which row does not have a 3 and writing it there. Others require you to consider different cases to rule out all but one of them, or other kinds of tricks like tracking cells where a number cannot be.
When you solve a sudoku naively, you write in the easier numbers first until there is no easy fruit left to pick. It seems that those easy numbers don’t help you enough to make each consecutive number easier than all the ones before, so you hit a hardness maximum in the middle.
This is true, especially for the more difficult puzzles (many of which, I’ve found, require in-depth trial-and-error possibility-searching once you simply can’t find individual numbers).
I approach every sudoku with the mindset that there HAS to be a way to solve it through pure logic, without resorting to guesswork. I’d rather leave a puzzle unfinished than solve it through guessing; it feels like cheating. The position of every number affects the position of every other number, and the numbers provided at the start are chosen such that only a single solution exists. So there must be a way to solve the puzzle without guesswork (guesswork as in saying “what if there’s a 5 here” and going forward until you either solve the puzzle or run into a contradiction).
If you compare a sudoku to other numbers-in-grids puzzles (Kenken, Kakuro, “sumoku”, etc), the sudoku strikes me as the only one where the bottleneck isn’t at the beginning. At least with the others the hardest part (for me) is getting started and it gets much easier with each new number.
(guesswork as in saying “what if there’s a 5 here” and going forward until you either solve the puzzle or run into a contradiction).
I might be misunderstanding you, but I see that as part of the logic quest.
I will come to a square where I say “this is either a 2 or a 5,” and then I enter two different universes in my head: one where it is a 2, the other where it is a 5. I will faintly write a “2” in the left side of the box, a “5” in the right side of the box, and then see which universe, the left-hand or the right-hand, makes sense.
I do this implicitly all the time with easier sudokus: it’s just that I can easily see that the box has to be 3 or 9, and the 9 doesn’t make sense because everything would break just one step later, so I can keep all the logic in my head and never need to enter two different universes.
If you feel as a matter of puzzle style that you shouldn’t have to do that, okay, I can’t argue with that.
What Edward said, although I usually only go through one universe at a time, solving or running into a contradiction.
I know all the Sudoku logic tricks – knowing that a number is on a certain line, process of elimination, seeing that one space can only be a certain number because all the other numbers are taken – but sometimes I honestly do not see any alternative to going through the possibilities.
This seems unlikely to me, as I get stuck with forced guessing all the time in minesweeper, and not just the 50/50s at the end. Sometimes you just solve all of the solvable numbers. And assuming that the sudoku’s puzzle maker knew their provided numbers would lead to only one solution is a very strong assumption (do they really have the time to verify that, since they have to generate three a day or so?).
I have another sudoku question.
How is the minimum number of filled-in cells needed to make the grid have a unique solution determined? It seems that it must be determinable somehow, but I have neither found nor been able to create an algorithm that would say “this grid has/does not have a unique solution.”
The Sudoku decision problem — given a grid with some numbers filled in, is there a unique solution? — is NP-hard. This means that in general (and assuming P!=NP) you’re not going to do much better than the naive solution of trying to solve it using backtracking techniques and seeing if there’s exactly one solution.
The minimum for all 9×9 Sudokus has been determined to be 17 squares, however; any grid with fewer than 17 squares filled in is ambiguous or impossible.
https://en.wikipedia.org/wiki/Mathematics_of_Sudoku
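For anyone curious, a minimal sketch of that naive backtracking check (plain Python; grid as a 9×9 list of lists with 0 for empty cells). It counts solutions but stops as soon as it finds a second one, since that’s all a uniqueness test needs:

    def valid(grid, r, c, d):
        # True if digit d can legally go in cell (r, c).
        if any(grid[r][j] == d for j in range(9)): return False
        if any(grid[i][c] == d for i in range(9)): return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != d
                   for i in range(3) for j in range(3))

    def count_solutions(grid, limit=2):
        # Backtracking count, capped at `limit` so it exits early.
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    total = 0
                    for d in range(1, 10):
                        if valid(grid, r, c, d):
                            grid[r][c] = d
                            total += count_solutions(grid, limit - total)
                            grid[r][c] = 0
                            if total >= limit:
                                return total
                    return total
        return 1  # no empty cells left: exactly one completed grid

    # A puzzle is proper iff count_solutions(puzzle) == 1.

The early exit matters: the NP-hardness is about the worst case, and in practice typical published puzzles check quickly.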
I wrote a Sudoku puzzle generator once, and while this was many moons ago, I was surprised to find that the way they’re generally made is essentially just putting some random numbers in a grid, checking to see if it has a solution, and if not trying again. So, if you see a sudoku that starts with 22 squares filled in, it’s because someone put in 22 as the input to their generator, not because 22 is the “right” number of squares to fill in for that puzzle in some mathematical sense.
Another common algorithm I’ve read about is to start with a completed grid that satisfies the basic conditions, and then remove numbers one at a time until another solving algorithm is unable to produce a unique solution, no matter what number you remove.
That, of course, is to create the hardest sudokus.
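A hedged sketch of that removal loop, reusing count_solutions from the snippet above (so it assumes you have that handy):

    import random

    def carve(full_grid):
        # Start from a completed grid and delete clues for as long
        # as the puzzle keeps a unique solution.
        grid = [row[:] for row in full_grid]
        cells = [(r, c) for r in range(9) for c in range(9)]
        random.shuffle(cells)
        for r, c in cells:
            saved = grid[r][c]
            grid[r][c] = 0
            if count_solutions(grid) != 1:
                grid[r][c] = saved  # removal broke uniqueness; undo it
        return grid

A single random pass like this won’t always find the global minimum number of clues, which is part of why “hardest” here is only approximate.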
Well, that’s more or less what I meant. There’s little difference between starting with an incomplete grid, and starting with a complete grid and then removing most of the numbers, except for how quickly it runs. One way or the other, though, I was surprised that there is (or at least was) no algorithm for creating a puzzle that is definitely solvable but has not yet been solved.
Also, AFAIK there is no way to make sudokus of a certain difficulty[0], other than by making a bunch of puzzles and then solving them and seeing how hard they were to solve. This requires writing a human-style solver (one that tries easier heuristics first and then moves to harder ones), which I never actually implemented – I just used a brute force solver, which is trivial to write but doesn’t tell you how difficult a puzzle is.
0: Defining difficulty by: make a list of heuristics, from the easy ones (“see if there are any lines with 8 boxes already filled”) to the hard ones (“take two linked cells, make a guess about which one goes where, and then continue solving and see if that introduces contradictions later on”), then solve the puzzle and declare its difficulty to be the hardest heuristic that was needed. Defining it by “the fewer pre-filled boxes, the harder the puzzle” is not very accurate.
The general method I’ve seen for making easier puzzles is simply to remove fewer numbers. So, 17-19 cells filled = hardest; 20-23 = hard; 24-27 = normal; etc.
Another thing I forgot was that puzzles are symmetric: opposite cells mirror each other across the center. So removals have to follow that rule too.
Also, yet another algorithm I’ve seen is to take a known puzzle and randomly swap rows, columns, and grids in ways that preserve the solvability (and symmetry), which turns out to be pretty easy to do. And of course, one can permute all the digits with a random substitution cipher. (You don’t even need numbers, of course; icons are equivalent; numbers are just easier to write.)
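For illustration, two of those transforms in Python (hypothetical helper names; both preserve validity and uniqueness of a puzzle):

    import random

    def swap_rows_in_band(grid):
        # Swap two rows inside the same band of three; box
        # membership is unchanged, so the puzzle stays equivalent.
        band = random.randrange(3)
        r1, r2 = random.sample(range(3 * band, 3 * band + 3), 2)
        grid[r1], grid[r2] = grid[r2], grid[r1]

    def relabel_digits(grid):
        # Apply a random substitution cipher to the digits 1-9,
        # leaving empty cells (0) alone.
        perm = dict(zip(range(1, 10), random.sample(range(1, 10), 9)))
        perm[0] = 0
        for r in range(9):
            grid[r] = [perm[d] for d in grid[r]]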
>Also, AFAIK there is no way to make sudokus of a certain difficulty
Maybe start with the solved puzzle and systematically un-solve it, checking the heuristics needed at each step?
P.S.: I was going to ask whether all sudoku grids were essentially different from one another under permuting rows/columns, rotation, and relabelling. Turns out there are almost 5.5e9 equivalence classes of essentially different grids.
“Checking the heuristics needed” sounds a lot like solving the puzzle and seeing how hard it was, which is what I said was the way to do it 🙂 My point is, for the harder heuristics (I’m primarily thinking of the “I’m out of places where I can use pure deduction to fill in a square, so I’ll have to put in a tentative value and then continue solving, and if I run in to contradictions pull back and change that tentative value to the other possibility” heuristic, which is kind of by convention the dividing line between “hard” and “easy/medium”) there’s no way to construct the puzzle to guarantee that it will or won’t be needed. All you can do is solve it and find out. I found that surprising.
There are two parts to making a Sudoku: making a solution, then removing numbers.
To clarify, I’m only talking about the second part.
Your algorithm seems to be:

    remove n numbers
    check how hard it is to solve

Whereas mine is:

    while number of blank spaces < n:
        remove a number
        check the solution is unique
        check how hard that individual step is to solve
It might also be necessary to add in some iteration over different permutations of number removal.
This is a distinction without a difference. My point was that you can’t purposefully construct a puzzle in a certain way so that it will need or not need a hard heuristic. Another way to put it is, if you took 100 puzzles that do need a certain heuristic and 100 that don’t, and mixed them up, solving them the way a human would is the only way to distinguish them from each other, and I believe that’s true even if you have the answers. There’s no “if it meets these criteria it will/won’t need guesswork” algorithm.
Every once in a while, I go on a sudoku binge. I typically go to websudoku.com, set it on evil, timed, and letting me scribble multiple numbers in each cell. I can solve any of the hardest sudokus there in 6-16 minutes. I’ve never been stumped completely – at worst, I find a cell I should have been able to fill that I’d overlooked.
A naive solver algorithm, such as a programmer might write, might work by backtracking – start filling in numbers until one of them violates the basic constraints, backtrack, increment, repeat until grid is complete. I have never found this to be necessary when solving by hand, provided I’m able to pencil in multiple candidate numbers per cell.
I also never have to backtrack.
Any of these hard sudokus typically starts with me filling in about 3-10 cells “easily”. I check whether 1 must go in each cell, then 2, 3, etc. For example, if two of the top three 3×3 grids have a digit on rows 1 and 2, and row 3 of the third grid has two cells already filled in, then that digit has to occupy the remaining cell. Once I’ve checked for all nine digits, I go back to 1 and check again, until I’ve stopped adding filled cells.
The next phase gets harder, and I’m guessing this is where rubberduck gets stuck. In this “middle” phase, I have two ways of filling cells.
One is to deduce that every number is ruled out of a given cell except one. In other words, I see multiple filled cells in that cell’s row, column, or grid, and if I look carefully, I notice that eight digits are already used between them. I might get 1-3 more cells this way.
The second, more surefire method is to pencil in every digit that could go in a cell, being sure to rule out digits in that cell’s row, column, or grid, and then look for patterns in what’s left. I usually do this grid by grid. For example, if I’ve got a grid filled in like this:
1 9 _
3 _ _
_ _ 6
Then I scribble 24578 in the five empty cells. However, the neighboring grids may also have more digits filled in – say, 2 and 5 are in the middle column in the grid above, 5 in the middle row in the grid to the left, and 7 is in the bottom row of the grid to the right. So now my grid looks like this:
__1__ __9__ 24578
__3__ _478_ 24578
2458_ _48__ __6__
This isn’t enough to narrow down this grid any further. BUT, what if I do the same mechanical exercise on the grid to the RIGHT, and end up with this?
23458 ___6_ _23458
_2489 ___1_ _2489_
__7__ __48_ 234589
Notice that the bottom row of both grids has two cells with 48 in them. I know nothing else can go in those two cells. So one will have 4, the other, 8. I don’t know which is which yet, but I still know that none of the other cells in that row can have 4 or 8. So, all together, those two grids can be:
__1__ __9__ 24578 23458 ___6_ 23458
__3__ _478_ 24578 _2489 ___1_ _2489
__25_ __48_ __6__ __7__ __48_ _2359
This has now cut the lower left cell to 25, which might permit the same trick in its column, and so on.
In general, this “middle phase” has me looking for rows, columns, or grids with only two or three possible candidates in exactly that many cells. It’s very often three – I see combos like XY, YZ, and XYZ a lot, for example, which lets me remove X, Y, and Z from other appropriate cells.
After two or three applications of that rule, I break into the “end phase” in which I’m filling in cells as fast as I can find them. The middle phase feels the longest, since it’s mostly searching for the pattern. A great deal of my time is also spent trying to figure out shortcuts and ways to spot patterns earlier.
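That XY/YZ-style step is what solving guides usually call a naked pair (or triple); here’s a rough sketch of the pair case for a single unit (a row, column, or box represented as a list of candidate sets), under my reading of the description above:

    def eliminate_naked_pairs(unit):
        # If exactly two cells in the unit share the same two
        # candidates, those digits can't appear anywhere else in it.
        for cands in unit:
            if len(cands) == 2 and unit.count(cands) == 2:
                for j, other in enumerate(unit):
                    if other != cands:
                        unit[j] = other - cands  # set difference

    row = [{4, 8}, {2, 4, 5, 8}, {4, 8}, {2, 3, 5, 9}]
    eliminate_naked_pairs(row)
    print(row)  # [{4, 8}, {2, 5}, {4, 8}, {2, 3, 5, 9}]

which matches the 48/48 elimination in the grids above.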
I independently developed pretty much exactly this strategy as my go-to strategy for sudoku several years ago. I quit doing sudoku shortly after that. There was no challenge; I considered the whole concept “solved”.
I’m kind of surprised that it is still a popular thing.
Because sometimes you have more than one choice open to you.
One choice can leave you snookered later.
I realize this is late enough that a lot of people might miss it, but:
If someone runs into a Sudoku that they can’t solve, feel free (Scott willing) to post it in whatever’s the current OT, and I can try to spot it and offer solution advice that doesn’t entail backtracking.
So, regarding the stage-striking thing:
First, I realized later that since later strikes are better, what we would presumably want is not Thue-Morse but rather reverse Thue-Morse (as in, truncate it to a prefix of length n, then reverse that prefix; there’s a short sketch of this at the end of this comment).
Secondly, I asked on MathOverflow about the mathematical model suggested here by commenters RavenclawPrefect and uau. It turns out (see RaphaelB4’s answer) that this simplified model is actually very tractable, in a way I hadn’t realized.
Using it, we can both 1. compute the “unfairness score” for particular sequences (such as Thue-Morse, reverse Thue-Morse, “snaking”, or simple alternation) and 2. determine what sequences minimize this unfairness measure. (This is all for a fixed length, to be clear.)
The sequences that minimize it are… weird. It’s not reverse Thue-Morse, I’ll say that. But, well, this is a simplified model — I’d take it with a grain of salt, you know? That said, reverse Thue-Morse consistently scores better than snaking, Thue-Morse, or simple alternation (not that anyone was seriously considering that last possibility), which does seem like a point in its favor.
Thirdly I posted about this to /r/smashbros, but not many people seem to have seen it. Oh well. 😛 At least I wrote it all up!
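For anyone who wants to play with it, a quick sketch of the order in question (Python; 0 and 1 stand for the two players, and “reverse” means truncate to n strikes, then flip):

    def thue_morse(n):
        # t(i) = parity of the number of 1-bits in i.
        return [bin(i).count("1") % 2 for i in range(n)]

    def reverse_thue_morse(n):
        return thue_morse(n)[::-1]

    print(thue_morse(8))          # [0, 1, 1, 0, 1, 0, 0, 1]
    print(reverse_thue_morse(7))  # [0, 0, 1, 0, 1, 1, 0]

Simple alternation (0101…) and snaking (0110 0110…) are easy to generate the same way, if you want to compare unfairness scores yourself.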
Good work. Have some plat.
Thanks! I also edited in the order for 7 stages in at the top, since that’s a particular case people are worried about in addition to 9. I just wish I could easily get more people to actually see it…